What can you tell me about Expert Systems?

curiouswebster

I work for a technology company where there are hundreds of very smart engineers, such as electrical, chemical, and manufacturing engineers.

They are all very skilled. But the downside of every one of them is that they are human.

I look at supercomputers like Summit, built by IBM (roughly 200 petaflops, or about 200 quadrillion calculations per second), as well as Watson, which I know very little about, and I worry that Watson could wipe out small businesses like the one I work for.

How close are expert systems like Watson to producing complete designs automatically? Let's say, for example, an integrated circuit.

I am curious to hear what a company that sees Watson as its true competition can do on a shoestring budget to harness its own engineers' knowledge in a form that can be coalesced into an internal-use-only decision support system.

The sky's the limit on your theories... This is a wide-open question, meant to tickle the mind.

Thanks
Commented:
You are referring to artificial intelligence (AI)...

It includes such things as robotics, facial recognition, financial analysis, decision analysis, and many other areas.

Here is a good place to start:
Applications of artificial intelligence (From Wikipedia, the free encyclopedia)

Artificial intelligence, defined as intelligence exhibited by machines, has many applications in today's society. More specifically, it is Weak AI, the form of AI where programs are developed to perform specific tasks, that is being utilized for a wide range of activities including medical diagnosis, electronic trading platforms, robot control, and remote sensing. AI has been used to develop and advance numerous fields and industries, including finance, healthcare, education, transportation, and more.
https://en.wikipedia.org/wiki/Applications_of_artificial_intelligence
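If what you have in mind is the classic "expert system" - a knowledge base of if-then rules captured from your engineers, plus an inference engine that chains them together - that is something a small team can prototype cheaply. Here is a rough sketch in Python; the rules and facts below are invented purely for illustration, not a real engineering knowledge base:

# Minimal forward-chaining inference sketch. The rules and facts are
# invented examples, not a real engineering knowledge base.

RULES = [
    # (rule name, conditions that must all hold, conclusion to add)
    ("thermal-check",  {"power_dissipation_high", "no_heatsink"},     "needs_heatsink"),
    ("package-choice", {"needs_heatsink", "board_space_constrained"}, "use_exposed_pad_package"),
    ("review-flag",    {"use_exposed_pad_package"},                   "flag_for_layout_review"),
]

def infer(facts):
    """Keep applying rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for name, conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                print(f"rule '{name}' fired -> {conclusion}")
                changed = True
    return facts

print(infer({"power_dissipation_high", "no_heatsink", "board_space_constrained"}))

Capturing your engineers' rules of thumb in that form (or in an off-the-shelf rules engine) is essentially what 1980s-style expert systems were, and it requires nothing like a supercomputer.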
Commented:
Artificial intelligence is still derived from human intelligence. It has to be "fed" materials to train it to do something. It cannot innovate from nothing.

Applications that require creativity tend to be weak areas for AI. You can see the results of AIs that were trained to write songs and music - they might be LOOSELY considered art but they don't really make logical sense. I would consider something like an integrated circuit to be akin to a song - it requires both logic and purpose but also requires a level of artistic creativity on the design side.

So generic AI wouldn't likely be able to design one like a human could. However, if you trained an AI to complete those specific tasks, you could probably get it to a workable point. I wouldn't be surprised if we saw various "themed" AIs sold in the future as tools.

However, true innovation will probably never stop being an AI weakness.
I wouldn't be too concerned about Watson taking out your business. AI systems are best when there are vast training sets - which means lots of examples of solving a problem, coupled with human feedback on how well the AI is doing at solving it.

E.g., Amazon learning what products to show you in the "you may also like" section fits this well. They have millions of users looking at their pages, so there is lots of data to learn from. And when we click on those recommended items, that gives them the human feedback: if you show me batteries when I buy an electric toy, I am more likely to click on the batteries than if you showed me a book there.
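To make that concrete, here is a toy sketch (Python) of the kind of learning loop behind "customers also bought": count which items show up together in past baskets and recommend the most frequent companions. The real systems are far more sophisticated, and the item names below are made up, but the feedback idea is the same:

from collections import defaultdict
from itertools import permutations

# Toy "customers also bought" learner: count item co-occurrences from past
# baskets, then recommend the most frequent companions. Illustrative only;
# the item names are made up.

co_counts = defaultdict(lambda: defaultdict(int))

def record_basket(items):
    # Every pair of items seen together strengthens their association.
    for a, b in permutations(items, 2):
        co_counts[a][b] += 1

def recommend(item, k=3):
    # Items most often seen alongside the given one.
    companions = co_counts[item]
    return sorted(companions, key=companions.get, reverse=True)[:k]

# "Training data": past purchases, i.e. the human feedback described above.
record_basket(["electric toy", "batteries"])
record_basket(["electric toy", "batteries", "gift wrap"])
record_basket(["novel", "bookmark"])

print(recommend("electric toy"))   # ['batteries', 'gift wrap']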

For things like design work, there are usually not a ton of training examples (people don't design ICs at the rate of millions per year), and there's also no simple way to gather feedback on AI-designed circuits (it's very expensive to build each one and only then test it to see how well it performs its task).

So routine and relatively simple tasks are going to be rapidly replaced by AI systems.
Higher order and more complex functions are still a long way off.

Doug
(Ph.D. in AI from the University of Michigan)

Commented:
It's also worth mentioning that even routine/simple tasks might not be replaced if they need to stick to routine, and the use of any technology will also naturally create new supervision jobs for humans.

The nature of AI is typically to continue to grow. Consider a manufacturing assembly line where it is critical that things are done in a precise and repetitive manner, and it has mass output. Any deviation from the routine could be catastrophic, so we build robotic, non-AI systems, which still require engineers to design, build, and maintain. So even as those automated systems replaced some manual manufacturing, they created other jobs in the process. AI would instinctively try to learn and deviate, which would be a terrible combination in those situations.

However, it might be good for an AI to take lots of data from airframe testing and suggest a new design for an airplane wing, but it would be idiocy to use the design without additional human review.

Now, you could, in theory, also implement boundaries for the AI so it sticks to certain routines in certain situations, but at that point, there's no cost-benefit to using AI over standard automation.

In my opinion, there's a niche for AI, just as there is a niche for non-AI automation. Niche technologies may make some older jobs obsolete, but they usually create newer ones, and they rarely ever -completely- eliminate human employment in a situation.
curiouswebster, Software Engineer

Author

Commented:
Well, thanks for the feedback. Maybe AI and Machine Learning are not really what I am referring to.

And I am not trying to save my job... I am trying to figure out how we can keep Watson from ever eating my company for lunch ;)

As a software engineer, I get the part about artistic design being something where computers have a very long way to go...

And I understand that even training by experts provides data points that can be re-used, but the system would still be stymied because the context needs to repeat in order for those data points to be useful.

How long until a Watson-style machine can read and learn? We learn by reading, so long as we already have the contextual knowledge that lets us make sense of this new knowledge.

This is how I define "learning": break the context, and what's read will not be understood, or will soon be forgotten.

So, how long is it before a Watson-type machine could also read its way to becoming an expert chip designer?

Thanks
Commented:
I think anyone who believes they could provide a realistic answer to that is probably not someone you want to listen to. That's like asking how long it will be before we colonize Mars. I'm not trying to be rude - just realistic. You're asking people to predict a very vague event in the future.
Well, thanks for the feedback. Maybe AI and Machine Learning are not really what I am referring to.

Please elaborate as to exactly what you are referring to...

I look at supercomputers like Summit, built by IBM (roughly 200 petaflops, or about 200 quadrillion calculations per second), as well as Watson, which I know very little about, and I worry that Watson could wipe out small businesses like the one I work for.
Watson was conceived to win on Jeopardy! - and that's it!


Infinite monkey theorem (From Wikipedia, the free encyclopedia)

The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, such as the complete works of William Shakespeare. In fact, the monkey would almost surely type every possible finite text an infinite number of times. However, the probability that monkeys filling the observable universe would type a complete work such as Shakespeare's Hamlet is so tiny that the chance of it occurring during a period of time hundreds of thousands of orders of magnitude longer than the age of the universe is extremely low (but technically not zero).
How long until a Watson-style machine can read and learn? We learn by reading, so long as we already have the contextual knowledge that lets us make sense of this new knowledge.
AI systems can currently read and have a limited understanding of what they read. For example, they could read these paragraphs and know that we're talking about learning. But it's a huge stretch from that to reading and fully comprehending what is being read, to the extent of learning new skills. Reading is a good way to gain "declarative" knowledge - that's knowledge of facts: when was the first computer built, how many people are in the US Senate, that sort of thing. And AI systems can do that. But learning a skill (or, more precisely, "procedural" knowledge) is a different beast and much harder to do from a book - for either a human or a computer. We tend to learn these skills by performing tasks and getting feedback on them, e.g., riding a bike and learning which way to lean to not fall off.
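To illustrate the declarative/procedural split: facts can simply be stored and looked up, whereas a skill is typically acquired by trial, error, and feedback - which is roughly what reinforcement learning does. Here is a toy sketch along those lines in Python; the one-step "balancing" task and its reward are invented for illustration and are nothing like a real bike model:

import random

# Toy illustration of learning a "skill" from feedback (trial and error)
# rather than looking up a stored fact. The task and reward are made up.

situations = ["tilting_left", "tilting_right"]
actions = ["lean_left", "lean_right"]
correct = {"tilting_left": "lean_right", "tilting_right": "lean_left"}  # unknown to the learner

value = {(s, a): 0.0 for s in situations for a in actions}  # built up purely from feedback

def feedback(situation, action):
    return 1.0 if action == correct[situation] else -1.0  # stayed upright vs. fell over

for trial in range(200):
    s = random.choice(situations)
    if random.random() < 0.1:                                # occasionally explore
        a = random.choice(actions)
    else:                                                    # otherwise do what has worked so far
        a = max(actions, key=lambda act: value[(s, act)])
    r = feedback(s, a)
    value[(s, a)] += 0.1 * (r - value[(s, a)])               # nudge estimate toward the outcome

for s in situations:
    print(s, "->", max(actions, key=lambda act: value[(s, act)]))

Nothing in that loop "knows" any facts about bikes; the behavior emerges only from repeated attempts plus feedback, which is the point being made about procedural knowledge.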

I would suggest that design skills are more akin to learning to ride a bike than learning who the Kings and Queens of England were. So again, this is not an area where current AI systems are very successful today. But give it another hundred years and I expect that will have changed.
E.g., chess used to be too hard a skill for AI systems, and today the best chess players are all computers. And cars are clearly learning how to drive - an area we would have said was beyond computer skill just a decade or two ago.

Doug
curiouswebster, Software Engineer

Author

Commented:
Thanks
