What Does IBM’s Watson Tell Us About Potential Future Expert-Systems?
During the test run of IBM’s Watson in a short series of Jeopardy! matches we saw the expert-system answer a lot of questions quickly and correctly, but we also saw it make some extremely wacky mistakes. The lead researcher on IBM’s Watson project has been quoted as saying that he doesn’t fully understand why Watson makes the decisions it does, or why it gets some things wrong and some things right…
George Dvorsky, a Canadian futurist and ethicist who writes about advancements in computer science and technology, decided to take this quote and extend it to its logical extreme:
This kind of freaks me out a little. When asking computers questions that we don’t know the answers to, we aren’t going to know beyond a shadow of a doubt when a system like Watson is right or wrong. Because we don’t know the answer ourselves, and because we don’t necessarily know how the computer got the answer, we are going to have to take a tremendous leap of faith that it got it right when the answer seems even remotely plausible.
This doesn’t freak me out at all. If the questions we ask Watson’s eventual smarter counterpart have an equal chance of being right or terribly wrong, we won’t put much stock in its answers. The whole point of expert-systems is that they will act like experts. We don’t trust our current human experts without corroborating their answers with other experts, and the final decider of all truth is still reality.
If we ask a computer a question to which we don’t know the answer, and that answer is actually testable, we’re in no trouble at all.
In fact, the Jeopardy! questions are an excellent example of this: the answers that Watson spat out would win or lose it the round; that’s the test. As I said above, when we pose questions to human experts we may not know exactly what’s going on in their brains, but we lend them a sort of credibility based on their track record and weigh that against the risk we’re taking by listening to them.
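To make that weighing concrete, here is a minimal sketch in Python. It is my own toy framing, not anything IBM or Dvorsky has described: trust an answer when the expert’s historical accuracy outweighs the cost of acting on a wrong one.

```python
def should_act_on(track_record: list, risk_of_acting: float) -> bool:
    """Trust an answer when the expert's historical accuracy outweighs the risk.

    track_record: past answers marked True (right) or False (wrong).
    risk_of_acting: 0.0 (harmless if wrong) up to 1.0 (catastrophic if wrong).
    """
    accuracy = sum(track_record) / len(track_record)
    return accuracy >= risk_of_acting

# A 90%-accurate expert is worth listening to on a low-stakes question...
print(should_act_on([True] * 9 + [False], risk_of_acting=0.3))   # True
# ...but not, on track record alone, when being wrong would be nearly catastrophic.
print(should_act_on([True] * 9 + [False], risk_of_acting=0.95))  # False
```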
When the IBM lead says that he’s not totally certain why Watson made a decision or got something right or wrong, he’s revealing that he’s not fully cognizant of the model that Watson is using. As a student of computer science, I find this a little bit silly from the outset, because we should be able to get Watson to replay everything it thought and did during that round of Jeopardy! and watch its functions in motion. With sufficient storage, we can literally put Watson’s computer brain states on a slideshow and suss out where it went haywire.
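To illustrate what that replay could look like, here is a minimal sketch in Python. The names and structure are entirely hypothetical (IBM hasn’t published Watson’s internals in this form), and the scores and evidence in the usage example are invented, though the clue mirrors Watson’s famously wacky Final Jeopardy! “Toronto” answer. The idea is simply to record every candidate answer and the evidence behind its score, so a wrong answer can be traced back to the step where the scoring went haywire.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TraceStep:
    """One recorded step: a candidate answer, its confidence score, and the evidence behind it."""
    candidate: str
    score: float
    evidence: List[str]

@dataclass
class DecisionTrace:
    """Hypothetical log of everything the system 'thought' while answering one clue."""
    clue: str
    steps: List[TraceStep] = field(default_factory=list)

    def record(self, candidate: str, score: float, evidence: List[str]) -> None:
        self.steps.append(TraceStep(candidate, score, evidence))

    def final_answer(self) -> TraceStep:
        # The answer actually given is the highest-scoring candidate.
        return max(self.steps, key=lambda s: s.score)

    def replay(self) -> None:
        """The 'slideshow' of brain states: walk every candidate in the order it was scored."""
        for step in self.steps:
            print(f"{step.candidate!r} scored {step.score:.2f} because {step.evidence}")

# Usage: log candidates while answering, then replay afterwards to see where it went wrong.
trace = DecisionTrace(clue="U.S. Cities: its largest airport is named for a WWII hero")
trace.record("Chicago", 0.54, ["O'Hare named for Butch O'Hare"])       # illustrative evidence only
trace.record("Toronto", 0.71, ["strong keyword overlap with sources"])  # the wacky mistake
print(trace.final_answer().candidate)   # -> Toronto
trace.replay()
```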
Of course, Watson’s appearance on Jeopardy! is largely a PR stunt; but it’s also the case that Watson is still a developing system, and the Jeopardy! rounds could be seen as a field test.
If an advanced prototype engine blows out on us under real-world test conditions, we don’t immediately know what went wrong or why, because we don’t have instant access to it; but as long as everyone was watching very closely, we can model what happened and discover the flaw after the fact.
Looking even further ahead, it’s becoming painfully obvious that any complex system that is even remotely superior (or simply different) relative to human cognition will be largely unpredictable. This doesn’t bode well for our attempts to engineer safe, comprehensible and controllable super artificial intelligence.
Erm, well, okay.
When we look far into the future by attempting to map our current trajectory, we quickly slide into fanciful, if not romantic, speculation. I must conclude my disagreement with George’s assessment by pointing out that an incomplete understanding of Watson’s mistakes doesn’t give us any real reason to doubt that future advancements in AI will turn out well (let alone “super artificial intelligence”).
The anxiety that the AIs we construct will be much smarter than we are, work from alien psychologies, and act in ways we find ultimately unpredictable and potentially dangerous to us happens to be the mainstay of a lot of futurist science fiction. It drives the apocalypse of the Dune universe as well as that of Battlestar Galactica, and anyone who has read When H.A.R.L.I.E. Was One can see these very human fears laid out on the table.
There’s no reason why extremely complicated artificial intelligence systems couldn’t just show their work. Every question humanity asks about the universe that can be tested can be rendered into a hypothesis, fitted to a model, broken into a series of tests, and finally checked for accuracy. A flaw in the initial reasoning or in the hypothesis will probably throw off the final answer to some degree. This is already true of expert systems made up of humans.
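As a rough illustration of what “showing their work” might mean in practice (a sketch under my own assumptions, not any interface Watson actually exposes), an answer can be returned together with the claim and the tests that support it, so a human or a second system can check every step rather than taking the answer on faith:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hypothesis:
    """A testable claim plus the checks that would confirm or refute it."""
    claim: str
    tests: List[Callable[[], bool]]

def show_work(hypothesis: Hypothesis) -> dict:
    """Run every test and return the verdict alongside the evidence, not just the answer."""
    results = [test() for test in hypothesis.tests]
    return {
        "claim": hypothesis.claim,
        "test_results": results,
        "supported": all(results),
    }

# Toy usage: a claim we can actually check against reality.
h = Hypothesis(
    claim="17 is a prime number",
    tests=[lambda: all(17 % d != 0 for d in range(2, 17))],
)
print(show_work(h))   # the reasoning is inspectable, so a flaw in it is discoverable
```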
IBM’s Watson is probably a good foundation for an expert-system capable of fact-finding, or of filtering large amounts of data to highlight patterns or surface information out of what we already know. Looking at the sort of questions Watson had trouble with, it’s obvious that the system may be good at relating facts, but not at cultural context (another highly complex and sometimes unpredictable emergent property of large-scale human interaction and language).
Furthermore, if what we’re looking for from something like Watson is instead an expert system capable of predicting the weather or the economy, then Watson certainly is not the droid we’re looking for.
For now, speculating that the next highly complex AI we build will develop a pathology or an agenda seems a little premature.