So . . . last night on Jeopardy! the IBM supercomputer Watson held its own against two highly respected challengers, at least in the first round of the first "exhibition match" between man and machine. This isn't the first time IBM has created a computer to take on human champions at a feat of cognitive strength (see Deep Blue playing chess against Kasparov). And Watson clearly demonstrated that it is possible to create a computer that doesn't just hold a lot of digital knowledge but can also decode clues and respond to indirect language. But is there something flawed with this type of artificial intelligence? The answer, in the form of a question, after the jump.
Computers and search engines are designed to respond to keywords and return the best matches. Even phrasing things as direct questions allows programmers to "teach" a computer how to decode a simply worded question and provide the correct answer. But when it comes to Jeopardy! clues (yes, as a former contestant I am legally obligated to include the exclamation point each time), the language is not always direct and there are multiple ways to approach a response. From the two-minute explanation of how Watson works, it seems that there is an elaborate pattern-recognition system in place to a) find which words in the clue are important and b) find the pattern that links them.
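To make that two-step idea concrete, here is a toy sketch in Python. This is emphatically not how Watson actually works (its real pipeline involved hundreds of scoring algorithms); it just illustrates the shape of the process: filter out filler words, then pick the candidate answer whose description shares the most important words with the clue. The stopword list and the tiny candidate list are both made up for the example.

```python
# Toy illustration only -- NOT IBM's actual method.
# Step (a): identify the "important" words in a clue.
# Step (b): link them to the best-matching candidate answer.
import re

# A hand-made list of filler words (a real system would use a far larger one).
STOPWORDS = {"the", "a", "an", "of", "in", "this", "is", "was", "to", "for", "and"}

def important_words(clue):
    """Step (a): keep only the words that aren't common filler."""
    words = re.findall(r"[a-z']+", clue.lower())
    return [w for w in words if w not in STOPWORDS]

def best_match(clue, candidates):
    """Step (b): crude 'pattern linking' -- pick the candidate whose
    description shares the most important words with the clue."""
    clue_words = set(important_words(clue))
    def overlap(entry):
        _name, description = entry
        return len(clue_words & set(important_words(description)))
    return max(candidates, key=overlap)[0]

# A tiny invented candidate pool for the demo.
candidates = [
    ("Deep Blue", "IBM chess computer that defeated Garry Kasparov"),
    ("Watson", "IBM computer that played Jeopardy against human champions"),
]

print(best_match("This IBM machine beat Kasparov at chess", candidates))
# -> Deep Blue
```

The obvious weakness of this toy version mirrors the post's point: word overlap has no notion of what a category or clue "means," so indirect or abstract phrasing defeats it entirely.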
(aside: speaking of patterns, how eerie was it that Watson's first selection when in control of the board was the Daily Double? Either the IBM programmers cracked the code of where the Daily Double appears, or that was one hell of a lucky guess).
The system is not flawless; Watson got some questions wrong. In particular, it was very poor at deciphering the clues in the category about decades: it usually arrived at a specific year rather than a decade, perhaps because it didn't "understand" the nature of the category. If the two human contestants were better at straightforward pattern recognition (instead of the more abstract kind that makes a good Jeopardy! contestant), they might have noticed this weakness and taken advantage of it, but they didn't seem to.
And this is the crux of my problem with Watson. These AI programs or computers are not replicating typical human intelligence, because typical humans are not Jeopardy! juggernauts or chess grandmasters. If anything, these systems are like people with Asperger's syndrome: difficulty understanding abstraction and social situations, and an intense, deep focus on a particular area of interest. A few decades ago, before it was deemed a politically incorrect term, Watson would have been called an idiot savant (well, a few decades ago Watson would have totally freaked people out, because they might have assumed he was sent from the future to destroy them all). I would have been more impressed if IBM had built a computer that could converse with Trebek and tell an amusing anecdote - that would be true artificial intelligence.
If the goal is to replicate the way the human mind works, shouldn't researchers be trying to develop these systems the way a human mind develops? Gradually learning about one's environment, picking up information organically, following areas of interest in depth while ignoring others. Creating a computer that knows everything and can sometimes decipher what you are asking of it isn't intelligence; it's just a program. It's an impressive feat of programming, no doubt, but it's a stretch to say that this represents artificial intelligence.
Of course, my favorite moment on the show was when Watson was briefly human, exhibiting a very human characteristic. One of the human contestants gave an incorrect response, and then Watson rang in with the same incorrect response. It clearly wasn't listening, just as we humans often don't.
I find it interesting that you have assumed that the only form of intelligence that seems to apply is human intelligence. And that the goal of "artificial intelligence" must therefore have come up short if it is distinguishable from human intelligence.
The "Turing Test" that you seem to gloss over is one test to show that a machine can mimic a human, and therefore could possibly be called intelligent, but it is a logical fallacy to assume the reverse: that if a machine doesn't fool a human, it is not intelligent.