It is becoming acceptable again to talk of computers performing human tasks such as problem-solving and pattern-recognition.
After years in the wilderness, the term ‘artificial intelligence’ (AI) seems poised to make a comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public consciousness with the release of A.I., a movie about a robot boy. This has ignited public debate about AI, but the term is also being used once more within the computer industry. Researchers, executives and marketing people are now using the expression without irony or inverted commas. And it is not always hype. The term is being applied, with some justification, to products that depend on technology that was originally developed by AI researchers. Admittedly, the rehabilitation of the term has a long way to go, and some firms still prefer to avoid using it. But the fact that others are starting to use it again suggests that AI has moved on from being seen as an over-ambitious and under-achieving field of research.
The field was launched, and the term ‘artificial intelligence’ coined, at a conference in 1956, by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon and Allen Newell, all of whom went on to become leading figures in the field. The expression provided an attractive but informative name for a research programme that encompassed such previously disparate fields as operations research, cybernetics, logic and computer science. The goal they shared was an attempt to capture or mimic human abilities using machines. That said, different groups of researchers attacked different problems, from speech recognition to chess playing, in different ways; AI unified the field in name only. But it was a term that captured the public imagination.
Most researchers agree that AI peaked around 1985. A public reared on science-fiction movies and excited by the growing power of computers had high expectations. For years, AI researchers had implied that a breakthrough was just around the corner. Marvin Minsky said in 1967 that within a generation the problem of creating ‘artificial intelligence’ would be substantially solved. Prototypes of medical-diagnosis programs and speech-recognition software appeared to be making progress. It proved to be a false dawn. Thinking computers and household robots failed to materialise, and a backlash ensued. ‘There was undue optimism in the early 1980s,’ says David Leake, a researcher at Indiana University. ‘Then when people realised these were hard problems, there was retrenchment. By the late 1980s, the term AI was being avoided by many researchers, who opted instead to align themselves with specific sub-disciplines such as neural networks, agent technology, case-based reasoning, and so on.’
Ironically, in some ways AI was a victim of its own success. Whenever an apparently mundane problem was solved, such as building a system that could land an aircraft unattended, the problem was deemed not to have been AI in the first place. ‘If it works, it can’t be AI,’ as Dr Leake characterises it. The effect of repeatedly moving the goal-posts in this way was that AI came to refer to ‘blue-sky’ research that was still years away from commercialisation. Researchers joked that AI stood for ‘almost implemented’. Meanwhile, the technologies that made it onto the market, such as speech recognition, language translation and decision-support software, were no longer regarded as AI. Yet all three once fell well within the umbrella of AI research.
But the tide may now be turning, according to Dr Leake. HNC Software of San Diego, backed by a government agency, reckon that their new approach to artificial intelligence is the most powerful and promising approach ever discovered. HNC claim that their system, based on a cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or extract a voice signal from a noisy background - tasks humans can do well, but computers cannot. ‘Whether or not their technology lives up to the claims made for it, the fact that HNC are emphasising the use of AI is itself an interesting development,’ says Dr Leake.
Another factor that may boost the prospects for AI in the near future is that investors are now looking for firms using clever technology, rather than just a clever business model, to differentiate themselves. In particular, the problem of information overload, exacerbated by the growth of e-mail and the explosion in the number of web pages, means there are plenty of opportunities for new technologies to help filter and categorise information - classic AI problems. That may mean that more artificial intelligence companies will start to emerge to meet this challenge.
The 1968 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000. As well as understanding and speaking English, HAL could play chess and even learned to lip-read. HAL thus encapsulated the optimism of the 1960s that intelligent computers would be widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like computer. Individual systems can play chess or transcribe speech, but a general theory of machine intelligence still remains elusive. It may be, however, that the comparison with HAL no longer seems quite so important, and AI can now be judged by what it can do, rather than by how well it matches up to a 30-year-old science-fiction film. ‘People are beginning to realise that there are impressive things that these systems can do,’ says Dr Leake hopefully.