Relying on technology may lead to dire consequences, say scientists
NEW YORK: A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones which, though still controlled remotely by humans, come close to being machines that can kill autonomously.
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society's workload, from waging war to handling customers on the phone.
Their concern is that further advances could create profound social disruptions and even have dangerous consequences.
'Something new has taken place in the past five to eight years,' said Microsoft researcher Eric Horvitz. He was part of a group of researchers - including computer scientists, artificial intelligence researchers and roboticists - who met at the Asilomar Conference Grounds on Monterey Bay in California on Feb 25 this year to discuss the issue.
'My sense was that sooner or later we would have to make some sort of statement or assessment, given the rising voice of the technorati and people very concerned about the rise of intelligent machines,' Mr Horvitz said.
The group discounted the possibility of highly centralised superintelligences and the idea that intelligence might spring spontaneously from the Internet.
However, they did cite examples of dangerous consequences of advanced technology, such as computer worms and viruses that defy extermination and could be said to have reached a 'cockroach' stage of machine intelligence. They also agreed that robots that can kill autonomously are either already here or will be soon.
The scientists noted that there was legitimate concern that technological progress would transform the workforce by destroying a widening range of jobs. Possible threats to human jobs could include self-driving cars, software-based personal assistants and service robots in the home. Just last month, a service robot developed by Willow Garage in Silicon Valley proved it could navigate the real world.
The researchers also focused particular attention on the spectre that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smartphones?
At the conference, organised by the Association for the Advancement of Artificial Intelligence (AAAI), Mr Horvitz said he believed computer scientists must respond to the notions of superintelligent machines and artificial intelligence systems run amok.
The AAAI report, to be out soon, will try to assess the possibility of 'the loss of human control of computer-based intelligences'. It will also grapple, Mr Horvitz said, with socioeconomic, legal and ethical issues, as well as probable changes in human-computer relationships. What would it be like, for example, to relate to a machine as intelligent as your spouse?
Mr Horvitz said the panel was looking for ways to guide research so that technology improved society rather than moved it towards a technological catastrophe.
NEW YORK TIMES