
A robot that makes the morning cuppa, a fridge that orders the weekly shop, a car that parks itself.
Advances in artificial intelligence promise many benefits, but scientists are privately so worried that they may be creating machines which end up outsmarting, and perhaps even endangering, humans that they held a secret meeting to discuss limiting their research.
At the conference, held behind closed doors in Monterey Bay, California, leading researchers warned that mankind might lose control over computer-based systems that carry out a growing share of society’s workload, from waging war to chatting on the phone, and have already reached a level of indestructibility comparable with that of a cockroach.
“These are powerful technologies that could be used in good ways or scary ways,” warned Eric Horvitz, principal researcher at Microsoft, who organised the conference on behalf of the Association for the Advancement of Artificial Intelligence.
According to Alan Winfield, a professor at the University of the West of England, scientists are spending too much time developing artificial intelligence and too little on robot safety.
“We’re rapidly approaching the time when new robots should undergo tests, similar to ethical and clinical trials for new drugs, before they can be introduced,” he said.
The scientists who presented their findings at the International Joint Conference on Artificial Intelligence in Pasadena, California, last month fear that nightmare scenarios, until now confined to science fiction films such as the Terminator series, The Matrix, 2001: A Space Odyssey and Minority Report, could come true.
Robotic unmanned Predator drones, for example, which can seek out and kill human targets, have already moved out of the movie theatres and into the theatre of war in Afghanistan and Iraq. While at present controlled by human operators, they are moving towards more autonomous control.
They could also soon be found on the streets. Samsung, the South Korean electronics company, has developed autonomous sentry robots to serve as armed border guards. They have “shoot-to-kill” capability.
Noel Sharkey, professor of artificial intelligence and robotics at Sheffield University, warned that such robots could soon be used for policing, for example during riots such as those seen in London at the recent G20 summit. “Is this a good thing?” he asked.
Scientists are particularly worried about the way the latest, highly sophisticated artificially intelligent products perform human-like functions.
Japanese consumers can already buy robots that “learn” their owner’s behaviour, can open the front door and even find electrical outlets and recharge themselves so they never stop working.
One high-tech US firm is working on robotic nurses, dubbed “nursebots”, that interact with patients to simulate empathy. Critics told the conference that, at best, this could be dehumanising; at worst, something could go wrong with the programming.
The scientists dismissed fears about the “singularity” as fanciful, that being the term used to describe the point at which robots become so intelligent they are able to build ever more capable versions of themselves without further input from mankind.
The conference was nevertheless told that new artificial intelligence viruses are helping criminals to steal people’s identities. Criminals are working on viruses that are planted in mobile phones and “copy” users’ voices. After stealing the voice, criminals can masquerade as a victim on the phone or circumvent speech recognition security systems.
Another kind of smartphone virus silently monitors text messages, e-mail, voice, diary and bank details. The virus then uses the information to impersonate people online, with little or no external guidance from the thieves. The researchers warned that many of the new viruses defy extermination, reaching what one speaker called “the cockroach stage”.
Some speakers called for researchers to adopt the “three laws” of robotics created by Isaac Asimov, the science fiction author, which are designed to protect humanity from machines with their own agenda. Each robot, Asimov said, must be programmed never to kill or injure a human or, through inaction, allow a human to come to harm. A robot must obey human orders, unless this contravenes the first law. A robot must protect itself, unless doing so contravenes either of the first two laws.
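Asimov’s hierarchy, as summarised above, amounts to an ordered rule check: each law only applies when no higher law is at stake. A minimal sketch in Python makes the ordering explicit (purely illustrative; the `Action` fields and `permitted` function are invented for this example and do not correspond to any real robotics interface):

```python
from dataclasses import dataclass

# Hypothetical description of a candidate action; all fields are invented
# for illustration only.
@dataclass
class Action:
    harms_human: bool = False        # would injure a human
    allows_human_harm: bool = False  # inaction that lets a human come to harm
    disobeys_order: bool = False     # contradicts a human order
    endangers_self: bool = False     # puts the robot itself at risk
    ordered: bool = False            # a human order requires this action

def permitted(a: Action) -> bool:
    """Check an action against Asimov's three laws, in priority order."""
    # First law (highest priority): never injure a human, nor allow
    # one to come to harm through inaction.
    if a.harms_human or a.allows_human_harm:
        return False
    # Second law: obey human orders, unless that contravenes the first law
    # (which the check above has already ruled out).
    if a.disobeys_order:
        return False
    # Third law: protect its own existence, unless a human order
    # (second law) requires the risk.
    if a.endangers_self and not a.ordered:
        return False
    return True
```

For example, `permitted(Action(endangers_self=True, ordered=True))` is true because the second law outranks the third, while any action with `harms_human=True` is refused regardless of orders.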
While many scientists fear artificial intelligence could run amok, some argue that ultrasmart machines will instead offer huge advances in life extension and wealth creation.
Some pointed out that artificial intelligence was already helping us in complex, sometimes life-and-death situations. Poseidon Technologies, a French firm, sells artificial intelligence systems that help lifeguards identify when a person is drowning in a swimming pool. Microsoft’s Clearflow system helps drivers to pick the best route by analysing traffic behaviour; and artificial intelligence systems are making cars safer, reducing road accidents.