A professor at the University of California, Berkeley, has warned that robots being developed by the U.S. military will ultimately be responsible for humanity's doom.

Computer science professor Stuart Russell referred to two programs commissioned by the U.S. Defense Advanced Research Projects Agency (DARPA) that are aimed at developing drones capable of tracking targets with or without a human handler, according to The Telegraph. Russell said these "killer robots" could breach the Geneva Convention and "leave humans utterly defenseless."

"Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans," he said.

The first of these two programs, called Fast Lightweight Autonomy (FLA), will involve finding a way for drones to fly through windows at 20 meters per second without being directly controlled by a human, Wired reported. The second, called Collaborative Operations in Denied Environment (CODE), aims to make drones work together in fleets on missions, drawing on each member's strengths and compensating for its weaknesses.

However, Russell says robots like these, called lethal autonomous weapons systems (LAWS), present a danger to humanity because they could develop the ability to decide who lives and dies without direct human control.

"LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill- for example, they might be tasked to eliminate anyone exhibiting 'threatening behavior,'" he said.

Those who don't believe robots will turn on their creators include Dr. Sabine Hauert, a lecturer in robotics at the University of Bristol, who said that she and her colleagues are working on technology designed for positive uses such as helping the elderly and exploring space and the ocean, The Telegraph reported. Manuela Veloso, a professor of computer science at Carnegie Mellon University, said humanity should be more accepting of advancements in robotics.

"Although we have a way to go, I believe that the future will be a positive one if humans and robots can help and complement each other," Veloso said, Wired reported. "There are still hurdles to overcome to enable robots and humans to co-exist safely and productively. My team is researching how people and robots can communicate more easily through language and gestures, and how robots and people can better match their representations of objects, tasks and goals."

The Campaign to Stop Killer Robots, which includes Tesla and SpaceX founder Elon Musk, is one of many groups focusing on establishing stricter rules regarding the use of artificial intelligence. The United Nations is holding meetings to discuss the legality of such technology and what bans should be implemented.

"Debates should be organized at scientific meetings; arguments studied by ethics committees. Doing nothing is a vote in favour of continued development and deployment."

Russell published an article about his thoughts on the evolution of robotics in the journal Nature.