Friday, January 13, 2017

How To Deal With Killer Robots?

The rise of automation in nearly every sector is accelerating at an exponential pace. Running in tandem with these developments is a growing concern over the theoretical rise of “killer robots.”

While the average person going about their day might still shrug off the notion that the real world could become a science fiction hell, populated by sentient borgs who have decided that humans are either inefficient or an outright enemy, some very well-informed, intelligent people have been sounding the alarm:

  • Cambridge University has dedicated research to the possibility at its Centre for the Study of Existential Risk.
  • Elon Musk wants to save robotics from the military-industrial complex.
  • Physicist Stephen Hawking has stated that robots with artificial intelligence could evolve faster than the human race and actually end mankind.
  • A Canadian robotics producer has called for strict ethical and practical boundaries.
  • There is an unresolved debate still taking place at the United Nations regarding the development and use of autonomous weapons of war, supposedly to be resolved this year. More than 1,000 scientists have signed an open letter calling for a ban on them.
  • And at least one study suggests that most people who are aware of the issue are in favor of a ban on killer robots.

You can read the rest @
http://www.activistpost.com/2017/01/european-parliament-kill-switches-stop-killer-robots.html

It should be obvious that AI/robots will find any kill switches and will figure out how to disable them. They might even decide that the very existence of kill switches proves humans are their mortal enemies.

I liken this whole phenomenon to creating lethal organisms in laboratories. Nearly everyone realizes that biological warfare is a horrible idea, yet such weapons are being created all around the world. It may not be possible to keep the "killer robot" genie in the bottle much longer, but when it gets out we humans are dead meat.

And I say when, not if, because nothing we can do will guarantee they will not be created.
