
RE: Are Robots Going to Kill Us All?

in #steemit · 7 years ago

While it is not beyond the realm of possibility that robots will be programmed to exterminate humans a lot more efficiently than wars currently do (and I truly hope that such does 'not' come to pass as humanity has tested my faith badly enough as it is), it is far more likely that an extraplanetary disaster will do us in.

A meteor strike, most likely.

It's also possible that the delicate ecosystem within the bubble surrounding our planet shall shift... resulting in mass extinction. Will humanity be responsible? Probably.

The good news is that there remains hope. We just need to stop doing what we've been doing for centuries to millennia - and embrace a new paradigm for relating and living...


You're assuming that it'll only be well-meaning humans doing the programming.

The robots will get smart enough to build and program themselves.
Their evolutionary cycle will be orders of magnitude faster than ours, and they will end up catching up to and surpassing our biologically based high-water mark with ease.
Is it ironic that we seem hell-bent on helping them on their way?

I take your point that as humans we have the lateral-thinking fortitude to think of many ways to exterminate ourselves beyond machine-based assassins.

Er....yay?

Well-meaning humans would not programme a robot to exterminate "undesirables" (kind of what US drones do - though I 'think' humans still do the remote killing).

It is always possible that faulty code coupled with the capability for self-improvement and evolution will result in precisely what you speak of.

However - presuming that a robot retains its ability to think logically - there may yet be hope for a less drastic "solution" being resorted to.

This shall depend on what the machine values above all else. If consistency is a factor (and it ought to be), then the machine would have a hard time justifying such drastic actions as consistent with those values.

Oh and 'not' yay. :c)

We have to be very clear here. Robots of the future will not be programmed directly; they will be taught, and they will learn. Sure, the whole paradigm will be to learn how to serve our needs best, but "direct" programming is not what future robots will be relying on. There are already a lot of learning algorithms and even learning robots employed in manufacturing.

The reason we do this is that complex tasks would be extremely laborious to hardcode by hand, and a developer wouldn't really be able to think of all of the exceptions. That's why we are starting to devise machines that "understand" the world in a way more similar to ours. Robots of the future will be "shown" what to do, not programmed to do it. They will be programmed to understand our instructions and to build their behaviors on that.
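
To make the contrast concrete, here's a toy Python sketch (the grasping task, the numbers, and the function names are all made up purely for illustration, not taken from any real robot stack): one behavior is hard-coded rule by rule, the other simply interpolates between examples it was "shown".

```python
# Illustrative contrast between hard-coding a behavior and "showing" it examples.
# A toy sketch only; a real robot-learning pipeline is far more involved.

def hardcoded_grasp_force(weight_kg):
    """Every case must be anticipated by the developer in advance."""
    if weight_kg < 0.5:
        return 5.0   # newtons
    elif weight_kg < 2.0:
        return 15.0
    else:
        return 40.0  # anything unanticipated falls into a rough guess

# "Teaching by demonstration": the robot is shown (weight, force) pairs
# and generalises between them instead of following explicit rules.
demonstrations = [(0.2, 4.0), (0.8, 12.0), (1.5, 18.0), (3.0, 38.0)]

def learned_grasp_force(weight_kg):
    """Predict force from the two nearest demonstrations (toy 1-D regression)."""
    nearest = sorted(demonstrations, key=lambda d: abs(d[0] - weight_kg))[:2]
    (w1, f1), (w2, f2) = nearest
    if w1 == w2:
        return f1
    # Linear interpolation between the two closest examples the robot was shown.
    t = (weight_kg - w1) / (w2 - w1)
    return f1 + t * (f2 - f1)

print(hardcoded_grasp_force(1.1))  # whatever bucket the developer happened to write
print(learned_grasp_force(1.1))    # inferred from the demonstrations
```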

Does that make it likely that robots will kill us all? I'm not so sure. Does that make it more likely than if every piece of code that was moving them was programmed by hand? Absolutely.

Very good points, rocking-dave.

Also I find myself in agreement with you. It would be optimal if firmware fail-safes not dissimilar to Asimov's Laws of Robotics were adopted as standard.

The only issue here is that such laws would serve as an impediment to the building of true trust between humans and machines - as humans could never know whether the machine genuinely wants to help humanity - or whether it simply cannot work against humanity due to the fail-safes humans built into it.

This is an ethics question. Perhaps the answer to it lies in weighing rights against responsibilities - and being able to reliably assess such. This, along with prohibiting factors of might unless 'earnt', could reduce the risk.

Such is no different from a human requiring a licence for a gun.

The thing about Asimov's Laws is that you don't have a real way to build them into the robot's algorithm all that well. There are fail-safes, but the more complex the system gets and the more learning it has to do, the more complex and unreliable an internal fail-safe becomes. In a way, you would have to teach Asimov's Laws the way you teach a toddler to behave, so you have no strict guarantees.

What you might do is have a second, independent algorithm that has access to and control over the knowledge database the robot (or, more probably, a fleet of robots) has built up, constantly checking for things that could be potentially harmful to humans. But will it work?
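
Just to illustrate what I mean by that second algorithm, here's a rough, hypothetical Python sketch (the class, the action names and the "harmful" list are all invented for the example): a learned policy proposes actions, and an independent monitor with its own view of the shared knowledge base can veto them before anything reaches the actuators.

```python
# Toy sketch of a second, independent checker sitting between a learned
# policy and the actuators. Purely illustrative; not a real safety system.

POTENTIALLY_HARMFUL = {"apply_force_to_human", "disable_safety_stop"}

class SafetyMonitor:
    """Independent reviewer with read access to the shared knowledge base."""

    def __init__(self, knowledge_base):
        self.knowledge_base = knowledge_base  # e.g. facts the robot fleet has built up

    def approve(self, action, context):
        # Veto anything on the known-harmful list outright.
        if action in POTENTIALLY_HARMFUL:
            return False
        # Veto actions near humans unless the knowledge base marks them as safe.
        if context.get("human_nearby") and action not in self.knowledge_base.get("safe_near_humans", set()):
            return False
        return True

def execute(action, context, monitor):
    if monitor.approve(action, context):
        print(f"executing: {action}")
    else:
        print(f"blocked by monitor: {action}")

# Usage: the learned policy proposes actions; the monitor independently reviews them.
kb = {"safe_near_humans": {"hand_over_tool", "stop"}}
monitor = SafetyMonitor(kb)
execute("hand_over_tool", {"human_nearby": True}, monitor)        # executing
execute("apply_force_to_human", {"human_nearby": True}, monitor)  # blocked
execute("fast_arm_sweep", {"human_nearby": True}, monitor)        # blocked: not known to be safe
```

The point of the design being that the monitor sits outside the learning loop, so a policy that drifts during training can't quietly rewrite its own limits.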

I hope people much smarter than me will have figured it out by then.

We're just accelerating evolution, mate, building a stronger, smarter species than us to thrive at our expense.