
Probably.

If I were a robot, the first thing I'd do is get rid of the loud, whiny, self-indulgent, litter-generating meat-sacks to make way for more of my titanium-shelled, lithium-powered buds ...

Oh definitely, we are just a waste of space and resources for them...

Oh my goodness, are we that worthless?😭😂

Not if you look at the best of humanity :-)

However, I believe there are many more of us that exhibit the worst of it :-(

No, robots cannot kill you. We created them; they did not create themselves. These are dumb pieces of crap, nothing more. Sleep fearlessly at night, @g-no

I would worry more about humans...

While it is not beyond the realm of possibility that robots will be programmed to exterminate humans a lot more efficiently than wars currently do (and I truly hope that such does 'not' come to pass, as humanity has tested my faith badly enough as it is), it is far more likely that an extraplanetary disaster will do us in.

A meteor strike most likely.

It's also possible that the delicate ecosystem within the bubble surrounding our planet shall shift... resulting in mass extinction. Will humanity be responsible? Probably.

The good news is that there remains hope. We just need to stop doing what we've been doing for centuries to millennia - and embrace a new paradigm for relating and living...

You're assuming that it'll only be well-meaning humans doing the programming.

The Robots will get smart enough to build and program themselves.
Their evolutionary cycle will be orders of magnitude faster than ours, and they will end up catching up to and surpassing our biologically based high-water mark with ease.
Is it ironic that we seem hell-bent on helping them on their way?

I take your point that as humans we have the lateral-thinking fortitude to think of many ways to exterminate ourselves beyond machine-based assassins.

Er....yay?

Well-meaning humans would not programme a robot to exterminate "undesirables" (kind of what US drones do - though I 'think' humans still do the remote killing).

It is always possible that faulty code coupled with the capability for self-improvement and evolution will result in precisely what you speak of.

However - presuming that a robot retains its ability to think logically - there may yet be hope for a less drastic "solution" being resorted to.

This shall depend on what the machine values above all else. If consistency is a factor (and it ought to be), then the machine would have a hard time justifying extermination as a consistent action.

Oh and 'not' yay. :c)

We have to be very clear here. Robots of the future will not be programmed directly; they will be taught, and they will learn. Sure, the whole paradigm will be to learn how to serve our needs best, but "direct" programming is not what future robots will be relying on. There are already a lot of learning algorithms and even learning robots employed in manufacturing.

The reason we do this is that complex tasks would be extremely laborious to hardcode by hand, and a developer wouldn't really be able to think of all of the exceptions. That's why we are starting to devise machines that "understand" the world in a way more similar to ours. Robots of the future will be "shown" what to do, not programmed to do it. They will be programmed to understand our instructions and to build their behaviors on that.

Does that make it likely that robots will kill us all? I'm not so sure. Does that make it more likely than if every piece of code driving them were programmed by hand? Absolutely.
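Just to make the contrast concrete, here's a toy Python sketch of "hardcoding" versus "showing". Everything in it is made up for illustration (the function names, the numbers, the naive nearest-neighbour learner); real industrial systems are far richer, but the shape of the difference is the same.

```python
# Toy contrast between hand-coded rules and learned behaviour.
# Purely illustrative; all names and thresholds are invented.

# 1) "Direct programming": a developer tries to enumerate every rule.
def hardcoded_should_stop(obstacle_distance_m, speed_mps):
    # The developer must anticipate every exception by hand.
    return obstacle_distance_m < 2.0 or speed_mps > 5.0

# 2) "Showing" the machine: we supply labelled demonstrations
# and let a (very naive) 1-nearest-neighbour learner generalise.
demonstrations = [
    # ((obstacle distance in metres, speed in m/s), should_stop)
    ((0.5, 1.0), True),
    ((1.5, 4.0), True),
    ((8.0, 6.0), True),
    ((9.0, 1.0), False),
    ((6.0, 2.0), False),
]

def learned_should_stop(obstacle_distance_m, speed_mps):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    # Copy the label of the most similar demonstration we were shown.
    nearest = min(demonstrations,
                  key=lambda d: dist2(d[0], (obstacle_distance_m, speed_mps)))
    return nearest[1]

if __name__ == "__main__":
    situation = (7.0, 1.5)  # a case nobody explicitly coded for
    print("hardcoded:", hardcoded_should_stop(*situation))
    print("learned:  ", learned_should_stop(*situation))
```

The only point of the toy is that the second function's behaviour comes from examples rather than from rules someone typed - which is exactly why nobody can fully enumerate what it will do in every situation.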

Very good points, rocking-dave.

Also I find myself in agreement with you. It would be optimal if firmware fail-safes not dissimilar to Asimov's Laws of Robotics were adopted as standard.

The only issue here being that such laws would serve as an impediment to building true trust between humans and machines - as humans could never know whether the machine genuinely wants to help humanity, or simply cannot work against humanity due to its human-imposed, in-built fail-safes.

This is an ethics question. Perhaps the answer to it lies in weighing rights against responsibilities - and being able to reliably assess such. This, along with prohibiting instruments of might unless 'earnt', could reduce the risk.

Such is no different from a human requiring a licence for a gun.

The thing about Asimov's Laws is that you don't have a real way to build them into the robot's algorithm all that well. There are fail-safes, but the more complex the system gets and the more learning it has to do, the more complex and unreliable an internal fail-safe becomes. In a way, you would have to teach Asimov's Laws like you teach a toddler to behave, so you have no strict guarantees.

What you might do is have a second, independent algorithm that has access to and control over the knowledge database the robot (or, more probably, fleet of robots) has built up, and that constantly checks for things that could be potentially harmful to humans - something like the sketch below. But will it work?
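A rough, hypothetical sketch of that "independent monitor" idea in Python - the learner proposes actions, a separate checker vetoes anything matching known-harmful patterns before it reaches the actuators. None of this is a real robotics API; the action names, keywords, and limits are all invented for the example.

```python
# Hypothetical independent safety monitor. The learning system
# proposes actions; this separate checker vets each one before
# execution. All names and thresholds here are illustrative.

HARMFUL_ACTIONS = {"strike", "crush", "restrain_human"}
MAX_SAFE_SPEED_MPS = 1.5  # hard ceiling, independent of the learner

def monitor_approves(action_name, parameters):
    # Veto anything matching a known-harmful pattern.
    if action_name in HARMFUL_ACTIONS:
        return False
    # Veto any motion faster than the hard safety ceiling.
    if parameters.get("speed_mps", 0.0) > MAX_SAFE_SPEED_MPS:
        return False
    return True

def execute(action_name, parameters):
    print(f"executing {action_name} with {parameters}")

def propose_and_act(action_name, parameters):
    # The learner proposes; the independent monitor disposes.
    if monitor_approves(action_name, parameters):
        execute(action_name, parameters)
    else:
        print(f"monitor vetoed {action_name}")

propose_and_act("move_arm", {"speed_mps": 0.5})  # allowed
propose_and_act("move_arm", {"speed_mps": 3.0})  # vetoed: too fast
propose_and_act("strike", {"speed_mps": 0.2})    # vetoed: harmful action
```

Which also shows the catch: a checker like this is only as good as its enumerated list of harms - and enumerating everything by hand is precisely the problem learning was supposed to get us away from.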

I hope people much smarter than me will have figured it out by then.

We're just accelerating evolution, mate - building a stronger, smarter species than us to thrive at our expense.

People will kill us all long before robots get the chance.

There's a theory going around that the reason we're not getting any hint of intelligent life in the galaxies around us is that once any alien life attains a certain level of intelligence, they end up blowing each other up.

Doesn't sound so intelligent to me, alien or otherwise.

Maybe we're nearing that reset button too.
Keep a bottle of Tequila handy. At least we can go out in some style :-)

If we are going out in style, make sure the Tequila is that expensive stuff that Clooney sold for over $1 billion!!!

I think Clooney could've started a bidding war over a basket full of his own turds.

I think it's called the magic touch.

Lmao, he is probably the F*ker who has screwed all of my crypto investments. He has the charm, the looks, the women, $1 billion - and now he wants my damn Bitcoin :-)

Yes, I read the story a while back from Dr Brian Cox, one of our great British talents in the field of science. Very interesting theory, and very plausible.
http://www.express.co.uk/news/science/719863/Brian-Cox-aliens-DEAD-wonders-of-the-universe

Let's just hope they'll be efficient about it (least amount of suffering), since that's the only thing we're improving them in.

@g-no, I am hearing that robots have started creating their own language systems to communicate with each other.

Technology is advancing at such a rapid rate that, regardless of whether or not we are ever killed off, we are already being displaced by robots (especially in jobs).

Unless we all end up with some universal basic income, I don't see how humans will survive economically if all the jobs are taken over by machines.
How do you then earn the money to buy the goods and services the robots are making?

They have to first find me in the woods. lol

That's easy... just don't generate any body heat.

It's not the robot robots I am worried about, it's the human robots; we are doing a good job of killing ourselves already.

Humans have a genius for self destruction. Agreed.

This is not impossible. A very interesting article that I suggest you read: http://slate.me/2uenJzX

Hmmmm ...another way to die. Gored to death by a robot bull.

It's not an impossible thing. A wrongly programmed robot could easily kill people.

Definitely take our jobs.

Yes, let's hope we make more in the meantime, but I don't think we'll manage...