
Intelligent Machines


Sepherose

  

18 members have voted

  1. Should intelligent machines, if they arise, be given equal rights?

    • Yes
      9
    • No
      7
    • Unsure
      2


Recommended Posts

First, I'm going to post links to various articles about innovations in robotics to give some perspective:

 

Technological singularity <-Wiki

Drones that can predict enemy intent

Nanowire artificial skin

Robotic skin factory

Skin grafts grown outside of a petri dish <-Wiki

RoboEarth (AKA Skynet Lite)

Intelligent, learning satellite network

Robot capable of learning and mimicking human emotion (kind of)

Robot designs and builds other robots without human interaction

Fast learning program can figure out how to walk within minutes

Robot that can adapt to injury

Innovations that could lead to "indestructible" bots

And finally, this one I believe is very important:

British Ministry of Defence concerned about possible "Terminator" event

 

Alright, I am unsure about that last one, but I have watched a documentary about the "Technological Singularity," and the man saying it will happen has a strong track record of previous technological predictions. As for the rest, I am fairly certain it will speak for itself. It comes down to two questions, really.

 

If self-aware machines come about:

 

1. In terms of a "Terminator" event, could we avoid it?

 

2. And in terms of "Robotic Rights", akin to Asimov's "I, Robot", should self aware machines be granted equal rights?

 

My answers to these questions are fairly straightforward. To the first: in all likelihood, yes, so long as the answer to the second is ALSO yes. What it boils down to is the limitations of the machine rather than the programming. A learning, adaptive program, given the proper hardware on which to store its collected information, could in theory become aware, and in that event, how should it be handled? Many here have probably played the Mass Effect series: what led to the Quarians' near-extinction was their paranoid overreaction to the Geth gaining self-awareness. They attempted to wipe the Geth out, and, just as we would in an event like that, the Geth fought back. This is a perfect example of how NOT to react to a situation like that if it were to happen. But then where does that leave us with the exponential increase in technological advancement?

 

We will be weaker, and likely stupider, in a very short amount of time. To them, death would be a learning experience (à la the Battlestar Galactica Cylons), but to us, it is finality. I personally believe that a machine that becomes self-aware will develop morals, but those morals will be colored by its initial interactions with humans. If intelligent machines come into being, I believe the only way we could survive it is if we do not treat them as trinkets or tools, but as self-aware beings. What are my fellow Nexus users' thoughts on this?

 

TL;DR: If intelligent machines come about, how should they be treated, and could we avoid a "Terminator"-like end to the situation?

 

EDIT: I used "sentient" when I meant "self-aware"; this was a mistake.


Nope, our own lust for power and the cost of sending people to war will almost certainly bring that future about. Humanity is simply not wise enough to know when to stop. But hey, an AI war is better than the Grey Goo scenario. At least we have a chance of winning the favor of our robotic masters.

 

But you do realize that robots at this stage are far more likely to be guiltless killing machines than to contemplate philosophy, and for the most part, that is the only thing we have going for us. An enemy that thinks of nothing more than killing is easy to predict. An enemy that can formulate new strategies would be nearly impossible to predict. But when it happens, blame IBM for making computers that play chess (our only hope is to hinge the fate of the world on a game of Go, a game that even an advanced computer would have trouble with).


Nope, our own lust for power and the cost of sending people to war will almost certainly bring that future about. Humanity is simply not wise enough to know when to stop. But hey, an AI war is better than the Grey Goo scenario. At least we have a chance of winning the favor of our robotic masters.

 

Agreed on that. I was reading a while ago about the Grey Goo scenario; apparently there is a similar scenario that could come about through "strange matter." Fascinating stuff, really.

 

Slightly OT: Zombie Cancer


Creating intelligent machines would be like creating another race.

 

I wouldn't enslave another race, and I wouldn't want to deal with a revolt if we did enslave another race.

 

If machines can think the same way humans do, they deserve to be treated like humans.


What would be the purpose of building machines with emotion and intelligence anyway? They're made to serve a purpose. Creating them so they can fail to serve their purpose seems counterintuitive.

What would be the purpose of building machines with emotion and intelligence anyway? They're made to serve a purpose. Creating them so they can fail to serve their purpose seems counterintuitive.

That is a good question.

 

Creating machines intelligent enough to think for themselves is entirely counterproductive.


What would be the purpose of building machines with emotion and intelligence anyway? They're made to serve a purpose. Creating them so they can fail to serve their purpose seems counterintuitive.

That is a good question.

 

Creating machines intelligent enough to think for themselves is entirely counterproductive.

 

Not necessarily. Innovation, some would argue, requires creative thought, and that, some would again argue, requires emotion. They could still serve a purpose, however. There are many dangerous jobs they could do with ease compared to us, and they could help develop cleaner, more efficient technology, at a faster rate I might add. I don't see how allowing them free will and intelligence would make them LESS productive.


Intelligence is a useful trait for a machine, since it allows the machine to make decisions. Emotion, however, seems to be a trait useful only for showing off one's AI skills.

What would be the purpose of building machines with emotion and intelligence anyway? They're made to serve a purpose. Creating them so they can fail to serve their purpose seems counterintuitive.

That is a good question.

 

Creating machines intelligent enough to think for themselves is entirely counterproductive.

 

Not necessarily. Innovation, some would argue, requires creative thought, and that, some would again argue, requires emotion. They could still serve a purpose, however. There are many dangerous jobs they could do with ease compared to us, and they could help develop cleaner, more efficient technology, at a faster rate I might add. I don't see how allowing them free will and intelligence would make them LESS productive.

You would be required to give the machines equal pay and equal rights, though.

 

It is immoral to put intelligent beings into slave labor, and it will come back to bite you.

 

So intelligent machines doing jobs that are dangerous to humans? Maybe, but it isn't worth it in my opinion. There are other options.

