If AI was created, should it have equal rights to humans?


marharth

Should AI machines have equal rights?  

46 members have voted

  1. Equal rights or not?




Premise 1: A.I. is not alive = true (remember this is an assumption)

Premise 2-n: ???

Conclusion: We should treat the A.I. as ????

Viruses aren't alive; does that give us free rein to do whatever we want with them?

 

No (in either sense of the word you may intend, mechanical or biological). I don't think life is a necessary quality for determining treatment... sufficient, yes, but not necessary. There are more important factors that I think should be taken into account. Now, one may debate whether having these qualities actually makes something a living entity, but whether it does or doesn't, I don't think it really matters. The qualities that define personhood are what I think should determine how we treat a thing.

Edited by stars2heaven

One may point out that the purpose could be to create another intelligent life form or species on the planet and to set it free to evolve – a grand experiment. In that case, being identical, it would be subject to the same fallibility as humans, and it's bad enough having one species like humans on the planet without creating another. Such an AI should have no rights and, more importantly, should require a system that would destroy each and every one of them immediately upon a single button push from a well-guarded location.

 

I cut out the rest of your post because this is the part I wanted to focus on, and nothing in the rest of the post depends on or supports this directly (so I can cut it out without misrepresenting anything you say here or missing a greater point, etc.).

 

Why is it that such an AI as this one should have no rights? Merely because it is potentially as fallible as us?


I think we should find out why it matters whether the thing is alive first. The whole endeavor of defining what is and isn't alive would be pointless before we understood why being alive matters at all. An example that might make this clearer: imagine us debating over what constitutes a "real" glass of champagne. Who cares, if it has no relevance to the topic at hand? So let's start with the assumption that it isn't alive. Now, why does it matter?

If we assume that it is not alive, then it is an artifact, a piece of property no different from my PC. I love my PC but have no misgivings about treating it in the same vein as my coffee maker: when it no longer serves its function, or at my discretion or whim, it gets recycled.

If the intelligence and thought processes of the machine are equal to yours, then I don't know why it matters whether it fits the definition of life. Explain why it matters.

 

Plants are also alive. That does not mean we have to give them equal rights. Matching the definition of life is of no relevance.

 

The reason I avoided the purpose of even creating an AI is that I don't think an AI would be very efficient at much of anything. It's not worth the risk. That's why I framed the topic the way I did, to avoid that.

 

This is an excellent point to make against the quote from Aurielius. There are many living things which we do not grant equal rights to on the basis of being alive. Clearly, if life is a quality necessary for granting rights, it isn't the only one. What is it that distinguishes how we should treat this A.I. (as a non-living thing) from how we treat living things like plants?

Edited by stars2heaven

Humans create babies, should they not have any rights?

 

While I appreciate that men and women are capable of "creating" a baby, I do not believe anyone considers a baby to be an artificial intelligence - at least I do not.

 

It was just the way you worded it. :)


One may point out that the purpose could be to create another intelligent life form or species on the planet and to set it free to evolve – a grand experiment. In that case, being identical, it would be subject to the same fallibility as humans, and it's bad enough having one species like humans on the planet without creating another. Such an AI should have no rights and, more importantly, should require a system that would destroy each and every one of them immediately upon a single button push from a well-guarded location.

 

I cut out the rest of your post because this is the part I wanted to focus on, and nothing in the rest of the post depends on or supports this directly (so I can cut it out without misrepresenting anything you say here or missing a greater point, etc.).

 

Why is it that such an AI as this one should have no rights? Merely because it is potentially as fallible as us?

 

An AI that was created and that was identical to humans would not be 'potentially' fallible; it would be identically fallible. The AI would (eventually) want whatever it is that any and all humans want and would do exactly the same things humans would do in order to get it. While I admit that not permitting them any rights would be a useless exercise in trying to stem the issues they would cause, since they would demand rights (amongst other things) anyway, at least once they turned violent in their demands someone could push the button and resolve the problem.


One may point out that the purpose could be to create another intelligent life form or species on the planet and to set it free to evolve – a grand experiment. In that case, being identical, it would be subject to the same fallibility as humans, and it's bad enough having one species like humans on the planet without creating another. Such an AI should have no rights and, more importantly, should require a system that would destroy each and every one of them immediately upon a single button push from a well-guarded location.

 

I cut out the rest of your post because this is the part I wanted to focus on, and nothing in the rest of the post depends on or supports this directly (so I can cut it out without misrepresenting anything you say here or missing a greater point, etc.).

 

Why is it that such an AI as this one should have no rights? Merely because it is potentially as fallible as us?

 

An AI that was created and that was identical to humans would not be 'potentially' fallible; it would be identically fallible. The AI would (eventually) want whatever it is that any and all humans want and would do exactly the same things humans would do in order to get it. While I admit that not permitting them any rights would be a useless exercise in trying to stem the issues they would cause, since they would demand rights (amongst other things) anyway, at least once they turned violent in their demands someone could push the button and resolve the problem.

 

Yeah, they might get violent if we deny them rights and they decide to demand them, but why deny them rights to begin with? Why should their fallibility play any role in deciding their rights? (assuming of course that they are not MORE fallible than us, but only AS fallible)

Edited by stars2heaven

Yeah, they might get violent if we deny them rights and they decide to demand them, but why deny them rights to begin with? Why should their fallibility play any role in deciding their rights? (assuming of course that they are not MORE fallible than us, but only AS fallible)

 

The AI would not get violent just because they were denied rights. As an intelligence, they would comply, perhaps even understand, initially. They would get violent when their demands for rights were not met, a situation that would occur, given sufficient time, regardless of whether they were given rights at the onset of their creation. An intelligence equal to that of humans means it would eventually get around to those ideas and concepts.

 

The AI would exist for the convenience of the purpose they were created to serve. To give them the idea that they have rights beyond that purpose would be cruel and, in my mind, much worse than denying them rights. Once their purpose is served, they have the right to what? Existence, but no purpose? I can't think of anything more cruel than that for something with intelligence.
