If AI was created, should it have equal rights to humans?


marharth

Should AI machines have equal rights?  

46 members have voted

  1. Equal rights or not?



245 replies in this topic

An important element of the question is missing - at least to me anyway. What would the "life form" (supposing that it is deemed as such) look like?

 

The assumption seems to be that it would be in human form. Humans are notoriously inefficient at any number of things due to the way we are built and we tend to adapt our surroundings to meet our needs and limitations. It seems that if such a life form was to be created, it would have some purpose, and thus be more suited to the purpose than humans, suggesting it would not necessarily be in a human form.

 

What if it is in the shape of a spider or just a gelatinous blob?

 

What if it had no shape and was simply there, an energy with no form?

 

Given that humans have been around for more than a few days (that's how we know we as a species were not born yesterday; no one can possibly get this stupid in two days) and still can't seem to recognize other intelligent life forms or allow them rights of any kind, I doubt that any AI would ever be seen by humans as having rights, anytime, anywhere.

 

But to answer the question as asked, my thought is that anything created by humans should not have any rights, and especially not if given human intelligence.


Therefore, we can't consider it "alive" really by any meaningful definition that we have. Does it not being "alive" make a difference in how we treat it? If so, why?

 

No, it doesn't, because it is still an immortal entity capable of independent thought. But I think an AI could be classified as alive under that standard anyway.

 

I appreciate your determination here, but you left out a key sentence in my post. The part about "let's assume". I wasn't making an argument of any kind, nor saying anything about the nature of this thing and whether it is alive or not.

 

To put it plainly, let's take Premise 1 (the hypothetical A.I. is not alive) and grant it as true. Next, let's try to find out how the truth of that premise leads us to a conclusion about how we ought to treat this thing. So:

 

Premise 1: A.I. is not alive = true (remember this is an assumption)

Premise 2-n: ???

Conclusion: We should treat the A.I. as ????

 

I am thinkin' that we need to sort out the definition of "alive" here, and just what constitutes another 'being'. If it is self-aware, does that not imply at least some form of "life"?

 

I think we should find out why it matters that the thing is alive first. The whole endeavor of defining what is alive and what isn't would be pointless before we understood why it matters whether it is alive or not. An example that might make this clearer: consider us debating over what constitutes a "real" glass of champagne. Who cares, if it has no relevance to the topic at hand? Therefore, let's start with the assumption that it isn't alive. Now, why does it matter?

Edited by stars2heaven

But to answer the question as asked, my thought is that anything created by humans should not have any rights, and especially not if given human intelligence.

 

Humans create babies, should they not have any rights?


stars2heaven asks "Why?" to my reply "anything created by humans should not have any rights, and especially not if given human intelligence."

 

I suppose the answer to ”why” is partly in this question, “Why is it that monkeys (or pick your species) do not have rights – or at least not equal rights to humans?” There are species that communicate, procreate, reason, have emotions, use tools, make decisions, etc., so why are they not lining up to vote or to get their driver’s license – and more importantly to the question, why are we not letting them?

So the question becomes: what factors of intelligence grant a species rights? Obviously, current practice suggests that the AI would need to have, at the very least, the identical factors and levels of intelligence attributed to humans at whatever point in time such an AI is created.

 

Of course, the OP’s question may not intend to suggest the granting of rights equal to humans, but to just certain or obvious rights that would be granted to a created AI, such as the right to function as designed or to receive quality oil on maintenance. If this were the question I suppose I could agree to the granting of such "rights". However, I am assuming the question is looking at a broader scope of rights that would be, at minimum, equal to human rights.

 

Regardless of whatever definitions may be made today or in the future about what intelligence is, it comes down to either the AI has whatever it is that makes up those factors of intelligence to be granted rights or it doesn’t.

 

Adding to or taking from the AI whatever it is that makes up the necessary factors of intelligence in order to be afforded rights means the AI has been manipulated and changed. Since I cannot even imagine what could be added (what’s missing?), I can only suggest that factors of intelligence would be manipulated or deleted in order to achieve some purpose. Why else would the factors be changed?

That purpose may be as noble or common as one may imagine, but factors would have been changed in some way and thus are no longer identical to those held by humans. This makes it something other than human, it is a machine that serves some purpose of human design. If it has rights, then it could refuse to fulfill that purpose, so what would be the sense? Why would you create something that could refuse to do what it was created for?

 

One may point out that the purpose could be to create another intelligent life form or species on the planet and to set it free to evolve – a grand experiment. In that case, being identical, it would be subject to the same fallibility as humans, and it's bad enough having one species like humans on the planet without creating another. Such an AI should have no rights and, more importantly, should require a system that would destroy each and every one of them immediately upon a single button push from a well-guarded location.

 

Humans create babies, should they not have any rights?

 

While I appreciate that men and women are capable of "creating" a baby, I do not believe anyone considers a baby to be an artificial intelligence - at least I do not. A clone would again be created for a purpose and again would be manipulated or changed in some manner in order to serve that purpose. Giving it rights to allow it to refuse to serve that purpose makes no sense.


I think we should find out why it matters that the thing is alive first. The whole endeavor of defining what is alive and what isn't would be pointless before we understood why it matters whether it is alive or not. An example that might make this clearer: consider us debating over what constitutes a "real" glass of champagne. Who cares, if it has no relevance to the topic at hand? Therefore, let's start with the assumption that it isn't alive. Now, why does it matter?

If we assume that it is not alive, then it is an artifact, a piece of property no different from my PC. I love my PC, but I have no misgivings about treating it in the same vein as my coffee maker: when it no longer serves its function, or at my discretion or whim, it gets recycled.


If the intelligence and thought process of the machine is equal to yours, then I don't know why it matters if it matches the definition of life. Explain why it matters.

 

Plants are also life. That does not mean we have to give equal rights to plants. Matching the definition of life is of no relevance.

 

The reason I avoided the purpose of even creating an AI is that I don't think an AI would be very efficient at almost anything. It's not worth the risk. That's why I framed the topic the way I did, to avoid that.

Edited by marharth

Premise 1: A.I. is not alive = true (remember this is an assumption)

Premise 2-n: ???

Conclusion: We should treat the A.I. as ????

Viruses aren't alive; does that give us free rein to do whatever we want with them?


Premise 1: A.I. is not alive = true (remember this is an assumption)

Premise 2-n: ???

Conclusion: We should treat the A.I. as ????

Viruses aren't alive; does that give us free rein to do whatever we want with them?

Though I am reasonably sure you are referring to software: yes, we have the ability and right to do as we please with either form of virus, which usually means eradication when possible, before it irreparably harms the host.

