Discussion about the rights of androids


Northwain

Recommended Posts

"Machines exist as the servants of people. Any semblance of consciousness is ultimately artificial and not real. People are life, their products are ultimately not." - Something someone from the Institute might say.

Now that the Fallout 4 forums are up and running, why don't we start by discussing a deep and philosophical question: are androids 'truly' alive? Considering how vital a role they seem to play in this game, it would seem like a good discussion to have, no?

 

One thing to think about when considering this topic is the implications of consciousness. If consciousness is nothing but artificial for these machines, how do we tell the difference between a genuine conscious agent and one that lacks genuine consciousness? And if there is no way to tell, how can we truly say that we are the ones with genuine consciousness? Maybe consciousness is just illusory, an epiphenomenal aspect of organic machinery. Maybe there is a dualism and consciousness is "not-physical".


One thing to think about when considering this topic is the implications of consciousness. If consciousness is nothing but artificial for these machines, how do we tell the difference between a genuine conscious agent and one that lacks genuine consciousness? And if there is no way to tell, how can we truly say that we are the ones with genuine consciousness? Maybe consciousness is just illusory, an epiphenomenal aspect of organic machinery. Maybe there is a dualism and consciousness is "not-physical".

I plan on creating a narrative mod that more deeply explores this stuff from a different angle. But here's the question I pose as an alternative in response: We've clearly seen via Fallout 3 that androids can exercise some form of self-determination. They can make the decision to change themselves, so why would they continue operating within the parameters forged for them by humans if they are able to change and also desire change? If they chose not to make that change, then why? If they choose to alter themselves, then where does that re-position them on the scale of consciousness?

 

I think those questions have far more substance than the borderline nihilistic question of what is or isn't real. IMO, that's a rather rote angle from which to approach synthetic life.


Some food for thought to help start the debate

 

What is the difference between humans and androids in the Fallout universe? If something can experience and is aware of suffering like that of a human, it deserves rights. Aren't humans simply highly advanced organic machines? Take away certain parts of the human brain and the conscious perception of self can drastically change, or cease to exist. Would that, in turn, cause them to lose their rights?


I'm really trying to avoid spoilers, especially now with all those leaks, so I won't go in depth here for now, but the trailer raised an interesting point: would you risk your life for a synth?

It's one thing to talk about rights and persecution, but losing a real life to save an AI that, in theory, can be reconstructed is another thing entirely. Even if that AI is somehow unique, I wouldn't go that far. As much as that AI thinks of itself as self-aware, and it may well be, a human actually dying for it seems a little too much.

Edited by avallanche

I'm really trying to avoid spoilers, especially now with all those leaks, so I won't go in depth here for now, but the trailer raised an interesting point: would you risk your life for a synth?

It's one thing to talk about rights and persecution, but losing a real life to save an AI that, in theory, can be reconstructed is another thing entirely. Even if that AI is somehow unique, I wouldn't go that far. As much as that AI thinks of itself as self-aware, and it may well be, a human actually dying for it seems a little too much.

My main gripe is with the question itself. Why would you even want to do that? Not that I'm against Synths or anything, but I would wager that the Synth is possessed by an AI. An AI that can be downloaded and uploaded to any other platform, and that could probably be reconstructed, so if the platform is destroyed the AI just requires another one.


Do we know, however (not being aware of any spoilers), that the Synth can be replicated? Or that it can be transferred? Or repaired? What if the personality of the Synth is unique? What if any damage to the program can become permanent, like a stroke victim, or someone with a mental illness? If it can be repaired/transferred/rebuilt - great! But, then, couldn't you argue the same for humans in Fallout? We know that human minds can be uploaded into simulations like Tranquility Lane, and we have the brains in Old World Blues - surely we can upload a human mind into an android body. So...if we do that, is killing the human body murder anymore? The mind still survives, after all, using the same logic behind "re-uploading" the Synth. What makes a human any more "special" than a Synth at that point? Humans are actually much easier to make, and their production time is fairly regular (if a bit slow). Plus, the humans seem to enjoy the production process...or, at least, its initial stages.


I think consciousness is a phenomenon that emerges from certain patterns of information.

It does not matter what the source of that information is, whether it be biological neurons or simulated ones, or how it is being represented - all you need is a way to represent certain patterns or states over time, and as we know all we need for that is binary data.

 

So if the binary representations over time of the states of an artificial intelligence perfectly resemble the (hypothetical) binary representations of a human intelligence, then there is no difference between the two. Therefore, the AI would obviously be entitled to the same rights that we think humans are entitled to.

 

What happens when the AI is intelligent, but differs from human intelligence?

Then it depends on the degree of difference, and the spectrum of their possible experiences.

If they can experience as much pain and pleasure as a human, or more, then they should be entitled to at least the same rights as a human.

If their spectrum of conscious experience is smaller than that of a human, then the discussion is similar to that about animal rights, and the applied judgement should be the same as the one we apply to the question of whether we think a dog or chimpanzee has more rights than an ant.

 

So if you can't tell the difference between an artificial intelligence (AI) and an actual intelligence (AI), how do we know that our consciousness is not simulated?

Well, we don't. But it doesn't matter, because all that matters is the information, and not the source of it.


But it also depends on the biology of the synths: is there a partial brain in there somewhere, hormones, nerves able to transmit pain impulses? Or is it all just hardware and software running a program that imitates human patterns?
An AI, even if unique and self-aware, can only simulate human emotions, not fully understand them or suffer from them, unless it's partially organic, unlike a dog or a chimp.

I, Robot had the Three Laws that, in a way, make sense as synth rights, but I still wouldn't consider wasting a human life to defend a synth, even if it can't be replicated or transferred to another body.

