
What if sentience is created?


Harbringe


Watch the video the question will follow.

 

 

 

If we (the human species) are able to create, build, make a construct with the level of sentience presented in the video, one that is just as self-aware as we are in all respects, do we as humans have the right to claim that construct as property? If not, what happens to the capitalist notions of property and the right to benefit from the fruits of your own labor?


Humans have kept other humans as property for most of human history. Most of this was done either as indentured servitude (a way to pay off a debt to another person or to society as a whole) or by right of land ownership. Landowners essentially owned the people who worked their farms and had complete say over most of what they were allowed to do and where they were allowed to go. If a peasant tried to leave the estate to find work at another, the landowner could freely kill, maim, or beat said peasant. While indentured servitude was less permanent, and generally less brutal as a result, the rights of a servant were still very limited until the debt was paid. The notion of one class of persons being entirely inferior because of the color of their skin is a relatively new concept, and it primarily came about from the need or want of life-long servants who could work in harsh or dangerous conditions without being compensated or overly cared for. While we all know how that period ended, in modern times machines have taken over this role in most of the world.

 

Even with an advanced AI, I wouldn't imagine things changing very much. Sure, the first such AI will probably be seen as a sensational achievement in science and might cause philosophers and the like to buzz about trying to sound meaningful. But the second wave will probably be very different. With the second wave comes mass production for some designed purpose, the designed purpose being the key component. There's no sense in building a machine to help people around the house if said machine has any chance of rebelling against its designed role. These production models will likely lack most of the higher-level functions that go beyond the barest hint of sentience, even if such levels were possible, both because it would make them easier to develop and produce and because it would avoid those kinds of moral dilemmas.

 

When not even pets have rights beyond what is defined under animal cruelty laws, why would a construct be any different, especially when that construct can be intentionally designed not to need or want them?

 

 

*edit and on a lighter note* Fully featured models will probably be limited to political roles, due to the ability to make them look exactly as a committee decides they should in order to appeal to the largest audience, follow instructions to the letter while having no opinion of their own, and be completely disposable when the blame starts to pile up. If only because they would be cheaper to purchase.


Also tempted to think that once technology has proven it can make human-like androids, a number of laws may be established specifically forbidding it. Reason one being that constructs could not then be mistaken for living persons. This push would probably come both from groups that are against humans creating "life" and from groups whose position in society would be threatened by synthetic workers, the latter group undoubtedly growing between now and the point where synthetics become common. It would also be done so that humans do not start becoming sympathetic to the notion of giving constructs "rights" as if they were people, so that there isn't as clear an emotional attachment, and so that social connections don't start forming. Again, most of these would come about as a means to prevent the replacement of people by constructs.

 

Ultimately, all of this would serve one purpose: to prevent a singularity from occurring. By keeping the populations and roles of constructs just as limited as their appearance and comprehension, humanity would try to stop itself from being made obsolete.


First, the video was beautifully presented and actually made me cry. That's art, baby...

 

Second, this conversation (as fun and disturbing and probing of modern morality, or the lack thereof, as it may be) is much like asking whether walking on the surface of the Sun would be more like wading through mud or through water. The very idea that AI could be raised to a level that could be called sentient (in the definition intended here: fundamentally alive) is ridiculous. It smacks of the arrogance that assumes modern man is the most advanced man that has ever existed, and that, when confronted with evidence that men of the past achieved feats modern man cannot match, concludes that aliens from another planet must have shown them how. It truly makes me laugh.


I think you underestimate Homo sapiens' ability to surprise. In this case, though, I think "sentient" leans more toward "self-aware": a machine that is aware of its "self" and is able to learn. That is *currently* beyond our technology, but then, look at what the last hundred years have wrought. We went from horse and buggy to a space station and probes leaving our solar system. Had you told a man in 1915 that we would land men on the Moon and have folks living in space, you would have found yourself on a short ride to the nearest asylum.



 

You say that, yet we are slowly crawling our way there through sheer determination. The AI we already have is advancing at an increasing rate each year, to the point where service robots will probably become common in higher-end hotels and the like before the close of the decade; the first such robots are already appearing. Meanwhile we have self-driving cars, self-driving forklifts in warehouses, automated baggage handling in some European airports, and countless bots online designed to decode and emulate human communication. And this is only the "applied" portion of the technology out there. It isn't a question of "if" but a question of "when", and it has rightly been seen as a question of "when" among those who actually develop AI for well over the last 20 years. Obviously the answer to "when" is not one of immediate fulfillment, but we are getting closer. Nor is it based on human pride or optimism; rather, it comes from trying to understand how our own brain handles logic, or rather non-logic, and how to create machines that can better respond to human non-logic.

 

Regarding alien races: given the vastness of space, the time taken to traverse between stars, and the amount of resources required, the first reasonably intelligent aliens we encounter will probably be non-organic, or at least mostly non-organic, since machines tend to hold up better to cosmic radiation and aren't as limited by a short lifespan. Even the oldest-living animal species we know of live only a fraction as long as a machine could be designed to, while needing more material resources to sustain that life. Additionally, any alien culture we encounter that has managed to reach beyond its homeworld would probably have developed its own AI and faced many of the same challenges that we will face. None of this is about some sort of "modern pride"; rather, humanity would just be passing one more milestone on the long road to technological maturity, something that not every species reaches, or reaches in the same way, but also nothing really unique in the grand scale of things.

 

The big change over the last 50 years is that we've started being able to predict more accurately where these technological milestones are and what their dependencies are. There is still science fiction, but we are getting better at filtering the components that are fantasy from those with realistic merit.


I worked in this field for a while when I was still in university, programming and improving autonomous, cooperating robots that learn to work in a team to achieve a set goal. While our task still primarily concentrated on teaching them 'playing', Robot Soccer to be precise, the bigger goal behind it all was of a far broader and more general scale.

 

Intelligent machines, or AI, have always been put to the test in 'game environments' as a way to 'measure' their intelligence. For some time it was chess computers; since then the bar has risen continuously. Nowadays it's mainly Robot Soccer, but that's still only one part of it. The several different application fields I witnessed at multiple conventions while participating in the RoboCup soccer competitions always fascinated me.
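To give an idea of what those early chess computers were actually doing, here's a minimal, purely illustrative sketch of the minimax search at their core. The tiny hand-built game tree and all the names are made up for the example; real chess engines add depth limits, evaluation functions, and pruning on top of this idea:

```python
def minimax(state, maximizing, get_moves, apply_move, score):
    """Score a game state by searching every line of play to the end.

    get_moves(state)  -> list of legal moves (empty means game over)
    apply_move(s, m)  -> the state resulting from playing move m
    score(state)      -> value of a finished game for the maximizer
    """
    moves = get_moves(state)
    if not moves:
        return score(state)
    results = (minimax(apply_move(state, m), not maximizing,
                       get_moves, apply_move, score) for m in moves)
    return max(results) if maximizing else min(results)

# Tiny hand-built game: two moves each, leaves have fixed values.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"]}
leaf_values = {"L1": 3, "L2": 5, "R1": 2, "R2": 9}

best = minimax("root", True,
               get_moves=lambda s: tree.get(s, []),
               apply_move=lambda s, m: m,
               score=lambda s: leaf_values[s])
print(best)  # 3: going left guarantees 3, since the opponent holds right to 2
```

The point is that "intelligence" here is exhaustive lookahead plus a scoring rule, which is exactly why games made such convenient measuring sticks.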

 

There are the 'just gaming' kinds, sure. But there's also a team of little mobile land rovers cruising through a devastated environment, like the aftermath of a natural disaster, searching for signs of life so they can alert the search and rescue teams to a victim's position and condition. They navigate using a self-learned map of the location, which they share with each other and improve cooperatively, combining their individual knowledge of the terrain into a collective mind map that includes their own positions inside it. Whenever they lose contact with one another and risk losing a team member to the unknown obstacles about, another autonomous drone is launched, this time a flying camera drone, to locate the missing asset and reconnect it with the group, so they can proceed with their overall task and save more lives. Science fiction? Well, I've watched it live!
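The map-sharing part can be sketched in a few lines. This is purely illustrative, not the actual competition code; the grid representation and the count-weighted merge rule are assumptions for the example:

```python
# Each rover keeps an occupancy grid: cell -> (obstacle probability,
# number of observations backing that estimate).
def merge_maps(map_a, map_b):
    """Combine two rovers' grids into one shared map.

    Overlapping cells are averaged, weighted by observation count,
    so the better-explored rover's estimate dominates.
    """
    merged = dict(map_a)
    for cell, (p_b, n_b) in map_b.items():
        if cell in merged:
            p_a, n_a = merged[cell]
            n = n_a + n_b
            merged[cell] = ((p_a * n_a + p_b * n_b) / n, n)
        else:
            merged[cell] = (p_b, n_b)
    return merged

rover1 = {(0, 0): (0.9, 3), (0, 1): (0.1, 2)}   # has explored two cells
rover2 = {(0, 1): (0.5, 2), (1, 1): (0.2, 4)}   # overlaps on (0, 1)
shared = merge_maps(rover1, rover2)
print(shared[(0, 1)])   # weighted average of both estimates, 4 observations
```

Each rover runs the same merge on every exchange, so their maps converge toward one collective picture of the terrain.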

 

I agree and am also convinced, by experience, that it isn't a question of 'if' but only of 'when' indeed.

 

 

But back on topic: when it comes to the 'rights' such constructs should or shouldn't be granted, well, I think it clearly depends on the intent behind their creation.

 

You want an intelligent worker, learning from mistakes and improving itself, but never going to disagree with you or even have an opinion of its own? Fine, then create it so. It'll be a machine, will never become self-aware, and thus will never need any rights or freedom or whatever.

 

But if you truly want to create a 'living' artificial being, with the intent to make it as close to a human being as possible... then are you really going to be surprised when the point comes where you'll have to grant it the same rights and freedoms you grant other living beings and humans? The closer you make it to 'human', the more you must consider it 'as' human.

 

When you create an AI that is 'aware' of itself and values its own 'life' as much as you do yours, and the first thing you show it is the button you just need to press to end its existence... then don't be surprised if the first task it performs is finding a way to prevent you from ever pushing it.

 

Do you want machines? Then create machines.

Do you want to create sentient living beings? Then you also damn better treat them as such!



 

A machine can be programmed to learn; that has already been accomplished. Learning algorithms are a current reality, and improvements will come quickly. Programming concepts that we fully understand is possible.
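As a minimal illustration of what "programmed to learn" means in practice, here's a toy perceptron that learns the logical AND function from examples. The setup is illustrative only, not any particular production system:

```python
# Toy perceptron that learns the logical AND function from examples.
# Each wrong prediction nudges the weights toward the correct answer.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, rate = [0.0, 0.0], 0.0, 0.1

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

for _ in range(100):                       # passes over the data
    errors = 0
    for (x1, x2), target in samples:
        err = target - predict(x1, x2)     # -1, 0, or +1
        if err:
            w[0] += rate * err * x1        # strengthen or weaken each input
            w[1] += rate * err * x2
            b += rate * err
            errors += 1
    if errors == 0:                        # converged: every example correct
        break

print([predict(x1, x2) for (x1, x2), _ in samples])  # learned AND: [0, 0, 0, 1]
```

Nobody spells out AND in the code; the rule emerges from corrections alone, which is the whole point of a learning algorithm. Of course, this only works because we fully understand what we're asking the machine to learn.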

 

https://www.youtube.com/watch?v=pp89tTDxXuI

 

 

What a machine cannot be programmed to do is to be human. We cannot program a machine to do something that we do not fully understand ourselves. To think that it will "just happen" once machines learn how to improve learning algorithms, which they would then apply to themselves or other machines, is like thinking a tornado could go through a junkyard and leave a functional jumbo jet in its wake. It's not going to happen.

 

The idea that I underestimate Man's ability to innovate is an assumption with no basis. As stated in my previous post, I believe that Man has been far more advanced, for far longer, than most people alive would give us credit for. But regardless of our abilities, we still cannot replicate, in Artificial Intelligence, that which we do not comprehend.

 

 

 


 

 

All the instances you mention in your first paragraph are things we fully understand, followed by hopes and dreams; hopes and dreams that would require a leap forward far outpacing the one from horse and buggy to space travel. Please refer to my above answer to the post by HeyYou.

 

Your second paragraph is based on so many assumptions and contradictions that I will leave it be for the sake of keeping the thread on topic.


Please give a specific definition of 'sentient'; otherwise, I don't think your arguments hold water. For the purposes of this discussion, I am *assuming* it is defined as "self-aware". NONE of the learning machines we have today can make that particular claim; however, as the technology improves (more processing power), it is going to become easier to take that particular step. If you simply claim "we don't understand what it is, so therefore we can never replicate it", then our first step should be to define it. Otherwise, any discussion is pointless.

