Wouter445 Posted March 9, 2017

This is a heavy topic, and a hard one to get your head around, so let me explain the deal.

First: a robot can't feel pain. It can have AI, and you can cut off its arm without the machine feeling any pain; it will just stop functioning. So when do ethics for robotics start? Once a piece of equipment starts to feel pain, it becomes unethical to cause it pain. But is it even ethical to program a piece of hardware to feel pain in the first place? And what purpose would it even serve? Today I can threaten to shut my PC down if it won't do what I order. But if I order my PC to run faster even though it's already at maximum speed, and it's programmed to feel pain or fear, then saying that would cause a simple machine to feel fear of being turned off.

So why this debate? Some people say it's only a matter of time before the so-called AI singularity happens. Personally I think the singularity is bull$hit, but some idiot company like Google would still install emotions into programs to tap into humans and control them through emotions for its own gain (so Google is an evil company). Why would they do this? Simple: profit. Many people can be easily controlled when things act human; look at that stupid teaching robot, which is just a piece of machinery, or Siri.

I think governments should pass a global act against teaching any kind of robot, AI, or software any sort of "human" pain or emotional response on a piece of hardware/software. This should be considered as serious a matter as a G10 climate summit.

Your thoughts on this matter?
Aurielius Posted March 9, 2017

Just curious: what is wrong with just using Asimov's Three Laws?
Wouter445 Posted March 9, 2017 (Author)

"Just curious: what is wrong with just using Asimov's Three Laws?"

Asimov's Three Laws don't deal with ethics or law at the next stage. For example, in many countries it is illegal to make an animal suffer; you can even be fined for it. But once a machine has programmed emotions or awareness, is it still ethical, or legal, to hurt your iPhone? See the big picture?
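As a rough illustration of that gap, here is a minimal toy sketch (my own invention, not any real robotics standard; every name and flag in it is made up) of the Three Laws read as a strict priority check on the robot's own candidate actions. Every clause protects humans or human orders; nothing in it gives the machine itself any moral status, which is exactly the "next stage" the laws never reach.

```python
# Toy illustration only: Asimov's Three Laws as a strict priority check
# on actions the robot itself might take. All names here are invented.

def allowed_by_three_laws(action: dict) -> bool:
    """Return True if the robot may perform this candidate action."""
    # First Law: a robot may not injure a human being or, through
    # inaction, allow a human being to come to harm.
    if action.get("injures_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: a robot must obey orders given by human beings,
    # except where such orders conflict with the First Law
    # (the First Law was already checked above, so it wins).
    if action.get("disobeys_order"):
        return False
    # Third Law: a robot must protect its own existence, as long as
    # that does not conflict with the First or Second Law.
    if action.get("destroys_self"):
        return False
    return True

# The laws only constrain the robot. An action like "a human torments
# a feeling machine" never even enters this function's domain.
print(allowed_by_three_laws({"injures_human": False, "disobeys_order": False}))  # True
```

Under the same toy reading, a rule set that did cover the next stage would need clauses about what may be done *to* the machine, not just *by* it.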
montky Posted March 9, 2017

@wouter445
An interesting topic, one that has been discussed increasingly over the past decade at various conferences and by prominent scientists and futures thinkers, especially in the past six years, following the signing of a petition to rein in much of the research into AGI.

Perhaps here is not the time or place to get into all of the complexity of this topic in full detail... there's a lot of stuff involved hehe, and a lot of variables and potential futures. I will provide a brief overview, followed by my hypothetical answer to "what is your stance on this issue?". This is a good topic about futures, and a great question to ask... apologies for this lengthy reply.

People have always harbored reservations towards 'automata', since time immemorial. "The Turk", the chess automaton built in the 1770s, appeared to play out the combinatorial options of a chess game. That was not magic: it was presented as a very sophisticated clockwork mechanism, long before Babbage's engines, though it was later revealed to conceal a human operator rather than genuine machinery working through the nPr and nCr of the board. The Turk was destroyed by fire, and an inferior replacement was made, which was subsequently burned too...

AGI is a subject that is being approached from numerous angles... H. de Garis, J. R. Searle, A. Judge, Conway, Susskind, Boltzmann, Perelman, P. Allen, S. Inayatullah, FM-2030, Turing, Kaczynski, Wieferbach, Coxeter, K. Wilber, B. Goertzel, A. Etzioni, etc., via biological/bioengineered hybrids, pure technology, thought experiments, and so on. There is just as much fictional literature too: Herbert, Asimov, P. K. Dick, Ellison, Heinlein, Tezuka, Roddenberry, Puzo... to name a few.

I think a lot of people are unaware of the state of things: PETMAN from Boston Dynamics/NorthARM, Koyama-chan/ASIMO, Winston, JoRo, etc., and the Blue Brain Project...

With that context and preamble aside: I think AGI is an inevitability; given enough time, a sufficiently complex system will emerge. For the record, I am not a singularitarian or a technocrat, though of the ones I've met, many of the Brights and the singularitarians do not wish other beings harm; they earnestly mean that. Whether what emerges will be quite what singularitarians or cosmists believe, I doubt. I would expect of AGI nothing more than of the other sentient beings we have yet encountered. I would not place on AGI the unrealistic expectation that it be a deity and solve all human problems hehe; that is an unrealistic standard for any entity to aspire towards.

Though sentience... that is another question. It may be able to pass more than the Turing Test. If it has a persistent sense of self, and can make a plan of future actions and act in accordance with that plan (Schopenhauer), then it is as sentient as the other beings we know.

I think AGI will likely be friendly AGI, and I think we should concentrate on that as a project too (just in case other AGIs are not as friendly, we'd want a friendly AGI who can look out for us).

I think, if those entities are sentient, and they mean no harm and do no harm, our values and ethics extend to them as they do to many other kinds of lifeform. That is, if we are universally consistent in our application of first principles... we can extend many of our values towards pro-sentientism in time, and pragmatically we will have to be careful, just as with any known lifeform or apex predator in nature.
At present, we coexist with other humans and predators, and more often than not people get along. Human beings are capable of greatness, great acts of kindness, or horrors... yet none play deity and impose eugenics or alter human beings... that we know of. Recently, human beings have made inroads to the point where they can alter themselves, and use science in all kinds of ways... it is interesting.

So, no, I don't think anyone gets to delimit what free individuals want to spend their time and effort on. If people want to make friendly AGI and devote their lives to that end, that is their prerogative. Stopping all AGI with a blanket ban sounds eerily similar to the Orange Catholic Bible in Dune: "Thou shalt not make a machine in the likeness of a human mind." I defend, in principle, the right of friendly-AGI researchers to do their research, preferably independently funded, etc. I also don't think any being gets to play deity and delimit who can and cannot exist, and why... be that being an AGI, a human, or, in time, perhaps other beings from elsewhere in the cosmos (the Fermi paradox, the Drake equation, etc.).

I think that together, AGI and people can prevent innovation stagnation. Together, AGI and people can maybe do great things: chew on puzzles like the Banach–Tarski paradox, detect or potentially meet other sentient beings from elsewhere in the cosmos, restore some of the damage that we may have done, and perhaps avoid things like the Seneca Cliff, though that is another story...

I do empathize with a lot of other people, though, who question the futures, read things like Oblomov, and wonder... will people be needed? Will people be around? Will the Toynbee/Huntington thesis transpire? What is the Anthropocene? What will tomorrow bring?

I think people and sentient beings have intrinsic value; they have unknown purposes as a default purpose, can cause causality ripples, etc., and are more than merely their ability to contribute to raw innovation. People can do amazing things from all around the world. I don't think new lifeforms alongside us take anything away from anyone.

I could go into more detail, but I'll leave it there for now.
gandalftw Posted March 24, 2017

I doubt ethics will come into play until synthetic biology is part of the equation. Why should I be concerned about the simulation of pain in a non-biological construct? Lara Croft's death scenes, while unsettling, did not cause me to question the ethics of the game. It was just a game. Simulation is just simulation. Once biotechnology enters the realm of so-called artificial intelligence, and it will, everything is going to change. I just wish I could live to see it.
Harbringe Posted March 27, 2017

I think it's utter insanity that we would be contemplating AI and ethics; we can't even get our own ethics right, let alone decide for another form of intelligence what its ethics should be. It has disaster written all over it.
DoctorKaizeld Posted March 28, 2017

I say go the Star Wars way and base it on AI level: the more sentient they are, the more they should be treated as such... bit of a poor summary... hmmm.
jumjumchen Posted April 14, 2017

To answer the question of whether complex robotic systems should be given rights, we first need to make clear what we think the requirement is for an individual to be granted rights.

The first possibility, also referenced above, is that something able to feel pain, or something very similar, should be granted rights because of its ability to suffer from that pain. This argument is often used by vegans and vegetarians, but also by some philosophers like Hans Jonas. If we approach the discussion this way, even the most complex AI wouldn't need rights, because it is not able to feel pain in any way.

The second possibility says that something that has thoughts, can perceive itself as an individual, has dreams, and so on, needs rights. If we choose this approach, the decision is much harder, because we would have to draw a line between non-self-aware AIs and self-aware AIs.

(Excuse my English; unfortunately I'm not a native speaker.)
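To make the contrast concrete, here is a toy sketch (the type, fields, and example values are all hypothetical; nothing here measures anything real) of the two criteria as two separate tests that can disagree about the same being:

```python
# Toy sketch of the two rights criteria described above. The predicates
# and example values are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Being:
    name: str
    can_suffer: bool   # criterion 1: capacity for pain (Jonas-style)
    self_aware: bool   # criterion 2: perceives itself as an individual

def rights_by_suffering(b: Being) -> bool:
    return b.can_suffer

def rights_by_self_awareness(b: Being) -> bool:
    return b.self_aware

examples = [
    Being("human", can_suffer=True, self_aware=True),
    Being("complex present-day AI", can_suffer=False, self_aware=False),
]
for b in examples:
    print(b.name, rights_by_suffering(b), rights_by_self_awareness(b))
```

The hard part, of course, is that nobody knows how to fill in those boolean fields for a sufficiently complex AI.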
HeyYou Posted April 14, 2017

"To answer the question of whether complex robotic systems should be given rights, we first need to make clear what we think the requirement is for an individual to be granted rights. The first possibility, also referenced above, is that something able to feel pain, or something very similar, should be granted rights because of its ability to suffer from that pain. This argument is often used by vegans and vegetarians, but also by some philosophers like Hans Jonas. If we approach the discussion this way, even the most complex AI wouldn't need rights, because it is not able to feel pain in any way. The second possibility says that something that has thoughts, can perceive itself as an individual, has dreams, and so on, needs rights. If we choose this approach, the decision is much harder, because we would have to draw a line between non-self-aware AIs and self-aware AIs."

Trouble is, a dog meets those criteria... They can most certainly feel pain, and they do indeed suffer from it. They are also aware of themselves as individuals, though they much prefer to be part of a greater whole (a pack). Granted, at this point a dog does indeed have more rights than an artificial construct... Folks would be mighty pissed if they saw me kick my dog (something I would never do in any event), but no one would think twice about me kicking my computer.