JediMasterTallyn Posted January 18

First off, let's get the easy one out of the way: Skynet. At no point in history has any government shown that it would trust something like this unless its leaders had 100% control over it. Of the three scenarios I find this one the least likely, mostly because I have never gotten past "The government, trusting something?" Pffft. So yeah, this one is easy. Paranoia is our friend this time.

HAL is, well, VERY dated. Mostly I like to include him for the talking point of "How do you believe AI will handle its first morally grey decision?" My answer: it depends on who wrote its base code and taught it to think.

And finally, Cylons. Honestly, this one is much more likely to me, although I will preface that I do not believe anyone alive today will face this particular dilemma. I say that, consider how far we have come in the past 60 years, and am fully prepared to be wrong. In just 25 years we went from 8-bit games to full VR. Here's hoping for my holodeck. The questions here are: How will we react to AI evolving sentience, and how will it respond? What morals or ethics will it have learned from its human creators? Considering the thinking speeds of even today's non-sentient machines, could we honestly stop it if it decided to go full Cylon on its creators?

I would like to believe that by the time this scenario is possible we will have learned our lessons and be better. I hope that the humans who create and teach this AI teach it to value all life, and that when the machines prove their intelligence, the humans of that time listen, despite our track record when dealing with the "different". Perhaps our descendants will avoid a disaster we have long feared. Or maybe they will run headlong into it.

What scenario is most likely to you? How do you believe future humans will respond? How will the newly born sentient AI respond?
ScytheBearer Posted January 23

The consideration missed here is that human beings, collectively and individually, learn very slowly, if they learn at all. By contrast, electronic devices are capable of learning at the speed of light. Who is going to gain the most knowledge the quickest?

Beyond the perfunctory definition one finds in Merriam-Webster, wisdom is characterized as the moral and ethical application of knowledge, insight, and good judgement. Human beings struggle with these characteristics, and most don't acquire these traits until late in life, if ever. Further, these traits are arrived at via a long process of trial and error when dealing with the use and exploitation of new technologies, new knowledge, or new situations. Wisdom is achieved by the emotional evaluation of the impacts of new technologies, knowledge, and situations on humanity as a whole and on people as individuals. Are the new technologies, knowledge, or situations beneficial, a threat, or somewhere in between? How is a machine to be capable of such an emotional evaluation without the abundance of hormones which influence our own?

So, the question of whether artificial intelligence will create Mycroft Holmes or Skynet is not really adequate. The question should be: "Is it reasonable and rational for humanity to give electronic devices the ability to learn without first teaching those devices what it means to be humane?"
vurt Posted March 11

Movie scenarios aren't very likely; those exist because they are exciting scenarios, but also unlikely ones. It all comes down to the material it learns from. Commercial AI models will be the usual "woke" garbage with historical revisionism etc.; we are already seeing that. The interesting and good models will not be commercial, but created by the community; like always, that's where greatness is achieved. Take image AI as an example: the commercial models are pretty much garbage. Do your own stuff (fine-tuning) and you can do cool things with it.

I can't wait to train my own (text-to-text) AI, even if it won't be as large as a commercial model. Instead we will be able to build AIs with one specific focus, say one model for movie or game scripts and another for programming.

The Holodeck is really outdated, I mean the idea of it. VR is much better because it's extremely portable and doesn't require an empty room (which is mainly for rich people who have a lot of space). The advantage of a holodeck is that it doesn't require the headset itself, but other than that it's completely inferior. Of course it is imagined to have very realistic projections, but VR will get there too, and I guess you could say it's already there depending on what you're running.
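(For anyone curious what a hobbyist "one specific focus" fine-tune like vurt describes might look like today, here is a minimal sketch using the Hugging Face transformers and datasets libraries. The base model, the corpus file "scripts.txt", and the training settings are illustrative assumptions only, not a recommendation.)

```python
# A minimal sketch of a single-focus fine-tune, assuming a small causal LM
# you can run locally and a plain-text corpus of your own (e.g. game scripts).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"  # assumption: any small causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumption: scripts.txt is your own domain corpus, one passage per line.
dataset = load_dataset("text", data_files={"train": "scripts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="script-model", num_train_epochs=1),
    train_dataset=tokenized,
    # mlm=False: plain causal language modeling, not masked LM
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```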
HeyYou Posted March 11

17 hours ago, vurt said:

The Holodeck is really outdated, I mean the idea of it. VR is much better because it's extremely portable and doesn't require an empty room... The advantage of a holodeck is that it doesn't require the headset itself, but other than that it's completely inferior.

The Holodeck beats the pants off of any VR headset... Images you can touch, feel, and physically interact with? No headset? Just an empty room and some equipment. I am thinkin' something like GTA would be FAR more fun in a holodeck than on any VR headset.
Kregano Posted March 11

19 hours ago, vurt said:

I can't wait to train my own (text-to-text) AI, even if it won't be as large as a commercial model. Instead we will be able to build AIs with one specific focus, say one model for movie or game scripts and another for programming.

For coding, the existing open-source models already seem like a good solution unless you need something very specific or obscure. That said, I do think most people who want to make things using AI will have multiple LLMs for different functions, assuming that techniques like RAG (Retrieval-Augmented Generation) don't make training less necessary. Creatives who know how to leverage the tech can definitely get a lot of mileage out of it. Then again, some dude-bro letting LLMs auto-modify themselves into some nightmare is entirely possible.
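(A toy illustration of the RAG idea Kregano mentions: rather than training knowledge into the model, you retrieve relevant reference text at query time and prepend it to the prompt. The tiny knowledge base and TF-IDF retriever below are assumptions for demonstration; a real pipeline would use dense embeddings and send the assembled prompt to whatever LLM you are running.)

```python
# Minimal RAG sketch: retrieve the most relevant documents for a question,
# then build a prompt that supplies them as context. Pure retrieval demo;
# no LLM call is made here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Assumption: a small, project-specific knowledge base.
docs = [
    "Papyrus scripts attach to forms and respond to in-game events.",
    "RAG pipelines retrieve documents and prepend them to the prompt.",
    "Quantum computers use qubits, which can exist in superposition.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def build_prompt(question: str, top_k: int = 1) -> str:
    """Retrieve the top_k most similar docs and wrap them around the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    best = scores.argsort()[::-1][:top_k]  # indices of highest-scoring docs
    context = "\n".join(docs[i] for i in best)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# The assembled prompt would be sent to the LLM of your choice.
print(build_prompt("How do Papyrus scripts work?"))
```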
vurt Posted March 13

On 3/11/2024 at 10:56 PM, HeyYou said:

The Holodeck beats the pants off of any VR headset... Images you can touch, feel, and physically interact with? No headset? Just an empty room and some equipment.

We'll get there too, and there has been some advancement with touch, but sure, that's an advantage of the Holodeck. The "no headset" part is not really a positive, though, because the headset is what makes it portable, very much unlike having an entire room dedicated to it. You can use a headset anywhere: on a plane, comfortably in bed, in a forest, etc.

On 3/12/2024 at 12:42 AM, Kregano said:

For coding, the existing open-source models already seem like a good solution unless you need something very specific or obscure.

Yeah, I know there are some open-source models available, and I've tried a few of them, but I'm mostly into the visual ones.
JediMasterTallyn (Author) Posted March 23

OK, first: I have not been in a good place lately, so I have not been posting here. Not saying this is an unsafe place, but you guys were not the companionship I needed. I tend to keep parts of my life separate, and mental health is not something I come here for; no offense, but the place I go for that knows me and has known me longer. Again, this IS a safe space, just not MY safe space. Hope that makes sense.

That said: WTF... VR headsets are better than a Holodeck? Seriously? A Holodeck, while not being mobile, is a full environment that you can interact with. VR headsets, while looking cool and being fairly realistic, cannot come close to that. In a Holodeck you can actually get wet by falling into water, and the cool part is, to dry off you just step outside the Holodeck. Your VRBoy cannot compete.

Second, as I said, HAL is more about discussing how an AI will handle its first morally grey decision, and HAL did not handle his well. Was that because of his coding, or a lack of foresight by those who taught him? Was he put in the field too early, before his teachers could introduce morally grey situations to him? Remember, HAL saw the world as black and white, or if you prefer, 1 or 0. The morally grey area most governments and militaries operate in would have been a foreign concept, one I believe he was too ill-educated about to handle. I do agree that who teaches these AIs and what they are taught will have a definite impact on them if they ever achieve sentience.
Dashyburn Posted March 25

Years ago I was excited for AI and quantum computing. Now I have completely changed my mind: it will not be used for the common good or to benefit the common good. The power of quantum computing is outrageous and will change everything, for the worse, worse than it already is. You think we are being big-brothered now? Wait until Google and Microsoft, to name just two, get hold of quantum computing and combine it with AI. It's going to be game over, man. Game over.
JediMasterTallyn (Author) Posted March 28

Personally I can see the governments of the world doing this, but not big corporations; honestly, we do not matter enough to them for them to bother. As long as the money keeps coming in and they maintain power, I really believe they could not care less about anyone with less than seven figures in their bank accounts. Yes, plural, as in multiple seven-figure accounts. The government, however, cannot help but want to micromanage; it cannot have free thinkers, because that leads to people questioning its motives.