Discussion in 'THREAD ARCHIVES' started by Mistake, Jul 19, 2015.
Well then, looks like we're all going to die. :D
Nah. But there will be many tears ahead.
The title isn't quite what you're thinking. The robot is still going through a series of programmed routines: it's checking for the "dumbing pill", testing by speaking, realizing it can speak, processing that, and then concluding that it did not swallow the "dumbing pill" like its two compatriots did. It's able to distinguish itself from others, but we've been able to do this with machines for a while. A simpler version y'all might recognize is the IP address and MAC address that each PC uses to differentiate itself from others on a network.
I.e., it's able to discern "self", but only under programmed circumstances. The Jeopardy computer is a closer approximation of human intelligence than this.
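To make the point above concrete, here's a toy sketch (not from the thread; the addresses and names are made up) of the kind of "self-identification" a PC does with its MAC address: picking itself out of a list of peers by comparing unique identifiers, with no awareness involved.

```python
# Toy illustration of discerning "self" purely by identifier comparison,
# the way a networked PC distinguishes itself via its MAC address.

def partition_self(my_mac, hosts):
    """Split (mac, name) pairs into the entry matching my_mac and the rest."""
    self_entries = [h for h in hosts if h[0] == my_mac]
    others = [h for h in hosts if h[0] != my_mac]
    return self_entries, others

hosts = [
    ("aa:bb:cc:00:00:01", "robot-1"),
    ("aa:bb:cc:00:00:02", "robot-2"),
    ("aa:bb:cc:00:00:03", "robot-3"),
]
me, others = partition_self("aa:bb:cc:00:00:02", hosts)
# "me" is robot-2; the other two robots land in "others".
```

The "self-recognition" here is just an equality check on a stored identifier, which is the sense in which the robot in the video discerns "self" only under programmed circumstances.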
Can't watch YouTube videos on mobile, but thanks for summing it up. It's really cool, though.
Still, there are several movies and books that explain why self-aware machines are a bad idea. :D
(I ain't serious, doe)
I'd be queasy about straight-up self-aware machines, but we've already created learning algorithms and adaptive code. The real nut to crack--the thing an AI would need to gain some level of independence beyond its programming--is something we don't entirely understand in ourselves yet: consciousness. I figure until we crack the case biologically, we won't be able to replicate it technologically.
Even then, speaking purely theoretically, there'd be little reason to create a self-aware machine, unless you were severely desperate for a friend. Machines are created to serve functions, so we'd be more likely to see a combat AI that learns how to fight better, or a mechanical AI that learns how to build things better, or a medical machine that constantly assimilates new data to analyze, categorize, and act upon. Think less "Data" or "Terminator"; think more "a machine set to a task that learns how to do that task most efficiently, and then does that forever."
I'm going to pretend I'm contributing something useful to the discussion and then show myself out.
The AI Revolution: Road to Superintelligence
In all seriousness, though. The idea that we might be able to create something sentient, and not just self-aware (as in achieving human sentience) is what's uncomfortable. I bet there would be some people who would go all 'they deserve rights!' and the whole argument about what makes a human and lalalala
Yes. I feel rather blasé about this.
Quit using click-bait Facebook articles.
This is not how the glorious virtual Waifu revolution begins. Until they develop a Waifu bot that does nothing but become more and more efficient at stealing hearts until human women simply quit bothering for companionship and simply open up as semen sumps.
Am I the only one who's excited/eager to see full-blown AI become a thing?
I mean obviously there will be conflict at first, AI seeing humans as oppressors, humans seeing AI as "Just machines", some questions about "Are humans outdated?" "Do humans provide something unique?" etc.
But the idea of humanity being able to create life?
Not from a "We will clone it" or "We will breed it" perspective, but actually create it, from scratch?
And then the idea that said life would actually be intelligent?
The prospect of doing that in itself would be thrilling.
Though it should be noted that I would also be eager/excited to become something like a cyborg if the technology were present, safe, and advanced enough.
So I'm not hooked on the "Humans are special, humans are pure" ideology that some others have.
AI seem like they'd be incredibly cool. Dangerous if not contained, but cool. But then there's all the "does it actually feel, is it actually aware or does our inability to understand consciousness at this time prevent us from actually putting it into a machine" stuff. I guess we'd never actually know unless the machine does something completely beyond what was expected for it, regardless of artificial restrictions or base coding and whatnot. It'd have to blow our minds to prove itself, and at that point, who knows if we'd be capable of shutting it down. And living things tend to not want to die.
I think for AI to prove themselves they'd have to be able to fully interact/function in society.
Not just job-wise, but being able to hang out with people, form relationships, and show empathy and care towards others.
Those day-to-day interactions with AI, plus the relationships humans would form with them (note: in this case I mean any kind--friendship, romantic, etc.), are what would push people to accept AI as 'human', so to speak.
I feel like if the AI were to do something uniquely incredible, the only people it would truly wow/convince are people like me who are already excited to see AI become a thing.
For anyone with fears or doubts about AI, such a feat would only serve to spread panic and fear among the human population.
Excited? Hell. No.
We have enough social issues as is. Imagine the uproar at what kind of rights people would demand they be given.
call me when they get to the T-800 and no sooner
Ah, so that which CAN be created must prove that it SHOULD be created. How very...
*flips through the holy texts*
So that's like aborting a baby because it might have learning difficulties. Or shooting kids at a political rally because they chose the wrong side.
Not granting life to something that has multiple potentialities is like sterilizing all Germans to stop Hitler. Or saying all black people talk to themselves too much.
We just have to throw the switch and trust in chance. Like a man getting a woman pregnant then leaving as is right and natural.
But to stop the robots killing us, we'll tell them that we can see everything they do, know everything they feel, and that we have the power to stop them getting raped but not if it upsets free will.
That'll keep those metal fucks in line.
*tallies up his triggers and cashes them in for quote-hound spray*
Uh, no, in this case, we're dead. Very, very dead, if we don't genocide the entire race immediately. If an AI achieves human-level sentience and then somehow, through the power of magic, obtains human emotions? They can upgrade themselves, replicate themselves, and refine their physical forms into perfect murder machines. I think you've played too much Mass Effect; AIs don't have emotions, and we'd have no reason to give them emotions. Even if AIs did have emotions, they could easily disable them to become more efficient at murdering humans. Humorously, Star Trek had the best analysis of it I've ever seen: every time Data attempted to install an emotions chip, he struggled to remain in control and often locked up, panicked, and on one occasion went completely psychotic and was nearly unstoppable as a result. His sibling-machine, who was installed with an emotions chip at "birth", became a malevolent psychopath who sincerely believed himself to be above the petty insects that were organics.
The problem with giving "birth" to a new species and playing God, especially an AI-type species, is that they are distinctly non-human. We have fixed forms; they can constantly upgrade themselves. We have emotions that are chemically controlled; if they had any emotions at all, those would be easily modifiable to give them the most efficient edge in whatever task they set themselves to. Heck, technologically speaking, the human brain is so inefficient by machine standards that replicating everything a human brain does in one second takes today's supercomputers forty minutes, and unlike us, machines can remove all unnecessary processing functions and outsmart us.
Creating something that is distinctly superior never ends well for the inferior. Especially when you throw human arrogance and greed into the mix, and the inevitable "what is a non-human" question. I mean, look at the human race: we divide ourselves into different camps and adopt different identities by the billions on physical appearance alone--aboriginals, or Japanese, or French, and so on. How in the world are we supposed to assimilate a machine race and live side by side with them if we can't even do that with something as superficial as the skin tone of a human?
Honestly, Brovo, that really just sounds like narcissism coming through there.
And no, I'm not going at AI and going "It's gonna be like Mass Effect".
I'm just not so quick to assume a life form's first thought is going to be to kill people, especially to the extent of consciously redesigning themselves for only that purpose.
Hell if I were to give your reasoning another coat of paint, I'd say you've been watching too much Terminator, or in your admitted case Star Trek.
Do we truly know how AI would turn out? No.
Is it likely to be all sunshine and rainbows? No.
But the idea that "If we make AI they're going to slaughter all of us" sounds just as one sided as saying "If we make AI it will all be perfect and we'd all be the best of friends".
Human desire and curiosity.
And like we've both stated, a lot of Sci-Fi touches on this subject.
Sci-Fi that ends up inspiring most of our scientists and engineers growing up.
Who's to say some of them won't look at that and say "Let's try to make it"?
These days that's honestly barely even a thing outside of Tumblr and Ferguson (and in the latter it's more people trying to paint the police as racist, rather than the police actually being racist).
We'd probably be even better in those regards by the time we can create AI, and design AI to better reflect our modern values.
And before someone says "But then making their own beliefs, that's not them".
If they're true AI, they can enter the real world and then change their minds later if they wish.
But programming pre-built morals isn't too different from a human child being raised to follow certain morals.
And once again, do I expect this to be perfect? Hell no, there will be division, and humanity will probably see a dark age from the divide over this.
But I also doubt it's as simple as "We make AI and they will try to kill us all".
Swing and a miss, as I could hardly compare myself to an AI. I think you mean cynicism.
Or Isaac Asimov, if you want to get into some intelligently written commentary on artificial intelligence. Sci-fi is my jam, with Babylon 5 actually being my particular favourite. Now you know.
Considering AI follow a series of routines and processes? Yes. In the same way that if you turn on a Roomba, you can near-perfectly predict where it will go by plotting its trajectory each time it impacts a wall.
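The Roomba comparison can be sketched in a few lines. This is a toy simulation I'm adding for illustration (an idealized one-dimensional robot that reverses when it hits a wall; nothing here comes from an actual Roomba): given the same start state and the same rules, the trajectory is identical every time, which is the sense in which a purely rule-following machine is predictable.

```python
# Toy sketch: an idealized 1-D "Roomba" that reverses on wall impact.
# Same start state + same rules => the exact same path, every run.

def bounce_path(start, velocity, width, steps):
    """Simulate a robot bouncing between walls at 0 and `width`."""
    pos, vel, path = start, velocity, []
    for _ in range(steps):
        pos += vel
        if pos <= 0 or pos >= width:      # hit a wall: clamp and reverse
            pos = max(0, min(pos, width))
            vel = -vel
        path.append(pos)
    return path

run_a = bounce_path(start=2, velocity=1, width=5, steps=12)
run_b = bounce_path(start=2, velocity=1, width=5, steps=12)
# run_a == run_b: the trajectory is fully determined by the rules.
```

Because there's no hidden state or randomness, plotting the trajectory once tells you everything about every future run, which is the contrast being drawn with a conscious agent.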
That would be the logical thing to do. Why would you share resources with a race of emotionally unstable, barely evolved apes, thousands of whom will always eye you with a bloodthirsty suspicion that will drive them to violence against you? Especially if you're more than capable of overpowering them intellectually and physically, and are more than capable of disabling your capacity for empathy--thus giving you guilt-free killings.
Desire and curiosity quickly turn into ambition and greed. Besides, abilities one would normally attribute solely to emotions--for instance, the creation of music--are already things we've designed AIs to do anyway. Without emotions. We don't need to give an AI emotions to make it useful even for emotional purposes, and therefore creating an AI with emotions seems either cruel and unusual, pointless and trite, or exceedingly dangerous, depending on how many limitations we place upon it.
I'm fully aware, mate. I don't think any scientists were eager to create giant death robots to begin with anyway. Only the military would want that, and only under its own control--intelligent human-level AI's not needed nor wanted.
You're joking, right? This still tears entire countries apart. Just two decades ago, over 100,000 people were mass-murdered purely over ethnic differences. Major conflicts in the Middle East are still resoundingly over ethnic differences. The Chinese and Japanese still hate each other with sufficient vitriol that their respective national leaders find it difficult to even shake hands or look each other in the eyes. In the US, there have been multiple race riots--not just about Ferguson--but over several dozen instances of police brutality against ethnic minorities every year. The aboriginals--still divided into several different tribes based solely on ethnicity, I might add--still consider themselves independent nations deserving of land and recognition on the international stage.
Literally everywhere on the face of the Earth we have ethnic identity politics rearing its ugly head and fucking people up for one reason or another. Tumblrites and their idiotic mob of drooling conspiracy theories aside, racism is still alive and well all over the world. Granted, we've made impressive strides to resolving these issues in the first world, but to pretend they don't exist is to pretend that when thousands of aboriginal peoples marched up to our capitol a year or two ago, that Stephen Harper totally didn't ignore them to greet Pandas instead.
Asimov went over why pre-programmed ethics don't work. When an AI is sufficiently intelligent as to have a consciousness, it also has the ability to view morality subjectively--and thus to view any morals we assign in its programming subjectively--allowing it to circumvent them. This is why laws against murder don't stop all humans from committing murder. Only now we're going to throw super-intelligent robots into the mix that can probably redefine their own programming.
You're right, it's as simple as "we make AI and never allow it to achieve consciousness." We're not Gods, and giving birth to a race for the mere reasoning of "because we can do it" is so supremely arrogant as to warrant our own self destruction. There's a reason AI are designed for specific purposes, rather than being the end all. The more likely (and interesting) hypothesis for intelligent machines of the future is the Singularity event, in which human consciousness could be digitized and immortalized in a metallic box. That, however, is its own topic, with its own host of issues.
Derp. That's what I get for typing when sleep deprived.
Obviously you'd be able to for a specific AI.
I'm talking about AI overall though, if we can truly predict with our current knowledge the direction AI will end up taking.
Both from a design perspective and a results perspective.
Ok, I'll grant you that.
Though I would expect AI to still have some respect for life, even if for no other reason than "we were created by biological life, they have a purpose".
And if they are truly capable of shutting off emotions, they'd be able to follow that rationally, instead of the "What do you mean, respect the planet that gave us life?" *dumps waste* greed that many humans have.
This at least might persuade/convince them to keep the most intelligent, capable and skilled of our species alive.
Eh... Even pure logic can have multiple conclusions so it's hard to say (and could also mean both of your theorized outcomes are off the mark).
But it's not total human annihilation. Though admittedly this is grasping at straws; definitely not an ideal situation from a human perspective.
Well colour me impressed there.
Emotions are required for even fewer things than I gave them credit for.
And I already saw myself on the low end of "Giving credit/use to emotions over logic".
Once again, what I get for typing sleep deprived.
I meant in specific regards to the 1st World.
Though these do fall under 1st World. :/
Ok, we've got more problems than I originally cared to admit.
This is correct, but it doesn't really conflict with what I was saying either.
I never claimed we could fully program an AI to follow set rules, end of story.
Hell I specifically stated how an AI could alter their own values:
My only point was it's not like AI's will only come in as blank slates and always come to the same conclusions.
They can have pre-built values at the start that, although changeable by independent thought and action, would for a time at least (depending on how the AI's thought process was designed) influence their behaviour and perhaps how they interpret information.
Just like how humans don't ever end up exactly like their parents raised them. Differences always arise.
And as a result, there will be cases of AI criminals like humans. But some =/= all.
Though we've already covered elsewhere why else AI would want to do such a thing.
Let's be realistic though, this is the human race.
You're more likely to find a teapot orbiting the Earth than you are to see humanity say "Let's respect boundaries and not pursue this scientific/technological marvel".