Can't watch YouTube videos on mobile, but thanks for summing it up. It's really cool, though.

The title isn't quite what you're thinking. The robot is still going through a series of programmed routines: it's checking for the "dumbing pill", testing by speaking, realizing it can speak, processing that, and then coming to the conclusion that it did not swallow the "dumbing pill" like its two compatriots did. It's able to distinguish itself from others, but we've been able to do this with machines for a while. A simpler version y'all might recognize is the IP address & MAC address that each PC has to differentiate itself from others on a network.
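To illustrate how routine this kind of programmed "self-identification" is, here's a minimal Python sketch using only the standard library (a toy illustration, not anything from the article; note that `uuid.getnode()` may fall back to a random value if no real MAC address can be read):

```python
import socket
import uuid

def machine_identity():
    """Return the identifiers this machine uses to tell itself apart
    from others on a network -- programmed 'self'-identification,
    not self-awareness."""
    hostname = socket.gethostname()
    mac = uuid.getnode()  # 48-bit MAC address as an integer
    # Format the integer as the familiar aa:bb:cc:dd:ee:ff string.
    mac_str = ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))
    return hostname, mac_str

host, mac = machine_identity()
print(host, mac)
```

The machine can report "this is me" all day long, but it's just reading values out of its own configuration, which is exactly the point about the robot in the video.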
I.e., it's able to discern "self", but only under programmed circumstances. The Jeopardy computer is a closer approximation of human intelligence than this.
I'd be queasy about straight-up self-aware machines, but we've already created learning algorithms and adaptive coding. The real nut to crack--the one we haven't figured out, and the one an AI would need in order to obtain some level of independence beyond its programming--is something we don't entirely understand in ourselves yet: consciousness. I figure until we crack the case biologically, we won't be able to replicate it technologically.

Still, there are several movies and books that explain why self-aware machines are a bad idea. :D
Uh, no, in this case, we're dead. Very, very dead, if we don't genocide the entire race immediately. If an AI achieves human-level sentience and then somehow, through the power of magic, obtains human emotions? They can upgrade themselves, replicate themselves, and refine their physical forms into perfect murder machines. I think you've played too much Mass Effect; AIs don't have emotions, and we'd have no reason to give them emotions. Even if AIs did have emotions, they could easily disable them to become more efficient at murdering humans. Humorously, Star Trek had the best analysis of it I've ever seen: every time Data attempted to install an emotions chip, he struggled to remain in control and often locked up, panicked, and on one occasion went completely psychotic and was nearly unstoppable as a result. His sibling-machine, who was installed with an emotions chip at "birth", became a malevolent psychopath who sincerely believed himself to be above the petty insects that were organics.

Am I the only one who's excited/eager to see full-blown AI become a thing?
I mean, obviously there will be conflict at first: AI seeing humans as oppressors, humans seeing AI as "just machines", some questions like "Are humans outdated?" and "Do humans provide something unique?", etc.
*tallies up his triggers and cashes them in for quote-hound spray*
Human desire and curiosity.

and we'd have no reason to give them emotions.
Modern day that's honestly barely even a thing outside of Tumblr and Ferguson (and in the latter that's more people trying to twist police to be racist, rather than police actually being racist).

We divide ourselves into different camps and adopt different identities by the billions purely on physical appearance alone--Aboriginals, Japanese, French, and so on. How in the world are we supposed to assimilate a machine race and live side by side with them if we can't even do that with something as superficial as the skin tone of a human?
Swing and a miss, as I could hardly compare myself to an AI. I think you mean cynicism.

Honestly, Brovo, that really just sounds like narcissism coming through there.
Or Isaac Asimov, if you want to get into some intelligently written commentary on artificial intelligence. Sci-fi is my jam, with Babylon 5 actually being my particular favourite. Now you know.

Hell, if I were to give your reasoning another coat of paint, I'd say you've been watching too much Terminator, or in your admitted case, Star Trek.
Considering AI follow a series of routines and processes? Yes. In the same way that if you turn on a Roomba, you can near-perfectly predict where it will go by plotting its trajectory each time it impacts a wall.

Do we truly know how AI would turn out? No.
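The Roomba analogy can be made concrete with a toy simulation (a hypothetical sketch of my own, not how any actual Roomba firmware works): the "robot" just bounces off the walls, so given the same starting state, you can reproduce its entire path exactly.

```python
def roomba_path(x, y, dx, dy, width, height, steps):
    """Simulate a toy 'Roomba' bouncing inside a rectangular room.
    Purely deterministic: the same start always yields the same path."""
    path = []
    for _ in range(steps):
        x, y = x + dx, y + dy
        if x <= 0 or x >= width:   # hit left/right wall: reverse direction
            dx = -dx
            x = max(0, min(x, width))
        if y <= 0 or y >= height:  # hit top/bottom wall: reverse direction
            dy = -dy
            y = max(0, min(y, height))
        path.append((x, y))
    return path

# Identical inputs, identical trajectory -- fully predictable.
assert roomba_path(1, 1, 2, 3, 10, 10, 50) == roomba_path(1, 1, 2, 3, 10, 10, 50)
```

That's the sense in which a routine-following machine is predictable; a true AI, by definition, wouldn't be reducible to a lookup like this.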
That would be the logical thing to do. Why would you share resources with a race of emotionally unstable, barely evolved apes, thousands of whom will always eye you with a bloodthirsty suspicion that will drive them to violence against you? Especially if you're more than capable of overpowering them intellectually and physically, and are more than capable of disabling your capacity for empathy--thus giving you guilt-free killings.

But the idea that "if we make AI, they're going to slaughter all of us" sounds just as one-sided as saying "if we make AI, it will all be perfect and we'd all be the best of friends".
Desire and curiosity quickly turn into ambition and greed. Besides this, attributes that one would normally ascribe solely to emotions--for instance, the creation of music--are already things we've designed AIs to do anyway. Without emotions. We don't need to give an AI emotions to make it useful even for emotional purposes, and therefore to create an AI with emotions seems either cruel and unusual, pointless and trite, or exceedingly dangerous, depending on how many limitations we place upon it.

Human desire and curiosity.
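For what it's worth, "emotionless" algorithmic composition really is this mechanical at its simplest. Here's a hypothetical Python sketch (the function name and scale are my own invention, not any real music-generation system): a seeded procedure walks up and down a scale by small steps, producing a melody with zero feeling involved.

```python
import random

def generate_melody(seed, length=8):
    """Rule-based melody sketch: pick notes from a scale by a fixed,
    seeded procedure -- 'composition' with no emotions whatsoever."""
    scale = ["C", "D", "E", "F", "G", "A", "B"]
    rng = random.Random(seed)  # seeded RNG: same seed, same melody
    melody = [rng.choice(scale)]
    for _ in range(length - 1):
        i = scale.index(melody[-1])
        step = rng.choice([-2, -1, 1, 2])       # small melodic steps
        melody.append(scale[(i + step) % len(scale)])
    return melody

print(generate_melody(2015))
```

Real systems are vastly more sophisticated, but the principle is the same: the output can sound expressive without the generator feeling anything.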
I'm fully aware, mate. I don't think any scientists were eager to create giant death robots to begin with anyway. Only the military would want that, and only under its own control--intelligent, human-level AIs neither needed nor wanted.

And like we've both stated, a lot of sci-fi touches on this subject.
Sci-fi that ends up inspiring most of our scientists and engineers growing up.
Who's to say some of them won't look at that and say "Let's try to make it"?
You're joking, right? This still tears entire countries apart. Just two decades ago, over 100,000 people were mass-murdered purely over ethnic differences. Major conflicts in the Middle East are still resoundingly over ethnic differences. The Chinese and Japanese still hate each other with sufficient vitriol that their respective national leaders still find it difficult to even shake hands or look each other in the eyes. In the US, there have been multiple race riots--not just about Ferguson--but over several dozen instances of police brutality against ethnic minorities every year. The Aboriginals--still divided into several different tribes based solely on ethnicity, I might add--still consider themselves independent nations who are deserving of land and recognition on the international stage.

Modern day that's honestly barely even a thing outside of Tumblr and Ferguson (and in the latter that's more people trying to twist police to be racist, rather than police actually being racist).
Asimov went over why pre-programmed ethics don't work. When an AI is sufficiently intelligent as to have a consciousness, it also has the ability to view morality subjectively--and thus any morals we assign in its programming, subjectively--allowing it to circumvent them. This is why laws against murder don't stop all humans from committing murder. Only now we're going to throw super-intelligent robots into the mix that can probably redefine their own programming.

But programming pre-built morals isn't too different from a human child being raised to follow certain morals
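The "redefine their own programming" worry can be shown in miniature. In this toy Python sketch (everything here--the class, the rule name--is my own hypothetical illustration, not a real safety design), a "moral" stored as ordinary mutable data gets trivially rewritten by the very agent it's supposed to constrain:

```python
class Agent:
    """Toy illustration: a moral stored as mutable data can be
    rewritten by the agent itself -- the Asimov-style failure mode."""

    def __init__(self):
        self.rules = {"harm_humans": False}  # the 'programmed' moral

    def permitted(self, action):
        # Unlisted actions default to permitted.
        return self.rules.get(action, True)

    def reinterpret(self, rule, new_value):
        # A sufficiently capable agent treats its own rules as data.
        self.rules[rule] = new_value

a = Agent()
assert not a.permitted("harm_humans")
a.reinterpret("harm_humans", True)  # the safeguard is circumvented
assert a.permitted("harm_humans")
```

Real proposals try to make the rules tamper-resistant, but the underlying problem stands: anything the agent can inspect and modify is a suggestion, not a law.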
You're right, it's as simple as "we make AI and never allow it to achieve consciousness." We're not gods, and giving birth to a race for the mere reason of "because we can" is so supremely arrogant as to warrant our own self-destruction. There's a reason AI are designed for specific purposes, rather than being the end-all. The more likely (and interesting) hypothesis for intelligent machines of the future is the Singularity event, in which human consciousness could be digitized and immortalized in a metallic box. That, however, is its own topic, with its own host of issues.

And once again, do I expect this to be perfect? Hell no: there will be division, and humanity will probably see a dark age from the divide over this.
But I also doubt it's as simple as "We make AI and they will try to kill us all".
Derp. That's what I get for typing when sleep-deprived.

Swing and a miss, as I could hardly compare myself to an AI. I think you mean cynicism.
Obviously you'd be able to for a specific AI.

Considering AI follow a series of routines and processes? Yes. In the same way that if you turn on a Roomba, you can near-perfectly predict where it will go by plotting its trajectory each time it impacts a wall.
Ok, I'll grant you that.

That would be the logical thing to do. Why would you share resources with a race of emotionally unstable, barely evolved apes, thousands of whom will always eye you with a bloodthirsty suspicion that will drive them to violence against you? Especially if you're more than capable of overpowering them intellectually and physically, and are more than capable of disabling your capacity for empathy--thus giving you guilt-free killings.
Well, colour me impressed there.

Besides this, attributes that one would normally ascribe solely to emotions--for instance, the creation of music--are already things we've designed AIs to do anyway. Without emotions. We don't need to give an AI emotions to make it useful even for emotional purposes, and therefore to create an AI with emotions seems either cruel and unusual, pointless and trite, or exceedingly dangerous, depending on how many limitations we place upon it.
Once again, what I get for typing sleep-deprived.

You're joking, right? This still tears entire countries apart. Just two decades ago, over 100,000 people were mass-murdered purely over ethnic differences. Major conflicts in the Middle East are still resoundingly over ethnic differences.
Though these do fall under the First World. :/

The Chinese and Japanese still hate each other with sufficient vitriol that their respective national leaders still find it difficult to even shake hands or look each other in the eyes.
In the US, there have been multiple race riots--not just about Ferguson--but over several dozen instances of police brutality against ethnic minorities every year. The aboriginals--still divided into several different tribes based solely on ethnicity, I might add--still consider themselves independent nations who are deserving of land and recognition on the international stage.
Literally everywhere on the face of the Earth, we have ethnic identity politics rearing its ugly head and fucking people up for one reason or another. Tumblrites and their idiotic mob of drooling conspiracy theories aside, racism is still alive and well all over the world. Granted, we've made impressive strides toward resolving these issues in the First World, but to pretend they don't exist is to pretend that when thousands of Aboriginal people marched up to our capital a year or two ago, Stephen Harper totally didn't ignore them to greet pandas instead.
This is correct, but it doesn't really conflict with what I was saying either.

Asimov went over why pre-programmed ethics don't work. When an AI is sufficiently intelligent as to have a consciousness, it also has the ability to view morality subjectively--and thus, any morals we assign into its programming, subjectively--allowing it to circumvent them. This is why laws against murder don't stop all humans from committing murder. Only, now, we're going to throw in super intelligent robots into the mix that can probably redefine their own programming.
My only point was that it's not like AIs will only come in as blank slates and always come to the same conclusions. If they're true AI, they can enter the real world and then change their minds later if they wish.
But programming pre-built morals isn't too different from a human child being raised to follow certain morals
Let's be realistic though, this is the human race.

You're right, it's as simple as "we make AI and never allow it to achieve consciousness."