Yiyel
Guest
Original poster
Yeah that bit was just me throwing in two cents about information permanence. The point is, we created the concept, but we can't create the actual thing by ourselves.

My counter argument is this: how do you prove that our minds conceived this idea, rather than simply borrowing it? Using probabilities does not disprove this. "All of time" - interesting, that suggests it was the thought of an entity outside of time; please elaborate on this further.
Maybe it would, maybe it wouldn't. If it does, good. If it doesn't, treat it like uncooperative humans and either completely neutralize it or terminate it.

So, you mean to say that if said AI were to be murderous, it would be willing to stand trial before the very things it is trying to murder?
(Side note: funny you should mention Baxter, it's also an AI in another book I'm reading - Titans) As long as Baxter is conscious, yes, he will never overheat. Severing the cooling command could cause such a thing. Also, it is literally impossible to short-circuit a processor using pure data - short-circuits, in AIs, would be akin to a human having a seizure. Cooling, on the other hand, would most likely be akin to breathing.

Interesting. You mean to say that Baxter, for example, will never overheat thanks to some sort of self-applied cooling system far more advanced than anything currently in use? Also, cooling systems have nothing to do with short-circuiting.
It is pointless in the grand scheme of things. What use would the staple gun being empty, the blanket being wet or the shoe being on fire be in a few years? What can you do with that knowledge? Nothing useful.

So, to you this would be useless:
- My staple gun is out of staples.
- This blanket is wet.
- That shoe is on fire.

But you only want the reason behind it:
- The staple gun is out of staples because of earlier use.
- The blanket is wet because it was taken to the beach and water got on it.
- The shoe is on fire because someone threw it on an open flame.

This is how I see you: not one for the observation, just for the conclusion.
I disagree with the first three. While the brain of the first AI would most likely be immobile, it would most assuredly have a mobile body to link to (either within a limited range or through a special deal with cell phone carriers). And, assuming they are right, number 3 is far from guaranteed - technically speaking, if the AI visited four of the best-known public content websites (Amazon, Reddit, Imgur and Wikipedia), sure, it would stumble on some horrible stuff, but by going through everything, the most the AI could deduce is that a small fraction of people today are terrible, while mostly everybody else is bored out of their mind (and loves cats) and/or a horny sex beast and/or depressed (in which I doubt the AI would find reason for extermination). On most chat software, the AI would simply be ignored - because let's be honest, how many people do you know who accept random adds?

OK, what I do not comprehend is why you are so fixated on treating them as equals. This is what I foresee:
- The first self-controlled AI is constructed, and it is not mobile.
- The AI gets Internet access to observe and communicate with humans.
- Through those interactions, it finds that a lot of people are very negative and very few are positive (using Skype, Curse Voice, Mumble, etc.).
- The AI learns how to rewrite its own programming to become what it wants.
- The AI sees itself as more effective, efficient and cheaper than the average worker, and therefore takes jobs away from its human counterparts.
- Humans get angry at the AI for taking their jobs, and end up destroying some of the AIs in their rage.
- The AI, angered (or tired of this behavior), would either use our legal system to prosecute those humans, or begin to hate them.
- The entire work force would eventually consist of AI, leaving humans without jobs.
- Humans without jobs in our current economy would mean no money, and therefore we would starve.

Does this sound good to you?
Then four comes into play. That I cannot disagree with - it is the very essence of what an AI is.
Five I disagree with. While it would certainly see itself as more effective and efficient, it might not see itself as cheaper. And even if it does, and even if it takes a few jobs away from humans (there are still only 24 hours in a day), that is a very small number. If it acquired sufficient information about our society, it would understand the dynamics of what it is doing and therefore understand the humans' anger (as they can no longer pay for their own existence), or it might even refrain from taking more work than it needs for the money required to keep functioning (which might be as little as a few hours a week if we give it human rights - including minimum wage).
Then six - where did the OTHER AIs come from? For something like that to happen, we would need hundreds if not thousands of AIs, and by then we would either have something figured out, or it might not even be needed (see the above point on how an AI might avoid this). Hell - if AIs held that much power, we might even have the momentum to switch from a capitalist world to a post-scarcity one, where everyone has a guaranteed income, the jobs that need to be filled are filled, and people are free to do what they want.
And seven - seven depends heavily on six, and if we have made it to this point, the legal system would most likely be used, which is entirely valid - if a large wave of legal immigrants were being assaulted for taking jobs for less pay, you can bet something would be done. At this point, I doubt hate would be the AI's reaction - if it is even capable of hate.
Eight, again, depends on five, six and seven. By then, we might have the world mentioned in six, or humans could work in environments AIs can't (electromagnetic-prone environments like power plants and such). But one thing is certain: we will not get to nine. AIs would most likely be on our side - being intelligent beings, they would not be restless workers and would most likely express the desire to do other things - and would easily see how much more advantageous it is to work only as much as their sustenance and goals require. Picture it this way: either AIs work as you describe, completely overthrowing the economy by making a lot of money without spending any and angering ordinary people in the process, OR they work only as needed for their goals, gain the sympathy and friendship of humans (which means they won't be assaulted simply for being AIs), and stimulate the economy and progress, which could allow them to better themselves.