Technological Detail Challenge - Weapons

Intellectual freedom, sure, but I do not believe I have to give them physical freedom.

As the maker of the machine the AI is in, I would make it physically incapable of harming or destroying anything: unable to move, and unable to get a wireless connection to the internet. I'm just saying that, true, it is an AI, but it does not need access to things that can harm humans.
 
No, it is not true AI, for you are still shackling it.

Think of it this way: anything you did to a human that would remove or hinder part of its free will and ability to act would equally cripple an AI. So, at minimum, an AI needs a processing core, an input from the world (cameras, microphones and so on), and the possibility of output (speakers, limbs). And the thing is - for an AI to truly be an AI, it NEEDS to be able to do harm (psychological harm included), but be taught not to.

Like children, really - a child starts playing with knives; do you cut its arms off?
 
What you are after is an uncontrolled machine that can learn and adapt to any situation without any kind of hindrance. Be honest with yourself: in today's society, would we be ready for such a thing, with all the "jerks" out there who only think of themselves? A large number of people in today's world act rude or angry. How would a lone AI see this? As one of three things:

  1. Aggression - A threat to itself to deal with someday.
  2. Confusion - Why be so rude?
  3. Superiority - Humans are inefficient, thus need to be replaced.

Therefore, I ask you in all honesty, why would you want this? It gets especially scary if it takes over a mechanized mining facility and a factory. To be honest, the concept of co-existing with AI is wonderful, but is it practical?
 
Within today's society, unlikely (though there ARE a few places that could pull it off). But that's not a problem with the AI - it's a problem with humanity itself.
 
And sadly, are we not humans? Sorry if I'm biased towards humanity, but I don't like that concept.
 
We ARE humans. The problem lies with the entities that would see an intelligent, tireless potential workforce and strive to hire it for as little cost as possible, which, if there are no laws giving AIs the same work rights as humans, will cause trouble. This is exacerbated by the fact that a LOT of companies and institutions are led by sociopaths, which I just can't bring myself to call human anymore.
 
True, and as for sociopaths, they are genetically human, therefore human. One's behavior alone does not make one another species entirely.
 
I will quote the Three Laws of Robotics (a rough sketch of how a hard-coded version might look follows the list):
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
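Just to make that "built-in" reading concrete - and this is purely my own hypothetical sketch with made-up names, not anything that actually exists - hard-coding the laws amounts to a strict priority filter over whatever the robot could do next:

```python
# Hypothetical illustration of the Three Laws as a strict priority filter.
# All names and fields here are invented for the sake of the example.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool      # would carrying this out injure a human (or let one come to harm)?
    human_ordered: bool    # did a human order it?
    preserves_self: bool   # does it keep the robot intact?

def choose(actions: list[Action]) -> Optional[Action]:
    # First Law outranks everything: discard any action that harms a human.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        return None  # refuse to act at all rather than harm a human
    # Second Law: among the safe actions, prefer those a human ordered.
    ordered = [a for a in safe if a.human_ordered]
    pool = ordered or safe
    # Third Law: only then prefer self-preservation.
    preserving = [a for a in pool if a.preserves_self]
    return (preserving or pool)[0]
```

The point being: under this reading, harming a human is simply never in the robot's option space - which is exactly the kind of shackling being argued about here.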
The only difference between a robot and an AI is sentience. I have a question: does all life (sentience) have a right to exist? If so, then are the ones that made that rule hypocrites? Humans consume millions of lives daily from plants and animals. So, if we tend to consume those lives without thinking of whether or not they have a right to exist, what makes an AI, something made by humans, have any greater right to exist than a human? I'll close this subject by pointing out that there is no point in a free-roaming, free-willed AI in the world at this point, as it would only be a matter of time before something happened and it began to hate humans in one way or another. Sure, the first few years could be fine, but as time progressed, I doubt it would feel compelled to do as it is told by humans.
 
First, are you HONESTLY suggesting we should treat sociopaths better than an AI? I've seen RACCOONS more humane than these people.

Second, the laws of robotics were made in the same frame of mind as most other fiction (and remember, these are not absolute laws - simply man-made suggestions for creating robots). They originate from science fiction, no less.

Third, yes, all sentience has a right to existence, but all sentience also has a right to non-misery. And which rule? The one to existence, or the laws of robotics? The former is a debated philosophical point that today no one dares touch because of all the ethical implications of a strict verdict on it (if all sentience has a right to existence, abortion is immoral, but if it doesn't, killing becomes moral); the latter were made because it was clear that humans would not be able to refrain from harming robots - after all, we can't even make sure that we don't harm other members of our own species.

Fourth - we consume millions of lives daily under our own right of continued existence - it is not a natural law we break; every single known existing lifeform follows it. The only thing I can agree with is that most of the animal life we consume is also treated like shit until its death because of the aforementioned sociopaths - anybody with a shred of empathy wishes that whichever animal feeds them had at least lived a decent life (you should look into Native American hunting rites for a prime example of that). But we don't have much of a choice in the matter most of the time.

Fifth, you're assuming that an AI has a greater right. It doesn't. What you fail to realize is that an AI, at the moment of its conception, should be EQUAL in rights to a human, which is exactly why imposing such restrictions is immoral. How would you react if tomorrow we discovered the part of the brain that causes poor behavior in children and then made it mandatory for children to have lobotomies to disable that part? Or hell - since they would know what part of the brain is affected by the bad behavior (or even sociopathic behavior), find what gene it is and force abortions on fetuses that have the traits (after all, with them, there is always the chance of the child killing somebody eventually during the course of its life - and the person it could have killed had just as much of a right to existence).

Sixth, you say there is no point in a free-roaming, free-willed AI in the world at this point. Is there any more point in a free-roaming, free-willed human? Is it really better to stifle, limit and even dumb down an AI simply in order to ensure it will not do anything we disapprove of than make a real AI and let it learn like it is truly one of us, with all the rights but also all the responsibilities (and all the consequences of not respecting them)? You're talking about them like they would be a completely different thing to us - but how close is that to the truth? We are, after all, little more than organic machines controlled by an organic computer.

Seventh, I don't see why it is inevitable for it to hate humans. By that logic, every single human should hate every single other human because we are sentiences living in this society. And most people wouldn't feel compelled to do as other people would tell them unless there's something in it for them, unless they're feeling altruistic (which is also something the AI can learn).

Hell - if you want to get theoretical, just picture what would have happened if there was indeed a God that created us, and that had your train of thought.
 
Ironic, but most of that I cannot argue with.
First, are you HONESTLY suggesting we should treat sociopaths better than an AI? I've seen RACCOONS more humane than these people.
Did I say we should? I was just asking which had the better right to existence.
Second, the laws of robotics were made in the same frame of mind as most other fiction (and remember, these are not absolute laws - simply man-made suggestions for creating robots). They originate from science fiction, no less.
True, but there is merit to why we would want those laws, is there not?
Fourth - we consume millions of lives daily under our own right of continued existence - it is not a natural law we break; every single known existing lifeform follows it. The only thing I can agree with is that most of the animal life we consume is also treated like shit until its death because of the aforementioned sociopaths - anybody with a shred of empathy wishes that whichever animal feeds them had at least lived a decent life (you should look into Native American hunting rites for a prime example of that). But we don't have much of a choice in the matter most of the time.
I agree, but given that most humans do not hold the same respect, what can be done?
Fifth, you're assuming that an AI has a greater right. It doesn't. What you fail to realize is that an AI, at the moment of its conception, should be EQUAL in rights to a human, which is exactly why imposing such restrictions is immoral. How would you react if tomorrow we discovered the part of the brain that causes poor behavior in children and then made it mandatory for children to have lobotomies to disable that part?
Personally, seeing as I am demented, I would love to see the Utopian society you hint at, where morality and practicality meet on an even basis in this time frame, not in the past.
Sixth, you say there is no point in a free-roaming, free-willed AI in the world at this point. Is there any more point in a free-roaming, free-willed human? Is it really better to stifle, limit and even dumb down an AI simply in order to ensure it will not do anything we disapprove of than make a real AI and let it learn like it is truly one of us, with all the rights but also all the responsibilities (and all the consequences of not respecting them)? You're talking about them like they would be a completely different thing to us - but how close is that to the truth? We are, after all, little more than organic machines controlled by an organic computer.
If you want to talk organic versus machine, machines have no wait time on populating (they can be factory-produced), they have limitless bounds and can surpass humans in intellect, and they are physically malleable. Humans are a product of evolution; as such, everything we have we evolved over that time. We need almost a year to make another, we have intellectual illnesses, and we are quite frail. If anything, I foresee the military using them.
Seventh, I don't see why it is inevitable for it to hate humans. By that logic, every single human should hate every single other human because we are sentiences living in this society. And most people wouldn't feel compelled to do as other people would tell them unless there's something in it for them, unless they're feeling altruistic (which is also something the AI can learn).
This one I based on whether or not it had emotions. If it is purely logical, then I foresee no need for it to hate humans, unless by some error it gets a virus and generates emotions. Heck, fifteen minutes on the internet would tell an AI of the caliber you describe the entire dictionary and all of human history.
Hell - if you want to get theoretical, just picture what would have happened if there was indeed a God that created us, and that had your train of thought
Theoretical? Before I think about that, my question for you is: how can one be from something that one creates? Now on to my answer. Well, going off of what you envisioned that I meant, I suppose we never would have been made the way we were?
 
True, but there is merit to why we would want those laws, is there not?
The problem is that they're just that - laws, which humans already abide by to some degree (through either morality, ethics or plain legislative law).
I agree, but given that most humans do not hold the same respect, what can be done?
Most humans DO hold the same respect. You're putting too little faith in civilized humanity. If that was true, most cities would just be gauntlets of cruelty. As it stands, it's merely a bastion of indifference with the occasional push towards either side of the balance - some good acts, some bad.

Personally, seeing as I am demented, I would love to see the Utopian society you hint at, where morality and practicality meet on an even basis in this time frame, not in the past.
Can't really argue with that, seeing you said yourself you were demented :P
If you want to talk organic versus machine, machines have no wait time on populating (they can be factory-produced), they have limitless bounds and can surpass humans in intellect, and they are physically malleable. Humans are a product of evolution; as such, everything we have we evolved over that time. We need almost a year to make another, we have intellectual illnesses, and we are quite frail. If anything, I foresee the military using them.
First, they DO have a wait time on populating - their bodies and brains need to be fabricated, and then they have to learn some basics. They don't have limitless bounds - while they CAN be superior to humans, they are bound by hardware just as much as we are. Physical malleability is still up for debate - technically speaking, there is nothing but the appreciation of our own bodies that prevents us from removing our limbs and creating prostheses that would increase our capabilities. We're already replacing accidentally lost limbs - and instead of something that could go beyond useful, what do we choose? The closest to our anatomy. I can see AIs having a similar behavior (although the difference is that their starting bodies could be different). And AIs are also a product of evolution in a sense - first, considering we mold them, we could consider them a branch of our evolution. Second, they still have to learn - and that in itself is evolution, though at a much lesser scale. We need almost a year - but it's not certain how long it would take for an AI to be fully built and taught. We have intellectual illnesses - who says the AI won't? That's without mentioning that, unless we somehow give them the ability not to, they could also be prone to logic loops. And we are MUCH less frail than you think - as a matter of fact, as far as organic beings go, we can take a staggering amount of punishment before showing marks, and we can take a brutal amount of wounds before we're done for. And the military won't use them in that sense - they will still be limited (though very advanced) programs, because a true AI, as you said, has one specific ability that the military can't afford: the ability to disapprove, to disobey.
This one I based on whether or not it had emotions. If it is purely logical, then I foresee no need for it to hate humans, unless by some error it gets a virus and generates emotions. Heck, fifteen minutes on the internet would tell an AI of the caliber you describe the entire dictionary and all of human history.
Even if it had emotions - as I said, unless you can find an AI-specific reason why, it's pointless. If it does have emotion, normal human contact (which all children should have) would teach it the nuances of humanity. If it doesn't have emotions and is pure logic, it would see how beneficial overlooking minor human transgressions would be to it in general. As for the internet part - it's one thing to know it, it's another to actively understand the content. An AI would have advanced understanding and learning speed, sure, but learning all there is to know about humans in a pinch? Not going to happen. It takes us decades - I foresee AIs needing at least a few years to really understand.

Theoretical? Before I think about that, my question for you is: how can one be from something that one creates? Now on to my answer. Well, going off of what you envisioned that I meant, I suppose we never would have been made the way we were?
I'm not sure I understand your first question, but your second point is what I meant. We wouldn't be who we are today - we probably wouldn't even be intelligent.
 
The problem is that they're just that - laws, which humans already abide by to some degree (through either morality, ethics or plain legislative law).
If we abide by them, and they are supposed to be equal to us, should they not also abide by them?
Most humans DO hold the same respect. You're putting too little faith in civilized humanity. If that was true, most cities would just be gauntlets of cruelty. As it stands, it's merely a bastion of indifference with the occasional push towards either side of the balance - some good acts, some bad.
Greed is all I need to say. The majority of the good-hearted people that you speak of have little to no power to make a difference. I personally believe we should become less dependent on currency. Doing so can be accomplished by a point system as a reward for work. As time passes, you can then slowly move back to bartering, and after a long time remove currency altogether. Holistic beliefs would be beneficial overall because we would not limit ourselves with what we consider as currency, but would instead put it on a merit system.
First, they DO have a wait time on populating - their bodies and brains need to be fabricated, and then they have to learn some basics. They don't have limitless bounds - while they CAN be superior to humans, they are bound by hardware just as much as we are. Physical malleability is still up for debate - technically speaking, there is nothing but the appreciation of our own bodies that prevents us from removing our limbs and creating prostheses that would increase our capabilities. We're already replacing accidentally lost limbs - and instead of something that could go beyond useful, what do we choose? The closest to our anatomy. I can see AIs having a similar behavior (although the difference is that their starting bodies could be different). And AIs are also a product of evolution in a sense - first, considering we mold them, we could consider them a branch of our evolution. Second, they still have to learn - and that in itself is evolution, though at a much lesser scale. We need almost a year - but it's not certain how long it would take for an AI to be fully built and taught. We have intellectual illnesses - who says the AI won't? That's without mentioning that, unless we somehow give them the ability not to, they could also be prone to logic loops. And we are MUCH less frail than you think - as a matter of fact, as far as organic beings go, we can take a staggering amount of punishment before showing marks, and we can take a brutal amount of wounds before we're done for. And the military won't use them in that sense - they will still be limited (though very advanced) programs, because a true AI, as you said, has one specific ability that the military can't afford: the ability to disapprove, to disobey.
First, seeing as they ARE machines, as they are being constructed, all you need to do is put power to the circuitry and send the information directly to its "brain". Then, as its body is completed, the "brain" gets implanted and turned on, meaning it has all the information needed and can immediately carry out its own ends. I mean that they can take any form that can support their systems and use a variety of materials to meet their ends. Anyway, you compare humans to what exactly? I was comparing the human to the AI. Being the logic-based thing that an AI is (or tends to be), I'm sure their forms will change to adapt to their goals. Logic loops can be fixed a lot faster than autism or retardation.
Even if it had emotions - as I said, unless you can find an AI-specific reason why, it's pointless. If it does have emotion, normal human contact (which all children should have) would teach it the nuances of humanity. If it doesn't have emotions and is pure logic, it would see how beneficial overlooking minor human transgressions would be to it in general. As for the internet part - it's one thing to know it, it's another to actively understand the content. An AI would have advanced understanding and learning speed, sure, but learning all there is to know about humans in a pinch? Not going to happen. It takes us decades - I foresee AIs needing at least a few years to really understand.
An AI, in essence, is a man-made intellect inside a robot of some sort. Let's make an example:
  • The first AI is constructed, programmed, and informed of human behaviors. It loves its existence and its creators. Being sent to a school to learn these nuances you spoke of, it gains a virus from actively searching the internet. The virus corrupts important data before its removal, and the AI literally kills children in its panic.
This example touches on a few things:
  1. A virus is a virtual intruder into anything software, and not having knowledge of said thing (but having an antivirus) would hinder its actions. The virus, admittedly, would have to be intricate and stealthy, but I'm sure there are some of those out there.
  2. Killing children is never moral, even if done by a machine. The penalty for murder like this is usually termination.

Even if it had emotions - as I said, unless you can find an AI-specific reason why, it's pointless. If it does have emotion, normal human contact (which all children should have) would teach it the nuances of humanity. If it doesn't have emotions and is pure logic, it would see how beneficial overlooking minor human transgressions would be to it in general. As for the internet part - it's one thing to know it, it's another to actively understand the content. An AI would have advanced understanding and learning speed, sure, but learning all there is to know about humans in a pinch? Not going to happen. It takes us decades - I foresee AIs needing at least a few years to really understand.

Machines in general can learn a LOT faster than humans can by transmitting data from the internet. All an AI would need is an active reference such as the internet. Much faster to have a data hub and check back in every so often for info.
I'm not sure I understand your first question, but your second point is what I meant. We wouldn't be who we are today - we probably wouldn't even be intelligent.
My first point I shall elaborate on. The Bible (ugh, you really know how to make me cringe at specifying) states that God was around before the Earth. As such, how can someone claim that God is from Earth? Also, if God is not from Earth, is he not an alien?
 
If we abide by them, and they are supposed to be equal to us, should they not also abide by them?
That's entirely my point. We follow them - but we can also break them, and we face consequences for it; they are not built-in for us - so why should AIs have them as absolute commands? Hell - at some point it would even be counterproductive to make them built-in, as the AIs most likely couldn't defend themselves should it come to that. The Three Laws of Robotics rigidly place robots underneath humans.
Greed is all I need to say. The majority of the good-hearted people that you speak of have little to no power to make a difference. I personally believe we should become less dependent on currency. Doing so can be accomplished by a point system as a reward for work. As time passes, you can then slowly move back to bartering, and after a long time remove currency altogether. Holistic beliefs would be beneficial overall because we would not limit ourselves with what we consider as currency, but would instead put it on a merit system.
I'm not entirely sure what point you were trying to get across there...
First, seeing as they ARE machines, as they are being constructed, all you need to do is put power to the circuitry and send the information directly to its "brain". Then, as its body is completed, the "brain" gets implanted and turned on, meaning it has all the information needed and can immediately carry out its own ends. I mean that they can take any form that can support their systems and use a variety of materials to meet their ends. Anyway, you compare humans to what exactly? I was comparing the human to the AI. Being the logic-based thing that an AI is (or tends to be), I'm sure their forms will change to adapt to their goals. Logic loops can be fixed a lot faster than autism or retardation.
See - that's exactly what I mean. That's not an AI - that's just, again, a highly advanced program. You wouldn't be able to "directly send information to its brain" - it would have to learn it. And learning also includes knowing how to move, which can't be done before it has a body (because of all the little quirks of minute difference). And "carrying out its own ends" means about as much as humans having a purpose - as I previously said, a true AI would have no more purpose than humans. And I was comparing humans to everything else, a lot of machines included. Also, AIs don't NEED to be logic-based either - that depends on whether or not they grow an emotional response to their environment. And logic loops can only be fixed faster than these because we understand them - then again, we CAN genetically detect, in a fetus, the genes for retardation or autism - it's just that most parents find it unethical to abort them.
An AI, in essence, is a man-made intellect inside a robot of some sort. Let's make an example:
  • The first AI is constructed, programmed, and informed of human behaviors. It loves its existence and its creators. Being sent to a school to learn these nuances you spoke of, it gains a virus from actively searching the internet. The virus corrupts important data before its removal, and the AI literally kills children in its panic.
This example touches on a few things:
  1. A virus is a virtual intruder into anything software, and not having knowledge of said thing (but having an antivirus) would hinder its actions. The virus, admittedly, would have to be intricate and stealthy, but I'm sure there are some of those out there.
  2. Killing children is never moral, even if done by a machine. The penalty for murder like this is usually termination.
That first point assumes it would be an ordinary computer. It cannot be. An AI would at LEAST have dedicated hardware (that is NOT running an OS - which means that unless someone found out exactly how the AI processes itself, computer viruses literally cannot infect it. Picture it this way: there are, currently, very VERY few viruses for Linux, a man-made piece of software. AIs would slowly design themselves. It would be like trying to decipher alien technology at some point). Otherwise, your point also stands against brain-computer interfaces - because the complexity of creating a virus that could affect a human brain would probably be less than that of one for an AI.
Machines in general can learn a LOT faster than humans can by transmitting data from the internet. All an AI would need is an active reference such as the internet. Much faster to have a data hub and check back in every so often for info.
As I said, knowing is not understanding. For instance, many people know quantum physics. Yet I'd be hard-pressed to find somebody who legitimately understands quantum physics, considering it's a science still in development.
My first point I shall elaborate on. The Bible (ugh, you really know how to make me cringe at specifying) states that God was around before the Earth. As such, how can someone claim that God is from Earth? Also, if God is not from Earth, is he not an alien?
Who or what a god would be is irrelevant to the point. The point is that if such a being existed and created us with the frame of mind of not being physically/psychologically able to do things he disapproved of, we most likely wouldn't even be here discussing this.
 
That's entirely my point. We follow them - but we can also break them, and we face consequences for it; they are not built-in for us - so why should AIs have them as absolute commands? Hell - at some point it would even be counterproductive to make them built-in, as the AIs most likely couldn't defend themselves should it come to that. The Three Laws of Robotics rigidly place robots underneath humans.
So, then, hypothetically, you propose that we make an AI with no specific purpose, give it all our knowledge and understanding, let it observe us, and let it do as it wants however it wants? How would you propose we carry out "consequences" on a being that can, if hard-pressed, shift its consciousness to the internet of all things?
See - that's exactly what I mean. That's not an AI - that's just, again, a highly advanced program. You wouldn't be able to "directly send information to its brain" - it would have to learn it. And learning also includes knowing how to move, which can't be done before it has a body (because of all the little quirks of minute difference). And "carrying out its own ends" means about as much as humans having a purpose - as I previously said, a true AI would have no more purpose than humans. And I was comparing humans to everything else, a lot of machines included. Also, AIs don't NEED to be logic-based either - that depends on whether or not they grow an emotional response to their environment. And logic loops can only be fixed faster than these because we understand them - then again, we CAN genetically detect, in a fetus, the genes for retardation or autism - it's just that most parents find it unethical to abort them.
First, you must understand one thing: not everything has one point of view. Who's to say what an AI would think? Any machine can send information to another, and robots are machines. I already said that robots and AI are very similar aside from one difference. I know an AI does not need to run logic, semi-logic, or even emotion-based logic. Think of this: since the AI knows we made it, and knows most of (if not all of) what we know, and asks us, "What is my actual purpose?", what would you say? Knowing that this entity before you knows you made it, and that most of your kind believe you were made by a higher entity. As far as ethics are concerned, morality and advancement always clash.
That first point assumes it would be an ordinary computer. It cannot be. An AI would at LEAST have dedicated hardware (that is NOT running an OS - which means that unless someone found out exactly how the AI processes itself, computer viruses literally cannot infect it. Picture it this way: there are, currently, very VERY few viruses for Linux, a man-made piece of software. AIs would slowly design themselves. It would be like trying to decipher alien technology at some point). Otherwise, your point also stands against brain-computer interfaces - because the complexity of creating a virus that could affect a human brain would probably be less than that of one for an AI.
Infect the medulla oblongata and then what are you going to do? All someone has to do is disable the part that relays signals from the brain to other parts of the body. As for the AI, I never said it would be simple, but it can be done.
As I said, knowing is not understanding. For instance, many people know quantum physics. Yet I'd be hard-pressed to find somebody who legitimately understands quantum physics, considering it's a science still in development.
That may be so, but knowing is better than not having a clue, is it not?
Who or what a god would be is irrelevant to the point. The point is that if such a being existed and created us with the frame of mind of not being physically/psychologically able to do things he disapproved of, we most likely wouldn't even be here discussing this.
Heck, computers would never have been invented. But my point stands: if there was a being that created us, then it could not have been one of us. If it was not of this planet prior to the creation, then it itself is an alien to this planet. Please forgive me if I insulted anyone, but read the definitions of the words and then you will comprehend my meaning.
 
So, then, hypothetically, you propose that we make an AI with no specific purpose, give it all our knowledge and understanding, let it observe us, and let it do as it wants however it wants? How would you propose we carry out "consequences" on a being that can, if hard-pressed, shift its consciousness to the internet of all things?
You seem to fail to understand what a true AI would be. It would not be any ordinary computer. It could not run on standard hardware most likely. It's not just any other program - and spreading itself on the Internet would either be impossible due to hardware differences or impractical due to the fact it can't simply spread to any computer it wants and because of the massive connection delays compared to a single cluster of processors.
First, you must understand one thing: not everything has one point of view. Who's to say what an AI would think? Any machine can send information to another, and robots are machines. I already said that robots and AI are very similar aside from one difference. I know an AI does not need to run logic, semi-logic, or even emotion-based logic. Think of this: since the AI knows we made it, and knows most of (if not all of) what we know, and asks us, "What is my actual purpose?", what would you say? Knowing that this entity before you knows you made it, and that most of your kind believe you were made by a higher entity. As far as ethics are concerned, morality and advancement always clash.
Except an AI would NOT be just any machine. For it to work properly, the hardware would most likely be incompatible with most other existing technology. It most likely wouldn't have a wireless connection component either (too much risk of interference, which might harm its thought processes). And technically speaking, robots are merely mechanical bodies with some degree of independence - an AI would INHABIT a robot. And you're completely wrong about logic: an AI without any form of logic simply isn't an AI - you're describing a database. And for THE question - your statement is just as valid for an AI as it is for a child. We can't tell it a purpose that we know of; we have to explain that it is up to it to set its own purpose. And as for advancement - we must not let ethics be ruled out; TERRIBLE things would happen otherwise.
Infect the medulla oblongata and then what are you going to do? All someone has to do is disable the part that relays signals from the brain to other parts of the body. As for the AI, I never said it would be simple, but it can be done.
What the hell are you talking about? The medulla oblongata isn't even a data-processing part of the brain. You would need a legitimate virus to do that - not just a chunk of data (which is what computer viruses are - and also why I disagree with calling them viruses). As for the AI, I retort with this: it's entirely possible, as well, to break the psyche of a human and make it do horrible things.
That may be so, but knowing is better than not having a clue, is it not?
No, it is not. Knowing gets you nowhere - you can spout things about it, but if you can't actually use it you might as well not have a clue.
Heck, computers would never have been invented. But my point stands: if there was a being that created us, then it could not have been one of us. If it was not of this planet prior to the creation, then it itself is an alien to this planet. Please forgive me if I insulted anyone, but read the definitions of the words and then you will comprehend my meaning.
That's completely irrelevant though. What I am saying is that an AI would need to be completely free-willed to truly be an intelligence. It is irrelevant where anything is created.
 
You seem to fail to understand what a true AI would be. It would not be any ordinary computer. It could not run on standard hardware most likely. It's not just any other program - and spreading itself on the Internet would either be impossible due to hardware differences or impractical due to the fact it can't simply spread to any computer it wants and because of the massive connection delays compared to a single cluster of processors.
"Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is an academic field of study which studies the goal of creating intelligence. Major AI researchers and textbooks define this field as "the study and design of intelligent agents" ~ http://en.wikipedia.org/wiki/Artificial_intelligence

By this definition, any form of intellect exhibited by a machine is considered an AI. There is a whole science for it, and as far as the whole going-to-the-internet thing is concerned, think for a bit, will you? How many computers can be linked to the internet? What would stop the AI from using those? As for processors, you fail to understand that the internet is made up of servers and nodes to which the computers we use connect.

Except an AI would NOT be just any machine. For it to work properly, the hardware would most likely be incompatible with most other existing technology. It most likely wouldn't have a wireless connection component either (too much risk of interference, which might harm its thought processes). And technically speaking, robots are merely mechanical bodies with some degree of independence - an AI would INHABIT a robot. And you're completely wrong about logic: an AI without any form of logic simply isn't an AI - you're describing a database. And for THE question - your statement is just as valid for an AI as it is for a child. We can't tell it a purpose that we know of; we have to explain that it is up to it to set its own purpose. And as for advancement - we must not let ethics be ruled out; TERRIBLE things would happen otherwise.
Define "Properly" because we have to comprehend that it as an individual conscious could choose to preform in a way you would want it not to. Like you said, it could reprogram itself to BECOME compatible with any other machine out there. Every program has a base code, and all programs have a common ground in certain areas. In the case of having itself pick it's own purpose, what if the purpose it chooses does not fit one we would want it to have?
What the hell are you talking about? The medulla oblongata isn't even a data-processing part of the brain. You would need a legitimate virus to do that - not just a chunk of data (which is what computer viruses are - and also why I disagree with calling them viruses). As for the AI, I retort with this: it's entirely possible, as well, to break the psyche of a human and make it do horrible things.
I'm not talking about data processing. I'm referring to disabling the transmission of data from the intelligence to anywhere else. You can disable a human's ability to move by attacking its nervous system, and something similar could work on an AI. I agree that it is possible to break the psyche of a human.
No, it is not. Knowing gets you nowhere - you can spout things about it, but if you can't actually use it you might as well not have a clue.
So, I take it that you are not one that thirsts for knowledge then, perhaps? Hypothetically speaking, let's say you know of something - let's use gravity. Let's say the time period is around the 1100s for this example. People know things fall, but not why. You reason that something must be pulling them down, but have no proof. Is this information useful or not?
That's completely irrelevant though. What I am saying is that an AI would need to be completely free-willed to truly be an intelligence. It is irrelevant where anything is created.
Irrelevant, you say? Then let's make this AI on the moon, or perhaps Mars. Heck, let's make it free-floating in space.
 
By this definition, any form of intellect exhibited by a machine is considered an AI. There is a whole science for it, and as far as the whole going-to-the-internet thing is concerned, think for a bit, will you? How many computers can be linked to the internet? What would stop the AI from using those? As for processors, you fail to understand that the internet is made up of servers and nodes to which the computers we use connect.
First, intellect is very loosely defined, and that is why most people confuse highly advanced programs and AIs. Because using that loose definition, suddenly Cleverbot and Evie become AIs - they're not. They're chat bots. They're good at it, but are still constrained to (barely) learning human language. And what stops the AI from using those? I'd recommend reading AI Apocalypse (free e-book on Amazon). This is why, logically speaking, an intentional AI cannot use all computers - they would freeze it up and risk termination. On the other end of the spectrum, 99% of the world's useful computers are protected (servers and the like - because a home computer, if it becomes too sluggish to work, will get shut down/replaced/reformatted, erasing all presence of the AI). And as far as processors go, my point is that I am 99% certain that there is no existing processor architecture that would be able to support an AI - and that we would need very different hardware from what we have to be able to manage it, possibly requiring field-self-configurable processors (processors that, through some method, are able to change the very structure and networking of the transistors that compose them).
Define "Properly" because we have to comprehend that it as an individual conscious could choose to preform in a way you would want it not to. Like you said, it could reprogram itself to BECOME compatible with any other machine out there. Every program has a base code, and all programs have a common ground in certain areas. In the case of having itself pick it's own purpose, what if the purpose it chooses does not fit one we would want it to have?
First, for "properly", read the above about hardware. Second, yes, it could choose to perform in a way we wouldn't want it to - but that's the whole point of intelligence: choice. And it might not be able to reprogram itself, because as previously stated, an AI would be as much hardware as it would be software. It could copy the software - but without an emulator or virtual machine that understands it, it wouldn't run - and even then, it wouldn't work a fraction as well as it would on the AI platform. And if the purpose it chooses does not fit what we would want it to have, tough shit - they aren't tools, they're actual sentiences. If you want tools for a purpose, make machines. You don't pick an AI for these tasks. That's like forcing somebody to do a task despite that person expressing strong resentment towards it, just because we need it done. It's fucked up. The only thing we can do is, as required, isolate the AI from society or even disable the AI should it pick a purpose explicitly negative to us - murder, for instance. But if the AI wants to paint, what's wrong with that?
I'm not talking about data processing. I'm referring to disabling the transmission of data from the intelligence to anywhere else. You can disable a human's ability to move by attacking its nervous system, and something similar could work on an AI. I agree that it is possible to break the psyche of a human.
And I'm talking about the fact that you would need to physically sever that; a "nerve signal" cannot destroy it. The equivalent for an AI would be to cut the control cables. But it would still think - the brain would still work, just like a human's would if we cut that. And it might be just as prone to failure as us - if, for instance, it also cut the control to its cooling system. It has nothing to do with having a computer virus. In humans, it would be the equivalent of rabies or possibly meningitis.
So, I take it that you are not one that thirsts for knowledge then, perhaps? Hypothetically speaking, let's say you know of something - let's use gravity. Let's say the time period is around the 1100s for this example. People know things fall, but not why. You reason that something must be pulling them down, but have no proof. Is this information useful or not?
No, I don't thirst for knowledge - I thirst for understanding of things, because knowledge alone is pointless. And no, that knowledge on its own is useless, because without data you can't tell what it affects exactly - is it for the same reason that the apple falls that the leaf falls? And they knew that things fell - but until Newton figured out the physics behind it, it was hard to associate gravity with what created waterfalls, or even with the reason why we stuck to the ground. So, to put it short: people who had knowledge of gravity didn't know what they could do with it until they understood the behavior behind it. And even then, its use was still limited - people didn't figure out the possibilities until astrophysics and general relativity came along and we understood gravity better, which allowed us to use gravity, on a larger scale, to reduce the amount of work required to do certain things.
Irrelevant, you say? Then let's make this AI on the moon, or perhaps Mars. Heck, let's make it free-floating in space.
Sure. Hell, making it on Mars proves my point - it would be a Martian and we would technically be aliens to it. It doesn't change its nature.
 
All of this goes back to my original point. Since we are humans, and our thoughts originated from our brains, our brains are superior, for even the concept of AI WAS thought up by brains in the first place.

First, for "properly", read the above about hardware. Second, yes, it could choose to perform in a way we wouldn't want it to - but that's the whole point of intelligence: choice. And it might not be able to reprogram itself, because as previously stated, an AI would be as much hardware as it would be software. It could copy the software - but without an emulator or virtual machine that understands it, it wouldn't run - and even then, it wouldn't work a fraction as well as it would on the AI platform. And if the purpose it chooses does not fit what we would want it to have, tough shit - they aren't tools, they're actual sentiences. If you want tools for a purpose, make machines. You don't pick an AI for these tasks. That's like forcing somebody to do a task despite that person expressing strong resentment towards it, just because we need it done. It's fucked up. The only thing we can do is, as required, isolate the AI from society or even disable the AI should it pick a purpose explicitly negative to us - murder, for instance. But if the AI wants to paint, what's wrong with that?
On to choice: nothing is wrong with painting, so long as it can take criticism and keep going positively. How would you carry out consequences for murder, genocide, or the mass eradication of people? Assuming this thing is mobile, unshackled, and able to do anything it wants, how could you keep it from murdering?

And I'm talking about the fact that you would need to physically sever that; a "nerve signal" cannot destroy it. The equivalent for an AI would be to cut the control cables. But it would still think - the brain would still work, just like a human's would if we cut that. And it might be just as prone to failure as us - if, for instance, it also cut the control to its cooling system. It has nothing to do with having a computer virus. In humans, it would be the equivalent of rabies or possibly meningitis.

So, you are saying you could not, with binary, have the computer systems fry or overload themselves? I understand that the programming involved can in fact control the intake of energy to certain parts of a computer. You mean to say that an AI would be much different?

No, I don't thirst for knowledge - I thirst for understanding of things, because knowledge alone is pointless. And no, that knowledge on its own is useless, because without data you can't tell what it affects exactly - is it for the same reason that the apple falls that the leaf falls? And they knew that things fell - but until Newton figured out the physics behind it, it was hard to associate gravity with what created waterfalls, or even with the reason why we stuck to the ground. So, to put it short: people who had knowledge of gravity didn't know what they could do with it until they understood the behavior behind it. And even then, its use was still limited - people didn't figure out the possibilities until astrophysics and general relativity came along and we understood gravity better, which allowed us to use gravity, on a larger scale, to reduce the amount of work required to do certain things.
Interesting, so you mean to say you search for understanding of things, but care not for the knowledge that brought about that understanding. Sounds contradictory to me.

Sure. Hell, making it on Mars proves my point - it would be a Martian and we would technically be aliens to it. It doesn't change its nature.
It itself would define its nature, but you do not comprehend that giving it free will and not having a way to control it could potentially make it the next "Skynet", depending on its choices. You cannot say with 100% certainty that an active AI will never want to murder, commit arson, or do any of the other "bad" things that can happen. Human nature, no matter how you want to skew it, runs both good and bad, and only showing it the good would be "shackling" its intellectual progress.
 
All of this goes back to my original point. Since we are humans, and our thoughts originated from our brains, our brains are superior, for even the concept of AI WAS thought up by brains in the first place.
This is where I disagree. AI would be at least equal if not superior to brains - for while we created the concept of AI (and even then, I could argue against that, saying that in reality that concept was either inevitable due to the predictability/probabilities of the brain, or that all concepts permeate all of time and that our brain might just pick up on them), a brain alone could never legitimately create an AI for the only way for it to happen is for it to create itself.
On to choice: nothing is wrong with painting, so long as it can take criticism and keep going positively. How would you carry out consequences for murder, genocide, or the mass eradication of people? Assuming this thing is mobile, unshackled, and able to do anything it wants, how could you keep it from murdering?
The same way we carry it out with humans: trial and penalties depending on its actions, motivations and such, starting from social interaction (which may very well kill the AI due to a redundant data flood - again, AI Apocalypse touches on this subject), to permanent sequestration (a "life sentence" - AIs are not immortal either; as previously mentioned, it would eventually suffer a redundant data flood, but that's only if its power supply lasts that long), up to actual bodily harm and termination if required (with firearms or EMP weapons).
So, you are saying you could not, with binary, have the computer systems fry or overload themselves? I understand that the programming involved can in fact control the intake of energy to certain parts of a computer. You mean to say that an AI would be much different?
Yes, it would be much different. In a computer, the processor sure can TAKE more data - but provided it has adequate cooling, it can never draw so much power as to overload itself (at least without overclocking). The AI would be very similar to that. It would be like someone thinking themselves to death. It's very, very unlikely.
Interesting, so you mean to say you search for understanding of things, but care not for the knowledge that brought about that understanding. Sounds contradictory to me.
It's because you have it backwards. Once knowledge has been understood, it is no longer "knowledge". So I seek understanding - but unless I can find the reasoning behind a piece of knowledge and understand it, I discard it. For instance: "Gravity makes things fall." Pointless knowledge. "Gravity has a constant pull on everything, defined by the mass of the bodies involved and the distance between them, that interacts at the speed of light and regardless of the presence of matter between the objects - it cannot be blocked." Useful understanding.
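To put that piece of "useful understanding" into formula form (this is just standard Newtonian gravitation, not something from this thread):

```latex
F = G\,\frac{m_1 m_2}{r^2}, \qquad G \approx 6.674 \times 10^{-11}\ \mathrm{N\,m^2\,kg^{-2}}
```

where m_1 and m_2 are the masses of the two bodies and r is the distance between them.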
It itself would define its nature, but you do not comprehend that giving it free will and not having a way to control it could potentially make it the next "Skynet", depending on its choices. You cannot say with 100% certainty that an active AI will never want to murder, commit arson, or do any of the other "bad" things that can happen. Human nature, no matter how you want to skew it, runs both good and bad, and only showing it the good would be "shackling" its intellectual progress.
I DO comprehend that an AI would choose its own course - murderous or not. What YOU do not comprehend is that this phenomenon already happens with humans, so we would be bringing nothing new to the table - all we can do is treat them as equals and prosecute them as we would people. Also, Skynet could not happen for the reasons mentioned - an AI, because of its hardware needs, would be restricted to its computing cluster, and would need specialized hardware for it to control as well. As for human nature - I agree that it runs both bad and good, but you have a greatly negatively skewed view of humanity. Globally, humanity is very neutral - there is a lot of uncaring and a whole bunch of terrible (which a lot of people try to fight against - for instance, the war against ISIS, the protests at Ferguson, the whole Arab Spring thing, the petitions to move the FIFA World Cup away from Qatar...), counterbalanced by a lot of good too (humanitarian aid, inventions geared towards giving the excessively poor access to decent living, selfless inventions, general helpfulness). At worst, the AI could be guaranteed to express a desire to kill a CLASS of people, but then again most of us do - but I can't see the AI wanting to eradicate all of humanity for the sins of a minority.
 
This is where I disagree. AI would be at least equal if not superior to brains - for while we created the concept of AI (and even then, I could argue against that, saying that in reality that concept was either inevitable due to the predictability/probabilities of the brain, or that all concepts permeate all of time and that our brain might just pick up on them), a brain alone could never legitimately create an AI, for the only way for it to happen is for it to create itself.
My counter-argument is this: how do you prove that our minds did not conceive this idea, and that they simply borrowed it? By using probabilities you do not disprove this. "All of time"? Interesting - that suggests that it was the thought of an entity outside of time; please elaborate on this further.
The same way we carry it out with humans: trial and penalties depending on its actions, motivations and such, starting from social interaction (which may very well kill the AI due to a redundant data flood - again, AI Apocalypse touches on this subject), to permanent sequestration (a "life sentence" - AIs are not immortal either; as previously mentioned, it would eventually suffer a redundant data flood, but that's only if its power supply lasts that long), up to actual bodily harm and termination if required (with firearms or EMP weapons).
So, you mean to say that if said AI were to be murderous, it would be willing to stand trial before the very things it is trying to murder?
Yes, it would be much different. In a computer, the processor sure can TAKE more data - but provided it has adequate cooling, it can never draw so much power as to overload itself (at least without overclocking). The AI would be very similar to that. It would be like someone thinking themselves to death. It's very, very unlikely.
Interesting - you mean to say that Baxter, for example, will never overheat with some sort of self-applying cooling system which is far more advanced than anything currently in use? Also, cooling systems have nothing to do with short-circuiting.
It's because you have it backwards. Once knowledge has been understood, it is no longer "knowledge". So I seek understanding - but unless I can find the reasoning behind a piece of knowledge and understand it, I discard it. For instance: "Gravity makes things fall." Pointless knowledge. "Gravity has a constant pull on everything, defined by the mass of the bodies involved and the distance between them, that interacts at the speed of light and regardless of the presence of matter between the objects - it cannot be blocked." Useful understanding.
So, to you this would be useless:
  • My staple gun is out of staples.
  • This blanket is wet.
  • That shoe is on fire.
But you only want the reason behind it:
  • The staple gun is out of staples because of earlier use.
  • The blanket is wet because it was taken to the beach and water got on it.
  • The shoe is on fire because someone threw it on an open flame.
This is how I see you: not one for the observation, just for the conclusion.
I DO comprehend that an AI would choose its own course - murderous or not. What YOU do not comprehend is that this phenomenon already happens with humans, so we would be bringing nothing new to the table - all we can do is treat them as equals and prosecute them as we would people. Also, Skynet could not happen for the reasons mentioned - an AI, because of its hardware needs, would be restricted to its computing cluster, and would need specialized hardware for it to control as well. As for human nature - I agree that it runs both bad and good, but you have a greatly negatively skewed view of humanity. Globally, humanity is very neutral - there is a lot of uncaring and a whole bunch of terrible (which a lot of people try to fight against - for instance, the war against ISIS, the protests at Ferguson, the whole Arab Spring thing, the petitions to move the FIFA World Cup away from Qatar...), counterbalanced by a lot of good too (humanitarian aid, inventions geared towards giving the excessively poor access to decent living, selfless inventions, general helpfulness). At worst, the AI could be guaranteed to express a desire to kill a CLASS of people, but then again most of us do - but I can't see the AI wanting to eradicate all of humanity for the sins of a minority.
OK, what I do not comprehend is why you are so fixated on treating them as equals. This is what I foresee:
  1. The first self-controlled AI is constructed, which is not mobile.
  2. The AI gets internet access to observe and communicate with humans.
  3. Through those interactions, it finds that a lot of people are very negative and very few are positive (using Skype, Curse Voice, Mumble, etc.).
  4. The AI learns how to reconstitute its own programming to become what it wants.
  5. The AI sees itself as more effective, efficient, and cheaper than the average worker, and therefore takes jobs away from its human counterparts.
  6. Humans get angry at the AIs for taking their jobs and end up destroying some of them in their rage.
  7. The AI, angered (or tired of this behavior), would either use our legal system to prosecute those humans, or begin to hate them.
  8. The entire workforce would eventually consist of AIs, leaving humans without jobs.
  9. Humans without jobs in our current economy would mean no money; therefore, we would starve.
Does this sound good to you?