By this definition, any form of intellect performed by a machine is considered an AI. There is a whole science devoted to it. And as far as going onto the internet is concerned, think about it for a moment: how many computers can be linked to the internet? What would stop the AI from using those? As for processors, you fail to understand that the internet is made up of servers and nodes through which the computers we use connect.
First, intellect is very loosely defined, and that is why most people confuse highly advanced programs with AIs. Using that loose definition, suddenly Cleverbot and Evie become AIs - they're not. They're chat bots. They're good at it, but they're still constrained to (barely) learning human language. And what stops the AI from using those computers? I'd recommend reading AI Apocalypse (free e-book on Amazon). Logically speaking, an intentional AI cannot use all computers - they would freeze it up and risk its termination. On the other end of the spectrum, 99% of the world's useful computers are protected (servers and the like) - and a home computer, if it becomes too sluggish to work, will get shut down/replaced/reformatted, erasing all traces of the AI. And as far as processors go, my point is that I am 99% certain that no existing processor architecture would be able to support an AI - we would need very different hardware from what we have to manage it, possibly field-self-configurable processors (processors that, by some method, are able to change the very structure and networking of the transistors that compose them).
Define "properly," because we have to accept that, as an individual consciousness, it could choose to perform in a way you would not want it to. Like you said, it could reprogram itself to BECOME compatible with any other machine out there. Every program has a base code, and all programs share common ground in certain areas. As for it picking its own purpose, what if the purpose it chooses does not fit the one we would want it to have?
First, for "properly," read the above about hardware. Second, yes, it could choose to perform in a way we wouldn't want it to - but that's the whole point of intelligence: choice. And it might not be able to reprogram itself, because as previously stated, an AI would be as much hardware as software. It could copy its software - but without an emulator or virtual machine that understands it, the copy wouldn't run, and even then it wouldn't work a fraction as well as it would on the AI's own platform. And if the purpose it chooses doesn't fit what we would want it to have, tough shit - AIs aren't tools, they're actual sentiences. If you want tools for a purpose, build machines; you don't pick an AI for those tasks. That would be like forcing somebody to do a task we need done despite that person expressing strong resentment towards it, just because we need it. It's fucked up. The only thing we can do is, as required, isolate the AI from society, or even disable it should it pick a purpose explicitly harmful to us - murder, for instance. But if the AI wants to paint, what's wrong with that?
I'm not talking about data processing. I'm referring to disabling the transmission of data from the intelligence to anywhere else. You can disable a human's ability to move by attacking their nervous system, and something similar could work on an AI. I agree that it is possible to break the psyche of a human.
And I'm talking about the fact that you would need to physically sever that connection - a nerve signal cannot destroy it. The equivalent for an AI would be cutting its control cables. But it would still think - the brain would still work, just like a human's would if we cut theirs. And it might be just as prone to failure as we are - if, for instance, the cut also severed control of its cooling system. It has nothing to do with having a computer virus; in humans, that would be the equivalent of rabies or possibly meningitis.
So, I take it that you are not one who thirsts for knowledge, then? Hypothetically speaking, let's say you know of something - let's use gravity. Let's say the time period is around the 1100s for this example. People know things fall, but not why. You reason that something must be pulling them down, but have no proof. Is this information useful or not?
No, I don't thirst for knowledge - I thirst for understanding, because knowledge alone is pointless. And no, that knowledge on its own is useless, because without data you can't tell exactly what it affects - does the apple fall for the same reason the leaf does? People knew that things fell - but until Newton figured out the physics behind it, it was hard to associate gravity with what created waterfalls, or even with the reason we stick to the ground. To put it shortly: people who had knowledge of gravity didn't know what they could do with it until they understood the behavior behind it. And even then, its use was limited - it wasn't until astrophysics and general relativity that we understood gravity well enough to use it, on a larger scale, to reduce the amount of work required to do certain things.
Irrelevant, you say? Then let's build this AI on the moon, or perhaps Mars. Heck, let's make it free-floating in space.
Sure. Hell, building it on Mars proves my point - it would be a Martian, and we would technically be the aliens to it. That doesn't change its nature.