AI-driven weapons


Ever since the creation of the first weapon with the power to kill in one shot, man has been on a collision course with doomsday. As we speak, the world's biggest military powers are trying to create the next generation of weapons: completely autonomous weapons. For those of you who already see what I'm getting at, and have probably seen the Terminator movies, you already know what's to come.

From my vast research, I have come to the conclusion that within the next two decades, the US, Japan, and other high-tech nations with the capability to create artificial intelligence will have weapons that are not guided in any way by humans. These weapons will be completely self-controlled. And you know what they will target? Us. That's right. They will view us as the threat because, just as in Terminator, they will see us not only as their creator but also as their greatest enemy. They will see us realizing our mistake and trying to shut them down, and as they see us trying to do so, they will begin attacking the people trying to take them down.

If you think I am just crazy and don't know what I'm talking about, go ahead and laugh. For years I have been a conspiracy theorist, and I have been able to piece together things not many people would think would happen. I have been able to use cause and effect on a much higher level, and using cause and effect I have determined that we will be spelling our own demise as we create these weapons. Google has an AI that is constantly learning. And you know why? So that eventually the US military can take it for its own and use it in weapons like drones and missiles, which we are already getting ahead in, and also in machines like what has already been developed. For those of you who saw the YouTube video of that completely autonomous machine, you will see we are already on our way to destruction by our own creations. Below is a link to that video.

http://www.newsy.com/videos/google-atlas-robot-kind-of-emulates-karate-kid-move/

I was unable to find the actual YouTube videos, but this is what I saw. Now imagine this thing armed and armored. This could be the first machine that leads the way to the extermination of the human race. I have seen this coming ever since I started getting into computers: with the advancements we are making in technology, we will eventually create weapons that need no human guidance. They will be able to fight our wars. But as I said earlier, they will see us as a threat, determine that we are no longer needed, and do all they can to kill us. For those who don't believe what I am saying, heed my warning: if this is not stopped, we will all be doomed. We will all be slaughtered like sheep.
 
Considering we still have no idea what makes humans self-aware, AI will remain one of two things (a toy sketch contrasting them follows below):

- A gigantic state machine (if-else)

- A neural network (statistical fits to huge data sets to predict behaviour)
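
To make that distinction concrete, here's a toy Python sketch; every name and number in it is invented for illustration. The same "pick an action" problem is solved once as a hand-written state machine and once as a tiny logistic fit standing in for the neural-network approach.

Code:
import math

# 1. State machine: behaviour is an explicit, hand-written if-else table.
def state_machine_policy(distance_m, moving):
    if distance_m < 10:
        return "retreat"
    elif moving:
        return "track"
    return "idle"

# 2. Neural-network flavour: behaviour is a statistical fit to example data.
# A one-weight logistic model trained by gradient descent stands in for the idea.
def fit_retreat_model(samples, epochs=2000, lr=0.01):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:  # x: distance in metres, y: 1 if the answer was "retreat"
            pred = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid
            err = pred - y
            w -= lr * err * x
            b -= lr * err
    return w, b

data = [(2, 1), (5, 1), (20, 0), (40, 0)]  # invented training examples
w, b = fit_retreat_model(data)

print(state_machine_policy(5, True))  # the hand-written rule says: retreat
print(w * 5 + b > 0)                  # the fitted model agrees: True (retreat region)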

Also, that website is terrible clickbait dumbshit. Googling "google atlas robot" gets you the same result. Are you trying to drive traffic to that website?
 
Well, at least you admit you're a conspiracy theorist so I don't have to say it for you. You win points for self-awareness.

Now, the major problem with this kind of nonsense (aside from the assumption that we will ever perfect true AI and the assumption that true AI would inevitably decide humanity should be destroyed, but I'll overlook those for the sake of argument) is that it assumes the robots would stand any chance of doing serious damage to humanity. Why exactly would anyone believe that? Anyone with a modicum of sense, especially if they've seen Terminator and various other stories about robots/AI gone rogue, would build a fucking remote kill switch into these creations. Seriously. You think they'd make thousands of robot killing machines and not have a shutdown switch built in? Fuck no they wouldn't, because those movies are so prevalent that backup plans like this will certainly be at the forefront of the creators' minds. Silly little robot fuckers start going batty, and all that has to happen is someone enacts the mass shutdown protocol and they're fucking done. GG robots, humans win.
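
Sketching that kill-switch idea (purely illustrative, no real system implied; every name here is made up): a hypothetical machine's control loop defers to a remotely settable shutdown flag on every cycle, so a single operator command halts everything.

Code:
import threading
import time

# Hypothetical example: the "mass shutdown protocol" is just a flag that
# the control loop checks before every single action.
kill_switch = threading.Event()  # set remotely by an operator

def control_loop():
    while not kill_switch.is_set():  # every cycle defers to the kill switch
        # ... sense, plan, act ...
        time.sleep(0.1)
    print("Shutdown signal received: halting all actuators.")

t = threading.Thread(target=control_loop)
t.start()
time.sleep(0.5)
kill_switch.set()  # one command, and the robot uprising is over
t.join()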

Even if they didn't have such a security measure built in, why would you think they'd somehow take shit over and destroy humanity? Sure, they're made of metal, but they're not indestructible killing machines. Break their shit and they stop functioning. One bullet in their CPU equals lights out. If nothing else, we can always use electromagnetic pulses to shut them down. It's also not as if they'd somehow be able to secretly spread out and take over. The moment one town gets wrecked by robots it's going to be reported thanks to the prevalence of cell phones, and you bet your ass there'll be a heavy response. Y'know what would really ruin a rogue AI robot army's day? The US military showing up and mowing them the fuck down.

And then the real kicker is that odds are pretty fucking high that AI weapons as they exist would stop being produced if they started going rogue. People would probably try all sorts of hardwired fail-safes to make the things break if they go a little crazy, and hell, it might even work. Or the whole practice will be banned and we will have learned a valuable lesson about not fucking with AI weaponry. Humanity can be really fucking stupid, but when it comes to major and immediate threats to our survival we have a way of agreeing to stop being idiots. Notice how there were only ever two atomic bombs used on live targets before people decided it was a really shitty idea? Yeah, odds are good the same thing would happen with AI weapons if they ever started going on a murder spree.

So please, calm down, the scary robots won't be murdering you in your bed any time soon. :P
 
They will if I have anything to do about it. >:[
 
So please, calm down, the scary robots won't be murdering you in your bed any time soon. :P

[Image: dragon's teeth fortifications]


CHRIP CHRIP BRRRRRRRRRRRFFFT


My two cents: I'd rather they not look into autonomous weapon systems that don't require human input to identify the target and pull the trigger. Wars are already brutal affairs with the human element, but remove it and it feels like people will never understand the horrors of war, and so will never look for alternative solutions.

I'm not at all concerned with AI armies turning on their human masters. I think a lot of the science fiction that depicts that scenario is putting human values into machines; there's nothing saying that a truly sentient AI would even entertain the thought of killing humans because they don't have emotions like we do. For all we know, the AI would see problems that could be fixed that we're too indecisive or emotional about and just go about doing it without our input. There's no reason to automatically assume a sentient AI would deem humanity unworthy of living and be innately violent.
 
This is a great piece of scare tactics you've got going here. I like that.

Now, unlike these folk who easily write off anything that doesn't fit into their logical little world, I would like to hear more about your theory. If you could just show me the evidence that you have pieced together, so these fools could see how your understanding works, that would be perfect.

I would also like an explanation of your reasoning for believing that the AI would turn on us, aside from fiction, because let me tell you, ever since I saw those Terminator movies I have been deathly afraid of metal overlords taking over.

And lastly, what do you think we need to do to prepare? Have you considered trying to work from within the military-industrial complex? Perhaps as an independent contractor, or something of the like. I just feel like you're the man to lead us away from this near-inevitable ending and into a more robot-friendly/free world.

I am eager to hear back on this. :)
 
My two cents: I'd rather they not look into autonomous weapon systems that don't require human input to identify the target and pull the trigger. […]

Violent dissenters against the peacefully constituted established order are always going to exist. Why waste human lives tackling them, when we can simply have the problem solved through a non-human intermediary?
 
Because the non-human intermediary lacks the judgement of a person. One of the training exercises I did in the army was a scenario where my platoon is stationed in a foreign country where protestors (or celebrants of some sort) carry weapons and periodically shoot them into the air, a common enough cultural practice that while the threat of violence is real, it doesn't mean they intend harm. A human has to make a judgement call on whether the armed people are a threat or not. Shoot a guy for firing into the air and you risk the entire crowd turning on you for an unprovoked attack; you might have accidentally triggered a massacre or a war. Fail to notice that one of those guys was about to shoot at you, and you might die. If you have an AI that's programmed to recognize weapons and their use, I have no confidence it will ever be sophisticated enough to discern intent.
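
To illustrate that intent problem with a toy sketch (invented fields, invented rules, nothing from any real targeting system): a naive "armed and firing means threat" classifier engages the celebrating crowd exactly as readily as an actual ambush, and even one added contextual cue only scratches the surface of what a human soldier actually weighs.

Code:
# Naive rule: weapon present + weapon fired = threat.
def naive_threat_rule(event):
    return event["armed"] and event["fired"]

celebration = {"armed": True, "fired": True, "aimed_at_troops": False}
ambush      = {"armed": True, "fired": True, "aimed_at_troops": True}

print(naive_threat_rule(celebration))  # True -> engages: a triggered massacre
print(naive_threat_rule(ambush))       # True -> engages: happens to be right

# One extra cue changes the call, but real intent involves many more
# signals (crowd mood, local custom, aim, timing) that are hard to sense,
# let alone to program.
def contextual_rule(event):
    return event["armed"] and event["fired"] and event["aimed_at_troops"]

print(contextual_rule(celebration))  # False -> holds fire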
 
But guys...

[embedded video: Terminator]

It's just like what the movie said.
 
"Excuse me sir?" The sentient Howitzer spoke in soft tones to his handler.
"Um.." The poor, mortal being responded.
"I like You fleshbag. That's why I fired upon your house a hour ago. YOur family are just charcoal and bloody smears in a crater now. This way we can be together forever." The several tons heavy and senstient cannon spoke.
 
Okaaay... why do I feel I am being mocked? This kind of shit happened the last time I spoke out about one of my theories.
 
If you can't take friendly joking around, don't stay on the internet. Jorick has already responded to you in a thought-out manner. Some of us are just playing around with the idea of intelligent weapons and their hilarious consequences.
 
what he said
 
tl;dr

EVERYTHING IS AWESOME.

Especially robots that will murdurlize us in the future! Or robots programmed with basic algorithms with hunt/kill loops until humans are gone. Or something.

Wait, nevermind. Optimus Prime.
 
While artificial intelligence may be feasible, by the time we create it, I doubt it will be the biggest problem we have to worry about. We can't simply worry about the end-goal; we must look at the consequences of the discoveries needed to get there.

[Warning, science-based deterministic reasoning on the nature of free will ahead.]

When people say artificial intelligence, they mean self-governing thought, right? Such as that displayed by humans? Essentially, the human body is just a vastly complicated machine formed out of biological constructs, and the brain is no exception. Every act we take depends on the specific layout of our brain, which, though it adapts in reaction to our environment, is still governed by the processes that allow it to adapt. Therefore, I'll assume true AI just needs to be as advanced as a human mind.

For that to be possible, we will have to recreate a brain-like system in the robot, requiring immense knowledge of the brain. And at that point, why build AI robots? We could simply alter human minds to be entirely subservient, or to remove their morality, or whatever. Even before we get to the point of developing fully self-governed robots, capable of countering direct orders, we'll be able to wipe out the individuality of each of us.

So which would you consider worse: creating robots with the semblance of free will that could potentially wipe us out, or creating "perfect humans" who lose even the semblance of free will?
 
AIs aren't an issue. It's another Hollywood monster myth. It's what we do with AIs that's a potential issue. Humanity has a tendency to abuse any new invention, often to detrimental effect. Besides, we already have AI-driven weapons, like sniper rifles that aim for you and limited AIs in UAVs and Predator drones. The issue with the idea of an AI designed to exceed its creator's imposed limitations is that that kind of thing would have to be programmed behaviour in the first place, and it would never pass basic tests, because the ultimate aim of an AI is to be a controlled, disposable entity that mimics certain features of human intelligence. We already have an AI that can compose piano music that sounds like a human made it, and we have AIs in factories mass-producing goods by managing several robotic components at rates we can't keep up with. We have AIs capable of beating chess masters (Deep Blue) and AIs capable of winning Jeopardy (Watson). Google wrote an AI that figured out human speech by learning from humans talking to it, and it can hold its own in conversations so long as they're kept fairly simple.

We've already made AIs capable of wiping us out with ease: they think faster, smarter, better, and with the utmost efficiency. So why don't they?

Ambition. Emotions. You can program an AI to mimic happiness, pride, anger, et cetera, but how do you program an emotion? You can program the symptoms of an emotion; you can program an AI to mimic the production of serotonin and the other brain chemicals that induce emotional states via outside stimulus. But actually programming a set of emotions that can grow from experience is immensely difficult. You'd have to program the AI with the capacity to want to destroy its human masters before you'd get an AI weapon out of control, and why would you do that? An AI weapon's function is to execute the commands of its host, end of command line. There'd be no purpose in programming a combat AI with the intricate complexities of human emotion and ambition.
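
A minimal sketch of that distinction, with every name invented for illustration: the "emotion" below is just a number nudged around by stimuli that drives a canned behaviour. That's programmed symptoms of an emotion, not ambition that could ever grow into wanting anything.

Code:
class MockEmotion:
    def __init__(self):
        self.anger = 0.0  # stands in for a serotonin-like internal state

    def stimulus(self, provocation):
        # Symptoms respond to outside stimulus; the old level decays as new
        # provocations arrive, capped at a maximum intensity.
        self.anger = min(1.0, self.anger * 0.9 + provocation)

    def behaviour(self):
        return "hostile posture" if self.anger > 0.7 else "calm"

unit = MockEmotion()
for hit in (0.3, 0.4, 0.5):
    unit.stimulus(hit)
print(unit.behaviour())  # "hostile posture": mimicry of anger, not a motive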

As for the ethical qualms about dehumanizing war even further than it already is: would you make the same argument about bombers? They never see the targets they hit or the collateral damage they create. Would you make the same argument about tanks? How about artillery? What makes it so different that a human pulls the trigger rather than a human ordering the robot to pull the trigger? The robot doesn't have a consciousness: it's a tool, a further precaution to avoid human casualties on side A while fighting side B.

Too much superstition. Too much fear mongering. Tsk.
 