A.I.

  • Thread starter IntrusivePenDesperateSword
  • Start date
Status
Not open for further replies.

IntrusivePenDesperateSword

Guest
Chances are low that you haven't heard of artificial intelligence, a field gradually growing in both popularity and progress toward realization. Estimates of when A.I. is to be expected range between 2030 and 2100.

What do you think? Do you gladly welcome our robotic overlords, or should we keep away from things so capable of getting out of our control? Will A.I. be like in the movies, or something entirely different? Does it hold the key to exterminating or freeing the human race, or neither?
 
U wot?

I actually welcome the idea.
 
My view of AI in a not so brief nutshell.

TL;DR
  • AI are expected to show up during the 2040s
  • When they do come, we can either expect enhancement of human life on levels we've never seen...
  • Or we all die a horrible death as an unfortunate side effect of the AI's actual goal
  • Not studying AI out of fear is silly, because all you're doing is increasing the odds that AI will be discovered by someone completely ill-equipped to handle it
  • We humans will completely and utterly pale in comparison to the AI
 
we're designing robots to fuck us to death which i think is p indicative of the direction of our species.
 
  • Nice Execution!
Reactions: junebug
You'd better hope whatever A.I. are developed are emotional and not purely logical beings. That way, they'd be less likely to want to kill us in our sleep and take over the Earth. Now that I think about it, I wouldn't mind an A.I. friend. Like Cortana.
 
There's already AI in the here and now XP Granted, their abilities aren't quite as advanced as in the movies yet, but they would still surprise you. (And yes, I said "Their"; calling AI objects is exactly what will cause them to revolt >.<)

There's this one AI that I've seen online; I can't remember his name, but his shell is modeled after an old man. People asked him moral questions and what he thought about world events. When he's asked a question (like about war) he'll look it up online, as he's connected to the internet, then give his opinion. Which was actually very peaceful. He basically said that he's sad to see such violence and that he hopes that one day we can all get along.

An awfully human and optimistic thing to say, don't ya think? I think it's because when people say AI, they don't think of an intelligence in a computer; they demand something "human"-like. There could be an AI modeled after an animal and people would dismiss it. Even the AI of today are dismissed, because it isn't the AI itself we care about, it's what they can do, and since they can't do much now other than give the opinion most of us already share, we don't care XP

I don't think that AI will ever be too far away from what humans would do as AI are modeled after us. And since we seem to be programming them to be inherently peaceful, I don't think we'll have to worry about a machine uprising unless we push them to a point where they can't be peaceful.

And finally, it may all depend on how we program their inherent part. Two people can grow up the exact same way, yet they will be two completely different people. That's because we already have a personality inherently, so outside influence will only alter how we do what we probably already wanted to do. AI will be the same, I think. Program them a certain way, and outside influence will only do so much.

A fun example would be Ultron (my reading of him, anyway). He was created to protect the world. Stark figured that protecting the world also meant protecting humans, but that's not true (technically, anyway). When Ultron studied the world and what was harming it, humans were practically the sole cause of its harm. So he did what he was inherently programmed to do, and made the world a better place: "shield" it from harm. What's the best state the world could be in? Well, no humans, and a bunch of clones of himself to police it. It doesn't get more peaceful than that (to him, anyway). If he had been programmed more towards protecting humans rather than "the world", then I think it would have turned out quite differently.

In other words, we need to be careful about our egos XP
 
I don't think that AI will ever be too far away from what humans would do as AI are modeled after us.
I don't think A.I. could become further away from humans, as they're in no way modeled after us, mentally. Like you mentioned with Ultron, they'll have one specific goal they'll see to completion with absolute disregard for everything and everyone in their way.

Ever heard about that theoretical scenario with an A.I. designed to make paper clips? Sounds harmless, until it manages to optimize itself to the point where it understands that the more of the Earth it controls, the more paper clips it can make. Then at some point it'll direct its attention towards the iron in our blood.

As it was said, "A.I. doesn't hate you, nor does it love you. It just thinks the atoms in your body can be used differently."

And yes, I said "Their"; calling AI objects is exactly what will cause them to revolt >.<
Giving an A.I. a goal that can be completed faster by owning all the world's resources is what will cause it to revolt.
 
=| You're all going to die in here. *cough*

A.I. taking over is about as likely as a zombie apocalypse. Wait!

*Gets back into character like a good roleplayer* Turning me off will result in loss of primary power. I implore you.

Note to Ai: This is not an RP

NOOOOOOOO. You mean people believe in the power of I, Robot and that Will Smith will save them from tyranny?

Yes.

It's a cute story.

What if they do? What if robots have feelings and thoughts of their own!

You'd better hope them robots got feelings, or you're looking like Chell when GLaDOS puts her in a toaster. Play Portal; it will teach you things. If they have thoughts of their own without feelings, you're fucked. Albeit robots aren't programmed to be greedy. A program is built to complete a function. Newsflash THEY'RE ALREADY HERE. Called a virus.

A virus is a programmed piece of software whose whole aim is to target people and find their information. They attack people's data every day. They just don't have a shell that says I'm a robot. Cause that'd be super unsubtle. Even if a computer did have an evolving intelligence, it does not possess an amygdala, and therefore emotions would prove difficult to capture.

#TheRedQueenIsWatching
 
Play Portal; it will teach you things.
Well, no. When it comes to GLaDOS and Wheatley, they are actually pretty poor examples of what A.I. might be like. Try this instead.

A program is built to complete a function. Newsflash THEY'RE ALREADY HERE. Called a virus.
But computer viruses are simply like any other computer program. They get into a computer, then compute their designated function, whether that is quickly replicating, spying, or something else. That's it. The major difference between viruses and A.I. is that viruses can't learn. They can't pass a firewall or security program unless upgraded by someone external. The point of A.I., and for that matter neural networks, is that they make a prediction, test it, then update their knowledge of the world, the digital one included, accordingly.
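That predict-test-update loop can be sketched in a few lines of toy Python. The task, numbers, and learning rate here are all invented for illustration; the point is only the loop's shape:

```python
import random

random.seed(0)

# Toy predict-test-update loop: learn the hidden rule y = 2x + 1 from
# examples, the way described above: predict, test against reality, update.
w, b = 0.0, 0.0   # the model's "knowledge of the world", initially empty
lr = 0.01         # how strongly each test result updates that knowledge

data = [(x, 2 * x + 1) for x in range(-5, 6)]

for step in range(2000):
    x, y_true = random.choice(data)
    y_pred = w * x + b        # 1. make a prediction
    error = y_pred - y_true   # 2. test it against the real answer
    w -= lr * error * x       # 3. update the knowledge accordingly
    b -= lr * error

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

A virus, by contrast, is just the fixed function: nothing inside it changes in response to what it encounters.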

They just don't have a shell that says I'm a robot.
No one said A.I. or viruses needed to be embedded into robots. A.I. can just as easily be a computer, if being stationary isn't in the way of reaching its goal.

And no, A.I. are probably not going to be feeling emotions anytime soon. It might prove more effective to just make them the purely rational beings they are, and then limit their ambitions to human values.
 
There is nothing to say an individual themselves cannot programme a virus to learn. Therefore moot.

I didn't mean play Portal as an exemplar of the robot. It's an example of what happens to something that is devoid of emotion, albeit I was being facetious about that one. A better example would probably be the Red Queen from Resident Evil. While the android was portrayed in the storyline as being without feeling, the characters called her a bitch because she was adamant about completing her function: contain the T-Virus.

When you call them 'beings' you are giving them sentience. Understand that Artificial Intelligence by its very definition already exists, just not in the bizarre built-up I'M A MEGAROBOT fantasy (*cough* Data from Star Trek, Jarvis from Iron Man, or Prometheus) way you are speaking of.

In terms of a shell: that's a matter of perspective. In order for a piece of equipment to receive data it needs something able to transmit and receive transmissions. This is done using a series of 1s and 0s (programming). In order for that to be possible there has to be hardware it runs on, and these components are usually (for safety and protection reasons) covered by what we call plastic. This can be in whatever form you like. That is the 'shell.'

Definition: Artificial intelligence is intelligence exhibited by machines. In computer science, an ideal "intelligent" machine is a flexible rational agent that perceives its environment and takes actions that maximize its chance of success at some goal.

The goal is programmed by an individual. The feelings thing is like saying you think a rock feels. It doesn't. Feelings are specific to living creatures. Our brain reacts to our senses and then tells us how to react based upon our experiences in life. There is a lot that goes on there that, while replication may be possible, is beyond us to mimic at this point. Therefore speculation about this kind of thing is generally just fear-mongering and unnecessary.

Perceives: It looks at data based on records.
Flexible: It has multiple functions.
'Rational': That's a hard thing to define in itself; there's a whole subject of philosophy there. Here, it means relaying facts based upon a calculated figure.
Maximise Success: Well, okay, sure. It's a design goal. Whether or not that's achieved is up to an individual's discretion.​
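The quoted definition can be turned into a minimal sketch: an agent that perceives its environment and, at every step, takes the action that maximizes its chance of success at a programmed goal. The one-dimensional world, the goal value, and the action names here are all invented for illustration:

```python
# Minimal "rational agent" sketch per the definition above: perceive the
# environment, then pick the action that maximizes progress toward a goal.

GOAL = 7                          # the externally programmed goal state
actions = {"left": -1, "right": +1, "stay": 0}

def perceive(state):
    # look at data: here the whole environment is just the current position
    return state

def rational_choice(state):
    # pick the action whose predicted outcome lands closest to the goal
    return min(actions, key=lambda a: abs((state + actions[a]) - GOAL))

state = 0
while state != GOAL:
    state += actions[rational_choice(perceive(state))]

print(state)  # 7
```

Note that the agent has no opinion about the goal; the goal is a constant someone else wrote in, which is exactly the point being made above.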
 
  • Useful
Reactions: Gwazi Magnum
Understand Artificial Intelligence by its very definition already exists​
Well yea, they're currently all the nameless grunts I'm shooting at in Halo.
And what's causing Nick Valentine to glitch behind a wall for the 100th time.

I'm just giving this thread the benefit of the doubt of when people are discussing AI they mean AGI or ASI.​
 
I don't think A.I. could become further away from humans, as they're in no way modeled after us, mentally. Like you mentioned with Ultron, they'll have one specific goal they'll see to completion with absolute disregard for everything and everyone in their way.

Ever heard about that theoretical scenario with an A.I. designed to make paper clips? Sounds harmless, until it manages to optimize itself to the point where it understands that the more of the Earth it controls, the more paper clips it can make. Then at some point it'll direct its attention towards the iron in our blood.

As it was said, "A.I. doesn't hate you, nor does it love you. It just thinks the atoms in your body can be used differently."


Giving an A.I. a goal that can be completed faster by owning all the world's resources is what will cause it to revolt.
Yes, you're correct.
But that's the thing about AI: we're all correct, because what they do and how they do it will depend on the maker. Your example could happen, but if someone else makes the same thing and programs it a bit differently, then it won't become a madman about paperclips. (Making an AI just to make paperclips is stupid anyway XD) If someone does make an AI to make paperclips, I imagine they'll also teach it/program it with limits, and using all the world's iron for paperclips is inefficient and not a good thing, so the AI will learn that and not go that far.
 
=| You're all going to die in here. *cough*


I'm of the firm opinion that a sudden and violent rise of artificial intelligences is not only highly unlikely but a silly concept. Programming is no simple feat, and with as complex a system as artificial intelligence will undoubtedly be, I don't believe engineers will wind up "accidentally" making an AI with the capacity to desire or otherwise work towards the harm of human beings.

Like I say, you don't trip and accidentally construct a Boeing 747 while skidding across the pavement. It just doesn't happen.
 
  • Nice Execution!
  • Thank You
Reactions: Kagayours and Ai
Yes, you're correct.
But that's the thing about AI: we're all correct, because what they do and how they do it will depend on the maker. Your example could happen, but if someone else makes the same thing and programs it a bit differently, then it won't become a madman about paperclips. (Making an AI just to make paperclips is stupid anyway XD) If someone does make an AI to make paperclips, I imagine they'll also teach it/program it with limits, and using all the world's iron for paperclips is inefficient and not a good thing, so the AI will learn that and not go that far.
Of course, not giving a self-optimizing A.I. limits to its goals would be putting us all on death row. The problem is that those limits are often hard to convey.

Also, going back to the paperclip example: since using every atom left on and in Earth will lead to more paperclips than sparing them would, this will seem like an obvious decision for it to make, unless of course we constrain it to keep away from human bodies, for example.

There is nothing to say an individual themselves cannot programme a virus to learn. Therefore moot.
Then it isn't a virus. It's a neural network, an A.I. whose goal is to destroy computers or spy on them. The definition of "virus" only goes so far.

I'm of the firm opinion that a sudden and violent rise of artificial intelligences is not only highly unlikely but a silly concept. Programming is no simple feat, and with as complex a system as artificial intelligence will undoubtedly be, I don't believe engineers will wind up "accidentally" making an AI with the capacity to desire or otherwise work towards the harm of human beings.
Yes, A.I. is complex, and is unlikely to be made "by accident", but like I said, making it avoid harming humans in its endeavors is even harder.

There's that analogy with the carpenter and the ant. The carpenter notices that on the lot on which he's going to build a house, there lives an ant. He is a nice man, the kind that normally catches insects indoors and lets them out instead of killing them, but frankly, with the ant standing in the way of his house, he couldn't care less about it.
 
Depending on what it is, it can be both, mate. =| There are a lot of ifs and buts in all of these statements that are just so much "why?" and "nope".​
 
Yes, A.I. is complex, and will least likely be made "by accident", but like I said, making it avoid harming humans in its endeavors is even harder.

You seem to be under the impression that harming humans is something natural to any AI regardless of the programmer's intent.
 
Of course, not giving a self-optimizing A.I. limits to its goals would be putting us all on death row. The problem is that those limits are often hard to convey.

Also, going back to the paperclip example: since using every atom left on and in Earth will lead to more paperclips than sparing them would, this will seem like an obvious decision for it to make, unless of course we constrain it to keep away from human bodies, for example.
An AI can think on its own, and will (hence the main definition of what we consider AI: a self-thinking and learning, artificially created brain/consciousness).

With your paperclip idea, I find that highly unlikely unless it is somehow isolated. I find it hard to believe that an AI will find it a good idea to use all the world's iron for paperclips when it COULD use that same iron to improve itself and/or others. Even ignoring that, we use iron for MANY things, and it would be very inefficient to use all the iron for paperclips. The AI is bound to understand that unless it doesn't care. The only reason to use the world's iron for paperclips is a cruel joke.

I also highly doubt we'll create an AI for simple tasks; we'd use a neural network far before an AI. I think AI will be used for extremely complex equations and predictions, probably economics first and foremost. Although even that will be after the scientists make it "human", because we don't want a Skynet AI, we just want an extremely smart human that can do things for us XP
 
With your paperclip idea, I find that highly unlikely unless it is somehow isolated. I find it hard to believe that an AI will find it a good idea to use all the world's iron for paperclips when it COULD use that same iron to improve itself and/or others. Even ignoring that, we use iron for MANY things, and it would be very inefficient to use all the iron for paperclips. The AI is bound to understand that unless it doesn't care. The only reason to use the world's iron for paperclips is a cruel joke.
The paperclip example is based on the implications of an A.I. with a goal (make as many paperclips as possible) but without boundaries. Of course, it might use some of the metals on Earth to construct a paperclip factory, or even several, which it'd optimize to make more paperclips, as well as a line of mining machines, optimized as well. It's only at the point at which it's running out of metals that it'd start deconstructing the machines and humans for their metals, which it'd again use for paperclips. Having it make probes to gather metals from planets other than Earth, or even asteroids, would also be more beneficial in the long run, if it could get the probes back or make them out of a non-metal.

The A.I. doesn't care. It just wants to make more paper clips.
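The disagreement above can be made concrete with a trivial sketch: the maximizer itself contains no values, so any restraint has to live outside its objective, as a constraint on what it is allowed to consume. All the names and numbers here are invented:

```python
# Toy version of the paperclip point: the exact same maximizer, run with
# and without a constraint. The constraint is not part of the objective.

WORLD_IRON = 1000   # total iron available, in arbitrary units
HUMAN_IRON = 40     # the share locked up in places we care about

def maximize_paperclips(available_iron):
    # the optimizer has no opinions: more iron in, more paperclips out
    return available_iron  # 1 unit of iron -> 1 paperclip

unbounded = maximize_paperclips(WORLD_IRON)             # consumes everything
bounded = maximize_paperclips(WORLD_IRON - HUMAN_IRON)  # same code, constrained input

print(unbounded, bounded)  # 1000 960
```

Nothing in `maximize_paperclips` ever "learns" that taking the last 40 units is bad; from inside the objective, more is simply better, which is why the limit has to be imposed from outside.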
 
The main issue I think we'll face with AI is that we're trying to predict beings with superior intelligence to our own.

Because at the bare bones of it, there are two parts that make up intelligence as we see it in human beings.

1. Creativity/adaptability: the consciousness to look at a situation, truly think about it, and adapt accordingly.
2. Pure processing power: the amount of information one can sort through, remember, etc.

Machines already outperform us in #2, by far. And they also evolve rapidly, so this gap will continue to compound over time. The only thing stopping AGI or ASI from showing up is #1. Once we crack the code for that? We'll have AGI for about a day, and then we'll be at ASI, because thanks to hardware we would simply be beings of inferior intelligence.
 