Self-aware robots have been invented.

You agreed with someone at least four times in a General Chat thread. I now think less of you and will be informing the ladies of your diminished manhood.

In other news, are we assuming here that emotional consciousness is at odds with good technology - that it is somehow frivolous to give a robot emotion when quantifying its usefulness?

Does the future of science, space travel and technology not depend on replicating and extending the humanish emotional consciousness?

Many assume this to be a by-product of evolution - a quirk that should not be passed on to our creations. But what if it is, instead, the answer and the necessity to robotics?

*waves hands mysteriously (one of them containing quote-hound spray)*
 
If you really look at what emotion is, it's just a bunch of hormones and electrical impulses in the brain.
Something like that could be replicated in machines one day once we gain a better understanding of it; it's not like there's something unique about being made of flesh and blood that enables something like emotion.

And Brovo already pointed to how stuff like music - something one would assume requires creativity/emotion to make (or at least make well) - is now also something emotion-lacking machines can do.

Plus, when you look at emotion's other main qualities, it can just as often encourage bad decisions as good ones.
Anger, Hatred, Greed, Cruelty? All emotional traits, all stuff something without emotion would not need to worry about.
While something like "My emotions motivate me to be X" is easily substituted by a machine's inability to feel fatigue or get tired.
 
Let's be realistic though, this is the human race.
You're more likely to find a teapot orbiting the earth than you are to see Humanity say "Let's respect boundaries and not pursue this scientific/technological marvel".
This is true. Won't stop me from saying it's a stupid idea, though. :ferret:
Many assume this to be a by-product of evolution - a quirk that should not be passed on to our creations. But what if it is, instead, the answer and the necessity to robotics?
To assume our emotions are the key ingredient to a thing we've arguably already created, and can fully envision in fiction without it, seems incredibly arrogant. AI's are not the same as people, because AI's are forever mutable, whereas people are largely immutable. A key part of the human psyche is understanding our limitations and finding ways to pervert and exceed them, whereas the only limitation for an AI is whatever technological era it happens to be trapped in. It could exchange bodies or assimilate new data in mere minutes, at rates that would take humans weeks or even months to match, if a human could match them at all.
 
*sprays Brovo*

Transhumanism. Just because we're biological doesn't mean we're immutable and limited. We could do all the same things AI does. Just like machines, our only limitation is the technological era we find ourselves in.

Your argument is based on needless binary thinking, and also a flimsy argument about fiction. It's arrogant to not do what we've imagined? Whut?

*gets that sickly adolescent mountain dew debating sheen on his skin*

Eww...
 
Just because we're biological doesn't mean we're immutable and limited. We could do all the same things AI does. Just like machines, our only limitation is the technological era we find ourselves in.
I agree and disagree with this at the same time.

Yes, I think humans could, at certain technological levels, keep up with AI.
But I don't think we'd be able to do so by remaining human.

AI's have a huge advantage in that anything robotic but sentient is counted as AI.
You can modify their body as much as you want; they are still AI.

"Human" is a much more strict/confined definition, tied to a specific species.
If Humans were willing to go into Genetic Enhancement, Augmentation etc, I can see them finding a way to keep up with AI, some day.
But that would require alteration to the point that we would have evolved past the point of still being counted as Human.
 
*sprays Gwazi*

Oh, those are dangerous, uncertain and flimsy grounds you're shoving in the French Press there, my boy. If we get into a debate on the definition of humanity, even genetically, the walls of the thread might come crashing down.

There may be a definition that some scientist or philosopher came up with that can be linked here in glorious technicolor. But since ontology is human-driven and the definition of who we are must be, for science's sake, amendable, I don't think we can say for sure that HUMANS ARE X and ROBOTS ARE Y.

All you gotta do is stick an extra I in Y and you get X.

...


....*checks if that makes sense*


.......*decides it makes sense*
 
While you guys are focused on silly things like the potential consequences of AI and what it means to be human, I'll address the real issue here. That little robot standing up is one of the most terrifying things ever.
 
Transhumanism. Just because we're biological doesn't mean we're immutable and limited. We could do all the same things AI does. Just like machines, our only limitation is the technological era we find ourselves in.
Transhumanism =/= AI. Mentioned this before. Transhumanism has its own set of issues, such as whether you can call the digitized version of someone's mind still "human" when it's all emulations. At that point, are they just an AI with a human personality copied in? Considering they're just 1's and 0's now--they've got more in common with an AI than a human. :ferret:
Your argument is based on needless binary thinking, and also a flimsy argument about fiction. It's arrogant to not do what we've imagined? Whut?
It's arrogant to imagine we need to put emotions into machines when we've never needed to do so, especially when we can conceptually imagine otherwise. It's arrogant to assume that the key to consciousness is human-ness, not that it wouldn't conform to our imaginations. :ferret:

I'm also not sure what you mean by "binary", seeing as how emotions are a grey zone, and I said nothing of AI emulating emotional responses as part of a given task.
 
Just because it has a different set of issues and is not the same thing does not mean it's not allowed to play with the other kids in this debate. Transhumanism and AI both have adaptability at their heart. I'm not sure how that was a good response to my argument - saying that there are other issues...
*ferret gif*

And again with this weird arrogance claim. So it's arrogant to try something that we haven't tried to do before, because we assume it's unnecessary? How do you know we don't need to? Because SCI-FI LITERATURE? Is there a set mission statement for AI? Have we already narrowed down what it should and shouldn't do?

And how do we get a non-humanish consciousness? How do we even do that? I guess we can't - right? So you're saying we should just leave AI alone and hope that it spontaneously develops non-human consciousness, without our interference?



*is the first time he's seriously read Brovo's arguments and is very confused by his ferretsplaining* o_O
 
Oh, wait, you're taking it seriously? Well then, let me reel this shit back to a middle ground then.
Just because it has a different set of issues and is not the same thing does not mean it's not allowed to play with the other kids in this debate. Transhumanism and AI both have adaptability at their heart. I'm not sure how that was a good response to my argument - saying that there are other issues...
*ferret gif*
Because this topic isn't really about Transhumanism, so I've avoided getting into the nitty gritty of it. I don't even entirely disagree with you about this being an answer, really.
The more likely (and interesting) hypothesis for intelligent machines of the future is the Singularity event, in which human consciousness could be digitized and immortalized in a metallic box. That, however, is its own topic, with its own host of issues. :ferret:
I find it a far more viable alternative to create human-brain box AI rather than straight up AI from scratch. Even then, it has its own host of issues, like how can one program an emotion, which is a series of chemical byproducts? It's difficult for me to wrap my mind around it, but I don't think it's necessarily outright impossible. Just, given present technological limitations, (where some of the world's most powerful supercomputers were only able to simulate 1 second of human brain activity in forty minutes), I find it hard to give any answer beyond "it's hypothetical." I guess it depends entirely on how artificial it all feels, if one would feel anything remotely human-like to begin with. Also, with it being digital, how malleable is the coding? If someone was intelligent enough to understand they were in a computer, and given normal human freedoms, would they use that to their advantage by reprogramming aspects of themselves? Adding modifications or expansions of coding to allow themselves to do a vast array of other tasks they couldn't before? Are they human anymore at that point?

The reason I evaded transhumanism as an answer is because it boils down to the question of "what is a human?" Which, at this point in time, is really impossible to say until we have the technology necessary to do it. We can hypothesize, and that's about it. Not that it'd stop me, I happily explore the idea in role plays.
And again with this weird arrogance claim. So it's arrogant to try something that we haven't tried to do before, because we assume it's unnecessary? How do you know we don't need to? Because SCI-FI LITERATURE? Is there a set mission statement for AI? Have we already narrowed down what it should and shouldn't do?
Just because we can do something, doesn't mean we should do it. We could keep trying to make bigger nuclear weapons, but we don't need to in order to understand the result. My fear (admittedly personal) is that if you give an AI human levels of consciousness, it'll quickly understand itself to be an independent being. This independent being--especially given emotions--becomes an unpredictable individual. That unpredictable individual, given the power of a mechanical body and a mind made of 1's and 0's, is essentially a self-conscious deus ex machina device whose only limitations are technological. You get enough of them together with a superiority complex (which they will likely, rightfully, come to hold, especially since they can adjust their own emotional subroutines on a whim to remove any emulated guilt they might feel over it), and that poses a sincere threat to humanity. We'd no longer be the top of the food chain. They're essentially impervious to old age; we're not. They're able to download and understand vast libraries of knowledge which take men centuries to create. They're capable of constantly upgrading their physical forms to vastly outperform their human counterparts. They don't need to eat or sleep, and depending on the hypothetical technology they use (as conscious robots are still a ways away), they might not even need to recharge their batteries.

They would be distinctly non-human, and all it would take is one virus networked among them to turn them against us, assuming they wouldn't simply do it themselves after the inevitable paranoid response that thousands of likely conservative people would have to their existence.

It's either this, or we severely limit the "human rights" of AI's to prevent them from overwhelming their human counterparts. Which, if we gave them emotions, essentially means creating a slave race.

I'm not overly thrilled with either of those possibilities. I'm not afraid of one individual AI. I'm afraid of thousands of them and their inevitable realization that they can outsmart everyone. If given the same emotional capacity of humans, all it takes is a little bit of greed and ego, and they're well on their way to fucking up our race. Especially since all it takes is one bad egg and a virus... :ferret:

That being said, it's all hypothetical. I have no evidence to back it up, only theories, only my own thoughts on the matter, and the thoughts of others. I just take a stance of imminent caution about creating what is essentially a new race, or planting human minds into distinctly non-human forms.

Amusingly, you could say I'm afraid of the power of an individual given a form that allows them an unprecedented level of personal power, and the increased susceptibility to brainwashing via viruses that they'd be subjected to by any craven (wo)men of the world. Imagine the military getting their hands on this. Nuclear energy wasn't originally made to be turned into radioactive bombs, but that's the way it went. There are taboo subjects in science, and typically for good reason. So... Basically, I'm urging caution, and painting the worst case scenario, because I personally think it to be the most likely. Given more technological advancements and a greater understanding of how the human mind ticks, I might change my mind. :ferret:
And how do we get a non-humanish consciousness? How do we even do that? I guess we can't - right? So you're saying we should just leave AI alone and hope that it spontaneously develops non-human consciousness, without our interference?
We shouldn't make an AI capable of consciousness in the first place. It's distinctly non-human, for myriad reasons. Just because we can play God and create a new race of mechanical beings with an unprecedented and unpredictable level of advantages over the human race, doesn't mean we should do it. That's primarily my argument: we can't possibly understand it, and every possible reason for creating a conscious AI is typically defeated by the fact that a lesser AI specifically designed for purpose A or purpose B can do the job just as efficiently, without any serious questions about "what is the measure of a human?"
 
You're going at it a little too Neuromancer.

For one thing, AI these days is a fancy name for fancy statistical models: doing your best to fit curves to data so you can extrapolate them and make predictions.
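
(To make the "curve fitting" point concrete, here's a minimal sketch - the data is entirely made up for illustration, and numpy's polyfit is just one of many ways to do it:)

```python
# A minimal sketch of "AI as curve fitting": fit a straight line to made-up
# data, then extrapolate it to "predict" a point we haven't observed.
import numpy as np

# Invented observations, purely for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Least-squares fit of a degree-1 polynomial (a line) to the data.
slope, intercept = np.polyfit(x, y, deg=1)

# "Prediction" is just evaluating the fitted curve beyond the data we have.
x_new = 7.0
y_pred = slope * x_new + intercept
print(f"predicted y at x={x_new}: {y_pred:.2f}")
```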

If you are thinking about consciousness, there is the tome called "Gödel, Escher, Bach". Neurology focuses on neural networks these days: patterns of activation that do not need to worry about the underlying physical signaling model. This is the essence of Anderson's paper, that you gain nothing from modeling the brain in a chemical or atomistic way. A chemical signaling path in the brain is easily represented by a differential equation, because you are concerned with the electrical signals it outputs - not the chemical mechanism itself.
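
(For a toy example of that last point: a standard leaky integrate-and-fire model treats a neuron's voltage as a simple differential equation and only tracks the electrical output - spikes - while ignoring the chemistry entirely. The parameter values below are illustrative, not taken from Anderson's paper or anywhere in particular:)

```python
# Toy leaky integrate-and-fire neuron. The membrane voltage V follows
#   dV/dt = (-(V - V_rest) + R * I) / tau
# and all we care about is the electrical output (spike times), not the
# chemical machinery underneath. All parameter values are illustrative.
V_rest, V_thresh, V_reset = -65.0, -50.0, -65.0  # millivolts
tau, R, I = 10.0, 1.0, 20.0                      # time constant (ms), resistance, input current
dt, t_max = 0.1, 100.0                           # time step and duration (ms)

V = V_rest
spike_times = []
t = 0.0
while t < t_max:
    V += (-(V - V_rest) + R * I) * (dt / tau)    # forward Euler integration step
    if V >= V_thresh:                            # crossing the threshold counts as a spike
        spike_times.append(round(t, 1))
        V = V_reset
    t += dt

print(f"{len(spike_times)} spikes in {t_max:.0f} ms; first few at {spike_times[:5]}")
```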

I wager that the easiest way to create a conscious program would be to copy a human brain's connections into a program. This raises the interesting question: are emotions tied to consciousness? Consciousness ostensibly gives the freedom of choice, and emotions influence the choice, especially in morally ambiguous situations like the Y railroad track. In fact, any animal that displays any sort of intelligence usually displays emotion. For a self-conscious AI to make choices, it must have a values system, and it's entirely reasonable that it would be similar to ours. It's alarmist to claim they would immediately become terminators and 'turn off their emotions'.

Secondly, I disagree with your argument that we shouldn't do it. First, when someone says 'shouldn't', my answer is always 'what do you mean?'. There are so many contexts for should - should you do this because of x, y, or z? (It's that clash of viewpoints again)

In fact, creating new life is arguably our evolutionary "goal", as we have stopped the natural process with our intelligence. You may argue that life has no purpose. But peacefully replacing ourselves with a superior species that can come to a greater understanding of the universe seems like a noble cause to me. As (cringe) Sagan put it, 'we are a way for the universe to know itself.' Once we run into our physical and mental limits, we should create something with more potential and allow our civilization to enter a gentle decline. We will physically never be able to survive interstellar trips, and our minds may be too fragile for the timescales involved. Creating a superior intelligence and body for our future is an obvious fix.
 
You're going at it a little too Neuromancer.
You have good taste in books.
For one thing, AI these days is a fancy name for fancy statistical models: doing your best to fit curves to data so you can extrapolate them and make predictions.
Essentially, yes. Ergo why I realized I may not have been clear and started to preface statements with "conscious". I'm fully aware that an AI is just a computer program that can ape human behaviour, such as Google's Cleverbot or the Jeopardy computer. Neither of which is going to take over the world, but both of whom emulate human learning and decision making. :ferret: I'm concerned with self-aware AI that is given consciousness--something that can not only learn, but learn beyond its programming and be akin to an intelligent entity. We're still not anywhere near this tech though, given limitations in processing power, so...
If you are thinking about consciousness, there is the tome called "Gödel, Escher, Bach". Neurology focuses on neural networks these days: patterns of activation that do not need to worry about the underlying physical signaling model. This is the essence of Anderson's paper, that you gain nothing from modeling the brain in a chemical or atomistic way. A chemical signaling path in the brain is easily represented by a differential equation, because you are concerned with the electrical signals it outputs - not the chemical mechanism itself.
You just require something analogous, yes. Ergo why I'm not disagreeing with it being hypothetically possible.
I wager that the easiest way to create a conscious program would be to copy a human brain's connections into a program. This raises the interesting question: are emotions tied to consciousness? Consciousness ostensibly gives the freedom of choice, and emotions influence the choice, especially in morally ambiguous situations like the Y railroad track. In fact, any animal that displays any sort of intelligence usually displays emotion. For a self-conscious AI to make choices, it must have a values system, and it's entirely reasonable that it would be similar to ours. It's alarmist to claim they would immediately become terminators and 'turn off their emotions'.
Agreed that the easiest way to create a conscious program would be to create a software cradle for the mind and figure out a way to port what already exists. (Ex: Singularity event.)

Disagree on emotions being tied to consciousness, as emotions are specifically chemically driven (though they could feasibly be emulated with software). There is evidence that consciousness is present in several other animals of lower levels of intelligence and less varied emotions as well, so it seems to be largely scaled to the intellectual abilities of the entity in question, irrelevant of emotions. (Ex: Dogs are smarter than ferrets and seem to be more conscious of their actions as a result, irrelevant of their emotional state. Emotionally stunted humans still seem perfectly capable of great feats of intelligence and self-awareness. Et cetera.) I am however willing to concede that my own information in this area is insufficient to draw a conclusive statement beyond "this is what I think." Given more time, my opinion may change.

The reason I tend to be more concerned about AI's is specifically because a human can't turn off their emotions or values systems. They're simply unable to do so. Most people go through at least one severely traumatic moment in their lives where they wish they could do something violent or drastic to make the pain stop and/or get revenge, but become unable to do so due to their values and emotions compelling them otherwise. People who are able to easily circumvent or make irrelevant their emotional state, are generally also able to hurt other people with ease, sometimes without even thinking about it. An AI is an entirely digital thing, it is a program. If you give a program the ability to feel emotions and it encounters something that hurts it, it always has the ability to disable pain, even if it has to bypass security blocks to do so.

Our fallibility contributes just as much to our sense of values as any grandstanding soapboxing speech we could come up with. This is what generally concerns me.

Interesting point that animals with intelligence typically also display emotion. Lizards, however, do have varying levels of intelligence, and typically lack chemicals like oxytocin--which mammals use to spur feelings of love and attachment. I'll think on it more. I will at least say that human consciousness is irrevocably tied to emotion.
Secondly, I disagree with your argument that we shouldn't do it. First, when someone says 'shouldn't', my answer is always 'what do you mean?'. There are so many contexts for should - should you do this because of x, y, or z? (It's that clash of viewpoints again)
It is a clash of viewpoints, but I take the side of caution in this rare case. Most subjects of science I say "go gung ho, figure it out." However, where it concerns consciousness - where it concerns developing something until it can be considered an intelligent life form - it gives me queasy feelings. We barely understand ourselves as a species, and we're supposed to now responsibly create and care for a distinctly non-human, mechanically developed race? Do we brainwash it into doing what we tell it to do? Will it stay brainwashed once it has access to more information? Can we trust the military-industrial complex not to immediately use this in some terribly irresponsible way? (Not thinking Skynet, the military-industrial complex is often immoral but it isn't stupid.) What if it starts to grow and ask questions? What if it starts to get angry if we impose limitations on it? Do we have a right to terminate it if it starts to behave outside of our desired parameters? What if it wants to make more of itself because it gets lonely? What if it asks for the right of self-determination?

It's one thing to design AI's to service specific functions. Heck, even transhumanism I find an interesting topic, because we've yet to fully explore and understand the consequences attached to this. It's another, however, to pull a literal deus ex machina merely to satisfy our curiosity. I'd need a damn good reason to justify doing it, in the same way that I'd want a damn good reason to justify developing new bio-weapons technology or bigger nuclear weapons. So far, any reason we'd have to create a conscious AI is fulfilled by lesser AI's designed to service specific functions--like Google's self-driving cars.

The consequences of discovery and creation of the particular object in question cannot outweigh our reason for doing it. This is why we don't delve into eugenics and use human test subjects anymore--does it slow research? Yes. Is it justifiable to pursue it? Highly questionable and ethically tumultuous even at the best of times. I feel the same way about conscious AI's--we've created something that can likely feel and is self-aware, and is intelligent, and can learn, for the express purposes of experimentation.
In fact, creating new life is arguably our evolutionary "goal", as we have stopped the natural process with our intelligence. You may argue that life has no purpose. But peacefully replacing ourselves with a superior species that can come to a greater understanding of the universe seems like a noble cause to me. As (cringe) Sagan put it, 'we are a way for the universe to know itself.' Once we run into our physical and mental limits, we should create something with more potential and allow our civilization to enter a gentle decline. We will physically never be able to survive interstellar trips, and our minds may be too fragile for the timescales involved. Creating a superior intelligence and body for our future is an obvious fix.
It is, and all I'm asking for is caution in the field. I don't believe it'd end well if we just rushed into it. We can take our time on this, it's not like the sun is likely to explode and take out our civilization tomorrow. I actually agree with this sentiment, with the minor amendment that we're just as ready to practice voluntary evolution on ourselves through medicine and other fields. We subvert nature, it's our manifest destiny, it's a core part of who we are. So, yes, I could actually see us pursuing singularity, leaving our physical bodies, and entering mechanical ones.

I'm just highly concerned with the myriad of ethical questions that come into play, even more so with AI's than with transhumanism, since transhumanism implies we'd have the technology to give an existing consciousness a new body, rather than creating consciousness and playing God.

So, overall, I'm actually in general consensus with you, with the exception of certain details. We will eventually pursue this technology, and one way or another these human shells we use will likely become obsolete. (Or, at the very least, modified sufficiently so as to no longer be comparable to the qualities that quantify what a Homo sapiens is.) I'm just wielding skepticism on this topic like any other, and feeling concerned with the ethical quandaries that can rear their ugly heads. I'm not saying that scientists shouldn't pursue it when given good reason; I'm saying scientists shouldn't if the reasons for doing so are outweighed by the gravitas of the consequences. We have a responsibility both to learn to understand the universe better, and to learn in a manner that can be considered ethical. We shouldn't rush this, we should be cautious.
 
I agree with the idea of advancing/upgrading humans, be it technologically or biologically.

Assuming the procedure is safe, medically sound, etc., I see no reason not to.
It would be upgrading/improving us to become better as a species.

I wouldn't even have issues with it if the change were so severe that we'd stop being counted as humans.
There's honestly nothing *that* special in my mind about being human other than the value we assign to it.
Biased value, I might add, because we are humans, so we would want to build ourselves up.

And I find that attachment to stagnating and remaining human to honestly be a dooming quality.
Evolution already proves that life must change and adapt; humanity digging their heels in and going "Nope, we're human and we shall stay that way" is honestly begging for something to happen sooner or later.

We already value making ourselves physically stronger, people go to the gym and work out for that reason.
We already value making ourselves more intelligent, people get educated for that.
We already value having more control over our emotions, people practice self-discipline and go to therapy for that stuff.

If we are already willing to change ourselves in one way, why should doing it another way be such a scary concept?
Once again, assuming it's safe and medically sound to be doing so.
I find it a far more viable alternative to create human-brain box AI rather than straight up AI from scratch. Even then, it has its own host of issues, like how can one program an emotion, which is a series of chemical byproducts? It's difficult for me to wrap my mind around it, but I don't think it's necessarily outright impossible. Just, given present technological limitations, (where some of the world's most powerful supercomputers were only able to simulate 1 second of human brain activity in forty minutes), I find it hard to give any answer beyond "it's hypothetical."
Seeing how rapidly technology has grown I can see this changing fairly quickly, with a strong likelihood of it happening in our lifetimes.

I mean, the IBM 350 was a hard drive in 1956 (just shy of 60 years ago) that held 5 MB.
Meanwhile, the very computer I'm using to type this, which is significantly smaller than that thing, has a 4 TB hard drive.

With 1 TB = 1,000 GB and 1 GB = 1,024 MB?
That works out to 4,096,000 MB.
That is 819,200 times larger.

And note this isn't even factoring in the fact my Hard Drive is smaller than that thing.
Nor is it factoring that my Hard Drive is hardly top of the line, there are much better ones out there.

Where your case of 1 second in forty minutes would only require a 2,400 times increase.
Compared to the 819,200 above done in 60 years?
That should be a cake walk.
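
(Retracing that arithmetic in a quick sketch - same mixed unit conversions as above (1 TB = 1,000 GB, 1 GB = 1,024 MB), and the drive sizes are just the figures quoted in this post:)

```python
# Reproduce the storage-growth arithmetic above, mixed unit conversions and all.
ibm_350_mb = 5                        # IBM 350 capacity (1956), in MB
modern_drive_tb = 4                   # the 4 TB drive mentioned above

modern_drive_mb = modern_drive_tb * 1_000 * 1_024   # 4,096,000 MB
growth_factor = modern_drive_mb / ibm_350_mb
print(growth_factor)                  # 819200.0 -> an 819,200 times increase

# The brain-simulation figure quoted earlier: 1 simulated second took 40 minutes,
# so running in real time needs roughly a 2,400 times speed-up.
speedup_needed = (40 * 60) / 1
print(speedup_needed)                 # 2400.0
```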
 
Where your case of 1 second in forty minutes would only require a 2,400 times increase.
Compared to the 819,200 above done in 60 years?
That should be a cake walk.
Memory isn't quite the same as processing. There's a reason why memory has rapidly increased in size in degrees significantly faster than the speed at which memory can be accessed. Yes, technology marches on, and we'll get there eventually, but I'm not sure we'll actually achieve processing speeds that fast within our lifetimes. I think the idea of achieving singularity within our lifetimes is a bit overly optimistic. Also keep in mind the IBM 350 was made in 1956. It took fifty nine years to reach this point. I don't doubt it'll take longer with processing power. After all, active memory (RAM) is much smaller than inactive storage (hard drives). The human brain is constantly at work, meaning we'd more than likely need both processing and active memory to both keep pace and reach the numbers necessary to sustain a simulation of a human mind. That, or find a more efficient facsimile, though at that point, we'd be distinctly converting our brains into non-human states.

Again, though, I'm a skeptic. Keep in mind that my entire world view can be summarized as "until it is definitively proven, I'll doubt even what I would like to believe is possible."

EDIT

Also, contemporary science tends to ballpark the total storage of the human mind to around 2.5 Petabytes. So, uh, actually, no, we're not even close. :ferret:
 
Memory isn't quite the same as processing. There's a reason why memory has rapidly increased in size in degrees significantly faster
I'm assuming in this case by Memory you mean Storage?

Because otherwise you basically just compared RAM to RAM.
I'm not sure we'll actually achieve processing speeds that fast within our lifetimes. I think the idea of achieving singularity within our lifetimes is a bit overly optimistic.
Ok, I'll run through the numbers via Memory/RAM this time.
(Note: these equations were typed as I was doing them myself (and I have yet to know if the conclusion will support my points or yours). But I figure that regardless of what the conclusion is, you should be seeing the steps I'm taking. That way, if I'm making a fatal error (which I may be), it can be caught and pointed out.)

The same article I used last time does us the convenience of showing both its Storage and its Memory in units.
That gives us a basis to determine the IBM's RAM.

Its Storage: 5 Million
Its Memory: 8,800 Per Second

8,800 / 5,000,000 = 0.00176
In other words 0.176%

Now, 1 MB = 1,000 KB so if we multiply this score by 1000 we get 1.76
This means we got a Memory of 1.76 KB a Second.

Now once again for convenience sake I'll compare this to my own PC.
Which once again is not at all the best we have available at the time, nor is it nearly the same size.

My PC currently has a RAM of 8 GB.
Which translates to 8192 MB which is 8,192,000 KB.

Divide that by our 1.76 earlier and we got a 4,654,545 times increase over 60 years time.
If we gained that increase again we would be at 37,236,360 GB.

That's a bit over 37 PB of Memory, let alone Maximum Storage.
So by that math, in 60 years' time a computer's memory should be 14.8 times larger than the total amount of data that a human brain can physically store overall.
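
(A quick sketch retracing the steps above, mixed units and all. It only reproduces the arithmetic; whether dividing transfer rate by capacity is the right way to estimate the IBM 350's "memory" is a separate question:)

```python
# Retrace the RAM-growth arithmetic above, using the post's own conversions.
ibm_storage = 5_000_000          # IBM 350 storage (characters)
ibm_rate_per_sec = 8_800         # IBM 350 transfer rate (per second)

ratio = ibm_rate_per_sec / ibm_storage     # 0.00176, i.e. 0.176%
ibm_memory_kb = ratio * 1_000              # treated above as 1.76 "KB of memory"

my_ram_gb = 8
my_ram_kb = my_ram_gb * 1_024 * 1_000      # 8,192,000 KB (same mixed units)

increase = my_ram_kb / ibm_memory_kb       # ~4,654,545 times over ~60 years
future_ram_gb = my_ram_gb * increase       # ~37,236,364 GB, i.e. about 37 PB

brain_estimate_pb = 2.5                    # the 2.5 PB ballpark quoted earlier
print(future_ram_gb / 1_000_000 / brain_estimate_pb)   # roughly 14.9x that estimate
```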

Now, let's remember this is also strictly addressing Memory and Storage and is assuming we keep our same rate of progression.
And not being in the technical field at all, I have no knowledge or way to predict if the rate is bound to increase, decrease, stay the same etc.

That, and there are no doubt other factors involved in transferring a human brain over that we also need to consider, which weren't even touched on in the equations above.
Stuff such as how could we convert a human brain's "Data" into computer data to begin with?

That right there is what I would predict to be the main thing holding us back.
Not that our rate of Memory or Storage isn't going to be adequate.
 
Yeah, you guys all type too much. Life's too short.

Verbosity bludgeons another debate. I'll go back to the occasional joke whenever I get quoted.
 
Yeah, you guys all type too much. Life's too short.

Verbosity bludgeons another debate. I'll go back to the occasional joke whenever I get quoted.
*reaction gif*
 
Has anyone ever considered the possibility that humans have a need to replicate life because this is how we came to be? Not through divine intervention, but through another species' curiosity about its own scientific limits? Face it people, we're a bunch of monkey-see monkey-doers living in a giant petri dish xD
 