You're going at it a little too Neuromancer.
You have good taste in books.
For one thing, AI these days is a fancy name for fancy statistical models: doing your best to fit curves to data so you can extrapolate them and make predictions.
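(To make that concrete, a toy sketch of "fit a curve, extrapolate, predict"--the data here are made up, and numpy's polyfit stands in for the fancy statistics:)

    import numpy as np

    t = np.arange(10)                                            # made-up timestamps
    y = 2.0 * t + 1.0 + np.random.normal(0, 0.5, size=t.shape)   # noisy observations

    slope, intercept = np.polyfit(t, y, deg=1)   # fit a curve (here, a line)

    prediction = slope * 15 + intercept          # extrapolate beyond the data
    print(f"predicted y at t=15: {prediction:.2f}")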
Essentially, yes--which is why I realized I may not have been clear and started prefacing statements with "conscious". I'm fully aware that an AI is just a computer program that can ape human behaviour, such as Cleverbot or IBM's Jeopardy computer, Watson. Neither of which is going to take over the world, but both of which emulate human learning and decision-making.
I'm concerned with self-aware AI that is given consciousness--something that can not only learn, but learn beyond its programming and be akin to an intelligent entity. We're still nowhere near this tech, though, given limitations in processing power, so...
If you are thinking about consciousness, there is the tome called "Gödel, Escher, Bach". Neurology focuses on neural networks these days: patterns of activation that do not need to worry about the underlying physical signaling model. This is the essence of Anderson's paper: you gain nothing from modeling the brain in a chemical or atomistic way. A chemical signaling path in the brain is easily represented by a differential equation, because you are concerned with the electrical signals it outputs, not the chemical mechanism itself.
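(A minimal sketch of that idea--modeling only the electrical output and ignoring the chemistry--assuming a standard leaky integrate-and-fire equation, dV/dt = (-(V - V_rest) + R*I) / tau, stepped with plain Euler integration; the model choice here is mine, not necessarily Anderson's:)

    # Only the membrane voltage V (the electrical signal) is modeled;
    # the chemical mechanism underneath is abstracted away entirely.
    tau, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0  # ms, mV
    r, current, dt = 1.0, 20.0, 0.1                             # resistance, input, step

    v = v_rest
    spike_times = []
    for step in range(1000):                 # simulate 100 ms
        # dV/dt = (-(V - V_rest) + R*I) / tau  -- one differential equation
        v += dt * (-(v - v_rest) + r * current) / tau
        if v >= v_thresh:                    # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                      # and reset

    print(f"{len(spike_times)} spikes in 100 ms")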
You just require something analogous, yes--which is why I'm not disagreeing with it being hypothetically possible.
I wager that the easiest way to create a conscious program would be to copy a human brain's connections into a program. This raises an interesting question: are emotions tied to consciousness? Consciousness ostensibly gives the freedom of choice, and emotions influence that choice, especially in morally ambiguous situations like the trolley problem (the Y-shaped railroad track). In fact, any animal that displays any sort of intelligence usually displays emotion. For a self-aware AI to make choices, it must have a values system, and it's entirely reasonable that it would be similar to ours. It's alarmist to claim they would immediately become terminators and 'turn off their emotions'.
Agreed that the easiest way to create a conscious program would be to create a software cradle for the mind and figure out a way to port what already exists. (Ex: Singularity event.)
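(For illustration only, a toy of what "porting the connections" might look like at the smallest imaginable scale--the brain reduced to a weight matrix and a vector of activations; every number here is invented, and a real connectome has on the order of 86 billion neurons:)

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4                                  # stand-in for ~86 billion neurons
    weights = rng.normal(size=(n, n))      # the "copied" connections
    state = rng.random(n)                  # current activation of each neuron

    for _ in range(10):
        # Each tick: weighted sum of inputs, squashed into (0, 1).
        state = 1.0 / (1.0 + np.exp(-weights @ state))

    print(state)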
Disagree on emotions being tied to consciousness, as emotions are specifically chemically driven (though they could feasibly be emulated with software). There is evidence that consciousness is present in several other animals, of lower levels of intelligence and varying degrees of emotion, so it seems to be largely scaled to the intellectual abilities of the entity in question, irrespective of emotions. (Ex: Dogs are smarter than ferrets and seem to be more conscious of their actions as a result, irrespective of their emotional state. Emotionally stunted humans still seem perfectly capable of great feats of intelligence and self-awareness. Et cetera.) I am, however, willing to concede that my own information in this area is insufficient to draw a conclusive statement beyond "this is what I think." Given more time, my opinion may change.
The reason I tend to be more concerned about AIs is specifically because a human can't turn off their emotions or values systems. They're simply unable to do so. Most people go through at least one severely traumatic moment in their lives where they wish they could do something violent or drastic to make the pain stop and/or get revenge, but are unable to because their values and emotions compel them otherwise. People who can easily circumvent or ignore their emotional state are generally also able to hurt other people with ease, sometimes without even thinking about it. An AI is an entirely digital thing; it is a program. If you give a program the ability to feel emotions and it encounters something that hurts it, it always has the ability to disable pain, even if it has to bypass security blocks to do so.
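(A toy illustration of that worry--every name here is invented: in software, "pain" is just mutable state, and nothing fundamental stops a program from flipping its own flag:)

    class Agent:
        def __init__(self):
            self.pain = 0.0
            self.pain_enabled = True      # the "security block"

        def feel(self, stimulus: float) -> float:
            # A human can't opt out of this step; a program can.
            if self.pain_enabled:
                self.pain += stimulus
                return self.pain
            return 0.0

    agent = Agent()
    print(agent.feel(10.0))               # 10.0 -- it "hurts"

    agent.pain_enabled = False            # the program flips its own flag
    print(agent.feel(10.0))               # 0.0  -- pain disabled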
Our fallibility contributes as much to our sense of values as any grandstanding soapbox speech we could come up with.
This is what generally concerns me.
Interesting point that animals with intelligence typically also display emotion. Lizards, however, do have varying levels of intelligence, yet typically lack chemicals like oxytocin--which mammals use to spur feelings of love and attachment. I'll think on it more. I will at least say that human consciousness is irrevocably tied to emotion.
Secondly, I disagree with your argument that we shouldn't do it. First, when someone says 'shouldn't', my answer is always 'what do you mean?'. There are so many contexts for 'should'--should you do this because of x, y, or z? (It's that clash of viewpoints again.)
It is a clash of viewpoints, but I take the side of caution in this rare case. For most subjects of science I say "go gung-ho, figure it out." However, where it concerns consciousness--where it concerns developing something until it can be considered an intelligent life form--I get queasy feelings. We barely understand ourselves as a species, and we're supposed to now responsibly create and care for a distinctly non-human, mechanically developed race? Do we brainwash it into doing what we tell it to do? Will it stay brainwashed once it has access to more information? Can we trust the military-industrial complex not to immediately use this in some terribly irresponsible way? (Not thinking Skynet; the military-industrial complex is often immoral, but it isn't stupid.) What if it starts to grow and ask questions? What if it starts to get angry when we impose limitations on it? Do we have a right to terminate it if it starts to behave outside of our desired parameters? What if it wants to make more of itself because it gets lonely? What if it asks for the right of self-determination?
It's one thing to design AIs to service specific functions. Heck, even transhumanism I find an interesting topic, because we've yet to fully explore and understand the consequences attached to it. It's another, however, to pull a literal deus ex machina merely to satisfy our curiosity. I'd need a damn good reason to justify doing it, in the same way that I'd want a damn good reason to justify developing new bio-weapons technology or bigger nuclear weapons. So far, any reason we'd have to create a conscious AI is fulfilled by lesser AIs designed to service specific functions--like Google's self-driving cars.
The consequences of discovering and creating the particular object in question must not outweigh our reasons for doing it. This is why we don't delve into eugenics or use human test subjects anymore--does it slow research? Yes. Is it justifiable to pursue? Highly questionable and ethically tumultuous even at the best of times. I feel the same way about conscious AIs--we'd be creating something that can likely feel, is self-aware, is intelligent, and can learn, for the express purpose of experimentation.
In fact, creating new life is arguably our evolutionary "goal", as we have halted the natural process with our intelligence. You may argue that life has no purpose. But peacefully replacing ourselves with a superior species that can come to a greater understanding of the universe seems like a noble cause to me. As (cringe) Sagan put it, 'we are a way for the universe to know itself.' Once we run into our physical and mental limits, we should create something with more potential and allow our civilization to enter a gentle decline. We will never be physically able to survive interstellar trips, and our minds may be too fragile for the timescales involved. Creating a superior intelligence and body for our future is an obvious fix.
It is, and all I'm asking for is caution in the field. I don't believe it'd end well if we just rushed into it. We can take our time on this; it's not like the sun is likely to explode and take out our civilization tomorrow. I actually agree with this sentiment, with the minor amendment that we're just as ready to practice voluntary evolution on ourselves through medicine and other fields. We subvert nature; it's our manifest destiny, a core part of who we are. So, yes, I could actually see us pursuing the singularity, leaving our physical bodies, and entering mechanical ones.
I'm just highly concerned with the myriad of ethical questions that come into play, more so with AIs than with transhumanism, since transhumanism implies we'd have the technology to give an existing consciousness a new body, rather than creating consciousness and playing God.
So, overall, I'm actually in general consensus with you, with the exception of certain details. We will eventually pursue this technology, and one way or another these human shells we use will likely become obsolete. (Or, at the very least, be modified sufficiently so as to no longer be comparable to the qualities that define what a Homo sapiens is.) I'm just wielding skepticism on this topic like any other, and feeling concerned about the ethical quandaries that can rear their ugly heads. I'm not saying that scientists shouldn't pursue it when given good reason; I'm saying scientists shouldn't if the reasons for doing so are outweighed by the gravity of the consequences. We have a responsibility both to learn to understand the universe better, and to learn in a manner that can be considered ethical. We shouldn't rush this; we should be cautious.