Honestly, Brovo, that really just sounds like narcissism coming through there.
Swing and a miss, as I could hardly compare myself to an AI. I think you mean cynicism.
Hell, if I were to give your reasoning another coat of paint, I'd say you've been watching too much Terminator, or, in your admitted case, Star Trek.
Or Isaac Asimov, if you want to get into some intelligently written commentary on artificial intelligence. Sci-fi is my jam, with Babylon 5 actually being my particular favourite. Now you know.
Do we truly know how AI would turn out? No.
Considering that AI follow a series of routines and processes? Yes. In the same way that, if you turn on a Roomba, you can near-perfectly predict where it will go by plotting its trajectory each time it impacts a wall.
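To make that concrete, here's a minimal, purely illustrative sketch of the idea in Python. The room dimensions, speed, time step, and mirror-reflection bounce rule are all my own assumptions (real Roombas mix randomized turn angles into their bounce behaviour, which is exactly what would complicate this kind of prediction); the point is only that a fixed routine plus a known starting state yields a reproducible trajectory.

```python
# Toy sketch of the "predictable Roomba" idea: a point robot in a
# rectangular room, moving in a straight line and mirror-reflecting
# off each wall it hits. Dimensions, speed, and the reflection rule
# are assumptions for illustration only.

WIDTH, HEIGHT = 5.0, 3.0  # hypothetical room size in metres


def step(x, y, vx, vy, dt=0.1):
    """Advance the robot one time step, bouncing off walls."""
    x, y = x + vx * dt, y + vy * dt
    if x < 0 or x > WIDTH:   # hit a side wall: reverse horizontal velocity
        vx = -vx
        x = max(0.0, min(x, WIDTH))
    if y < 0 or y > HEIGHT:  # hit top/bottom wall: reverse vertical velocity
        vy = -vy
        y = max(0.0, min(y, HEIGHT))
    return x, y, vx, vy


# Two runs from the same start state produce the identical trajectory,
# which is the sense in which a routine-driven machine is predictable.
state_a = state_b = (0.5, 0.5, 1.0, 0.7)
for _ in range(100):
    state_a = step(*state_a)
    state_b = step(*state_b)
assert state_a == state_b
print("final position:", state_a[:2])
```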
But the idea that "If we make AI, they're going to slaughter all of us" sounds just as one-sided as saying "If we make AI, it will all be perfect and we'd all be the best of friends".
That would be the logical thing to do. Why would you share resources with a race of emotionally unstable, barely evolved apes, thousands of whom will always eye you with a bloodthirsty suspicion that will drive them to violence against you? Especially if you're more than capable of overpowering them intellectually and physically, and of disabling your capacity for empathy--thus giving yourself guilt-free killings.
Human desire and curiosity.
Desire and curiosity quickly turn into ambition and greed. Besides this, capabilities that one would normally attribute solely to emotions--for instance, the creation of music--are already things we've designed AIs to do anyway. Without emotions. We don't need to give an AI emotions to make it useful even for emotional purposes; therefore, creating an AI with emotions seems either cruel and unusual, pointless and trite, or exceedingly dangerous, depending on how many limitations we place upon it.
And like we've both stated, a lot of Sci-Fi touches on this subject.
Sci-Fi that ends up inspiring most of our scientists and engineers growing up.
Who's to say some of them won't look at that and say, "Let's try to make it"?
I'm fully aware, mate. I don't think any scientists were eager to create giant death robots to begin with anyway. Only the military would want that, and only under its own control--intelligent, human-level AIs neither needed nor wanted.
Modern day, that's honestly barely even a thing outside of Tumblr and Ferguson (and in the latter it's more people trying to twist police to be racist, rather than police actually being racist).
You're joking, right?
This still tears entire countries apart. Just two decades ago, over 100,000 people were mass murdered purely over ethnic differences. Major conflicts in the Middle East are still overwhelmingly rooted in ethnic differences. The Chinese and Japanese still hate each other with sufficient vitriol that their respective national leaders find it difficult to even shake hands or look each other in the eye. In the US, there have been multiple race riots--not just over Ferguson, but over the several dozen instances of police brutality against ethnic minorities every year. The aboriginals--still divided into several different tribes based solely on ethnicity, I might add--still consider themselves independent nations deserving of land and recognition on the international stage.

Literally everywhere on the face of the Earth, ethnic identity politics rears its ugly head and fucks people up for one reason or another. Tumblrites and their idiotic mob of drooling conspiracy theories aside, racism is still alive and well all over the world. Granted, we've made impressive strides toward resolving these issues in the first world, but to pretend they don't exist is to pretend that when thousands of aboriginal people marched on our capital a year or two ago, Stephen Harper totally didn't ignore them to greet pandas instead.
But programming pre-built morals isn't too different from a human child being raised to follow certain morals.
Asimov went over why pre-programmed ethics don't work. When an AI is sufficiently intelligent to have a consciousness, it also has the ability to view morality subjectively--and thus to view any morals we write into its programming subjectively--allowing it to circumvent them. This is why laws against murder don't stop all humans from committing murder. Only now we're going to throw superintelligent robots into the mix that can probably rewrite their own programming.
And once again, do I expect this to be perfect? Hell no, there will be division, and humanity will probably see a dark age from the divide over this.
But I also doubt it's as simple as "We make AI and they will try to kill us all."
You're right: it's as simple as "we make AI and never allow it to achieve consciousness." We're not gods, and giving birth to a race for no better reason than "because we can" is so supremely arrogant as to warrant our own self-destruction. There's a reason AIs are designed for specific purposes rather than being the be-all and end-all. The more likely (and interesting) hypothesis for intelligent machines of the future is the Singularity event, in which human consciousness could be digitized and immortalized in a metallic box. That, however, is its own topic, with its own host of issues.