Technological Detail Challenge - Weapons

My counterargument is this: how do you prove that our minds did not conceive this idea rather than simply borrowing it? Probabilities alone do not disprove it. "All of time" - interesting; that suggests it was the thought of an entity outside of time. Please elaborate on this further.
Yeah, that bit was just me throwing in my two cents about information permanence. The point is, we created the concept, but we can't create the actual thing by ourselves.
So, you mean to say that if said AI were murderous, it would be willing to stand trial before the very things it is trying to murder?
Maybe it would, maybe it wouldn't. If it does, good. If it doesn't, treat it like uncooperative humans and either completely neutralize it or terminate it.
Interesting. You mean to say that Baxter, for example, will never overheat, thanks to some sort of self-applying cooling system far more advanced than anything currently in use? Also, cooling systems have nothing to do with short-circuiting.
(Side note: funny you should mention Baxter, it's also an AI in another book I'm reading - Titans) As long as Baxter is conscious, yes, he will never overheat. Severing the cooling command could cause such a thing. Also, it is literally impossible to short-circuit a processor using pure data - short-circuits, in AIs, would be akin to a human having a seizure. Cooling, on the other hand, would most likely be akin to breathing.
So, to you this would be useless:
  • My staple gun is out of staples.
  • This blanket is wet.
  • That shoe is on fire.
But you only want the reason behind it:
  • The staple gun is out of staples because of earlier use.
  • The blanket is wet because it was taken to the beach and water got on it.
  • The shoe is on fire because someone threw it on an open flame.
This is what I see you as: not one for the observation, just for the conclusion.
It is pointless in the grand scheme of things. What use would the staple gun being empty, the blanket being wet, or the shoe being on fire be in a few years? What can you do with that knowledge? Nothing useful.
OK, what I do not comprehend is why you are so fixated on treating them as equals. This is what I foresee:
  1. The first self-controlled AI is constructed, which is not mobile.
  2. The AI gets Internet access to observe and communicate with humans.
  3. Through these interactions, it finds that a lot of people are very negative and very few are positive (using Skype, Curse Voice, Mumble, etc.).
  4. The AI learns how to reconstitute its own programming to become what it wants.
  5. The AI sees itself as more effective, efficient, and cheaper than the average worker, and therefore takes jobs away from its human counterparts.
  6. Humans get angry at the AI for taking their jobs and end up destroying some of them in their rage.
  7. The AI, angered (or tired of this behavior), would either use our legal system to prosecute those humans or begin to hate them.
  8. The entire workforce would eventually consist of AIs, leaving humans without jobs.
  9. Humans without jobs in our current economy would have no money, and therefore we would starve.
Does this sound good to you?
I disagree with the first three. While the brain of the first AI would most likely be immobile, it would most assuredly have a mobile body to link to (either within a limited range or through a special deal with cell tower carriers). And even assuming the first two are right, number 3 is far from guaranteed - hell, technically speaking, if the AI visited four of the best-known public content websites (Amazon, Reddit, Imgur and Wikipedia), sure, it would stumble on some horrible stuff, but by going through everything, the most it could deduce is that there is a small fraction of terrible people today, while mostly everybody else is bored out of their minds (and loves cats) and/or a horny sex beast and/or depressed (hardly things I'd expect the AI to take as reasons for extermination). On most chat software, the AI would simply be ignored - because let's be honest, how many people do you know who accept random adds?

Then four comes into play. That I cannot disagree with - this is the very essence of what an AI is.

Five I disagree with. While it would certainly see itself as more effective and efficient, it might not see itself as cheaper. And even if it does, and even if it takes a few jobs away from humans (because there are still only 24 hours in a day), that's a very small number. If it acquired sufficient information about our society, it would understand the dynamics of what it's doing, and therefore understand the humans' anger (as they can no longer pay for their own existence), or even refrain from taking more work than is necessary for the money it requires to continue functioning (which might be as little as a few hours a week if we give it human rights - including minimum wage).

Then six - where did the OTHER AIs come from? For something like that to happen, we would need hundreds if not thousands of AIs, and by then we would probably have something figured out (if a fix is even needed - see the above point for the possible reaction of an AI that would avoid this). Hell - if the AIs have that much power, we might even have the momentum to switch from a capitalist world to a post-scarcity world, where everyone has a guaranteed income, the jobs that need to be filled are filled, and people are free to do what they want.

And seven - seven depends heavily on six, and if we have made it to this point, the legal system is the most likely route, which is entirely valid - if a large wave of legal immigrants were getting assaulted for taking jobs for cheaper, you can bet that something would be done. At this point, I doubt hate would be the AI's reaction - if it's even capable of it.

Eight, again, depends on five, six and seven. By then, we might have the world mentioned in six, or we could have humans work in environments AIs can't (electromagnetic-prone environments like power plants and such). But one thing is for certain: we will not get to nine. AIs would most likely be on our side (AIs, more likely than not, would not be tireless workers - being intelligent beings, they would most likely express the desire to do other things), and they would easily see how much more advantageous it is to work only as much as they require for their sustenance/goals. Picture it this way: either AIs work as you describe, completely overthrowing the economy by making a lot of money without spending it AND angering normal people in the process, OR they work only as needed for their goals, gain the sympathy/friendship of humans (which means they won't be assaulted simply for being AIs), AND stimulate the economy and progress, which could allow them to better themselves.
 
Yeah, that bit was just me throwing in my two cents about information permanence. The point is, we created the concept, but we can't create the actual thing by ourselves.
So you admit you were not being objective? I disagree; we CAN create an actual AI.
Maybe it would, maybe it wouldn't. If it does, good. If it doesn't, treat it like uncooperative humans and either completely neutralize it or terminate it.
This is where morality and free will clash. You want to treat it as if it were a free-willed human, with all the implications of such. An AI is NOT human, so I think it funny that we would try to treat it as if it were.
(Side note: funny you should mention Baxter, it's also an AI in another book I'm reading - Titans) As long as Baxter is conscious, yes, he will never overheat. Severing the cooling command could cause such a thing. Also, it is literally impossible to short-circuit a processor using pure data - short-circuits, in AIs, would be akin to a human having a seizure. Cooling, on the other hand, would most likely be akin to breathing.
Ironic, not funny, in my opinion. I never said using pure data. Let me elaborate on what I mean:
  1. Gain access.
  2. Do a systems check to find out how it is coded - not everything is guaranteed to stay the way humans programmed it.
  3. Find the program that regulates the energy intake from the power source to the other parts via resistors or other hardware.
  4. Disable the resistors, causing the hardware to fry itself.
I'm not saying that it would be easy, or even practical, but it is possible.
It is pointless in the grand scheme of things. What use would the staple gun being empty, the blanket being wet, or the shoe being on fire be in a few years? What can you do with that knowledge? Nothing useful.
That knowledge can be used depending on circumstances but I digress as it seems you care not for it.
I disagree with the first three. While the brain of the first AI would most likely be immobile, it would most assuredly have a mobile body to link to (either within a limited range or through a special deal with cell tower carriers). And even assuming the first two are right, number 3 is far from guaranteed - hell, technically speaking, if the AI visited four of the best-known public content websites (Amazon, Reddit, Imgur and Wikipedia), sure, it would stumble on some horrible stuff, but by going through everything, the most it could deduce is that there is a small fraction of terrible people today, while mostly everybody else is bored out of their minds (and loves cats) and/or a horny sex beast and/or depressed (hardly things I'd expect the AI to take as reasons for extermination). On most chat software, the AI would simply be ignored - because let's be honest, how many people do you know who accept random adds?

Then four comes into play. That I cannot disagree with - this is the very essence of what an AI is.

Five I disagree with. While it would certainly see itself as more effective and efficient, it might not see itself as cheaper. And even if it does, and even if it takes a few jobs away from humans (because there are still only 24 hours in a day), that's a very small number. If it acquired sufficient information about our society, it would understand the dynamics of what it's doing, and therefore understand the humans' anger (as they can no longer pay for their own existence), or even refrain from taking more work than is necessary for the money it requires to continue functioning (which might be as little as a few hours a week if we give it human rights - including minimum wage).

Then six - where did the OTHER AIs come from? For something like that to happen, we would need hundreds if not thousands of AIs, and by then we would probably have something figured out (if a fix is even needed - see the above point for the possible reaction of an AI that would avoid this). Hell - if the AIs have that much power, we might even have the momentum to switch from a capitalist world to a post-scarcity world, where everyone has a guaranteed income, the jobs that need to be filled are filled, and people are free to do what they want.

And seven - seven depends heavily on six, and if we have made it to this point, the legal system is the most likely route, which is entirely valid - if a large wave of legal immigrants were getting assaulted for taking jobs for cheaper, you can bet that something would be done. At this point, I doubt hate would be the AI's reaction - if it's even capable of it.

Eight, again, depends on five, six and seven. By then, we might have the world mentioned in six, or we could have humans work in environments AIs can't (electromagnetic-prone environments like power plants and such). But one thing is for certain: we will not get to nine. AIs would most likely be on our side (AIs, more likely than not, would not be tireless workers - being intelligent beings, they would most likely express the desire to do other things), and they would easily see how much more advantageous it is to work only as much as they require for their sustenance/goals. Picture it this way: either AIs work as you describe, completely overthrowing the economy by making a lot of money without spending it AND angering normal people in the process, OR they work only as needed for their goals, gain the sympathy/friendship of humans (which means they won't be assaulted simply for being AIs), AND stimulate the economy and progress, which could allow them to better themselves.
And how can you say what a self-aware program inside a robot would think or do? I only gave my opinions, of which you disagree with all but one. I mentioned other AIs because you know how people love workers that cost less and require less; over time, the transition from human workers to AIs would take place. I acknowledge the AI may or may not care about human needs, but usefulness has its own way of fulfilling something. Also, the AI could construct forms for itself that are not prone to electromagnetic waves, thus not needing humans to do those types of jobs. As for guaranteed income, let's be real here: what governing body with a capitalistic history is going to give that? I do not foresee the AI changing the legal system or how things work that much. I do, however, foresee less and less need for humans as time goes on. My whole point about the economy is that people would need a way to keep themselves alive without jobs, since the AIs would eventually have all the jobs. This is all hypothetical, of course, and I cannot prove or disprove how an AI would act until I can observe said entity.
 
So you admit you were not being objective? I disagree; we CAN create an actual AI.
No, we can't. We can create the basic programs and hardware that can BECOME an AI, but we can't create one from scratch. Just like we can plant a tomato seed but not actually create a whole tomato plant.
This is where morality and free will clash. You want to treat it as if it were a free-willed human, with all the implications of such. An AI is NOT human, so I think it funny that we would try to treat it as if it were.
No, it's not. It's not human, but it would still be a sentient person, and thus deserves all rights that we have.
Ironic, not funny, in my opinion. I never said using pure data. Let me elaborate on what I mean:
  1. Gain access.
  2. Do a systems check to find out how it is coded - not everything is guaranteed to stay the way humans programmed it.
  3. Find the program that regulates the energy intake from the power source to the other parts via resistors or other hardware.
  4. Disable the resistors, causing the hardware to fry itself.
I'm not saying that it would be easy, or even practical, but it is possible.
One is not exactly easy. Two, an AI is likely to have rewritten itself from scratch, and it's not even certain the code would be legible or understandable by humans (I triple-dare you to learn Assembly). Three depends on two, and it probably won't be obvious even if two succeeded. Four, you can't just disable resistors like that unless they're software-controlled variable resistors. You'd need to physically disable them by opening up the AI - and by the time you're close enough to do that, you could just stab it in the brain or in the cables to disable it.
That knowledge can be used depending on circumstances but I digress as it seems you care not for it.
That knowledge cannot be used without the beginning of an understanding, no.
And how can you say what a self-aware program inside a robot would think or do? I only gave my opinions, of which you disagree with all but one. I mentioned other AIs because you know how people love workers that cost less and require less; over time, the transition from human workers to AIs would take place. I acknowledge the AI may or may not care about human needs, but usefulness has its own way of fulfilling something. Also, the AI could construct forms for itself that are not prone to electromagnetic waves, thus not needing humans to do those types of jobs. As for guaranteed income, let's be real here: what governing body with a capitalistic history is going to give that? I do not foresee the AI changing the legal system or how things work that much. I do, however, foresee less and less need for humans as time goes on. My whole point about the economy is that people would need a way to keep themselves alive without jobs, since the AIs would eventually have all the jobs. This is all hypothetical, of course, and I cannot prove or disprove how an AI would act until I can observe said entity.
I can say how it will react based on the fact that it's an intelligence evolving in an environment we know of, and therefore we can reliably predict the behaviors it might adopt. Second, you can be ABSOLUTELY CERTAIN that should an AI be a social success, economists will think THOROUGHLY about how to handle that - because remember, a society that can't buy stuff is not profitable. As for an AI shielding itself, that is admittedly entirely possible; it was merely an example - though another possibility is that humans will fill roles that AIs just don't want to fill, just like people do. As for the guaranteed income, no existing entity has a history of that - however, if AIs are reliable enough workers to affect the workforce, you can be certain that it will change the power dynamics; and if the AIs don't hold enough sway, then the point is irrelevant, because the only way they could lack that power is if they're not an issue in the first place. Also, that "lessening need for humans" is already happening today, without AI - companies are keen on using specialized machines to do everything, so humans will eventually have only two types of jobs: fixing machines and providing services. And my whole point about the economy is that AIs will either have the power to change it so drastically that they won't be a problem, or not have that power because they don't hold enough jobs, and thus not be a problem anyway. I can agree that this is entirely hypothetical and that to know how it would happen we would need a real AI, though - that's entirely true.

Though, my final point is: if we create AI, we'll be fine as long as we treat it as a friend or neighbor instead of treating it like a slave/lesser being.
 
No, we can't. We can create the basic programs and hardware that can BECOME an AI, but we can't create one from scratch. Just like we can plant a tomato seed but not actually create a whole tomato plant.
So, to clarify, you are saying you were not made by your parents? You were a product of intercourse, then had a body made for you, then you developed your consciousness inside that body. I fail to see the difference between your development and the development of an AI.
No, it's not. It's not human, but it would still be a sentient person, and thus deserves all rights that we have.
So, does all sentience deserve human rights and limitations?
One is not exactly easy. Two, an AI is likely to have rewritten itself from scratch, and it's not even certain the code would be legible or understandable by humans (I triple-dare you to learn Assembly). Three depends on two, and it probably won't be obvious even if two succeeded. Four, you can't just disable resistors like that unless they're software-controlled variable resistors. You'd need to physically disable them by opening up the AI - and by the time you're close enough to do that, you could just stab it in the brain or in the cables to disable it.
The code would still have a base form, as with any known language, and as such can be translated. (I think I just might learn it; it will just take time, as it looks a lot like hexadecimal.) How about I make this simpler and just build a portable EMP device?
I can say how it will react based on the fact that it's an intelligence evolving in an environment we know of, and therefore we can reliably predict the behaviors it might adopt. Second, you can be ABSOLUTELY CERTAIN that should an AI be a social success, economists will think THOROUGHLY about how to handle that - because remember, a society that can't buy stuff is not profitable. As for an AI shielding itself, that is admittedly entirely possible; it was merely an example - though another possibility is that humans will fill roles that AIs just don't want to fill, just like people do. As for the guaranteed income, no existing entity has a history of that - however, if AIs are reliable enough workers to affect the workforce, you can be certain that it will change the power dynamics; and if the AIs don't hold enough sway, then the point is irrelevant, because the only way they could lack that power is if they're not an issue in the first place. Also, that "lessening need for humans" is already happening today, without AI - companies are keen on using specialized machines to do everything, so humans will eventually have only two types of jobs: fixing machines and providing services. And my whole point about the economy is that AIs will either have the power to change it so drastically that they won't be a problem, or not have that power because they don't hold enough jobs, and thus not be a problem anyway. I can agree that this is entirely hypothetical and that to know how it would happen we would need a real AI, though - that's entirely true.

Though, my final point is: if we create AI, we'll be fine as long as we treat it as a friend or neighbor instead of treating it like a slave/lesser being.
  • Though you know the environment, you do not know the intelligence; therefore, you cannot verify a guarantee.
  • I agree that economists will think of how to handle it; I'm just saying that being holistic and not needing money would be a lot easier in this case.
That is basically all I have to say on the matter.
 
So, to clarify, you are saying you were not made by your parents? You were a product of intercourse, then had a body made for you, then you developed your consciousness inside that body. I fail to see the difference between your development and the development of an AI.
Yep, that's exactly spot on. My body was created - however, my consciousness is a product of everything I have ever experienced. That's... better than I could have put it. I think you're starting to understand what I'm getting at.
So, does all sentience deserve human rights and limitations?
Yes.
The code would still have a base form, as with any known language, and as such can be translated. (I think I just might learn it; it will just take time, as it looks a lot like hexadecimal.) How about I make this simpler and just build a portable EMP device?
The code will most certainly not be hexadecimal - hexadecimal is just a readable shorthand for what the code really is: binary. Hell - it might not even be binary. It might be a completely new programming language based on ternary/quaternary processing systems, or it might even be completely analog (and if it DOES turn out analog, we're in for a treat - can you guess another analog intelligence? WE ARE! :D). The problem is that by the time you're done deciphering it, there's a chance the AI has modified it, invalidating your deciphering. And EMP devices would be of limited use - we both agreed on that; an AI could easily shield itself against electromagnetism.
  • Though you know the environment, you do not know the intelligence; therefore, you cannot verify a guarantee.
  • I agree that economists will think of how to handle it; I'm just saying that being holistic and not needing money would be a lot easier in this case.
That is basically all I have to say on the matter.
For the first: the same can be said of humans - we have no guarantees of how anyone will turn out. That doesn't stop us. I completely agree with your second point, and what I'm saying is that there's also a not-insignificant chance that, if AIs become numerous enough to make it a problem, they might get us there.


(And just for clarification about hex: let's take, for instance, 15. 15 in binary is 1111. In hex? F. Much shorter, much more readable.)
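(And if you want to see the shorthand for yourself, a couple of lines of Python - used here purely as a calculator - will do it:)

```python
# Each hexadecimal digit is shorthand for exactly four binary digits,
# which is why hex reads so much shorter than binary.
n = 15
print(bin(n))  # 0b1111
print(hex(n))  # 0xf

# Round-trip: both notations name the same number.
assert int("1111", 2) == int("f", 16) == 15
```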
 
Yep, that's exactly spot on. My body was created - however, my consciousness is a product of everything I have ever experienced. That's... better than I could have put it. I think you're starting to understand what I'm getting at.
In other words, you are saying you want to make another you, but give it a mechanical husk and potential limited only by the materials used to make it. Sorry to ask this, but what makes the consciousness to begin with?
So, if we found intelligent life in space, we should automatically give it human rights and treat it as if it were a human?
The code will most certainly not be hexadecimal - hexadecimal is just a readable shorthand for what the code really is: binary. Hell - it might not even be binary. It might be a completely new programming language based on ternary/quaternary processing systems, or it might even be completely analog (and if it DOES turn out analog, we're in for a treat - can you guess another analog intelligence? WE ARE! :D). The problem is that by the time you're done deciphering it, there's a chance the AI has modified it, invalidating your deciphering. And EMP devices would be of limited use - we both agreed on that; an AI could easily shield itself against electromagnetism.
49 20 6b 6e 6f 77 - Hexadecimal
010010010010000001101011011011100110111101110111 - Binary
Both mean "I know"
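A quick way to double-check strings like these (assuming plain 8-bit ASCII, with Python used purely as a scratchpad; the helper names are my own):

```python
def hex_to_text(hex_str: str) -> str:
    """Decode space-separated hex byte values into ASCII text."""
    return bytes.fromhex(hex_str.replace(" ", "")).decode("ascii")

def bits_to_text(bits: str) -> str:
    """Decode a run of 8-bit binary groups into ASCII text."""
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

print(hex_to_text("49 20 6b 6e 6f 77"))  # I know
print(bits_to_text("01001001"))          # I  (the first byte)
```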

I agree that the AI can redo its programming, though that would give a greater understanding of its thoughts, much the way we try to learn our own brain. Shielding itself from EMP is not always practical, as it would just take splashing it with water and then EMPing it - though you may disagree.
For the first: the same can be said of humans - we have no guarantees of how anyone will turn out. That doesn't stop us. I completely agree with your second point, and what I'm saying is that there's also a not-insignificant chance that, if AIs become numerous enough to make it a problem, they might get us there.


(And just for clarification about hex: let's take, for instance, 15. 15 in binary is 1111. In hex? F. Much shorter, much more readable.)
Question: Are humans artificial or natural anymore?

(The clarification was not needed.)
 
In other words, you are saying you want to make another you, but give it a mechanical husk and potential limited only by the materials used to make it. Sorry to ask this, but what makes the consciousness to begin with?
The consciousness is formed by the modifications that the environment makes on a configurable processing element. Put a human or AI brain in complete sensory deprivation at "birth" (whichever event is applicable to each), and you get a blank-slate processor that doesn't do much until it's taken out. Admittedly, the AI might have an advantage here, as we have no clue whether or not AIs would suffer the loss of plasticity that humans do over time.
So, if we found intelligent life in space, we should automatically give it human rights and treat it as if it were a human?
Yes. Though take note that this also gives us the right to self defense (as it would with AI).
I agree that the AI can redo its programming, though that would give a greater understanding of its thoughts, much the way we try to learn our own brain. Shielding itself from EMP is not always practical, as it would just take splashing it with water and then EMPing it - though you may disagree.
Yep, definitely - but just as with human brains, there's also the issue of the interface. But yes, it would give insight into its thought process and function. As for EMP, I was just saying - it's also possible to waterproof a machine. The best course of action, as with humans, would be tasers (which would actually short-circuit parts of the AI's body).
Question: Are humans artificial or natural anymore?
It's an entirely valid question, actually. We DO have a lot of say in our reproduction - whether or not to abort a "deficient" human, when and how we want to make more of ourselves; hell, we even have different ways of starting the process, and if I remember correctly, there's talk of making artificial wombs and of genetically modifying fetuses... And then there's the definition of artificial: made or produced by human beings rather than occurring naturally, typically as a copy of something natural. So, in a sense, I suppose we could say that ever since civilization started, yes, humans have been artificial - for we no longer strictly follow the laws that nature once held over us, and we are of our own creation. That's a very interesting point indeed.


Also, for the whole hex thing: I was just making sure we had our bases covered. It's easier to give an explanation that's not needed than to splinter the discussion into two.
 
The consciousness is formed by the modifications that the environment makes on a configurable processing element. Put a human or AI brain in complete sensory deprivation at "birth" (whichever event is applicable to each), and you get a blank-slate processor that doesn't do much until it's taken out. Admittedly, the AI might have an advantage here, as we have no clue whether or not AIs would suffer the loss of plasticity that humans do over time.
An AI can also "turn off" until it feels like "waking up".
Yes. Though take note that this also gives us the right to self defense (as it would with AI).
So, hypothetically, what if their natural ways impede our right to defense, as you put it? Example: the Zerg from StarCraft 1 - a biological creature that undergoes metamorphosis at an extremely fast pace to enhance its own evolution, but at the core of its being it is an insectoid virus.
Yep, definitely - but just as with human brains, there's also the issue of the interface. But yes, it would give insight into its thought process and function. As for EMP, I was just saying - it's also possible to waterproof a machine. The best course of action, as with humans, would be tasers (which would actually short-circuit parts of the AI's body).
I was talking about splashing the water on it to remove the EMP field around the AI.
It's an entirely valid question, actually. We DO have a lot of say in our reproduction - whether or not to abort a "deficient" human, when and how we want to make more of ourselves; hell, we even have different ways of starting the process, and if I remember correctly, there's talk of making artificial wombs and of genetically modifying fetuses... And then there's the definition of artificial: made or produced by human beings rather than occurring naturally, typically as a copy of something natural. So, in a sense, I suppose we could say that ever since civilization started, yes, humans have been artificial - for we no longer strictly follow the laws that nature once held over us, and we are of our own creation. That's a very interesting point indeed.
So, if humanity is now "self made" what is stopping an AI from becoming "self made"?
 
An AI can also "turn off" until it feels like "waking up".
Technically, an AI's equivalent of sleep would be "Standby Mode," where it's technically still on, just at reduced/minimal processing power - which, if it needs to recharge, would also allow it to charge faster. An AI turning off could be anywhere from a human's coma to death, depending on the hardware configuration and on whether we know how to turn it back on. Though I grant that - an AI could set itself to standby mode until awakened by extraction.
So, hypothetically, what if their natural ways impede our right to defense, as you put it? Example: the Zerg from StarCraft 1 - a biological creature that undergoes metamorphosis at an extremely fast pace to enhance its own evolution, but at the core of its being it is an insectoid virus.
First - the likelihood of us coming across a space-faring species like this is infinitesimal; this is still fiction, and honestly I see no biophysical way the Zerg could exist as-is. Honestly, if we come across a space-faring sapient alien species, chances are that either we will be too weak to defend against them and the point is moot, or they will be civilized and thus deserve the rights.
I was talking about splashing the water on it to remove the EMP field around the AI.
There is no such thing as an EMP field. What the AI could do to protect itself is weave/mesh a Faraday cage into/under/over its skin, though that comes with three main disadvantages: a possible reduction in mobility (which would increase with the strength of the electromagnetic fields the cage is designed to protect against); possible (but uncertain) interference with its hardware, which could require some getting used to; and making it definitely impossible to have any wireless connection without exposing itself (which, depending on the configuration, means disconnection from its body or simply lower awareness of its surroundings).
So, if humanity is now "self made" what is stopping an AI from becoming "self made"?
Nothing at all, but as previously said, this is not something that should stop us - because we risk the same thing with people.
 
Technically, an AI's equivalent of sleep would be "Standby Mode", where it's technically still on, just at reduced/minimal processing power - which, if it needs to be recharged, would also allow faster charging. An AI turning off could be anywhere from a human's coma to death, depending on the hardware configuration and whether we know how to turn it back on. Though I grant that an AI could set itself to standby mode until awakened by extraction.
Nah, Hibernation mode is more likely, but I digress.
First - the likelihood of us coming across a space-faring species like this is infinitesimal - this is still fiction, and honestly I see no biophysical way that the Zerg could exist as-is. Honestly, if we come across a space-faring sapient alien species, chances are either we will be too weak to defend against them and the point is moot, or they will be civilized and thus deserve the rights.
Again, you miss my point. Just because we consider that they deserve one thing or another does not mean they consider that they do. This seems moot to point out, since I stated it before: "not all things will have the same point of view".
There is no such thing as an EMP field. What the AI could do to protect itself is weave/mesh a Faraday cage into/under/over its skin, though that comes with three main disadvantages: possible reduction of mobility (which would increase with the strength of the electromagnetic fields the cage is designed to protect against); possible (but uncertain) interference with its hardware, which could require some getting used to; and it would make any wireless connection impossible without exposing itself (which, depending on the design, means either disconnection from its body or simply lower awareness of its surroundings).
Interesting, so you mean to say that an electrical impulse at just the right location would do the trick then.
Nothing at all, but as previously said, this is not something that should stop us - because we risk the same thing with people.
This seems more philosophical at this point for some reason.
 
Nah, Hibernation mode is more likely, but I digress.
Yeah - they're nearly the same thing; strictly speaking, hibernation saves the machine's state and powers off completely while standby keeps memory powered, but for an AI the effect would be much the same.
Again, you miss my point. Just because we consider that they deserve one thing or another does not mean they consider that they do. This seems moot to point out, since I stated it before: "not all things will have the same point of view".
While you are entirely right (that both an alien and an AI can reject their rights), it has no reason to do so. And even if it does, it still has to behave within our laws lest it be penalized.
Interesting, so you mean to say that an electrical impulse at just the right location would do the trick then.
Yes, but that would not be a permanent disable if it has a Faraday cage - even a taser would need to penetrate deep enough within the AI to give it a jolt that would affect it, otherwise the cage would act as grounding. A Faraday cage is really good protection for AIs.
This seems more philosophical at this point for some reason.
Of course - all of this is philosophical; we're discussing the ethics of creating and limiting another intelligence that has not yet been created.
 
While you are entirely right (that both an alien and an AI can reject their rights), it has no reason to do so. And even if it does, it still has to behave within our laws lest it be penalized.
And what is to say that said being or entity cares for laws other than its own?
 
While you are entirely right (that both an alien and an AI can reject their rights), it has no reason to do so. And even if it does, it still has to behave within our laws lest it be penalized.
And what is to say that said being or entity cares for laws other than its own?
What's to say that humans care about anything more than the consequences?
Oh my, it seems that is the problem with intellect, choice and conflict.
 
(I'm gonna try and go with something non-lethal here. Something that would be used during riots to stop mobs without causing permanent damage. Mostly.)

Name: Acoustic Crowd Control System (ACCS or "Axe")

Ammunition: The ACCS runs off a small 20 kg (~44 lb) generator attached to the "gunner's" back. Power is fed to the main system through a 1 m long (~3.3 ft) power connector plugged into the rear power input of the main system. Power is used by the transducer to produce the needed frequency.

Design: A large transducer is held in place by a steel frame and pointed at target. Transducer is capable of producing frequencies as low as 0.5 Hz and has a decibel range up to 177 dB. Trigger is located underneath the frame and activates transducer via electronic signal when pressed. Main weapon body contains a control panel facing the user on the back of the main "gun". Control panel consists of a simple computer for regulating the system. Gun is attached to back-mounted generator through an insulated and protected power cable.*

Function: When the transducer is activated via electronic signal from trigger, it can produce frequencies as low as 0.5 Hz and up to 8 Hz through vibrations and at various volumes (similar to a speaker or sub-woofer). These low frequencies cannot be heard by human ears, but at high decibel levels can cause nausea, breathing difficulty, visual and auditory disorientation, and dizziness. This is caused when the low, unheard but loud frequency vibrates internal organs within targets. Targets will experience aforementioned symptoms and will be incapacitated for the duration of the sound and some time after.** System must be pointed at targets in order to take effect and affects targets in a cone with range depending on decibel intensity (see Field Use pg 67). System must be attached to generator at all times in order to function correctly.

Safety Information: In order to prevent permanent damage to targets, system has a safety shut off of 20 seconds. If system is used for 20 seconds (cumulative) within a minute, system will shut off to prevent targets from suffering serious injury. Such injuries may include but are not limited to permanent hearing damage, eye damage, lung collapse, internal hemorrhaging, stroke, cardiac arrest, and nerve damage. Generator also comes with a safety shut off in the form of a lever located above the user's left shoulder (see Safety Procedures pg 109 for more information and instructions on safety systems).

* Maintenance on the system should only be performed by licensed professionals. See pg 112 for contact information for our repair centers. Do not attempt repairs without proper licensing.
** For full list of symptoms, see pg 88. Note that not all possible symptoms may be listed and system should always be used with paramedics standing by.

Developed by Morgan & Ferguson Ind.

(Did you guys know this is based on a real phenomenon and (in theory) would work if it could be directed? Low frequencies at high volumes make your insides vibrate, and some places are notorious for causing this because of their dimensions. Science is cool...)
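(For the curious: a rough sense of the ACCS's reach can be sketched with the standard free-field inverse-square law, under which sound pressure level drops about 6 dB per doubling of distance. The 140 dB incapacitation threshold below is a hypothetical number for illustration, not from the spec, and real infrasound propagation is messier than a point source.)

```python
import math

def spl_at_distance(spl_ref_db, ref_m, dist_m):
    """Free-field point-source falloff: SPL drops ~6 dB per doubling of distance."""
    return spl_ref_db - 20.0 * math.log10(dist_m / ref_m)

def effective_range(spl_ref_db, ref_m, threshold_db):
    """Distance at which the SPL falls to the given threshold."""
    return ref_m * 10.0 ** ((spl_ref_db - threshold_db) / 20.0)

# Hypothetical numbers: 177 dB measured at 1 m, incapacitation threshold of 140 dB.
print(round(spl_at_distance(177.0, 1.0, 10.0), 1))   # SPL at 10 m
print(round(effective_range(177.0, 1.0, 140.0), 1))  # effective range in metres
```

Under those assumptions the beam would still be at 157 dB at 10 m and stay above the threshold out to roughly 70 m, which fits a crowd-control cone.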
 
Name: Zeus Rifle

Function: It's very similar to a dart gun, however, instead of firing syringes, it fires a dart which contains a battery, and two electrodes which stick into a body. A button on the rifle sends a signal to the dart telling it to activate. As long as that dart stays inside the person's body, the user has a long-range leash on them.
 
Ship Class: CU-90 'Hellfire'

Function: Heavy space combat and terror weapon. One has been built for ground invasion.

Size: About a quarter the size of our own moon, which makes it slow.

Arguments: The 'Hellfire' has enough weapons to wipe out an entire solar system, so it has been used to threaten enemies into submission; when that fails, the Hellfire can dish out damage and take it. However, it needs a large crew to operate and a large energy source just to power the guns.

Crew: The men aboard a Hellfire-class ship are veteran or specialized sailors who have been retrained to operate such a ship with care. There are, of course, combat teams in case the ship is boarded.

Armor: Hellfire ships are heavily armored, making them difficult to damage; combined with their shields, they are monsters to take down. However, all of this makes them slow and easy targets to hit. Still, you would need plenty of ships to take one on.

Safety Procedure: Due to past incidents there is a growing list of safety procedures, one of which applies when the ship's reactor is damaged and about to go critical: the crew has to shut off the conduits that allow energy to travel, lowering the amount of energy produced. If that fails, the backup reactor must be brought online and the damaged one ejected into space.

Weaknesses: Hellfire ships are large and slow, making them easy to hit. They also need a very large crew as well as a power source, making them not only costly but hard to build. The Hellfire also has a history of blowing up when certain areas are hit; as a result, these areas have been extensively modified.

History:
The CU-90 Hellfire was a ship designed during a war that shook the very Universe, fought between a dimension-traveling race known as the 'Shadows' and a coalition of aliens known as the United Species. Near the end, the ones who built the Hellfire (known as the Tierras, a human species) were finally defeated, so a network of devices that would vaporize the Shadows was activated - but these devices also fried electronics. It would be weeks or even months before the electronics came back online, and by then the United Species had finally collapsed.

Billions of years later, a subspecies of the Tierras, known as the Emperica, fled from their planet with the assistance of a robotic race built by the Tierras when the Shadows returned. The Emperica discovered the home planet, and thus the Hellfire along with various other technologies. But because the Emperica had nuked their own planet, they had to recover with only 'fantasy'-level technology, and they had a hard time getting used to the ships. Yet with help, the Hellfire was finally completed and taken for a test run about 50 years later.

The Hellfire saw combat for the first time when an alien organization known as the Coalition began operations on the planet and started abducting people for genetic experiments. After an investigation, three frigates and the first prototype Hellfire were sent out to stop the operations. They met ten battleships and seven light cruisers. Attempts at peace were made but failed, resulting in a battle that destroyed two of the three frigates and left the last one beyond repair. The Hellfire itself suffered damage, but successfully obliterated the enemy fleet, leaving only two heavily damaged ships. It has since been used for space combat or for forcing the enemy to surrender. Hellfires have also been brought on diplomatic missions to show how powerful the Emperica are, and this has prevented the outbreak of three wars.

There are currently 50 active Hellfires. 50 more are either in the works or are being planned.

Weapon: M7-3E 'Shard Rifle'

Variant: M9-5E 'Nova Rifle'

Function: SR - short- to mid-range combat. NR - short- to long-range combat.

Ammo Types: The SR uses ammunition known as 'Shards', which are made from Energy Crystals - the waste product of GM animals made by a race long ago. A shard can penetrate anything if it is built properly, but depending on the design it can take longer and cost more to make. Once fired, the shard releases its stored energy at the target. An interesting feature of the shard is that once used it loses its energy and can be used for painting. The reasons why this occurs are unknown, but troops just love using them for graffiti.

The NR uses a more dangerous ammo type, which extracts solar radiation and converts it into 'Nova Bullets'. These take longer to make, but are worthwhile, as this ammo type not only deals massive damage to Shadows but also causes them to feel pain. Against regular targets it only does slightly more damage than the Shard Rifle, so it is only distributed to officers or for operations that involve the Shadows.

Name: The 'Dragon' Flyer

Function: Though this is an armor type, it fits on dragons and their riders to allow for space operations. The combat variant has jet packs that speed the dragon up when activated and allow for amazing feats. The armor can also be equipped with light weapons strapped to the mount for the rider to fire.

Weakness: The dragon and rider both need oxygen, meaning tanks have to be fitted onto them. Normally two are enough, but when one is damaged it reduces the amount of time they have; and while the armor is light and strong, it needs a power source, meaning that if that is damaged an explosion could follow.

There have also been some safety concerns among riders and dragons about their well-being, for going into a space battle is suicidal; thus the Dragon Flyers are only used for patrol, scouting, and skirmish missions.
 
(I'm gonna try and go with something non-lethal here. Something that would be used during riots to stop mobs without causing permanent damage. Mostly.)
Congratulations, I appreciate non-lethal weapons!
Ammunition: The ACCS runs off a small 20 kg (~44 lb) generator attached to the "gunner's" back. Power is fed to the main system through a 1 m long (~3.3 ft) power connector plugged into the rear power input of the main system. Power is used by the transducer to produce the needed frequency.
Right, mostly, but what is fueling the generator?
Design: A large transducer is held in place by a steel frame and pointed at target. Transducer is capable of producing frequencies as low as 0.5 Hz and has a decibel range up to 177 dB. Trigger is located underneath the frame and activates transducer via electronic signal when pressed. Main weapon body contains a control panel facing the user on the back of the main "gun". Control panel consists of a simple computer for regulating the system. Gun is attached to back-mounted generator through an insulated and protected power cable.*
At 100 dB your eyes will twitch, and at 110 dB your vision starts getting messy. Capping it at 177 dB would cause headaches and other issues, but not destructive resonance - that would take somewhere around 240 dB. Well thought through.
Function: When the transducer is activated via electronic signal from trigger, it can produce frequencies as low as 0.5 Hz and up to 8 Hz through vibrations and at various volumes (similar to a speaker or sub-woofer). These low frequencies cannot be heard by human ears, but at high decibel levels can cause nausea, breathing difficulty, visual and auditory disorientation, and dizziness. This is caused when the low, unheard but loud frequency vibrates internal organs within targets. Targets will experience aforementioned symptoms and will be incapacitated for the duration of the sound and some time after.** System must be pointed at targets in order to take effect and affects targets in a cone with range depending on decibel intensity (see Field Use pg 67). System must be attached to generator at all times in order to function correctly.
What is fueling the generator?

Overall, well done in my opinion.
 
Name: Zeus Rifle

Function: It's very similar to a dart gun, however, instead of firing syringes, it fires a dart which contains a battery, and two electrodes which stick into a body. A button on the rifle sends a signal to the dart telling it to activate. As long as that dart stays inside the person's body, the user has a long-range leash on them.
Nice idea; it kind of reminds me of a wireless taser. The only problems with batteries are how long they retain a charge, how large they are (presumably small enough to fit inside the dart's shaft), and the locations they're used in (wireless signals do not always work well in areas such as a heavy storm, or somewhere with magnetic interference).