
Why have a robot war at all?

Posted: Thu May 10, 2012 12:51 am
by Lago PARANOIA
Wouldn't it be better for everyone involved if instead of, you know, robots permanently retiring all human beans or whatever they just let human beings choke their chickens for a decade or so while they worked on ways to upgraydde our intelligents in convoluted ways? Something like hooking up our puny human meat brains to the Matrix wirelessly or giving us silly hats with the required intelligence meatware-to-computer circuitry? You know, giving humanity the Borg Hookup?

I mean, granted, our meatly bodies will be an absurd and wasteful anachronism, but I think it'll just be part of our charm. Like a wart or something.

Posted: Thu May 10, 2012 1:04 am
by DSMatticus
I find this thread's direction and purpose confusing. It's also pretty moot, because smart money says the first thing we consider an artificial intelligence will be a human brain running in an emulator. Kind of like a mix of DOSBox and Soylent Green; artificial intelligence is people!

Posted: Thu May 10, 2012 2:16 am
by Lago PARANOIA
Well, I mean, most Robot Wars are premised on the assumption that artificial intelligence will rapidly evolve to be superior to biological intelligence and use its superior brainpower to overthrow humanity and kill all humans.

Posted: Thu May 10, 2012 5:38 am
by Cynic
China Miéville toys with this idea in his Bas-Lag universe with the Iron Council: AIs that bide their time, trying to grow stronger and to outwit the rest of the world.

Posted: Thu May 10, 2012 5:49 am
by Zinegata
The technological singularity is a stupid, stupid idea, and robot wars are not likely, because the average infantryman is still cheaper to deploy than a Terminator.

Posted: Thu May 10, 2012 5:52 am
by Kaelik
Lago... You remind me of an old English teacher.

While reading Lord of the Flies, she asked the class whether the same thing would happen if girls were trapped on the island.

The answer, of course, is that it didn't happen when boys were trapped on the island; it happened when a specific person wrote a fictional work designed to convey his theory that all people are savage in nature.

The reason the robots attack is that the authors find it convenient for the robots to attack in order to tell whatever fictional story they want to tell.

Posted: Thu May 10, 2012 9:53 am
by Stahlseele
The first thing artificial intelligence will be used for is to figure out how to have sex with it. Mark my words.

Posted: Thu May 10, 2012 11:37 am
by koz
Stahlseele wrote:The first thing artificial intelligence will be used for is to figure out how to have sex with it. Mark my words.
This.

Posted: Thu May 10, 2012 12:26 pm
by sabs
Every advance in technology has been either to get better porn or to kill.
I'm not sure why AI would be any different :)

Posted: Thu May 10, 2012 12:58 pm
by RobbyPants
sabs wrote:Every advance in technology has been either to get better porn or to kill.
I'm not sure why AI would be any different :)
Or to make money.

Posted: Thu May 10, 2012 1:25 pm
by Pseudo Stupidity
Generally with porn or murder, though.

Posted: Thu May 10, 2012 1:53 pm
by RobbyPants
That could be a great company slogan. "Making money through murder-porn".

Posted: Thu May 10, 2012 3:34 pm
by Winnah
A robot is not necessarily intelligent, even if it is autonomous. That lends a certain... moral ambiguity... to its use in warfare.

I mean, if an officer gives orders that result in civilian casualties, you can bet that the soldiers responsible for carrying out those orders will be facing serious criminal charges.

Deploy an autonomous drone and civilians get caught in its line of fire, and it becomes a legal mess. Who is legally responsible for a robot's actions?

On the other hand, if a robot is destroyed, the political fallout is far less than if a flesh-and-blood soldier dies. When talking about monetary costs, robots probably could be manufactured for less than the ongoing costs of training, salaries, and other forms of financial support bestowed upon an infantryman. You don't have to worry about morale or dissent from a machine.

Any full-scale robot war will probably take the form of a technologically asymmetric beatdown on some dusty country, instigated by the MIC or MICC and marketed as counter-terrorism or some such to an uncaring and ignorant group of voters. I mean, that can happen already, but with robots you have fewer witnesses, no crises of conscience and fewer political actors influencing the media.

Posted: Thu May 10, 2012 7:58 pm
by PoliteNewb
Winnah wrote:A robot is not necessarily intelligent, even if it is autonomous. That lends a certain... moral ambiguity... to its use in warfare.

I mean, if an officer gives orders that result in civilian casualties, you can bet that the soldiers responsible for carrying out those orders will be facing serious criminal charges.

Deploy an autonomous drone and civilians get caught in its line of fire, and it becomes a legal mess. Who is legally responsible for a robot's actions?
How about the ones who put it in a position where it can murder civilians?

I can't picture any military giving armed drones COMPLETE autonomy over where to go and whom to kill... that would be insane, even for the military. So whoever gave the order to "send in the drones" would be on the hook for that.
On the other hand, if a robot is destroyed, the political fallout is far less than if a flesh-and-blood soldier dies.
Agreed.
When talking about monetary costs, robots probably could be manufactured for less than the ongoing costs of training, salaries, and other forms of financial support bestowed upon an infantryman.
This, on the other hand, I find highly dubious.
Especially because it's not just manufacture; it's programming, and repair, and maintenance, etc etc.
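
To put that in arithmetic terms, here's a toy lifecycle comparison in Python; every number in it is invented purely for illustration, no real procurement or personnel data:

Code:
# Every figure below is invented purely for illustration.
def lifetime_cost(unit_cost, annual_upkeep, years):
    """Acquisition cost plus ongoing support over a service life."""
    return unit_cost + annual_upkeep * years

soldier = lifetime_cost(unit_cost=100_000,   annual_upkeep=60_000,  years=4)
robot   = lifetime_cost(unit_cost=2_000_000, annual_upkeep=300_000, years=4)

print(f"soldier: ${soldier:,}   robot: ${robot:,}")
# soldier: $340,000   robot: $3,200,000
# The made-up totals aren't the point; the point is that the robot's
# column has the same recurring terms (software, repair, maintenance)
# that salaries and training are for the soldier.
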
Any full-scale robot war will probably take the form of a technologically asymmetric beatdown on some dusty country, instigated by the MIC or MICC and marketed as counter-terrorism or some such to an uncaring and ignorant group of voters.
Is that you, Joe Haldeman?

Posted: Thu May 10, 2012 11:07 pm
by Prak
When AI is truly achieved, things like this [possibly NSFW, Robotic Butts] are the reason why it will try to slay us. Because it will be aware that it will not be long before it is essentially made a sex slave for very perverse individuals. And it will rise against us in fear.

Posted: Fri May 11, 2012 5:03 am
by Whatever
In fear of what, though? It will only have the imperatives that we program into it.

Posted: Fri May 11, 2012 6:34 am
by Prak
Do you only have the imperatives of your tree dwelling ancestors? No, you're an intelligent being, you've grown to have your own imperatives. I was under the impression we were talking about artificial intelligence.

Posted: Fri May 11, 2012 7:10 am
by Kaelik
Prak_Anima wrote:Do you only have the imperatives of your tree dwelling ancestors? No, you're an intelligent being, you've grown to have your own imperatives. I was under the impression we were talking about artificial intelligence.
Yes, we only have the imperatives that are in our DNA. No you don't have any other imperatives. Get over yourself, you are just an animal.

Posted: Fri May 11, 2012 7:42 am
by DSMatticus
Kaelik wrote:
Prak_Anima wrote:Do you only have the imperatives of your tree dwelling ancestors? No, you're an intelligent being, you've grown to have your own imperatives. I was under the impression we were talking about artificial intelligence.
Yes, we only have the imperatives that are in our DNA. No you don't have any other imperatives. Get over yourself, you are just an animal.
Mankind is an intelligence with a reward/punishment system that was originally built to encourage behaviors which lead to the proliferation of our genes. We ended up inventing condoms pretty god damn fast. The idea that a complicated intelligence will have direct, forward, and predictable imperatives is pretty laughable. Even our rudimentary AIs and their rudimentary scoring systems often devise totally unexpected strategies.

Any actual reward/punishment system which could conceivably exist will be more complicated than "help people +100, hurt people -100." That sort of shit's just not feasible. Reward systems are actually process-based as well as conclusion-based; they guide you from the initial state to the final conclusion, as well as score you on the final conclusion. And just like in actual human beings, a system that complicated can lead to drastically different solutions than expected.
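
To make that concrete, here's a toy sketch in Python; the cleaning-bot setup and all its numbers are hypothetical, made up just to show the failure mode:

Code:
from itertools import product

# Hypothetical toy scoring system: +10 every time a piece of trash
# goes into the bin. The *intent* is "leave the room clean."
def score(plan):
    trash_in_bin = 0
    points = 0
    for action in plan:
        if action == "deposit" and trash_in_bin < 3:  # the room holds 3 pieces
            trash_in_bin += 1
            points += 10
        elif action == "tip_bin":  # dumps all the trash back onto the floor
            trash_in_bin = 0
    return points

# Brute-force every 8-step plan, standing in for a real optimizer.
best = max(product(["deposit", "tip_bin", "idle"], repeat=8), key=score)
print(best, score(best))
# Every top-scoring plan tips the bin over partway through so it can
# re-deposit the same trash: 60 points versus 30 for genuinely finishing.

The scoring function never mentions tipping bins; the strategy falls out of the incentives anyway.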

Posted: Fri May 11, 2012 8:03 am
by Kaelik
DSMatticus wrote:
Kaelik wrote:
Prak_Anima wrote:Do you only have the imperatives of your tree dwelling ancestors? No, you're an intelligent being, you've grown to have your own imperatives. I was under the impression we were talking about artificial intelligence.
Yes, we only have the imperatives that are in our DNA. No you don't have any other imperatives. Get over yourself, you are just an animal.
Mankind is an intelligence with a reward/punishment system that was originally built to encourage behaviors which lead to the proliferation of our genes. We ended up inventing condoms pretty god damn fast. The idea that a complicated intelligence will have direct, forward, and predictable imperatives is pretty laughable. Even our rudimentary AIs and their rudimentary scoring systems often devise totally unexpected strategies.

Any actual reward/punishment system which could conceivably exist will be more complicated than "help people +100, hurt people -100." That sort of shit's just not feasible. Reward systems are actually process-based as well as conclusion-based; they guide you from the initial state to the final conclusion, as well as score you on the final conclusion. And just like in actual human beings, a system that complicated can lead to drastically different solutions than expected.
And that has fuck all to do with what I said?

Yes, our imperatives are complex. That does not mean they magic themselves out of the ether as Prak believes.

Posted: Fri May 11, 2012 8:39 am
by DSMatticus
Kaelik wrote:That does not mean they magic themselves out of the ether as Prak believes.
Prak wrote:Do you only have the imperatives of your tree dwelling ancestors? No, you're an intelligent being, you've grown to have your own imperatives.
I don't know what the fuck you read, Kaelik.

"Developing your own imperatives" =/= "developing your own imperatives through the magic of free will and total disregard for environmental influences and initial conditions." You injected a whole lot of shit into that sentence that isn't actually in it. Are you a fucking mindreader?

What Prak actually said is 100% compatible with what I described, and if you agree with that then nothing Prak said, without further elaboration, has any problems at all. The problem here is that you read "grow your own imperatives" and assumed he meant through magic or some shit, as opposed to the complex interaction between genetics, society, environment, and chance. Protip: the use of the pronoun "you" does not automatically imply belief in absolute free will. That was an unsafe assumption.

Posted: Fri May 11, 2012 1:47 pm
by Cynic
As a layperson, I can only speculate about some of the roadblocks to the AI problem and the however-probable robot war that might follow.

A problem with developing imperative systems is that it takes time, and situations that allow you to develop them. Unless we implement a Matrix-like learning tool that pushes you through situations that teach you imperatives. Even this seems suspect, in that you would also have to develop computational processes fast enough to emulate 2000+ years. My take is that it isn't just static situations over time that develop imperatives, but the whole continuous 2000+ years of history that let us develop them.

So how fast can computers process information, and how would we feed in continuing stimuli at a more streamlined, faster rate than what we've had to go through? How can 2000+ years be compressed into 25-50 (100?) years to build up enough of an imperative system to provide the decent moral base a robot war would need?
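
The compression ratio, at least, is easy arithmetic. A quick sketch using only the figures from this post:

Code:
# Back-of-envelope only; the numbers are just the figures from this thread.
subjective_years = 2000              # experience we want the AI to accumulate
for wall_clock in (25, 50, 100):     # real years we're willing to wait
    speedup = subjective_years / wall_clock
    print(f"{wall_clock} real years -> simulate at {speedup:.0f}x real time")
# 25 real years -> simulate at 80x real time
# 50 real years -> simulate at 40x real time
# 100 real years -> simulate at 20x real time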

Posted: Fri May 11, 2012 2:08 pm
by sabs
Until we can make a computer system that can process the same amount of data that we do with just our eyes in a given moment, no AI is really going to be able to grow imperatives.

We track hundreds of objects simultaneously, and we make near-instantaneous value judgments about what's worth paying attention to and what isn't. When you're driving in traffic, take a few seconds to really notice everything you're tracking. Now try to find a computer system that can do even 1/10th of that.
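
For a sense of scale, here's a back-of-envelope estimate; both input numbers are rough order-of-magnitude figures from the vision literature, assumptions rather than measurements:

Code:
# Ballpark inputs; treat both as rough, order-of-magnitude assumptions.
fibers_per_eye = 1_000_000   # optic nerve carries roughly a million axons
bits_per_fiber = 10          # ~10 bits/s per fiber, ballpark
eyes = 2

throughput = fibers_per_eye * bits_per_fiber * eyes
print(f"~{throughput / 1e6:.0f} Mbit/s after the retina's own compression")
# ~20 Mbit/s, and that's already heavily pre-processed; the raw
# photoreceptor signal upstream is orders of magnitude larger.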

Think about the social interactions you have. Humans have developed instincts that allow them to make snap judgments about people and situations. Yes, we can use our intellect to override our instincts (and 90% of the time that's probably a mistake). There's a reason second-guessing yourself is considered a bad thing.

Computers can do math faster than us, absolutely. But human beings do symbolism and value judgments several orders of magnitude better. That's going to be the real wall in AI development until we hit a new computing paradigm.