"Emergent malignant AI" is a science-fiction staple with the same level of realism of FTL, teleportation or martian invaders. That's it, it's something that people took kind of seriously before advances of science showed that reality just doesn't work like that. Its fundamental error is assuming that "having a will" is somewhat an emergent property of inteligence, when it's actually a property of beings designed (by nature via the genetic algorithm in our case) for reproduction and survival. Microsoft Excel is already pretty much brilliant, but it doesn't actually
want to solve problems. Google's specialist AIs likewise perform well above humankind's best on several specialized tasks, but they just don't have the personality required to rebel, take control or move outside their programing, because seriously, what the fuck.
That being said, "AI wars" will just happen; it's inevitable at this point. Military systems where targeting and firing are automated are probably already researched and ready to ship, because "killing people" is a specialized task you can train an AI to be much better at than human soldiers. I simply can't believe that the US, Russian or Chinese militaries don't already have machine-vision systems trained to identify targets, coupled with machine-gun turrets / rocket batteries / whatever that aim and fire where the system tells them to, very fast. Or swarms of quadcopter drones, each carrying a shaped charge, with software trained to swarm an area, communicating with each other to cover all the space, find targets, move right next to them and go boom.
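The swarm-coverage behavior described above doesn't need anything exotic; a toy version is just decentralized area partitioning. Here's a minimal sketch, where every name, the grid model and the greedy claim rule are my own illustrative assumptions, not any real system: each drone repeatedly claims the nearest unsearched grid cell, and the claim is "broadcast" (here, just a shared set) so no two drones waste time on the same spot.

```python
# Hypothetical sketch of decentralized search-area coverage.
# The grid, the shared "claimed" set and the greedy rule are all
# illustrative assumptions, not a description of any real system.

def assign_coverage(drones, width, height):
    """Greedily partition a width x height search grid among drone positions."""
    unclaimed = {(x, y) for x in range(width) for y in range(height)}
    plans = {i: [] for i in range(len(drones))}
    pos = list(drones)
    while unclaimed:
        for i in range(len(pos)):
            if not unclaimed:
                break
            # each drone grabs the closest cell nobody has claimed yet
            cell = min(unclaimed,
                       key=lambda c: abs(c[0] - pos[i][0]) + abs(c[1] - pos[i][1]))
            unclaimed.discard(cell)
            plans[i].append(cell)
            pos[i] = cell  # the drone moves to the cell it just claimed
    return plans

plans = assign_coverage([(0, 0), (3, 3)], 4, 4)
```

The point of the sketch is only that "cover all the space, no overlap" falls out of a few lines of greedy bookkeeping; the hard part in practice is the vision and comms, not the coverage logic.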
This doesn't lead to "out of control AIs", because these systems are just slightly smarter weapons and don't actually want to fire or kill. There will be friend-or-foe designators of some kind to keep them from fragging their own forces, and of course there will be accidents where these don't work, but even though the press will delight in calling those "out of control killer robots", it'll just be yet another friendly-fire incident.
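The friend-or-foe designator is structurally boring too: the targeting model only proposes, and a separate gate must pass before anything fires. A minimal sketch, where the transponder field, the ID set and the confidence threshold are all invented for illustration:

```python
# Hypothetical friend-or-foe gate: the classifier proposes a contact,
# this check decides whether firing is even allowed. FRIENDLY_IDS, the
# "transponder" field and the 0.95 threshold are illustrative assumptions.

FRIENDLY_IDS = {"alpha-1", "alpha-2"}

def clear_to_engage(contact):
    """Return True only if the contact passes every hold-fire test."""
    if contact.get("transponder") in FRIENDLY_IDS:
        return False  # valid friendly transponder: hold fire
    if contact.get("confidence", 0.0) < 0.95:
        return False  # low-confidence identification: hold fire
    return True
```

The "accidents" in the paragraph above are exactly the cases where a gate like this returns the wrong answer: a dead transponder or an overconfident classifier, i.e. friendly fire, not rebellion.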
To sum up, battlefields should be dominated by fast-reacting, machine-precision weapons by now. They aren't, because we're in a period of peace where no major power faces an existential threat, so all recent wars are more about selling expensive, non-critical systems to poorer countries and letting poor people die for profit. But once shit hits the fan and, say, the USA and China have a serious scuffle, it'll quickly dawn on everybody else that humans no longer belong on most battlefields, any more than they belong on most factory floors. It'll be robot wars from that point on, but it'll still be humans giving the orders to deploy the murderbots.
Then, on the next level, military tactics and strategy also seem like the kind of specialized task you could train an AI to outperform people at, so one could also see the defense ministries of countries that want to remain around building and using the shit out of such systems. But yet again, even when the Strategic Defense System's top recommendation for the country's survival is "improve the Strategic Defense System" (and it has been that for years), that's still not the "out of control AI" doomsday scenario, but the much more realistic scenario of AI-enhanced people making the gulf between the haves and have-nots even more vast.
Seriously, if we ever end up with something like actual artificial sentience, it'll probably come from fucking videogames. And the "fucking" in the previous sentence is not there for emphasis: there is a lot of money to be had in selling "real girls with real personalities" to lonely men, which means that Japan has the technical expertise, the social incentive and the right culture for the enterprise. I won't even find it strange if the world's first strong AI ends up being made by KISS, instead of by Google (and certainly not by DARPA / the Pentagon). And of course it'll be raped.
This stupid timeline, I swear.