Rise of machines may spark new conflicts
Development of Artificial Intelligence could have potentially devastating implications
Russia's latest "Zapad" military exercise is underway on Nato's eastern border.
Tens of thousands of soldiers are taking part in the massive four-yearly war games, which are both a drill and a show of strength directed at the West.
Next time around, in 2021, those troops might be sharing their battle space with self-driving drones, tanks, ships and submersibles.
Drone warfare is hardly new. What is now changing fast, however, is the ability of such unmanned systems to operate without a guiding human hand.
Critics have long feared countries might be more willing to go to war with unmanned systems. Now, some see a very real risk that control might pass beyond human beings altogether.
Tech entrepreneur Elon Musk has long warned that humanity might be on the verge of some cataclysmic errors when it comes to artificial intelligence.
Last month, he warned that the development of autonomous weapons platforms might provoke a potentially devastating arms race.
As if to reinforce Mr Musk's point, Russian President Vladimir Putin told students shortly thereafter that he believed AI technology would be a game changer, making it clear Russia would plough resources into it. "The one who becomes leader in this will become ruler of the world," Mr Putin was quoted as saying.
China, too, is pushing ahead, and is believed by some experts to now be the global leader when it comes to developing autonomous swarms of drones.
Even more important than what is happening in robotics may be the wider developments in artificial intelligence. That won't necessarily make warfare more deadly. While greater accuracy might reduce casualties, some analysts fear that the changes brought by new unmanned systems might themselves fuel new conflicts.
AI could dramatically increase the efficiency of surveillance technology, allowing a single system to monitor perhaps millions of digital conversations, hacked personal devices and other sources of information. The implications could be terrifying, particularly in the hands of a state with little or no democratic oversight.
Most countries deliberately keep their defence AI work secret, ultimately fuelling the very arms race Mr Musk was warning about.
Some scientists already worry about a real-world version of the premise for the "Terminator" film franchise in which the US, fearing a cyber attack, hands control of key military systems to the AI Skynet. (Skynet, fearing its human creators might choose to turn it off, immediately launches a full-scale nuclear attack on humanity.)
For now, Western nations at least look keen to keep a human in the "kill chain". Not all countries may make that choice, however.
Russia has long had a reputation for trusting machines more than people, at one stage considering an automated system to launch its nuclear arsenal should its command structure be destroyed by a first strike.
Outside of the military, there is evidence AI algorithms have already alarmed their creators.
In August, Facebook shut down an AI experiment after programs involved began communicating with each other in a language the humans monitoring them could not understand.
Is this the end for ordinary human soldiering? Almost certainly not. It's even been argued that a more complex, high-tech battlefield might require more soldiers, not fewer.
Robotic systems may be vulnerable to hacking and jamming, or may simply be rendered inoperable through electronic warfare. Such techniques have allowed US-led forces in Iraq to largely negate the off-the-shelf drones used by the Islamic State in Iraq and Syria.
Ironically, the North Korean crisis reminds us that the most dangerous technologies may well remain those invented more than 70 years ago - atomic weapons and the missiles that carry them.
Even if mankind can avoid a nuclear apocalypse, however, the coming AI and robotic revolution may prove an equally serious existential challenge. - REUTERS
Peter Apps is a global affairs columnist at Reuters