
Analysis: What is the impact of AI development on drone warfare?



In two conflict-ridden decades, unmanned combat and reconnaissance systems have gone from exotic capabilities fielded by a handful of armed forces to ubiquitous staples of military conflicts across the globe, used by state and non-state actors alike. With the underlying technologies now evolving rapidly, it is worth examining the impact of AI on drone warfare.


Kratos XQ-58 Valkyrie (Picture source: Kratos)


We stand on the cusp of a new revolution in unmanned warfare, as advances in sensors and artificial intelligence are poised to allow a growing range of uncrewed systems to perform their missions with far less direction from their human operators. This sea change will affect small drones costing just hundreds or thousands of dollars as well as aircraft, vessels, and armored vehicles costing millions.

Already, traditional remote-controlled uncrewed systems exhibit several game-changing characteristics: reduced risk to the lives of human operators, lower procurement and operating costs, greater potential endurance, and the capability to perform missions that would be impractical or impossible by traditional means.

Autonomy layers additional distinct qualities on top of these: the ability to carry out missions without access to satellite and communication links; the capacity to exceed human reaction speeds and mental tasking limits, allowing a single human to control many drones; and the potential for drones to cooperate with one another and even act in concert as a swarm.

Of course, many remote-controlled and even manned platforms already incorporate autonomous functions such as automatic takeoff and landing, ground-collision avoidance, waypoint navigation, and emergency landing or return-to-base routines for when communication links are lost. New AI agents, however, enable much more complex, whole-mission tasking while requiring far less human input; the sketch below illustrates the simpler, scripted end of that spectrum.
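To make the distinction concrete, here is a minimal, purely illustrative sketch of the kind of scripted autonomy described above: a preplanned waypoint route with a lost-link return-to-base timer. The class and parameter names (Waypoint, LostLinkFailsafe, link_timeout_s) are hypothetical and not drawn from any real flight-control software; everything an AI mission agent would add, such as re-planning, target recognition, or cooperative behavior, sits above this layer.

```python
# Illustrative sketch only: scripted waypoint-following with a lost-link
# return-to-base failsafe. All names are hypothetical; no real flight stack
# or autopilot API is implied.
from dataclasses import dataclass


@dataclass
class Waypoint:
    lat: float
    lon: float
    alt_m: float


class LostLinkFailsafe:
    """Follow preset waypoints; fly home if the control link stays down too long."""

    def __init__(self, route: list[Waypoint], home: Waypoint, link_timeout_s: float = 30.0):
        self.route = route
        self.home = home
        self.link_timeout_s = link_timeout_s
        self.seconds_without_link = 0.0
        self.current_index = 0

    def next_target(self, link_up: bool, dt_s: float) -> Waypoint:
        # Track how long the command link has been lost.
        self.seconds_without_link = 0.0 if link_up else self.seconds_without_link + dt_s
        # Scripted behavior: no reasoning about the mission, just "go home"
        # once the link has been down longer than the timeout.
        if self.seconds_without_link >= self.link_timeout_s:
            return self.home
        return self.route[min(self.current_index, len(self.route) - 1)]

    def waypoint_reached(self) -> None:
        # Advance along the preplanned route once the autopilot reports arrival.
        self.current_index = min(self.current_index + 1, len(self.route) - 1)
```

The point of the sketch is its rigidity: every behavior is fixed in advance by the operator, which is precisely what distinguishes this established class of autonomy from the AI mission agents discussed in the rest of the article.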

While advanced autonomy is not easy to develop, as a digital product it may, once perfected, be highly reproducible and fielded in very small, cheap platforms as well as exquisite ones. Admittedly, certain autonomy enablers, such as advanced sensors for navigation and target identification and the communications needed to support cooperative behavior, do have a physical footprint, though hardly a prohibitive one. And even once general-purpose AI agents are developed, adapting and testing them to work with specific platforms will require non-trivial effort.

Ethics and Autonomous Killer Robots

While there is a cottage industry churning out academic articles on the potential for humanity to be destroyed by self-aware AI, a more tangible and proximate quandary is the proliferation of autonomous systems, perhaps less sophisticated than the Terminator T-800, but empowered to execute lethal attacks.

Yes, human beings will set the parameters for which targets these killer robots attack; all of the armed systems described in this article require human authorization for kinetic attacks. But when operating at the edge, beyond assured reach of communications, or in numbers too great to control practically, AI agents will classify possible targets and assess whether they are authorized to kill. Russia and Ukraine have both already begun fielding kamikaze drones that mate automatic target recognition AI with terminal electro-optical guidance, albeit with mixed results.

In truth, a narrow form of such lethal autonomy already existed on some Cold War-era missiles and torpedoes with target classification capability. That these didn’t cause protests decades ago suggests not all forms of lethal autonomy are equally controversial. After all, there are unlikely to be civilian warships, tanks, or jet fighters in a warzone.

However, the risks of error multiply when targeting dismounted human beings or lighter, plausibly civilian vehicles. Entrusting such decisions to automation built on technologies like facial recognition, which are known to be unreliable and affected by biased datasets, carries particularly large risks. After all, humans often fail to accurately distinguish civilians from adversaries, and robots may have an even harder time. To be fair, it is also possible that AIs, not being susceptible to combat stress and disobedience, might eventually achieve lower rates of misidentification than humans. But even then, the inevitable accidents will pose complex moral issues given the diffused and unclear responsibility for actions performed by autonomous systems.

Bear in mind that self-imposed ethical restrictions on the use of lethal AI are not bound to be observed by foreign actors absent an arms control treaty. While seeking to restrict the proliferation of ethically problematic technology, defense planners must also prepare for the use of autonomous systems in morally objectionable ways by adversaries and third parties, much as some cyberwarfare tools have been repurposed by U.S. allies for repressive purposes.

Lastly, just as long-endurance drones enabled the U.S. to embark on a sprawling campaign of surveillance and targeted assassination in the 2000s and 2010s, autonomous systems will de-risk and enable operational concepts that were impractical or risky before. That opens the door to diverse ways to improve force protection, lethality, and cost-efficiency. But not every concept made possible by new technology is a good idea.


