
What AI means for future military conflict

The emergence of artificial intelligence (AI) presents great opportunity and risk. This is true for both the private sector and the military. Artificial intelligence could increase the effectiveness of military operations by providing real-time intelligence that lets commanders take advantage of fleeting tactical opportunities. In addition, unmanned systems are much cheaper to operate than conventional machines and pose fewer risks to human operators. However, significant ethical and security questions arise with the use of AI in combat. This article will examine what AI means for the future of warfare.

Why this matters


“The one who becomes the leader in this sphere (AI) will be the leader of the world” was the rather ominous verdict of the Russian president Vladimir Putin in 2017. Seven years later, AI has advanced considerably, and China and the United States have invested heavily in their AI programs. The importance of leveraging artificial intelligence for defense will increase even more in the future.

Command and control

The Pentagon in Arlington, Virginia

Since the Industrial Revolution, the world’s great powers have been able to equip, field, and support vast armies. Controlling those huge forces amid the chaos of battle, however, has always proved extremely difficult. The huge and wasteful offensives of World War I often failed because communication broke down once troops were sent over the top. Enormous opportunities were squandered by commanders who could not keep pace with their men.

Modern conflicts are even more complicated and involve the coordination of different allies and branches of service in limited, asymmetric conflicts. In such wars, quick decision-making and the exploitation of fleeting tactical opportunities are essential. Tracking and securing the flow of information in a contemporary war has surpassed human understanding. Artificial intelligence provides the means to synthesize various streams of information into a manageable picture of the battlefield. The ground commander can quickly make key decisions based on solid intelligence and achieve a level of cohesion that yesterday’s generals could only dream of.

Maintaining the flow of information on the battlefield while denying it to the enemy has always been a priority for commanders. The value of knowing where hostile forces are at any given time cannot be overstated. In the old days, a general had to settle for the opinion of a scout on horseback. Aerial reconnaissance was an important step forward, but most wars in history were fought with limited and unreliable information. Command and Control (C2) is a core principle of the US military, and with the help of AI, the “fog of war” can be lifted, or at least drastically reduced.

Swarms of drones

Taking out a drone isn’t all that difficult, but neutralizing multiple drones working together is another matter entirely. The United States and the United Kingdom are already exploring the possibilities of drone swarms in training exercises. As the swarm communicates, it provides an accurate, real-time picture of the battlefield. With improvements in autonomous navigation and the relatively low cost of individual drones, it will soon be possible for an operator to oversee dozens, if not hundreds, of drones.
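The idea of a swarm building a shared, real-time picture can be illustrated with a toy sketch. This is purely illustrative; all names and data below are hypothetical, and real swarm software involves far more than merging reports.

```python
# Toy illustration: each drone in a swarm reports its sightings, and a
# simple fusion step merges the reports into one shared picture that a
# single operator could review. All names and values are hypothetical.

def fuse_reports(reports):
    """Merge per-drone sightings into a map of contact type -> observations."""
    picture = {}
    for drone_id, sightings in reports.items():
        for contact, position in sightings:
            # Each contact accumulates every (drone, position) observation,
            # so cross-confirmation by multiple drones is visible at a glance.
            picture.setdefault(contact, []).append((drone_id, position))
    return picture

reports = {
    "drone_1": [("vehicle", (3, 4))],
    "drone_2": [("vehicle", (3, 5)), ("infantry", (7, 2))],
}

picture = fuse_reports(reports)
# "vehicle" was spotted by two drones; the operator sees one consolidated list.
```

The point of the sketch is the design choice: no single drone needs the full picture, and losing one drone only removes its observations rather than blinding the swarm.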

In addition to reconnaissance, drone swarms could be used against high-value military targets. An MQ-9 Reaper unit (four aircraft plus a control station) costs $56.5 million, but a kamikaze drone like the Switchblade 300 costs a fraction of that price (about $80,000). Other nations field even cheaper one-way drones, and destroying them with expensive conventional weapons is a losing economic proposition. Because drones are much cheaper to manufacture and maintain, they could allow a much weaker army to compete with a stronger opponent. Ukraine effectively neutralized Russia’s Black Sea Fleet without a fleet of its own, using drones and missiles.
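The cost asymmetry in those figures is worth making explicit. Using only the numbers quoted above, a quick back-of-the-envelope calculation shows roughly how many kamikaze drones could be bought for the price of one Reaper unit:

```python
# Back-of-the-envelope comparison using the figures cited in the text.
reaper_unit_cost = 56_500_000   # USD: four MQ-9 aircraft plus a control station
switchblade_cost = 80_000       # USD: approximate Switchblade 300 unit cost

drones_per_reaper_unit = reaper_unit_cost // switchblade_cost
print(drones_per_reaper_unit)   # roughly 706 one-way drones per Reaper unit
```

The ratio, on the order of 700 to 1, is what makes swarms attractive to weaker militaries: even high attrition among cheap drones can be an acceptable trade.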

Drone swarms could change the existing balance of power.

Countermeasures

The prospect of unmanned aircraft systems (UAS) dominating the battlefields of the future will not go unchallenged. History shows that any new weapon of war is invariably followed by effective countermeasures. When tanks debuted in World War I, the Germans quickly fielded an anti-tank rifle. By World War II, tanks were much improved, but the United States devised a lightweight and effective anti-tank weapon: the M1A1 Bazooka.

Several technology firms are developing counter-UAS systems to shoot down enemy drones more efficiently and cost-effectively than conventional systems. RTX is developing the Coyote, essentially an anti-drone drone that can be launched from a variety of platforms. Anduril, a young defense upstart, is working on the Roadrunner, a system that sits somewhere between a missile and a drone; the name is a playful nod to the RTX system. Drones are also vulnerable to jamming devices. Lithuania has sent thousands of Skywiper Electronic Drone Mitigation Systems (EDM4S) to Ukraine to counter Russian drones. Naturally, this leads to the development of drones that can operate in GPS-denied environments.

AI can also free up many man-hours in threat analysis. As Army Colonel Richard Leach explains in an article for the Department of Defense:

Let AI identify key insights and perhaps do some of the underlying analysis. Let the analysts focus on the difficult problem so they don’t waste time, resources and people.

Political and ethical concerns


In January 2023, the Department of Defense announced updates to its autonomous weapons protocol, Directive 3000.09. The changes reflect the rapid advances in technology over the past decade. The most notable change was in the language defining autonomous and semi-autonomous weapon systems: the 2012 text used the phrase “human operator,” while the revision says simply “operator,” wording that allows for non-human control of weapons systems.

Although the Department of Defense maintains that its core values have not changed, particularly regarding the use of lethal force, the revision highlights the fundamental problem of AI in warfare: diplomacy and politics move much more slowly than technological progress. A rapidly advancing technology is difficult to regulate, and international arms control over the military application of AI is more difficult still.

Historically, arms control agreements do not have a good track record. The Washington Naval Conference of 1921–22, for example, attempted to limit the size of warships to ease rising post-war tensions; the resulting treaty had some success but was abandoned in the mid-1930s. Similarly, the Nixon administration attempted to ease tensions with the Soviet Union through the Strategic Arms Limitation Talks (SALT I and II). SALT II was signed in 1979 but was never ratified after the Soviet invasion of Afghanistan. History shows that enforcement and trust tend to be the sticking points of arms control.

Given the current state of US-Russia relations, rising tensions with China, and the ongoing conflict in the Middle East, there probably isn’t much appetite for arms control treaties. Developing AI for military use is a risk, but not developing it is an even greater risk for the United States and its allies.

Conclusion

Artificial intelligence has the potential to significantly alter the way modern wars are fought. In asymmetric conflicts, where quick decisions are needed to take advantage of fleeting tactical opportunities, speeding up decision-making could help remove the fog of war. Also, the prospect of drone swarms could drastically reduce the costs and risks associated with more traditional weapons systems. A swarm of kamikaze drones worth a few thousand dollars would be a serious threat to a target worth tens or hundreds of millions of dollars.

On the other hand, countermeasures are already in development, and there are serious ethical questions to consider. The Department of Defense insists that lethal force will never be used autonomously, but other regimes will not be nearly so hesitant. Because the cost of entry is much lower than for conventional weapons, it won’t just be established powers that develop AI for military use. Arms control treaties don’t have much of a track record, so it would be unwise to rule out, on principle, developing the capability to use lethal force autonomously. Tough choices lie ahead about the future of artificial intelligence.


The post What AI Means for Future Military Conflict appeared first on 24/7 Wall St.
