
Decentralised AI and its implications for military command and control


Decentralised AI is an approach in which intelligence and decision-making are not concentrated at a single central point. Unlike traditional systems with one controlling node, decentralised AI works through a network of agents, each making decisions from its own observations and communicating with its neighbours. The architecture is robust: there is no single point of failure, and the system continues to function even if some agents are lost. Intelligence emerges from the group working together, known as swarm intelligence, in which each agent follows simple rules and, through interaction, the collective solves complex problems. Inspired by how ants forage and birds flock, this design allows quick responses and scales simply by adding more agents.

Decentralised AI is a strategic move to address the weaknesses of traditional military methods. In the past, military power relied on large, expensive platforms such as aircraft carriers and fighter jets. However, with new threats such as long-range missiles, these systems are becoming vulnerable and costly. Decentralised AI offers a solution by shifting to “Mosaic Warfare.” This means using many smaller, cheaper, and networked platforms instead of a few expensive ones, creating many targets for the enemy and making it harder for them to focus on one. If one part is lost, the others can adapt and continue the mission, improving survival and effectiveness.

The use of Decentralised AI in warfare has been possible because of the following key technological advances:

Swarm Intelligence: This refers to how groups of simple agents, such as drones or robots, work together without a leader. Each agent follows simple rules and communicates with nearby agents, creating a flow of information that helps the group adapt and make quick decisions. This allows the swarm to cover a larger area, respond quickly to changes, and remain robust, with no single point of failure.
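As a toy illustration of these local rules, the sketch below (a simplified, hypothetical model, not any fielded system) moves 2-D point agents using only cohesion and separation forces computed from neighbours within a sensing radius. No agent ever sees the whole swarm, and there is no central controller.

```python
import math
import random

def step(agents, neighbour_radius=5.0, cohesion=0.05, separation=0.2):
    """One synchronous update of a leaderless swarm: each agent reacts
    only to neighbours within its sensing radius."""
    new_agents = []
    for (x, y) in agents:
        neighbours = [(nx, ny) for (nx, ny) in agents
                      if (nx, ny) != (x, y)
                      and math.hypot(nx - x, ny - y) < neighbour_radius]
        dx = dy = 0.0
        if neighbours:
            # Cohesion: drift toward the local centre of mass.
            cx = sum(n[0] for n in neighbours) / len(neighbours)
            cy = sum(n[1] for n in neighbours) / len(neighbours)
            dx += cohesion * (cx - x)
            dy += cohesion * (cy - y)
            # Separation: push away from neighbours that are too close.
            for (nx, ny) in neighbours:
                d = math.hypot(nx - x, ny - y)
                if 0 < d < 1.0:
                    dx -= separation * (nx - x) / d
                    dy -= separation * (ny - y) / d
        new_agents.append((x + dx, y + dy))
    return new_agents

random.seed(0)
swarm = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]
initial = list(swarm)
for _ in range(50):
    swarm = step(swarm)
```

Because each update uses only neighbour positions, losing any one agent simply shrinks some neighbourhoods; the remaining agents keep flocking, which is the resilience property the text describes.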


Edge AI: This technology processes data at the location where it is collected, instead of sending it to a distant server. This is important for military use because it reduces delays and allows for fast decision-making. For example, a drone with edge AI can spot a target and act without needing to contact a central command, which may be in a low-bandwidth area.
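The bandwidth saving can be shown with a minimal sketch (the threshold and detector scores are made up, not from any real system): the platform classifies each reading locally and transmits only readings that cross an alert threshold.

```python
def edge_decision(score, threshold=0.8):
    """Decide on-device whether a detection is worth acting on,
    with no round trip to a remote command server."""
    return "alert" if score >= threshold else "discard"

# Simulated detector confidences computed on the platform itself.
scores = [0.12, 0.55, 0.91, 0.30, 0.84]

# Only alerting readings would ever leave the device.
transmitted = [s for s in scores if edge_decision(s) == "alert"]
```

Here only two of five readings would be transmitted; the rest are handled, and discarded, at the edge, which is what keeps the system usable in low-bandwidth or contested environments.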

Federated Learning: In this method, AI models are trained on data that remain on local devices, and only the learning results (model updates) are shared with a central server. This keeps sensitive military data private and allows locally tailored models, improving the development process.
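A minimal federated-averaging (FedAvg-style) sketch, using a toy one-parameter model and invented local datasets: each "device" fits y = w·x on data it never shares, and only the resulting weights are averaged centrally.

```python
def local_update(w, data, lr=0.1):
    """One pass of gradient descent on data that never leaves the device.
    Toy model: fit y = w * x by minimising squared error."""
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(global_w, device_datasets):
    """FedAvg round: each device trains locally from the shared starting
    point; only the updated weights are averaged at the server."""
    local_weights = [local_update(global_w, d) for d in device_datasets]
    return sum(local_weights) / len(local_weights)

# Three devices hold private samples of y = 3x; the raw data is never pooled.
datasets = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(0.5, 1.5)],
    [(1.5, 4.5), (3.0, 9.0)],
]
w = 0.0
for _ in range(30):
    w = federated_average(w, datasets)
# w converges to 3.0 even though the server never saw a single data point.
```

The same pattern scales to real neural networks: the averaged object is the weight vector, never the observations, which is why the technique suits sensitive sensor data.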

Human-Machine Teaming (HMT): HMT involves humans and machines working together. AI helps humans by managing simple tasks and processing large amounts of data, while humans focus on big decisions. This teamwork aims to make decisions faster and better while maintaining human judgment and ethics.
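A toy sketch of this division of labour (all names, scores, and thresholds are hypothetical): the machine filters and ranks candidate detections, but nothing is acted on without an explicit human decision.

```python
def recommend(detections, threshold=0.7):
    """Machine side of HMT: filter noise and rank candidates so the
    human reviews only the most credible detections."""
    return sorted((d for d in detections if d["score"] >= threshold),
                  key=lambda d: -d["score"])

def engage(candidates, human_approves):
    """Human side: no candidate is actioned without explicit approval."""
    return [c for c in candidates if human_approves(c)]

detections = [{"id": "t1", "score": 0.90},
              {"id": "t2", "score": 0.40},
              {"id": "t3", "score": 0.75}]

# The human rejects t3 after review; only t1 is actioned.
approved = engage(recommend(detections), lambda c: c["id"] != "t3")
```

The design point is that the AI compresses the data-processing workload (t2 never reaches the human), while the accountable decision stays with the person.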

The following sections briefly analyse the progress different countries and regions are making in applying decentralised AI to military use.


The United States

The US military is pursuing a decentralised AI strategy to become more resilient against threats and reduce costs, focusing on human-machine collaboration, quick and flexible acquisition, and a shift towards distributed operations.

Command, Control, and Decision-Making

Started in 2017 to bring big data and AI into the US Department of Defense, Project Maven is a key example of AI-assisted decision-making. Its first goal was to use computer vision to analyse drone video. Maven uses a data-fusion platform to process large volumes of information, such as satellite imagery and sensor data, to find targets. The Maven Smart System (MSS) displays this data on a screen, marking targets with yellow boxes and friendly forces with blue boxes.

Maven’s impact was tested in the Scarlet Dragon exercises by the US Army’s 18th Airborne Corps. These exercises showed that Maven made the targeting process much faster. In one case, the AI found a tank in satellite imagery, a human confirmed it, and the AI then passed the data to a missile system for a strike, the first AI-guided artillery strike conducted by the US Army. A senior officer said they could decide on 80 targets per hour with Maven, compared with 30 without it. The time to send targeting data dropped from 12 hours to less than a minute. This was a substantial improvement: the targeting team worked as efficiently as in Operation Iraqi Freedom, but with only 20 soldiers instead of 2,000.


Project Maven demonstrates a change in military strategy. It is not a fully independent AI system but assists with distributed operations. By letting machines handle data analysis, Maven provides real-time information to operators and allows small units to make quick decisions without needing a large central team, which is important for the US Air Force’s “agile combat employment” strategy.

Air Domain: Collaborative Wingman. The US Air Force is using AI to make its expensive and vulnerable older aircraft more effective, as seen in the “loyal wingman” concept, which started with the Skyborg program and is now part of the Collaborative Combat Aircraft (CCA) project. Skyborg created a flexible system that can work with different aircraft and missions. This system was carried over into the CCA program, which is developing uncrewed aircraft to fly alongside piloted fighter jets. These CCAs are AI-powered jets that can fly alone or in groups for tasks such as air combat, electronic warfare, and intelligence gathering. The key technology is being developed by DARPA in the Air Combat Evolution (ACE) program, which focuses on human-machine teamwork in air combat and aims to build trust in AI. In 2023, the program successfully flew an F-16 test aircraft, the X-62A VISTA, in combat scenarios against a human-piloted F-16.

The intent is not to replace pilots but to change their roles: pilots focus on the larger mission while their aircraft and uncrewed systems handle simpler tasks, such as manoeuvres. The strategy is to present multiple targets to confuse the enemy. The CCA program plans to field 1,000 CCAs (two for each of 500 advanced fighters) to strengthen the fleet and gain air superiority against threats such as China’s defences.

Multi-Domain Operations on Land and Sea: The US military is also applying a decentralised approach on land and at sea, spreading out command and operations to make them more resilient and safer for human operators. The US Navy’s part of the Pentagon’s Joint All-Domain Command and Control (JADC2) initiative is Project Overmatch, which aims to connect platforms, weapons, and sensors into a robust naval network. Project Overmatch is notable for its decentralised model, using pilot programs such as Open DAGIR to bring in commercial technology partners such as Palantir and Anduril, speeding up the slow procurement process. A key development from this partnership is the naval version of the Maven program, which provides a unified display with real-time ship data, helping the Navy turn data into useful information for commanders and supporting better maritime operations.

The US Army is providing soldiers with AI tools on their phones and laptops to help them spot local threats, letting them operate in difficult areas without constant support or a stable Internet connection. Companies such as TurbineOne develop applications that use data from drones and sensors to quickly identify threats, such as enemy positions or drone launch sites; soldiers can even control groups of drones through these applications. The Army is also exploring AI for airspace management, helping commanders deal with multiple drones and weapons in the sky, and is evaluating self-driving trucks for supply missions to keep people out of harm’s way. These changes mark a shift from a top-down approach to a more flexible, bottom-up model, in line with the Agile Combat Employment (ACE) doctrine.

China

China is taking a strategic approach to military AI in order to lead in future conflicts. Its new military concept, “Intelligentized Warfare,” treats AI as the main driver of military transformation. The goal is to rely more on unmanned systems in combat, with humans stepping back from the front lines. The concept aims to control space, cyberspace, and the information domain, similar to the US’s Multi-Domain Operations. The main vehicle for achieving this is Military-Civil Fusion (MCF), championed by President Xi Jinping. MCF aims to break down barriers between the civilian and military sectors, harnessing civilian research and private industry for military purposes, in contrast to the US’s market-driven partnerships. By drawing on small, innovative companies, China intends to quickly acquire technologies such as drone software and navigation tools. This blending of civilian and military innovation is central to China’s goal of leading globally in AI.

Unmanned Hardware and Swarm Tactics: “Intelligentized Warfare” relies on new unmanned systems and swarm tactics. Military parades have featured “Robowolves,” four-legged unmanned ground vehicles that operate in groups alongside PLA soldiers, performing tasks such as scouting, clearing mines, and attacking targets. Some robots function as scouts, whereas others are shooters, similar to a wolf pack. The PLA also has unmanned naval helicopters that can work together and launch quietly from land or sea, supporting surveillance and precise attacks in challenging maritime areas. China uses swarm tactics to exploit the excessive cost of defending against drones. To make swarms more survivable, Chinese engineers have created a “terminal evasion” system that lets small drones use rocket boosters to dodge missiles at the last moment. Tests show a survival rate of over 87%, and the idea rests on the belief that making drones harder to destroy helps a swarm overwhelm air defences, forcing enemies to expend expensive defences against many cheap drones. One person can control a whole swarm, reflecting the PLA’s focus on decentralised control. However, reliance on civilian drone manufacturers, such as DJI, raises issues with secure supply chains and military standards.
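The cost-exchange logic can be made concrete with a back-of-the-envelope calculation. Purely for illustration, assume the reported 87% figure is the drone’s survival probability against a single interceptor shot, that shots are independent, and use invented unit costs (neither cost is from the source).

```python
def interceptors_per_kill(p_survive):
    """Expected interceptors expended to destroy one evading drone,
    modelling each shot as an independent Bernoulli trial with
    kill probability 1 - p_survive (a geometric distribution)."""
    return 1.0 / (1.0 - p_survive)

# Illustrative, not official, unit costs.
drone_cost = 20_000           # cheap attritable drone
interceptor_cost = 1_000_000  # high-end air-defence missile

shots = interceptors_per_kill(0.87)                 # ~7.7 shots per drone
cost_ratio = shots * interceptor_cost / drone_cost  # defender's cost per kill
```

Even under these rough assumptions the defender spends several hundred times the drone’s value per kill, which is the asymmetry that lets a cheap swarm exhaust an expensive air-defence magazine.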

European Union

The European Union’s approach to military AI differs from that of the US and China. The bloc wants to innovate while adhering to ethical principles, and has developed a policy that recognises AI’s strategic importance while insisting on human control and oversight.

A Human-Centric Approach to Military AI. The EU’s stance is clear in its AI Act, which excludes military applications from its scope but reflects a “human-centric, risk-based model” of AI regulation. This approach emphasises accountability, compliance with international humanitarian law, and the prevention of conflict escalation by maintaining human oversight. The European Parliament has called for a ban on lethal autonomous weapon systems (LAWS).

The EU and its member states face a significant challenge in their ethical approach to AI. They know that AI can transform military technology, but their ethical stance might leave them behind in the global AI race. There are serious ethical issues, such as automation bias, where humans might follow incorrect AI advice, and AI trained on biased data, which can lead to mistakes and discrimination. There is also concern that allowing AI to make important decisions could remove human moral responsibility in war. This creates a dilemma: how to use military AI to protect sovereignty and stay ahead technologically without risking “ruthless precision” or losing “human dignity.”

The EU’s solution is to work together and invest in AI programs that keep humans involved instead of fully switching to autonomous combat.

Germany: The German government is working to modernise its defence by facilitating military collaboration with private startups. Helsing, a defence technology company in Munich, is key to this effort. Helsing uses a “software-first” approach to create AI systems for new and existing platforms. For example, the HX-2 strike drone is a software-defined drone with AI that can operate in swarms and find targets without a constant data connection, and it is designed to withstand electronic warfare. Helsing also developed the Altra software platform, which can be used with different systems to create an AI-based decision chain for land forces. Another Helsing example is the SG-1 Fathom, an underwater glider designed for covert surveillance; it uses the Lura AI software to process data independently, reducing the need for high-bandwidth communication.

France: France’s innovation strategy is led by the Agence Innovation Défense (AID), which accelerates dual-use technologies through open innovation and public-private partnerships. The AID’s Red Team Defence program looks ahead to future threats and shapes military strategy using AI tools and autonomous systems. To maintain control over AI technologies, France set up the Ministerial Agency for Artificial Intelligence in Defence (AMIAD) in 2024.

The United Kingdom

The UK has established the Defence AI Centre (DAIC) to accelerate the use of AI in defence. A key project is Team Tempest, a collaboration with Italy and Japan to develop a new fighter jet. This jet will use AI to help the pilot by processing large volumes of data, and even taking control if the pilot is overstressed or blacks out. Tempest will also be able to control drone groups and act as a collaborative partner. The British Army has separately demonstrated that AI can be used to stop drone attacks with energy weapons.

These efforts in Germany, France, and the UK demonstrate a European trend of combining human and machine efforts. Instead of fully switching to autonomous combat, they focus on using AI to improve their current abilities and support human operators. They prioritise software solutions and collaborate with private startups.

India

India aims to be self-reliant in military AI, focusing on affordable and adaptable solutions tailored to its needs. This is part of the “Make in India” plan, supported by a growing domestic defence startup scene. The Innovations for Defence Excellence (iDEX) initiative helps startups create military technology, aiming to build a strong local industry and reduce foreign dependence. India’s strategy uses AI to tackle specific challenges, such as reconnaissance and mine clearance. The AI defence market in India is expected to grow, driven by private-sector advances in drones, surveillance, and simulation, reflecting a strategy of building a solid foundation for a modern and resilient force.

Air and Land Teaming: India is working on a project called the HAL Combat Air Teaming System (CATS). This system, developed by Hindustan Aeronautics Limited (HAL), involves a manned fighter plane or “mothership,” that controls a group of unmanned combat aerial vehicles (UCAVs) and unmanned aerial vehicles (UAVs) using AI. The main aim is to observe from high altitudes and conduct attacks without risking human lives. The system includes the CATS Warrior, a UCAV that flies with the mothership, and CATS ALFA, a glider that releases a group of loitering munitions. These munitions, called ALFA-S, use AI to find and lock onto targets on their own, allowing them to attack without human help.

On land, the Defence Research and Development Organisation (DRDO) plays a key role in AI and robotics. Its main lab, the Centre for Artificial Intelligence and Robotics (CAIR), is creating a Multi-Agent Robotics Framework (MARF) to help different robots work together. This includes unmanned ground vehicles (UGVs) such as MUNTRA, which can be controlled remotely or work independently for tasks such as surveillance and clearing mines. CAIR is also working on secure communication and control systems to support these efforts.

Swarm and Surveillance Systems: India is learning from recent conflicts, such as the Russia-Ukraine war, by investing in low-cost swarm systems. The Sheshnaag-150 long-range swarm drone system, developed by NewSpace Research and Technologies, is a good example. These drones can operate autonomously, allowing large-scale attacks on important targets and constant surveillance over high-altitude borders. The software that allows the drones to work together is the key component. In a test, the company demonstrated control of a swarm flying more than 2,500 km away, showing that long-range missions are feasible.

In the naval arena, the DRDO’s Trigun system, developed with the Indian Navy, uses AI and data analysis to provide a real-time view of maritime threats. By combining data from various sources, the system helps the Indian Navy detect risks and improve its awareness during conflicts.

These projects show that India is building a defence system that is practical and cost-effective, focusing on local solutions to its specific needs.

Geopolitical and Strategic Implications

Different national approaches to AI are changing global security and challenging traditional ideas of power and conflict. The term “AI arms race” is often used to describe the fierce competition between countries such as the US and China, but it can be misleading: decentralised AI is not a weapon like a missile; it is a technology that helps militaries work faster and smarter. The real competition is not just about military tools but about the systems that support AI, such as advanced chips, energy, critical minerals, and skilled workers. The ability to quickly create and field modern technologies matters because this competition is complex, involving economic, military, and political factors. The US has larger cloud service providers, which are important for AI, but China is catching up with its own developments in this area. The “AI arms race” framing is often used in politics to justify more defence spending or to criticise other countries; however, it does not fully capture a competition shaped by private companies, market forces, and differing national plans.

The use of AI in military operations changes how countries deter and escalate conflict. Traditional deterrence relies on human decision-making and communication, but AI can respond faster, which can lead to accidental conflicts because decisions may occur too quickly for humans to intervene. There is also a risk that humans might blindly follow incorrect AI suggestions, and AI trained on bad data could make significant mistakes. On the other hand, AI can help deter conflict by quickly analysing data from many sources and identifying threats, especially in cyber warfare, and this can make a country’s response more credible.

The key question is whether AI can be designed to understand and respond to signals to reduce conflict, or whether it will lead to a new type of fast-paced conflict in which both sides react quickly without thinking.

Rear Admiral S Kulshrestha, retd, PhD
Former Director General of Naval Armament Inspection (DGNAI) at the Integrated Headquarters of the Ministry of Defence (Navy), Rear Admiral Kulshrestha was advisor to the Chief of the Naval Staff prior to his superannuation in 2011. An alumnus of the Defence Services Staff College, Wellington, the College of Naval Warfare, Mumbai, and the National Defence College (NDC), Delhi, Admiral Kulshrestha holds two MPhil degrees in nanotechnology from Mumbai and Chennai Universities and a doctorate from the School of International Studies, JNU. He has authored the book “Negotiating Acquisition of Nanotechnology: The Indian Experience.”

