
Agentic AI and its impact on Command and Control of Military Operations


Artificial intelligence (AI) is transforming the management of complex decisions and operations. Two pivotal concepts in this domain are AI agents and Agentic AI. Although both contribute significantly to automation, their foundational structures and operational philosophies differ, affecting their reliability, adaptability, and scalability. AI agents are autonomous software programs designed to execute specific tasks in a particular environment. They possess a degree of independence, enabling them to respond to inputs and environmental changes without continuous human oversight. These agents often use advanced models such as large language models (LLMs) or large image models (LIMs) to make decisions and comprehend complex situations. Despite their proficiency in specific tasks, their capacity to adapt to novel or unforeseen situations is limited.

By contrast, Agentic AI surpasses the capabilities of individual AI agents by employing multiple specialised agents that collaborate flexibly. This approach incorporates elements such as action coordination among agents, a memory system for information retention over time, collective task planning, communication between agents using semantic protocols, and a higher-level agent (meta-agent) that oversees the coordination process. Agentic AI is specifically designed for intricate workflows in which diverse agents must communicate, dynamically assign tasks, and make decisions based on evolving circumstances.
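The contrast described above can be illustrated with a minimal sketch: a single task-specific agent versus an agentic system in which a meta-agent coordinates specialised agents and retains a shared memory. All class and method names here are hypothetical assumptions for illustration, not any real military or vendor API.

```python
class AIAgent:
    """A single agent confined to one narrow task."""
    def __init__(self, task):
        self.task = task

    def act(self, observation):
        # Responds only within its predefined task; it cannot re-plan.
        return f"{self.task}: handled {observation}"


class MetaAgent:
    """Oversees several specialised agents and a shared memory."""
    def __init__(self, agents):
        self.agents = agents
        self.memory = []  # information retained across steps

    def run(self, goal, observations):
        # Naive task assignment: pair each agent with one observation.
        plan = list(zip(self.agents, observations))
        for agent, obs in plan:
            result = agent.act(obs)
            self.memory.append(result)  # collective memory for later steps
        return self.memory


recon = AIAgent("reconnaissance")
logistics = AIAgent("logistics")
commander = MetaAgent([recon, logistics])
print(commander.run("secure area", ["sector A", "resupply route B"]))
```

The individual agents remain narrow; the adaptability comes from the meta-agent layer that assigns work and accumulates results over time.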

AI agents find applications in various domains, including conversational assistants, such as GPT-3-powered chatbots, content generation platforms, automated scheduling tools, and data summarisation systems. Conversely, Agentic AI is applicable in complex environments such as autonomous robotics coordination, military logistics management, healthcare decision support systems, and cybersecurity threat detection. The key distinctions between AI agents and Agentic AI systems are as follows:

Level of Autonomy


AI Agents: These agents exhibit moderate autonomy, being designed to perform specific tasks within defined parameters, respond to inputs, and adjust behaviour based on environmental changes. However, they are constrained by predefined rules and models, which limits their ability to deviate from programmed responses.

Agentic AI Systems: These systems operate with high autonomy and comprise multiple specialised agents working collaboratively. This enables them to address complex goals without constant human intervention. Coordinated autonomy allows these systems to dynamically assign tasks among agents, adapt strategies based on real-time data, and learn from interactions over time.


Adaptability

AI Agents: These agents have limited adaptability because of their narrow focus on specific tasks. While they excel in environments with predictable patterns, they may encounter challenges in situations that require significant flexibility or creativity. Their adaptability is restricted by their programming specificity.

Agentic AI Systems: These systems exhibit advanced adaptability, enabling them to reconfigure in response to changing conditions through inter-agent communication to manage dynamic workflows. This adaptability is essential for handling unpredictable situations, rendering Agentic AI suitable for complex and evolving environments such as military operations.

Inter-agent Communication


AI Agents: Typically, these agents operate independently within their respective domains, resulting in minimal or no communication between them. Their interactions are often limited to simple data exchanges, without deeper collaborative processes.

Agentic AI Systems: By contrast, these systems excel through robust communication between agents. They use semantic communication protocols that allow agents to share insights, coordinate actions, and synchronise their efforts efficiently. This collaborative framework enhances decision-making by integrating the knowledge of multiple agents. The interaction among multiple agents facilitates distributed problem solving and real-time strategy adjustments. By sharing information and coordinating actions, these systems can make quicker and more informed decisions than isolated AI Agents can.
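A toy sketch of the inter-agent communication described above is a shared message bus through which agents publish and receive insights. The topic names, message format, and agent roles are assumptions for the example, not a real semantic protocol.

```python
from collections import defaultdict


class MessageBus:
    """A minimal publish/subscribe channel shared by all agents."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, agent):
        self.subscribers[topic].append(agent)

    def publish(self, topic, message):
        # Every subscribed agent receives the shared insight.
        for agent in self.subscribers[topic]:
            agent.receive(topic, message)


class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def receive(self, topic, message):
        # Each agent integrates shared information into its own state.
        self.inbox.append((topic, message))


bus = MessageBus()
sensor, planner = Agent("sensor"), Agent("planner")
bus.subscribe("threat", planner)

# The sensor agent shares an observation; the planner can act on it
# without having made the observation itself.
bus.publish("threat", {"type": "UAV", "bearing": 270})
print(planner.inbox)
```

The design point is that decision-making improves because the planner's state now includes knowledge it never sensed directly, which is what distinguishes this from isolated AI agents exchanging raw data.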


Military Applications of AI Technologies

Military applications of AI technologies have moved beyond simple automation to highly advanced adaptive systems capable of independent operation across the air, land, sea, and cyber domains. Understanding the differences between AI agents and Agentic AI is essential to understanding how autonomous systems are transforming defence operations.

Examples of AI agents and AI-powered military systems


Autonomous and Unmanned Systems   

Drones and Unmanned Aerial Vehicles (UAVs): AI is crucial for navigation, target recognition, surveillance, and even autonomous combat in drones, such as Anduril’s Barracuda, Fury, and Ghost. Drone swarms, in which multiple drones coordinate to achieve a shared objective, have also been developed and evaluated.

Unmanned Ground Vehicles (UGVs): Used for reconnaissance, bomb disposal, supply transport, and other hazardous tasks to minimise human exposure to danger. Examples include mine detection UGVs like “Sapper Scout.”

Autonomous Submarines and Unmanned Surface Vessels (USVs): Countries are developing AI-powered autonomous submarines and surface vessels for navigation, surveillance, and combat. The U.S. Navy’s Global Autonomous Reconnaissance Craft (GARC) is an example.


Loyal Wingman Programs: Several countries (US, Australia, China, Russia, UK, Turkey, India) are developing “loyal wingman” aircraft, which are relatively inexpensive, autonomous uncrewed aerial systems designed to fly alongside and support crewed fighter jets, performing tasks like electronic attack/defence, ISR (Intelligence, Surveillance, and Reconnaissance), or acting as decoys.

Intelligence, Surveillance, and Reconnaissance (ISR)


Data Fusion and Analysis: AI systems process vast amounts of data from satellites, sensors, and surveillance platforms to identify patterns, detect threats, and provide real-time insights. Project Maven in the US, for example, uses AI to autonomously detect, tag, and trace objects or humans from video footage.

Target Recognition and Identification: AI-powered computer vision systems are used for the real-time threat detection and classification of vehicles, aircraft, personnel, and other targets.

Border Surveillance: AI-based surveillance systems, such as India’s AI-based Intrusion Detection System (AI-IDS) deployed along its borders with Pakistan and China, provide live feeds and automated alerts based on facial recognition and other threat indicators. Israel has used AI for border surveillance.

War Simulations and Planning: Generative AI can improve training programs with simulation software that builds virtual models for soldiers to refine combat skills and strategic planning (for example, China’s AI military commander for large-scale war simulations).

AI-piloted Fighter Jets: Sweden’s Centaur AI pilot, assessed in a Gripen jet, autonomously performed simulated air combat, demonstrating advanced threat response and manoeuvring.

It is important to note that the development and deployment of AI in military applications are constantly evolving, and many projects are classified. The examples above represent publicly acknowledged or discussed applications of AI agents and systems that are already in use or in advanced stages of development.

Examples of Agentic AI in Military

Although the military has long used AI for various tasks, the concept of “Agentic AI” implies a higher degree of independent decision making and proactive action, moving beyond simply processing data or assisting human operators.

Autonomous Weapon Systems (LAWS/AWS)

Counter-Rocket, Artillery, and Mortar (C-RAM) systems, such as the US Phalanx Close-In Weapon System (CIWS), can autonomously detect incoming projectiles (rockets, artillery shells, and mortar rounds) and engage them with rapid-fire guns without direct human intervention once activated and given engagement parameters. Similar systems exist for tanks (e.g., the Russian Arena and Israeli Trophy). While “agentic” in their ability to detect and engage, they operate within predefined rules of engagement set by human operators.

Active Protection Systems (APS) for Vehicles: These systems on tanks and other armoured vehicles can autonomously detect incoming anti-tank missiles or rockets and deploy countermeasures (e.g., interceptor projectiles, jammers) to neutralise the threat.

Autonomous Drones for Specific Tasks: Although debated, there have been reports, such as the UN Security Council report on the Kargu 2 drone in Libya in 2020, suggesting instances in which an autonomous drone may have identified and attacked a human target without further human intervention. These systems operate with varying degrees of autonomy, with a trend towards more independent actions within the defined parameters.

Swarm Intelligence for Drones


Coordinated Drone Swarms: Militaries are developing and testing drone swarms in which multiple autonomous drones communicate and cooperate to achieve a common objective, such as overwhelming enemy air defences, conducting coordinated reconnaissance, or performing complex search patterns. Each drone acts as an agent, making decisions based on its own observations and signals from peers, thus contributing to overall swarm intelligence. This is a prime example of decentralised Agentic AI. The US and Israel have conducted tests involving AI-guided combat drone swarms.

Adaptive Reconnaissance/Surveillance Swarms: Drones in a swarm can autonomously adapt their flight paths and sensor usage based on real-time information to optimise intelligence gathering and collectively identify targets or areas of interest.
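The decentralised coordination described in this section can be sketched with a toy update rule: each drone adjusts its position using only the peer centroid and a shared objective, with no central controller. The coefficients and the update rule itself are illustrative assumptions, not any fielded swarm algorithm.

```python
def swarm_step(positions, target, cohesion=0.1, attraction=0.2):
    """One update step: drift towards the swarm centroid and the target."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n  # peer centroid (x)
    cy = sum(p[1] for p in positions) / n  # peer centroid (y)
    new_positions = []
    for (x, y) in positions:
        # Purely local rule: stay near peers, move towards the objective.
        x += cohesion * (cx - x) + attraction * (target[0] - x)
        y += cohesion * (cy - y) + attraction * (target[1] - y)
        new_positions.append((x, y))
    return new_positions


# Three drones starting at different points converge on a shared objective.
drones = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
for _ in range(50):
    drones = swarm_step(drones, target=(5.0, 5.0))
print(drones)  # all drones end up near (5, 5)
```

No drone is told where the others will go; coherent group behaviour emerges from each agent applying the same local rule, which is the essence of swarm intelligence.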

Cybersecurity Agents (Hunter-Killer agents)


Autonomous Intelligent Cyber Defense Agents (AICA): NATO researchers have outlined architectures for AICA, which would be cyber “hunter-killer” agents deployed in military networks. These agents are envisioned to work in swarms to detect cyber-attacks, devise countermeasures, adapt their responses in real time, operate without constant human instruction to patrol networks, and neutralise threats.

Autonomous Penetration Testing: Agentic AI bots can continuously simulate multi-stage cyberattacks against an organisation’s own systems to identify weaknesses before adversaries. These “red team” agents act autonomously to probe defences and report vulnerabilities.
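A highly simplified sketch of the detect-and-respond loop behind such cyber-defence agents: match events against known threat signatures, apply a countermeasure, and record the action for human audit. The signature names and responses are invented for the example and do not reflect the AICA architecture itself.

```python
# Hypothetical signature-to-countermeasure table for illustration only.
SIGNATURES = {
    "port_scan": "block_source_ip",
    "credential_stuffing": "lock_account",
}


def defence_loop(events):
    """Scan a batch of network events and pick autonomous countermeasures."""
    actions = []
    for event in events:
        threat = event.get("type")
        if threat in SIGNATURES:
            # Autonomous response, logged so humans can audit it later.
            actions.append((threat, SIGNATURES[threat]))
    return actions


events = [
    {"type": "port_scan", "src": "10.0.0.9"},
    {"type": "heartbeat"},  # benign traffic is left alone
]
print(defence_loop(events))
```

Real agents of this kind would adapt their responses rather than rely on a fixed table, but the loop structure — observe, classify, counter, log — is the common skeleton.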

Logistics and supply chain optimisation

Autonomous Supply Chain Management: Agentic AI systems can autonomously manage inventory, predict equipment failures, optimise resupply routes, and adapt to changing operational environments to ensure efficient delivery of resources.

Decision Support and Planning Agents (with human-in-the-loop)

Multi-Agent Systems for Operations Planning: Agentic AI can revolutionise operational planning. A centralized “commander agent” could orchestrate and dispatch tasks to specialized “worker agents.” These worker agents would have the autonomy to use various tools (APIs, databases, simulation systems) to retrieve information, execute specific tasks, and collaboratively critique drafted plans to ensure effectiveness and compliance.
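The commander/worker pattern above can be sketched as follows. The specialities, tool names, and the naive goal decomposition are all assumptions made for the example; a real planning system would use far richer tools and critique steps.

```python
class WorkerAgent:
    def __init__(self, speciality, tool):
        self.speciality = speciality
        self.tool = tool  # stands in for an API client, database, or simulator

    def execute(self, subtask):
        # A worker autonomously uses its tool to complete the sub-task.
        return f"{self.speciality} completed '{subtask}' via {self.tool}"


class CommanderAgent:
    def __init__(self, workers):
        self.workers = workers  # speciality -> WorkerAgent

    def plan(self, goal):
        # Naive decomposition: one sub-task per known speciality.
        return [(s, f"{s} for {goal}") for s in self.workers]

    def run(self, goal):
        results = [self.workers[s].execute(t) for s, t in self.plan(goal)]
        # Stand-in for the collective critique step: keep only results
        # that pass a (trivial) completion check.
        return [r for r in results if "completed" in r]


workers = {
    "intel": WorkerAgent("intel", "ISR database"),
    "logistics": WorkerAgent("logistics", "supply API"),
}
print(CommanderAgent(workers).run("relief operation"))
```

The orchestration value lies in the commander never doing domain work itself: it only decomposes, dispatches, and filters, while the workers retain autonomy over how they use their tools.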

AI-Enhanced Combat Simulation and Training: Generative AI, which often acts agentically, can create realistic and dynamic training scenarios and battlefield simulations. AI agents can play the role of enemy forces or allies, adapting their behaviour based on trainee performance for more immersive and effective military training. The U.S. Army is exploring agentic AI with companies such as Sandtable for research and development in reasoning, learning, planning, and execution for practical military use.

Challenges, Risks, and Security Implications


The implementation of advanced autonomous systems in the military context presents several significant challenges. These include:

Reasoning Errors: AI agents, particularly those relying on large language models (LLMs), can exhibit reasoning errors or “hallucinations.” These inaccuracies arise from models that generate plausible, yet incorrect, information based on their training data. In a military scenario, such errors can lead to faulty decision making with potentially catastrophic consequences.

Coordination Failures: Multi-agent systems such as Agentic AI depend on seamless inter-agent communication and coordination. Failures in these areas can result in misaligned objectives, ineffective task execution, and operational inefficiency. It is therefore critical to ensure reliable communication protocols and synchronisation among autonomous units.

Security Vulnerabilities: Autonomous systems are vulnerable to several security threats, including adversarial attacks (e.g., data poisoning and prompt injection) and unauthorised data access. These risks are exacerbated by the interconnected nature of these systems, which may expose multiple points to potential exploitation.

Various strategies can be implemented to mitigate these risks while ensuring data security. Comprehensive policies are needed for data validation, anomaly detection, and transparent auditing of AI decisions, including human-centric security training programs and clear protocols for monitoring the behaviour of autonomous systems. Strict access controls should be enforced so that AI systems possess only the minimum permissions necessary to perform their tasks; this reduces the attack surface and limits the potential damage from breaches. Real-time monitoring tools should track AI system performance and promptly detect deviations from expected behaviour, allowing issues to be identified and resolved before they escalate into significant problems. Finally, a balance must be maintained between autonomous operations and human intervention at critical control points, so that while AI autonomously manages routine tasks, humans oversee strategic decisions and manage complex scenarios. Together, these measures enhance the reliability, safety, and security of advanced autonomous systems in military operations and address key implementation challenges.
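Two of these mitigations, least-privilege access control and real-time behaviour monitoring, can be sketched in a few lines. The permission names, agents, and thresholds are assumptions chosen for the example.

```python
# Hypothetical permission table: deny by default, grant the minimum needed.
ALLOWED = {
    "recon_agent": {"read_sensor"},
    "planner_agent": {"read_sensor", "draft_plan"},
}


def authorise(agent, action):
    """Least privilege: an action is allowed only if explicitly granted."""
    return action in ALLOWED.get(agent, set())


class Monitor:
    """Flags deviations from expected behaviour for human review."""
    def __init__(self, max_actions_per_step=3):
        self.max_actions = max_actions_per_step
        self.alerts = []

    def check(self, agent, actions):
        denied = [a for a in actions if not authorise(agent, a)]
        # Unauthorised actions or abnormal activity volume both raise alerts.
        if denied or len(actions) > self.max_actions:
            self.alerts.append((agent, denied))
        return not self.alerts


monitor = Monitor()
monitor.check("recon_agent", ["read_sensor"])    # within policy, no alert
monitor.check("recon_agent", ["launch_strike"])  # outside its permissions
print(monitor.alerts)  # the unauthorised action is escalated for human review
```

The key design choice is deny-by-default: even a compromised or hallucinating agent can only act within its narrow grant, and anything outside it is surfaced to a human at a critical control point rather than executed.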

Rear Admiral Dr. S Kulshrestha (Retd)
Former Director General of Naval Armament Inspection (DGNAI) at the Integrated Headquarters of the Ministry of Defence (Navy), Rear Admiral Dr. S Kulshrestha was advisor to the Chief of the Naval Staff prior to his superannuation in 2011. An alumnus of the Defence Services Staff College, Wellington; the College of Naval Warfare, Mumbai; and the National Defence College (NDC), Delhi, Rear Admiral Kulshrestha holds two MPhil degrees in nanotechnology from Mumbai and Chennai Universities and a Doctorate from the School of International Studies, JNU. He has authored the book “Negotiating Acquisition of Nanotechnology: The Indian Experience”.
