The Role of AI in Coordinating Air, Naval, and Ground Forces

Artificial intelligence serves as the connective tissue that enables military forces to operate as a unified fighting force rather than separate services executing parallel campaigns. Through programs like the U.S. Department of Defense’s Joint All-Domain Command and Control (JADC2) initiative, AI systems process and correlate sensor data from aircraft, ships, submarines, satellites, ground vehicles, and dismounted troops, creating a shared operational picture that allows commanders to orchestrate strikes and maneuvers across domains in near real-time. The November 2025 Capstone exercise at Nellis Air Force Base demonstrated this capability directly, with AI-enabled battle management systems coordinating assets from the Air Force, Army, Navy, and Marine Corps alongside Five Eyes partner nations to execute dynamic targeting across air, land, sea, and cyber domains.

This integration addresses a fundamental challenge: modern warfare unfolds at machine speed, but traditional command structures evolved when radio communication represented cutting-edge technology. AI bridges that gap by fusing millions of data points from disparate sensors, identifying threats and opportunities that human analysts would miss, and presenting commanders with actionable options rather than raw information overload. However, the technology introduces its own complications: questions about decision authority, accountability when systems fail, and whether militaries can securely share AI-generated intelligence with coalition partners remain active areas of debate. This article examines how AI enables multi-domain coordination, the sensor fusion technologies making it possible, real-world deployment challenges, and the critical limitations that military planners must address.

How Does AI Enable Real-Time Coordination Between Air, Naval, and Ground Forces?

The core capability AI brings to joint force coordination is speed: specifically, the ability to process and distribute information faster than any human staff could manage. When a reconnaissance satellite detects a target, an AI system can simultaneously calculate optimal engagement options from available air assets, assess which naval platforms have line-of-sight for missile engagement, and determine whether ground-based artillery falls within range. This analysis, which might take human planners hours to coordinate across service channels, can occur in seconds. The U.S. Air Force’s Advanced Battle Management System (ABMS), the Navy’s Project Overmatch, and the Army’s Project Convergence all feed into this JADC2 architecture. The technical foundation involves machine-to-machine communication that bypasses traditional voice-and-radio coordination.

Rather than an Army fire direction center calling an Air Force tactical air control party to request air support, AI systems can identify that an Army unit has designated a target, cross-reference available Air Force assets, check airspace deconfliction requirements, and present options to both Army and Air Force commanders simultaneously. A 2020 Black Sea exercise demonstrated this concept when Air Force aircraft connected with naval vessels, special operations forces, and eight NATO nations in a simulated counter-Russia scenario, passing targeting data seamlessly across services and national boundaries. This represents a genuine operational improvement, but commanders must recognize its limits. AI coordination works best with pre-planned scenarios and clearly defined engagement authorities. In ambiguous situations, such as distinguishing combatants from civilians, assessing proportionality, or deciding whether to escalate, human judgment remains essential. The technology accelerates execution of decisions already made; it does not replace the decision-making process itself.
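
The cross-service option generation described above can be sketched as a simple range-and-timing filter. Everything in this sketch is illustrative: the asset names, positions, and speeds are invented, and real battle management systems weigh far more factors (airspace deconfliction, rules of engagement, weapon effects) than distance alone.

```python
import math
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    service: str
    x_km: float
    y_km: float
    max_range_km: float
    speed_kmh: float  # effective speed toward the target (transit or weapon flight)

def rank_engagement_options(target_xy, assets):
    """Filter assets to those in range of the target, sorted by estimated
    time to engage in minutes."""
    tx, ty = target_xy
    options = []
    for a in assets:
        dist = math.hypot(a.x_km - tx, a.y_km - ty)
        if dist <= a.max_range_km:
            eta_min = dist / a.speed_kmh * 60.0
            options.append((a.name, a.service, round(eta_min, 1)))
    return sorted(options, key=lambda o: o[2])

# Hypothetical assets from three services, positions in a local km grid
assets = [
    Asset("F-16 CAP", "Air Force", 40, 10, 300, 1400),
    Asset("DDG (SM-6)", "Navy", 120, -30, 200, 4000),
    Asset("M777 battery", "Army", 18, 5, 30, 2500),
]
options = rank_engagement_options((25, 12), assets)
print(options)
```

The point of the sketch is the shape of the computation: every in-range option across three services is evaluated and ranked in a single pass, which is the step that would otherwise require hours of cross-service coordination by voice and radio.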

The Sensor Fusion Challenge: Integrating Data Across Domains

Multi-domain coordination depends on sensor fusion: the ability to combine radar tracks, infrared imagery, signals intelligence, sonar contacts, and video feeds into a coherent picture. Without AI, this integration overwhelms human analysts. Modern battlefields generate data volumes measured in terabytes per hour from drones, satellites, ships, aircraft, and ground sensors. AI algorithms identify relationships between these data streams, filter noise, and highlight actionable intelligence that individual sensors might miss. The processing architecture has evolved into hierarchical layers. Edge computing at individual sensors performs initial filtering and feature extraction, reducing raw data volumes by orders of magnitude before transmission. Fog computing at tactical levels aggregates local sensor data for unit-level awareness.

Cloud processing at operational and strategic levels performs complex analytics across entire battlespaces. Leonardo DRS’s Artificial Intelligence Processor exemplifies edge capabilities, providing high-speed data handling in harsh environments that supports integration with image sensors and sensor fusion systems at the tactical level. However, sensor fusion introduces reliability concerns that commanders must weigh carefully. When systems fuse conflicting data, such as a radar track that contradicts a satellite image or acoustic signatures that don’t match visual identification, AI must make judgment calls about which source to trust. Testing has revealed troubling failure modes: the U.S. military’s Project Maven AI image recognition system mistakenly identified a reflective puddle as a missile launcher during exercises. In complex urban warfare scenarios, AI target recognition systems have shown misidentification rates reaching 12.3%, far exceeding human operator tolerance thresholds. Commanders cannot treat AI-fused data as ground truth; verification processes remain essential, particularly for high-consequence decisions.
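
The conflict-handling problem can be sketched as confidence-weighted voting across sensor reports, with a margin check that flags low-confidence fusions for the human verification the paragraph above calls for. The sensors, labels, and confidence values here are hypothetical, and fielded fusion systems use far richer models than a weighted vote.

```python
def fuse_classifications(reports, verify_margin=0.2):
    """Confidence-weighted vote over sensor reports of (sensor, label, confidence).
    Returns the winning label, the normalized margin over the runner-up, and a
    flag that is True when the margin is too thin to act on without a human check."""
    scores = {}
    for _sensor, label, conf in reports:
        scores[label] = scores.get(label, 0.0) + conf
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    margin = (best - runner_up) / sum(scores.values())
    return best_label, margin, margin < verify_margin

# Hypothetical conflicting reports about the same contact
reports = [
    ("SAR radar", "missile_launcher", 0.7),
    ("EO satellite", "civilian_truck", 0.8),
    ("SIGINT", "missile_launcher", 0.4),
]
label, margin, needs_human = fuse_classifications(reports)
```

With these inputs the fused label is "missile_launcher", but the thin margin trips the verification flag: exactly the kind of radar-versus-imagery disagreement that should never be treated as ground truth.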

Military AI Logistics Market Growth Projection (Billions USD): 2024: $2.38B; 2025: $2.73B; 2026: $3.11B; 2027: $3.55B; 2029: $4.63B. Source: GlobalNewswire Market Research Report, January 2026.
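
The projected growth from $2.38B in 2024 to $4.63B in 2029 implies a compound annual growth rate of roughly 14 percent. A quick sketch of the arithmetic:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values `years` apart."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# 2024 ($2.38B) to 2029 ($4.63B) spans five annual steps
growth = cagr(2.38, 4.63, 5)
print(f"{growth:.1%}")  # roughly 14% per year
```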

Drone Swarms and Autonomous Systems: Coordinated Multi-Domain Operations

Drone swarms and autonomous systems represent the most visible application of AI coordination across domains. Ukraine’s conflict with Russia has provided unprecedented real-world data, with Ukrainian forces deploying approximately 9,000 drones daily and domestic manufacturers producing up to 200,000 first-person-view drones monthly. Ukrainian companies like Swarmer have developed software enabling coordinated operations of 3 to 25 drones per mission, with more than 100 documented combat deployments through 2025. The Fourth Law, another Ukrainian firm, created AI targeting modules costing just $70 that increased strike success rates from 20% to 80%. Major powers are scaling these concepts dramatically. The Pentagon’s Replicator program aims to deploy thousands of inexpensive autonomous drones, with $500 million allocated for fiscal year 2024 alone.

Anduril tested its Fury drone over the Mojave Desert in October 2025, marking the first AI-controlled flight of what may become a fleet of 1,000 robotic wingmen flying alongside crewed fighter jets. China revealed the Jiu Tian mothership drone at the November 2024 Zhuhai Airshow, a 10-ton UAV capable of deploying smaller swarms at speeds up to 560 mph with a 1,200-mile range. Multi-domain swarm coordination adds complexity beyond air-only operations. The UK Defence Science and Technology Laboratory contracted SeeByte and Blue Bear to develop Mixed Multi-Domain Swarms (MMDS) architecture enabling autonomous collaboration between air, land, and maritime vehicles. AUKUS Pillar 2 trials in May 2024 demonstrated this concept with Blue Bear Ghost UAVs, Viking ground vehicles, and Challenger 2 tanks executing coordinated swarm maneuvers. XTEND has deployed over 10,000 systems across air, land, and sea in more than 32 countries, with their ACQME-DK system leveraging swarm-based autonomy for distributed tactical operations.
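
Swarm coordination software must continuously assign vehicles to tasks. Here is a toy version of that allocation step, using greedy nearest-pair matching as a stand-in for the auction-style algorithms production swarm software typically uses; all drone and target coordinates are invented.

```python
import math

def assign_swarm(drones, targets):
    """Greedy nearest-pair assignment of drones to targets, one drone per target.
    A simple baseline, not the actual algorithm of any fielded swarm system."""
    pairs = []
    free_drones = dict(drones)
    free_targets = dict(targets)
    while free_drones and free_targets:
        # Pick the globally closest remaining (drone, target) pair each round
        d, t, _ = min(
            ((d, t, math.dist(dp, tp))
             for d, dp in free_drones.items()
             for t, tp in free_targets.items()),
            key=lambda x: x[2],
        )
        pairs.append((d, t))
        del free_drones[d]
        del free_targets[t]
    return pairs

drones = {"uav1": (0, 0), "uav2": (10, 0), "uav3": (5, 8)}
targets = {"tgtA": (9, 1), "tgtB": (1, 1)}
pairs = assign_swarm(drones, targets)
print(pairs)
```

Greedy matching is O(n³) in the worst case and can be globally suboptimal, which is one reason real swarm software favors distributed auction schemes: each vehicle bids for tasks locally, so the swarm keeps functioning when links to a central coordinator are cut.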

Coalition Interoperability: Making AI Work Across Alliances

Effective multi-domain coordination requires sharing AI-generated intelligence with allies, a capability that remains technically and politically challenging. When the U.S. Army’s 2nd Cavalry Regiment deployed to NATO’s eastern flank following Russia’s 2022 invasion of Ukraine, it struggled to digitally integrate with Romanian, Hungarian, and Slovak units. Differences in communication networks, data classification systems, and secure information-sharing protocols delayed joint decision-making even with human-operated systems. Adding AI to this environment multiplies the interoperability challenge. NATO’s revised AI strategy emphasizes growing interoperability between AI systems throughout the Alliance, but achieving this goal requires reconciling divergent national approaches. Member states operate at varying levels of AI technological advancement, maintain different procurement processes, and hold competing views on autonomous weapon ethics.

Allied Command Transformation is working toward making NATO a digitally transformed, Multi-Domain Operations-enabled Alliance by 2030, but significant obstacles remain. Some nations impose restrictions on AI decision-making authority that others do not share; data classification rules prevent certain intelligence from flowing to certain partners; and legacy systems in many allied forces cannot interface with modern AI platforms. The U.S. Army’s Mission Partner Kit represents one practical solution: a deployable system designed to enhance multinational interoperability with NATO allies by providing standardized interfaces that bridge national system differences. However, commanders must plan for degraded AI coordination when operating with coalition partners. If AI systems cannot share data across alliance boundaries, human liaison officers and traditional coordination methods must fill the gap. Relying on AI coordination that cannot extend to critical partners invites operational failure at the worst possible moment.
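
In software terms, bridging national system differences usually comes down to adapters that map each partner's message format into a common schema, in the spirit of the standardized interfaces described above. The field names and formats below are entirely hypothetical, not any real NATO or national standard.

```python
def normalize_track(msg, nation):
    """Map a nation-specific track message into a hypothetical common schema.
    Each new coalition partner requires one more adapter branch."""
    if nation == "US":
        return {"track_id": msg["tn"], "lat": msg["lat"], "lon": msg["lon"],
                "classification": msg["class"]}
    if nation == "RO":
        # Hypothetical partner format: position packed into a single field
        lat, lon = msg["pos"]
        return {"track_id": msg["id"], "lat": lat, "lon": lon,
                "classification": msg["ident"]}
    raise ValueError(f"no adapter for nation {nation!r}")

us_msg = {"tn": "T-1042", "lat": 45.1, "lon": 28.6, "class": "hostile"}
ro_msg = {"id": "R-77", "pos": (45.2, 28.7), "ident": "hostile"}
tracks = [normalize_track(us_msg, "US"), normalize_track(ro_msg, "RO")]
```

The `ValueError` branch is the software analogue of the operational warning above: a partner without an adapter simply cannot contribute tracks, and human liaison officers have to fill that gap.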

Decision-Making Under AI: Trust, Explainability, and Accountability

AI coordination systems present commanders with a fundamental dilemma: accept recommendations they may not fully understand, or reject AI judgments and risk being outpaced by an AI-enabled adversary. The most sophisticated AI calculations can be incompatible with human-interpretable logic. When an AI system recommends a particular course of action, it may provide a post hoc rationalization that appears plausible, but the explanation may bear little resemblance to the AI’s actual computational path. Commanders must decide how much to trust what amounts to an “alien oracle.” Research from Georgetown University’s Center for Security and Emerging Technology identifies specific failure modes in human-AI teaming. Over-trust and automation bias lead users to defer excessively to system outputs, particularly when interfaces convey unwarranted certainty. Conversely, under-trust causes operators to disregard accurate AI-generated insights because they contradict human intuition.

Both failure modes degrade operational effectiveness, and neither has a simple technological fix. The accountability question looms over all AI military applications. When AI systems make recommendations that lead to unintended consequences (civilian casualties from misidentified targets, for example), the traditional command responsibility chain becomes unclear. Legal frameworks for autonomous weapons and AI decision support remain works in progress. The 2025 Military AI, Peace & Security Dialogues convened by the United Nations emphasized the enduring importance of human leadership in guiding responsible AI integration, but specific accountability mechanisms remain undefined. Commanders who employ AI coordination tools must maintain clear documentation of human decision points, understand the limits of AI recommendations, and retain authority to override AI judgments even under time pressure.

Predictive Logistics: AI Coordination Beyond Combat Operations

AI coordination extends beyond combat to the logistics that make sustained operations possible. The Defense Logistics Agency operates 55 AI models in production with over 200 use cases under development, creating one of the most comprehensive AI-powered supply chain operations globally. These systems forecast demand, identify supply risks, and reduce equipment downtime through predictive maintenance, capabilities that directly enable combat effectiveness.

The Army Materiel Command’s Predictive Analytics Suite illustrates the approach. The software processes maintenance records, usage patterns, environmental conditions, and operational tempo to anticipate when critical systems will likely fail. This allows prepositioning spare parts and maintenance capabilities before breakdowns occur rather than reacting after equipment becomes unavailable. Early implementations have reduced delivery times by up to 25% and extended equipment lifespan by scheduling maintenance proactively rather than waiting for failure.
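
The prepositioning idea can be sketched as a risk score over usage and environment that triggers parts staging above a threshold. The logistic weights and fleet data below are invented for illustration and are not drawn from the Army's actual suite, which fuses far more signals than three.

```python
import math

def failure_risk(engine_hours, env_severity, days_since_maintenance):
    """Toy logistic risk score from usage, environment (0-1), and maintenance age.
    The weights are illustrative, not calibrated against any real fleet data."""
    z = (0.004 * engine_hours
         + 0.8 * env_severity
         + 0.01 * days_since_maintenance
         - 4.0)
    return 1.0 / (1.0 + math.exp(-z))

def preposition_parts(fleet, threshold=0.5):
    """Flag vehicles whose predicted failure risk justifies staging spares now."""
    return [vid for vid, features in fleet.items()
            if failure_risk(*features) >= threshold]

fleet = {
    "M1A2-017": (800, 0.9, 45),   # high engine hours, desert conditions
    "M1A2-022": (150, 0.2, 10),   # recently serviced, mild environment
}
print(preposition_parts(fleet))
```

The operational payoff is in the second function: spares move before the breakdown, which is where the reported delivery-time reductions come from.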

How to Prepare

  1. **Assess current sensor and communication architecture.** Map existing data flows between services, identify bandwidth constraints, and determine where edge computing can reduce transmission requirements. Legacy systems may require middleware to interface with AI platforms.
  2. **Establish data standards and classification protocols.** AI systems cannot fuse data that arrives in incompatible formats or that security restrictions prevent sharing. Define common data standards across services and resolve classification conflicts before attempting integration.
  3. **Develop realistic training scenarios.** Personnel must practice with AI coordination tools under conditions that approximate operational stress. Tabletop exercises that assume perfect AI performance do not prepare commanders for the degraded, ambiguous situations where human judgment matters most.
  4. **Create clear authority and override procedures.** Define in advance when human commanders should override AI recommendations, how to document those decisions, and who holds accountability for AI-influenced outcomes.
  5. **Build coalition interoperability from the start.** Retrofitting AI coordination to include partners is far more difficult than designing inclusion into initial architecture. Common mistake: assuming AI systems developed for single-nation use will seamlessly extend to coalition operations; they rarely do without deliberate engineering effort.
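
Step 2's data-standards point can be made concrete with a schema gate: records that fail validation never enter the fusion pipeline, so format conflicts surface at integration time rather than mid-operation. The required fields here are a hypothetical common track schema, not an actual military standard.

```python
REQUIRED_FIELDS = {        # hypothetical common track schema
    "track_id": str,
    "lat": float,
    "lon": float,
    "classification": str,
}

def validate_record(record):
    """Return a list of schema violations; an empty list means the record
    may enter the fusion pipeline."""
    errors = []
    for fname, ftype in REQUIRED_FIELDS.items():
        if fname not in record:
            errors.append(f"missing field: {fname}")
        elif not isinstance(record[fname], ftype):
            errors.append(f"bad type for {fname}: expected {ftype.__name__}")
    return errors

good = {"track_id": "T-9", "lat": 51.5, "lon": -0.1, "classification": "unknown"}
bad = {"track_id": "T-10", "lat": "51.5"}  # wrong type, two fields missing
```

Rejecting malformed records with an explicit error list, rather than silently coercing them, is what makes classification and format conflicts visible early enough to resolve.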

How to Apply This

  1. **Start with low-risk, high-value applications.** Predictive maintenance and logistics forecasting offer immediate returns with minimal controversy. Success in these domains builds organizational confidence and technical expertise for more complex coordination tasks.
  2. **Implement human-in-the-loop requirements for kinetic decisions.** Regardless of AI speed advantages, maintain human decision authority for any action involving lethal force. Document the decision chain explicitly.
  3. **Conduct regular red-team exercises against AI systems.** Adversaries will probe AI coordination for exploitable vulnerabilities. Testing must include adversarial scenarios where AI systems receive corrupted or deceptive data.
  4. **Establish feedback loops from operational use to development teams.** AI systems improve through iterative refinement. Field units must have channels to report performance issues, failure modes, and unexpected behaviors back to system developers.
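
The human-in-the-loop requirement in step 2 maps naturally to a gating pattern: no lethal recommendation is released without an explicit, logged human approval. This is a minimal sketch of the pattern with invented names, not a fielded system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class KineticDecisionGate:
    """Holds every lethal-force recommendation until a named human approves it,
    and logs each decision point so the chain of accountability stays explicit."""
    log: list = field(default_factory=list)

    def review(self, recommendation, approver, approved):
        self.log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "recommendation": recommendation,
            "approver": approver,
            "approved": approved,
        })
        # Only an approved recommendation is released for execution
        return recommendation if approved else None

gate = KineticDecisionGate()
action = gate.review("engage target T-1042", approver="CPT Doe", approved=False)
# action is None here: the AI recommendation was rejected, and the rejection
# itself is preserved in the log for later accountability review
```

Logging rejections as well as approvals matters: the documented decision chain is what lets an after-action review reconstruct who overrode the AI, when, and why.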

Expert Tips

  • Maintain proficiency in manual coordination procedures. AI systems will fail, networks will be degraded or denied, and human staff must be capable of coordinating multi-domain operations without AI assistance. Do not allow AI dependence to atrophy fundamental skills.
  • Treat AI recommendations as one input among many, not as authoritative answers. AI systems excel at processing data volume but lack contextual understanding that experienced human operators bring.
  • Do not attempt to explain away AI misidentification rates as acceptable. A 12% error rate in target recognition that seems reasonable in training becomes unacceptable when applied to real populations. Set meaningful error thresholds and enforce them.
  • Invest in AI explainability research, but recognize its limits. Some AI insights genuinely cannot be translated into human-comprehensible logic without losing their value.
  • Plan communications architecture assuming contested, degraded environments. AI coordination that depends on reliable high-bandwidth links will fail when adversaries attack those links, exactly when coordination matters most.

Conclusion

Artificial intelligence has become an operational necessity for coordinating air, naval, and ground forces in modern warfare. The speed and complexity of multi-domain operations exceed human cognitive capacity to manage manually; AI systems that fuse sensor data, identify targets, recommend engagements, and coordinate logistics enable commanders to orchestrate effects across domains at a pace that previous generations could not achieve. The U.S. military’s JADC2 initiative, demonstrated in exercises like Capstone 2025, shows genuine progress toward this integrated future. Yet AI coordination remains a tool with significant limitations, not a solution that eliminates military uncertainty.

Sensor fusion systems misidentify targets at rates that should concern operational planners. Explainability remains an unsolved problem for the most sophisticated AI recommendations. Coalition interoperability challenges persist despite strategic-level agreements. And accountability frameworks for AI-influenced decisions lag behind the technology’s operational deployment. Military organizations that embrace AI coordination while maintaining human judgment, manual backup capabilities, and realistic assessments of AI limitations will extract genuine operational advantage. Those that treat AI as a replacement for military expertise rather than an enhancement of it will discover the technology’s failure modes at the worst possible times.

