Artificial intelligence has fundamentally altered the tempo of warfare by compressing target identification and engagement timelines from hours or days to mere minutes or seconds. Where human analysts might previously identify 50 bombing targets per year, AI systems like Israel’s Gospel can now generate 100 targets per day with real-time strike recommendations. This acceleration represents the single most significant change in military operations since the introduction of precision-guided munitions, creating both unprecedented tactical advantages and serious risks of rapid, uncontrollable escalation. The 2025 Israel-Iran conflict demonstrated this new reality in stark terms.
In the first 12 hours alone, U.S. and Israeli forces executed nearly 900 strikes on Iranian targets, an operational tempo that would have required days or weeks in earlier conflicts. Ukraine’s battlefield experience offers another example: AI-enabled drones have increased target engagement success rates from 10-20 percent to 70-80 percent by removing the need for constant manual control. These are not theoretical projections but documented outcomes from active conflicts. This article examines how AI-driven targeting systems work, their impact on military decision-making cycles, the ethical and legal tensions they create, real-world implementations in current conflicts, and the risks of removing human judgment from lethal decisions.
Table of Contents
- How Does AI Transform Target Identification in Modern Warfare?
- The OODA Loop Accelerated: AI and Military Decision Cycles
- Real-World Implementation: AI Targeting in Gaza and Ukraine
- The Human Element: When Speed Undermines Judgment
- Hypersonic Threats and the Push Toward Full Autonomy
- The Growing AI Military Market
- How to Prepare
- How to Apply This
- Expert Tips
- Conclusion
- Frequently Asked Questions
How Does AI Transform Target Identification in Modern Warfare?
Traditional military targeting required human analysts to manually review surveillance data, cross-reference intelligence sources, and build target packages over extended periods. AI systems fundamentally restructure this process by simultaneously processing satellite imagery, communications intercepts, geolocation data, infrared sensors, and synthetic-aperture radar feeds to identify potential targets in real time. The Pentagon’s Project Maven, now deployed across more than 35 military tools and used by over 20,000 personnel, exemplifies this transformation. The system displays aircraft movements, logistics patterns, key personnel locations, and no-strike zones in a unified interface.
Yellow-outlined boxes highlight potential targets while blue boxes mark friendly forces. One unit reported that its intelligence-to-engagement timelines dropped from hours to minutes during recent exercises. However, this speed comes with significant tradeoffs. AI systems trained on historical data may encode biases: treating all military-age males as combatants, or flagging school buses as legitimate targets if even one was previously used by enemy forces. The brittleness of machine learning models means a single anomaly in training data can produce systematically flawed targeting recommendations across thousands of operations.
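The scale of this brittleness problem follows from basic base-rate arithmetic. The sketch below uses entirely hypothetical numbers (they are not drawn from any real system) to show how a classifier with seemingly strong accuracy still produces mostly false alarms when genuine targets are rare in the monitored population:

```python
# Hypothetical base-rate illustration. All figures are invented
# for the example; none describe a real targeting system.

def expected_flags(population, prevalence, sensitivity, false_positive_rate):
    """Return (expected true positives, expected false positives)."""
    actual = population * prevalence
    true_pos = actual * sensitivity
    false_pos = (population - actual) * false_positive_rate
    return true_pos, false_pos

# 1,000,000 monitored individuals, 0.1% genuinely in the category of
# interest; the classifier catches 90% of them and wrongly flags 1%
# of everyone else.
tp, fp = expected_flags(1_000_000, 0.001, 0.90, 0.01)
print(f"true positives:  {tp:.0f}")        # 900
print(f"false positives: {fp:.0f}")        # 9990
print(f"precision: {tp / (tp + fp):.1%}")  # 8.3%
```

Even with 90 percent sensitivity and a 1 percent false-positive rate, more than nine of every ten flags in this toy scenario are wrong, which is why a single training-data anomaly can propagate into thousands of bad recommendations.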
The OODA Loop Accelerated: AI and Military Decision Cycles
Colonel John Boyd’s OODA loop (observe, orient, decide, act) has governed military decision-making theory since the 1970s. The fundamental principle holds that whoever cycles through this loop faster than their adversary gains a decisive advantage. AI promises to compress these cycles beyond human cognitive limits, creating what some strategists call “Super-OODA Loops.” Connecting sensors directly to AI systems at the tactical edge eliminates transmission delays between observation and orientation. Automated analysis removes the latency of human interpretation. Some defense theorists argue that AI enables “Matrix Operations” (simultaneous optimization across multiple domains in real time) rather than sequential decision-making.
This capability becomes especially critical against hypersonic weapons, where a Mach 8 missile can cover 200 miles in under two minutes, leaving virtually no time for human deliberation. The limitation is that faster decisions are not necessarily better decisions. RAND Corporation research found that autonomous systems accelerated inadvertent escalation in wargaming scenarios. The obsession with speed may also disconnect decision-making from adversary behavior: acting at maximum velocity rather than at moments of maximum comparative advantage. As one analysis noted, speeding through one’s own OODA loop so quickly that it becomes disassociated from the adversary’s may prove counterproductive.
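The compression argument can be made concrete with a toy latency model: total cycle time is simply the sum of the four phase latencies. The phase timings below are hypothetical, chosen only to illustrate the structural point that automating observe and orient leaves the human decision step as the dominant cost:

```python
# Toy OODA cycle-time model. Phase latencies (in seconds) are
# illustrative assumptions, not measured values from any system.

PHASES = ("observe", "orient", "decide", "act")

def cycle_time(latencies):
    """Sum per-phase latencies into one full OODA cycle (seconds)."""
    return sum(latencies[p] for p in PHASES)

human = {"observe": 600, "orient": 1800, "decide": 300, "act": 120}
ai_assisted = {"observe": 5, "orient": 10, "decide": 300, "act": 120}

print(cycle_time(human))        # 2820 s (~47 minutes)
print(cycle_time(ai_assisted))  # 435 s (~7 minutes)
```

Note that in the AI-assisted row the decide phase is unchanged: once observation and orientation collapse to seconds, the human authorization step becomes the bottleneck, which is exactly where the pressure toward full autonomy originates.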
Real-World Implementation: AI Targeting in Gaza and Ukraine
The Gaza conflict beginning in October 2023 provided the most extensively documented case study of AI-driven targeting in urban warfare. Israel’s Lavender system, a probabilistic model that assigns individuals scores from 1 to 100 indicating the likelihood of Hamas or Islamic Jihad membership, reportedly identified up to 37,000 Palestinians as potential targets. A separate system, Gospel, focused on identifying buildings and infrastructure associated with militant groups. The distinction matters: Gospel targets locations, while Lavender targets people. According to reports from Israeli military sources, human operators often devoted approximately 20 seconds to each target before authorizing strikes, primarily confirming that the target was male.
The acceptable collateral damage threshold reportedly reached 15-20 civilians for a single low-ranking militant. These parameters enabled 12,000 targets to be struck in the opening weeks, a pace impossible with traditional targeting methods. Ukraine’s experience offers a different model. By retraining publicly available AI models on classified battlefield data, Ukrainian forces increased drone strike effectiveness three- to four-fold. In December 2024, Ukrainian forces executed the first fully unmanned operation near Lyptsi, coordinating uncrewed ground vehicles with FPV drones without infantry participation. The success prompted plans to increase AI-guided drone procurement from 0.5 percent to 50 percent of total orders in 2025.
The Human Element: When Speed Undermines Judgment
The fundamental tension in AI-driven targeting lies between operational tempo and deliberative judgment. Defense officials emphasize “human-in-the-loop” or “human-on-the-loop” doctrines requiring human authorization before lethal force. In practice, the volume and velocity of AI-generated targets create pressure toward rubber-stamping machine recommendations rather than independent assessment. This creates what researchers term “automation bias”: the tendency to accept automated recommendations without critical evaluation. When AI systems generate 100 targets daily, human review becomes a bottleneck.
The choice then becomes either slowing operations to enable meaningful human judgment or accelerating approval processes until human oversight becomes nominal. Gaza demonstrated the practical outcome: 20-second reviews to confirm gender rather than substantive target evaluation. The tradeoff extends beyond individual strikes. Proponents argue AI enables more precise targeting that should reduce civilian casualties. Critics counter that accelerating the “kill chain” inevitably increases total deaths, injuries, and destruction regardless of individual strike precision. Lauren Gould of Utrecht University summarized the critique: “Proponents argue that AI enables more precise targeting and therefore reduces civilian deaths, but that’s highly questionable.”
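The bottleneck claim can be checked with back-of-envelope arithmetic. The sketch below takes the article’s figures (100 targets per day, 20-second reviews) and adds one assumption of my own: a single reviewer working an 8-hour shift.

```python
# Review-budget arithmetic. The 100/day and 20-second figures come
# from the reporting cited in the text; the single-reviewer, 8-hour
# shift is an illustrative assumption.

targets_per_day = 100     # reported daily volume for Gospel-class systems
shift_seconds = 8 * 3600  # one reviewer, one 8-hour shift

seconds_available = shift_seconds / targets_per_day
print(seconds_available)       # 288.0 s (~4.8 minutes per target)

# At the reported 20-second pace, the same shift could instead clear:
print(shift_seconds // 20)     # 1440 targets
```

On these assumptions a single reviewer could afford nearly five minutes per target and still keep up with 100 targets a day; a 20-second pace therefore reflects throughput far beyond that volume, or review treated as a formality, rather than a hard time constraint.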
Hypersonic Threats and the Push Toward Full Autonomy
Hypersonic weapons represent the scenario where human decision-making may become physically impossible. A Mach 8 missile covering 1,000 miles in under 10 minutes leaves no time for traditional command chains. Intercepting such weapons may require AI systems authorized to act without human approval. This prospect has generated serious concern among arms control experts. Paul Scharre of the Center for a New American Security has warned of “flash wars” erupting when machines misinterpret radar signals and initiate catastrophic responses.
The historical record provides sobering context: Soviet Lieutenant Colonel Stanislav Petrov likely prevented nuclear war in 1983 by correctly judging that a satellite detection system showing five inbound missiles was malfunctioning. Whether an AI system would have reached the same conclusion remains uncertain. The limitation is that current AI systems cannot reliably distinguish between conventional and nuclear-armed hypersonic weapons during their flight phase. Dual-capable delivery systems create ambiguity that could prompt worst-case assumptions and preemptive responses. The compressed timeline may not permit the nuanced judgment that has historically prevented nuclear exchanges.
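A rough arithmetic check supports the hypersonic figures cited in this article. The sketch below uses an approximate sea-level speed of sound; Mach number varies with altitude, so this is an order-of-magnitude estimate only:

```python
# Rough time-to-target check for the Mach 8 figures in the text.
# 767 mph is an approximate sea-level speed of sound; actual Mach 8
# flight occurs at altitude, so treat these as ballpark numbers.

MACH_1_MPH = 767
speed_mph = 8 * MACH_1_MPH  # ~6,136 mph

for distance in (200, 1000):
    minutes = distance / speed_mph * 60
    print(f"{distance} miles -> {minutes:.2f} minutes")
# 200 miles -> 1.96 minutes
# 1000 miles -> 9.78 minutes
```

Both results match the article’s claims of “under two minutes” for 200 miles and “under 10 minutes” for 1,000 miles, and they make clear how little of that window remains after detection, classification, and communication delays.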
The Growing AI Military Market
Investment in military AI is accelerating globally, with market projections ranging from $19 billion to $35 billion by 2030, depending on scope definitions. The Pentagon increased its contract ceiling for Maven Smart System to $1.3 billion through 2029, up from $480 million. Ukraine’s procurement of AI-enabled drones jumped from negligible quantities to 10,000 units in 2024, with plans for dramatic expansion.
China, Russia, and India are each developing their own military AI capabilities. Russia announced an unmanned systems branch in December 2024 and began systematic data collection on drone operations and strike outcomes to train AI models. China’s advances in defense AI applications prompted the original establishment of Project Maven. This dynamic creates competitive pressure to deploy AI systems faster, potentially before adequate testing or safeguard development.
How to Prepare
- **Assess data quality and training sources.** AI systems are only as reliable as their training data. Systems trained on limited or biased datasets will produce systematically flawed recommendations.
- **Define acceptable error rates explicitly.** Every targeting system will produce false positives and false negatives. Establish clear thresholds before deployment rather than adjusting standards during operations.
- **Map human decision points in the kill chain.** Identify where human judgment occurs and ensure those chokepoints receive adequate time and information for meaningful review.
- **Test against adversarial inputs.** Enemies will attempt to manipulate AI systems through deception, spoofing, and deliberate exploitation of training data gaps.
- **Establish escalation protocols.** Define conditions under which AI targeting authority must be suspended and human-only processes activated.
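The second item above, defining acceptable error rates before deployment, can be sketched as an explicit gate that blocks operation whenever measured rates exceed limits fixed in advance. The threshold values and field names below are placeholders, not recommendations:

```python
# Minimal sketch of a pre-deployment error-rate gate. Threshold
# values and metric names are illustrative placeholders.

THRESHOLDS = {"false_positive_rate": 0.01, "false_negative_rate": 0.05}

def within_limits(measured, thresholds=THRESHOLDS):
    """Compare measured error rates to pre-committed limits.

    Returns (ok, violations) where violations maps each exceeded
    metric to its measured value.
    """
    violations = {k: v for k, v in measured.items()
                  if v > thresholds.get(k, 0.0)}
    return not violations, violations

ok, why = within_limits({"false_positive_rate": 0.03,
                         "false_negative_rate": 0.02})
print(ok, why)  # False {'false_positive_rate': 0.03}
```

The point of the pattern is that the thresholds are committed to before operations begin, so they cannot be quietly relaxed under tempo pressure, which is exactly the failure mode the checklist warns against.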
How to Apply This
- **Establish independent verification processes.** Create parallel human analysis workflows for a statistically significant sample of AI-generated targets to calibrate system accuracy over time.
- **Implement time minimums for lethal decisions.** Even if AI systems can generate targets in seconds, mandate minimum review periods proportional to expected collateral damage.
- **Create accountability documentation.** Record every targeting decision, including AI confidence scores, human reviewer identity, review duration, and post-strike assessment.
- **Build kill switches.** Engineer systems with immediate suspension capabilities that can be activated when pattern failures are detected or conflict parameters change.
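The accountability-documentation and kill-switch items above can be combined in one structure: every decision is logged with the fields the text lists (confidence score, reviewer identity, review duration, outcome), and a suspension flag halts further processing. All class and field names here are illustrative, not drawn from any real system:

```python
# Sketch of an accountability log with a kill switch. Field names
# mirror the checklist in the text; the design is hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    target_id: str
    ai_confidence: float    # model score at time of review
    reviewer: str           # human reviewer identity
    review_seconds: float   # how long the review actually took
    approved: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ReviewLog:
    def __init__(self):
        self.records = []
        self.suspended = False  # the kill switch

    def record(self, rec: DecisionRecord):
        if self.suspended:
            raise RuntimeError("targeting suspended; human-only process active")
        self.records.append(rec)

    def suspend(self):
        """Immediately halt logging of new approvals."""
        self.suspended = True

log = ReviewLog()
log.record(DecisionRecord("t-001", 0.97, "analyst-7", 45.0, True))
log.suspend()  # after suspension, record() raises rather than proceeding
```

Making suspension raise an error, rather than silently dropping records, forces every downstream process to notice that the human-only fallback is now in effect.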
Expert Tips
- Treat AI targeting confidence scores as starting points, not conclusions. A 95 percent confidence score still means 5 percent of targets are wrong, potentially thousands of individuals in high-volume operations.
- Do not deploy AI targeting systems trained solely on simulation data. Real-world battlefield conditions contain variations that synthetic training cannot replicate.
- Monitor for target category drift. Systems may gradually expand their definition of legitimate targets over time if not actively constrained.
- Maintain human expertise in traditional targeting methods. Over-reliance on AI creates critical vulnerabilities if systems are compromised, jammed, or fail.
- Question the premise that speed is the primary measure of targeting effectiveness. Accuracy, proportionality, and strategic impact matter more than tempo in most scenarios.
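The arithmetic behind the first tip is worth making explicit: a per-target confidence score, applied at volume, implies an expected absolute number of errors. The sketch below applies the article’s reported Lavender figure of 37,000 flagged individuals to a nominal 95 percent confidence, treating each flag independently (a simplifying assumption):

```python
# Expected-error arithmetic for high-volume scoring. Assumes each
# flag is wrong independently with probability (1 - confidence),
# which is a simplification.

def expected_errors(n_targets, confidence):
    """Expected number of misidentifications among n_targets flags."""
    return n_targets * (1 - confidence)

# 37,000 flagged individuals (the figure reported for Lavender)
# at a nominal 95% per-target confidence:
print(f"{expected_errors(37_000, 0.95):.0f}")  # 1850
```

Under these assumptions, a score that sounds highly reliable per target still implies on the order of 1,850 misidentified people, which is why the tip insists scores are starting points rather than conclusions.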
Conclusion
AI-driven target analysis has already transformed military operations in documented conflicts from Gaza to Ukraine to the 2025 Israel-Iran war. The technology compresses decision cycles from hours to minutes, enables targeting volumes that would be impossible for human analysts, and creates tactical advantages that are reshaping doctrinal assumptions across major militaries.
The critical questions are no longer whether these systems will be deployed but how they will be governed. The gap between AI processing speed and human judgment capacity creates persistent pressure toward automation of lethal decisions. Whether military institutions can maintain meaningful human oversight while capturing AI’s tactical advantages, or whether competitive dynamics will push humans progressively out of the loop, will significantly influence both the conduct and consequences of future conflicts.
Frequently Asked Questions
How do Gospel and Lavender differ?
Gospel identifies buildings and infrastructure associated with militant groups, while Lavender assigns individuals scores from 1 to 100 indicating the likelihood of Hamas or Islamic Jihad membership. In short, Gospel targets locations and Lavender targets people.
What does “human-in-the-loop” mean in AI targeting?
It refers to doctrines requiring a human to authorize lethal force before an AI-recommended strike is carried out. In practice, high target volumes can compress that review to seconds, reducing oversight to a nominal check.
What is automation bias?
Automation bias is the tendency to accept automated recommendations without critical evaluation. It becomes acute when a system generates far more targets than human reviewers can meaningfully assess.
Why do hypersonic weapons increase pressure toward full autonomy?
A Mach 8 missile can cover 200 miles in under two minutes, leaving essentially no time for traditional command chains. Intercepting such weapons may require AI systems authorized to act without human approval.
Do AI targeting systems reduce civilian casualties?
Proponents argue that more precise targeting should reduce them. Critics counter that accelerating the kill chain increases the total number of strikes, and therefore total deaths and destruction, regardless of per-strike precision.
What is a “flash war”?
The term, associated with Paul Scharre, describes escalation that erupts when machines misinterpret signals such as radar returns and initiate responses faster than humans can intervene, as nearly happened in the 1983 Petrov incident.



