
AI in Battlefield Decision-Making: Ethical and Operational Challenges in Autonomous Warfare


Updated: 2/27/2026 | Our Bureau


“When machines decide, ethics must command; for without human conscience, autonomy is but chaos in uniform.”

Operational & Ethical Dimensions

“Will tomorrow’s wars be fought by code or commanded by conscience?”

The theme, “Rise of the Unmanned Warrior: Autonomous Systems Across Domains”, places AI at the heart of modern and future warfare. Autonomous platforms in land, air, and maritime environments depend on AI-enabled decision-making, making its operational reliability and ethical governance a decisive national security concern. AI systems now support functions from target recognition to mission planning, enabling unmatched speed in threat evaluation and engagement cycles and giving militaries a vital edge in contested battlespaces. Yet AI-driven autonomy faces the test of reliability under electronic jamming, spoofing, and cyber-attacks. Multi-domain fusion of satellite, UAV, radar, and EW data demands resilient AI architectures that can function under extreme conditions of denial and deception.

AI warfare tools cannot be procured off-the-shelf. India must pioneer a Force-to-Company (F2C) integration model, bringing Services, Industry, and Academia into co-development clusters. This ensures indigenous ownership, rapid innovation, and security from foreign dependencies. The dilemmas of AI in combat are profound. Should machines take lethal decisions without human oversight?

The framework must define levels of autonomy: human-in-the-loop, human-on-the-loop, and fully autonomous. Ethical AI in warfare must uphold humanitarian law and accountability. Bias in algorithms, poisoned data, adversarial AI attacks, and over-reliance on machine judgement create risks of strategic instability. The danger of autonomous escalation—machines triggering conflict spirals without deliberate human intent—cannot be ignored. India must codify an AI-Soldier Doctrine, embedded within a broader AI-Battlefront Doctrine. This doctrine would regulate autonomy levels, escalation control, accountability, and integration into Tri-Service operations, ensuring responsible deployment of AI warriors.

Sovereignty, Data Security & Indigenous AI

Imported AI platforms pose risks of covert backdoors, unreliable algorithms, and cyber manipulation. In battlefield decision-making, reliance on foreign AI tools is strategically untenable. Battlefield AI must operate on classified, sovereign datasets within secure, air-gapped environments. This ensures data secrecy and prevents breaches or intrusions that could compromise national command structures. India must field exclusive defence-use AI machines, developed by national consortia of DRDO, Services, SMEs, and private innovators. These platforms must be hardened for combat, tested against adversarial AI, and certified for mission-critical trustworthiness.

The Government should constitute a National Strategic Cyber Combat & Security Committee (NSCCSC) at Cabinet level, overseeing AI doctrines, integration with the Tri-Service Cyber Command, and guided by visionary advisers. This ensures civilian-strategic oversight of military AI, anchored in sovereignty. By 2035, India must create trusted indigenous AI warriors—ethical, adaptive, and resilient—crafted under Atmanirbhar Bharat and guided by national ethos. Positioned as both a cyber power and ethical AI leader, India can secure its sovereignty while contributing to global norms of responsible autonomous warfare.

Section 1 – Prologue & Context Setting

1.1 The Dawn of Autonomous Warfare

The evolution of warfare has always been shaped by technology. From the invention of the longbow to the emergence of nuclear weapons, each leap altered the nature of conflict. Today, the world stands at the threshold of another decisive shift — the rise of autonomous systems governed by artificial intelligence (AI). The theme, “Rise of the Unmanned Warrior: Autonomous Systems Across Domains”, reflects this historic juncture. For the first time in recorded military history, machines are beginning not only to execute human orders but also to decide within the parameters of combat. This unprecedented development necessitates a re-examination of ethics, reliability, and sovereignty in defence.

1.2 Why Autonomy Matters

Speed has always been a determinant of survival in battle. In today’s hyper-contested environments, the ability to observe, orient, decide, and act — the OODA loop — must be executed at machine pace. Human cognition, however sharp, cannot match the split-second calculations of AI-enabled systems. This creates an imperative: either militaries embrace autonomy or risk irrelevance in tomorrow’s battlespaces. Yet, such adoption is not without peril. Autonomy without an ethical compass and sovereign safeguards can spiral into chaos in uniform: a battlefield where human conscience is replaced by algorithmic determinism.

1.3 India’s Context and Strategic Imperative

For India, the question of autonomy is both existential and aspirational. Existential, because our adversaries are investing heavily in unmanned aerial vehicles, undersea drones, and AI-driven cyber warfare tools; aspirational, because Atmanirbhar Bharat envisions India as not merely a consumer of imported technologies but a pioneer of indigenous, ethical, and resilient AI warriors. The adoption of autonomous systems must therefore be carefully sculpted to ensure both strategic security and national ethos. This paper argues that India must simultaneously harness AI’s battlefield potential and restrain its risks through doctrine, governance, and sovereign ownership.

1.4 Framing the Ethical Question

The central dilemma can be expressed in a single question: Should machines be permitted to make lethal decisions without human oversight? While technology promises operational superiority, it also raises profound moral concerns. International Humanitarian Law, the Geneva Conventions, and India’s own dharmic traditions converge on one principle: accountability must remain human. The challenge, then, is to embed conscience into code, to ensure that the unmanned warrior remains a servant of law and humanity rather than an autonomous arbiter of life and death.

1.5 Structure of the Paper

This paper develops the argument in nine further sections. Section 2 explores the rise of unmanned warriors across land, sea, and air domains. Section 3 delves into battlefield decision-making and AI’s role in compressing time cycles. Section 4 interrogates the balance between operational reliability and ethical governance. Section 5 examines sovereignty and indigenous AI imperatives. Section 6 proposes doctrinal frameworks for India’s armed forces. Section 7 outlines institutional mechanisms of governance. Section 8 derives lessons from global practices. Section 9 sets forth India’s roadmap towards 2030 and beyond. Finally, Section 10 presents the concluding message — that India’s AI warriors must be trusted, indigenous, and ethical, embodying not only technological excellence but also human conscience.

Section 2 – Rise of the Unmanned Warrior

2.1 Historical Continuity of Innovation

Warfare has always adapted to technology, and technology has always reshaped the battlefield. Just as the tank revolutionised land combat in the early 20th century and aircraft transformed the very geometry of war, today autonomous systems are rewriting the grammar of conflict. The unmanned warrior is not a futuristic vision; it is a present reality steadily maturing across domains.

2.2 Land Domain – Unmanned Ground Vehicles (UGVs)

Autonomous ground platforms have advanced far beyond basic remotely operated vehicles. Equipped with AI-based navigation, terrain-mapping, and payload integration, UGVs can execute reconnaissance, logistics, explosive ordnance disposal, and even direct combat roles. Their value lies in reducing human exposure to risk while maintaining persistent presence in high-threat environments. For India, UGVs can play a critical role in border surveillance, counter-insurgency operations, and supply chain resilience in difficult terrains such as the Himalayas.

2.3 Air Domain – Unmanned Combat Aerial Vehicles (UCAVs) and Swarm Drones

In the skies, unmanned combat platforms are redefining aerial warfare. UCAVs can strike deep behind enemy lines, guided by AI-enabled target recognition and mission planning. Swarm drone systems, operating in coordinated clusters, present a disruptive capability — overwhelming adversary air defences through distributed intelligence. Global experiences, from Armenia–Azerbaijan conflicts to current theatres, underline how unmanned aerial systems can tip the scales in contested battlespaces. India’s indigenous swarm projects and loyal wingman concepts must be accelerated under Atmanirbhar Bharat.

2.4 Maritime Domain – Unmanned Surface and Undersea Vehicles

Maritime security is increasingly shaped by autonomous vessels. Unmanned Surface Vehicles (USVs) patrol littorals, escort fleets, and monitor choke points. Unmanned Undersea Vehicles (UUVs) extend surveillance into the hidden depths, where manned submarines are vulnerable. For India, with its vast coastline and strategic Indian Ocean footprint, the integration of unmanned maritime platforms is a necessity, not a choice. Indigenous design and deployment will ensure sovereignty in undersea domain awareness.

2.5 Multi-Domain Expansion and Integration

The true power of unmanned warriors lies in their integration across domains. Land, air, maritime, space, and cyber systems, when fused into a coherent operational web, create decision superiority. Autonomous systems not only act individually but also collaborate in multi-domain task forces, guided by AI fusion of radar, EW, satellite, and sensor data. The rise of the unmanned warrior is therefore not about replacing humans but about augmenting human decision-making with relentless, coordinated, and adaptive machine support.

2.6 The Unmanned Warrior as a Composite Entity

The unmanned warrior must not be seen in fragments — as drones in the sky, vehicles on land, or vessels at sea — but as a composite entity that fuses sensing, decision, and striking capabilities into one continuum. Powered by AI-enabled situational awareness, these warriors detect threats, classify intent, and coordinate responses across domains. They are not mere extensions of human command but partners in cognition, capable of shaping battles through speed, precision, and adaptive decision-making.

2.7 India’s Position in the Global Landscape

Globally, nations have raced ahead in developing autonomous systems — the United States with loyal wingman projects, China with drone swarms, Israel with combat-tested UAS, and Russia with robotic ground platforms. India is catching up, with indigenous programs under DRDO, startups, and academia–industry collaborations. Yet, the strategic choice before India is unique: to craft unmanned warriors that are not merely imitations of foreign models but embodiments of India’s sovereignty, ethos, and innovation. The coming decade will determine whether India becomes a consumer of imported autonomy or a leader in indigenous unmanned warfare.

2.8 From Machines to Warfighters

The next stage of evolution is not about building more unmanned machines, but about creating unmanned warfighters — platforms that combine sensors, neutralisers, and decision engines into a unified combat system. Such entities will carry both defensive and offensive payloads, adapt in real time to contested conditions, and operate in swarms or teams under AI-guided coordination. For India, embracing this vision early ensures that its unmanned warriors are not passive tools but active warfighters that redefine the very geometry of the battlefield.

2.9 Transition to Next Discussion

Having established the presence of unmanned warriors across land, air, and maritime domains, the next section will analyse the role of AI in battlefield decision-making — the mind that drives the unmanned body. Here lies both the operational promise and the ethical peril of the unmanned warrior.

Section 3 – AI in Battlefield Decision-Making

3.1 The Centrality of Decision in War

Every conflict ultimately distils into decisions — when to strike, whom to engage, how to manoeuvre, and when to disengage. Traditionally, commanders bore this burden, aided by staff and intelligence. With AI, decision-making enters a new epoch. Machines are now capable of processing sensor data, evaluating threat matrices, and recommending or executing courses of action at speeds incomprehensible to human cognition. The question is not whether AI will shape decisions, but how much authority it should be granted.

The unmanned warrior, as earlier defined, is not a mere platform but a composite combat entity — fusing sensors, decision engines, and strike systems into one continuum. Its power lies in how AI transforms disparate functions into a unified cycle of awareness, decision, and action. Thus, battlefield decision-making is not an adjunct to the unmanned warrior but its very lifeblood, converting platforms into true warfighters.

3.2 Acceleration of the OODA Loop

The Observe–Orient–Decide–Act (OODA) cycle defines the rhythm of battle. AI compresses this loop, reducing minutes to seconds. Automated target recognition enables instantaneous engagement; AI-assisted mission planning reshuffles resources in real time. In high-velocity theatres such as air-to-air combat or electronic warfare duels, milliseconds separate victory from defeat. India must therefore develop AI that can operate reliably at machine pace, while retaining human oversight where ethical and strategic prudence demand it.

3.3 Decision Latency as the New Vulnerability

In high-velocity battlespaces, decision latency — the delay between sensing a threat and acting upon it — can mean the difference between victory and destruction. In air-to-air duels, hypersonic missile engagements, or swarm drone clashes, even a two-second lag is fatal. AI-enabled decision-making reduces this latency from minutes to milliseconds, converting what was once human reaction into near-instantaneous action. Commanders who harness this power gain decision superiority, which in turn translates directly into battle superiority.

3.4 Functions Enabled by AI

AI applications in battlefield decision-making span multiple dimensions:

  • Target recognition & classification – distinguishing friend from foe with precision.

  • Mission planning & re-planning – dynamically adjusting tactics under fluid conditions.

  • Threat evaluation – prioritising multiple incoming dangers across domains.

  • Sensor fusion – integrating radar, UAV, satellite, and EW inputs into coherent situational awareness.

  • Predictive logistics – forecasting supply chain demands based on evolving battle patterns.
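One of these functions, threat evaluation, can be made concrete with a minimal sketch. The following Python fragment is purely illustrative (the `Threat` fields and the urgency heuristic are assumptions, not any fielded system): it ranks multiple incoming dangers by combining estimated lethality with time to impact, which is one common way to express "prioritising multiple incoming dangers across domains".

```python
from dataclasses import dataclass

@dataclass
class Threat:
    track_id: str
    kind: str              # e.g. "uav", "missile", "vessel" (illustrative)
    time_to_impact_s: float
    lethality: float       # 0.0-1.0, assumed output of a classifier

def prioritise(threats):
    """Rank threats most-urgent first.
    Urgency heuristic (an assumption): lethality divided by time to impact,
    so faster and deadlier tracks rise to the top of the queue."""
    return sorted(
        threats,
        key=lambda t: -(t.lethality / max(t.time_to_impact_s, 0.1)),
    )

picture = [
    Threat("T1", "uav", 120.0, 0.4),
    Threat("T2", "missile", 15.0, 0.9),
    Threat("T3", "vessel", 600.0, 0.6),
]
ranked = prioritise(picture)
print([t.track_id for t in ranked])  # the inbound missile ranks first
```

A real threat-evaluation engine would of course fuse many more signals (intent, geometry, interceptor availability), but the structural idea — a scoring function feeding an ordered engagement queue — is the same.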

3.5 The Force Multiplier Effect

AI is not just a tool for efficiency; it is a force multiplier that redefines combat power. A single commander, aided by AI-driven situational awareness and predictive analytics, can now manage what previously required entire staffs and operations centres. This redistribution of cognitive load frees human leadership to focus on strategic judgement while AI handles tactical and operational complexities at machine pace.

3.6 Reliability Under Adversity

AI decision engines must function even under adversarial conditions — jamming, spoofing, cyber denial, and misinformation. The battlefield will not present pristine datasets; it will flood AI with poisoned or deceptive inputs. Therefore, architectures must be resilient, self-correcting, and capable of functioning with degraded or partial data. For India, the challenge is to design AI that does not collapse under denial but adapts like the seasoned warrior it replaces.

3.7 The Stakes of Trust

For AI to be truly accepted on the battlefield, it must earn the trust of the soldier and the commander. A single misclassified target, a single false alarm under cyber deception, can undermine confidence and paralyse operations. Trust in AI must therefore be built through rigorous combat trials, resilient architectures, and doctrinal clarity. Once trust is secured, AI ceases to be an experimental support and becomes a central pillar of battlefield command.

3.8 Human–Machine Synergy

AI must not be viewed as a replacement for human judgment but as an enhancer. The synergy lies in combining machine speed with human wisdom. Human commanders must oversee critical thresholds — authorising lethal force, setting escalation boundaries, and interpreting contextual nuance. This balance, often termed human-on-the-loop, ensures that machines accelerate the tactical but humans preserve the strategic and moral compass.

3.9 Case Illustrations

  • Border standoffs: AI-enabled surveillance drones that instantly relay troop movements, allowing commanders to de-escalate or reinforce.

  • Grey-zone conflicts: Swarm drones that assess adversary intent by simulating attack patterns but hold back actual fire until authorised.

  • Electronic warfare theatres: AI that adjusts frequencies in microseconds to evade jamming, while humans determine larger engagement rules.

3.10 The Coming Reality of Machine-Dominated Tempo

Future conflicts will unfold at a tempo dictated by machines, not men. In such a battlespace, hesitation is defeat. The nation that integrates AI most effectively into its decision cycles will not just respond faster — it will shape the rhythm of the war itself. India’s imperative is therefore clear: build AI that is trusted, indigenous, and capable of keeping pace with, or outpacing, any adversary’s machine-driven tempo.

3.11 Transition to Next Discussion

The operational promise of AI in decision-making is undeniable, yet it brings forth dilemmas of reliability, accountability, and escalation. The next section will therefore probe the tension between operational reliability and ethical governance, a balance that will define the legitimacy of autonomous warfare.

Section 4 – Operational Reliability vs. Ethical Governance

4.1 The Dual Imperative

Autonomous warfare rests upon two pillars: operational reliability and ethical governance. Reliability ensures that systems perform as intended under stress; ethics ensures that their performance remains within the bounds of law and conscience. To privilege one at the expense of the other would imperil both military effectiveness and moral legitimacy. India must therefore develop a balanced approach that harmonises these imperatives.

4.2 Reliability Under Combat Conditions

AI-enabled autonomous systems must endure the harshest test — combat adversity. Electronic jamming, cyber intrusions, GPS spoofing, and data poisoning are not hypotheticals but certainties in contested environments. A reliable autonomous warrior must not only resist such attacks but continue to function with grace under degradation. Fail-safe designs, redundancy, and layered cyber defences are essential. India’s development doctrine must stress combat hardening of AI architectures before frontline deployment.

The transformation of machines into warfighters amplifies both the promise and peril of autonomy. Because these entities now carry cognitive weight — sensing, classifying, and striking — a failure in reliability or an error in ethical boundaries does not remain technical; it becomes a failure of warfighting judgment itself. This makes governance and reliability of AI not optional but existential for the unmanned warrior to remain a trusted comrade-in-arms rather than a liability.

4.3 The Cost of Systemic Failure

The battlefield is unforgiving. A single AI failure can trigger cascading consequences. A drone misidentifying a civilian convoy as hostile, or an automated missile defence system failing under spoofed signals, could escalate a border skirmish into a regional war. The margin for error in autonomous warfare is therefore near zero. Reliability is not an engineering preference — it is a strategic necessity for stability and deterrence.

4.4 Levels of Autonomy

To govern autonomy, it is necessary to define levels of human control:

  • Human-in-the-loop – Machines propose, humans decide. Suitable for lethal engagements requiring oversight.

  • Human-on-the-loop – Machines decide within preset parameters; humans supervise and can intervene. Useful in high-speed combat.

  • Fully autonomous – Machines decide and act independently without human intervention. To be restricted, if at all, to non-lethal or logistics functions.

This graduated model preserves human accountability while leveraging machine efficiency.
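The graduated model can be expressed as an explicit authorisation gate. The sketch below is a simplified assumption of how such a rule set might be encoded (the function name `may_engage` and its parameters are hypothetical): lethal action under full autonomy is barred outright, human-in-the-loop requires explicit consent, and human-on-the-loop proceeds only in the absence of a veto.

```python
from enum import Enum

class Autonomy(Enum):
    HUMAN_IN_THE_LOOP = 1   # machines propose, humans decide
    HUMAN_ON_THE_LOOP = 2   # machines decide, humans supervise and can veto
    FULLY_AUTONOMOUS = 3    # restricted to non-lethal / logistics functions

def may_engage(level, lethal, human_approved, human_vetoed=False):
    """Return True only when the graduated autonomy model permits action."""
    if lethal and level is Autonomy.FULLY_AUTONOMOUS:
        return False                 # lethal full autonomy is barred outright
    if level is Autonomy.HUMAN_IN_THE_LOOP:
        return human_approved        # nothing fires without explicit consent
    if level is Autonomy.HUMAN_ON_THE_LOOP:
        return not human_vetoed      # proceeds unless a supervisor intervenes
    return not lethal                # fully autonomous: non-lethal only
```

Encoding the doctrine as a single pure function has a governance benefit: the rule set is auditable, testable, and cannot be bypassed by any subsystem that must call it before acting.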

4.5 The Balance of Authority

Autonomy is a double-edged sword. Granting machines excessive freedom risks uncontrollable escalation, while over-restricting them undermines their operational value. The art lies in defining graduated layers of machine authority, where routine functions are automated, time-critical responses are supervised, and lethal decisions always retain a human fingerprint. This balance preserves the speed of the machine without sacrificing the wisdom of the human.

4.6 Ethical Dilemmas in Lethal Autonomy

Delegating life-and-death decisions to algorithms raises profound moral questions. Can an AI truly distinguish combatant from civilian in fog-of-war conditions? What happens when algorithmic bias or adversarial deception causes wrongful targeting? Without a human moral compass, autonomy risks descending into indiscriminate violence. Ethical frameworks rooted in International Humanitarian Law and India’s dharmic ethos must therefore circumscribe the scope of lethal autonomy.

4.7 Ethics as a Weapon of Legitimacy

Ethics in autonomous warfare is not merely a moral shield; it is a weapon of legitimacy. A nation that demonstrates ethical restraint while employing AI in war strengthens its credibility in the eyes of allies, partners, and global institutions. For India, embedding dharmic principles and humanitarian law into AI systems is not just compliance — it is a strategic act of winning the narrative, ensuring that India’s warriors fight with both might and moral authority.

4.8 Risks of Strategic Instability

Unchecked autonomy creates the danger of autonomous escalation. Machines, reacting to algorithmic triggers, could initiate spirals of conflict without deliberate human intent. An automated retaliatory strike misinterpreted as hostile escalation could rapidly broaden a limited skirmish into regional war. To mitigate this, strict escalation control protocols and human veto thresholds must be enshrined within AI doctrines.

4.9 Escalation in Machine Time

Traditional escalation unfolds over hours or days, allowing diplomacy and backchannels to intervene. Autonomous escalation, however, can unfold in machine time — minutes or seconds. Algorithmic misinterpretation of intent could lock nations into spirals before leaders are even aware of the crisis. To guard against this, India must hard-code escalation brakes into AI doctrines, ensuring that machines accelerate warfighting but never seize control of war itself.
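What a hard-coded "escalation brake" could look like in software is sketched below. This is an assumption-laden illustration, not a reference design: it rate-limits automated responses within a rolling time window, and once the cap is reached, every further action must wait for explicit human release.

```python
import time
from collections import deque

class EscalationBrake:
    """Hard cap on automated responses per rolling window.
    Beyond the cap, authorisation is refused and the action must
    queue for explicit human release (the cap and window here are
    illustrative values, not doctrine)."""

    def __init__(self, max_auto_actions=3, window_s=60.0):
        self.max_auto = max_auto_actions
        self.window = window_s
        self.log = deque()   # timestamps of recent automated actions

    def authorise(self, now=None):
        now = time.monotonic() if now is None else now
        # Forget actions that have aged out of the rolling window.
        while self.log and now - self.log[0] > self.window:
            self.log.popleft()
        if len(self.log) >= self.max_auto:
            return False     # brake engaged: human release required
        self.log.append(now)
        return True
```

The point of the design is that the brake operates in machine time too: it does not slow every action, but it guarantees that a runaway retaliation loop stalls after a bounded number of steps, buying back the hours that diplomacy needs.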

4.10 Accountability and Legal Responsibility

In traditional warfare, accountability rests on commanders and states. With autonomous systems, responsibility risks dilution: was it the coder, the commander, or the machine? India must pre-empt this ambiguity by codifying laws of accountability, ensuring that human commanders remain ultimately responsible for every autonomous action. This preserves the principle of command responsibility central to military justice.

4.11 Preserving the Chain of Command

The cornerstone of military order is the chain of command. Autonomous warfare threatens to blur this chain by shifting accountability onto algorithms. India must assert clearly that command responsibility cannot be outsourced to code. Every AI-enabled action must trace back to a human decision-maker, ensuring that military justice, political accountability, and civilisational values remain intact.

4.12 International Norms and India’s Stance

Globally, debates on Lethal Autonomous Weapon Systems (LAWS) are intensifying at the United Nations. Some nations argue for a ban, others for regulation, still others for unrestricted development. India must carve its stance: championing responsible autonomy, preserving national security interests, and contributing to global ethical norms. By articulating an AI-Soldier Doctrine, India can align operational necessity with moral responsibility.

4.13 Transition to Next Discussion

Having weighed the tension between reliability and ethics, the paper now turns to Sovereignty, Data Security, and Indigenous AI — the bedrock upon which India’s unmanned warrior must be built.

Section 5 – Sovereignty, Data Security & Indigenous AI

5.1 Sovereignty as the First Principle

In the realm of autonomous warfare, sovereignty is the first and non-negotiable principle. A nation that relies on imported AI tools for battlefield decisions risks ceding its command to foreign code. Hidden backdoors, malicious patches, or manipulated algorithms can compromise not only missions but also national command structures. For India, sovereignty in AI is not merely desirable; it is strategically indispensable.

5.2 Outsourcing Command is Surrender

The most dangerous form of dependency is not on weapons or fuel, but on decision intelligence itself. To rely on foreign AI for battlefield decision-making is to outsource the very sovereignty of command. In such a scenario, the mind of war would no longer be Indian — it would be hostage to alien code, invisible biases, and possible backdoors. Sovereignty demands that India’s AI warriors think, decide, and fight with Indian code, Indian data, and Indian conscience.

5.3 Risks of Imported AI Systems

Imported AI systems pose multiple vulnerabilities:

  • Embedded backdoors – allowing adversaries covert access during conflict.

  • Algorithmic opacity – leaving militaries blind to hidden biases.

  • Cyber manipulation – remote interference with mission-critical decisions.

  • Supply chain dependency – exposure to sanctions or denial regimes.

In a high-stakes battlefield, such vulnerabilities translate directly into strategic defeat. Dependence is therefore untenable.

5.4 The Trojan Horse Risk

Imported AI platforms may arrive as allies but conceal Trojan horses — hidden circuits, malicious updates, or concealed algorithms that could be activated in conflict. The nightmare is not a system refusing to work, but one that subtly misdirects decisions in ways invisible until it is too late. For a nation like India, surrounded by contested borders and multiple adversaries, the price of such infiltration would be catastrophic. The antidote lies only in indigenous AI, tested in Indian laboratories, built on sovereign datasets.

5.5 Secure Data Ecosystems

Battlefield AI must operate on sovereign, classified datasets housed in secure, air-gapped environments. Unlike civilian AI, which thrives on vast open data, military AI demands secrecy and protection. Training datasets must be curated within national laboratories; operational data must be encrypted and ring-fenced; and testing must simulate adversarial conditions. Only then can India ensure mission-critical trustworthiness of its unmanned warriors.

5.6 Data as the New Ammunition

In autonomous warfare, data is ammunition. An army may run out of bullets, but if it loses control over its data, it loses the war itself. Sovereign datasets must therefore be guarded with the same sanctity as nuclear arsenals. Training and operational data must never cross borders, never reside on foreign servers, and never be exposed to hostile cyber espionage. A breach of data sovereignty is not theft; it is an invasion without a shot fired.

5.7 Indigenous Development Pathways

India must field exclusive defence-use AI machines crafted indigenously. This requires a Force-to-Company (F2C) integration model wherein the Services, DRDO, academia, and private industry co-develop solutions. Such clusters enable rapid innovation, indigenous ownership, and insulation from foreign dependencies. Start-ups and SMEs must be incentivised to partner with the armed forces, bringing agility to the otherwise long development cycles of traditional defence establishments.

5.8 India’s Opportunity to Lead the Global South

Indigenous AI is not only a defensive necessity but also a strategic opportunity. By creating secure, ethical, and sovereign AI systems, India can become the arsenal of trusted autonomy for the Global South. Many nations face the same dilemma: advanced foreign AI comes with dependency, but domestic capacity is underdeveloped. India can bridge this gap, exporting not just machines but a philosophy of sovereign AI warfare, thereby shaping global norms while advancing Atmanirbhar Bharat.

5.9 Combat Hardening of Indigenous AI

Indigenous AI must be rigorously tested against adversarial threats: jamming, spoofing, adversarial AI attacks, and cyber denial. Combat hardening is not a one-time process but a continuous cycle of adaptation. Certification protocols must be institutionalised, ensuring that no AI warrior is inducted without passing mission-critical reliability trials. This would elevate indigenous AI from prototype to trusted defender.

5.10 The Role of Atmanirbhar Bharat

Atmanirbhar Bharat is not simply an industrial slogan; it is the doctrinal backbone of India’s future in AI warfare. By fostering indigenous design, manufacturing, and innovation ecosystems, India ensures that its AI warriors embody national ethos, sovereignty, and trust. A self-reliant AI ecosystem will also allow India to export ethical AI solutions, shaping global norms while advancing national security.

5.11 Sovereignty as the New Deterrence

In the wars of the future, deterrence will not rest on nuclear parity or conventional numbers alone. It will rest on the sovereignty of algorithms. Nations that control their AI warriors will command deterrence; nations that rent them will be dictated to. For India, Atmanirbhar Bharat in AI is not industrial policy — it is the new foundation of strategic deterrence in the 21st century.

5.12 Transition to Next Discussion

With sovereignty and indigenous imperatives established, the next section will outline Doctrinal Imperatives for India — the conceptual frameworks that will regulate autonomy, ensure accountability, and guide deployment across the Tri-Services.

Section 6 – Doctrinal Imperatives for India

6.1 The Need for Doctrinal Clarity

Technology, however advanced, cannot substitute for doctrine. Without a clear framework, autonomous systems risk being misapplied, misunderstood, or misused. India requires doctrine not only to regulate its AI warriors but also to anchor them within the broader principles of national security, humanitarian law, and dharmic ethos. Doctrine provides predictability, accountability, and strategic stability.

6.2 Arming Without Aiming

To induct AI into the battlefield without doctrine is to arm without aiming. Machines may act with speed, but without doctrinal guidance, that speed becomes random and potentially destabilising. Doctrine is the compass that ensures autonomy accelerates towards victory, not chaos. India’s doctrine must therefore precede deployment, making clear that machines serve the strategy, not dictate it.

6.3 AI-Soldier Doctrine

The AI-Soldier Doctrine must define the permissible boundaries of autonomy for every class of unmanned system. It should codify:

  • Levels of autonomy appropriate for different missions.

  • Human oversight requirements for lethal functions.

  • Escalation control protocols to prevent unintended conflict spirals.

  • Accountability principles ensuring that human commanders remain answerable for autonomous actions.

By articulating such a doctrine, India can both operationalise AI warriors and reassure allies, partners, and the international community of its responsible intent.
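The three oversight levels named in the doctrine can be expressed as a single doctrinal gate. The sketch below is purely illustrative; the enum names and the `may_engage` helper are assumptions for exposition, not the interface of any fielded system. Its one hard rule mirrors the doctrine: lethal force always requires a human decision, regardless of autonomy level.

```python
from enum import Enum

class AutonomyLevel(Enum):
    HUMAN_IN_THE_LOOP = 1   # every engagement needs explicit human approval
    HUMAN_ON_THE_LOOP = 2   # machine acts; a human supervisor can veto
    FULLY_AUTONOMOUS = 3    # machine acts within pre-authorised bounds

def may_engage(level: AutonomyLevel, lethal: bool, human_approval: bool) -> bool:
    """Doctrinal gate (illustrative): lethal force always requires a human decision."""
    if lethal:
        return human_approval            # human accountability for every lethal act
    if level is AutonomyLevel.HUMAN_IN_THE_LOOP:
        return human_approval            # in-the-loop: human approves everything
    return True                          # non-lethal, on-the-loop or autonomous
```

Even under this toy model, `may_engage(AutonomyLevel.FULLY_AUTONOMOUS, lethal=True, human_approval=False)` returns `False`: full autonomy never extends to unsanctioned lethal action.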

6.4 The Human Fingerprint Principle

The AI-Soldier Doctrine must be anchored in what may be termed the Human Fingerprint Principle: every autonomous action, however automated, must ultimately trace back to a human decision-maker. This principle reassures allies, preserves accountability, and prevents machines from becoming rogue arbiters of war. It is this principle that transforms autonomy from a liability into a trusted extension of command.
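In engineering terms, the Human Fingerprint Principle implies an audit trail in which no autonomous action can even be logged without an accountable human authoriser. A minimal sketch, assuming hypothetical field names (`authorised_by`, `authorisation_ref`) chosen for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ActionRecord:
    """Immutable log entry tying an autonomous action to a human authoriser."""
    action: str
    platform_id: str
    authorised_by: str          # identity of the accountable commander
    authorisation_ref: str      # order or rules-of-engagement reference
    timestamp: str

def log_action(action: str, platform_id: str,
               authorised_by: str, authorisation_ref: str) -> ActionRecord:
    """Refuse to record any action that lacks a human fingerprint."""
    if not authorised_by:
        raise ValueError("no human fingerprint: action lacks an accountable authoriser")
    return ActionRecord(action, platform_id, authorised_by, authorisation_ref,
                        datetime.now(timezone.utc).isoformat())
```

Making the record immutable (`frozen=True`) reflects the accountability aim: once an action is attributed to a commander, that attribution cannot be silently rewritten.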

6.5 AI-Battlefront Doctrine

Beyond individual systems, India must adopt an AI-Battlefront Doctrine — integrating autonomy across land, air, maritime, space, and cyber domains. This doctrine should ensure:

  • Multi-domain synergy between autonomous platforms.

  • Fusion of sensor inputs for unified situational awareness.

  • Joint operations protocols for Tri-Service integration.

  • Resilient C2 structures that retain human oversight even in degraded conditions.

Such a doctrine elevates autonomy from isolated tools to a comprehensive force multiplier.

Doctrinal clarity becomes indispensable precisely because unmanned warriors are no longer passive tools but active teammates on the battlefront. When machines graduate into war fighters — equipped with sensors, neutralisers, and decision engines — they demand doctrinal rules that bind them within the human chain of command. Doctrine is the framework that ensures these war fighters fight with discipline, accountability, and alignment to India’s strategic ethos.

6.6 From Tools to Teammates

Doctrinal thinking must shift from seeing AI as tools to seeing them as teammates in the battlefront ecosystem. Just as infantry trusts artillery or air cover, future warriors must trust AI-enabled platforms to complement their fight. The AI-Battlefront Doctrine should therefore frame AI not as a substitute for humans, but as a force partner, seamlessly woven into joint operations.

6.7 Escalation Control Mechanisms

Escalation remains the gravest risk of autonomous warfare. Doctrines must enshrine:

  • Pre-set engagement thresholds below which machines may act independently.

  • Mandatory human authorisation for lethal or strategic strikes.

  • Automated de-escalation triggers in the event of ambiguous or conflicting inputs.

This layered approach ensures that machines accelerate tactical decisions without breaching strategic red lines.
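The three layers above compose into a simple decision ladder: de-escalate on ambiguity, hold lethal or strategic action for human sanction, and act independently only below a pre-set threshold. The sketch below is a hypothetical illustration of that ordering; the function name, threshold value, and return labels are assumptions, not doctrine.

```python
def decide(threat_score: float, lethal: bool, inputs_agree: bool,
           human_sanction: bool, independent_threshold: float = 0.4) -> str:
    """Layered escalation control (illustrative), checked in doctrinal order."""
    if not inputs_agree:
        return "DE-ESCALATE"          # ambiguous or conflicting inputs: stand down
    if lethal:
        return "ENGAGE" if human_sanction else "HOLD_FOR_HUMAN"
    if threat_score < independent_threshold:
        return "ENGAGE_AUTONOMOUS"    # below the pre-set threshold: machine may act
    return "HOLD_FOR_HUMAN"           # above threshold: escalate to a human
```

Note the ordering: the de-escalation trigger is evaluated first, so no degree of autonomy can override it, which is exactly the "doctrinal brake" the text describes.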

6.8 Red Lines in Machine Time

Escalation in the age of autonomy unfolds in machine time — seconds, not days. Doctrines must therefore define red lines that machines cannot cross without human sanction. These doctrinal brakes will preserve stability by ensuring that AI accelerates tactical cycles without triggering strategic spirals. India’s innovation will lie in coding escalation control into doctrine itself, not leaving it to chance.

6.9 Integration into Tri-Service Operations

AI doctrines must be embedded into the operational structures of the Army, Navy, and Air Force. Joint planning, joint exercises, and joint command centres must incorporate AI warriors as integral components. Only through Tri-Service integration can India harness the full potential of autonomy while avoiding duplication, fragmentation, or misalignment.

6.10 Embedding Ethics in Doctrine

Doctrines must reflect India’s unique ethical traditions. Drawing from dharmic principles of just war, humanitarian law, and command accountability, India’s doctrines can project a distinctive model of responsible autonomy. This ethical framing will strengthen legitimacy both domestically and internationally, positioning India as a leader in shaping the moral compass of autonomous warfare.

6.11 India’s Doctrinal Distinction

India has the opportunity to lead the world by embedding dharmic ethos into digital code. While other nations argue over bans and loopholes, India can craft doctrines that showcase autonomy fused with humanity. Such doctrines will not only guide Indian forces but also inspire international norms, positioning India as the thought leader of ethical autonomy in warfare.

6.12 Transition to Next Discussion

With doctrinal imperatives defined, the next section will turn to Institutional Framework & Governance — the organisational mechanisms required to oversee, regulate, and implement these doctrines at the highest levels of statecraft.

Section 7 – Institutional Framework & Governance

7.1 The Necessity of Institutional Oversight

Doctrines and technologies alone cannot guarantee responsible autonomy. Strong institutional frameworks are essential to translate principles into practice, regulate development, and ensure accountability. Without structured oversight, the pace of technological adoption may outstrip ethical safeguards and strategic coherence.

7.2 The Danger of Institutional Vacuum

History teaches that where institutions lag, technology runs amok. In autonomous warfare, an institutional vacuum is not just inefficiency — it is vulnerability. Without strong oversight, machines may be inducted without accountability, algorithms may evolve without ethical checks, and adversaries may exploit gaps in governance. India must therefore institutionalise AI control early, decisively, and at the highest levels of statecraft.

7.3 National Strategic Cyber Combat & Security Committee (NSCCSC)

India must establish a National Strategic Cyber Combat & Security Committee (NSCCSC) at the Cabinet level. This body should:

  • Oversee AI doctrines for military deployment.

  • Integrate AI strategies with national cyber, space, and electronic warfare policies.

  • Provide civilian-strategic oversight to balance military urgency with democratic accountability.

  • Guide national AI R&D priorities, ensuring that defence-use AI receives sustained funding and focus.

The NSCCSC would act as India’s apex guardian of autonomous warfare policy.

7.4 The War Council for the AI Age

The NSCCSC must function as India’s War Council for the AI Age — fusing political will, military necessity, and technological foresight into one decisive body. Its authority must be Cabinet-level, its mandate strategic, and its decisions binding. Only such a council can align India’s national AI posture with the gravity of 21st-century conflicts.

7.5 Integration with Tri-Service Cyber Command

The proposed Tri-Service Cyber Command (TSCC) must be the operational counterpart of the NSCCSC. Its mandate should include:

  • Operationalising AI doctrines across the Army, Navy, and Air Force.

  • Conducting joint training exercises with AI-enabled platforms.

  • Ensuring resilience of C2 systems against cyber denial.

  • Maintaining cyber-physical integration with space, EW, and sensor networks.

This alignment between NSCCSC (strategic) and TSCC (operational) will ensure both coherence and adaptability.

7.6 Forging the AI–Cyber Command Nexus

The Tri-Service Cyber Command must not remain a reactive entity. It must evolve into a proactive AI–Cyber nexus, continuously stress-testing India’s autonomous systems against real-world cyber threats. This institutional integration ensures that every AI warrior inducted into service has survived the gauntlet of cyber denial, EW deception, and algorithmic attack.

7.7 Civil–Military–Industry–Academia Synergy

No single stakeholder can deliver autonomous warfare capability. India’s strength will lie in collaborative ecosystems:

  • Civilian policymakers – ensuring legal, ethical, and diplomatic framing.

  • Military services – defining operational needs and doctrine.

  • Industry and start-ups – delivering innovation and agile solutions.

  • Academic institutions – advancing AI research, ethics, and training.

Institutional frameworks must formalise this synergy, with clear roles, responsibilities, and funding pathways.

7.8 A New National Innovation Compact

What India requires is a National Innovation Compact — a formal pact binding Services, academia, industry, and start-ups into a single collaborative mission. This compact must not be loose coordination but structured synergy, backed by state funding and clear milestones. Only then can India accelerate from fragmented efforts to a cohesive ecosystem of autonomous warfare innovation.

7.9 Legal and Regulatory Architecture

India must enact a regulatory regime tailored to autonomous systems in defence. This should cover:

  • Standards for certification of AI platforms.

  • Accountability clauses for commanders and developers.

  • Export controls to prevent misuse of indigenous AI.

  • Alignment with international law while safeguarding sovereign interests.

Such a legal framework will prevent ambiguity and strengthen India’s credibility as a responsible power.

7.10 Codifying Sovereignty in Law

Regulation must go beyond technical certification. It must codify sovereignty itself. Laws must declare that India’s military AI cannot be hosted on foreign clouds, cannot use unverified datasets, and cannot be subject to foreign arbitration. Such codification transforms sovereignty from aspiration into enforceable rule, ensuring no loophole undermines India’s security.

7.11 Anchoring Sovereignty in Governance

All institutional measures must reinforce sovereignty. Foreign partnerships may provide learning opportunities but must not compromise command over national AI warriors. Data, algorithms, and operational authority must remain under Indian control. This principle must be codified into every governance mechanism.

7.12 Institutional Resilience as Deterrence

Strong institutions are themselves a form of deterrence. Adversaries hesitate to test a nation that demonstrates discipline, oversight, and unity in its AI command structures. India’s governance mechanisms must therefore project resilience not only to domestic stakeholders but also to the world — signalling that India’s unmanned warriors are guided by order, not chaos.

7.13 Transition to Next Discussion

With institutional frameworks outlined, the next section will broaden the horizon to Global Lessons & Comparative Perspectives, drawing insights from other nations while shaping a uniquely Indian pathway.

Section 8 – Global Lessons & Comparative Perspectives

8.1 The Value of Comparative Insights

Autonomous warfare is a global phenomenon, with nations experimenting across domains. India must carefully study these experiences — not to imitate, but to distil lessons and craft a path aligned with its unique security environment and ethical traditions. Comparative perspectives provide both cautionary tales and aspirational models.

8.2 United States – Pragmatism and Scale

The U.S. has invested heavily in Lethal Autonomous Weapons Systems (LAWS) research and AI-enabled decision support. Key features include:

  • Loyal Wingman programs – pairing unmanned aircraft with manned fighters.

  • Project Maven – AI-driven target recognition and ISR integration.

  • “Defend Forward” cyber doctrine – pre-emptive disruption using AI-enabled tools.

8.3 Lesson for India – Speed with Oversight

Lesson for India: balance innovation with transparency, ensuring accountability frameworks keep pace with technological ambition.

The U.S. demonstrates that large-scale AI adoption must be matched with accountability mechanisms. India must imbibe this principle — adopt AI at speed, but never at the cost of ethical and doctrinal oversight. Rapid scaling must walk hand in hand with institutional discipline.

8.4 China – Strategic Integration

China’s People’s Liberation Army (PLA) consolidated cyber, space, and electronic warfare under the Strategic Support Force (SSF), reorganised in 2024 into dedicated Aerospace, Cyberspace, and Information Support Forces. Features include:

  • Large-scale investment in AI-enabled swarm drones.

  • Integration of autonomy into doctrine as a core warfighting principle.

  • Fusion of AI with information dominance strategies.

8.5 Lesson for India – Integration without Imitation

Lesson for India: integrate AI not as an adjunct but as a structural component of doctrine and force organisation, while guarding against escalation risks.

China’s model shows the power of integrating AI into doctrine, but it also reveals risks of authoritarian overreach and destabilising escalation. India must integrate AI as structurally as China has, but do so with democratic accountability and humanitarian restraint. The lesson is clear: integration is necessary, imitation is dangerous.

8.6 Israel – Combat Validation

Israel’s experiences in asymmetric conflicts highlight the effectiveness of unmanned systems:

  • Combat-tested UAVs used for surveillance and precision strikes.

  • Rapid innovation cycles between battlefield feedback and system refinement.

  • Export-oriented defence AI ecosystem.

8.7 Lesson for India – Short Feedback Loops

Lesson for India: leverage real-time operational feedback loops to accelerate innovation, while preserving sovereign safeguards.

Israel thrives on battlefield validation, refining systems after every conflict. India must adopt this practice by creating rapid feedback loops between combat units, R&D labs, and industry clusters. Long procurement cycles must give way to agile iteration, ensuring AI warriors evolve with battlefield reality.

8.8 Russia – Automation with Constraints

Russia has developed robotic ground vehicles and autonomous systems, but operational reliability has often faltered under combat stress. Lessons include:

  • Challenges of robustness in contested environments.

  • Over-reliance on imported sub-components.

8.9 Lesson for India – Resilience over Showcase

Lesson for India: prioritise resilience and indigenous supply chains over mere prototypes.

Russia’s struggles highlight that prototypes are not enough; robustness under stress is what matters. India must resist the temptation of showcase projects and instead demand resilience-first AI systems that can survive denial, deception, and attrition. Reliability, not rhetoric, is the true test of AI warriors.

8.10 Europe – Ethical Anchoring

Several European nations emphasise ethics and law in their approach to autonomy:

  • EU debates on banning LAWS.

  • Strong focus on human-in-the-loop systems.

  • Commitment to humanitarian principles in AI deployment.

8.11 Lesson for India – Ethics as Strategic Capital

Lesson for India: assert its dharmic ethos and humanitarian tradition to project an ethical model of autonomy in international fora.

Europe’s emphasis on ethics reveals a valuable insight: restraint itself can be a form of power. India, rooted in dharmic traditions, must practise and project ethical anchoring as a strategic capital. By doing so, India not only strengthens legitimacy but also creates a differentiator against authoritarian models of autonomy.

8.12 India’s Unique Pathway

While lessons abound, India must avoid dependency on imported models. Its pathway should be:

  • Ethically anchored – guided by dharmic principles and humanitarian law.

  • Operationally sovereign – built on indigenous AI ecosystems.

  • Globally contributive – shaping international norms for responsible autonomy.

India can thus emerge not merely as a participant but as a thought leader in the global discourse on autonomous warfare.

8.13 The Synthesis India Must Achieve

From the U.S., India must learn scale with oversight; from China, integration without authoritarianism; from Israel, rapid feedback loops; from Russia, resilience-first design; and from Europe, ethics as strength. Synthesising these lessons will allow India to craft a model of autonomy that is sovereign, ethical, resilient, and globally influential. This synthesis is India’s true comparative advantage.

8.14 Transition to Next Discussion

From global lessons, the next section will chart The Way Forward — 2030 & beyond, setting milestones for India’s journey towards trusted, indigenous, and ethical AI warriors.

Section 9 – The Way Forward: 2030 & Beyond

9.1 The Roadmap Imperative

A vision without milestones risks remaining aspirational. To realise the potential of unmanned warriors, India must adopt a structured roadmap with clear targets for 2030 and beyond. This roadmap should combine technological development, doctrinal maturity, institutional strengthening, and ethical embedding.

9.2 Short-Term (2025–2027) Goals

  • Establish AI-Soldier and AI-Battlefront Doctrines formally within the armed forces.

  • Operationalise Tri-Service Cyber Command (TSCC) with AI integration capability.

  • Launch indigenous swarm drone squadrons for border surveillance and grey-zone conflicts.

  • Develop sovereign datasets and air-gapped AI laboratories under DRDO and Services.

  • Establish a certification framework for combat hardening of indigenous AI.

9.3 Laying the Diplomatic Groundwork

Alongside technical milestones, India must launch diplomatic initiatives that position it as a champion of responsible AI in warfare. By 2027, India should lead regional and global forums advocating for sovereign, ethical AI doctrines. This dual-track approach — building capability at home while shaping norms abroad — will ensure India is not a rule-taker but a rule-maker in autonomous warfare governance.

9.4 Medium-Term (2028–2029) Goals

  • Fusion of AI with EW and space assets, creating seamless situational awareness.

  • Fielding of autonomous UGVs and UUVs for logistics, surveillance, and limited combat roles.

  • AI-enabled mission planning systems integrated into command and control centres.

  • Legislative enactment of national laws governing military AI accountability and certification.

  • Expanded F2C (Force-to-Company) clusters across Services, academia, and industry.

9.5 AI as a Strategic Equaliser

By 2028–29, India’s focus should shift to demonstrating AI as a strategic equaliser. Against adversaries with larger conventional forces, India’s sovereign AI warriors can level the balance by compressing decision time, multiplying combat power, and deterring escalation. This vision must be articulated not only in military doctrine but also in diplomatic messaging — showing that AI is India’s tool of deterrence and stability, not aggression.

9.6 Long-Term (2030–2035) Goals

  • Deployment of trusted indigenous AI warriors across land, air, maritime, and cyber domains.

  • AI–Quantum synergy to counter emerging threats in cryptography and EW.

  • Autonomous systems integration into Tri-Service doctrine as core force multipliers.

  • Creation of National Strategic Cyber Combat & Security Committee (NSCCSC) at Cabinet level for oversight and guidance.

  • Positioning India as an ethical AI leader globally, shaping norms for responsible autonomy.

9.7 The Triad of Future Sovereignty

By 2035, sovereignty will rest on an intertwined triad: control of territory, control of cyberspace, and control of algorithms. India must prepare for this future by integrating AI into its strategic nuclear doctrine, space policy, and cyber warfare posture. Diplomatic efforts must project this triad as India’s doctrine of comprehensive sovereignty, ensuring global recognition of India’s rise as both a military and moral power.

9.8 Dual Role for India

By 2035, India must aspire to play a dual role:

  • As a sovereign defender – ensuring AI warriors guard the nation without external dependency.

  • As a global contributor – offering models of ethical, responsible autonomy to the international community.

This dual role aligns with India’s civilisational ethos of blending power with principle.

9.9 Custodian of the Global South

India’s dual role must extend beyond its borders. As the leading voice of the Global South, India must offer smaller nations an alternative to dependency on great power AI ecosystems. By exporting ethical, resilient, and sovereign AI models, India can elevate itself as the custodian of technological independence for emerging nations, thereby building diplomatic capital and strategic partnerships.

9.10 Vision Statement

The vision for 2035 can be summed up as: Trusted Indigenous AI Warriors — Ethical, Adaptive, Resilient. This vision will not only secure India’s sovereignty but also elevate it as a shaper of global norms in the age of autonomy.

9.11 A Tryst with Technological Destiny

Echoing Nehru’s historic words, India must frame its AI journey as a tryst with technological destiny. By 2035, trusted indigenous AI warriors must stand as sentinels not only of India’s sovereignty but also of humanity’s conscience. This is India’s chance to marry technology with dharma, power with principle, sovereignty with responsibility — and in doing so, persuade the world that AI warriors can defend borders without erasing humanity.

9.12 Transition to Next Discussion

With the roadmap outlined, the final section will present the Conclusion & Final Message — a reaffirmation of India’s path towards unmanned warriors that serve both national defence and humanity’s conscience.

Section 10 – Conclusion & Final Message

10.1 Reaffirming the Journey

This paper has traversed the rise of the unmanned warrior, the decisive role of AI in battlefield decision-making, the dilemmas of reliability and ethics, the sovereignty imperative of indigenous AI, doctrinal clarity, institutional frameworks, and global lessons. The central message is clear: the future of warfare belongs to those who command decision superiority, and decision superiority belongs to those who master AI.

10.2 The Unmanned Warrior Reimagined

The unmanned warrior is no longer a passive platform but a composite combat entity — fusing sensors, decision engines, and neutralisers into one continuum. It is AI that breathes cognition into these machines, transforming them from tools into warfighters capable of sensing, deciding, and striking at a tempo beyond human reach. This redefinition demands that India not only adopt such warriors but embed them within its doctrines, institutions, and ethos.

10.3 The Power and Peril of AI in Command

AI compresses the OODA loop into machine time, enabling commanders to dominate battles by sheer speed and precision. Yet, the same power holds peril: a single algorithmic error could escalate a skirmish into regional war. Therefore, India’s unmanned warriors must embody trustworthy AI — resilient under denial, accountable under doctrine, and ethical under dharma. Only then can machines accelerate without seizing control of war itself.

10.4 Sovereignty as Strategic Deterrence

Future deterrence will not rest on nuclear parity or troop numbers alone, but on the sovereignty of algorithms. A nation that rents its AI rents its command; a nation that owns its AI secures its destiny. For India, Atmanirbhar Bharat in AI is not industrial policy but existential strategy. Sovereign, indigenous, combat-hardened AI warriors will ensure that India’s command remains Indian — in code, in conscience, and in character.

10.5 Doctrine, Governance, and Global Responsibility

India’s responsibility is two-fold: to govern its AI warriors with doctrine and institutions at home, and to shape global norms abroad. The AI-Soldier Doctrine and AI-Battlefront Doctrine must bind autonomy within human authority. The NSCCSC and Tri-Service Cyber Command must institutionalise oversight. Internationally, India must project a model of responsible autonomy rooted in dharmic values — showing that ethics is not a restraint but a weapon of legitimacy.

10.6 India’s Unique Synthesis

From the U.S., India must imbibe speed with oversight; from China, integration without authoritarianism; from Israel, rapid feedback loops; from Russia, resilience-first design; and from Europe, ethics as strategic capital. Synthesising these lessons, India can craft its unique pathway: ethical, sovereign, resilient, and globally influential AI warriors.

10.7 The Way Forward – A Tryst with Technological Destiny

By 2035, India must stand as both sovereign defender and global custodian of ethical autonomy. Its AI warriors must defend borders, secure cyberspace, patrol the seas, and monitor the heavens — not as foreign-dependent machines but as trusted indigenous war fighters. This is India’s tryst with technological destiny: to ensure that machines fight with speed, but humans command with wisdom; that autonomy serves sovereignty, not supplants it; and that dharma guides even the age of algorithms.

10.8 Final Message

The unmanned warrior of the future is not merely steel and silicon; it is the living embodiment of a nation’s conscience. When machines decide, ethics must command. When algorithms act, sovereignty must prevail. When war accelerates to machine time, India’s destiny is to ensure that its AI warriors march under the tricolour — disciplined by doctrine, guided by dharma, and trusted by the world.

Contributed by Commander Prasad YVV, IN-Sr. Veteran

Founder and Managing Director of Prasad Consulting Hyd (India)