Sherry Chen
“The motion of the Tao is reversal. The use of the Tao is weakness” (反者道之动,弱者道之用), wrote Laozi in the Daodejing, suggesting that order often arises from its counterbalance. In warfare, the principle still holds: peace is rarely won by goodwill alone; it is stabilized by a credible capacity for destruction.
Nuclear weapons never eliminated great-power competition, but they have prevented another world war. Similarly, AI-powered systems and autonomous warfare may not deliver moral progress, but they offer something more pragmatic: the chance to shift the battlefield from flesh to code, from killing soldiers to disabling systems. As strategic costs rise, technology may turn war into a contest of networks rather than body counts.
The evolution of warfare traces a predictable arc: from man-to-man, where soldiers confront each other directly and at personal risk; to tech-to-man, where machines and algorithms locate and kill human targets with precision and at a distance; and now, increasingly, to tech-to-tech, where autonomous systems and AI-driven weapons battle each other to disable networks and neutralize capabilities without immediate human casualties. The future of combat may be decided not by who dies first, but by whose system collapses first: whose signal is jammed, whose drones are blinded, whose decision loop is broken. It will be the high point of technical rationality, though not necessarily of human morality.
China’s strategic behavior in recent decades reflects a uniquely structured version of this shift. Since the 1979 border war with Vietnam, China has avoided initiating large-scale wars. But that does not mean it has remained passive. From conducting live-fire drills around Taiwan, to patrolling and building islands in the South China Sea, to engaging in skirmishes in the Himalayas, China has kept its military assertiveness below the formal threshold of war. Its reliance on lawfare, coercive presence, administrative declarations, and gray-zone pressure amounts to a system of conflict without combat, in which disputes are pushed through regulations, civilian maritime forces, and information control rather than bombs and bullets.
This is China’s version of technology-to-technology warfare. It uses coast guard cutters instead of warships, maps instead of missiles, and noise jamming instead of firepower. The objective is to exhaust and confuse adversaries without inviting retaliation or international backlash. It is not pacifism, but cost-optimized coercion.
Other powers are also building their own versions of this paradigm. Israel’s Iron Dome intercepts low-cost rockets with algorithmic precision. In Ukraine, electronic warfare, drone swarms, and real-time signal disruption define key battles. The U.S. military is pursuing JADC2 (Joint All-Domain Command and Control), aiming to integrate sensors, AI, and weapon platforms into a single decision-making web. These developments all point toward one conclusion: the future of war will be decided not by manpower but by system superiority.
Yet even in this age of systems, humans still die. Why? Because killing people is often cheaper, faster, and more visibly effective than crashing networks. Killing a commander grabs more headlines than severing his communications. Politicians need visible victories. And many countries have yet to outgrow the archaic logic that sacrifice demonstrates strength. Meanwhile, international norms on lethal autonomous weapons remain weak or nonexistent.
The transition toward tech-to-tech warfare will not rely on hope or morality, but on profitability and efficiency. That means:
1. Redirecting defense budgets toward jamming, intercepts, and electronic warfare
The first step in shifting from “tech-to-man” to “tech-to-tech” warfare lies in how nations allocate their defense resources. Rather than continuing to invest disproportionately in kinetic platforms—manned bombers, infantry equipment, armored divisions—states should redirect budgets toward technologies that neutralize enemy systems without targeting personnel. This means investing in jamming platforms, spoofing tools, and directed-energy interceptors like Israel’s Iron Beam or the U.S. HELWS—systems that disable threats without explosions. Electronic warfare is now central to deterrence: AI drone swarms can blind radar and scramble signals, as seen in Ukraine, where both sides disrupt GPS and contest the electromagnetic spectrum—a glimpse of future warfare.
This shift is especially urgent in contested maritime zones like the South China Sea, where China routinely deploys jamming vessels, radar decoys, and civil-military fusion platforms to dominate the information environment and obstruct foreign ISR (intelligence, surveillance, and reconnaissance). The United States and its allies can counter not with more ships but with signal supremacy: deploying electronic-warfare-capable drones, distributed sensor networks, and mobile intercept platforms that can detect, deceive, and deny. In a theater where conflict often remains below the threshold of open war, superiority in the electromagnetic spectrum could determine not just who shoots first, but whether anyone dares to shoot at all.
2. Rewriting rules of engagement to prioritize system disruption over personnel elimination
Military doctrines and rules of engagement must evolve to reflect a new hierarchy of objectives: disabling the enemy’s systems should take precedence over destroying their forces. This shift would reframe tactical success away from body counts and toward metrics like radar blackout duration, the interruption of communications, or the downing of uncrewed aerial vehicles. For example, a battlefield commander might be authorized to deploy an EMP device to disable an adversary’s drone control system before being permitted to fire on a manned vehicle. Simulations and training exercises could adopt new standards in which mission success is scored by non-lethal outcomes: jammed signals, severed data links, or broken logistical chains. These scenarios are already being tested in select NATO cyber drills and Japan’s “silent interception” exercises. But unless such rules are institutionalized, even the best non-lethal tools will default to conventional escalation in moments of uncertainty. In the South China Sea, Chinese forces regularly use radar spoofing, underwater sensors, and maritime militia vessels to assert presence and deny access without firing a shot; the United States should match this logic, favoring disruption over destruction.
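To make this hierarchy concrete, consider how it might be encoded as an authorization gate. The following is a minimal illustrative sketch, not drawn from any real doctrine or system; every name and category in it (EngagementRequest, NON_LETHAL_OPTIONS, and so on) is hypothetical. It simply enforces the ordering described above: disruption options first, lethal force only by explicit human escalation.

```python
# Illustrative sketch only: a hypothetical rules-of-engagement gate that
# requires system-disruption options to be exhausted before lethal force.
# All names and categories here are invented for illustration.

from dataclasses import dataclass, field

# Non-lethal effects, ordered from least to most escalatory (assumed ordering).
NON_LETHAL_OPTIONS = ["jam_communications", "spoof_sensors", "emp_disable"]

@dataclass
class EngagementRequest:
    target: str                                    # e.g. "drone_control_node"
    lethal: bool                                   # does the action risk casualties?
    attempted: list = field(default_factory=list)  # non-lethal options already tried

def authorize(request: EngagementRequest) -> str:
    """Return the action the rules of engagement permit next."""
    # Non-lethal actions against systems are always permitted.
    if not request.lethal:
        return "approved"
    # Lethal force is gated: every available disruption option must have
    # been attempted (and failed) first.
    untried = [opt for opt in NON_LETHAL_OPTIONS if opt not in request.attempted]
    if untried:
        return f"denied: attempt '{untried[0]}' first"
    # All disruption options exhausted; escalate to a human commander.
    return "escalate: human authorization required"

if __name__ == "__main__":
    req = EngagementRequest(target="drone_control_node", lethal=True,
                            attempted=["jam_communications"])
    print(authorize(req))  # denied: attempt 'spoof_sensors' first
```

The value of such a gate lies not in the code but in the default it creates: disruption becomes the path of least resistance, and lethality becomes the exception that demands justification.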
3. Launching international agreements to regulate autonomous weapons and preserve human-in-the-loop standards
While a full global ban on AI weapons may be politically out of reach, smaller steps toward regulation are both necessary and achievable. One option is to establish a System Disarmament Accord that prohibits the deployment of fully autonomous weapons without real-time human oversight. Another is to create a transparent UN register of military AI capabilities, modeled on existing arms transparency initiatives, so that states are held accountable for their development trajectories. Bilateral or minilateral agreements, particularly among AI front-runners like the United States, China, and Israel, could help establish red lines on the use of autonomous systems in gray-zone operations, such as EEZ patrols or cyberattacks on civilian infrastructure. Rather than waiting for a Geneva moment after a tragedy, these norms should be drafted now, while the costs of restraint are still lower than the costs of a crisis.
4. Embedding system-prioritized targeting into military AI architecture and decision-making protocols
Rather than trusting that ethical judgment will prevail in wartime, militaries should design AI targeting systems that default to non-lethal priorities. This means building algorithms that target command centers, fuel depots, radar arrays, or communications nodes before identifying enemy personnel. AI combat systems can be trained to detect vulnerable infrastructure points as primary objectives, while “digital firebreaks” can require human confirmation for any escalation beyond disabling strikes. Target-prioritization matrices—already used in missile defense systems—can be refined to emphasize system paralysis over physical destruction. Furthermore, procurement contracts for new platforms could mandate the inclusion of non-lethal options, such as electromagnetic pulses or communications scramblers, as standard loadouts. Designing systems this way ensures that systemic disruption is the norm, not the exception.
In the Indo-Pacific, this design approach is essential. A U.S. drone patrolling near disputed reefs in the South China Sea, for example, could be configured to disrupt Chinese maritime militia communications or disable forward radar sites without resorting to lethal measures.
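As a purely hypothetical sketch of what such a system-first targeting default could look like in software (the categories, scores, and firebreak flag below are invented for illustration, not taken from any real system), a prioritization routine might rank infrastructure above personnel and withhold lethal recommendations unless a human has explicitly confirmed escalation:

```python
# Hypothetical sketch of a system-first target-prioritization matrix.
# Categories, scores, and the "digital firebreak" flag are invented for
# illustration; no real targeting doctrine or system is represented.

# Higher score = higher priority. Infrastructure outranks personnel by design.
PRIORITY_MATRIX = {
    "command_center":      100,
    "communications_node":  90,
    "radar_array":          80,
    "fuel_depot":           70,
    "personnel":            10,   # deliberately last
}

LETHAL_CATEGORIES = {"personnel"}

def rank_targets(detected, human_confirmed_lethal=False):
    """Order detected targets system-first; exclude lethal targets unless a
    human has explicitly confirmed escalation (the 'digital firebreak')."""
    allowed = [t for t in detected
               if t not in LETHAL_CATEGORIES or human_confirmed_lethal]
    return sorted(allowed, key=lambda t: PRIORITY_MATRIX.get(t, 0), reverse=True)

if __name__ == "__main__":
    detected = ["personnel", "radar_array", "communications_node"]
    print(rank_targets(detected))
    # ['communications_node', 'radar_array']  (personnel filtered out)
    print(rank_targets(detected, human_confirmed_lethal=True))
    # ['communications_node', 'radar_array', 'personnel']
```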
5. Redesigning military exercises to reward disruption, not destruction
Finally, to change doctrine and culture, militaries must train for a new kind of victory. In U.S. exercises like Project Convergence, rather than measuring success by how many simulated enemy “soldiers” are killed, victory should be defined by how long adversary drones are blinded, how quickly satellite communications are severed, or how effectively the opposing force is digitally isolated. NATO’s Steadfast Defender series and Pacific allies’ joint drills could incorporate similar “system warfare” objectives, simulating maritime or airspace contests where the goal is to degrade the opponent’s ISR chain without escalating to hot conflict. Civilian tech firms, such as Palantir, Anduril, or cognitive electronic warfare startups, should be integrated into these war games to model how real-time data, software exploits, and autonomous swarms behave in contested conditions. When militaries begin scoring victories on precision, control, and disruption rather than destruction, they will naturally gravitate toward doctrines that favor technological suppression over human sacrifice.
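One way such scoring could be operationalized, sketched here with entirely invented weights and metric names, is to award points for disruption effects and subtract them for simulated casualties:

```python
# Illustrative exercise-scoring sketch: rewards disruption, penalizes
# simulated casualties. All weights and metric names are invented.

DISRUPTION_WEIGHTS = {
    "radar_blackout_minutes": 2.0,   # per minute the adversary radar is blind
    "comms_links_severed":    5.0,   # per data link cut
    "drones_blinded":         3.0,   # per UAV denied its sensors
    "satcom_outage_minutes":  4.0,   # per minute of satellite comms loss
}
CASUALTY_PENALTY = 25.0              # per simulated casualty inflicted

def exercise_score(results: dict) -> float:
    """Score a war-game run: disruption earns points, destruction costs them."""
    score = sum(DISRUPTION_WEIGHTS[k] * results.get(k, 0)
                for k in DISRUPTION_WEIGHTS)
    return score - CASUALTY_PENALTY * results.get("simulated_casualties", 0)

if __name__ == "__main__":
    run = {"radar_blackout_minutes": 45, "comms_links_severed": 6,
           "drones_blinded": 12, "simulated_casualties": 3}
    print(exercise_score(run))  # 90 + 30 + 36 - 75 = 81.0
```

Under a rubric like this, a team that blinds the opposing force outscores one that racks up kills, which is precisely the doctrinal incentive such exercises are meant to create.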
Humanity rarely acts wisely out of virtue. But it often chooses stability when destruction becomes too expensive. This is not moral progress but strategic logic, born of survival pressure rather than compassion. As in the nuclear age, so in the AI age: war will not be avoided because human morality improves, but it may be avoided because its costs become unbearable.
Reshaping warfare starts with recalculation. When enough nations run the numbers and realize that breaking systems is more effective than breaking bodies, the age of tech-to-man warfare may give way, if only temporarily, to a new balance: a world where deterrence is high, casualties are low, and victory belongs to the side with the faster, smarter, and more resilient systems.
Sherry Chen is a Research Associate at South China Sea NewsWire (SCSNW), a regional news and analysis platform focused on political, economic, and strategic developments in the South China Sea, and is currently studying in a dual program between Columbia University and Sciences Po Paris.