
Artificial Intelligence and the Future of Warfare
Scholars who will tell the story of the future – whether they are flesh-and-blood humans or artificial intelligences, we do not know – cannot fail to record the date 5 March 2025 in their annals. On that day, the military adoption of advanced technologies such as artificial intelligence reached a new milestone and opened up unexpected scenarios for tomorrow’s wars.
On that day, in fact, the Pentagon awarded the Thunderforge project to the company Scale AI, with the aim of developing an advanced AI system to optimise military planning, especially in operations in Europe and the Indo-Pacific region. This agreement marks a crucial moment in the evolution of modern warfare, where artificial intelligence is no longer relegated to logistical support or data analysis, but is becoming a central player in military strategy, potentially capable of making autonomous decisions similar to those of a commander on the battlefield.
Integrating AI into military planning
The Thunderforge project is based on advanced language models and interactive war simulations, with the aim of improving the ability to make rapid decisions in battle scenarios. Information provided by the Defense Innovation Unit, the US agency overseeing the project, outlines a system capable of anticipating threats, testing battle scenarios and allocating strategic resources on a global scale. At the same time, collaborations with companies such as Anduril and Microsoft take military AI to the next level by integrating advanced data-collection and predictive-analysis technologies. Anduril, for example, will provide its Lattice system, used to analyse data from drones and sensors, while Microsoft will contribute language models that can enhance Thunderforge’s decision-making capabilities. Artificial intelligence is no longer mere support, but an autonomous operator capable of managing complex dynamics in real time, such as monitoring Chinese activities in the Pacific, as emphasised by Admiral Sam Paparo, who commands US operations in that region.
Drones and autonomous weapons reshape modern warfare
Strategic planning is just one of the applications of AI in warfare. Autonomous drones, advanced surveillance systems and weapons that identify and strike targets without human supervision are already changing the face of warfare.
In the war in Ukraine, for example, AI-equipped drones have been used by both the Russian and Ukrainian armies to identify and attack enemy positions. Meanwhile – some 2,000 km further south – Israel has employed AI algorithms to select targets in bombing raids in Gaza and Lebanon.
This is not just a step towards automation, but a genuine revolution in how wars are fought. Decisions are no longer made by humans but by algorithms analysing huge amounts of data, making warfare faster and more precise, but also much more difficult to control.
The integration of AI into war operations introduces a new type of warfare, automatic warfare, in which the boundary between strategic decision and immediate action blurs. A bombing raid that would once have required careful evaluation by a commander can now be carried out by an autonomous drone in seconds, based on a selection of targets made by an algorithm, with no possibility of moral or ethical evaluation. This speed and precision raise questions not only about responsibility, but also about the human impact of such choices.
UN Resolution 79/L.77 and the race for autonomous weapons
At the international level, the risks of using autonomous weapons have been recognised, but progress towards binding regulation is slow and resisted. On 2 December 2024, the UN General Assembly passed Resolution 79/L.77 on Lethal Autonomous Weapons Systems, with 166 votes in favour, 3 against (Belarus, Russia and North Korea) and 15 abstentions, including China and Israel. The document, while recognising the risks of AI applied in the military context, does not impose a binding ban; it invites member states to take part in informal consultations during 2025 to explore possible solutions. With a consistency that is questionable at best, the United States voted in favour but reiterated its intention to continue developing AI-based warfare technologies, revealing its resistance to any regulation that could limit its competitive advantage.
This reticence towards stricter regulation is not without reason. The fear of losing technological dominance in a world increasingly dependent on AI superiority is one of the main reasons why the major military powers, including the United States, are reluctant to embrace measures that could slow their progress. The risk, as I pointed out right here on Tech Economy 2030, is that an AI-based arms race could destabilise the entire global security system, lowering the threshold for attack and making war not only more frequent but also bloodier.
From Mutual Assured Destruction to Mutual Assured AI Malfunction
One of the most influential players in the defence AI landscape is Eric Schmidt, former CEO of Google and a leading advocate of the militarisation of AI. To the surprise of more superficial analysts, Schmidt himself recently expressed serious and justified concerns about the risk of an uncontrolled race towards superintelligence. In the report ‘Superintelligence Strategy’, written together with Dan Hendrycks and Alexandr Wang, Schmidt warned that uncontrolled competition to develop military superintelligence could trigger a spiral of international tensions, with rival countries reacting with drastic measures, including preventive actions such as large-scale cyber attacks.
In response to these risks, Schmidt proposed the strategy of ‘mutual assured AI malfunction’ (MAIM), a form of deterrence similar to the doctrine of ‘mutual assured destruction’. The latter was the cornerstone on which the balance of terror rested throughout the Cold War: both the US and the USSR were able, with their atomic arsenals, to level the entire globe, and neither could neutralise the enemy by preventing it from responding lethally to a surprise attack. Neither, therefore, could launch a pre-emptive strike, because in every case the entire planet would have been turned into a vitrified ball of rock.
In the near future, however, according to the new model of mutually assured AI malfunction, the threat of disabling rival AI technologies through cyberattacks and other preventive measures, such as embargoes on the export of microchips and other sophisticated computing technologies to adversary countries, would be sufficient to deter states from developing military superintelligences. Although the proposal aims to reduce the risk of escalation, the model also implies a continued militarisation of technology, bringing the world closer to the creation of a new ‘ultimate weapon’ in the context of automatic warfare.
Ethical implications in the future of war
As technology advances, the ethical and moral implications become increasingly central. Autonomous drones and weapons operating without human supervision might seem a natural progression in the warfare of the future, but they raise fundamental questions. Can automated warfare, without direct human intervention, reduce suffering and collateral damage? Can AI systems reliably distinguish between military and civilian targets? The answer to these questions is far from obvious, and the failure to regulate these developments risks uncontrollable escalation.
In January 2016 – when artificial intelligence was still in its prehistory – I pointed out in MIT Technology Review that the development of autonomous weapons would increase the likelihood of devastating wars and multiply civilian casualties. Even if the one who decides to pull the trigger is not human, war itself cannot be dehumanised.
Whoever is behind the crosshairs, human or synthetic, the victims will always be human, and an automated war will never be less cruel or devastating. Despite its technological potential, a war waged by machines risks becoming not only bloodier but also more uncontrollable. A system whose design includes no ethical or moral awareness cannot be considered a ‘solution’ to conflicts.
The urgency of international regulation
Automated warfare is an ever closer reality, and the risks it entails are there for all to see. The world powers, while recognising the dangers of uncontrolled robotic armies, are reluctant to stop for fear of losing their competitive edge. However, as the Superintelligence Strategy report warns and as UN Resolution 79/L.77 acknowledges, an AI-based arms race risks making the destabilisation of the international system inevitable. A war fought by machines, devoid of human oversight, will not lead to peace, but to the proliferation of violence.
It is essential that the international community acts now, regulating the use of AI in military operations and ensuring that technological evolution does not lead to a future of automatic, devastating and uncontrollable wars. The challenge is enormous, but we cannot afford to ignore it. Our collective responsibility is to protect peace and human rights by preventing the digital transition in warfare from being used for limitless destructive purposes.