AI and the New Blueprint of Terrorism

Advanced violence is democratizing. AI, in conjunction with dramatic improvements in robotics, energy production, and sensors, will increasingly enable ever-smaller groups of people to use targeted violence more effectively, and from a distance. Over time, this shift will dramatically impact all varieties of force projection: state-on-state war, various forms of low-intensity conflict, and how states enforce internal order. 

Perhaps understandably, however, national security discourse about the AI revolution has generally focused on more earth-shattering scenarios: superintelligence, state-to-state conflict, and the prospect of unleashing new biological weapons. These are all critical questions that deserve extensive scrutiny. But super-empowering small groups of people will shift security dynamics in crucial, if less dramatic, ways as well. Non-state actors will use AI-backed tools to conduct relatively simple attacks using increasingly autonomous weapons. In this scenario, it will be the ability of AI-empowered weapons to deliver destruction discriminately, rather than at a catastrophic scale, that will be critical. 

There are structural reasons that AI’s impact on non-state violence is relatively under-examined. National security thinkers often default to analyzing interstate competition; the rise of China is the central geopolitical feature of the early 21st century; and AI labs, for reasons of both public and self-interest, often highlight such concerns. These companies understand how AI will shape the 21st century and so recognize genuinely critical issues. They also see the government as a customer and regulator — and understand the political utility of highlighting AI’s importance to core national security interests at a moment of emergent regulation and government investment. 

Finally, many of these sub-existential risks will be driven by freely-available open source and open weight models, which in general receive less scrutiny than those produced by large foundation model labs. Despite being less powerful than cutting-edge models, these tools are highly effective, freely available, can run locally on relatively simple devices, and are not governed by increasingly sophisticated security practices at foundation model labs. Such advantages will be catnip for violent non-state actors, just as they are for a variety of innovators in Silicon Valley and elsewhere. The tools of innovation can be used for good or ill. 

As the co-founder and chief strategy officer of Cinder, which provides safety infrastructure to leading AI companies, I have a commercial interest in policies and practices related to AI safety and misuse. But it is an issue I care about deeply on a personal level. Empowering terrorists is not the most profound risk associated with AI, but it should receive more attention than it gets. 

Policymakers in democratic capitals and leaders in Silicon Valley should still prioritize Western technological advantage and not limit core AI advances, but they should have a strategy to address the abuse of AI to facilitate targeted violence. That means establishing clear legal guidelines and preparing to harden potential targets. It also means that companies should monitor for sub-existential abuse of their models, and that firms that host open source and open weight models should account for these risks as best they can. These steps are imperfect. Like many other technologies, AI is fundamentally dual-use — we cannot unleash the creative benefits of this technology without accepting risk. The policy task, then, is not to eliminate risk. Rather, it is to mitigate it. 

A Multiplicity of Actors… and Types of AI

New training techniques and immense investments in computing power have produced more competent cutting-edge models that point toward what Anthropic CEO Dario Amodei calls “powerful AI.” Despite the hype over DeepSeek, Qwen, and other smaller models, advancing fundamental capabilities requires capital-heavy computational systems, and these are generally controlled by nation-states or large firms. Competition at the bleeding edge of technical capacity promises not just economic advances but militarily relevant ones — the ability to derive meaning from immense multi-modal sensor arrays, to encrypt and decrypt signals, and to coordinate swarms of autonomous drones, for example. These capabilities are a key reason that leading AI labs have proposed alliances to ensure that democracies control “powerful AI.” 

The AI revolution, however, is not limited to government institutions. Companies and individuals at all levels of society have access to previously unthinkable capabilities. Amodei rightly and persuasively celebrates this progress for its expected contributions to medicine, art, and economic development. He makes a compelling case for the value these new tools will offer society. But just as the internet unleashed a new era of personal, commercial, and artistic human engagement, it also gave al-Qaeda, neo-Nazis, and the self-proclaimed Islamic State new tools that they have used to terrible effect. The AI revolution will be similar. 

The ability to accurately deliver weapons on target from a distance is a powerful tactical advantage in combat. Technologically sophisticated militaries have long sought it. Terrorists have aimed to address their technological limitations via ambush and by emphasizing cultures that encourage self-sacrifice to deliver violence. Think of the cultivation of suicide bomber culture by jihadists and the celebration of “saints” by right-wing militants and their supporters. AI-empowered weapons, even those with initially limited capability, will offer terrorists new tactical options and different strategic narratives. 

Weapons capable of autonomous target selection, launch, and action have three fundamental practical problems: targeting imprecision; the opportunity costs of utilizing a complex, expensive system that fails to achieve its mission; and the political, social, and moral fallout from destroying the wrong targets. In general, these concerns will likely restrain states more than terrorist groups. 

Terrorist groups and other non-state militants have often (though not always) been willing to accept imprecise weapons and are consistently creative in the use of inexpensive munitions, largely because of their willingness to attack soft targets. This lower standard encourages adoption and experimentation and allows for cheaper munitions — factors that will be critical to deploying such weapons. With today’s technology, it is easy to imagine drones locally running open source vision models that can autonomously distinguish vehicles by size, or separate people based on skin tone, clothing, or gender. Such distinctions might not be sufficient to drive most military targeting choices, but they are enough for terrorist groups from a variety of ideological perspectives. 

Indeed, the willingness to accept imprecise targeting will also free non-state actors to use less sophisticated open models. These tools are unlikely to achieve the precision and capability of cutting-edge “powerful” models, but they are sufficient for a wide range of discrete tasks and can run locally on relatively simple autonomous or semi-autonomous systems. Such tools may not be ready to systematically shift the militarized Russian-Ukrainian battlefront, where concealment and countermeasures are common. But deployed in a civilian setting by an actor willing to target imprecisely, they could be disastrous. 

Another benefit of open models for terrorists is that they are easier to fine-tune for nefarious purposes. Although some open source developers take steps to limit their models’ susceptibility to abuse, others do not. And any open model is subject to fine-tuning in unpredictable ways. Meanwhile, leading model labs with closed models have taken steps to limit the abuse of their technology and to monitor efforts to manipulate their systems. Such steps are inevitably incomplete, but it is notable that these labs have moved to mitigate such dangers earlier in their lifecycle than their predecessors at social media companies did (who got there, but belatedly). Providers that host and deploy open models for customers also take steps to secure the models they host, but such efforts are less mature. 

Nation-states have innumerable advantages in developing and deploying core AI capabilities. But terrorists have key advantages of their own — lower targeting standards, demonstrated willingness to try to extract strategic benefits from middling tactical results, and a raw need to fundamentally change odds that are against them. So much of the discussion about AI’s geopolitical impact is focused, understandably, on the resources needed to develop this technology. But AI serviceable enough for many terrorist needs is becoming ubiquitous — and so the question becomes the factors necessary to adopt and deploy these tools. Terrorists are clearly weaker in the former, but at least in terms of operationalizing rudimentary versions of these tools, they are stronger in the latter. 

Given the globe-altering impacts we should expect from AI generally, it is tempting to imagine AI enabling tactical acts with extraordinary scale: novel biological attacks, breakthrough cyber incursions, or coordinated drone activity of mind-blowing scope. These are real issues. But the lesson from AI’s impact on coding is not simply that it can transform strong teams into extraordinary ones or good engineers into great ones — it is also that it can turn mediocre engineers into good ones and non-engineers into builders. Raising the capability floor is as revolutionary as blowing away the ceiling. 

AI is lowering the bar for non-technical and relatively uncommitted people to commit acts of technological creation. This is going to produce all sorts of messy outcomes, many of them wonderful. But it is also going to lower the bar necessary to commit an act of targeted violence at a distance. 

Terrorism rests on the presumption that relatively targeted violence can have outsized political and social impacts. It is possible that AI will shift that logic entirely by giving smaller groups the ability to conduct massive acts of violence. We should prepare for that possibility — AI companies are right to highlight these concerns. But it will also give those small groups new ways to execute targeted attacks in service of those older strategic concepts. This is less conceptually dramatic, but that does not mean the risk is small. 

So, What to Do?

Terrorists are sometimes credited as early adopters. In 2007, my colleagues and I at West Point’s Combating Terrorism Center wondered why al-Qaeda and its associates were not aggressively adopting the new tools of social media. They continued to prioritize more traditional web forums even as it was clear that the social revolution was imminent. Three years later, that adoption was happening in earnest — and today even the question seems quaint. In my experience, terrorist groups are rarely as forward-leaning as American college students or innovators in Silicon Valley, but they do outpace members of Congress and large government bureaucracies. The AI revolution is not emerging first among militants, but an adoption delay is no reason to think it will not manifest. 

Conflict in Ukraine and the Levant suggests that autonomous force projection is still a work in progress. But we are on the cusp of a dramatic shift. Distributed systems are ever-more autonomous, China is using AI to instrumentalize mass domestic surveillance, and agentic systems promise to disrupt economies generally. War, power, and force will forever be human endeavors that extract blood, sweat, and tears, but there is no reason to believe organized violence will be exempt from general trends in an increasingly automated global society. 

Perhaps most importantly from a policy perspective, the danger of terrorists abusing AI should be balanced against other, greater risks — falling behind technologically relative to geopolitical competitors, enabling would-be autocrats, unleashing models with something like independent agency and access to dangerous tools, novel biological weapons, etc. For these purposes, that means building a set of restrictions on how models are used, not how they are trained; defending against simple autonomous weapons; and preparing generally for sub-existential threats. It also means acknowledging and addressing the safety issues around open source and open weight models more aggressively. 

For starters, it is important to clarify that misuse of AI to conduct violence is a crime. Congress should update statutes to indicate that deliberately fine-tuning models to facilitate violence constitutes premeditation, that doing so on behalf of a designated Foreign Terrorist Organization constitutes material support, and that utilizing models to target people based on protected characteristics can lead to federal hate crime charges and sentencing enhancements. Case law is not sufficient. Congress should act as a matter of clarity. Such laws should not be designed or interpreted to increase the liability of the people or companies that develop foundational models. The purpose should be to punish those who intentionally fine-tune such models for nefarious purposes. 

Nonetheless, such attacks are inevitable. Indeed, I think they will likely happen within the next few years. Though AI will be critical for autonomous or semi-autonomous attacks, the delivery mechanisms are likely to be drones of various kinds. Both militaries and terrorists worldwide are watching conflicts in Ukraine and elsewhere for lessons on the use and disruption of drones, which include a variety of relatively low-tech risk mitigations. Likely targets should adopt some of these techniques, including electronic countermeasures and subtle physical barriers. 

My focus here is on AI’s contribution to violence rather than the production of propaganda and disinformation. But such use is occurring and will inevitably grow. As a practical matter, model companies, like social media firms, will be a critical first line of defense against such abuse — and should build and enforce usage rules to limit it. But open models will almost certainly be capable of serving terrorist propaganda and disinformation needs, regardless of actions taken by the larger labs. Frustrating as it may seem, the best policy response will likely be limiting distribution on social platforms and prosecuting creators when they are actively enabling terrorist groups. 

Indeed, open models will provide terrorists tremendous opportunities. Limiting the abuse of these models will be extremely difficult. But researchers should include open models in cross-cutting evaluations of model risk. And the companies that host open models should build the sort of sophisticated analytical teams that track abusive usage patterns at leading foundation model labs. This will not prevent terrorists from running models on their own infrastructure, but it may limit iteration and increase friction for dangerous actors.

Terrorists do not always seek to maximize violence — indeed, many groups endeavor to calibrate it carefully. This should impact how we assess the threat of terrorists abusing AI. AI labs rightly focus on the risk of AI developing novel biological threats, and they should continue work to limit the danger of massively destructive human pathogens. But biological weapons require more than a dangerous agent — they require fabrication, storage, and distribution. Some novel agents might mitigate the challenges associated with those steps. But it is also reasonable to presume that terrorists may use biology to disrupt society rather than destroy it, and to mitigate the risk to themselves while they do so. If so, foundation labs should consider the dangers of biological agents that target livestock or crops. Such agents are easier to handle and distribute once produced and could dramatically disrupt the agricultural economy. 

The bulk of this article was written in late 2024, prompted by Amodei’s essay “Machines of Loving Grace.” Though it offers a mostly upbeat vision for AI, the essay also described the broad, civilizational risks that Amodei anticipates from “powerful AI” and the geopolitical importance of maintaining America’s leadership in AI development. At the time, I worried that he understated the risks of more banal AI-driven threats, particularly regarding terrorism and potential authoritarian drift. Amodei has more recently published a sequel entitled “The Adolescence of Technology,” which remains focused on acts of massive violence but is clear-eyed that AI offers a new set of cross-cutting tools — and specifically notes terrorism and potential authoritarianism. 

Nonetheless, Amodei’s approach reflects a broader disconnect between Silicon Valley and many outside of it. Many technologists center new technology itself as the primary instrument of strategic impact on national security and geopolitics. The mechanisms of impact and disruption are tied directly to the vast scope of violence newly possible. There is truth in this worldview — one need only examine the strategic shift wrought by nuclear weapons to see it. 

At the same time, many others — both pro- and anti-social — think about how to create dramatic social and political change by applying targeted pressure at strategic points. For them, new technology need not fundamentally alter the scale of violence that can be applied in the world — it must simply unlock novel tactical opportunities that can be applied in operations that ultimately produce strategic change. There is truth in this worldview as well — one need only examine the vast shifts wrought by the AK-47 to see it.

Modern AI creates both kinds of dangers. It simultaneously raises the prospect of violence on a nearly unthinkable scale while also creating new tactical opportunities to apply narrowly targeted violence. Just as importantly, it will raise the capabilities of actors ranging from sophisticated militaries to lone wolves. All of these dynamics demand our attention. AI increasingly has its own agency, both in the sense that agents are able to make judgments formerly limited to human beings and in the sense that these new capabilities will reshuffle all human choices, including those around political violence. But it is also true that human beings maintain agency of their own to pursue political and ideological ends, sometimes using violence. Those strategies will continue to exist in a world of AI-enabled tactics.

Brian Fishman is the co-founder and chief strategy officer of Cinder, which provides mission-critical infrastructure that empowers leading businesses, including various foundation model labs producing closed and open models, to fortify their products, users, and brands against abuse, manipulation, and junk. He previously led Facebook’s work to counter dangerous organizations, including terrorists, hate groups, and cartels. Prior to that, Fishman was the director of research at West Point’s Combating Terrorism Center and a fellow at New America. He is the author of The Master Plan: ISIS, al-Qaeda, and the Jihadi Strategy for Final Victory (Yale University Press, 2016).

Image: Midjourney
