The U.S. Military Risks Letting Contractors Define How It Sees the Battlefield

In the 1964 black comedy Dr. Strangelove, an emergency war plan called “Plan R” allows an unhinged U.S. Air Force commander, Jack Ripper, to launch a nuclear strike without presidential authorization. Once the president, the joint chiefs, and the Soviet ambassador convene in the war room, the bombers are already airborne. Only Ripper knows the three-letter prefix needed to recall them, until his aide, Lionel Mandrake, reconstructs it from Ripper’s notes. Although nearly all planes are turned back, one damaged B-52 cannot receive the recall message and successfully drops its bomb, triggering the Soviets’ secret doomsday machine and bringing about global destruction.

The film’s lesson is not only about nuclear weapons, but also about what happens when critical systems are not governed effectively. Today, a version of that failure is playing out across the U.S. military in the governance of its integrated command platforms. The definitional layer of these platforms, which, for example, defines threat levels or escalation thresholds, is the proprietary intellectual property of contractors, ungoverned by an institutional process, and subject to change without notice.

Full disclosure: I’m the founder and CEO of Mind-Alliance Systems, which provides intelligence and wargaming solutions to national security clients. As such, I have an interest in defense contracting outcomes. But my work has led me to become deeply concerned about our government’s dependence on integrated command platforms it neither owns nor actively governs, because they can result in poorly informed decisions. Ultimately, a military that cannot govern its own categories cannot fight at machine speed.

Some will point to the modular open systems approach as the solution to this problem. While this approach is essential, it is not enough. It governs the technical interface layer, but not the semantic layer, thereby enabling vendors to retain control over the ontologies. While the U.S. military may realistically have to accept some level of vendor lock-in on these commercial platforms, it should refuse to outsource the definitions that govern how those platforms see the world, and it should maintain the authority to alter those definitions before vendors or agentic AI change them faster than any review process can track.

How the Map Takes Control

An integrated command platform fuses data from the military and intelligence enterprise into interfaces, enabling users to enhance situational awareness, assess options, and make decisions. Today’s integrated command platforms, like Palantir’s Gotham and the Pentagon’s Joint All-Domain Command and Control, pull in hundreds of millions of data points daily, from operational databases to satellite imagery to signals intelligence, across classification levels. The way these systems model an increasingly complex operating environment structures the decision environment in every major combatant command.

These systems depend on ontologies: the structured frameworks of concepts, categories, and their relationships that define what constitutes a threat, what “readiness” means, and where the escalation thresholds lie. In this article, “map” refers to the combination of ontologies and the data they structure, which together form an integrated command platform’s picture of the world. A system uses this map to transform data into information that operators care about (e.g., “this unit is ready,” “this pattern of movements is a crisis”). Whoever controls that map controls the situations the system both notices and ignores, as well as the prioritization of options that leaders see.
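
To make the stakes concrete, consider a minimal sketch of such a map. The categories, thresholds, and field names below are invented for illustration, not drawn from any real platform; the point is that whoever sets the constants decides what operators see:

```python
from dataclasses import dataclass

# Hypothetical definitional layer: these thresholds are illustrative
# assumptions, not real doctrine. Whoever controls them controls what
# the system reports as "ready" or as a "crisis."
READINESS_THRESHOLD = 0.80   # minimum fill rate to count as "ready"
CRISIS_MOVEMENT_COUNT = 3    # unusual force movements within the window

@dataclass
class UnitStatus:
    personnel_fill: float    # 0.0 to 1.0
    equipment_fill: float    # 0.0 to 1.0

def assess_readiness(unit: UnitStatus) -> str:
    """Map raw fill rates onto the category an operator actually sees."""
    score = min(unit.personnel_fill, unit.equipment_fill)
    return "ready" if score >= READINESS_THRESHOLD else "not ready"

def assess_crisis(movement_events: list[str]) -> str:
    """A different threshold choice here changes what counts as a crisis."""
    return "crisis" if len(movement_events) >= CRISIS_MOVEMENT_COUNT else "routine"

unit = UnitStatus(personnel_fill=0.82, equipment_fill=0.78)
print(assess_readiness(unit))  # "not ready" at 0.80; "ready" if the vendor quietly lowers it to 0.75
```

Shift READINESS_THRESHOLD by five points and the same unit flips categories, without any operator seeing a change.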

The U.S. government doesn’t have “representational sovereignty” over its maps, which simply means owning and governing the authoritative ontologies and integrated command platforms on which it relies. Through a series of individually innocuous contracts and licensing arrangements, the definitional layer — the part that determines what “threat,” “readiness,” and “escalation” mean within these systems — has been ceded to vendors as proprietary intellectual property.

Once a platform flags a threat, these categories begin to appear in briefings, influencing command decisions and turning vendors’ ontological choices into de facto doctrine. Even with experienced officials in the room, the most consequential framing decisions that constrain how analysts and leaders think have already been made by people who aren’t there.

If vendors hired experts to inform that ontology, it would be one thing. But often, the categories commanders rely on in a crisis are controlled by software engineers who, while smart and able to learn on the job, lack operational or regional expertise. Crisis management scholars have spent decades refining taxonomies as part of the International Crisis Behavior project and the Militarized Interstate Disputes dataset. A vendor’s engineers, however, may improvise categories to fit application needs rather than drawing on the work of crisis scholars or doctrine specialists and testing the system’s conceptual models against the best available research on how real crises actually unfold. These engineers carry no doctrinal accountability for the ontological choices or changes they make.

During a crisis, leaders are unlikely to question the categories embedded in their software. And if they do, it might be too late.

When the map produces a surprising output, the instinct is to search for a technical explanation, such as a misconfigured parameter or a bad data feed. The underlying model, which is run by nonexperts and not grounded in decades of conflict research, is rarely questioned. In my experience, most senior leaders assume that the platform’s categories reflect published doctrine. Few have ever seen the underlying ontology, and fewer still know who wrote it. Officials debate options and weigh tradeoffs, but the conversation rarely escapes the system’s framework. No one demands a second model built on different assumptions, even when the system produces questionable results.

This also results in a significant loss for the institution, which never goes through the disciplined process of deciding what its own concepts mean. When operators, intelligence officers, and doctrine specialists argue about whether “pre-crisis signaling” is a distinct category or a subset of “gray-zone activity,” institutional clarity is built. Outsourcing the ontology outsources that learning. Consider a platform whose “pre-crisis signaling” threshold was calibrated to one theater and one adversary. When a different adversary moves forces in ways an experienced regional analyst immediately recognizes as preparatory, the system reads green. The analyst reads amber. And because the categories are proprietary, she has no way to show the commander exactly where the model’s assumptions diverge from her judgment.

Organizational research indicates that corporations destroy strategic advantage when they outsource core competencies, which are the distinctive capabilities that cut across products and provide access to multiple markets. Government is no different. Defining what counts as a threat, what “readiness” means operationally, and where escalation thresholds are found is a core competency of command. It cuts across every platform and every combatant command and it shows up in every crisis. A nation that cannot define “escalation” in its own systems cannot fully control its own escalation decisions.

The Problem Is Governance. AI Is Making It Urgent

The modular open systems approach, now required by law for major defense acquisition programs, is often treated as the solution to this problem. The approach is a genuine advance: It mandates modular design, open interface standards, and machine‑readable interface definitions, and it gives the government stronger data rights to avoid component‑level vendor lock‑in. Yet the approach governs the technical interface layer, specifically how systems plug together and exchange data, but not the definitional, ontological layer. Two platforms can be fully compliant and interoperable at every application programming interface boundary, yet still run on vendor‑proprietary ontologies that no program office owns.

To be clear, this is not because the modular open systems approach was poorly designed, but because ontology governance is a distinct, parallel institutional framework. Under the National Defense Authorization Act for Fiscal Year 2026, the Department of Defense now has a statutorily chartered ontology governance structure under the chief digital officer to establish baseline ontological standards across the department. However, that framework remains nascent, and the command platforms that most urgently need governing definitions have yet to be integrated into this emerging structure. The gap between a modular open systems interface architecture and the chief digital officer’s ontology governance remains a critical vulnerability for decision systems.

AI is accelerating this problem: many of these vendors now use AI to write code and are incorporating AI into their applications.

Agentic AI systems are already further modifying existing definitional layers. When an AI coding assistant refactors a codebase, it may rename categories, merge threshold values, or restructure classification logic to improve code consistency without recognizing that those labels carry doctrinal meaning. Automated data pipelines can reclassify entities based on pattern-matching rather than on doctrine. Each change is small, but cumulatively they rewrite the map.

The map these systems produce may be syntactically coherent while being strategically incoherent. New categories (“gray-zone preparation,” “pre-crisis cyber shaping”), new thresholds, and new scoring logic are introduced faster than doctrine can be updated or review boards can spot the implications. Model drift gradually takes place as application-level rules, thresholds, and ad hoc categories are changed or added without institutional review.

Without explicit version control and audit trails, the map will drift in real time, reshaping how leaders frame crises without anyone deciding that it should. And when AI operates on ambiguously defined terms and categories, it will produce confident yet misleading answers.
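
What explicit version control might look like in practice is sketched below. The change-log fields are hypothetical, but the idea is that no edit to the definitional layer, whether by a human or an AI agent, lands without a fingerprint and a recorded justification:

```python
import hashlib
import json
from datetime import datetime, timezone

def fingerprint(ontology: dict) -> str:
    """Stable hash of the definitional layer, so any change is detectable."""
    canonical = json.dumps(ontology, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def record_change(log: list, ontology: dict, author: str, justification: str) -> None:
    """Append an auditable entry; an unexplained change is rejected outright."""
    if not justification:
        raise ValueError("Ontology changes require a recorded justification.")
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "author": author,  # a human editor or an AI agent, both logged the same way
        "justification": justification,
        "fingerprint": fingerprint(ontology),
    })

# A fingerprint that shifts between reviews with no log entry is exactly
# the silent drift described above.
log: list = []
ontology = {"escalation_threshold": 3, "categories": ["routine", "pre-crisis", "crisis"]}
record_change(log, ontology, author="J. Doe", justification="Initial baseline per review board")
```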

How Governments Can Retain Control of Their Maps

A military that owns its ontologies gains a decisive advantage. And AI systems operating on government-controlled, doctrine-grounded definitions will be faster, less brittle, and harder for adversaries to exploit or manipulate.

In the short term, the U.S. military should regain control of its ontologies. In the long term, it should reevaluate how definitions are standardized and how it performs regular data maintenance.

Regaining Control of the Map

First, it may be unclear whether a private vendor owns your organization’s map. To find out, locate the ontology, analyze the way data is mapped into its categories, and determine whether it is understandable.

At this stage, answer the following four questions: Does this map align with your shared understanding of the key aspects of your domain? Is the data being rendered faithfully, adhering to the ontology? Can you detect in real time when events and situations are no longer aligning with it? Can you change it at will? If the answer to any of these questions is no, you don’t own your map.

In that case, you need to decouple the map from the private vendor, just like any other vendor-locked software product. As Dave McComb argues in Software Wasteland, the application-centric mindset, in which every platform ships with its own model of what the data means, is the root cause of vendor lock-in across both government and industry. Applications should be ephemeral while definitions persist, but the current acquisition model reinforces the opposite. When the Department of Defense buys a platform rather than requiring vendors to build on government-owned, open-standard definitions, vendor lock-in is the contractual default.

The architectural principle is straightforward: Separate the definitions from the platforms that use them. Design a layered architecture in which data, definitions, and applications are clearly distinct so that when you swap tools, the concepts stay put and only the interfaces change. This means that the ontology, which belongs to the institution, not the product, is treated as shared infrastructure, not a feature of any one application.
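
A minimal sketch of that layering, under the assumption (hypothetical here) that the ontology is published as a versioned artifact outside every application:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# Hypothetical layout: the ontology is a versioned, government-owned
# artifact that no single application ships or controls.
with TemporaryDirectory() as d:
    ontology_path = Path(d) / "joint_definitions_v1.json"
    ontology_path.write_text(json.dumps({"readiness": {"threshold": 0.8}}))

    def load_definitions(path: Path) -> dict:
        """Every application loads the same shared definitions read-only."""
        return json.loads(path.read_text())

    class TargetingApp:
        """Applications consume definitions; they never define them."""
        def __init__(self, definitions: dict):
            self.defs = definitions

    class ReadinessDashboard:
        def __init__(self, definitions: dict):
            self.defs = definitions

    # Swapping one tool for another changes the interface, not the concepts.
    defs = load_definitions(ontology_path)
    apps = [TargetingApp(defs), ReadinessDashboard(defs)]
```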

Standardizing Ontologies

Another area of critical concern is the standardization of ontologies across commands and maps. The military already governs doctrinal definitions and intelligence databases through structured review processes. Extending that same discipline to the ontologies embedded in command platforms is not a new kind of governance but an overdue application of an existing one.

The crucial task is to develop doctrinally grounded ontologies, modeled on the Joint Doctrine Ontology and built on the Basic Formal Ontology, and to represent them in standard formal languages (e.g., Resource Description Framework, Web Ontology Language) for machine use.
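
As an illustration, a doctrinal category could be expressed in these formal languages using a library such as rdflib; the namespace, class names, and definition text below are invented for the example:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical namespace; a real effort would use an authoritative,
# government-controlled URI and Basic Formal Ontology upper classes.
JDO = Namespace("https://example.mil/ontology/joint#")

g = Graph()
g.bind("jdo", JDO)

# Declare "PreCrisisSignaling" as a class with a machine-readable definition.
g.add((JDO.PreCrisisSignaling, RDF.type, OWL.Class))
g.add((JDO.PreCrisisSignaling, RDFS.subClassOf, JDO.GrayZoneActivity))
g.add((JDO.PreCrisisSignaling, RDFS.label, Literal("Pre-crisis signaling")))
g.add((JDO.PreCrisisSignaling, RDFS.comment,
       Literal("Observable force activity intended to communicate resolve "
               "prior to the onset of a militarized crisis.")))

print(g.serialize(format="turtle"))
```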

The Joint Doctrine Ontology, developed in the 2010s, was a proof-of-concept effort to translate joint publication definitions into machine-readable form. It not only showed the approach works, but also revealed the hard problem. Doctrine is written to be flexibly interpreted by humans, while an ontology demands the precise, unambiguous definitions that software requires. Closing the gap between doctrinal generality and computational precision is exactly the type of work that vendor engineers now perform without adequate oversight.

The key is to tie every element in the ontology to a specific operational question a commander needs answered, such as “Is this unit deployable within 72 hours?” or “Does this activity pattern constitute pre-crisis signaling?” rather than model military concepts in the abstract. An ontology built from competency questions stays lean, testable, and connected to decisions.
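
A sketch of what competency-question-driven testing might look like; the questions come from the examples above, while the data fields and thresholds are hypothetical:

```python
# Each ontology element is justified by a competency question a commander
# needs answered; these questions and the toy data are illustrative only.
COMPETENCY_QUESTIONS = {
    "deployable_72h": "Is this unit deployable within 72 hours?",
    "pre_crisis_signaling": "Does this activity pattern constitute pre-crisis signaling?",
}

def answer(question_id: str, facts: dict) -> bool:
    """A toy query layer: every question must be answerable from the ontology."""
    if question_id == "deployable_72h":
        return facts["readiness_hours"] <= 72
    if question_id == "pre_crisis_signaling":
        return facts["signaling_indicators"] >= 2
    raise KeyError(f"No ontology element supports question: {question_id}")

# Regression tests: an ontology change that breaks a competency question fails loudly.
assert answer("deployable_72h", {"readiness_hours": 48}) is True
assert answer("pre_crisis_signaling", {"signaling_indicators": 1}) is False
```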

Some may claim that program offices already handle this issue through configuration control boards or similar processes within a program. But that governance is usually scoped to one platform. Nobody owns the cross-cutting ontological map that spans programs and shapes how leaders understand the battlespace. When three programs define “readiness” in three different ways, no single program office owns the shared problem.

Performing Regular Maintenance

Rather than over-relying on AI to make maintenance easier, the military could treat ontology maintenance like doctrine and order-of-battle management: assign clear ownership, require justification grounded in joint publications and peer-reviewed research, and record every change. Part of the solution is making the authoritative ontology a required filter that AI outputs must pass through before any output reaches a human operator.
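
One hedged sketch of that filter, assuming a hypothetical approved vocabulary standing in for the board-approved, version-controlled ontology:

```python
# Hypothetical authoritative vocabulary; in practice this would be the
# government-owned, version-controlled ontology itself.
APPROVED_CATEGORIES = {"routine", "pre-crisis signaling", "crisis"}

def validate_ai_output(assessment: dict) -> dict:
    """Gate AI-generated assessments against the authoritative ontology
    before anything reaches a human operator."""
    category = assessment.get("category")
    if category not in APPROVED_CATEGORIES:
        # The AI invented a term ("gray-zone preparation," say); quarantine
        # it for review instead of letting it silently enter a briefing.
        raise ValueError(f"Unapproved category from AI pipeline: {category!r}")
    return assessment

validate_ai_output({"category": "pre-crisis signaling", "confidence": 0.7})  # passes
# validate_ai_output({"category": "gray-zone preparation"})  # raises ValueError
```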

A “Joint Ontology Board” that includes operators, intelligence officers, systems engineers, and external scholars who work on crisis behavior, escalation dynamics, and conflict datasets would own this process. As wargames and exercises reveal new patterns, operational reviews uncover near misses, and crises are observed and coded in efforts such as the International Crisis Behavior project and the Militarized Interstate Disputes dataset, a structured process should feed these findings back into the ontology and the categories that command systems use. Through this board, the military would hold ultimate authority to access existing vendor ontologies, approve definitions and thresholds, and mandate audit trails so that every change is logged, reviewable, and reversible.

Governance should also address what the map claims to be true. If a targeting database classifies a building as a military facility, but by the time a strike package is assembled the structure has become an elementary school, the map may be definitionally sound but factually wrong. A well-governed ontology treats every operationally significant instance as carrying mandatory temporal metadata: when the classification was last verified, what evidence supports it, and when it must be reconfirmed before operational use.

A category like “military facility” should include a machine-readable validity horizon that triggers reverification before any instance bearing that tag can enter an operational decision workflow. If the ontology and its required metadata are vendor-controlled, the military cannot mandate that field. If it is government-owned, it can build that question into the map itself.
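
A minimal sketch of such a validity horizon, with hypothetical field names and a notional 30-day reverification window:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical policy: how long a verified classification stays valid
# before it must be reconfirmed for operational use.
VALIDITY_HORIZON = {"military facility": timedelta(days=30)}

@dataclass
class ClassifiedInstance:
    category: str
    last_verified: datetime
    evidence: str

def cleared_for_operational_use(inst: ClassifiedInstance, now: datetime) -> bool:
    """Block any instance whose verification has aged past its horizon."""
    horizon = VALIDITY_HORIZON[inst.category]
    return now - inst.last_verified <= horizon

building = ClassifiedInstance(
    category="military facility",
    last_verified=datetime(2025, 1, 1, tzinfo=timezone.utc),
    evidence="imagery report, 2025-01-01",
)
# Months later, the tag must be reverified before entering a strike workflow.
print(cleared_for_operational_use(building, datetime(2025, 6, 1, tzinfo=timezone.utc)))  # False
```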

Conclusion

These problems are not unique to the Department of Defense. Any organization that relies on ontology-driven platforms without governing the definitions doesn’t “own its map.”

There is still a path forward, but the window is narrowing. The FY2026 National Defense Authorization Act chartered a chief digital officer-led ontology governance framework, so the institutional reflex exists. What remains is the harder step of extending that framework’s reach into the command platforms where ontological failure is most consequential. If that integration does not happen, the United States will risk fielding AI-enabled systems that are fast but brittle: systems that process data at machine speed while operating on definitions that no commander fully controls, no doctrine specialist has reviewed, and no adversary intelligence program has failed to study.

In Dr. Strangelove, the recall code had simply never been placed under proper control. The ontologies that drive today’s command platforms are no different. Owning the map is not a bureaucratic refinement. It is the precondition for fielding systems that are genuinely faster, more lethal, and harder to deceive, rather than merely more automated.

David Kamien is the founder and CEO of Mind-Alliance Systems and the editor of The McGraw-Hill Homeland Security Handbook. He has spent more than 25 years working at the intersection of national security, intelligence, and technology.

Please note, as a matter of house style, War on the Rocks will not use a different name for the U.S. Department of Defense until and unless the name is changed by statute by the U.S. Congress.

Image: Air Force Senior Airman Tabatha Chapman via the Department of Defense.
