Our newest scenario, No Man’s Land
How the exponential growth of AI capabilities is transforming the future of war
There is nothing both as frustrating and yet as rewarding as a hard decision. The complexity of the world means the actions we take often have multiple and contradictory effects. We want to publish AI modeling code as open source, making it easier to build the next generation of great software and accelerate America’s AI competitiveness. Yet that same code can be downloaded by a terrorist organization bent on building a custom bioweapon. Or a near-peer competitor like China can use it to accelerate their own catch-up at our expense. By comparison, eating a proper breakfast is child’s play (it’s bagel and lox, or you’re wrong).
A lot of senior decision-makers — people who have been doing their jobs at the highest level for decades — play Riskgaming. The essential skill these leaders have honed is their ability to make extraordinarily complex decisions at a brisk pace. Generals and admirals make strategic and tactical decisions after decades of courses, simulations, training exercises and hardened field experience. Ideally, each of these military leaders can swiftly answer what should be done when an issue arises.
Crises come, though, when fundamentals change. Suddenly, all of those layers of decision-making acumen built up over decades are less useful — or worse, are actively wrong. It is dangerous when incumbent leaders stay in place during transition points but haven’t updated their thinking about the world. It’s even more perilous when the institutions that appoint these leaders assume that the experiences that led to peak performance in the past will continue to be the requirements for the best leaders going forward.
A key example of such a transition point today is autonomy and war. It’s been nearly four years since the start of Russia’s full invasion of Ukraine, and the major lesson of that conflict is the pivotal role drones and autonomous warfare will play from here on out. The pace of battles is accelerating in line with AI capabilities, and it’s increasingly untenable to have humans on the loop, let alone in it. The machine has to run as fast as its chips will let it.
That’s a very different pace of change for war. Normal defense innovation happens linearly. Bullets, tanks, airplanes — the equipment and materiel that make up a modern army are always improving, but never particularly quickly. No one doubts that the Gerald R. Ford-class aircraft carrier is an improvement over its vaunted predecessor, but neither do they believe it radically transforms the nature of sea warfare. For navy admirals trained to command fleets, their judgment and experience are as relevant today as they were decades ago.
Autonomy, on the other hand, is a rupture. AI capabilities have improved exponentially over the past decade, and just when people started to doubt that progress, we saw it once again with the launch of Google’s Gemini 3 last month. Capabilities in the commercial sector are flowing into the defense sector as ideas and technologies diffuse across the porous border between these two markets.
Returning to Ukraine: the drones on the battlefield in 2022 are not those flying in 2025. Both Ukraine and Russia are constantly fielding new equipment that’s improving on an exponential curve. As anyone who has followed the conflict knows, that’s required both sides to regularly adapt their strategies and tactics to the new normal of the automated battlefield. There are dozens of ambitious Ukrainian drone startups innovating across a range of functions, from rapid mine detection and reconnaissance to cargo and munitions delivery.
Yet, this is just one environment in defense. We are seeing the same exponential pattern in all domains of war, including land, sea, air, space, cyber and more. No facet of modern warfare will be left unchanged.
With fits and starts, America is adapting, but it’s been slow going. This is a crisis, and one that’s finally getting more attention. This week, The New York Times’s editorial board called for rapid progress on defense innovation. Like any rupture, autonomy is upending the carefully cultivated judgment of thousands of senior military officials. What was taught in boot camp, officer candidate school and war college or learned across decades of service is suddenly much less relevant. In a way, we are all amateurs in this new world.
Riskgaming is how we take sudden amateurs and begin to bridge their valuable past experiences to meet the demands of the future. Science, technology and finance are rapidly changing, but that doesn’t mean the cultivated instincts for decision-making and judgment have to be discarded. In times of intense uncertainty, Riskgaming allows leaders to keep their tools even as they throw out expired schematics.
It’s obvious who is ready to adapt. I spent an hour with a current four-star general leading one of the most important U.S. global commands. He gets this future, entirely. As we talked about AI and what it’s doing in different domains, I was profoundly reassured that this leader comprehends how much everything is changing, and just how much work is ahead for America to maintain its power this century.
On the other hand, I had a similarly lengthy talk with a three-star general who is a deputy of one of America’s other most important commands. I had the opposite reaction: this is someone who thinks nothing has changed — that autonomy is merely a buzzword peddled by defense salesmen for profit. In his view, intelligence analysis just needs more bodies to be effective and can’t be automated. I was gobsmacked. At times, I viscerally felt the cool shiver of past defeats in the face of recalcitrant hubris. This is what the French felt as German tanks drove through the Ardennes.
In short, we need better tools to upend and recalibrate our decision-making around AI and its effects on national security.
That need was the genesis of our latest Riskgaming scenario, No Man’s Land. Mike Bloomberg, then chairman of the Defense Innovation Board, along with Beth Kroman, wanted to convene a senior group of military leaders, members of Congress, think tank presidents and private-sector CEOs to engage competitively with the future of defense innovation and how the exponential growth of AI technologies would transform the future of conflict and the warfighter in the field.
We hosted that group last year, and since then, we’ve hosted the game for senior groups from U.S. allies, including Canada and Australia.
What makes No Man’s Land special — and also our richest Riskgaming scenario to date — is the number of tradeoffs the game models in an extremely compact format. It takes only about three hours to play with eight players, and the rules aren’t fiendishly complicated. Yet the scenario asks players to balance economic growth against national security, emerging threats against great power competition, open source against proprietary technology, commercial competition against cooperation, productivity against employment, and long-term investment against short-term capital constraints. That’s a lot of tradeoffs to balance in one experience!
We’ve hosted dozens of Riskgaming events since we launched two years ago. But none of them has proved to be as controversial as a runthrough of No Man’s Land. There is always a divide between players who “get it” and those who don’t. What’s been interesting is that it is never a fight over the details. Instead, it’s a much more fundamental disagreement over the scope of future automation and just how much will change in the years ahead.
I’m not too scared by the bulls or the pragmatists: the bulls will rein themselves in and the pragmatists will figure it out. What scares me are the vociferously opposed, the leaders who believe autonomy is nothing more than a phase or bubble that will simply disappear. The arguments around our Riskgaming tables are invaluable, but I have grown more and more wary of the belief that a prior generation of leaders will catch up in time. When the world ruptures, there are those who find their futures and those who slink to the past. That’s the discriminating power of a hard decision in a world that’s rapidly changing.
In addition to the game, I’ve posted our memo to the Pentagon outlining the main lessons of No Man’s Land. By all means, read that even if you don’t play the game.