Who really stopped Iran from getting the bomb? Sanders’ data center gambit. Fun with Victorian AI, and more.
Hey – I’m on vacation!
Unfortunately, OpenAI’s offer to buy Riskgaming must have been lost in the mail. Shucks. In the meantime, I am on vacation the next few weeks, so our usual dispatches and podcasts will be a bit haphazard, but I’ll try to get Lux Recommends out each week.
Nobody start any new wars, please. Please.
From Lux Capital
Lux is partnering with Lip-Bu Tan, CEO of Intel, on a new funding round of $60 million for Cognichip. The company is tackling the two major challenges in semiconductor development — high cost and inaccessibility — by developing an AI model purpose-built for semiconductor design.
We’re also leading a $60 million Series B round for Crosby, an AI law firm that helps companies execute contracts faster and eliminate the dreaded billable hour. As Grace Isford noted:
What convinced us was when Crosby publicly launched last year and our portfolio companies were clamoring to use the product. By the time they signed their third Lux portfolio company and were recruiting some of the best talent in NYC, we’d seen enough.
From around the web
1. Bombs away
I want to start off this week with an amazing New Yorker story from David D. Kirkpatrick about the former CIA officer who says he helped stop Iran from getting nuclear weapons. During a relatively short stint at the agency, Kevin Chalker recruited Iranian nuclear scientists to defect to the United States. After leaving, he built a security consulting firm that brought in hundreds of millions of dollars before everything was derailed in a hacking scandal.
Chalker said that, at least for him, the curious-scientist ruse never worked. He told me that every actual scientist he approached immediately guessed that he was a spy, from either the U.S. or Israel. “Every time I walk up and say, ‘Salaam habibi, how are you?,’ they just think, Oh, this is it, and they assume I am there to kill them.” Most of the time, he said, the terrified scientist was “compliant” enough to at least sit down in a café. Chalker typically had about ten minutes to explain, as gently as possible, that he was from the C.I.A., that he had the power to secure the scientist and his family a comfortable new life in the U.S.—and that, if the offer was rejected, the scientist, regrettably, would be assassinated. (Chalker tried to emphasize the happier potential outcome.)
2. Circuit breaker
We’ve been writing a lot about the AI backlash and data centers, so Senator Bernie Sanders’ proposed federal moratorium on data center construction came as no surprise. That doesn’t mean it’s a good idea. Laurence liked Nat Purser’s examination: Even if enacted, it wouldn’t meaningfully slow AI development — companies would simply build elsewhere — and it bundles too many unrelated policy goals into one unwieldy bill. If Congress wants to regulate AI, it should do so. But this isn’t how.
When Singapore paused new data center development between 2019 and 2022 over energy and land use concerns, the regional buildout of compute continued regardless. Microsoft, Google, and AWS have collectively committed billions across Malaysia, Thailand, and Indonesia. Meanwhile, the Gulf states are treating AI infrastructure as their post-oil economic strategy. Middle East data center capacity is projected to triple from 1 gigawatt to 3.3 gigawatts in five years. Saudi Arabia just earmarked $40 billion for AI investments and declared 2026 the “Year of Artificial Intelligence.”
3. Hedge bets
Prediction markets have also been on the brain. A new article from Jamie Pietruska in Aeon put some things in perspective for me. It traces the history of “catastrophe markets” — from 19th-century betting on the weather to today’s Kalshi and Polymarket, where people wager real money on disasters like wildfires and hurricanes, as well as on climate change. Throughout history, proponents have argued that such markets democratize forecasting, but Pietruska points out that they really just distort how society thinks about disaster.
Social scientists and climate scientists think probabilistically about a range of possible outcomes in multiple possible futures. Recognising that ‘the future’ is plural and that futures are not overdetermined by the present offers a way to grapple with complexity and uncertainty, as well as a way out of fatalism. As the science-fiction writer Octavia Butler observed in ‘A Few Rules for Predicting the Future’ (2000), ‘the very act of trying to look ahead to discern possibilities and offer warnings is in itself an act of hope.’
4. Cloudy forecast
Speaking of disasters, complexity and uncertainty: Alex Trembath’s latest in Asterisk is also worth checking out. He argues that climate science has shifted from honestly acknowledging uncertainty to asserting artificial certainty in order to drive policy and litigation. The problems this has caused — eroding public trust and failing to reduce emissions — are real.
For a newly successful climate science to take root in the coming years and decades, we need a renaissance in uncertainty. Uncertainty is not a dirty word — in climate science or anywhere else. Indeed, it is better understood as a kind of epistemic bravery: an assertion that while scientists and policymakers can’t predict the future, a scientifically informed, democratic public is capable of navigating it.
5. Gray matter
Russia has been running a sustained gray-zone warfare campaign across Europe since 2022, using cheap and deniable tactics like disinformation, sabotage, cyberattacks, and military probing. A report from the Soufan Center looks at just how impactful those efforts have been and finds that context, like the level of pre-existing societal cohesion, really matters.
While the cost asymmetry, which makes hybrid threats the obvious choice for Moscow, explains why this wave of [tactics, techniques, and procedures] TTPs was observed across the six case studies, it does not explain their impact or lack thereof. The impact of a Russian hybrid operation — incredibly difficult to measure — appears to be shaped in our case study countries by the presence or absence of institutional constraints, attribution mechanisms, and societal cohesion in the targeted country, rather than by the skills, frequency, or intensity of Russian operations. The impact of hybrid activity, especially in the ‘Information’ domain, is largely contingent on pre-existing societal factors — which can amplify or blunt operations regardless of Russian operational tempo or skillset.
6. Jolly good
Finally, if you just need a break ahead of the weekend, check out the delightful “Victorian Gentleman Chatbot” Laurence found. Ask it a question and get an answer trained on 28,000 British texts published between 1837 and 1901. Mr. Chatterbox has some suggested reading, too.
From Riskgaming
Get out, gambling, AI and democracy
Conservatives touching grass, bad — and good — writing about gambling, and why AI might improve democracy. Plus my dispatch from the Hill and Valley Forum.