Are humans getting smarter? Whatever happened to the hyperloop? And what happens when AI starts making scientific discoveries we don’t understand? Plus abundance and foreign policy.
From Lux Capital
Nominal, the startup aimed at making defense, space, energy, and automotive manufacturing more efficient through advanced data analytics and testing, has achieved unicorn status. An $80 million investment round, including Lux, recently put the startup’s value at $1 billion.
“We wrote Nominal's first check,” Josh Wolfe noted in a post about the deal. “Since then? 7x ARR growth year-over-year. Four of the five largest defense contractors on the platform.” He continued:
When the engineers building the most technically advanced hardware systems in the world converge on the same platform, you pay attention. And when AI starts to actually WORK for hardware, not as a cheap “digital engineering” demo, not as a “digital twin” gimmick, but as a tool that earns trust by being right, it will be because platforms like Nominal built the data layer underneath it. You can’t have AI for machines without making everything about those machines machine-readable first.
In other news, Lux held its Health X Intelligence forum this week, including a panel on women’s health with Angela Dao of Maven, Bailey Renger of BeSound, and Melissa Teran of Alife. Fortune has a nice writeup of the discussion.
Finally, The Information reported on a memo from Josh to founders suggesting they start preparing for a bumpy road ahead. Surveying bonds, tariffs, infrastructure, and markets, he noted that “signals suggest something is off.” From The Information: “he is concerned about ‘the bubble of AI, which people are all afraid to talk about publicly.’ He said many people in the VC industry ‘privately harbor these very vocal doubts’ but keep them quiet because of the upbeat culture.”
From around the web
1. Reading room
I’ll start off this week with an essay in Aeon dismissing all the doomsaying about technology and the decline of literacy. Human cognition is evolving rather than declining, Carlo Iacono writes, with people engaging ideas across multiple modes — text, audio, video — rather than just one — books. The solution isn’t nostalgia so much as making meaningful, multimodal thinking accessible to everyone.
Reading worked so well for so long not because text is magic, but because books came with built-in boundaries. They end. Pages stay still. Libraries provide quiet. These weren’t features of literacy itself but of the habitats where literacy lived. We need to rebuild those habitats for a world where meaning travels through many channels at once.
2. AI mode
Coming at this issue from a slightly different angle, a new Wharton paper offers an addition to Daniel Kahneman’s model of fast thinking (system 1) and slow thinking (system 2): AI thinking (system 3). Through three experiments, the authors demonstrate “cognitive surrender,” where people routinely defer to AI outputs with little critical scrutiny. H/t Katie Salam
Tri-System Theory is not a warning about AI’s dangers but a recognition of System 3’s psychological presence. We do not merely use AI; we think with it. In doing so, we must ask new questions: What happens when our judgments are shaped by minds not our own? What becomes of intuition and effort when a generative, artificial partner stands ready to answer? How do we preserve agency, reflection, and autonomy in a world where users engage in cognitive surrender? We offer Tri-System Theory as a conceptual foundation for understanding these challenges. It is a theory for an age of human-AI algorithmic cognition, and for the decision-makers, researchers, and designers shaping that future.
3. Tunnel vision
Remember Elon Musk’s hyperloop? Only vaguely? That’s by design. Laurence liked Matt Ribel’s look at what went wrong in the gambit to bring us revolutionary high-speed underground travel — an effort that had all but collapsed by 2021 due to technical incompetence, regulatory hurdles, and a lack of serious planning. For Ribel, this saga reflects a broader cultural tendency to embrace flashy, billionaire-led tech promises over the hard, collective work that real infrastructure requires.
Soon after, the Boring Company gathered reporters in California for a highly anticipated demo. But there was no high-speed, self-driving bus. Instead, a Tesla Model X SUV carried passengers through a short tunnel, sometimes approaching 50 miles an hour and ricocheting so violently that one journalist fell ill. Musk’s new-new vision was just a car, moving slower than it would aboveground. “I thought it was epic,” he told reporters. “It was an epiphany.”
For more on Musk’s foibles, check out my interview with Christian Davenport, author of the book Rocket Dreams: Musk, Bezos, and the Inside Story of the New, Trillion-Dollar Space Race.
4. Lost in translation
We don’t have speedy transportation, but we do have AI that can beat us at chess. And science is not far behind. On that score, scientist-in-residence Sam Arbesman recommends Asimov Press’ take on the “legibility problem”: What happens when AI makes discoveries that are too complex or conceptually foreign for humans to understand — and use?
If AI science does achieve superhuman performance, and if AI systems begin forming their own research communities around concepts that mutate faster than we can track, then the work of human scientists will shift from that of creation to that of excavation. How much knowledge we will need in order to act on what AI discovers remains an open question. But some degree of legibility will be essential, as discoveries that cannot be interpreted cannot be deployed.
5. Card trick
Finally, Sam directs you to a fascinating video about the life and death of HyperCard, which came before the web and led to the best-selling computer game of all time — then disappeared.