The Riskgaming newsletter started nearly a decade ago as the brainchild of our scientist-in-residence Sam Arbesman, who dubbed it “Lux Recommends.” As we near issue #500, we’re taking a moment to revisit his top selections. This week: Sam’s picks on AI — from early musings on the changes the technology might bring to the latest on where it will go next.
AI around the web
1. Black Death, white lies
In 2023, Sam highlighted Benjamin Breen’s look at large language models and how their hallucinations would transform education and the humanities. Benjamin described using LLMs as “history simulators” in his own work as a history professor, where he would have students role-play scenarios like the 1348 Bubonic Plague and then fact-check the AI’s inaccuracies.
I was blown away by student engagement and creativity. Here’s a brief list of what some of my students did in their medieval simulations:
ran away from home to become an apprentice to a traveling spice merchant
developed various treatments for the plague, some historically accurate (like theriac) others much less so (like vaccines)
negotiated complex legal settlements between the warring guilds of Pistoia
fled to the forest and became an itinerant hermit
attempted to purchase “dragon’s blood,” a genuine medieval and early modern remedy, to cure their fast-worsening plague
made heroic efforts as an Italian physician named Guilbert to stop the spread of plague with perfume
became leaders of both successful and unsuccessful peasant revolts
Student engagement in the spring quarter, when I began these trials, was unlike anything I’ve seen.
2. Labor pains
Up next, in 2024, we’ve got Sara Tillinger Wolkenfeld’s essay in The Atlantic, “Productivity Is a Drag. Work Is Divine.” Drawing on Jewish tradition, Sara distinguished between melakhah (meaningful, creative labor that imitates the divine act of Creation) and avodah (rote toil). She warned that delegating creative work to machines risks robbing us of purpose and fulfillment, so we should be selective about which technologies we adopt. This argument feels as relevant now as it did when Sam shared it two years ago.
Modern technologies such as generative AI threaten to make 21st-century Americans like the woman in the Mishnah: Deprived of purpose, convinced that our creative output is useless because a computer can produce a result that is sometimes just as good, or even better. Much of the debate around AI hinges on the question Can a computer do it better? But Jewish texts insist that the most important question is about process, not product.
3. Prompt and circumstance
Jumping forward in time, earlier this year we covered the Claude Code craze. Whether you’ve been swept up or not, “10 things I learned from burning myself out with AI coding agents” is worth a revisit. Benj Edwards explained what AI tools are good at, what they’re (still) not good at, and why they’ll probably end up creating more work for humans rather than making us obsolete.
Due to what might poetically be called “preconceived notions” baked into a coding model’s neural network (more technically, statistical semantic associations), it can be difficult to get AI agents to create truly novel things, even if you carefully spell out what you want. For example, I spent four days trying to get Claude Code to create an Atari 800 version of my HTML game Violent Checkers, but it had trouble because in the game’s design, the squares on the checkerboard don’t matter beyond their starting positions. No matter how many times I told the agent (and made notes in my Claude project files), it would come back to trying to center the pieces to the squares, snap them within squares, or use the squares as a logical basis of the game’s calculations.
4. Lost in translation
Whatever their flaws, we do at least have AI that can beat us at chess. And science is not far behind. On that score, Sam highlighted Asimov Press’ take on the “legibility problem”: What happens when AI makes discoveries that are too complex or conceptually foreign for humans to understand, let alone use?
If AI science does achieve superhuman performance, and if AI systems begin forming their own research communities around concepts that mutate faster than we can track, then the work of human scientists will shift from that of creation to that of excavation. How much knowledge we will need in order to act on what AI discovers remains an open question. But some degree of legibility will be essential, as discoveries that cannot be interpreted cannot be deployed.
5. Keyboard cowboys
Sam’s last pick for this week’s edition is “Coding After Coders” by Clive Thompson in the New York Times Magazine. A majority of Americans may be neutral or skeptical on AI, Clive wrote, but not coders. After all, AI solved one of coding’s main problems: drudgery. It has transformed software development so dramatically, he wrote, that most programmers now spend their days directing AI agents in plain English rather than writing code themselves, boosting productivity by anywhere from 10x to 100x.
I looked at Ebert’s prompt file. It included a prompt telling the agents that any new code had to pass every single test before it got pushed into Hyperspell’s real-world product. One such test for Python code, called a pytest, had its own specific prompt that caught my eye: “Pushing code that fails pytest is unacceptable and embarrassing.” Embarrassing? Did that actually help, I wondered, telling the A.I. not to “embarrass” you? Ebert grinned sheepishly. He couldn’t prove it, but prompts like that seem to have slightly improved Claude’s performance.
From Riskgaming
Consciousness, biology, Tylenol
How the study of consciousness fell asleep at the wheel, surprising biomedical advances, and why Tylenol beats Advil.