When It Comes to AI and Health, Everyone’s Thinking of the Wrong Oppenheimer
A dispatch from the Health x Intelligence Conference hosted by Lux
On a sunny morning on Pier 15 off the Embarcadero, Chelsea Clinton opened our Health x Intelligence conference (in conversation with Lux Partner Deena Shakir) by telling us the story of the room we were all sitting in.

The Exploratorium — the San Francisco science museum where nearly three hundred founders, doctors, government officials, pharma executives, and investors gathered on Wednesday to discuss the intersection of healthcare and AI — was founded by Oppenheimer. Not J. Robert Oppenheimer, the one they made the movie about, the one AI frontier lab CEOs like to compare themselves to, but his brother Frank.
Frank also worked on the Manhattan Project. He was also hauled before the House Un-American Activities Committee. And after he was blacklisted from the scientific community, he spent a decade working as a cattle rancher in rural Colorado. When he finally clawed his way back to teaching, it was at a high school with fewer than three hundred students. With limited resources, Frank got creative: he took kids to the junkyard, where they got their hands dirty learning about mechanics from abandoned car parts. After seeing firsthand the power of tactile learning, Frank built the Exploratorium on a radical conviction: that understanding doesn’t come from listening to an authority figure. It comes from experiencing things yourself.
Clinton connected this story to her own childhood, visiting a museum in Arkansas that the Exploratorium had inspired, and then to this current moment. “Science really should be a place of not only understanding the future,” she argued, “but creating greater community in the present.”
Throughout the rest of the day, I kept thinking about Frank Oppenheimer because, at bottom, the entire conference was an argument about whether patients should be allowed to get their hands dirty and touch their own data, take charge of their own healthcare — and that argument is a lot older than AI.
In 1847, the American Medical Association published its first Code of Ethics. Article II, titled “Obligations of Patients to Their Physicians,” instructed that “the obedience of a patient to the prescriptions of his physician should be prompt and implicit. He should never permit his own crude opinions as to their fitness to influence his attention to them.”
Crude opinions. That was the official position of American medicine on whether you should have a view about what was happening to your own body.
And it held for a remarkably long time. A JAMA article from the fifties debated whether patients who learned of their cancer and became distressed should be “handled in many respects as children.” In 1961 — the year Yuri Gagarin first orbited the Earth — a survey of 219 physicians in Chicago found that ninety percent would not tell a cancer patient the diagnosis. Not couldn’t, wouldn’t. Many actively changed the chart to avoid any mention of the scary word. The phrase “informed consent” didn’t even exist until after the Nuremberg trials compelled the medical profession to reckon with what happens when physicians make decisions for people without their knowledge. In other words, for twenty-five centuries, the doctor-patient relationship was modeled on parent and child. The physician possessed knowledge; the patient possessed obedience.
What I saw at the conference was evidence that this regime is finally ending. Not because of regulation or an ethical awakening, but because patients now have tools to touch what was kept behind the glass of specialized knowledge. Dr. Hala Borno, CEO of Trial Library, argued that because patients now have access to real (if imperfect!) expertise through LLMs, doctors have gone from being fully in control to acting more as advisors. Dorothy Kilroy, the Chief Commercial Officer of Oura, made the case that people’s obsession with tracking their own health isn’t all about vanity — it’s often about the fundamental desire to live more life:
Time with my kids is really important to me. How I age is really important to me. I want to be around, and I’m not embarrassed to say I’m obsessed with that. And for the first time, we now have data ourselves that makes that visible. Not as a doctor, not as a professor. We actually have it ourselves. What was invisible before is now visible. Suddenly, for the first time ever, you feel more in control.
This was what I kept hearing all day. The VP of health technology at Meta, Dr. Freddy Abnousi, described patients snapping photos of lab results and interrogating LLMs before their appointments. And Angela Dao, Maven Clinic’s Director of Product for Maternity, struck the same chord when she said “I can speak for myself, my members, my friends who are pregnant, I think we’re all using ChatGPT one to ten times a day to triage our symptoms.”
Frank Oppenheimer would recognize all of this instantly: people learning by handling directly what used to be interpreted for them.
The best work I saw at the conference extended this logic. Waymark uses AI to reach Medicaid beneficiaries, a population of seventy million: patients answered calls from its voice agents at a fifty-five percent rate, higher than for human callers, and then filled out housing and food-benefit forms alongside the AIs. Waymark’s CEO, Rajaie Batniji, published a paper showing AI can predict complex pregnancies fifty-five days earlier than conventional methods. Maven Clinic carries patient data from fertility through pregnancy through menopause, building a continuous health picture that no rotating cast of in-person providers could replicate. And BeSound is using AI-enhanced ultrasound to screen younger women for breast cancer, building the first dataset on a population that has never been studied because mammography starts at forty and misses women with dense tissue.
None of these are administrative efficiency plays. They are contact plays: they put patients’ hands on data, care pathways, and decisions that were previously locked behind institutional walls.
Of course, there are serious, serious risks to handing everything over to patients. WebMD has been the bane of doctors’ existence for good reason. And as several panelists mentioned, a recent Nature Medicine study shows ChatGPT Health under-triages emergencies more than half the time. In ChatGPT-speak, that’s not just scary, that’s dangerous.
Unfortunately, so far the regulatory response hasn’t been to help make AI better, to supplement it with easier access to doctors, or even to mandate more direct disclaimer language for users, but to ban AI use entirely. This week, New York’s Senate Bill S7263 landed on the state Senate floor. The bill would make chatbot operators civilly liable whenever AI provides a “substantive response” in any of fourteen licensed professions — medicine, law, nursing, dentistry, psychology, and more. Strangely, it never defines “substantive response.” The proposed law targets the deployer, not the model maker, so a hospital running an AI triage tool carries the risk.
The Abundance Institute called the bill “shortsighted at best and protectionist at worst.” I’d call it something more specific: it’s the 1847 AMA Code in a consumer-protection costume. The same ancient reflex — the patient’s crude opinions are dangerous, knowledge must be mediated by the credentialed — updated for a world where the patient finally has tools to be more than a child in the exam room. Restrict low-cost guidance channels, and paid professional channels become the default. That lands hardest on the people with the least money and the least options, which is to say, the people who most need the Exploratorium.
This matters because trust is genuinely at stake — just not the way S7263’s authors think. “Healthcare moves at the speed of trust” was the conference’s most-repeated line. But I was struck by FDA advisor Dr. Shantanu Nundy’s reply that, actually, “healthcare also moves at the speed of desperation.” A hundred million Americans lack routine care, eighty-three million live in areas without enough doctors, medical errors are the third-leading cause of death, and American life expectancy has been flat for more than a decade. Josh DeFonzo, whose company Mendaera builds robotic guidance for medical procedures, agreed, arguing that “we trust the current standard of care far too much.”
May Habib, CEO of Writer, offered the sharpest formulation I heard all day. When AI is the one restricting care — denying a claim, rejecting a referral, saying you don’t need treatment — a human should always be in the loop. But when AI is expanding care — reaching a patient who otherwise has nothing, flagging a risk weeks earlier, filling out a benefits form at midnight — it should be permissive. Yes, the line between the two is blurrier than anyone would like; the same chatbot that expands access can also undertriage an emergency. But the current system undertriages millions of people every day, not through error but through absence. We are holding AI to a higher standard than a system where a hundred million Americans don’t have a doctor to undertriage them in the first place. At what point does “do no harm” become “do nothing at all”?
That desperation shows up at the fringes, too: biohackers experimenting with Chinese peptides and unregulated compounds. The experiments are a dangerous and bad idea, but the biohackers are the canary in the coal mine, the leading indicator of a trust crisis in healthcare that predates AI by decades. The pharmaceutical industry has been the lowest-rated industry in America since Gallup started asking — below the federal government, below oil and gas, and below banking. Trust in doctors has fallen fourteen points since 2021 alone. Three in four Americans don’t trust drug companies to price their products fairly. This is the soil into which healthcare AI is being planted.
Gray Matter, my new riskgaming scenario, explores this exact dynamic: what happens when a breakthrough technology meets an information environment where trust has already collapsed. What I’ve learned, watching hundreds of players navigate this scenario, is that when trust is broken, new technology doesn’t get evaluated on its merits, it gets consumed by the narrative around it. Effective treatments get rejected because their side effects are more viral than their benefits. Sugar pills get approved because they have better marketing. The question is not “does this work?” The question is “who do you believe?” And the answer, increasingly, is “not the institutions.”
That’s the real danger for healthcare and AI. Not that the technology won’t work — the conference proved this is a tremendously powerful tool in the right hands. The danger is that it arrives into a trust environment so degraded that the public won’t know who to believe — their phones, their doctor, their doctor using an LLM, or anything in between. And an LLM plays the strange dual role of being both an expert-sounding oracle that can ingest your personal data and draw on the sum of published medical knowledge in an instant, and a hallucinating autocomplete that doesn’t actually know your situation, often reverts to the mean, and may dangerously underplay or overplay your symptoms. Patients have more information than ever, but information without trust is not empowerment, it’s noise. And noise, in healthcare, kills people.
Which brings me back to Oppenheimer’s museum. The Exploratorium worked not because it handed visitors a textbook and told them to figure it out, but because it built an environment designed for contact — curated, structured, scientifically rigorous, safe enough to explore, and real enough to learn from. The exhibits weren’t consumer products and they weren’t credentialed lectures. They were something in between: an institution that trusted people to touch the science, and that earned people’s trust by making the science touchable.
Frank Oppenheimer spent a decade in the wilderness because the institutions of his era couldn’t distinguish between dangerous knowledge and democratic knowledge. We are about to make the same mistake — not with physics, but with our own bodies.