It’s always jarring to walk the heart of Silicon Valley, which beats not to the tune of people (there weren’t very many of those walking or biking around) but rather to the resonant electrons that compute our civilization.
And what a civilization it is! I was heading this weekend to one of the corporate campuses that dot Mountain View, the famed NASA Ames Research Center’s Hangar One gloriously rocketing up in the distance. Dating to 1939 and once an emblem of ambitious spaceflight, today it’s better known as the elite earthbase for private aviation in the Bay Area, given its proximity to Silicon Valley’s wealthiest. Google paid $1.16 billion for a sixty-year lease to manage the hangar and the associated airfield.

The inflated valuations of the companies here are matched only by the deflated tires of the motorhomes crammed into street parking, a nomadland of the involuntarily itinerant. One camper trailer had a handwritten sign in the passenger window pleading with Mountain View enforcement not to ticket it. Meanwhile, for those not hiding in the liminal shadows of Mountain View’s municipal ordinances, a one-bedroom at the building six feet away (which offers a “life in the spotlight” as part of a “carefully curated ecosystem”) rents for about $4,000 a month.
I was here for the O’Reilly Social Science Foo Camp, which brought together about 200 people from all walks of life — from the typical (journalists, researchers, academics, writers) to the unusual (a banjo player, a cocktail craftsman). It was a crowd that leaned heavily erudite; I am sure more books have been written by the people at the conference than were read by the entire population of Mountain View last year. (Those electrons holding together fibers of paper could really be doing something more intelligent, no?)
AI was the inevitable focus. Foo Camp is an unconference, offering a democratized barometer of what people are thinking about and who is drawn to what subjects. The combinatorial math of people and time meant that pretty much every intersection of the social sciences and AI was covered, from economic productivity and jobs to politics, sociology, psychology, aesthetics, gaming and more, with the occasional provocative protest session like “AI People are Losers.”
What struck me throughout the discussions was how much our frames and models of society — the very essence of social science — are entirely unequipped to handle the world that’s shortly coming. The social sciences aren’t dead; if anything, they are more important than ever. Yet there is a severe discontinuity with the past that requires more than just updating Weber, Smith and Freud for the agentic era.
Take AI and government. What does a constitution mean in the age of AI, when computers themselves may have more power over our affairs than fellow citizens? Will an OpenClaw’d computer be allowed to vote someday? That bot and its agentic brethren may well control the fates of more people than the management class ever did, even at the height of the postwar boom.
There were strong ideas and contributions from everyone, including a long-time elected politician who had his own perspective from the trenches. Each contribution, though, seemed to be a brushstroke on a canvas that no one could see, or even fathom.
The conversation regularly hit impassable intellectual barriers. I brought up the idea that politicians in the future will be able to hyper-personalize their messages to voters, complete with audio and video (someone mentioned Narendra Modi’s hologram as an early effort). This would create a breakdown in the political system, as candidates sell an atomistic vision during the campaign while ultimately governing for the whole. A political scientist proffered that the academic research shows voters are very hard to persuade, and that the only realistic goal of a campaign is to mobilize allied voters. That research made sense before, but we are now finding early evidence that agents can be highly persuasive in ways that humans cannot, precisely because what was once a social interaction, with its scripts, norms and expectations, has been replaced by a human-machine interchange of near-infinite patience.
Many of us see machines as more objective and understanding than other humans, a tendency known as the machine heuristic. We might hide our health from our closest friends while uploading our complete medical records to ChatGPT or Claude to understand our situation better. Which relationship is more personal? That was the topic of a session on “is it okay to form an emotional attachment to your AI?” The subject echoed Sherry Turkle’s decade-old book Alone Together, which argues that the lack of agency embedded in technology is precisely what makes it so dangerous: tech, whether robots or agents, is tailored to serve us rather than becoming an equal partner through shared experiences.
One person who attended the session said they were opposed to the very notion of AI friendships, seeing them as part of a broader decline of civilization. I countered, suggesting that AI friendships are an inevitable and perhaps even positive development, particularly for the already lonely. Which is better: being alone, or being alone with a bot? For many, that’s the bleak but realistic choice they get to make.
Here again, our social science frames of human experience seem to be wanting. What is friendship? If I talk to a stochastic parrot, and I imbue those interactions with meaning, could that be enough? Derek Thompson and other writers have claimed we have a loneliness epidemic, but it’s as much an epidemic as the common cold. Loneliness is universal and eternal. One of the recurring themes of Robert Caro’s first book on Lyndon Johnson is just how lonely the Hill Country of Texas was a century ago. Do our aspirational frames of friendship have any basis in past or present reality, let alone in our agentic future?
Optimism came from a session on chess. AI comprehensively defeats humans there, and yet the world of chess today is more robust than it has ever been. World chess championships are now watched by millions, and the number of players seems to be growing rapidly, with more than a million ranked players and tens of millions of occasional amateurs.
Why do so many people play a game in which they can be easily defeated by AI? Answers abounded. One person noted that the community is engaging and that chess is fundamentally a social activity that affords reasons to connect with others. Another said that it was “existential solidarity”: we care about what another human will do far more than what a machine will. It’s no different from watching a speedrun of a video game: AI can perfectly mash the controller, but it doesn’t invoke the sense of awe we feel when we know a human accomplished the near-impossible. Another brought up Walter Benjamin’s famed essay on what makes art unique, claiming that it has an “aura” that arises from its human provenance.
My answer was that onboarding to chess — like coding — has been leveled. Learning chess before the internet meant distant and slow correspondence, perhaps a live coach, and a lot of books. Now, one can play an opponent with an equally matched Elo rating and rapidly acquire skills and expertise, while chess commentators and AI tutors explain moves and give more context for what’s taking place. Rapid feedback, greater fun, and play at the click of a button — all the ingredients are here for a robust and growing community. AI may well be the greatest grandmaster in history, but that ultimately doesn’t matter.
Unfortunately, it does matter in the economy. There were lightning talks and sessions on jobs, productivity and human agency throughout the conference, and such concerns were certainly ambient in many side conversations. Despite the subject’s popularity and primacy, though, the distance between the economic prognosis of AI and the tools of social science was the greatest of any I encountered.
I got into a heated argument over productivity. The old Robert Solow line that “you can see the computer age everywhere but in the productivity statistics” has always had an obvious answer to me: you aren’t counting right — you’re just a drunkard searching under the lamppost. Spotify delivers the entire world’s music to three-quarters of a billion people. Yet it paid out only $11 billion to musicians in 2025. Outside of those revenues and royalties, economists have no way to calculate what music is worth to people. A few might try to estimate consumer surplus, of course, but those accounting methods are never integrated into the national statistics that ultimately drive policy.
That narrowness of economics is one of its defining features. What is a job? Economists have robust definitions of “job,” definitions that are going to require a substantial revision if not a wholesale rip-and-replace in the years ahead. Is orchestrating an agentic army of Claudes and Geminis a job? What if we start to see the rise of hyper-freelancers, who might work for 100 clients or more simultaneously while leveraging their bots? One could argue that nothing has changed, and yet, everything has. That’s the cardinal challenge for the social sciences in the years ahead.
There was a low hum of pessimism across the proceedings, latched onto the nightmare of a permanent AI underclass left behind by an elite squad of agentic engineers accelerating away with the entire world. It’s not even a dystopian fantasy: the tech industry’s success over the past three decades provides ample proof of the trajectory.
The usual points were raised in various discussions, from universal basic income to some form of techno-communism, ideas that have already failed concretely or intellectually. What’s not even being discussed is what happens when bots have their own wallets and the very notion of property starts to break down. Who counts as a person? What is their (or its) relationship to property and contracts? Can an AI legally possess its own compute? Why not?
I was thinking about that on the walk back to the hotel. Answers didn’t flow, and I came to understand how deeply unsettling a world we are all entering when fundamental questions about the arrangement of society remain openly ambiguous and widely unanswered. Then again, as I walked by one decrepit motorhome after another, maybe unsettling some of our society’s (or at least, California’s) social failures is the disruptive breath we need to inhale. Our hearts might beat fast, but they will beat strong.