Finding a Third Way on the AI singularity
Intelligence risk and reward with Mike Sexton
Our guest today, Mike Sexton, believes that the AI singularity has arrived, and somehow, it ended up “on page C3 in the newspaper.” What he’s getting at is that the tools we have at our fingertips today, like OpenAI’s ChatGPT, Google’s NotebookLM and others, are already so diversely capable that we have reached a point of no return when it comes to future societal change. We need to get ahead of those changes, embrace them, and offer new paths for everyone to take advantage of these tools.
Mike serves as the Senior Policy Advisor for AI and Digital Technology at Third Way, the prominent centrist Democratic think tank that emerged from the Clinton administration and the pro-tech, pro-competition left that was at the core of national power in the 1990s. He researches the changing policy landscape around AI technologies, and argues that Democrats need a new direction other than anti-capitalism or existential risk doomerism.
Joining Laurence Pevsner and me, the three of us talk about the rise of effective altruism and effective accelerationism (or e/acc), why improving government services is so critical for the future of the Democratic Party, AI technologies in robotics and research, and finally, why a bipartisan consensus is emerging on protecting America’s AI industry going forward.
This interview has been edited for length and clarity. For more of the conversation, listen to the episode and subscribe to the Riskgaming podcast.
Danny Crichton:
Mike, you are the senior policy advisor focused on AI and digital technology at an institute called Third Way, which is trying to connect a bunch of different ideas around economics, AI, and digital technologies to policy issues. Maybe talk a little bit about the work you're doing.
Mike Sexton:
Third Way was founded during President George W. Bush's first term by a bunch of alumni from the Bill Clinton administration who were interested in establishing a Democratic think tank focused on issues that specifically matter to moderate voters — swing voters.
I came in three years ago. First I worked on cybersecurity and a little bit on privacy, and then with the release of ChatGPT, my role expanded to cover artificial intelligence. It's been a really interesting evolution throughout this time. The Democrats have had a long-standing flirtation with a very anti-business mindset. But we are trying, especially with regard to AI, to reframe that and show that the private sector is where innovation comes from.
Laurence Pevsner:
Left, right, center — everyone wants to figure out what we are going to do about AI, whether today’s AI is going to become AGI, and if so, what's going to happen. That is the debate, and you're right at the center of it. You’ve spoken about the need for some kind of framework to respond to e/acc, a Twitter acronym you see all the time. For our listeners, I wonder if you could start by saying what that movement is, what you think its positives and negatives are, and why you're proposing an alternative framework.
Mike Sexton:
e/acc, or Effective Accelerationism, is an outgrowth of the Effective Altruist movement, the philosophical philanthropic movement closely associated with Sam Bankman-Fried. That movement had a flirtation with AI-doomer philosophy. And if you believe that a likely outcome is an AI apocalypse, it does make sense to be donating a lot of money to prevent the AI apocalypse, potentially taking some radical measures like a six-month global pause on AI development.
Effective Altruism inspired its own evil twin, as I describe it, which is Effective Accelerationism.
But that movement inspired its own evil twin, as I describe it, which is Effective Accelerationism. Just as Effective Altruists take a utilitarian line in trying to benefit humanity, the Effective Accelerationists say we also want to benefit humanity, and AI is the way we're going to do it. And so AI must be invested in, it must be sped up. People like Marc Andreessen have had some controversial takes, for example, that if you slow down AI, you are literally murdering people in the future who could be getting life-saving treatments via the AI that you are hindering.
That kind of language gets a little controversial, but there's a philosophical nugget at the center you can't just wave away. And when Democrats look at the whole spectrum — everyone from Marc Andreessen to the polar opposite, Eliezer Yudkowsky, who believes there is a more than 99% chance that AI is literally going to kill everyone — they shouldn’t be sticking to either of those poles. It's a little cliché to think we need a third way between two complete opposites, but we do.
So I lay out a three-fold agenda for Democrats. The first agenda item is advance. We should take the idea of a six-month pause off the table. Unless China happens to go take a smoke break from AI development for a few months, we should keep our foot on the gas.
The second is protect. There are areas, especially deepfake pornography and child sexual abuse material, where there are urgent problems with AI. We should be focused on solving those problems, not on things that will hypothetically happen five to ten years in the future.
And the last is implement. This is where I'm worried Democrats are likeliest to fall down. We need to be using AI to improve government services, education, public safety, and public sanitation. There are so many ways we can be doing this.
Danny Crichton:
One of the interesting things with AI is the rapid adoption of these tools. Look at ChatGPT: it was the fastest app in history to go from zero to a hundred million users, and now I think it's probably approaching a billion. Kids as young as nine and ten are using it. I was down at a military base in the South, and half the flag officers in the room raised their hands when asked if they used ChatGPT at work.
And so on one hand, a lot of people have access to these cutting-edge models. They may not have access to Pro or some of the more expensive versions, but the other publicly accessible models are pretty good. I think the big question becomes how much do you see inequality as a lens to start thinking about AI policy on the left?
Mike Sexton:
Matt Yglesias was just writing about this in his newsletter. There are Democrats whose analysis of inequality focuses on who has power and how you take power away from them. And there are other Democrats who are more focused on outcomes — what benefits are people getting from this company, this service, this government institution.
At Third Way, we're very much the latter kind of people. We are more skeptical, for example, that the big antitrust battles pushed by the left would actually result in improved competition or quality of service for Americans.
And so the highly risk-averse position, which was logical two years ago, isn’t anymore.
On AI, it says something about our economy that a company founded in 2015 is now becoming possibly the biggest. But it also underscores the importance of open-source artificial intelligence. And this is an issue that has been tricky to parse. When I was writing in 2022 and 2023, these large language models were pretty new. The question of catastrophic risk was a lot more front of mind. But as the open-source models have come out and advanced, the apocalypse has not happened.
And so the highly risk-averse position, which was logical two years ago, isn’t anymore. And the release of DeepSeek in China as an open-source model has really demonstrated how you can win significant market share just by taking open-source models, copying them, and building something new.
Danny Crichton:
If you go to debates in DC, open-source has been this weird kryptonite for some, and a superpower for others. The challenge has been national security, which we spend a lot of time on. On that front, open-source is anathema to almost all security objectives the Pentagon and the National Security Council care about. And so it's been a very delicate dance between different constituencies. How fast do you want to accelerate economic growth versus how much do you want to try to protect national security? And I don't know where all that goes.
Mike Sexton:
A couple of months ago, I published a memo with the title, “Open-Source AI Is a National Security Imperative.” I was not the first expert to say this, but my memo happened to be right after DeepSeek, which was not planned. It did underscore my point, though, which was that if you looked at DeepSeek and you were worried about China overtaking the United States, there is a logical corollary, which is that the United States should be building the best open-source AI in the world.
There are some people — there's a group, the Open Source AI Foundation or OSAFE — who want to force the U.S. government to only use open-source AI in contracts. That's not necessarily what we are saying needs to happen. But if you are using an AI model to build the next pair of smart glasses or to help with taxes or immigration forms or something, then no, you don’t want to use open-source models from China that do not know about the historical events of Tiananmen Square and cannot tell you the forms of oppression Uyghurs face.
Laurence Pevsner:
In the Riskgaming scenario we play on this, one of the key lessons players learn is that the reason companies will invest in open-source is less to better humanity than to undercut their competitors. This has been Meta's play. The whole reason Llama exists is to undercut direct competitors with closed-source models by offering a free, open-source one.
To take it back to your idea that there's less doomerism now when it comes to AI, one thought I have is that these models are making a lot of money and so people are incentivized to want the AI to keep advancing. And at the same time, we haven't actually reached the threshold where we would be worried about the threats. But maybe this is a good time to just ask you about the threat of AGI as you see it, and whether doomerism has any merit.
Mike Sexton:
My definition of AGI is the dictionary one: an AI that is able to do anything a human can, which unfortunately comes with all sorts of ambiguities. Is it better than the best human? Is it only as good as the best human?
The problem is that once you’ve built a chess engine that is as good as the best chess player, you've actually built a superhuman chess engine. There isn't some point where a chess-playing AI sticks around at artificial general intelligence and does not cross over into artificial superintelligence. The crossover is immediate. NotebookLM is not just as good as any human at taking notes from a document and then turning it into useful ways to consume that information. It surpassed that bar immediately.
I think there is a little bit of a bias among the experts to try to tamp down excitement… The passing of the Singularity is on page C3 in the newspaper.
So with models like OpenAI’s o3, there are still some areas in which they are not perfect. But I've also been looking at ChatGPT since it came out and just thinking, well, there is no one human on Earth who knows all the information ChatGPT 3.5 has. If you had asked me ten years ago, I would've looked at ChatGPT 3.5 and said, "If that's not AGI, it's somewhere in the ballpark at least." I would've already said it looks like a significant milestone on the way to the Singularity.
I think there is a little bit of a bias among the experts to try to tamp down excitement. I'm publishing a memo about the Singularity, literally saying that we may have already passed it. And this memo is probably not going to be on our website's front page. We're mostly focused on the same things that are top priority issues for Democrats, like budget issues and immigration. The passing of the Singularity is on page C3 in the newspaper.
Danny Crichton:
There was a moment two to three years ago where existential risk was on the front page of everything. And then there was this gap. I think it was a combination of the existing industry lobbying and the risk not materializing.
But I am sensitive to the idea of superintelligence. There is maybe a Singularity. All this becomes a little bit more heightened as you get into robotics. What's the future there, and do you think we’re going to see a resurgence of some of the concerns?
Mike Sexton:
It's a good question. I'd say there is at least a 50% chance that there is some kind of commercially available humanoid robot you can get in your own home by the next presidential election. And then the question from there really becomes, what is this going to do to the economy? I have a friend who, just like you, went to grad school wanting to be a researcher or research assistant in political science. He’s seen ChatGPT come out and more or less run roughshod over his preferred career track.
If your dream is collecting information from internet sources and synthesizing it in a way that helps your supervisor, there are tools that already do that better. And with robotics, I think we're going to see a similar level of job movement. My hope would be that it's not necessarily far fewer janitors who are employed, but that janitors will be overseeing robots that are doing their job much more effectively at a much higher volume. And then this brings up questions of, for example, in a nursing home, what jobs should be done by the robots versus the humans. Because I certainly don't think nursing homes should be fully automated.
It's going to be robots doing a lot of the hard labor and humans in a role of taking responsibility for that labor, being accountable if something goes wrong, interacting with people to explain what's going on. We've already seen this evolution in the military with the spread of lethal autonomous weapons. People talk about lethal autonomous weapons all the time as a category. It drives me insane because if you actually talk about the weapons themselves — this naval drone, this sentry gun, this suicide drone — it becomes pretty clear that there’s a human in the loop. The humans are the ones who do things in the military, they just do things with autonomous weapons now. And that's where all these other sectors of the economy are going to be going.
Laurence Pevsner:
This goes back to that famous line in the IBM handbook: "A computer can never be held accountable." And that's why you have humans in the loop, right? But there does seem to be a contradiction here: if robotics has its ChatGPT moment and we already have AGI, why can't those systems be held accountable?
Mike Sexton:
Organizationally, if you are running, let's say Third Way, and you want something done, you could replace me with an AI that is just looking at what's on the internet and synthesizing it into some sort of lowest common denominator that sounds most appealing for Third Way. And it's possible that ChatGPT 5 could be good enough to get rid of me.
But at least from where I sit right now, it really does matter to have a person who can make dispositive choices. I think the AI tends to narrow itself to this lowest common denominator. I think that is structural. I'm not sure that is going to go away in future models. But for me, as a human in this role, I am able to say, "I think we should take this position on this issue, and I am aware that this is going to tick some people off. And this is how we should deal with some people being ticked off by this position."
I don't think AI is necessarily able to lead. AI does not have the leadership capability to say, "We're going to tick some people off."
I don't think AI is necessarily able to lead. AI does not have the leadership capability to say, "We're going to tick some people off. This priority is going to the side." I think if you ask AI, if you give it all of the notes from the past two years of what lawmakers have said about AI, about the need for regulation, then that AI is more likely to say, "Okay, well, here's our structure for a holistic AI regulatory framework." It's not going to say, "Hey, actually the holistic thing, they're doing that in the EU. And it's not working out so great. Why don't we take a beat and, for now, let's be really targeted and strategic." I don't think AI is necessarily giving answers that could actually disappoint some people.
Danny Crichton:
What is top of mind for you next?
Mike Sexton:
I think the interesting, challenging, exciting thing is there's a lot more bipartisan consensus on artificial intelligence than people realize. The media naturally focuses on areas of conflict, and so it's interesting to be working on a specific policy issue where there is not partisan rancor.
I read Atlas Shrugged in high school. It did not change my opinions, but I thought there was something interesting to the thesis, which is basically that entrepreneurs, private citizens, and inventors are the people who really push society forward, and government is effectively a parasite that tries to stop people from achieving all the success they deserve and slows the world down.
I don't completely agree with that thesis, but the idea that private individuals do invent the technologies that actually revolutionize society is a pretty hard one for me to disagree with.