The AI backlash scares the hell out of me
Why do we always snuff out progress just as it gets going?
In the U.S. Senate, a 99-1 vote is a very odd duck. For all of the polarization in American politics, the reality is that most Senate business is conducted through unanimous agreement. That’s pretty much the same with the Supreme Court and other government institutions, too. Most legislative and judicial work is technical and procedural, unable to sustain the flame of culture-war passion that can ignite trends on social media. Yet this particular vote, on a topic of immense strategic importance, somehow brought together the progressives and the MAGA right, the centrists, the Midwesterners, the Southerners, and the Coastal Elites, all voting as one — well, minus one.
That vote was on the subject of artificial intelligence, a part of last week’s “vote-a-rama” to pass the One Big Beautiful Bill Act (I swear we are a serious nation). The bill initially included a provision for a ten-year federal preemption of state laws regulating artificial intelligence. That has now been thrown out entirely. The sole holdout — Republican Senator Thom Tillis of North Carolina — announced around the same time that he would not seek reelection, before ultimately voting against the reconciliation budget in its entirety. Appreciate you, Thom.
The preemption’s early inclusion was surprising but reassuring. States, most notably California, had begun regulating AI with a heavy hand, threatening a nationwide patchwork of rules that would constrain leading model developers, harm open-source projects without the funds to comply with the new regulations, and reduce America’s global competitiveness, most notably against China. Only a last-minute veto by California Governor Gavin Newsom saved us from that unmitigated disaster. Federal legislators took note and started working to constrain such activities.
The federal preemption paralleled the legal course of the development of the World Wide Web and ecommerce in the 1990s. As more and more local regulations and ambiguous court decisions started to threaten the dot-com boom, Congress stepped in with the Telecommunications Act of 1996, which enshrined a clear set of rules for the internet that are in force to this day. It might be hard to believe given today’s progressives’ fights over capitalism and growth, but that bill passed the Senate 91-5 and the House 414-16, with the text’s goal written as:
An original bill to provide for a pro-competitive, de-regulatory national policy framework designed to accelerate rapidly private sector deployment of advanced telecommunications and information technologies and services to all Americans by opening all telecommunications markets to competition, and for other purposes.
For artificial intelligence, federal preemption wasn’t about stopping nationwide AI regulations, but instead preventing any one state from causing catastrophic damage (intended or unintended) to one of America’s most important emerging strategic industries.
Despite an array of early support and its inclusion in the must-pass reconciliation bill, preemption ultimately failed in an extraordinarily lopsided vote, even after proposals circulated to weaken it by shortening its duration.
How did we go from massive, bipartisan majorities supporting pro-competition and technological growth for the emerging internet to an all-but-unanimous vote to cede artificial intelligence to the vagaries of 50 state legislatures and countless municipalities?
The answer, in a single word, is backlash. Pew polled Americans back in April on their views about artificial intelligence, and the results are among the most striking I have ever seen. There is an extraordinary gap between experts building AI and the general public. Only 17% of Americans believe that AI will have any positive impact, while a majority said that they are more concerned than excited about its prospects. Even more staggering, less than a quarter of Americans believe AI will benefit them at work. AI experts, on the other hand, are wildly more optimistic across all axes.
Latent in all of these statistics is fear, an emotion that resonates with politicians. They know it’s a powerful motivator for votes, shifting minds far faster than rational debate. The rise of the internet happened, well, without the internet and without social media networks that allow doomer-gloomers to rapidly disseminate fear-inducing visions of the future (read our podcast interview with Jonathan Haidt for more). Fast forward, and AI isn’t having nearly the same luck on the legislative battlefield.
It’s incredible to consider that OpenAI’s ChatGPT launched just a few years ago, expanded to hundreds of millions of users in the fastest adoption rate of a new technology in history, and then within a matter of months, came to be reviled by those very users. Of course, based on leaked revenue numbers, people haven’t stopped using these products. Quite the opposite; in fact, they seem more willing than ever to pay oodles of money to have access to the most cutting-edge models.
Why so much usage and yet so much hatred? I’ll dub the problem FOC UP, or Fear Of Change Under Pressure (And yes, we are FOC’d UP). I understand the fears of artists and creatives who see people rapidly illustrating and animating new works at a fraction of the time and expense their own higher-quality work requires. I understand the frustration of doctors who now deal with patient after patient carrying diagnoses spit out from LLMs and demanding that health providers confirm their illnesses. I understand software engineers who don’t want to edit code sludge, and I also understand the frustration of consumers who hate talking with AI conversation bots on the telephone. I particularly understand the morose depression of teachers and professors who now receive AI-written student assignments in a mockery of the mission of education.
None of these professions are going to disappear though. There are going to be artists, doctors, software engineers, service workers and teachers, just as there always have been. The jobs are going to change, perhaps radically. That’s a commonplace of history from the rise of books and electricity to, yes, the advent of the computer and the digital economy.
Productivity isn’t magic, and oftentimes it requires real structural shifts for individuals and societies to increase it. Change is hard, and change under pressure and at speed is even harder. It’s arduous to accept it and adapt to a new set of tools. What’s far easier, of course, is to lobby political leaders demanding that all change be stopped. That’s what’s happening in places like Washington state, where Democratic legislators have proposed giving public-sector unions a practical veto over any automation of their work (I’ll have more to say on this subject in my cover story in this quarter’s City Journal).
What scares me about the current backlash isn’t its illogical nature, but rather its vituperative tone. There’s a venom to this backlash that even social media and crypto didn’t fully experience at their apogees of growth. I have read about the Luddite movement and its fight against the power loom, and I thought I understood the economic logic of workers looking to fight their displacement by automation. I now realize that it was just profoundly irrational.
For our legislative leaders, the goal shouldn’t be piecing together a quilt of conflicting regulations that tears at the fabric of America’s strategic strengths. Instead, it should be a full-throttle embrace of the future. Help millions of people retrain through new programs that connect them with AI-native jobs. Rebuild sclerotic institutions, including those that somehow still haven’t digitalized. Offer a consistent message of pragmatic hope, that the best is yet to come. Productivity’s gains can be to everyone’s benefit.
For all of the criticism of the internet, it’s striking that it created trillions of dollars in value and is now at the center of America’s global dynamism. That’s what foresight and bipartisan support brought to the homeland. For this generation of legislative leaders, embracing the backlash’s torching of the future rather than seizing the torch and guiding everyone forward together is a remarkable turnabout that we will come to regret.
I respectfully disagree on parts of this. Not the analysis of blowback. AI is coming, it’ll do some (hopefully lots) of good and some bad. Panic is buying into the narratives of attention, not really anything useful.
Where I quibble is in the lessons learned from the Luddites. We were in Nottingham last spring, in the center of the Luddite Revolution, and got an in-depth look at the hows and whys of their decisions. Given the absolute lack of a social safety net and a society based on work as not only identity but social hierarchy, the only rational choice in that situation was to smash the looms and fight to keep “what’s yours.” And then they got hanged for it.
The lesson to be learned for AI, I propose, is not that the Luddites were bonkers, but that they felt forced to take action for reasons we can empathize with, and that we need to make damn sure to roll out AI in a manner that allows people to retrain and maintain human value. To have a narrative of “you can embrace this and grow.”
Thanks as usual for all the great content!
This sentence points exactly to the heart of the backlash:
"Help millions of people retrain through new programs that connect them with AI-native jobs."
The US (and other countries, too) has a very bad history of actually doing this; it's part of the reason people are still upset about NAFTA, for example.
AI is being billed by its proponents as society-upheaving at best, and existential at worst, so if you think AI is going to be a big deal *and* don't think the government is up to the task of helping manage the disruption, of course you'd be upset. If anything, I'm surprised the backlash hasn't been more severe.