Something is rotten in the state of the internet. Social networks that were once meant to be entertaining diversions have become riven with vituperative political combat that leaves all but the most blinkered acolytes running for the safety of a funny YouTube channel. Bots swarm through the discourse, as do trolls and other bad actors. How did we let such a crucial communications medium become enshittified, and can we build something else in its stead?
Joining me and Riskgaming director of programming Laurence Pevsner is Renée DiResta. She is a leader in the field of internet research and is currently an Associate Research Professor at the McCourt School of Public Policy at Georgetown. She’s written recently on the surges of users migrating from one internet platform to another, as well as on the future of social platforms in the age of personal agentic AI.
Today, the three of us talk about how social networks like X, Reddit, Bluesky and Mastodon are each taking new approaches to mitigate some of the dark patterns we have seen in the past from social media. We then talk about how the metaphor of gardening is useful in the course of improving the internet, and finally, how private messaging spaces are increasingly the default for authentic communication.
This Q&A has been edited for length and clarity.
For more of the conversation, please subscribe to the Riskgaming podcast.
Danny Crichton:
In your research, Renée, you cover disinformation, the future of federation and social networks. These topics are all on the front page of every magazine and newspaper around the world. But when you look at 2025, what are some of the top factors you're thinking about?
Renée DiResta:
I’m thinking about two big areas that we might call the future of internet research.
The first is middleware. That term refers to third-party providers enabling users to interface with, in this case, social media a little bit differently. A third-party provider might act as an agent for you. Let's say you're on a site and you want to curate your feed in a particular way; you could potentially use a middleware provider to do that. You can see this really having an impact in curation and moderation, because there's so much dissatisfaction on that front, and I think giving users more control is really important.
The other big area is that, as you have more and more agentic AI, meaning AI that operates autonomously on behalf of a person, how will we know whether the thing we are talking to on social media is a bot or a human? Maybe you don't necessarily need to know which person it is, but you might want to know if it's human or not. So there are different degrees of what we call attestation, where you are indicating that something is operating on your behalf, perhaps, or where you're indicating that you are who you say you are.
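For readers who want a concrete picture of the middleware idea Renée describes, here is a minimal sketch of a third-party curation layer that filters and re-ranks a feed according to user preferences. The data structures, label names and scoring rule are hypothetical illustrations, not any real platform's API.

```python
# A minimal, hypothetical sketch of "middleware": a third-party curation layer
# that re-ranks a feed fetched from a platform according to user preferences.
# The Post structure, labels and scoring rule are illustrative, not a real API.
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    has_link: bool = False
    labels: set[str] = field(default_factory=set)  # e.g. community-applied labels


@dataclass
class Preferences:
    hide_labels: set[str]          # labels the user never wants to see
    boost_links: bool = True       # prefer posts that link out to articles


def curate(feed: list[Post], prefs: Preferences) -> list[Post]:
    """Filter out unwanted labels, then rank the remaining posts by a simple score."""
    visible = [p for p in feed if not (p.labels & prefs.hide_labels)]

    def score(p: Post) -> int:
        return 1 if (prefs.boost_links and p.has_link) else 0

    return sorted(visible, key=score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("alice", "Hot take!", labels={"rage-bait"}),
        Post("bob", "New paper on federated moderation", has_link=True),
        Post("carol", "Photo of my tomatoes"),
    ]
    prefs = Preferences(hide_labels={"rage-bait"})
    for post in curate(feed, prefs):
        print(post.author, "-", post.text)
```

The design point is that the filtering and ranking logic lives with a provider the user chooses, rather than with the platform itself.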
Danny Crichton:
Let's go to the first one. When you think about social media history, take Facebook as an example: everyone used the same app. We all got the same algorithm. That algorithm was determined by a team of machine learning engineers and systems engineers over in Menlo Park. There wasn't a lot of choice.
With Bluesky and a lot of these new federated models, you can essentially download whatever algorithm you want. You can even design your own. It is unlikely that billions of people are all going to do that, but what's interesting is this idea of different classes of algorithms. As a researcher and writer myself, I could pick feeds that surface only links to articles, which I can curate almost like an RSS feed.
But the different algorithms may also customize the experience in a way that can be negative. I can create filter bubbles that weren't there before, for example. How do you balance between those positives and negatives?
Renée DiResta:
This is one of the interesting questions: As we move into a realm where this is possible, what do users actually do? Richard Reisman, Luke Hogg of FAI, the Foundation for American Innovation, and I wrote a paper on middleware. We tried to write a bipartisan piece on what sort of regulatory environment would help third-party providers actually do this. There has to be some way for them to be incentivized.
What you're describing on Bluesky is the work of people who have just decided, out of their own altruistic interest, to go and build something. There are also moderation feeds on the platform, and in my early conversations with the people running them, they're doing it out of love for a community. There are very granular types of labels they can assign to content, and you can then filter those labels out of your feed, which is great.
But there are a lot of reports on how overwhelming that experience actually is. We have the technology to do it. Now we have to create environments where people want to do it.
I joined Bluesky very, very early. I was number 4,000 or something like that, and I didn't know what to do with it at first. I have to say, when I first landed, I was like, "Oh, it feels very politically lefty. I don't know if I fit here." Then they started doing two things: one, you can create lists of people to follow — like starter packs that made it easier to figure out who was there and to solve the cold start problem. The other is the labeled lists, which are like feeds.
I'm a really bad gardener. I try, but I kill everything. And so I subscribed to a gardening feed and that was when I was like, "Okay, I get it. This is the value." It's not Twitter, where I'm going to expect some magical curation algorithm to figure out what I want. But with a little bit of legwork — a couple hours of picking feeds and hiding and moving things around — I can have an incredibly tailored experience.
I think what we're going to see, as the platform becomes more politically diverse, is a proliferation of different communities that find their niche. And then from there, it's just a question of whether we can incentivize people to engage constructively.
Laurence Pevsner:
In your recent piece in Noema Magazine, you described some of these major platforms, like Facebook and Twitter, now X, as walled gardens. And the federated options are more like community gardens.
But that raises the question of what the defaults are. Most people are not you. They're not experts on social media, and they aren't really going to spend that startup time. Instead, they just download the app, and whatever feed they get at the start is what they stick with.
Renée DiResta:
The onboarding is such a key part. The team at Bluesky is so new that they're focused on building the infrastructure and building out the protocol. They are not necessarily focused on the onboarding and growth experience that you would see from a bigger company.
Danny Crichton:
If we think about cultivating this community garden, the flip side of that is the weed killer aspect. In every community garden, you get the good stuff — the tomatoes, the flowers you wanted. But then you also have the weeds: the trolls. As people have migrated from Twitter or X onto Bluesky, one complaint has been that the trolls have come along as well.
How do we create systems and incentive structures that say: look, if you stay in the positive realm, if you're constructive, there's a really positive path for you? The community garden is happy. You all get to be part of the potluck.
Renée DiResta:
There's a saying for people who do trust and safety work: the problem with social media is people. We look for technological solutions to human problems. And the reality is some people are awful. So you wind up in this position where you're trying to create norms.
Bluesky has a very rapid block culture, meaning people don't spend a whole lot of time engaging with trolls. The culture is just block and move on. And the way their block function works really limits that: you don't have people continuing to have conversations in the replies.
The platform that has been the most effective, from a federated standpoint, is Reddit, which has this sort of centralized governance that sits up at the top and then a lot of day-to-day moderation by local mods, who are volunteers.
There are cat picture groups, for example, where it's against the terms of service to post a dog picture. You could sit there and scream about censorship if you want to, but nobody cares, because everyone is there because they agree that, in this community, we do these things. These are our rules.
So the question we come to is: at what level is moderation ideal?
Laurence Pevsner:
Maybe my favorite subreddit is AskHistorians. It has some of the most extreme content moderation policies. You go to the page and it's like thread deleted, thread deleted, thread deleted, comment deleted, comment deleted. But the policy is that you have to be a historian who responds with a really thorough answer to these questions. You have to have sources, and if anything doesn't meet those standards, sayonara. People love the subreddit because it makes for very high-quality discussion.
One big difference between Reddit and Bluesky, Twitter and a lot of the other platforms is that Reddit is a mostly anonymous space. Most people have usernames that don't identify who they really are. You can voluntarily identify yourself, but it's not the norm.
And so that gets us to this question of impersonation and agentic AI. Is it okay if you're sending your own bot to act on your behalf? Maybe the question is: does federation work if you don't have anonymity to go with it?
Renée DiResta:
Reddit is interesting because it has persistent pseudonymity. You're not using a throwaway alias for every post. You have the little karma number that appears alongside your account, which conveys that you're not a complete newbie. There are some subs that do not allow users to post before they have reached a particular rating.
Which, as you're alluding to with your historian example, is what people actually want. Not everybody wants the hot take from the weird blue check. But that's what we get when what is surfaced is determined by popularity or a large follower count. You can envision creating better systems to surface expertise in some way, akin to how Reddit does it. My hope is that we get to something a little bit more like that because, again, it's community driven.
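As an illustration of the gating Renée describes, here is a small sketch of a community rule requiring a minimum reputation score from a persistent pseudonymous account before a post is accepted. The names and thresholds are invented for the example and are not Reddit's actual mechanics.

```python
# A hypothetical sketch of community-level gating: persistent pseudonymous
# accounts accrue a reputation score, and a community can require a minimum
# score before a post is accepted. Names and thresholds are illustrative only.
from dataclasses import dataclass


@dataclass
class Account:
    handle: str          # persistent pseudonym, not a real identity
    reputation: int      # e.g. accumulated upvotes on past contributions


@dataclass
class Community:
    name: str
    min_reputation: int  # set by the community's own moderators


def can_post(account: Account, community: Community) -> bool:
    """Local rule: only accounts above the community's threshold may post."""
    return account.reputation >= community.min_reputation


if __name__ == "__main__":
    newbie = Account("throwaway123", reputation=2)
    regular = Account("quiet_historian", reputation=480)
    ask = Community("AskHistorians-like", min_reputation=100)
    print(can_post(newbie, ask))   # False: not enough standing yet
    print(can_post(regular, ask))  # True
```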
Danny Crichton:
In the world of Facebook and Twitter, the universe of users was pretty flat. Everyone was equal in that world: you could talk to anyone, you could reach out. But then in the last couple of years, we've seen this migration to private messaging apps, channels, WhatsApp groups, etcetera, which are not public. They are, in some cases, undiscoverable.
How do you think about this, the public social versus the private social, and are we getting the balance right?
Renée DiResta:
One of the things that became unpleasant about Twitter was that the gladiatorial arena aspect came to define the whole experience. You made one wayward comment and all of a sudden you were the main character of the day.
And then you had the cancel mobs across the political spectrum, and I think it did lead to a lot of people moving to smaller, more intimate social media experiences. For younger kids in particular, there's the public Insta, but then they have the real one that's private for friends. The public performance internet is important for shaping public opinion, and there are people who choose to be in that arena for those reasons. But people increasingly see a divide between authentic conversation and performative social media.
Danny Crichton:
Let's wrap that back into agentic AI, because one of the biggest challenges is adversaries, either domestic or overseas, going onto the internet to try to shape public opinion with automated accounts, bot networks, etcetera. Basically a lot of inauthentic content. How do you start to evaluate how agentic AI influences public social media, and is that a long-term dampener on the industry?
Renée DiResta:
It's a really interesting question.
One thing to note is that there are accounts that are avowedly AI and that actually wind up with fairly large followings. There's the rise of a legitimate chatbot ecosystem, with Character.ai and Replika and things like that. Some people really don't care, or they enjoy engaging with a machine for various reasons. I think that there will be AIs that identify themselves proactively.
So the question is really how do you avoid the manipulative kind? And it's very hard to gate things at this point, because they are more and more sophisticated, harder and harder to detect. And is a platform going to want to invest in playing this cat-and-mouse game with adversarial accounts? I mean, candidly, some of these social platforms don't even do a very good job dealing with bullshit click-farm stuff.
But I do think platforms are going to eventually try to create spaces where proactive attestation is required to get in. You're not compelled to do it to participate; you're choosing to enter a space where everybody else has also chosen to certify themselves. But if you have to do that by showing identity documents, people don't want to do that. That's a very high cost. I mean, we all see these data breaches and things like that.
So the question of what you credential with is a really important one. And again, do you need to identify that you're you, or do you just need to identify that you're human? We're still in the early phases of, do people feel a need for it? What is the legitimacy? What are the areas where you'll want to authenticate?
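To make the lowest rung of attestation concrete, here is a hypothetical sketch in which an account presents a token asserting only that a verified human operates it, signed by a third-party attestation service; the platform checks the signature without ever seeing identity documents. The service, token format and shared key are illustrative assumptions, not a real protocol.

```python
# A hypothetical sketch of minimal attestation: an account presents a token
# asserting only "a verified human operates this account", signed by a
# third-party attestation service. The platform verifies the claim without
# seeing identity documents. Token format and key handling are illustrative.
import hashlib
import hmac

# In practice this would be asymmetric (the service signs, the platform verifies
# with a public key); a shared HMAC key keeps the sketch short.
ATTESTATION_SERVICE_KEY = b"demo-key-known-only-to-service-and-platform"


def issue_token(account_handle: str) -> str:
    """Run by the attestation service after its own human check (out of scope here)."""
    claim = f"human:{account_handle}"
    sig = hmac.new(ATTESTATION_SERVICE_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return f"{claim}.{sig}"


def verify_token(token: str, account_handle: str) -> bool:
    """Run by the platform: is this a valid 'human-operated' claim for this account?"""
    claim, _, sig = token.rpartition(".")
    expected = hmac.new(ATTESTATION_SERVICE_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim == f"human:{account_handle}" and hmac.compare_digest(sig, expected)


if __name__ == "__main__":
    token = issue_token("quiet_historian")
    print(verify_token(token, "quiet_historian"))  # True
    print(verify_token(token, "impostor"))         # False: claim is bound to the handle
```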
Danny Crichton:
We first met you in person at our Riskgaming session in Washington D.C. We were hosting this game, DeepFaked and DeepSixed, which focuses on AI election security. This was right before the election in November. And one of the threats that comes out of the game is a voice hack, in which the Russians or North Koreans download clips of local pastors from YouTube, make a voice print of them, and then deliver phone calls saying, "Hey, this is Reverend so-and-so from the local church. I just want to make sure you're voting on Tuesday. It's really important for the issues we care about."
In the end, we survived 2024 from a deepfakes perspective. But do you think that changes either in '26 with the midterms, or '28 and going forward?
Renée DiResta:
My friend Katie Harbath has this phrase, "panic responsibly." You don't want to say, "Oh, the sky is falling and everything's going to be terrible." For the 2024 election, it was important to point out the risks, to recognize what could happen, and to think about how we mitigate it if it does, or how we make the public aware. That's responsible risk management.
And the technology is only going to continue to become more democratized and easier to use. So there are going to be fewer guardrails on what's possible. You can't really put that genie back in the bottle. There's no way to regulate model outputs because of their myriad uses.
So then the question is how we let people who want to say, "I am real," say it. There's no compulsion here, but maybe social ecosystems or companies, for example, will require some sort of credential in order to prevent voice or face spoofing.
But the main point is that it's important to be aware of the trends and the risks, and then to decide rationally and communally what measures we consider appropriate and reasonable for mitigation.
Danny Crichton:
We need to know who everyone is in the community garden. Renée, thank you so much for joining us.