The future of science in an age of spending cuts
Kenneth Stanley on why we need experimentation, weirdness, and divergence
Science feels under attack. The Trump administration has proposed budget cuts of up to one-third of all basic research funding, breaking a generations-long, bipartisan consensus that what is good for science is good for America. Even if Congress never fully enacts the cuts, the mere hint of them has already had an extraordinary effect on how higher-education and science leaders perceive America's stability. Lux recently hosted a dinner with a group of these luminaries, and the general conclusion was that science institutions will need to change radically in the years ahead to adapt.
I wanted to talk more about this subject, and then I realized that we just published a great episode on our sister podcast, The Orthogonal Bet. Lux’s scientist-in-residence, Sam Arbesman, had on Kenneth Stanley, the senior vice president of open-endedness at Lila Sciences. Kenneth is also the author of Why Greatness Cannot Be Planned: The Myth of the Objective, a widely praised book exploring the nature of creativity and discovery.
The two talked about the future of research institutions, and how new forms of organizational design might be the key to unlocking the next frontiers of knowledge in the 21st century. Their conversation delves into the tradeoffs between traditional and novel research institutions, how to carve out space for exploratory or “weird” work within large organizations, and how research itself can serve as a tool for navigating disruption.
This interview has been edited for length and clarity. For more of the conversation, listen to the episode on The Orthogonal Bet and subscribe to the Riskgaming podcast.
Sam Arbesman:
You've had a number of different roles in different kinds of organizations — academia, corporate industry labs, startups. And presumably, these different types of institutions are better or worse at different kinds of research.
Kenneth Stanley:
Yeah, I have been in a lot of different places. I've been in academia, I've been in multiple big corporations, startups. What I've learned is that there are trade-offs. Nothing is really perfect. You have to figure out which things actually matter most to you.
In academia, the really good thing is freedom. You don't really have a normal manager. You might have a department head, but they're not really telling you what you should be researching. And you don't have company objectives. So you're not trying to think about aligning with the objectives of anyone else. It's true though that you are trying to beg for money because that's what grants are about. But in theory at least, you can beg for money for the things that you actually want to do.
But academia has disadvantages. You generally have fewer resources than you have in industry, and you also have a lot of distractions, like constantly begging for grant money. And you've got other things competing for your time — distraction might be too strong a word, because these are good things — like teaching, service activities, and the administrative work of running a group.
Another positive in academia is the PhD students. The thing that's interesting about PhD students compared to, say, employees is that they tend to be more committed, because they want to get their PhD. And so there's a sunk-cost issue.
Sam Arbesman:
Related to the PhD students, are there downsides in terms of having to train them?
Kenneth Stanley:
Yeah, maybe. I mean, there is a difference when someone's not fully trained. But it's not like they just came out of high school. Some of them have master’s degrees, some of them don't. Some have industry experience, some don't. But it's true that there is a lot of training involved. Then again, that's another trade-off: in some ways, you have a blank slate — somebody who doesn't yet have a dogmatic view and isn't entrenched in a certain style of thinking.
Industry's a little different. Every company is different, but some generalizations: there are more resources. Generally speaking, there tend to be more GPUs in industry than in academia, so you can do bigger things; you can think bigger. You're not constantly asking for money, so you can spend more time on research. You can also hire people with a lot of experience. You're paying much better salaries, so in theory, you can get some of the best people in the world.
But again, there are trade-offs. More experience also means being more entrenched in current thinking and already having strong opinions. And there is the freedom issue. That varies a lot from company to company, and even within a company from time to time. But companies, I find, tend to go through phases where they can support more research freedom. That tends to be in good times, or maybe the early days. During tighter times, in more mature stages, they start fielding questions from investors like, "Well, how is this going to make us money?"
So if you only care about research, what should you do? Where should you be? At first glance, it seems like the best thing is to find a billionaire who gives you millions of dollars to do whatever you think is cool. But even that is not optimal, because how are you going to attract people to an institution that doesn't exist? Are you going to get the best people to work for you? Where's all the support infrastructure, the compute infrastructure?
So there's no really quick solution to creating a research paradise. I think you just have to look at the trade-offs and where you are in life and what aligns the best with your interests.
Sam Arbesman:
When you have a patron, to a certain degree, you may be dependent on the patron's whims and fortunes and interests. And those things can all change. It's similar to what you're saying about the oscillations in industry. I'm wondering if there are rules of thumb for finding places that might be more amenable to open research. I can see lots of different arguments about the size of an institution or company — why smaller or larger might be better.
Kenneth Stanley:
Yeah, there isn't really one blanket answer. The details matter a lot. There’s another layer, too, which is thinking about how innovation should be run as an enterprise — what is the right way to organize, what makes a successful innovation lab. But that kind of thinking tends not to be done much; at most companies, it's just somebody's gut instinct.
Sam Arbesman:
I wonder what your thoughts are on the need for new institutional forms. What other sorts of institutions might be positive for the world of research?
Kenneth Stanley:
There should be much more exploration. I know there are people trying to agitate for it, so it's not a dead idea. But in terms of actual implementation, it takes a lot of momentum to put together something significant.
And research as an enterprise is just completely rife with counterintuitive principles. A lot of the things that you feel like you should be doing, you probably shouldn't be doing and vice versa. And that makes it really amenable, I think, to institutional disruption, because there probably are some upside-down models that would be radically successful.
But even within the existing institutions, there's room for exploration. For example, the way funding agencies work is just so boring and so stereotypical. Where's the innovation in the funding incentive system? The National Science Foundation, for example, could try all kinds of creative and interesting ways of deciding who gets funding and how much funding. Where's the experimentation on this?
And of course there are reasons — politics enters into it. But whatever the reasons, there still needs to be more exploration. And there's an opportunity with the NSF, because you have a public institution that could try lots of things, since it doesn't have a bottom line the way a company does.
Now a company obviously does have a bottom line, so there's only so much room for experimentation. But there still is room. It makes sense to carve out a pocket that works differently, where you can try all kinds of crazy things. But you really have to believe what you're saying. I think the problem in industry is that there's another kind of politics: people being treated as special tends to bother other people. And so if you create something that looks like a playground, with children playing who don't have to answer to anybody, everybody else in the company is like, "What the heck is going on there?"
But people need to get better at articulating why these experiment zones are actually essential. It's dangerous not to have them, both for the disruptor and the defender.
Sam Arbesman:
How would you make the argument to people that they should be willing to truly run these kinds of experiments?
Kenneth Stanley:
My book with Joel Lehman, Why Greatness Cannot Be Planned, is an attempt to make that argument. So I've thought a lot about what the arguments should be. But why would we need a whole book to make this argument? Because it's complicated. It's arguing against very, very common-sense ideas — things like: you should set an objective before you go out and try to do something. Or: consensus is a good way of deciding what we should do.
Now, if you go to individuals, you can find all kinds of people who would argue against these things. But institutionally, almost nobody's willing to break these molds. There are good arguments for breaking out, though. For example, if you have consensus, then you're probably not at the cutting edge of knowledge.
Sam Arbesman:
One of the experiments people talk about is not necessarily funding the proposals that everyone agrees are good. You say, okay, here are the ones that are the most polarizing. They've passed some threshold of reasonableness. And then beyond that, if half the people think it's great and half think it's a terrible idea, maybe those are the ideas worth exploring.
Kenneth Stanley:
Yeah, for sure. There may be creative ways of setting up systems for doing that. For example, a system where the people who make the decisions have some limited number of votes, but if they cast a vote, it counts for a lot. Or maybe there's a skin-in-the-game argument: I'll sacrifice my own project if this one goes forward, or at least I'll give up something valuable to me to see it go forward. I think it's a really interesting possibility.
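To make the polarization idea concrete, here is a minimal sketch in Python — purely illustrative, with every proposal name, score, and threshold invented — of a rule that filters proposals through a basic reasonableness floor and then ranks the survivors by reviewer disagreement rather than by average score:

```python
import statistics

# Hypothetical review data: each proposal is scored 1-10 by five reviewers.
# All names and numbers below are invented for illustration.
proposals = {
    "proposal_a": [8, 8, 7, 8, 9],   # consensus: solid
    "proposal_b": [9, 2, 10, 1, 9],  # polarizing: reviewers split sharply
    "proposal_c": [3, 2, 4, 3, 2],   # consensus: weak
}

REASONABLENESS_FLOOR = 4.0  # minimum mean score to count as "reasonable"

def rank_by_polarization(reviews):
    """Keep proposals that clear the quality floor, then rank them by
    reviewer disagreement (standard deviation of scores), highest first."""
    eligible = {
        name: scores
        for name, scores in reviews.items()
        if statistics.mean(scores) >= REASONABLENESS_FLOOR
    }
    return sorted(eligible, key=lambda n: statistics.stdev(eligible[n]),
                  reverse=True)

print(rank_by_polarization(proposals))
# ['proposal_b', 'proposal_a'] -- the polarizing proposal outranks the
# consensus pick, and the weak proposal never clears the floor.
```

A real agency would need far more nuance, but even this toy version shows how a funding rule can reward disagreement instead of averaging it away.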
The other thing I think is very counterintuitive is objectives — having to actually tell people where you're going before you go there. Even granting agencies tend to want to know what your broader impacts are going to be. What benefits are going to come from doing this?
The problem is when you don't know what's going to happen, you can't really answer that question. So we're not allowed to propose things where we don't know what's going to happen. But we should be.
Sam Arbesman:
Even internally, within a pocket of more open-ended research, you have to balance certain things. It can't just be like, oh, you're going to play with anything. I'm thinking back to the heyday of Xerox PARC. I think they still had the overarching goal of building the office of the future.
Kenneth Stanley:
Yeah, right. The answer here is not that anything goes — but that would be the straw-man argument against what I'm saying. You could say, well, he's just crazy, because he's basically saying: just let people play with toys and do whatever they want, and then obviously nothing will ever pay off.
So you do need something, and I would call that a constraint. It's different from an objective, but there are constraints. Say this is an AI research lab: we're not playing with finger paint, even though you could argue that an interesting result — a nice painting — could come out of it.
So constraints are totally okay, and I think necessary. All open-ended systems have constraints of some sort. The only open-ended system with no constraints is random search, which is not a good algorithm for doing anything.
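To make the contrast concrete, here is a toy sketch in the spirit of divergent-but-constrained search — essentially a miniature novelty search, with every domain and parameter invented for illustration. Pure random search samples anywhere; the constrained version keeps only candidates inside the allowed domain, and among those it favors whatever differs most from everything already found:

```python
import math
import random

random.seed(0)

# Toy domain: a candidate "behavior" is a point in the plane. The constraint
# (stay inside the unit disk) stands in for "this is an AI lab, not finger
# painting" -- it bounds the space without dictating a destination.
def satisfies_constraint(point):
    return math.hypot(point[0], point[1]) <= 1.0

def novelty(point, archive):
    """Novelty = distance to the nearest behavior already discovered."""
    if not archive:
        return float("inf")
    return min(math.dist(point, prior) for prior in archive)

def constrained_divergent_search(steps=100, batch=20):
    archive = []
    for _ in range(steps):
        candidates = [(random.uniform(-1, 1), random.uniform(-1, 1))
                      for _ in range(batch)]
        valid = [c for c in candidates if satisfies_constraint(c)]
        if valid:
            # No objective: keep whichever valid candidate is most unlike
            # everything in the archive, so the archive spreads out.
            archive.append(max(valid, key=lambda c: novelty(c, archive)))
    return archive

archive = constrained_divergent_search()
print(f"collected {len(archive)} distinct behaviors inside the constraint")
```

There is no target point anywhere in that loop: the constraint bounds the search, and divergence does the rest.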
Sam Arbesman:
You mentioned AI research. And I know you have some thoughts on the frontier AI organizations and the ways in which some are doing things better, some worse.
Kenneth Stanley:
AI in general has a history of decades of different fads and different levels of funding. And in recent years we've been in a, let's say, AI summer. There's been lots of funding, especially in industry.
For a while, it seemed that frontier research believed to some extent in diversity. There'd be a statistical AI team. There'd be a deep learning team — or maybe it was called the neural networks team back in the day. And there'd be some other areas represented. So there was a healthy balance of different bets.
But recently, we've seen a lot of convergence. It's for an understandable reason: there's been a breakthrough, and it happened to be in the deep learning area — actually, a breakthrough in a particular pocket of deep learning, which is LLMs. I'm not saying this is irrational. There clearly was a breakthrough. There's been real value created. And it makes sense to invest.
But we’re probably seeing over-convergence. And it's possible this is leading to a local optimum. Now some people will react very indignantly and think this is somehow a commentary condemning all of AI. That wouldn't be my position. When we're talking about a local optimum, we're not talking about starting over, but possibly taking a couple of steps backward to move a few more forward.
A lot of the benchmarks we use right now could end up being deceptive at some point. So the challenge, I think, for frontier labs is to avoid that pitfall. Some small entity is going to see an opportunity that they're blind to, because they're so over-invested in the current gradient.
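"Deceptive" here is the sense used in Why Greatness Cannot Be Planned: a measure whose local improvements lead away from the best outcomes. A toy sketch — the one-dimensional "benchmark" below is invented purely for illustration — makes the trap visible: a greedy climber that always follows the local gradient stalls on a small bump while a much higher peak sits out of reach of any score-improving step.

```python
import math

# Invented 1-D "benchmark": a small local bump at x=2 and a much
# higher peak at x=8.
def score(x):
    local_bump = 1.0 * math.exp(-((x - 2.0) ** 2))
    distant_peak = 3.0 * math.exp(-((x - 8.0) ** 2))
    return local_bump + distant_peak

def greedy_hill_climb(x, step=0.1, iters=1000):
    """Always move in whichever direction improves the score right now."""
    for _ in range(iters):
        left, right = score(x - step), score(x + step)
        if max(left, right) <= score(x):
            break  # no neighboring move improves the benchmark: stuck
        x = x - step if left > right else x + step
    return x

x = greedy_hill_climb(0.0)
print(f"stalled at x={x:.1f} with score {score(x):.2f}")
# Stalls near x=2 (score ~1.0); the peak near x=8 (score ~3.0) is
# invisible to any step that must improve the benchmark immediately.
```

Reaching the higher peak requires steps that temporarily lower the score — the couple of steps backward mentioned above.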
Sam Arbesman:
Just out of curiosity — and you don't necessarily have to have a single answer for this — what are the more contrarian bets in AI that you think should be tried more? Where should we be diverging and exploring?
Kenneth Stanley:
It's always a tricky question. If you ever ask researchers publicly what the right contrarian bets are, well — if they have one, it's probably their best idea. So they're not going to tell you.
Sam Arbesman:
And that is fine. You can be as vague or evasive as necessary.
Kenneth Stanley:
So I'll be slightly evasive, just because I'm not going to give you all my best ideas. But what I would say is that I've been trying to encourage people to look more at a potentially very disruptive area, which is open-endedness. For those who don't know, open-endedness refers to systems that continually produce interesting artifacts on their own. And the longer they run, the more interesting the output gets.
A very intrinsic and important property of open-ended systems is that they're divergent. Divergence is something commonly discussed in machine-learning circles, at least historically, because most systems were built for convergence. Convergence to the global optimum would be the ideal situation. So why would you want a system to diverge? It sounds like a crazy proposition, but there are a lot of reasons. When you think about it, divergence is often associated with creativity — divergent thinkers are considered brilliant, things like that.
In fact, civilization is an open-ended system. Civilization is divergent. All of the inventions we've ever had over all of human history form a divergent tree of ideas that is not converging to a final uber-invention. And that doesn't look like a normal machine-learning algorithm. It's built on top of human intelligence. The longer civilization goes, the more stuff comes out of it, the more diversity there is within that stuff, and the more complex that stuff becomes. And it's all because of human intelligence.
And so clearly we need to capture whatever that phenomenon is if we're going to build something that does what human intelligence does — something worthy of the name AGI, or superhuman, or whatever.