Remembering Daniel Kahneman on risk, bias, and decision-making
Part 1: pre-mortems, self-serving bias, and changing your mind
Today, we kick off a special two-part series on risk, bias, and decision-making — the key skills of Riskgaming.
In 2022, Lux’s Josh Wolfe invited a group of celebrated decision and risk specialists for a lunch to debate the latest academic research and empirical insights from the world of psychology and decision sciences.
Joining us was Daniel Kahneman, who won the 2002 Nobel Prize in Economics for his work on decision sciences. His book Thinking, Fast and Slow has been a major bestseller and summarizes much of his work in the field.
In the months after our recording, Kahneman made an extraordinary decision under uncertainty of his own. Concerned about his future risk of dementia, he decided to travel to Switzerland at the age of 90 to pass away through assisted suicide. It was an astonishing final decision by the master of decision-making, conducted in secrecy until it was revealed last month in The Wall Street Journal in a column by Jason Zweig.
We wanted to remember Kahneman and his contributions, and so this is an edited two-part series from our lunch seminar, with each part covering one topic of the conversation.
Also joining Kahneman and Josh at the table were two others. First, Annie Duke, a World Series of Poker champion who researches cognitive psychology at the University of Pennsylvania. Her books How to Decide and Thinking in Bets have been tremendously influential bestsellers, and she is the co-founder of the Alliance for Decision Education. Second, Michael Mauboussin, the Head of Consilient Research at Counterpoint Global. He has taught finance for decades at Columbia, and his book More Than You Know is similarly a major bestseller.
In this first part, the group discussed the concept of pre-mortems, an approach of imagining that a decision has failed and thinking through why it failed. It's designed to overcome groupthink and to counter the fact that pessimists are really unpopular in group decision-making sessions. We also discuss whether it is possible, pre-mortem or not, to change minds at all.
For more of the conversation, please subscribe to the Riskgaming podcast.
Josh Wolfe:
At Lux, I've indoctrinated people with this quote: failure comes from a failure to imagine failure.
Michael, can you explain what a pre-mortem is and how it relates?
Michael Mauboussin:
So you are about to make a decision. You gather as a group, you pretend you've made the decision, and you launch yourself into the future. Let's say it's a year from now, and you pretend the decision turned out very badly.
Each person individually writes down an explanation for why this decision turned out poorly. We combine those explanations, and we think about whether we should make a different decision. This idea was popularized by Gary Klein, who did an adversarial collaboration with Danny Kahneman that produced a great paper.
So as far as I can gather, there are two key components to pre-mortems. The first, which is where Klein opened his Harvard Business Review article, is the idea of prospective hindsight: if you put yourself into the future and pretend the actual outcome has occurred, that sense of reality opens up your mind to think about more alternatives.
The second component is this idea of overcoming groupthink. When we're barreling toward a decision on something, doubts tend to get suppressed. Pre-mortems create an opportunity for dissenters to chime in and, essentially, counterbalance the argument.
The Russo experiment on prospective hindsight appears not to have been replicated. But before I turn things over to Annie: every time I talk about pre-mortems, certainly in investment organizations, everybody loves this idea. People love it.
But I'm keen to understand what the mechanisms are, especially if half the argument is not replicating.
Annie Duke:
Over at Wharton, we've been working on this problem for over a year now. This is work I've been doing with Linnea Gandhi, who's the lead researcher, and Maurice Schweitzer.
We have tried to replicate the Russo result, so let me explain what that result is. Imagine you ask somebody to think about the things that could go wrong. Now imagine instead asking them to imagine that something has gone wrong and then to give reasons for why it happened. The Russo finding was that the pre-mortem framing led to 30% more reasons.
But that’s an old study with a very small N. Your alarm bells should go off whenever you have a really small N. We've tried to replicate the study across a bazillion different scenarios: people planning how they're going to stick to whatever they're giving up for Lent, exercise goals, and so on. So we're either asking people to imagine what could go wrong, or framing it in prospective hindsight.
Josh Wolfe:
And all in anticipation of some performance towards some known goal.
Annie Duke:
Yes, but we have been unable to replicate the 30%. We just haven't been able to do it. And we've done it with really large Ns.
Michael Mauboussin:
Danny, I think your enthusiasm about pre-mortems stemmed more from the possibility of overcoming groupthink and making sure doubts rise to the surface than from the idea of prospective hindsight alone.
Daniel Kahneman:
My hunch was that pessimists are really unpopular. And they're especially unpopular when a group is converging toward a decision. So you need to legitimize dissent. Pre-mortems reward ingenuity. You're trying to find a good failure, one that will make you look clever. And I think that inversion is very powerful. The cognitive thing is very minor. And I'm not surprised it doesn't replicate.
Annie Duke:
Let me just be clear: we're only now starting to look at what happens in groups. We've found a few things that make us skeptical about groups and legitimizing dissent.
We've also been having trouble getting behavior change as a result of a pre-mortem. In other words, people can imagine what might go wrong, but we don't actually see them doing more research or changing their plans. We also see that there's just as much self-serving bias if you work prospectively as if you work in prospective hindsight.
Josh Wolfe:
And just define what self-serving bias is.
Annie Duke:
When an outcome is poor, I attribute it to things that are external to me. Something happened to me. And when I have a success, I attribute it to things that are internal to me — things I did and decisions I made.
We know this is a very strong bias. If I get in a car accident, I'll say it was the other person's fault. Or, this is one from my child: if he does poorly on a test, it's the teacher's fault. The test was too hard. When he does well, it's because he studied really hard and he was great.
So that's thinking about a result that you've already gotten. But it turns out that if you say, “imagine you're taking a test in a week, and you get the test back, and you've done quite poorly,” people will still say, “the test was too hard.” When you say, “imagine it's a week from now, you get the test back, and you've done quite well,” they will say, “I studied really hard, I'm very smart, and I pay attention in class.”
The self-serving bias doesn't go away when you do things in prospective hindsight.
Daniel Kahneman:
But my expectation is that a pre-mortem will not change your mind about the plan. If you've made the decision and you're committed to the decision, you're still going to go through with it. This will not change.
But what it can do is help you find loopholes that you can close. You can find little things that you wouldn't find otherwise. So it's worthwhile.
Josh Wolfe:
I had this conversation at our Monday meeting just the other day. And I was like, "Okay, let's imagine all the things that can go wrong. If one of these things goes wrong, I'm okay. If we took the process, we took the risk, and it happens, then fine. But I would be less okay, and consider the process a failure, if we got surprised.”
Now, the pro of that is that we're thinking about the expansive set of possibilities of what might occur and how things can unfold. The con is, if and when things don't unfold well, I've already prepared myself with the fact that we expected that bad thing to occur. And so I don't have to update my priors.
Daniel Kahneman:
It's really not about the priors. It's about closing things. It's about, “oh, we haven't covered the left side.” You can see that in a military context.
Josh Wolfe:
I always love the DARPA line — I think it's the DARPA line — their mission is to create and prevent strategic surprise. They want to surprise the other side. And they want to ensure they are not surprised.
Annie Duke:
If you're going to do a pre-mortem, you need to specifically ask about what could go wrong that's within your control and what can go wrong that's not in your control.
There are certain things you can do during a pre-mortem that help to dampen bias. When it's paired with a pre-commitment contract, for example, you can start to get changes. That pairing is really powerful. What it does really well is help reduce overconfidence, or at least not bump it up.
One really interesting thing we've done is what's called a pre-parade or a backcast, which is the opposite of a pre-mortem: I did really well on the test; what happened, and why did it happen? That's a pre-parade. And what we find is that pre-parades massively increase overconfidence, whereas a pre-mortem keeps your confidence at bay. Sometimes, depending on the task, it will actually bring your confidence below the control level.
Michael Mauboussin:
So one question is why I need to time travel at all. If it's not a cognitive thing, why don't I just say: everybody's in the room, and we're going to give voice to the potential dissenters by having everyone write their concerns down privately.
Daniel Kahneman:
This is really interesting, because it links to the topic of adversarial collaboration. What happens to a researcher when they find that their hypothesis was falsified by the data? It's instantaneous. You see why it happened now, but you couldn't see it earlier. And I think I understand why. When I'm in my current state, I have my theory. You can tell me a particular result will violate my theory, but I simply can't see how you could get there. Now that it's happened, though, all I have to do is tweak my theory so that it's compatible with the results. That turns out to be quite easy, but you're not going to do it unless you're forced.
Josh Wolfe:
So in that case, science as an institution has a forcing function of peer review. Not wanting to be wrong induces you to want to be more correct or less wrong. You're more likely to change your mind because there's punishment if you don't.
Daniel Kahneman:
Well, I think people don't change their mind on anything that matters.
Josh Wolfe:
In anything, people don't change their mind?
Daniel Kahneman:
On anything that matters.
Josh Wolfe:
Because why? Their identity is so tied into what they believe?
Daniel Kahneman:
Because everything is tied. You believe something and you're committed to it, and the people you love believe the same thing.
Annie Duke:
One thing people don't understand is that your actions inform your beliefs just as much as your beliefs inform your actions.
For example, I come up to you, and I say, “I've got a sign for this political candidate, will you put it on your lawn?” You put it on your lawn. I measure how much you support that candidate. And then I come to you a week later and I measure your support then. Guess what? You think they're more awesome. Why? Because you put a sign on your lawn.
Now imagine there's some scandal. It's something where, had you known about it before you had formed any beliefs, you probably wouldn't support the candidate. But now you've got a sign on your lawn. And maybe you've done some canvassing. Instead of walking back your support, you say, “Oh, it's just the establishment is trying to get them.”
Josh Wolfe:
What is that tactic that's being used? Because I'm handing you something for free? You're putting it on your lawn? It's shaping your views?
Daniel Kahneman:
It's internal consistency. This is dissonance reduction. So if I'm doing this, I must love this candidate. It works backward. I mean, in general, this is the way it works.
Josh Wolfe:
Now, evolutionarily, consistency was a virtue because it meant you were somebody who could be relied upon, somebody predictable. So what is the science of encouraging dissent? What are the incentives for changing your mind?
Daniel Kahneman:
For things that really matter to people, forget it. My stepson, who is an expert on Russia, asked what could be done to change the Russians’ minds about what's happening in Ukraine. I said it can't be done.
So to change minds, in the first place, it's got to be a thing that people don't care too much about. And that goes back to the pre-mortem. I think that is useful. People are not attacking the whole project — they're not saying it can't be done. They're saying something clever, that other people haven't thought about. That's the incentive.
Annie Duke:
I agree vehemently with Danny that you can’t change the Russians’ minds about this invasion.
But if you're thinking about yourself or within an organization, if you can get a little bit more flexibility in terms of mind changing, you're better off. And there are some ways you can do it. One is to set the circumstances under which you would change your mind in advance.
One problem is that when we’re facing down the decision to change, we're all terrible at it. But if you can ask, “what if it were wrong?”, that turns it into something in the future, something sort of separate from the person that you are right now.
Josh Wolfe:
So you're setting a conditional? Like, if it turns out like this in the future, then okay, but if I get disconfirming evidence, then I will change my mind.
Daniel Kahneman:
Well, what really does it is the specificity. It is not being wrong, it is being wrong in a particular way. That's what happens in research. If you tell me in advance why my whole theory is false, then no way. But a particular failure? Sure, that's easy.
Josh Wolfe:
Interesting. So if something is narrow, you might accept that specific thing.
Daniel Kahneman:
Right, it's not an error. My theory is, basically, all right. I may need to tweak this or that. While keeping my theory essentially constant, I can either find a flaw in the experiment or a tweak in the theory. Or very often, you will say, “you have misunderstood my theory, you're applying my theory in a way that it shouldn't be applied.”
The point of using tools like pre-mortems is ultimately to change your mind: to choose a different direction for your decision than you might have otherwise chosen. Yet pre-mortems on their own may not be sufficient without other tools like pre-commitment contracts, as Annie's research has found; systems like peer review that create adversarial collaboration, as Josh pointed out; or “specificity,” in Danny's parlance.
It’s extremely hard to change our minds, but that doesn’t mean we can’t find strategies to open up alternative paths. Pre-mortems are useful tools because no failure should go unimagined. Imagining failure isn't an excuse not to make a decision. It's a method to reduce strategic surprise.