Psychology Today
John Wilcox Ph.D.
The Seven "Irrational" Habits of Highly Rational People
How our intuitions about rationality can lead us astray.
Updated September 30, 2024 | Reviewed by Lybi Ma
Key points
If someone made perfectly accurate judgments and sound decisions, would we recognize it?
The science and philosophy of judgment and decision-making suggests that the answer is often “no.”
Suggestions about how we can distinguish the genuinely rational from the irrational.
Part One of Two
The importance of recognizing what's rational and what's not
If someone were as rational as could be—with sound decisions and many accurate and trustworthy judgments about the world—would we recognize it? There are reasons to think the answer is “No.” In this post, I aim to challenge prevailing intuitions about rationality and argue that the philosophy and science of judgment and decision-making reveal several ways in which what appears to be rational diverges from what actually is rational.
This post takes its title from Stephen Covey’s well-known book “The 7 Habits of Highly Effective People.” I argue that, similarly, there are seven habits of highly rational people—but these habits can appear so counter-intuitive that others label them “irrational.” Of course, the rationality of these habits might be obvious to specialists in judgment and decision-making, but I find it is often not so obvious to the broader audience for whom this post is written.
In any case, not only are these habits potentially interesting in their own right, but recognizing them may also open our minds, help us better understand the nature of rationality, and help us better identify the judgments and decisions we should trust—or not trust—in our own lives.
The seven "irrational" habits of highly rational people
1. Highly rational people are confident in things despite “no good evidence” for them
The first habit of highly rational people is that they are sometimes confident in things when others think there is no good evidence for them. One case where this shows up extremely clearly is the Monty Hall problem, as I discuss in detail in a post here.
In the problem, a prize is randomly placed behind one of three doors. You select a door, and then the game-show host—Monty Hall—opens one of the other doors that does not conceal the prize. If the door you select conceals the prize, then Monty Hall opens either of the other two doors with equal likelihood. But if the door you select does not conceal the prize, then Monty Hall must open the one remaining door that neither conceals the prize nor was selected by you.
In these circumstances, as I explain in the post, if you select door A and Monty Hall opens door C, then there’s a two-thirds probability that door B conceals the prize. In this case, then, door C being opened constitutes “evidence” that door B conceals the prize. Furthermore, let us consider an adaptation called the “new Monty Hall problem." In this case, door C would be opened with a 10 percent likelihood if door A conceals the prize, in which case there’s provably a 91 percent probability that door B conceals the prize after door C is opened. In this version, the truly rational response is to be very confident that door B conceals the prize.
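For readers who want to see where these numbers come from, here is a minimal sketch of the Bayes-rule calculation in Python (the code and its function name are purely illustrative, not part of the original studies):

```python
# Bayes-rule sketch of the Monty Hall probabilities described above.
# Assumes: the prize is placed uniformly at random behind doors A, B, C; you pick A;
# p_open_C_given_A is the chance Monty opens door C when the prize is behind A
# (1/2 in the classic problem, 0.10 in the "new" version discussed above).

def prob_B_given_C_opened(p_open_C_given_A):
    prior = 1 / 3  # each door is equally likely to hide the prize
    # Likelihood of Monty opening door C under each hypothesis:
    #   prize behind A -> opens C with probability p_open_C_given_A
    #   prize behind B -> must open C (probability 1)
    #   prize behind C -> cannot open C (probability 0)
    p_open_C = prior * p_open_C_given_A + prior * 1 + prior * 0
    return (prior * 1) / p_open_C  # posterior probability the prize is behind B

print(prob_B_given_C_opened(0.5))   # classic problem: 0.666... (two-thirds)
print(prob_B_given_C_opened(0.10))  # "new" problem: ~0.909 (about 91 percent)
```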
But despite this, in my experiments, everyone without training who encountered these problems got the wrong answer, and the vast majority thought door B had only a 50 percent probability of concealing the prize in both versions of the problem. This effectively means they thought there was no good evidence for door B concealing the prize when there in fact was!
What’s more, the studies found that untrained participants not only failed to recognize this good evidence but were also more confident in their incorrect answers. Compared to trained participants, who were more likely to answer these problems correctly, the untrained participants were on average more confident in the correctness of their (actually incorrect) answers and thought they had a better understanding of why those answers were correct.
What this shows is that truly rational people may recognize objectively good evidence for hypotheses where others think there is none—leading them to be confident in things in ways that others think are irrational. In the post, I also discuss some more realistic scenarios where this in principle could occur—including some from medicine, law, and daily life.
2. They are confident in outright false things
But even when someone is rationally confident in something, that thing will still turn out to be false some proportion of the time.
In fact, according to one norm of trustworthy judgments, a perfectly accurate person would be 90 percent confident in false things approximately 10 percent of the time. In other words, a perfectly accurate person would be “well calibrated” in the rough sense that, in normal circumstances, anything they assign a 90 percent probability to will be true approximately 90 percent of the time, anything they assign an 80 percent probability to will be true approximately 80 percent of the time, and so on.
We can see this when we look at well-calibrated forecasters who might assign high probabilities to a bunch of unique events, and while most of those will happen, some of them will not—as I discuss in detail here. Yet if we focus on a small sample of cases, they might look less rational than they are since they will be confident in outright false things.
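As a purely illustrative simulation (the numbers are invented, not drawn from any study), consider a perfectly calibrated forecaster who assigns a 90 percent probability to 1,000 independent events: roughly 10 percent of those confident judgments will still turn out to be false.

```python
import random

# Illustrative simulation: a perfectly calibrated forecaster assigns 90 percent
# probability to 1,000 independent events, each of which truly occurs with
# 90 percent probability.
random.seed(0)
forecasts = [(0.9, random.random() < 0.9) for _ in range(1000)]
misses = sum(1 for stated_probability, happened in forecasts if not happened)
print(f"Confident-but-false judgments: {misses / len(forecasts):.1%}")
# Roughly 10 percent of these 90-percent judgments come out false,
# even though the forecaster is as accurate as calibration allows.
```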
3. They countenance the “impossible” and are “paranoid”
However, studies suggest many people—including experts with doctorates in their domain, doctors, jurors, and the general public—are not so well calibrated. One example of this is miscalibrated certainty—that is, when people are certain (or virtually certain) of things that turn out to be false.
For instance, Philip Tetlock tracked the accuracy of a group of political experts’ long-term predictions, and he found that out of all the things they were 100 percent certain would not occur, those things actually did occur 19 percent of the time. Other studies likewise suggest people can be highly confident in things that are false a significant portion of the time.
But a perfectly rational person wouldn’t be so miscalibrated, and so they would assign higher probabilities to things that others are certain are “impossible.” For example, a perfectly calibrated person would perhaps assign 19 percent probabilities to the events that Tetlock’s experts were inaccurately certain would not happen—or they might even assign some of them much higher probabilities, like 99 percent, if they had sufficiently good evidence for them. In such a case, the perfectly rational person would look quite “irrational” from the perspective of Tetlock’s experts.
But insofar as miscalibrated certainty is widespread among experts or the general public, so too would be the perception that truly rational people are “irrational” by virtue of countenancing what others irrationally consider to be “improbable” at best or “impossible” at worst.
Furthermore, when others have miscalibrated certainty about outcomes that are “bad,” a rational person appears to believe in the possibility of “impossible” outcomes, and since those supposedly “impossible” outcomes are bad, the rational person will also look irrationally “paranoid.”
Key points
If someone made perfectly accurate judgments and sound decisions, would we recognize it?
The science and philosophy of judgment and decision-making suggests that the answer is often “no.”
Suggestions on how we can distinguish the genuinely rational from the irrational.
Part Two of Two
In a previous post, we considered three of seven "irrational" habits of highly rational people. Here we consider the remaining four, as well as some suggestions for distinguishing the genuinely rational from the irrational.
4. They avoid risks that don’t happen
As discussed, a rational person can look “irrational” or “paranoid” by virtue of thinking the “impossible” is possible or even (probably) true. Not only that, but they will also act to reduce risks that never actually happen.
This is because our leading theory of rational decision-making claims that we should make decisions not just based on how probable or improbable outcomes are, but rather based on so-called expected utility, where each possible outcome is weighed by the product of its probability and how good or bad it is (its utility).
This sometimes means that we should avoid decisions if there is an improbable chance of something really bad happening. For example, it can be rational to avoid playing Russian roulette even if the gun is unlikely to fire a bullet, simply because the off-chance of a bullet firing is so bad that the decision to play has a strongly negative expected utility.
Likewise, for many other decisions in life, it may be rational to avoid decisions if they have improbable outcomes that are sufficiently bad. This consequently means a rational person could often act to avoid many risks that never actually happen.
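To make this concrete, here is a rough sketch of the expected-utility comparison; the utility numbers are arbitrary placeholders chosen only to illustrate the structure of the reasoning, not figures from any study.

```python
# Expected-utility sketch: avoiding an improbable but catastrophic outcome.
# The utilities below are made-up illustrative numbers.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one option."""
    return sum(p * u for p, u in outcomes)

# Option 1: play a round of "Russian roulette" for a $100 prize.
#   1-in-6 chance of a catastrophic outcome (utility -1,000,000),
#   5-in-6 chance of winning the prize (utility +100).
play = [(1 / 6, -1_000_000), (5 / 6, 100)]

# Option 2: decline and keep the status quo (utility 0 with certainty).
decline = [(1.0, 0)]

print(expected_utility(play))     # about -166,583: strongly negative
print(expected_utility(decline))  # 0: declining is the rational choice,
                                  # even though the bad outcome is improbable
```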
But as is well known, people often evaluate the goodness of a decision based on its outcome, and if the bad thing does not happen, the average person might evaluate that decision as “irrational.”
This kind of thing could happen quite often, too. For example, if a rational decision-maker avoids every option whose highly negative outcome has a probability of 10 percent, then 90 percent of the time they will be avoiding negative outcomes that would never have happened, potentially making them look quite irrational.
The situation is even worse if the evaluator has the "miscalibrated certainty" we considered earlier, and the outcome not only does not happen, but rather it looks like it was always “impossible” from the perspective of the evaluator.
5. They pursue opportunities that fail
But the same moral holds not only for decisions that avoid risks but also for decisions that pursue rewards. For example, a rational decision-maker might accept an amazing job offer with merely a 10 percent chance of continued employment if the prospect of continued employment is sufficiently good. But of course, the decision to accept that job has a 90 percent chance of resulting in unemployment, potentially making the decision again seem like a “failure” when the probable outcome occurs.
More generally, a rational decision-maker would pursue risky options with 90 percent chances of failure if the options are sufficiently good all things considered: it is like buying a lottery ticket with a 10 percent chance of winning but with a sufficiently high reward.
But again, the rational decision-maker could look highly “irrational” in the 90 percent of cases where those decisions lead to less-than-ideal outcomes.
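The same expected-utility arithmetic applies to pursuing rewards. Here is another rough sketch, again with arbitrary placeholder numbers:

```python
# Expected-utility sketch: pursuing a reward that will probably not materialize.
# The numbers are illustrative assumptions only.

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

# A "lottery ticket" style option: 10 percent chance of a large payoff,
# 90 percent chance of losing the ticket price.
risky_option = [(0.10, 1000), (0.90, -10)]
safe_option = [(1.0, 0)]

print(expected_utility(risky_option))  # 91.0: positive, so worth pursuing
print(expected_utility(safe_option))   # 0.0
# The rational choice is the risky option, even though it "fails" 90 percent
# of the time, which is exactly when it can look like an irrational decision.
```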
In any case, what both this habit and the preceding one have in common is that rational decision-making requires making decisions that lead to the best outcomes over many decisions in the long run, but humans often evaluate decision-making strategies based on mere one-off cases.
6. They are often irrational
Even so, arguably any realistic person who is as rational as could be would still be genuinely irrational to some degree. This is because our dominant theory of judgment and decision-making—dual-process theory—entails that while we often make reflective judgments and decisions, there are countless situations where we do not and simply cannot.
Instead, the literature commonly affirms that everyone employs a set of so-called heuristics for judgment and decision-making which—while often adequate—also often lead to sub-optimal outcomes. Consequently, even if someone was as rational as could be, they would still make irrational judgments and decisions in countless other contexts where they cannot be expected to rely on their more reflective faculties.
If we then focus solely on these unreflective contexts, we would get an inaccurate impression of how rational they are overall.
7. They do things that are often “crazy” or “unconventional”
All of the preceding thoughts then entail that rational people may do things that seem “crazy” or “unconventional” by common standards: they might believe seemingly impossible things, act to reduce risks that never happen, pursue opportunities that never materialize, and so on. This might express itself in weird habits, beliefs, or in many other ways.
But this shouldn’t be too surprising. After all, the history of humanity is a history of common practices that later generations appraise as unjustified or irrational. Large portions of humanity once believed that the earth was flat, that the earth was at the center of the universe, that women were incapable of or unsuited to voting, and so on.
Have we then finally reached the apex of understanding in humanity’s evolution, a point where everything we now do and say will appear perfectly rational by future standards? If history is anything to go by, then surely the answer is “No.” If that is the case, then perhaps the truly "rational" will be ahead of the rest—believing or doing things that seem crazy or irrational by our currently common standards.
How to distinguish the rational from the irrational
I hope I have conveyed just how frequently our untrained intuitions about what is rational may diverge from what is truly rational: what’s rational might appear “irrational” and vice versa. In a world where these intuitions might lead us astray, then, how can we tell rational from irrational, accurate from inaccurate, or wisdom from lack of wisdom?
Some common rules of thumb might not work too well. For example, sometimes the evidence fails to find that years of experience, age, or educational degrees improve accuracy, at least in domains like geopolitical forecasting.
Some suggestions supported by the evidence:
Suggestion #1: Measure calibration
First, track the calibration of the judgments you care about—whether they are your own or others’. I provide some tools and ideas for how to do this here. This can help us put things in perspective, avoid focusing on single cases, and detect pervasive miscalibration that can affect our decision-making. And as other studies suggest, past accuracy is the greatest predictor of future accuracy.
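As a simple illustration of what tracking calibration can look like (the tools linked above may work differently), one can group judgments by their stated probability and compare that to how often they turned out to be true:

```python
from collections import defaultdict

# Minimal sketch of calibration tracking. Each record is a
# (stated probability, whether the event actually happened) pair;
# the sample data here are invented for illustration.
track_record = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, True),
]

buckets = defaultdict(list)
for stated_probability, happened in track_record:
    buckets[stated_probability].append(happened)

for stated_probability in sorted(buckets):
    outcomes = buckets[stated_probability]
    observed = sum(outcomes) / len(outcomes)
    print(f"Stated {stated_probability:.0%} -> true {observed:.0%} "
          f"of the time ({len(outcomes)} judgments)")
# Well-calibrated judgments are those where the stated and observed
# percentages roughly match across many judgments.
```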
Suggestion #2: Learn norms of reasoning
Additionally, I would suggest learning and practicing various norms of reasoning. These include the evidence-based suggestions for forming more accurate judgments in my book Human Judgment, such as practicing active open-minded thinking and thinking in terms of statistics. It also includes other norms, such as so-called “Bayesian reasoning,” which can produce more accurate judgments in the Monty Hall problem and potentially other contexts, as I discuss here and here.
Suggestion #3: Think in terms of expected utility
Finally, when evaluating the rationality of someone’s decisions, think in terms of expected utility theory. Expected utility theory is complicated, but here is a potentially helpful introduction to it (from my former Ph.D. advisor—a really awesome person!). In short, though, expected utility theory requires us to ask what probabilities people attach to outcomes, how much they value those outcomes, and, on my preferred version of it, whether their probabilities are calibrated and their values are in some sense objectively “correct.” Then we can ask whether they are making decisions that lead to the best possible outcomes in the long run.
In these ways, I think we can better tell what’s rational from what’s not in a world where our intuitions can otherwise lead us astray.
About the Author
John Wilcox Ph.D.
John Wilcox, Ph.D., is a cognitive scientist at Columbia University, founder of Alethic Innovations, and author of Human Judgment: How Accurate Is It, and How Can It Get Better?
Online: www.johnwilcox.org