Daniel Kahneman

Thinking, Fast and Slow

This book, Thinking, Fast and Slow by Daniel Kahneman, has been on my reading list for quite some time. It has been referenced countless times in earlier readings of mine around decision making, human behavior, thought process, intuition and reasoning. It is also on nearly every reading list of must-reads within its category. Despite the very high expectations I had of this book, it delivered far beyond them, both in depth and breadth, in addressing the topic of human thinking.

This is a book about cognitive biases:

When you are asked what you are thinking about, you can normally answer. You believe you know what goes on in your mind, which often consists of one conscious thought leading in an orderly way to another. But that is not the only way the mind works, nor indeed is that the typical way. Most impressions and thoughts arise in your conscious experience without your knowing how they got there. You cannot trace how you came to the belief that there is a lamp on the desk in front of you, or how you detected a hint of irritation in your spouse’s voice on the telephone, or how you managed to avoid a threat on the road before you became consciously aware of it. The mental work that produces impressions, intuitions, and many decisions goes on in silence in our mind…Much of the discussion in this book is about biases of intuition. However, the focus on error does not denigrate human intelligence, any more than the attention to diseases in medical texts denies good health. Most of us are healthy most of the time, and most of our judgments and actions are appropriate most of the time. As we navigate our lives, we normally allow ourselves to be guided by impressions and feelings, and the confidence we have in our intuitive beliefs and preferences is usually justified. But not always. We are often confident even when we are wrong, and an objective observer is more likely to detect our errors than we are.

Daniel goes on to explain how the research he and Amos were conducting, both in terms of hypotheses and associated experiments, was different:

Historians of science have often noted that at any given time scholars in a particular field tend to share basic assumptions about their subject. Social scientists are no exception; they rely on a view of human nature that provides the background of most discussions of specific behaviors but is rarely questioned. Social scientists in the 1970s broadly accepted two ideas about human nature. First, people are generally rational, and their thinking is normally sound. Second, emotions such as fear, affection, and hatred explain most of the occasions on which people depart from rationality. Our article challenged both assumptions without discussing them directly. We documented systematic errors in the thinking of normal people, and we traced these errors to the design of the machinery of cognition rather than to the corruption of thought by emotion…The use of demonstrations provided scholars from diverse disciplines—notably philosophers and economists—an unusual opportunity to observe possible flaws in their own thinking. Having seen themselves fail, they became more likely to question the dogmatic assumption, prevalent at the time, that the human mind is rational and logical. The choice of method was crucial: if we had reported results of only conventional experiments, the article would have been less noteworthy and less memorable. Furthermore, skeptical readers would have distanced themselves from the results by attributing judgment errors to the familiar fecklessness of undergraduates, the typical participants in psychological studies. Of course, we did not choose demonstrations over standard experiments because we wanted to influence philosophers and economists. We preferred demonstrations because they were more fun, and we were lucky in our choice of method as well as in many other ways. A recurrent theme of this book is that luck plays a large role in every story of success; it is almost always easy to identify a small change in the story that would have turned a remarkable achievement into a mediocre outcome. Our story was no exception.

The author then goes on to summarize the structure of the book and the areas covered:

The book is divided into five parts. Part 1 presents the basic elements of a two-systems approach to judgment and choice. It elaborates the distinction between the automatic operations of System 1 and the controlled operations of System 2, and shows how associative memory, the core of System 1, continually constructs a coherent interpretation of what is going on in our world at any instant. I attempt to give a sense of the complexity and richness of the automatic and often unconscious processes that underlie intuitive thinking, and of how these automatic processes explain the heuristics of judgment. A goal is to introduce a language for thinking and talking about the mind. Part 2 updates the study of judgment heuristics and explores a major puzzle: Why is it so difficult for us to think statistically? We easily think associatively, we think metaphorically, we think causally, but statistics requires thinking about many things at once, which is something that System 1 is not designed to do. The difficulties of statistical thinking contribute to the main theme of Part 3, which describes a puzzling limitation of our mind: our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. The focus of part 4 is a conversation with the discipline of economics on the nature of decision-making and on the assumption that economic agents are rational. Part 5 describes recent research that has introduced a distinction between two selves, the experiencing self and the remembering self, which do not have the same interests.

Throughout the book, two systems of thinking that humans possess are referenced, so Daniel takes a moment to define each:

I adopt terms originally proposed by the psychologists Keith Stanovich and Richard West, and will refer to two systems in the mind, System 1 and System 2.

  • System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
  • System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

Below are selected lessons from the book that I found particularly perceptive. First, noting that System 2 has a particular attribute when it comes to managing its capacity:

System 2 and the electrical circuits in your home both have limited capacity, but they respond differently to threatened overload. A breaker trips when the demand for current is excessive, causing all devices on that circuit to lose power at once. In contrast, the response to mental overload is selective and precise: System 2 protects the most important activity, so it receives the attention it needs; “spare capacity” is allocated second by second to other tasks. In our version of the gorilla experiment, we instructed the participants to assign priority to the digit task. We know that they followed that instruction, because the timing of the visual target had no effect on the main task. If the critical letter was presented at a time of high demand, the subjects simply did not see it. When the transformation task was less demanding, detection performance was better.

On one of the focal interplays between System 2 and System 1:

One of the main functions of System 2 is to monitor and control thoughts and actions “suggested” by System 1, allowing some to be expressed directly in behavior and suppressing or modifying others. Anything that makes it easier for the associative machine to run smoothly will also bias beliefs. A reliable way to make people believe in falsehoods is frequent repetition, because familiarity is not easily distinguished from truth. Authoritarian institutions and marketers have always known this fact. But it was psychologists who discovered that you do not have to repeat the entire statement of a fact or idea to make it appear true.

On cognitive ease:

These findings add to the growing evidence that good mood, intuition, creativity, gullibility, and increased reliance on System 1 form a cluster. At the other pole, sadness, vigilance, suspicion, an analytic approach, and increased effort also go together. A happy mood loosens the control of System 2 over performance: when in a good mood, people become more intuitive and more creative but also less vigilant and more prone to logical errors. Here again, as in the mere exposure effect, the connection makes biological sense. A good mood is a signal that things are generally going well, the environment is safe, and it is all right to let one’s guard down. A bad mood indicates that things are not going very well, there may be a threat, and vigilance is required. Cognitive ease is both a cause and a consequence of a pleasant feeling.

On seeing causes:

Experiments have shown that six-month-old infants see the sequence of events as a cause-effect scenario, and they indicate surprise when the sequence is altered. We are evidently ready from birth to have impressions of causality, which do not depend on reasoning about patterns of causation. They are products of System 1.

On the brain as a machine for jumping to conclusions:

Jumping to conclusions is efficient if the conclusions are likely to be correct and the costs of an occasional mistake acceptable, and if the jump saves much time and effort. Jumping to conclusions is risky when the situation is unfamiliar, the stakes are high, and there is no time to collect more information. These are the circumstances in which intuitive errors are probable, which may be prevented by a deliberate intervention of System 2.

A summary of the characteristics of System 1:

  • generates impressions, feelings, and inclinations; when endorsed by System 2 these become beliefs, attitudes, and intentions

  • operates automatically and quickly, with little or no effort, and no sense of voluntary control

  • can be programmed by System 2 to mobilize attention when a particular pattern is detected (search)

  • executes skilled responses and generates skilled intuitions, after adequate training

  • creates a coherent pattern of activated ideas in associative memory

  • links a sense of cognitive ease to illusions of truth, pleasant feelings, and reduced vigilance

  • distinguishes the surprising from the normal

  • infers and invents causes and intentions

  • neglects ambiguity and suppresses doubt

  • is biased to believe and confirm

  • exaggerates emotional consistency (halo effect)

  • focuses on existing evidence and ignores absent evidence (WYSIATI)

  • generates a limited set of basic assessments

  • represents sets by norms and prototypes, does not integrate

  • matches intensities across scales (e.g., size to loudness)

  • computes more than intended (mental shotgun)

  • sometimes substitutes an easier question for a difficult one (heuristics)

  • is more sensitive to changes than to states (prospect theory)

  • overweights low probabilities

  • shows diminishing sensitivity to quantity (psychophysics)

  • responds more strongly to losses than to gains (loss aversion)

  • frames decision problems narrowly, in isolation from one another

On the law of small numbers:

  • The exaggerated faith in small samples is only one example of a more general illusion—we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.
  • Statistics produce many observations that appear to beg for causal explanations but do not lend themselves to such explanations. Many facts of the world are due to chance, including accidents of sampling. Causal explanations of chance events are inevitably wrong.
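
To make the first point concrete, here is a minimal simulation sketch of my own (not from the book), assuming a perfectly fair 50/50 process: small samples produce "extreme" results far more often than large ones, which is exactly the kind of pattern that tempts us into causal stories.

```python
import random

# My own illustration: how often does a sample look "extreme" (>= 70% heads)
# when the true rate is exactly 50%? Small samples cross that bar far more
# often, which is the trap behind the exaggerated faith in small samples.
random.seed(42)

def share_of_extreme_samples(sample_size: int, trials: int = 20_000) -> float:
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(sample_size))
        if heads / sample_size >= 0.7:
            extreme += 1
    return extreme / trials

for n in (10, 50, 200):
    print(f"n = {n:>3}: {share_of_extreme_samples(n):.1%} of samples show >= 70% heads")
# Roughly: n=10 -> ~17%, n=50 -> well under 1%, n=200 -> essentially never.
# Same coin, very different-looking "evidence".
```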

On anchoring:

Anchoring effects are threatening in a similar way. You are always aware of the anchor and even pay attention to it, but you do not know how it guides and constrains your thinking, because you cannot imagine how you would have thought if the anchor had been different (or absent). However, you should assume that any number that is on the table has had an anchoring effect on you, and if the stakes are high you should mobilize yourself (your System 2) to combat the effect.

On representativeness:

The combination of WYSIATI and associative coherence tends to make us believe in the stories we spin for ourselves. The essential keys to disciplined Bayesian reasoning can be simply summarized:

  • Anchor your judgment of the probability of an outcome on a plausible base rate.
  • Question the diagnosticity of your evidence.
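
A small worked example of those two rules, with numbers I made up purely for illustration (a 2% base rate, and evidence that fits the hypothesis 90% of the time but also fits the alternative 20% of the time): the base rate anchors the judgment, and the diagnosticity of the evidence determines how far to move from it.

```python
def posterior(base_rate: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Bayes' rule: probability of the hypothesis given the evidence, starting
    from a base rate and the diagnosticity of the evidence (how much more likely
    it is when the hypothesis is true than when it is false)."""
    p_evidence = base_rate * p_evidence_if_true + (1 - base_rate) * p_evidence_if_false
    return base_rate * p_evidence_if_true / p_evidence

# Illustrative numbers only: a description "sounds like a librarian" for 90% of
# librarians but also for 20% of non-librarians, and only 2% of people are librarians.
print(f"{posterior(0.02, 0.90, 0.20):.3f}")
# ~0.084 -- even strongly "representative" evidence leaves the probability low,
# because the plausible base rate anchors the judgment.
```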

On the challenges of our two-systems to incorporate the regression to mean view:

Extreme predictions and a willingness to predict rare events from weak evidence are both manifestations of System 1…Regression is also a problem for System 2. The very idea of regression to the mean is alien and difficult to communicate and comprehend. Galton had a hard time before he understood it. Many statistics teachers dread the class in which the topic comes up, and their students often end up with only a vague understanding of this crucial concept. This is a case where System 2 requires special training. Matching predictions to the evidence is not only something we do intuitively; it also seems a reasonable thing to do. We will not learn to understand regression from experience.
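
Because the idea is so alien to System 1, a simulation sometimes helps where intuition fails. This is my own sketch, assuming performance is a stable skill plus independent luck: whoever lands in the top decile on a first test scores closer to the mean on a second test, with no causal story needed.

```python
import random

# My own illustration of regression to the mean: score = skill + luck.
# Top scorers on test 1 were, on average, both skilled AND lucky; the luck
# does not repeat, so the same people score closer to the mean on test 2.
random.seed(0)
skills = [random.gauss(100, 10) for _ in range(10_000)]   # stable ability
test1 = [s + random.gauss(0, 10) for s in skills]         # ability + luck
test2 = [s + random.gauss(0, 10) for s in skills]         # fresh, independent luck

top_decile = sorted(range(len(skills)), key=lambda i: test1[i], reverse=True)[:1_000]

def average(xs):
    return sum(xs) / len(xs)

print(f"top decile on test 1:  {average([test1[i] for i in top_decile]):.1f}")
print(f"same people on test 2: {average([test2[i] for i in top_decile]):.1f}")
# Typical output: ~125 on test 1, ~112 on test 2 -- "worse" on retest without
# any intervention, simply because the lucky half of the first score did not repeat.
```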

On the illusion of validity:

Subjective confidence in a judgment is not a reasoned evaluation of the probability that this judgment is correct. Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it. It is wise to take admissions of uncertainty seriously, but declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.

On when we can trust the opinion of experts:

  • an environment that is sufficiently regular to be predictable
  • an opportunity to learn these regularities through prolonged practice

On lessons learned from a failed project:

The first was immediately apparent: I had stumbled onto a distinction between two profoundly different approaches to forecasting (labeled the inside view and the outside view). The second lesson was that our initial forecasts of about two years for the completion of the project exhibited a planning fallacy. Our estimates were closer to a best-case scenario than to a realistic assessment. I was slower to accept the third lesson, which I call irrational perseverance: the folly we displayed that day in failing to abandon the project. Facing a choice, we gave up rationality rather than give up the enterprise.

On the forecasting method that Flyvbjerg devised to overcome the tendency to neglect the base rate:

  1. Identify an appropriate reference class (kitchen renovations, large railway projects, etc.).
  2. Obtain the statistics of the reference class (in terms of cost per mile of railway, or of the percentage by which expenditures exceeded budget). Use the statistics to generate a baseline prediction.
  3. Use specific information about the case to adjust the baseline prediction, if there are particular reasons to expect the optimistic bias to be more or less pronounced in this project than in others of the same type.
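
The method translates almost directly into arithmetic. Here is a minimal sketch with invented numbers (the reference-class overrun and the adjustment are mine, purely for illustration): take the outside view first, then adjust it with case-specific information.

```python
# Reference-class forecasting, with invented numbers for illustration.
# Step 1: the reference class is "kitchen renovations like ours".
# Step 2: its statistics say such projects overrun their budget by 60% on average.
# Step 3: adjust modestly for case-specific reasons (say, an unusually detailed plan).

inside_view_budget = 40_000          # our own estimate, in dollars
reference_class_overrun = 0.60       # outside view: typical overrun for this class
case_specific_adjustment = -0.10     # reasons to expect a bit less optimism bias than usual

baseline_prediction = inside_view_budget * (1 + reference_class_overrun)
adjusted_prediction = inside_view_budget * (1 + reference_class_overrun + case_specific_adjustment)

print(f"inside view:         ${inside_view_budget:,.0f}")
print(f"baseline (outside):  ${baseline_prediction:,.0f}")   # $64,000
print(f"adjusted forecast:   ${adjusted_prediction:,.0f}")   # $60,000
```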

On the premortem technique to improve our decision making:

Organizations may be better able to tame optimism than individuals are. The best idea for doing so was contributed by Gary Klein, my “adversarial collaborator” who generally defends intuitive decision making against claims of bias and is typically hostile to algorithms. He labels his proposal the premortem. The procedure is simple: when the organization has almost come to an important decision but has not formally committed itself, Klein proposes gathering for a brief session a group of individuals who are knowledgeable about the decision. The premise of the session is a short speech: “Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.”

On prospect theory:

  • In mixed gambles, where both a gain and a loss are possible, loss aversion causes extremely risk-averse choices.
  • In bad choices, where a sure loss is compared to a larger loss that is merely probable, diminishing sensitivity causes risk seeking.
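
Both bullets fall out of the prospect-theory value function. The sketch below is my own illustration using the commonly cited Tversky-Kahneman (1992) parameter estimates (curvature 0.88, loss-aversion coefficient 2.25), and it ignores probability weighting to keep the two effects separate.

```python
# Prospect-theory value function, v(x) = x^0.88 for gains and -2.25 * (-x)^0.88
# for losses, using the Tversky-Kahneman (1992) estimates for illustration.
LOSS_AVERSION = 2.25
CURVATURE = 0.88

def value(x: float) -> float:
    """Subjective value of a gain or loss relative to the reference point."""
    return x ** CURVATURE if x >= 0 else -LOSS_AVERSION * (-x) ** CURVATURE

# 1) Mixed gamble: 50% chance to win $100, 50% chance to lose $100, vs. doing nothing.
mixed = 0.5 * value(100) + 0.5 * value(-100)
print(f"mixed gamble: {mixed:.1f} vs 0.0 for rejecting")        # negative -> reject (risk averse)

# 2) Bad choices: a sure loss of $750 vs a 75% chance of losing $1,000.
sure_loss = value(-750)
risky_loss = 0.75 * value(-1000)
print(f"sure loss: {sure_loss:.1f} vs gamble: {risky_loss:.1f}")  # gamble hurts less -> risk seeking
```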

On rare events:

  • People overestimate the probabilities of unlikely events.
  • People overweight unlikely events in their decisions.
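
The second bullet is usually modeled with a probability weighting function. The sketch below is again my own illustration, using the inverse-S form from cumulative prospect theory with the published estimate of gamma ≈ 0.61 for gains, not anything computed in the book.

```python
# Probability weighting function from cumulative prospect theory
# (Tversky & Kahneman, 1992), gamma ~ 0.61 for gains: small probabilities
# receive decision weights well above their true values, large ones below.
GAMMA = 0.61

def decision_weight(p: float) -> float:
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

for p in (0.01, 0.05, 0.50, 0.95):
    print(f"p = {p:.2f} -> decision weight {decision_weight(p):.3f}")
# Roughly: 0.01 -> 0.055 (overweighted ~5x), 0.05 -> 0.13, 0.50 -> 0.42, 0.95 -> 0.79.
```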

On framing:

Skeptics about rationality are not surprised. They are trained to be sensitive to the power of inconsequential factors as determinants of preference—my hope is that readers of this book have acquired this sensitivity.

On the two selves:

The evidence presents a profound challenge to the idea that humans have consistent preferences and know how to maximize them, a cornerstone of the rational-agent model. An inconsistency is built into the design of our minds. We have strong preferences about the duration of our experiences of pain and pleasure. We want pain to be brief and pleasure to last. But our memory, a function of System 1, has evolved to represent the most intense moment of an episode of pain or pleasure (the peak) and the feelings when the episode was at its end. A memory that neglects duration will not serve our preference for long pleasure and short pains.

On the role of behavioral economists, and the sensitivity around individual freedom:

But life is more complex for behavioral economists than for true believers in human rationality. No behavioral economist favors a state that will force its citizens to eat a balanced diet and to watch only television programs that are good for the soul. For behavioral economists, however, freedom has a cost, which is borne by individuals who make bad choices, and by a society that feels obligated to help them. The decision of whether or not to protect individuals against their mistakes therefore presents a dilemma for behavioral economists. The economists of the Chicago school do not face that problem, because rational agents do not make mistakes. For adherents of this school, freedom is free of charge.

On a concluding note, three additional lessons I wanted to highlight:

  • The way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down, and ask for reinforcement from System 2…Unfortunately this sensible procedure is least likely to be applied when it is needed most. We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available, and cognitive illusions are generally more difficult to recognize than perceptual illusions. The voice of reason may be much fainter than the loud and clear voice of an erroneous intuition, and questioning your intuitions is unpleasant when you face the stress of a big decision. More doubt is the last thing you want when you are in trouble. The upshot is that it is much easier to identify a minefield when you observe others wandering into it than when you are about to do so. Observers are less cognitively busy and more open to information than actors. That was my reason for writing a book that is oriented to critics and gossipers rather than to decision makers.
  • Organizations are better than individuals when it comes to avoiding errors, because they naturally think more slowly and have the power to impose orderly procedures. Organizations can institute and enforce the application of useful checklists, as well as more elaborate exercises, such as reference-class forecasting and the premortem. At least in part by providing a distinctive vocabulary, organizations can also encourage a culture in which people watch out for one another as they approach minefields.
  • Ultimately, a richer language is essential to the skill of constructive criticism. Much like medicine, the identification of judgment errors is a diagnostic task, which requires a precise vocabulary. The name of a disease is a hook to which all that is known about the disease is attached, including vulnerabilities, environmental factors, symptoms, prognosis, and care. Similarly, labels such as “anchoring effects,” “narrow framing,” or “excessive coherence” bring together in memory everything we know about a bias, its causes, its effects, and what can be done about it. There is a direct link from more precise gossip at the watercooler to better decisions. Decision makers are sometimes better able to imagine the voices of present gossipers and future critics than to hear the hesitant voice of their own doubts. They will make better choices when they trust their critics to be sophisticated and fair, and when they expect their decision to be judged by how it was made, not only by how it turned out.

An absolute must read, in the areas of thought, decision making, reasoning and behavioral economics. As Professor William Easterly best articulated it on the cover of the book: “[A] masterpiece…This is one of the greatest and most engaging insights into the human mind I have read.”

On Decisive

I am a big fan of the Heath brothers, having read their previous bestsellers Switch and Made To Stick. I was excited to read their latest book Decisive: How to Make Better Choices in Life and Work, not only because they had written it but because decision making itself was a subject area of particular interest to me. This book exceeded my high expectations both in terms of content and delivery.

The Heath brothers begin by reminding us why decision making is difficult:

And that, in essence, is the core difficulty of decision making: What’s in the spotlight will rarely be everything we need to make a good decision, but we won’t always remember to shift the light. Sometimes, in fact, we’ll forget there’s a spotlight at all, dwelling so long in the tiny circle of light that we forget there’s a broader landscape beyond it.

And while we instinctively think that more analysis should lead to superior decision making, it is actually the process we use to come up with the decision that is more important:

When the researchers compared whether process or analysis was more important in producing good decisions—those that increased revenues, profits, and market share—they found that “process mattered more than analysis—by a factor of six.” Often a good process led to better analysis—for instance, by ferreting out faulty logic. But the reverse was not true: “Superb analysis is useless unless the decision process gives it a fair hearing.”

So why is decision making so difficult and what is the key to improving our capability? It is about understanding the underlying set of biases:

Research in psychology over the last 40 years has identified a set of biases in our thinking that doom the pros-and-cons model of decision making. If we aspire to make better choices, then we must learn how these biases work and how to fight them (with something more potent than a list of pros and cons).

How does the normal decision process flow, and what are the challenges within each step:

If you think about a normal decision process, it usually proceeds in four steps…And what we’ve seen is that there is a villain that afflicts each of these stages:

  • You encounter a choice. But narrow framing makes you miss options.
  • You analyze your options. But the confirmation bias leads you to gather self-serving information.
  • You make a choice. But short-term emotion will often tempt you to make the wrong one.
  • Then you live with it. But you’ll often be overconfident about how the future will unfold.

And while we can’t eliminate these biases, we can counteract them:

We can’t deactivate our biases, but these people show us that we can counteract them with the right discipline. The nature of each villain suggests a strategy for defeating it:

  1. You encounter a choice. But narrow framing makes you miss options. So…Widen Your Options. How can you expand your set of choices?
  2. You analyze your options. But the confirmation bias leads you to gather self-serving information. So…Reality-Test Your Assumptions. How can you get outside your head and collect information that you can trust?…
  3. You make a choice. But short-term emotion will often tempt you to make the wrong one. So…Attain Distance Before Deciding. How can you overcome short-term emotion and conflicted feelings to make the best choice?…
  4. Then you live with it. But you’ll often be overconfident about how the future will unfold. So…Prepare to Be Wrong. How can we plan for an uncertain future so that we give our decisions the best chance to succeed?

This is the WRAP process for decision making which is at the heart of this book:

Our goal in this book is to teach this four-step process for making better choices. Note the mnemonic WRAP, which captures the four verbs. We like the notion of a process that “wraps” around your usual way of making decisions, helping to protect you from some of the biases we’ve identified. The four steps in the WRAP model are sequential; in general, you can follow them in order—but not rigidly so. Sometimes you’ll double back based on something you’ve learned.

Why is a process needed?

To get that kind of consistent improvement requires technique and practice. It requires a process. The value of the WRAP process is that it reliably focuses our attention on things we otherwise might have missed: options we might have overlooked, information we might have resisted, and preparations we might have neglected.

1- Widen Your Options

On avoiding a narrow frame:

Focusing is great for analyzing alternatives but terrible for spotting them. Think about the visual analogy—when we focus we sacrifice peripheral vision. And there’s no natural corrective for this; life won’t interrupt our focus to draw our attention to all of our options.

On multitracking:

In a study of top leadership teams in Silicon Valley, an environment that tends to place a premium on speed, she found that executives who weigh more options actually make faster decisions. It’s a counterintuitive finding, but Eisenhardt offers three explanations. First, comparing alternatives helps executives to understand the “landscape”: what’s possible and what’s not, what variables are involved. That understanding provides the confidence needed to make a quick decision. Second, considering multiple alternatives seems to undercut politics. With more options, people get less invested in any one of them, freeing them up to change positions as they learn. As with the banner-ad study, multitracking seems to help keep egos under control. Third, when leaders weigh multiple options, they’ve given themselves a built-in fallback plan.

An important element of multitracking is our mindset:

How you react to the position, in short, depends a great deal on your mindset at the time it’s offered. Psychologists have identified two contrasting mindsets that affect our motivation and our receptiveness to new opportunities: a “prevention focus,” which orients us toward avoiding negative outcomes, and a “promotion focus,” which orients us toward pursuing positive outcomes.

Another method of widening options is finding someone else who’s solved your problem:

To break out of a narrow frame, we need options, and one of the most basic ways to generate new options is to find someone else who’s solved your problem…Notice the slow, brute-force approach that had to be used by the lab that didn’t use analogies. When you use analogies—when you find someone who has solved your problem—you can take your pick from the world’s buffet of solutions. But when you don’t bother to look, you’ve got to cook up the answer yourself every time. That may be possible, but it’s not wise, and it certainly ain’t speedy.

2- Reality-Test Your Assumptions

On considering the opposite as a way to further test our assumptions:

The most important lesson to learn about devil’s advocacy isn’t the need for a formal contrarian position; it’s the need to interpret criticism as a noble function. An effective promoter fidei is not a token argumentative smarty-pants; it’s someone who deeply respects the Catholic Church and is trying to defend the faith by surfacing contrary arguments in situations where skepticism is unlikely to surface naturally.

Questioning can be an effective tool to that effect:

Roger Martin says the “What would have to be true?” question has become the most important ingredient of his strategy work, and it’s not hard to see why. The search for disconfirming information might seem, on the surface, like a thoroughly negative process: We try to poke holes in our own arguments or the arguments of others. But Martin’s question adds something constructive: What if our least favorite option were actually the best one? What data might convince us of that?

Other methods include:

1. Confirmation bias = hunting for information that confirms our initial assumptions (which are often self-serving).

2. We need to spark constructive disagreement within our organizations.

3. To gather more trustworthy information, we can ask disconfirming questions.

4. Caution: Probing questions can backfire in situations with a power dynamic.

5. Extreme disconfirmation: Can we force ourselves to consider the opposite of our instincts?

6. We can even test our assumptions with a deliberate mistake.

7. Because we naturally seek self-confirming information, we need discipline to consider the opposite.

On Zooming in and out, and the importance of perspectives to further test assumptions:

Psychologists distinguish between the “inside view” and “outside view” of a situation. The inside view draws from information that is in our spotlight as we consider a decision—our own impressions and assessments of the situation we’re in. The outside view, by contrast, ignores the particulars and instead analyzes the larger class it’s part of…The outside view is more accurate—it’s a summary of real-world experiences, rather than a single person’s impressions—yet we’ll be drawn to the inside view.

The point is that the predictions of even a world-class expert need to be discounted in a way that their knowledge of base rates does not. In short, when you need trustworthy information, go find an expert—someone more experienced than you. Just keep them talking about the past and the present, not the future.

When we zoom out, we take the outside view, learning from the experiences of others who have made choices like the one we’re facing. When we zoom in, we take a close-up of the situation, looking for “color” that could inform our decision. Either strategy is helpful, and either one will add insight in a way that conference-room pontificating rarely will. When possible, we should do both. In interpreting the sentiments of Americans, FDR created statistical summaries and read a sample of real letters. In assessing the competitors’ products, Paul Smith’s colleagues relied on scientific data and personal experience. In making a high-stakes health decision, Brian Zikmund-Fisher trusted both the base rates and the stories of actual patients. Zooming out and zooming in gives us a more realistic perspective on our choices. We downplay the overly optimistic pictures we tend to paint inside our minds and instead redirect our attention to the outside world, viewing it in wide-angle and then in close-up.

On the importance of ooching/piloting:

The “ooching” terminology is our favorite, but we wanted to be clear that these groups are all basically saying the same thing: Dip a toe in before you plunge in headfirst. Given the popularity of this concept, and given the clear payoff involved—little bets that can improve large decisions—you might wonder why ooching isn’t more instinctive. The answer is that we tend to be awfully confident about our ability to predict the future.

Which also comes with a warning:

Ooching, in short, should be used as a way to speed up the collection of trustworthy information, not as a way to slow down a decision that deserves our full commitment.

3- Attain Distance Before Deciding

On overcoming short-term emotions by using the technique of giving advice to a friend:

The researchers have found, in essence, that our advice to others tends to hinge on the single most important factor, while our own thinking flits among many variables. When we think of our friends, we see the forest. When we think of ourselves, we get stuck in the trees. There’s another advantage of the advice we give others. We tend to be wise about counseling people to overlook short-term emotions.

On the importance of honoring your core priorities:

The goal of the WRAP process is not to neutralize emotion. Quite the contrary. When you strip away all the rational mechanics of decision making—the generation of options, the weighing of information—what’s left at the core is emotion. What drives you? What kind of person do you aspire to be? What do you believe is best for your family in the long run? (Business leaders ask: What kind of organization do you aspire to run? What’s best for your team in the long run?) Those are emotional questions—speaking to passions and values and beliefs—and when you answer them, there’s no “rational machine” underneath that is generating your perspective. It’s just who you are and what you want. The buck stops with emotion…All we can aspire to do with the WRAP process is help you make decisions that are good for you.

Maybe this advice sounds too commonsensical: Define and enshrine your core priorities. It is not exactly a radical stance. But there are two reasons why it’s uncommon to find people who have actually acted on this seemingly basic advice. First, people rarely establish their priorities until they’re forced to…Second, establishing priorities is not the same thing as binding yourself to them.

4- Prepare To Be Wrong

On bookending the future:

Overconfidence about the future disrupts our decisions. It makes us lackadaisical about preparing for problems. It tempts us to ignore early signs of failure. It leaves us unprepared for pleasant surprises. Fighting overconfidence means we’ve got to treat the future as a spectrum, not a point…To bookend the future means that we must sweep our spotlights from side to side, charting out the full territory of possibilities. Then we can stack the deck in our favor by preparing for both bad situations (via a premortem) and good (via a preparade).

On the importance of setting up tripwires to trigger decisions based on gradual changes:

Because day-to-day change is gradual, even imperceptible, it’s hard to know when to jump. Tripwires tell you when to jump. Setting tripwires would not have guaranteed that Kodak’s leaders made the right decisions. Sometimes even a clear alarm is willfully ignored. (We’ve probably all ignored a fire alarm, trusting that it is false.) But tripwires at least ensure that we are aware it’s time to make a decision, that we don’t miss our chance to choose because we’ve been lulled into autopilot.

On the importance of trusting the decision making process:

The WRAP process, if used routinely, will contribute to that sense of fairness, because it allows people to understand how the decision is being made, and it gives them comfort that decisions will be made in a consistent manner. Beyond WRAP, there are a few additional ideas to consider as you navigate group decisions.


On a concluding note:

What a process provides, though, is more inspiring: confidence. Not cocky overconfidence that comes from collecting biased information and ignoring uncertainties, but the real confidence that comes from knowing you’ve made the best decision that you could. Using a process for decision making doesn’t mean that your choices will always be easy, or that they will always turn out brilliantly, but it does mean you can quiet your mind. You can quit asking, “What am I missing?” You can stop the cycle of agonizing.

Just as important, trusting the process can give you the confidence to take risks. A process can be the equivalent of a mountain climber’s harness and rope, allowing you the freedom to explore without constant worry. A process, far from being a drag or a constraint, can actually give you the comfort to be bolder.

And bolder is often the right direction. Short-run emotion, as we’ve seen, makes the status quo seductive. But when researchers ask the elderly what they regret about their lives, they don’t often regret something they did, they regret things they didn’t do. They regret not seizing opportunities. They regret hesitating. They regret being indecisive.

Being decisive is itself a choice. Decisiveness is a way of behaving, not an inherited trait. It allows us to make brave and confident choices, not because we know we’ll be right but because it’s better to try and fail than to delay and regret.

Our decisions will never be perfect, but they can be better. Bolder. Wiser. The right process can steer us toward the right choice. And the right choice, at the right moment, can make all the difference.

A highly recommended read in the area of decision making. If you are interested in further readings on this topic, I would suggest an earlier post, On Left Brain Right Stuff.

On Left Brain Right Stuff

I recently finished reading Left Brain, Right Stuff: How Leaders Make Winning Decisions by Phil Rosenzweig. The author had graciously provided me with a copy of his new book, as I had previously read and reviewed an earlier work of his (The Halo Effect).

Below are key excerpts from the book that I found particularly insightful:

1- “They make predictable errors, or biases, which often undermine their decisions. By now we’re familiar with many of these errors, including the following: -People are said to be overconfident, too sure of themselves and unrealistically optimistic about the future. -People look for information that will confirm what they want to believe, rather than seeking information that might challenge their hopes. -People labor under the illusion of control, imagining they have more influence over events than they really do. -People are fooled by random events, seeing patterns where none exist. -People are not good intuitive statisticians, preferring a coherent picture to what makes sense according to the laws of probability. -People suffer from a hindsight bias, believing that they were right all along.”

2- “Yet for all we know about these sorts of decisions, we know less about others. First, many decisions involve much more than choosing from options we cannot influence or evaluations of things we cannot affect…Second, many decisions have a competitive dimension…Third, many decisions take a long time before we know the results…Fourth, many decisions are made by leaders of organizations…In sum, experiments have been very effective to isolate the processes of judgment and choice, but we should be careful when applying their findings to very different circumstances.”

3- “Great decisions call for clear analysis and dispassionate reasoning. Using the left brain means: -knowing the difference between what we can control and what we cannot, between action and prediction -knowing the difference between absolute and relative performance, between times when we need to do well and when we must do better than others -sensing whether it’s better to err on the side of taking action and failing, or better not to act; that is, between what we call Type I and Type II errors -determining whether we are acting as lone individuals or as leaders in an organizational setting and inspiring others to achieve high performance -recognizing when models can help us make better decisions, but also being aware of their limits.”

4- “Having the right stuff means: -summoning high levels of confidence, even levels that might seem excessive, but that are useful to achieve high performance -going beyond past performance and pushing the envelope to seek levels that are unprecedented -instilling in others the willingness to take appropriate risks.”

5- “Moore and his colleagues ran several other versions of this study, all of which pointed to the same conclusion: people do not consistently overestimate their level of control. A simpler explanation is that people have an imperfect understanding of how much control they can exert. When control is low they tend to overestimate, but when it’s high they tend to underestimate.”

6- “Of course managers don’t have complete control over outcomes, any more than a doctor has total control over patient health. They are buffeted by events outside their control: macroeconomic factors, changes in technology, actions of rivals, and so forth. Yet it’s a mistake to conclude that managers suffer from a pervasive illusion of control. The greater danger is the opposite: that they will underestimate the extent of control they truly have.”

7- “If you believe there’s an intense pressure to outperform rivals when that’s not the case, you might prefer a Type I error. You might take action sooner than necessary or act more aggressively when the better approach would be to wait and observe. The risks can be considerable, but perhaps not fatal. On the other hand, if performance is not only relative but payoffs are highly skewed, and you don’t make every effort to outperform rivals, you’ll make a Type II error. Here the consequences can be much more severe. Fail now, and you may never get another chance to succeed. By this logic, the greater error is to underestimate the intensity of competition. It’s to be too passive in the face of what could be a mortal threat. When in doubt, the smart move is to err on the side of taking strong action.”

8- “The lesson is clear: in a competitive setting, even a modest improvement in absolute performance can have a huge impact on relative performance. And conversely, failing to use all possible advantages to improve absolute performance has a crippling effect on the likelihood of winning. Under these circumstances, finding a way to do better isn’t just nice to have. For all intents and purposes, it’s essential.”
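
Excerpt 8 is easy to verify with a toy simulation of my own (ten competitors drawing noisy scores around the same mean, one of them given a small edge; all numbers invented): a modest absolute improvement produces a disproportionate jump in the chance of finishing first.

```python
import random

# My own illustration: ten competitors draw noisy scores around the same mean;
# one gets a small absolute edge. How often does that competitor finish first?
random.seed(1)

def win_rate(edge: float, rivals: int = 9, noise: float = 10.0, trials: int = 50_000) -> float:
    wins = 0
    for _ in range(trials):
        ours = random.gauss(100 + edge, noise)
        best_rival = max(random.gauss(100, noise) for _ in range(rivals))
        wins += ours > best_rival
    return wins / trials

for edge in (0, 2, 5, 10):
    print(f"edge of {edge:>2} points -> finishes first {win_rate(edge):.0%} of the time")
# Roughly: 0 -> 10%, 2 -> ~13%, 5 -> ~20%, 10 -> ~34% -- a modest absolute edge,
# a much larger change in relative outcomes.
```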

9- “First, not everything that turns out badly is due to an error. We live in a world of uncertainty, in which there’s an imperfect link between actions and outcomes. Even good decisions sometimes turn out badly, but that doesn’t necessarily mean anyone made an error. Second, not every error is the result of overconfidence. There are many kinds of error: errors of calculation, errors of memory, simple motor errors, tactical errors, and so forth. They’re not all due to overconfidence.”

10- “The Trouble with Overconfidence,” the single word—overconfidence—has been used to mean three very different things, which they call overprecision, overestimation, and overplacement…Overprecision is the tendency to be too certain that our judgment is correct…He’s referring to overprecision: the tendency to believe a prediction is more accurate than it turns out to be…Overestimation, the second kind of overconfidence, is a belief that we can perform at a level beyond what is objectively warranted…Overestimation is an absolute evaluation; it depends on an assessment of ourselves and no one else…Overplacement, the third kind of overconfidence, is a belief that we can perform better than others…She calls it the superiority bias and says it’s a pervasive error.”

11- “My suggestion is that anyone who uses the term should have to specify the point of comparison. If overconfidence means excessively confident, then excessive compared to what? In much of our lives, where we can exert control and influence outcomes, what seems to be an exaggerated level of confidence may be useful; and when we add the need to outperform rivals, such a level of confidence may even be essential.”

12- “When we have the ability to shape events we confront a different challenge: making accurate estimates of future performance. The danger here is not one of overlooking the base rate of the broader population at a point in time, but neglecting lessons of the past and making a poor prediction of the future. Very often people place great importance on their (exaggerated) level of skills and motivation. The result is to make forecasts based on what Kahneman and Tversky call the inside view. Unfortunately these projections, which ignore the experiences of others who have attempted similar tasks, often turn out to be wildly optimistic.”

13- “The question we often hear—how much optimism or confidence is good, and how much is too much—turns out to be incomplete. There’s no reason to imagine that optimism or confidence must remain steady over time. It’s better to ramp it up and down, emphasizing a high level of confidence during moments of implementation, but setting it aside to learn from feedback and find ways to do better.”

14- “Duration is short, feedback is immediate and clear, the order is sequential, and performance is absolute. When these conditions hold, deliberate practice can be hugely powerful. As we relax each of them, the picture changes. Other tasks are long in duration, have feedback that is slow or incomplete, must be undertaken concurrently, and involve performance that is relative. None of this is meant to suggest that deliberate practice isn’t a valuable technique. But we have to know when it’s useful and when it’s not.”

15- “When we use models without a clear understanding of when they are appropriate, we’re not going to make great decisions—no matter how big the data set or how sophisticated the model appears to be.”

16- “To get at the root of the problem, Capen looked at the auction process itself. He discovered an insidious dynamic: when a large number of bidders place secret bids, it’s almost inevitable that the winning bid will be too high. Capen called this the winner’s curse.”

17- “But do some kinds of acquisitions have a greater chance of success than others? A significant number—the other 36 percent—were profitable, and they turned out to have a few things in common. The buyer could identify clear and immediate gains, rather than pursuing vague or distant benefits. Also, the gains they expected came from cost savings rather than revenue growth. That’s a crucial distinction, because costs are largely within our control, whereas revenues depend on customer behavior, which is typically beyond our direct control.”

18- “The real curse is to apply lessons blindly, without understanding how decisions differ. When we can exert control, when we must outperform rivals, when there are vital strategic considerations, the greater real danger is to fail to make a bold move. Acquisitions always involve uncertainty, and risks are often considerable. There’s no formula to avoid the chance of losses. Wisdom calls for combining clear and detached thinking—properties of the left brain—with the willingness to take bold action—the hallmark of the right stuff.”

19- “Starting a new business involves many of the same elements we have seen in other winning decisions: an ability to distinguish between what we can control and what we cannot; a sense of relative performance and the need to do better than rivals; the temporal dimension, in which decisions do not always produce immediate feedback; and an awareness that decisions are made in a social context, in which leaders sometimes need to inspire others to go beyond what may seem possible. Together, these elements help new ventures get off to a winning start.”  

20- “To make great decisions, we need above all to develop the capacity to question, to go beyond first-order observations and pose incisive second-order questions. An awareness of common errors and cognitive biases is only a start. Beyond that, we should ask: Are we making a decision about something we cannot control, or are we able to influence outcomes?…Are we seeking an absolute level of performance, or is performance relative?…Are we making a decision that lends itself to rapid feedback, so we can make adjustments and improve a next effort?…Are we making a decision as an individual or as a leader in a social setting?…Are we clear what we mean by overconfidence?…Have we given careful thought to base rates, whether of the larger population at a point in time or historical rates of past events?…As for decision models, are we aware of their limits as well as strengths?…When the best course of action remains uncertain, do we have a sense of on which side we should err?”

21- “In his profile of longtime St. Louis Cardinals manager Tony LaRussa, Buzz Bissinger wrote that a baseball manager requires “the combination of skills essential to the trade: part tactician, part psychologist, part river-boat gambler.” That’s a good description for many kinds of strategic decision makers. The tactician plays a competitive game, anticipating the way a given move may lead to a counter-move and planning the best response. The psychologist knows how to shape outcomes by inspiring others, perhaps by setting goals or by offering encouragement or maybe with direct criticism. The riverboat gambler knows that outcomes aren’t just a matter of cold numbers and probabilities, but that it’s important to read an opponent so as to know when to raise the stakes, when to bluff, and when to fold. Winning decisions call for a combination of skills as well as the ability to shift among them. We may need to act first as a psychologist, then as a tactician, next as a riverboat gambler, and perhaps once again as a psychologist. In the real world, where we have to respond to challenges as they arise, one skill or another is insufficient; versatility is crucial. Even then success is never assured, not in the competitive arenas of business or sports or politics. Performance is often relative and consequences of failure are harsh. A better understanding of decision-making, however, and an appreciation for the role of analysis as well as action, can improve the odds of success. It can help us win.”

Regards,

Omar Halabieh
