In the film Apollo 13, there is a famous scene with Lead Flight Director Gene Kranz in Mission Control. The spacecraft is in trouble, the CO₂ levels are rising, and the engineers are given a brutal brief: “We’ve got to find a way to make this fit into the hole for this, using nothing but that.” Kranz doesn’t do the math himself. Instead, he defines the problem, sets the boundary conditions (time, resources, risk), orchestrates the experts in the room, and pushes toward a decision that must be both technically sound and fast enough to save lives.

That short sequence is decision-making “in a nutshell”:

  • A critical objective under uncertainty.
  • Severe constraints on time and information.
  • A decision leader who shapes the process more than the content.

For managers in IP and innovation-intensive businesses, the context may be less dramatic, but the structure is similar. Every day, they choose which projects to fund, which patents to file, where to enforce rights, how to negotiate licenses, and when to walk away. And, as we saw in the CEIPI MIPLM lecture on Decision in IP Management, most of these decisions are taken under pressure, with incomplete information and very human cognitive limitations.

How Often Do We Actually Decide?

The average adult makes tens of thousands of decisions per day — what to read, where to click, whom to answer, what to prioritize. Managers simply have more at stake in each of those choices: budgets, careers, reputations, and competitive positions. In that sense, managers are professional decision-makers.

Two complementary resources from the IP Management context help frame what that really means. In the blog article “How Brands Influence Decision-Making”, decision-making is shown as a sequence of filters: we never process every piece of information; instead, brands, cues, and past experiences act as shortcuts that help us choose quickly. Brands reduce complexity and give us “ready-made” interpretations of the world — especially when time is short.

The glossary entry “Decision-Making” deepens this by defining decisions as systematic processes: setting objectives, identifying alternatives, gathering and evaluating information, and then selecting a course of action. It emphasizes that decision-making is not just a moment of choice; it is an end-to-end process shaped by context, incentives, and culture.

In theory, this leads us to an ideal of rational decision-making — carefully weighing options, probabilities, and consequences. This ideal is framed in the glossary entry on “Rational Decision-Making”, which describes a structured, stepwise process that maximizes expected benefit given available information. In practice, however, we fall short of that ideal far more often than we would like to admit.

Why We Are Not the Rational Heroes We Imagine

One of the most striking parts of the lecture is the simple question: “Are we good decision makers?” The YouTube explainer on cognitive biases, “Are We Good Decision-Makers?”, offers a sobering answer: we are systematically not as rational as we think.

Cognitive biases are not random mistakes. They are systematic patterns of deviation from rational judgment that arise from the way our brains are wired. They help us simplify the world, but they also distort reality. In management practice, this means that even smart, experienced people can be consistently wrong in predictable ways.

For instance:

  • We remember dramatic IP disputes more than quiet settlements, and then overestimate the likelihood of “bet-the-company” litigation.
  • We anchor on sunk R&D costs when valuing patents, even though the real question is future cash flow.
  • We are overconfident in the forecasts we like and ignore inconvenient data.

The article “Managers and Their Not-So Rational Decisions” by Certo, Connelly, and Tihanyi, introduced in the lecture and summarized in the video of the same name, explains this through the lens of two cognitive systems and the idea of bounded rationality.

System 1 and System 2: Kahneman’s Two Modes of Thinking

Daniel Kahneman’s famous distinction between two thinking systems is central to understanding managerial decisions. In the talk summarized in the video “Two Systems of Thinking”, he describes:

  • System 1: fast, automatic, intuitive, emotional. It answers questions quickly using heuristics and patterns.
  • System 2: slow, deliberate, analytical, effortful. It checks assumptions, runs numbers, and reasons logically.

In the Certo et al. article, managers are shown as constantly toggling between these systems. System 1 is indispensable when decisions must be made at high speed, under ambiguity — think of a licensing negotiation that suddenly shifts or a competitor’s surprising move. System 2 is essential when you structure an IP portfolio, assess litigation risk, or interpret a valuation model.

The problem is that System 1 often takes the lead, even when System 2 should be in charge. The article also introduces the idea of the “bias blind spot”: we see biases in others very clearly, but believe that we ourselves are relatively objective. This illusion makes it even harder to correct our own judgment.

A Short Tour Through the Bias Landscape

The lecture uses a rich taxonomy of biases to show how broad the problem is. Many of these are illustrated in the “Thirteen Cognitive Biases” video and in the detailed examples from IP practice.

Some key families:

  1. Availability biases (ease of recall, retrievability, presumed associations)
    We judge likelihood by how easily examples come to mind. A recent spectacular IP enforcement success may cause a company to overestimate enforcement success in general. Or if we mostly hear about infringement cases that failed in China, we might avoid enforcement there — ignoring more recent data that the system has changed.
  2. Representativeness biases (insensitivity to base rates, sample size, misconceptions of chance, regression to the mean, conjunction fallacy)
    We ignore base rates and statistics when a story “feels” right. If a patent attorney hears that a few patents in a field were sold for very high prices, they might conclude that all patents in that field are extremely valuable — despite the tiny sample size.
  3. Anchoring and adjustment
    We latch onto a starting value and then adjust too little. A company may anchor IP valuation on historic R&D spend or on a first offer in a licensing negotiation, even when market evidence suggests a completely different range.
  4. Overconfidence and confirmation
    We overestimate the accuracy of our judgments and actively search for information that confirms them. An IP manager might be “sure” that a particular family of patents is strategically essential and then only collect evidence that supports that belief — ignoring cost, weak enforcement prospects, or technical obsolescence.
  5. Hindsight and the curse of knowledge
    After the fact, everything looks obvious. When a competitor quickly invents around a patent, it’s tempting to say, “We knew this would happen.” In reality, no contingency plans were prepared, which shows that the knowledge wasn’t truly integrated into the decision process.

In IP management, these biases can mean under- or over-enforcing rights, misallocating budgets, misjudging litigation risk, or clinging too long to unpromising technologies.

Can We Outsmart Our Own Biases?

The point of recognizing biases is not to become cynical, but to design better processes. In the Harvard Business Review article “The Big Idea: Before You Make That Big Decision” and the McKinsey article “The Case for Behavioral Strategy”, both discussed in the lecture, the authors show that process quality matters more than sheer analytical effort.

The McKinsey thinking is brought to life in the “Case for Behavioral Strategy” video. It argues that companies which systematically tackle biases in their decision processes achieve significantly better returns on investment. Key strategies include:

  • Make biases discussable: Treat biases as a technical problem, not a personal insult. It should be normal to say, “We may be anchored on last year’s portfolio decision.”
  • Separate advocacy from evaluation: The person who builds the business case for an IP initiative should not be the only one evaluating it. Design roles for critical review.
  • Use checklists and “red teams”: Structured challenge — like pre-mortems, devil’s advocates, and explicit alternative scenarios — forces System 2 to engage.
  • Measure process quality, not only outcomes: A “good” decision can still lead to a bad outcome if the world shifts. Process discipline helps distinguish luck from quality.

For IP decisions, this could mean formal review steps for major filings, explicit criteria for litigation, cross-functional teams for portfolio pruning, and robust documentation of why a decision was made at the time.

Two Decision-Makers, Two Worlds: Negotiation as a Perception Test

Another powerful illustration from the lecture is the negotiation example, shown in the “Two Sides of a Negotiation” clip. Two parties look at the same situation — data, contracts, history — and yet inhabit completely different mental worlds.

Each side has:

  • Different reference points (what counts as a gain or a loss).
  • Different focal risks (reputation vs. cash vs. precedent).
  • Different time horizons (short-term settlement vs. long-term licensing relationship).

From a descriptive decision-making perspective, both are “intendedly rational” within their own frames. But each underestimates how radically the other side’s world is structured by different incentives, histories, and internal constraints.

For IP professionals, this is a reminder: in licensing, disputes, and standard-setting, perception is often more important than objective facts. Understanding the other side’s decision environment can be as valuable as understanding your own.

Descriptive Decision-Making: How Firms Actually Decide

This brings us to descriptive decision theory, introduced in the lecture and explored in the “Descriptive Decision Theory” explainer. Following the tradition of Cyert and March’s A Behavioral Theory of the Firm, descriptive theory does not ask how perfectly rational agents should decide. It asks how real organizations actually decide under conditions of limited information and bounded rationality.

Key elements include:

  • Limited information retrieval and processing: Organizations cannot collect or process all relevant information; they select what is salient or easy to obtain.
  • Satisficing, not optimizing: Instead of searching for the global optimum, firms often stop when they find a “good enough” solution.
  • Routines and rules: Over time, firms codify repeated solutions into rules — “how we do IP enforcement here” — which then guide future decisions.
  • Path dependence: Past decisions and experiences shape the options that are even considered in the future.
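
The contrast between satisficing and optimizing can be made concrete in a few lines of code. The sketch below is a toy illustration of Cyert and March’s idea, not anything from the lecture materials; the jurisdictions and scores are entirely hypothetical:

```python
# Satisficing, as described above: instead of scoring every alternative and
# picking the global optimum, stop at the first one that clears an
# aspiration level. All alternatives and scores are hypothetical.

def satisfice(alternatives, evaluate, aspiration):
    """Return the first alternative whose evaluation meets the aspiration
    level, mimicking limited search; None if nothing qualifies."""
    for alt in alternatives:
        if evaluate(alt) >= aspiration:
            return alt          # stop searching: "good enough"
    return None

# Jurisdictions considered in the order they come to mind (availability!),
# with a rough attractiveness score per jurisdiction.
scores = {"DE": 6, "US": 9, "CN": 8}
choice = satisfice(["DE", "US", "CN"], scores.get, aspiration=5)
print("Satisficing choice:", choice)   # "DE" clears the bar first,
                                       # even though "US" scores higher
```

Note how the result depends on search order: raise the aspiration level and the “good enough” option changes, which is exactly the path dependence the list above describes.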

In IP management, descriptive decision-making explains why firms keep using certain patenting or enforcement patterns long after the environment has changed, or why they prefer familiar jurisdictions and partners even when better options exist.

Decision Analysis: Bringing Structure into Complexity

If descriptive theory explains what is, decision analysis aims at what could be better. In the lecture, decision analysis is introduced as “a systematic procedure for transforming opaque decision problems into transparent decision problems by a sequence of transparent steps” (drawing on Ronald A. Howard’s classic work).

The “Introduction to Decision Analysis” video captures this spirit: instead of being overwhelmed by uncertainty, we decompose complex decisions into:

  • Alternatives: What actions can we choose (e.g., file, license, enforce, abandon)?
  • States of nature: What uncertain events could happen (e.g., grant vs. rejection, competitor entry, court decisions)?
  • Consequences: What outcomes — in cash flow, strategic position, risk — follow from each combination?
  • Probabilities and utilities: How likely are the states, and how do we value the resulting outcomes?
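
The four elements above can be sketched as a small payoff table with expected values. This is a minimal illustration of the decomposition, with hypothetical alternatives, states, probabilities, and cash flows — not figures from the lecture:

```python
# A minimal sketch of the decision-analysis decomposition described above.
# All probabilities and payoffs are hypothetical, for illustration only.

# States of nature: uncertain events, with assumed probabilities.
states = {"grant": 0.6, "rejection": 0.4}

# Consequences: payoff (net cash flow in k€, hypothetical) for each
# (alternative, state) combination.
payoffs = {
    "file":    {"grant": 500, "rejection": -50},
    "license": {"grant": 300, "rejection": 0},
    "abandon": {"grant": 0,   "rejection": 0},
}

def expected_value(alternative):
    """Probability-weighted payoff across all states of nature."""
    return sum(p * payoffs[alternative][s] for s, p in states.items())

best = max(payoffs, key=expected_value)
for alt in payoffs:
    print(f"{alt}: EV = {expected_value(alt):.0f} k€")
print("Best alternative by expected value:", best)
```

Here “file” wins on expected value (0.6 × 500 + 0.4 × (−50) = 280 k€), but the table makes every assumption visible and challengeable, which is the real point of the method.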

The lecture distinguishes three main information environments:

  • Decision under certainty: Outcomes are known (rare in IP).
  • Decision under risk: Probabilities of outcomes are known or can be reasonably estimated — this is where expected value and expected utility can be applied.
  • Decision under uncertainty: Neither probabilities nor reliable data exist; here, scenario planning, options thinking, and qualitative judgment are central.

For IP management, decision analysis is especially useful in:

  • IP valuation: Assessing licensing offers or acquisitions under risk.
  • Litigation strategy: Evaluating settlement vs. court action, given probabilities and cost distributions.
  • Trade secret risk management: As in the MIPLM thesis on “risk management practices and their applications in intellectual property management and trade secret management”, where decision theory supports rational selection of protection measures.
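
For the litigation case in the list above, the expected-value logic can be paired with a simple dispersion measure to make the risk attitude explicit. All probabilities and amounts below are invented for illustration:

```python
import math

# Settlement vs. court action, with hypothetical figures (k€).
p_win = 0.5             # assumed probability of winning in court
win_payoff = 800        # damages awarded minus litigation cost
lose_payoff = -300      # litigation cost plus adverse cost award
settlement = 230        # certain payment offered by the other side

# Expected value and variance of the risky court outcome.
ev_court = p_win * win_payoff + (1 - p_win) * lose_payoff
var_court = (p_win * (win_payoff - ev_court) ** 2
             + (1 - p_win) * (lose_payoff - ev_court) ** 2)

print(f"Expected value of going to court: {ev_court:.0f} k€")
print(f"Std. deviation of court outcome:  {math.sqrt(var_court):.0f} k€")
print(f"Certain settlement:               {settlement} k€")
# A risk-neutral decision-maker prefers court (EV 250 > 230); a risk-averse
# one may accept the certain 230 k€ to avoid a ±550 k€ swing.
```

The numbers themselves matter less than the discipline: writing down the probability and cost assumptions makes the settle-vs-litigate trade-off auditable later, which connects directly to the point about procedural rationality below.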

The crucial point is that rationality is procedural, not perfect foresight. A well-structured process, grounded in transparent assumptions, is more defensible than a supposedly “optimal” gut feeling — even if the world turns out differently than expected.

From Mission Control to IP Management

The Apollo 13 scene with Gene Kranz resonates so strongly because it shows what happens when decision-making is both disciplined and deeply human. There is no illusion of perfect rationality — just a clear problem frame, an orchestrated process, and a willingness to confront risk head-on.

In business and IP management, we operate with far more ambiguity and far less urgency than NASA did that night. Yet the underlying challenge is similar:

  • We make countless decisions every day, more than we can consciously track.
  • Our judgments are shaped by brands, frames, experiences, and social context, as described in “How Brands Influence Decision-Making” and the glossary entries on “Decision-Making” and “Rational Decision-Making”.
  • Our minds rely on fast, intuitive System 1 and slower, analytical System 2, which exposes us to systematic biases.
  • Descriptive theories help us understand how firms really behave; decision analysis helps us design better choices in complex, risky environments.

For IP experts, the message is clear: being a good decision-maker is not about being perfectly rational. It is about knowing your own limitations, understanding how decisions actually unfold in organizations, and building processes that make critical strategic choices more transparent, more structured, and more robust — especially when the stakes are high and the future is uncertain.