Economic theory rests on a single, devastating substitution: it replaced you with a fictional creature. Call this creature an Econ—a being who optimizes, who has unbiased beliefs, who treats money as fungible, and who makes decisions by calculating expected utility across all possible states of the world. Econs are Mr. Spock. They have no passions, no moods, no shame. They do not finish meals they've paid for if they're no longer hungry. They do not keep a ticket to a game they don't want to attend just because they've already bought it. They do not prefer to receive a discount rather than pay full price, even when the final price is identical. They experience no pleasure from a bargain. To an Econ, a $100 bottle of wine they own is worth exactly $100 to drink, sell, or give away—the price paid is irrelevant.
This is not a description of human beings. This is a description of a machine, a calculator, a being without inner life.1
The problem—and this is the central tension in all of behavioral economics—is that virtually every model taught in economics classrooms, every theorem derived in academic papers, and every policy recommendation handed down from central banks and treasuries rests on the assumption that we are Econs. Not approximately Econs. Not Econs on average. Econs, period. And because this assumption is so embedded, so foundational, questioning it feels like blasphemy.
Economic theory has two load-bearing walls: optimization and equilibrium. Together they create what Thaler calls "the most powerful of the social sciences" in terms of influence over policy.2
Optimization means choosing the best option from among all available options, subject to a budget constraint. For a household, this means: given all the goods and services available in the marketplace, and given the family's income, the family chooses the combination that maximizes their total happiness or "utility." An Econ family does not leave money on the table. They do not spend $100 on shoes when that $100 could have bought them something that makes them happier. They do not make a purchase impulsively and regret it later (because they have unbiased beliefs about their own preferences and cannot make forecasting errors about their own satisfaction). The family solves what mathematicians call a "constrained optimization problem," and they solve it correctly.2
Equilibrium means that when prices are free to move, they adjust until the quantity of a good that people want to buy exactly matches the quantity that sellers want to sell. In competitive markets, this happens automatically. If there is excess demand, prices rise. If there is excess supply, prices fall. No shortage, no surplus, no friction. Just perfect clearing.2
The combination of these two principles—Optimization + Equilibrium = Economics—is extraordinarily powerful. It allows economists to make specific, testable predictions about the world. Raise the price of milk, and people will buy less milk. Lower the tax on investment income, and people will save more. Let wages fall in a recession, and unemployment will shrink (wages adjust downward to clear the labor market, just as milk prices do). These predictions emerge cleanly from theory, which is why economics has the reputation of being "scientific" in a way that psychology, sociology, and anthropology are not. Economics can point to a set of core principles and derive nearly everything else from them.2
The problem is that the core principles are false.
Begin with a simple question: can you solve a constrained optimization problem?
Next time you visit a grocery store, try this. Count the number of distinct products available. A decent supermarket carries between 10,000 and 40,000 items, depending on size and format. Now imagine you have a household budget of $500 for groceries for the week. Your task: choose the combination of items that maximizes your household's utility, subject to the $500 constraint.
How long will this take you? A minute? An hour? A week?
The honest answer is: forever. The number of possible combinations is so astronomically large that even if you could evaluate one combination per millisecond, you could not examine all of them in a lifetime. This is not a limitation of your intelligence. This is a mathematical fact about combinatorial explosion.2
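The arithmetic behind this claim is easy to check. A minimal Python sketch, using hypothetical assumptions not in the text (the low end of the product count, and a basket of exactly 50 distinct items, which drastically understates the real choice set by ignoring quantities and all other basket sizes):

```python
import math

# Rough check of the combinatorial-explosion claim.
# Assumptions (illustrative, not from the text):
# 10,000 products (the low end of the range cited) and a 50-item basket.
products, basket_size = 10_000, 50

baskets = math.comb(products, basket_size)   # distinct 50-item baskets
digits = len(str(baskets))                   # order of magnitude

# One evaluation per millisecond, nonstop, for a 100-year lifetime:
lifetime_evals = 100 * 365 * 24 * 3600 * 1000

print(f"possible baskets: ~10^{digits - 1}")
print(f"lifetime budget:  ~10^{len(str(lifetime_evals)) - 1} evaluations")
```

Even this restricted version of the problem yields on the order of 10^135 candidate baskets against a lifetime budget of roughly 10^12 millisecond-speed evaluations; allowing quantities and every other basket size only widens the gap.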
Yet Econs do this calculation automatically. They do it in grocery stores, in housing markets, in labor markets, in everything. They do it correctly. They never leave money on the table. They never regret a purchase. They never buy something on impulse that they later wish they hadn't.
Humans, on the other hand, use shortcuts. We use rules of thumb. We satisfice—we find something good enough, not optimal. We anchor on reference points (this shirt is a bargain at $40 off the regular $100 price) rather than calculating intrinsic value. We ask our friends what they think. We trust brands. We avoid options that have burned us before. We do not solve constrained optimization problems. We approximate, heuristically, and this approximation often works fine. Sometimes it works terribly.2
When Thaler taught economics at Cornell and Rochester, he would pose optimization problems on exams designed to distinguish between students who mastered the material and those who didn't. The problems were genuinely hard. But here is what he discovered: students complained not about the difficulty of the material, but about the low average score. A class average of 72 out of 100 made them angry, even when Thaler had explained that the grading curve would result in normal grade distributions (A's, B's, C's as usual).2
Why did the numerical score matter if the letter grade didn't? Because the students were not Econs. The number 72 felt like a loss. It triggered a sense of failure. The students were comparing the score not to some optimal benchmark, but to a reference point (the round number 100), and they felt the divergence as a loss. Thaler solved this by making exams worth 137 points instead of 100. Now the average score was 96 points, and the students were happy—even though 96 out of 137 is about 70 percent (a slightly lower average than before) and the letter grades were unchanged. The only thing that changed was the frame—the reference point from which the score was evaluated.2
This experiment, trivial as it sounds, contains the entire behavioral economics enterprise in miniature. Humans are not optimizing machines. They do not evaluate outcomes against an objective standard. They evaluate them against a reference point—often the status quo, often a number they've anchored on, often a social comparison. The gap between outcome and reference point determines how they feel about the outcome. And feelings drive decisions far more than calculations do.
Econs have unbiased beliefs. If an Econ estimates that a new business has a 75% chance of succeeding, that is only because the evidence genuinely supports a 75% success rate. The Econ's belief is calibrated to reality.
Humans, on the other hand, are overconfident. Ask a group of entrepreneurs what percentage of new businesses succeed, and they will give you an answer somewhere in the 70-85% range. The actual figure is far lower—most sources put it around 20-30% depending on industry and time horizon. But ask those same entrepreneurs about their own business's chances, and they will say something like 80-90%. They believe their business is far more likely to succeed than the base rate would predict.2
This is not stupidity. This is not ignorance of the statistics. This is systematic bias. It is baked into how human minds work. We are overconfident about our abilities, our knowledge, our prospects. We underestimate risk. We overestimate our control. We are, as Thaler puts it, not Econs.2
There are dozens of documented biases like this: the availability heuristic (we think things that come easily to mind are more common than they actually are), hindsight bias (we think we knew the outcome was likely after the fact), the representativeness heuristic (we judge probability by how well something matches a stereotype), anchoring (we're influenced by irrelevant numbers we happen to see), and many more.1
None of these would be predicted by economic theory. In fact, economic theory explicitly rules them out by assumption. Yet they appear consistently in experiments, in surveys, and in real-world behavior. They are not anomalies. They are features of human cognition.
An Econ shopping on Sunday for a dinner to be eaten on Tuesday would not buy a larger portion just because they happen to be hungry on Sunday. Hunger on Sunday is irrelevant to decisions about Tuesday's dinner. The only thing that matters is: will you be hungry on Tuesday? If yes, buy enough food. If no, don't.
An Econ would not feel differently about paying $100 more depending on whether the extra charge arrives as an explicit fee (a "surcharge") or as a discount they failed to receive. The outcome is the same: they pay $100 more than the alternative price. But Humans feel these frames differently. Paying a surcharge feels like a loss. Not receiving a discount feels like a missed opportunity, which is less painful and more abstract.2
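The surcharge/discount asymmetry can be sketched with the standard prospect-theory value function. The function and its parameters are not from the text; they are Tversky and Kahneman's well-known 1992 estimates (loss aversion λ ≈ 2.25, diminishing sensitivity α ≈ 0.88), used here purely as an illustration:

```python
# Sketch of a prospect-theory value function (Kahneman & Tversky).
# Parameters are Tversky & Kahneman's 1992 estimates, not from the text:
LAMBDA, ALPHA = 2.25, 0.88

def value(x: float) -> float:
    """Felt value of an outcome x measured relative to a reference point."""
    return x**ALPHA if x >= 0 else -LAMBDA * (-x)**ALPHA

# The same $100 difference, framed two ways:
surcharge_pain = value(-100)   # explicit fee, coded as a loss       -> ~ -129.5
forgone_gain = value(100)      # discount not received, a gain missed -> ~ +57.5

print(abs(surcharge_pain) / forgone_gain)  # ~ 2.25: losses loom larger than gains
```

Under these assumed parameters, the identical $100 hurts more than twice as much when framed as a loss than when framed as a forgone gain, which is the asymmetry the surcharge/discount example describes.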
An Econ would not refuse to buy a bottle of wine at $100 just because it's the same wine they own bottles of at home and are currently refusing to sell for $100. If they're willing to drink a bottle worth $100 in opportunity cost, they should be willing to buy one at that price. Yet this is exactly what people do. Richard Rosett, the chairman of the economics department at the University of Rochester, had bottles in his cellar worth over $100 at market price. He would drink them on special occasions but would never buy replacements at that price, and he would never sell them even though he could pocket the money.2
These are what Thaler calls Supposedly Irrelevant Factors—SIFs. They are theoretically irrelevant. They should not affect decisions. Yet they are empirically essential. Humans do not ignore them. And when firms or policymakers assume that Humans will ignore them, the predictions fail, the policies backfire, and markets behave in unexpected ways.
Here is what makes this distinction dangerous: it is not merely academic. It affects real policy.
If you believe that everyone is an Econ, then you believe that people will save the right amount for retirement (if the information is available). You believe that no intervention is needed. Just publish the data about how much people need to save, and let the market work. People will figure it out.2
If you believe that people are Humans, you recognize that people are myopic, that they are present-biased, that they are not thinking about retirement at all. You recognize that they need help. You create automatic enrollment in retirement plans. You set default contribution rates. You use what Thaler calls a "nudge"—a small change in how choices are presented that steers people toward their own long-term interests without forbidding any option.2
Similarly, if you believe that financial markets are populated by Econs, you believe that bubbles are impossible (because Econs would recognize overvaluation and sell). You believe that the 2008 financial crisis could not happen (because markets are efficient). You believe that regulation is unnecessary (because competitive pressure forces firms to be rational). You believe that central banks should not try to manage asset prices (because prices are always right).2
If you believe that markets are filled with Humans, you recognize that herd behavior is possible, that people can get caught up in bubbles, that even sophisticated traders can be caught off guard by collective irrationality. You believe that regulation has a role. You believe that transparency and education matter. You believe that market prices can deviate substantially from fundamental values for extended periods.
The stakes are not small.
Given how spectacularly false the Econ model is—given that it fails to explain basic human decisions about food, wine, money, labor, and everything else—why does it persist?
There are several reasons, but the most important is this: the Econ model is incredibly useful as a starting point, even if it is false as a final answer. An Econ model is mathematically tractable. You can derive theorems from it. You can make specific predictions. You can build on it. A model that says "people are complicated and do all sorts of weird things" is not a model at all—it is a confession of ignorance.2
So economists made a deal with themselves. They would use Econ models as a useful fiction, a simplification, a stepping stone. The models would not be true, but they would be useful. And then something happened: the models became so established, so embedded in textbooks, so central to training, that people forgot they were fictions. They started to believe them. They started to defend them when evidence contradicted them. They started to make important policy decisions based on them.2
Thaler compares this to the history of physics. Physics began with idealized models: frictionless planes on which objects in motion stay in motion forever (Newton's first law), an idealization that fails conspicuously in everyday life (friction! air resistance!). But the idealization was useful. It allowed physicists to build models, make predictions, and ultimately refine them. Physics did not get stuck defending the idealization. It moved on, adding friction, air resistance, and hundreds of other factors to create more accurate models.2
Economics has not done this. Economics has stuck with the Econ model for decades, even as evidence accumulated that it was false. When anomalies were discovered—cases where people behaved in ways the model could not predict—they were often dismissed. Perhaps the subjects in the experiments were not thinking clearly. Perhaps the stakes were not high enough. Perhaps in the "real world," with real money, people would behave like Econs.2
Yet every time the stakes were raised, the anomalies persisted. Every time economists thought they had found a domain where people would be rational—financial markets, where there are enormous stakes and professional traders with every incentive to be rational—they found instead that people were even more systematically irrational.2
Over the past four decades, a small but growing group of economists, many trained in psychology as well as economics, have begun to build a more realistic model. They have not abandoned the Econ framework entirely. That framework remains useful for understanding how markets function when certain conditions are met (low stakes, easy problems, frequent feedback, opportunity to learn). But they have added Humans to the picture. They have added psychology.2
This movement is called behavioral economics, and it is not a different discipline. It is still economics. But it is economics done with strong injections of good psychology, and this change has cascading consequences.2
A behavioral economist asks: what do people actually do, not what should they do according to some theory? A behavioral economist builds models that explain why people behave in ways that seem irrational, and then asks whether these patterns of behavior persist in the real world or wash out. A behavioral economist designs policies with actual Humans in mind, not Econs.
This shift does not require abandoning all of economic theory. It requires relaxing one assumption: that people are optimizing Econs. Once that assumption is relaxed, space opens up for understanding how real decisions actually get made. And real decisions—how people save, how they spend, how they work, how they invest, how they react to prices and wages and risks—are far more interesting than the sanitized, rational decisions of Econs.2
Psychology: Ego Development Theory — Econ rationality is a specific developmental stage (Expert), not a universal capacity. Higher stages (Strategist, Constructive-Aware) recognize the limits of optimization and reason dialectically with constraints. Lower stages (Conformist, Achiever) are locked in social comparison and material success frames, making them more vulnerable to reference-dependent thinking. The shift from Econ to Human is partially a developmental shift: maturity means recognizing the fiction.
History: Machiavellian Realpolitik — Machiavelli was describing Humans, not Econs. His insights about power work because people are not rational optimizers. A ruler who treats his subjects as Econs would fail spectacularly; Machiavelli succeeds because he understands emotion, reference points, fairness violations, and how people actually respond to threats and incentives. Economic theory applied to statecraft fails; Machiavellian theory succeeds because it is behavioral.
Cross-Domain: Bias as Adaptive Heuristic — The cognitive biases and supposedly irrelevant factors that make Humans deviate from Econs are not defects. They are solutions to information processing problems in a world of scarcity and uncertainty. Overconfidence is a solution to paralysis by analysis. Loss aversion is a solution to catastrophic risk. Reference dependence is a solution to the impossible problem of assigning absolute value to everything. Understanding Humans requires understanding these biases not as bugs but as features of an adaptive system that works in the real world, not in the frictionless world of Econs.
The Sharpest Implication: If the entire edifice of economic theory rests on a false assumption about human nature, then every policy decision made by governments and central banks that relies on "people will respond rationally to incentives" is operating on sand. The 2008 financial crisis happened because economists and policymakers believed in Econs—in efficient markets, in rational actors, in the impossibility of bubbles. Policy recommendations about welfare, taxation, financial regulation, and labor markets all assume that people are Econs. If they are not, then the policy recommendations may be catastrophically wrong. This is not an academic problem. It is a practical problem with material consequences for millions of people.
Generative Questions: