Evolutionary psychology seems to dissolve morality into mechanism—survival and reproduction drives, status competition, kin favoritism. It appears to explain away ethics as genetic selfishness wearing a virtuous mask. Yet Darwin and John Stuart Mill arrived, independently, at strikingly similar moral positions: that the goal of human action should be to maximize overall happiness and minimize suffering.1 Mill developed utilitarianism as a philosophical framework; Darwin arrived at it through studying human nature. Neither was consciously building evolutionary theory into ethics. Yet their conclusions align because both were describing the same underlying reality: humans care about suffering—their own and others'—and this caring can be channeled toward systematic reduction of suffering at scale.
The convergence is not accidental. It reveals something profound about the relationship between evolutionary explanation and moral philosophy: explaining why humans have moral sentiments does not necessarily undermine those sentiments. Understanding that our capacity for empathy evolved to facilitate cooperation in small groups does not prevent us from using that capacity to coordinate suffering-reduction across millions.2
Utilitarian philosophy rests on a simple premise: suffering is bad; happiness is good. We should maximize the latter and minimize the former. This seems obvious until you ask: why? Why should we care about strangers' suffering? Why is overall happiness a legitimate moral goal rather than, say, tribal honor, or genetic spread, or personal virtue?
Evolutionary psychology provides an answer: humans evolved moral sentiments—emotions like empathy, guilt, fairness intuition, and disgust—that made cooperation possible in ancestral environments. These emotions operate through what Wright calls a "moral sense"—the immediate, pre-rational conviction that certain acts are right or wrong.3 We don't deliberate about whether we should feel horror at cruelty; we feel it automatically. We don't reason our way to caring about others' suffering; we experience it through empathy. The moral sense is an evolved mechanism, not a rational deduction.
But here's the strange bridge: once you have this moral sense (whether through evolution, developmental learning, or cultural transmission), you can build rational ethics on top of it. Utilitarianism takes the basic moral intuition—suffering matters, happiness matters—and extends it through logic. If suffering matters, and you can reduce suffering by policy change X, and the cost to other goods is acceptable, then X is moral. The reasoning is rational; the foundation is emotional.4
This creates a peculiar relationship between evolutionary explanation and ethical philosophy: evolution explains the foundation (why we care), but does not determine the structure (what caring entails at scale). A human with strong empathy and fairness intuitions can, through reasoning, arrive at conclusions evolution never "intended"—like committing resources to preventing suffering among genetic strangers, or sacrificing reproductive opportunity for abstract principle.5
But evolutionary psychology complicates utilitarianism in uncomfortable ways. If moral sentiments evolved for reproductive success, not happiness-maximization, then happiness may be merely a proxy for the outcomes evolution actually cares about. We feel good when we gain status, reproduce successfully, secure resources, and care for kin. Does maximizing happiness as such actually maximize these underlying fitness-relevant outcomes?
Modern life suggests the answer is no. Contraception allows happiness (sexual pleasure) without reproduction. Wealth allows comfort without fitness gain. Status signaling through consumption creates happiness without any corresponding gain in real resource control. The systems that generated our moral sentiments operated under ancestral conditions where happiness-relevant outcomes (resource abundance, status, reproductive access) and fitness-relevant outcomes (genetic spread) aligned. In modern environments, they systematically diverge.6
Mill and Darwin, working in the 19th century, did not fully confront this problem. They assumed that promoting human happiness would generally promote human flourishing. Wright, writing in 1994, sees the divergence clearly: we can make people happy through technologies and policies that reduce reproductive success, and there is no obvious reason evolution cares about our happiness separate from its impacts on genes.7
Yet this is not a fatal problem for utilitarianism—it is a clarification. Utilitarianism doesn't claim to be natural; it claims to be rational. The fact that maximizing happiness diverges from maximizing reproductive success is a feature, not a bug. It means humans have the capacity to pursue goals not shaped by selection—to value things beyond fitness. This capacity is itself a product of evolution (our ability to reason and project values), but the content of those values can escape the evolutionary logic that produced the capacity.
Evolutionary psychology explains why moral sentiments evolved in small-group contexts: reciprocal altruism required emotional commitment to fairness, guilt-based punishment of cheaters, and gratitude-based reward of cooperators. These emotions work through face-to-face interaction where reputation matters and repeated interaction is likely.8
But modern humans use these emotions in radically different contexts. A person watching a news report about suffering in a distant country feels empathy—the same emotion that evolved for caring for members of one's band. That empathy now targets people they will never meet, with whom they have no reciprocal relationship, and whom they can only help through abstract mediated channels (donations, policy advocacy). The emotion is ancestral; its application is unprecedented.9
This scaling creates a puzzle: why does empathy extend beyond contexts where it evolved to be useful? One possibility is that the emotion is misaligned with modern contexts—we feel empathy for distant others, but this feeling doesn't actually serve cooperation because reputation incentives don't apply. We're running ancestral software in novel contexts.
Another possibility is that the scaling reveals something deeper: the moral sentiments evolved not just for reciprocal altruism, but for something more abstract—concern for suffering as such, fairness as a principle, not merely as a strategy. Darwin's own moral development involved recognizing that distant others' suffering matters—that it was wrong to benefit from slavery even if no reciprocal relationship was possible. This extension seems to violate the logic of reciprocal altruism, yet it feels morally authentic to anyone who experiences it.10
Utilitarianism, philosophically, says: trust this authentic feeling. If suffering matters at all, it matters everywhere suffering occurs. The fact that the emotion scales beyond its ancestral context is not a bug—it's evidence that the foundation is real rather than merely functional.
Darwin vs. Mill on the Source of Moral Motivation
Darwin located morality in evolved sentiments—animals with social instincts experience morality as real emotion, not reasoned principle. Moral development is learning to extend these sentiments in wider circles, from family to tribe to species.11 The emotion comes first; philosophy formalizes what emotion already knows.
Mill, working as a philosopher, argued that morality must have a rational foundation—that sentiments alone cannot justify universal moral claims. Reason must examine whether our moral intuitions are consistent and whether they extend logically to conclusions we haven't yet reached.12 Philosophy comes first; it shapes which emotions are justified.
The tension is between sentiment and reason as the foundation of ethics. Yet both thinkers agree on the conclusion: humans should care about suffering generally, not just their own tribe's. For Darwin, this is where the extension of sympathy naturally leads. For Mill, this is where utilitarian logic demands we go. The same conclusion, reached through different justifications.
The synthesis appears in Wright: humans are motivated by sentiments (primarily), but those sentiments can be examined and extended through reason. The sentiments are real and powerful, but they are also revisable through philosophical reflection. Darwin's sympathy and Mill's logic operate together—emotion provides the initial motivation to care, reason provides the systematic way to extend that care.13
Wright vs. Evolutionary Cynics on the Reality of Morality
One reading of evolutionary psychology is deeply cynical: morality is genetic selfishness in disguise; we only care about others because evolution made us care, and only as far as reproductive interests extend. From this view, the apparent universality of moral concern is illusion—we're running algorithms designed for tribal advantage, extended to modern contexts where they no longer apply.14
Wright resists this cynicism while acknowledging its basis. Yes, moral sentiments evolved for reproductive success. Yes, they are mechanisms, not transcendent truths. Yes, they misalign with modern contexts in important ways. But none of this means morality is fake. The emotion is real. The concern is genuine. The confusion is treating mechanism-explanation as debunking—as if explaining the origin of a feeling dissolves its validity.15
The tension is whether evolutionary explanation of morality undermines moral philosophy or enriches it. The cynic says: "It's just evolution; therefore morality is illusion." Wright and Darwin suggest: "It's evolved sentiment; therefore we understand the foundation, and we can build on it."
The classical philosophical problem: from facts about what is, how do we derive conclusions about what ought to be? Evolution describes what is—humans have moral sentiments shaped by selection. Utilitarianism prescribes what ought to be—maximize happiness and minimize suffering. How can the latter follow from the former?
The traditional answer is that it can't: treating descriptions of nature as prescriptions for conduct is the naturalistic fallacy. Just because evolution shaped us to care about certain things does not mean we should care about those things. Just because humans have aggressive impulses does not mean aggression is justified.
But evolutionary psychology offers a subtler answer. Evolution didn't shape humans to pursue arbitrary goals—it shaped them to care about real things: avoiding suffering, experiencing pleasure, maintaining relationships, securing status. The fact that these things were fitness-relevant in ancestral conditions doesn't make them less real or less worth pursuing. It explains why they matter to us, but that explanation doesn't destroy their mattering.16
The handshake is that evolutionary explanation doesn't commit the naturalistic fallacy if it treats the explanation as deepening rather than replacing moral intuition. Yes, empathy is an evolved mechanism. But the mechanism connects us to something real—others' suffering is real, and mattering to us through empathy is a valid way for suffering to matter. Understanding the mechanism doesn't dissolve the obligation; it clarifies its foundation.17
Both Darwin and Mill were writing in 19th-century England, a period of rapid social change, imperial expansion, and moral re-evaluation. Slavery was being abolished, democratic representation was expanding, and questions about the moral status of distant peoples (in colonies) and future generations (through industrialization) were becoming urgent.
Mill's utilitarianism offered a universal framework for addressing these questions: if suffering matters, then the suffering of colonized peoples matters just as much as the suffering of English citizens. Darwin's moral sentiments pointed in the same direction: sympathy, properly understood, should extend across all humans.
But both thinkers were also embedded in Victorian assumptions that we now recognize as morally problematic. Darwin's racial hierarchies, Mill's paternalism toward colonized peoples he judged unready for self-government—both were inconsistent with their own moral frameworks, yet neither fully recognized the inconsistency. The utilitarian logic their work developed could eventually be used to critique their own blind spots, but they couldn't see those blind spots while embedded in them.18
The handshake is that evolutionary psychology and utilitarian philosophy are not culturally transcendent. They emerge from specific historical moments and embody the moral blindnesses of those moments. Yet they also provide frameworks that later thinkers can use to critique those blindnesses. Darwin's logic about extending sympathy, once developed, can be turned on Darwin himself—his racial hierarchies violate the symmetrical concern for suffering his own theory demands.19
Evolutionary psychology describes the mechanisms of moral judgment—how humans actually make moral decisions, what emotional systems drive them, what cognitive biases shape them. It is a descriptive science of how morality works.
Moral philosophy describes the justifications for moral judgment—what makes a moral claim true, what reasons count as valid, what principles should guide conduct. It is a normative discipline concerned with how morality should work.
These are not the same domain. A mechanism can be evolutionarily shaped and still produce valid moral conclusions; a moral principle can be philosophically justified even if it departs from how humans actually reason. The handshake is understanding how they interact: psychological mechanisms generate moral conclusions, and those conclusions can then be examined philosophically for consistency and validity. The mechanisms are real, the justifications are real, and they operate through each other rather than replacing each other.20
If evolutionary psychology is correct, and moral sentiments evolved for reproductive success in ancestral conditions, then modern moral philosophy might be built on a foundation that is increasingly misaligned with its own logic. We use empathy—evolved for small-group cooperation—to justify massive resource transfers to genetic strangers. We use fairness intuition—evolved to enforce reciprocal relationships—to argue for rights that presuppose no reciprocal obligation. We are applying emotional systems designed for one problem to domains they were never shaped to address.
This could mean utilitarianism is built on an unstable foundation—that we're extending empathy beyond its designed limits, and the whole philosophical structure might collapse if we fully accepted that these emotions are just evolved mechanisms. Or it could mean something more interesting: that our evolved capacity for morality is actually real, not merely functional, and it's precisely this reality that allows it to escape the ancestral logic that produced it. You cannot tell from the structure of moral philosophy alone which is true. But evolutionary psychology provides the framework for asking the question precisely.