
Mental Models Library

The Latticework: Thinking Tools That Think Through You

Most people think with the handful of models they absorbed by accident — from school, from early jobs, from whatever their culture treats as obvious. A mental model is a first-principles thinking tool: an explanatory pattern that reveals the deep structure of a situation rather than just describing its surface. A mental models library is the deliberate collection and internalization of these tools across domains — not as a reference list you consult but as a latticework of explanatory lenses running in the background of perception, shaping what you see before you consciously decide what to think.1

The distinction that matters: mental models are not heuristics. A heuristic is a rule of thumb — "birds of a feather flock together," "diversify your investments." It compresses past experience into guidance for familiar situations. A mental model is explanatory — it tells you what's actually happening in the situation, which then generates the heuristic as a natural consequence. "Supply and demand" is a model; "buy low, sell high" is the heuristic it produces. If you only have the heuristic, you're dependent on the situation matching the case the heuristic was built for. If you have the model, you can generate appropriate heuristics for novel situations by running the model on them.

Charlie Munger's "latticework of mental models" is the clearest working example: models from physics, biology, economics, psychology, mathematics, and history, each available for deployment against any problem encountered. The latticework metaphor is precise: the models support each other, intersect, and collectively hold more than any single strand could. When Munger encountered a business problem, he wasn't asking "what's the right thing to do?" — he was running multiple models against the situation simultaneously and reading what each revealed.2

The Library Mechanics (What Makes It Compound)

The compounding effect works through two mechanisms:

Cross-model illumination. Each model you add increases the number of intersections in the latticework. A model from evolutionary biology (selection pressure, variation, fitness landscape) running against a model from economics (incentive structures, equilibrium, marginal value) against a model from information theory (signal-to-noise, redundancy, compression) produces pattern recognition that none of the three models alone can generate. The value isn't in any individual model but in the density of the network.
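
A quick way to see why density matters: if each pair of models can form an intersection, the number of potential intersections grows roughly with the square of library size. The sketch below is a deliberate toy (it assumes every pairing is equally generative, which real models are not), but it shows the shape of the growth.

```python
from math import comb

# Toy illustration: count pairwise intersections in a latticework of n models.
# Assumes every pair can collide productively, which is a simplification --
# the point is only the quadratic shape of the growth.
for n_models in (3, 5, 10, 20):
    pairs = comb(n_models, 2)  # n * (n - 1) / 2 possible model-to-model collisions
    print(f"{n_models:>2} models -> {pairs:>3} pairwise intersections")

# Output:
#  3 models ->   3 pairwise intersections
#  5 models ->  10 pairwise intersections
# 10 models ->  45 pairwise intersections
# 20 models -> 190 pairwise intersections
```

Going from 10 to 20 models roughly quadruples the possible collisions, which is the arithmetic behind the claim that the value lies in the density of the network rather than in any individual model.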

Model refinement through collision. When two models generate contradictory predictions for the same situation, you've found something genuinely interesting: either one model is wrong (or inapplicable), or the situation is more complex than either model captures, or there's a deeper model that reconciles them. All three outcomes sharpen your thinking. The person with one model never has this experience — their model is always right because it's the only available explanation.

The Library Failures (Diagnostic Signs)

Model rigidity. Applying a beloved model to situations it doesn't fit. The person with a hammer sees every problem as a nail — but the version with mental models is the person who's internalized one powerful model (incentive structures, confirmation bias, compound interest) and applies it as the primary explanation for everything they encounter. The model becomes a lens they see through rather than a tool they pick up and put down.

Model hoarding. Collecting models intellectually without internalizing them deeply enough for automatic deployment. The latticework only works if the models are running as a background process — not if you have to consciously remember "now I'll apply the Pareto principle." Collected but uninternalized models are a catalogue, not a latticework.

Abstraction escape. Using models as a way to avoid the specific, messy, particular reality of the situation at hand. The person skilled with mental models can always generate a plausible abstract explanation; this capacity can become a defense against engaging directly with the specific evidence of a specific situation.

Evidence / Tensions / Open Questions

The mental-model construct is well established in cognitive science, though the specific "library building" practice is primarily practitioner-level rather than formally researched. [POPULAR SOURCE]1

Tension with Polymathic Breadth: Models are extracted from domain immersion — but which models transfer across domains? The same surface model can have completely different structural implications in different domains, and the person who carries a model from one domain into another without checking its applicability is doing something dangerous. The physics concept of entropy (roughly: the entropy of an isolated system never decreases) has been productively applied to information theory and economics — but these applications required rigorous examination of whether the structural analog holds, not just a casual borrowing of the term.
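
For the entropy case specifically, the reason the borrowing survives scrutiny is that the structural analog is exact up to a constant and a choice of logarithm; this is a standard textbook identity rather than anything from the sources cited here. Gibbs entropy in statistical mechanics and Shannon entropy in information theory share the same form:

$$
S = -k_B \sum_i p_i \ln p_i
\qquad
H = -\sum_i p_i \log_2 p_i
$$

Both measure how spread out a probability distribution is (over microstates in one case, over messages in the other), which is why the transfer carries mathematics and not just a metaphor.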

Open Questions:

  • Is there a core set of models — a minimum viable latticework — that provides most of the cross-domain collision benefit? Or is the benefit genuinely proportional to library size with no meaningful plateau?

Cross-Domain Handshakes

Archetypes, in the Jungian/archetypal thinking framework, are described as "living patterns" rather than frameworks — but functionally they do the same work as mental models: they provide a recognition system that lets you identify which of a finite number of deep structures a situation instantiates. The difference is the register: mental models are drawn from formal disciplines (economics, biology, physics) and are epistemically verifiable; archetypes are drawn from mythology and psychology and operate at the level of meaning and narrative. A person with both systems has access to structural recognition at two levels — the causal/mechanical level (mental models) and the meaning/narrative level (archetypes). Together they produce a richer situational understanding than either alone.

  • Integrative Complexity: Mental models are tools; Integrative Complexity is the cognitive capacity to run conflicting tools simultaneously without forcing premature resolution. A rich models library without IC produces a person who switches between models sequentially, applying whichever fits best. With IC, you hold multiple models against the same situation simultaneously and read the tension between them as information (a toy sketch of this move follows this list). The library provides the material; IC determines what you do with contradictory outputs.

  • Long Game Orientation: The compounding effect of a mental models library requires time — models need to be internalized, not just collected, and internalization happens through repeated application across varied situations over months and years. The Long Game orientation provides the patience and temporal framing within which the library can actually compound rather than remaining a collection.
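
To make the "tension as information" idea concrete, here is a toy sketch in Python (nothing in it comes from the sources: the lens names, the situation fields, and the scores are all invented). Each mental model is treated as a rough scoring function over the same situation, and the spread of the outputs is returned as a signal in its own right rather than averaged away.

```python
from statistics import pstdev

# Hypothetical lenses: each one scores the same situation from a different
# domain's point of view. Names, fields, and numbers are invented for illustration.

def incentives_lens(situation):      # economics: how well are incentives aligned?
    return situation["incentive_alignment"]

def selection_lens(situation):       # biology: what survives the current pressure?
    return situation["fitness_under_pressure"]

def signal_lens(situation):          # information theory: how clean is the read?
    return situation["signal_to_noise"]

def read_latticework(situation, lenses):
    """Run every lens against the same situation; report scores and their spread."""
    scores = {lens.__name__: lens(situation) for lens in lenses}
    tension = pstdev(scores.values())  # high spread = the models disagree
    return scores, tension

situation = {
    "incentive_alignment": 0.9,      # incentives look great...
    "fitness_under_pressure": 0.2,   # ...but the thing is not surviving contact
    "signal_to_noise": 0.7,
}

scores, tension = read_latticework(
    situation, [incentives_lens, selection_lens, signal_lens]
)
print(scores)
print(f"tension = {tension:.2f}  # the disagreement itself is the interesting signal")
```

The design choice that matters is that read_latticework returns the disagreement alongside the individual readings; collapsing them into a single consensus score would be exactly the premature resolution the Integrative Complexity entry warns against.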

The Live Edge

The Sharpest Implication

The most uncomfortable implication of the mental models framework is that most domain expertise is more parochial than it appears. An expert in a field has typically internalized the models of their field and the heuristics those models generate — but rarely the models from adjacent fields that would let them see their own domain from the outside. The specialist's expertise is real and deep, but it's built on a set of foundational models that the specialist usually cannot name because they've never encountered a framework that required them to. The models are invisible because everything runs through them. A rich cross-domain models library doesn't primarily help you think better within any domain — it primarily makes your own domain's hidden assumptions visible.

Generative Questions

  • If models are invisible until you encounter a framework that names them, what's the most effective way to surface the implicit models you're already running — before you've built the cross-domain library that would make them visible?
  • The latticework metaphor implies that models support each other structurally — but is that true? Are there models that are genuinely incompatible at the foundational level, such that adding them to a latticework destabilizes it rather than enriching it?

Connected Concepts

Footnotes