How do you think the most rational people in the world operate their minds? How do they make better decisions?
They do it by “chunking” away a massive, but finite amount of fundamental, unchanging knowledge that can be used in evaluating the infinite number of unique scenarios which show up in the real world.
That is how consistently rational and effective thinking is done, and if we want to learn how to think properly ourselves, we need to figure out how it’s done. Fortunately, there is a way, and it works.
Munger’s system is akin to “cross-training for the mind”—not siloing ourselves in the small, limited area we may have studied in school, but chunking away a broadly useful set of knowledge about the world, which will serve us in all parts of life.
In a famous speech in the 1990s, Munger explained his novel approach to gaining practical wisdom:
Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ’em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form.
You’ve got to have models in your head. And you’ve got to array your experience both vicarious and direct on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You’ve got to hang experience on a latticework of models in your head.
What are the models? Well, the first rule is that you’ve got to have multiple models because if you just have one or two that you’re using, the nature of human psychology is such that you’ll torture reality so that it fits your models, or at least you’ll think it does…
It’s like the old saying, “To the man with only a hammer, every problem looks like a nail.” And of course, that’s the way the chiropractor goes about practicing medicine. But that’s a perfectly disastrous way to think and a perfectly disastrous way to operate in the world. So you’ve got to have multiple models.
And the models have to come from multiple disciplines because all the wisdom of the world is not to be found in one little academic department. That’s why poetry professors, by and large, are so unwise in a worldly sense. They don’t have enough models in their heads. So you’ve got to have models across a fair array of disciplines.
You may say, “My God, this is already getting way too tough.” But, fortunately, it isn’t that tough because 80 or 90 important models will carry about 90% of the freight in making you a worldly wise person. And, of those, only a mere handful really carry very heavy freight.(1)
Taking Munger’s concept as our starting point, we can figure out how to use our brains more effectively by building our own latticework of mental models.
The central principle of the mental model approach is that you must have a large number of them, and they must be fundamentally lasting ideas.
As with physical tools, the lack of a mental tool at the crucial moment can lead to a bad result, and the use of a wrong mental tool is even worse.
Self-evident as this may seem, it’s actually a very unnatural way to think. Without the right training, most minds take the wrong approach. They prefer to solve problems by asking: Which ideas do I already love and know deeply, and how can I apply them to the situation at hand? Psychologists call this the “Availability Heuristic,” and its power is well documented.
You know the old adage: To the man with only a hammer, everything starts looking a bit like a nail. Such narrow-minded thinking feels entirely natural to us, but it leads to far too many misjudgments. You probably do it every single day without knowing it.
It’s not that you don’t have some good ideas in your head. You probably do! No competent adult is a total klutz. It’s just that we tend to be very limited in our good ideas, and we over-use them. This makes our good ideas just as dangerous as bad ones!
The great investor and teacher Benjamin Graham explained it best:
You can get in way more trouble with a good idea than a bad idea, because you forget that the good idea has limits.
Smart people like Charlie Munger realize that the antidote to this sort of “mental overreaching” is to add more models to your mental palette; to expand your repertoire of ideas, making them vivid and available in the problem-solving process.
You’ll know you’re on to something when ideas start to compete with one another — you’ll find situations where Model 1 tells you X and Model 2 tells you Y. Believe it or not, this is the sign that you’re on the right track: Letting the models compete and fight for superiority and greater fundamental truth is what good thinking is all about! It’s hard work, but that’s the only way to get the right answers.
It’s a little like learning to walk or ride a bike: At first, you can’t believe how much you’re supposed to do all at once, but eventually, you wonder how you ever didn’t know how to do it.
As Charlie Munger likes to say, going back to any other method of thinking would feel like cutting off your hands. Our experience confirms the truth of Munger’s dictum.
More About Mental Models
What kinds of knowledge are we talking about adding to our repertoire?
It’s the Big, Basic Ideas of all the truly fundamental academic disciplines. The stuff you should have learned in the “101” course of each major subject but probably didn’t. These are the true general principles that underlie most of what’s going on in the world.
Things like: The main laws of physics. The main ideas driving chemistry. The big, useful tools of mathematics. The guiding principles of biology. The hugely useful concepts from human psychology. The central principles of systems thinking. The working concepts behind business and markets.
These are the winning ideas. For all of the “bestselling” crap that is touted as the new thing each year, there is almost certainly a bigger, more fundamental, and more broadly applicable underlying idea that we already knew about! The “new idea” is thus an application of old ideas, packaged into a new format.
Yet we tend to spend the majority of time keeping up with the “new” at the expense of learning the “old”! This is truly nuts.
The mental models approach inverts the process to the way it should be: Learning the Big Stuff deeply and then using that powerful database every single day.
The over-arching goal is to build a powerful “tree” of the mind with strong and deep roots, a massive trunk, and lots of sturdy branches. We use this to hang the “leaves” of experience we acquire, directly and vicariously, throughout our lifetime: The scenarios, decisions, problems, and solutions arising in any human life.
Now, let’s start by exploring the actual models we’ve found useful in more depth by clicking the links below.
And remember: Building your latticework is a lifelong project. Stick with it, and you’ll find that your ability to understand reality, make consistently good decisions, and help those you love will always be improving.

The Farnam Street Latticework of Mental Models
1. Inversion

Otherwise known as thinking through in reverse or thinking “backwards,” inversion is a problem-solving technique. Often, by considering what we want to avoid rather than what we want to get, we come up with better solutions. Inversion works not just in mathematics but in nearly every area of life: As the saying goes, “Just tell me where I’m going to die so I can never go there.”
2. Falsification

Closely related to inversion, and popularized by the philosopher Karl Popper, is the principle of falsification, under which the modern scientific enterprise operates: A theory is termed scientific if it can be stated in such a way that a certain defined result would prove it false. Pseudo-knowledge and pseudo-science operate and propagate by being unfalsifiable — as with astrology, we are unable to prove them either correct or incorrect because the conditions under which they would be shown false are never stated.
3. Circle of Competence

An idea introduced by Warren Buffett and Charlie Munger in relation to investing: Each individual tends to have an area or areas in which they really, truly know their stuff — their area of special competence. Areas outside of that circle are problematic not only because we are ignorant, but because we may be ignorant of our own ignorance. Thus, when making decisions, it becomes important to define and attend to our special circle, so as to act accordingly.
4. The Principle of Parsimony (Occam’s Razor)
Named after the friar William of Ockham, Occam’s Razor is a heuristic by which we select among competing explanations. Ockham held that we should prefer the simplest explanation with the fewest moving parts: Such explanations are easier to falsify (see: Falsification), easier to understand, and, on average, more likely to be correct. It is not an iron law but a tendency and a frame of mind: If all else is equal, it’s more likely that the simple explanation suffices. Of course, we also keep in mind Einstein’s famous idea (even if apocryphal) that “an idea should be as simple as possible, but not simpler.”
5. Hanlon’s Razor

Harder to trace in its origin, Hanlon’s Razor states that we should not attribute to malice that which is more easily explained by stupidity. In a complex world, this principle helps us avoid extreme paranoia and ideology, both often very hard to escape, by not generally assuming that bad results are the fault of a bad actor, although they sometimes are. More likely, a mistake has simply been made.
6. Second-Order Thinking

In all human systems and most complex systems, the second layer of effects often dwarfs the first, yet frequently goes unconsidered. In other words, we must consider that effects have effects. Second-order thinking is best illustrated by the idea of standing on your tiptoes at a parade: Once one person does it, everyone will do it in order to see, negating the first tiptoer’s advantage. Now the whole parade suffers on its toes rather than its feet.
7. The Map Is Not the Territory

The map of reality is not reality itself. If any map were to represent its territory with perfect fidelity, it would have to be the size of the territory itself — and thus no map at all. This tells us that there will always be an imperfect relationship between the models we use to represent and understand reality and reality itself: Simplification is a necessity. All we can do is accept this and act accordingly.
8. Thought Experiment

A technique popularized by Einstein, the thought experiment is a way to logically carry out a “test” in one’s own head that would be very difficult or impossible to perform in real life. With the thought experiment as a tool, we can use intuition and logic to solve problems that cannot be demonstrated physically, as with Einstein imagining himself riding on a beam of light in order to work out relativity.
9. Mr. Market
Mr. Market was introduced by the investor Benjamin Graham in his seminal book The Intelligent Investor to represent the vicissitudes of the financial markets. As Graham explains, the market is a bit like a moody neighbor, sometimes waking up happy and sometimes waking up sad — your job as an investor is to buy from him in his bad moods and sell to him in his good moods. This attitude is contrasted with the “efficient market” hypothesis, in which Mr. Market always wakes up in the middle of the bed, never feeling overly strong in either direction.
10. Probabilistic Thinking (See also: Numeracy/Bayesian Updating)
The unknowable human world is dominated by probabilistic outcomes, as distinguished from deterministic ones. While we cannot predict the future with great certainty, we are wise to ascribe odds to more and less probable events. We do this unconsciously every day as we cross the street, ascribing low, yet not negligible, odds to being hit by a car.
Numeracy

1. Permutations & combinations
The mathematics of permutations and combinations leads us to understand the practical probabilities of the world around us: how things can be ordered, and how we should count the possibilities in front of us.
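Python’s standard library handles this counting directly. A small sketch (the runners and cards are invented examples, not from the text above):

```python
import math

# Ordered arrangements (permutations): how many ways can 3 of 10
# runners fill the gold, silver, and bronze positions?
podium_orders = math.perm(10, 3)  # 10 * 9 * 8 = 720

# Unordered selections (combinations): how many distinct 5-card
# hands can be dealt from a 52-card deck?
poker_hands = math.comb(52, 5)    # 2,598,960
```

The distinction is exactly the one the model points at: when order matters, the count is far larger than when it doesn’t.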
2. Algebraic equivalence
The introduction of algebra allowed us to demonstrate mathematically and abstractly that two seemingly different things could be the same. By manipulating symbols, we can demonstrate equivalence or inequivalence, a capability that has given humanity untold engineering and technical abilities. Knowing at least the basics of algebra allows us to understand a variety of important results.
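A numerical spot-check (not a proof — algebra supplies the proof) can illustrate how two different-looking expressions are in fact the same; the identity chosen here is my own example:

```python
import random

random.seed(3)  # deterministic for illustration

# (a + b)^2 and a^2 + 2ab + b^2 look different but are algebraically identical.
def close(x, y, tol=1e-6):
    return abs(x - y) < tol

trials = [(random.uniform(-100, 100), random.uniform(-100, 100)) for _ in range(1_000)]
equivalent = all(close((a + b) ** 2, a * a + 2 * a * b + b * b) for a, b in trials)
```

The spot-check can only fail to find a counterexample; the symbolic manipulation is what demonstrates the equivalence holds everywhere.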
3. Randomness

Though the human brain has trouble comprehending it, much of the world is composed of random, non-sequential, non-ordered events. We are “fooled” by random effects when we attribute causality to things that are actually outside of our control. If we don’t course-correct for this fooled-by-randomness effect — our tendency to seek false patterns — we will see things as being more predictable than they are and act accordingly.
4. Stochastic processes (Poisson, Markov, Random walk)
A stochastic process is a random statistical process, encompassing a wide variety of processes in which the movement of an individual variable can be impossible to predict but can be thought through probabilistically. Stochastic methods help us describe systems of variables through probabilities without necessarily being able to determine the position of any individual variable over time. For example, it’s not possible to predict stock prices on a day-to-day basis, but we can describe the probability distribution of their movements over time. Obviously, it is much more likely that the stock market (a stochastic process) will be up or down 1% in a day than up or down 10%, even though we can’t predict what tomorrow will bring.
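A minimal random-walk sketch makes the point (a coin-flip walk, not a stock model — the parameters are invented): no individual step is predictable, yet the process as a whole can be described.

```python
import random

random.seed(42)  # deterministic for illustration

def random_walk(steps):
    """Symmetric random walk: each step moves +1 or -1 with equal probability."""
    position = 0
    path = [position]
    for _ in range(steps):
        position += random.choice((1, -1))
        path.append(position)
    return path

path = random_walk(1_000)
# Any single step is a coin flip, yet small net moves are far more likely
# than large ones, just as a 1% market day is far more likely than a 10% day.
```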
5. Compounding

It’s been said that Einstein called compounding a wonder of the world. He probably didn’t, but it is a wonder: Compounding is the process by which we add interest to a fixed sum, which then earns interest on both the original sum and the newly added interest, and then on that, and so on ad infinitum. It is an exponential effect, rather than a linear (additive) one. Money is not the only thing that compounds: Ideas and relationships do as well. In tangible realms, compounding is always subject to physical limits and diminishing returns; intangibles can compound more freely. Compounding also leads to the time value of money, which underlies all of modern finance.
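The gap between linear and exponential growth is easy to see in a few lines (7% over 30 years is an illustrative rate of my own choosing):

```python
def simple_interest(principal, rate, periods):
    """Interest on the original sum only: linear, additive growth."""
    return principal * (1 + rate * periods)

def compound_interest(principal, rate, periods):
    """Interest earning interest on itself: exponential growth."""
    return principal * (1 + rate) ** periods

linear = simple_interest(1_000, 0.07, 30)         # 3,100
exponential = compound_interest(1_000, 0.07, 30)  # ~7,612
```

Over short horizons the two are close; over long horizons the exponential curve dwarfs the linear one, which is the whole point of the model.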
6. Multiplying by Zero

Any reasonably educated person knows that any number, no matter how large, multiplied by zero is still zero. This is true in human systems as well as mathematical ones: In some systems, a failure in one area can negate great effort in all other areas. As simple multiplication would show, fixing the “zero” often has a much greater effect than trying to enlarge the other areas.
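In code the asymmetry is stark. A sketch of a multiplicative system (the factors and scores are invented for illustration):

```python
from math import prod

# Multiplicative "system quality": each factor scores between 0 and 1.
strong_system = {"trust": 0.9, "competence": 0.8, "reliability": 0.9}
overall = prod(strong_system.values())  # ~0.648

# Zero out a single factor and the whole product collapses to zero,
# no matter how strong the remaining factors are.
broken_system = {**strong_system, "trust": 0.0}
ruined = prod(broken_system.values())   # 0.0
```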
7. Churn

Insurance companies and subscription services are well aware of the concept of churn: Every year, a certain number of customers are lost and must be replaced. Standing still is the equivalent of losing, as seen in the “Red Queen Effect” model. Churn is present in many business and human systems: A constant figure is periodically lost and must be replaced before any new figures can be added on top.
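A minimal sketch with invented figures shows why sign-ups below the churn line are treading water:

```python
def customers_after(start, churn_rate, new_per_year, years):
    """Each year a fixed fraction churns away; new sign-ups are added on top."""
    customers = start
    for _ in range(years):
        customers = customers * (1 - churn_rate) + new_per_year
    return customers

# With 20% annual churn, 10,000 customers need 2,000 new sign-ups a year
# just to stand still; only sign-ups beyond that figure produce growth.
standing_still = customers_after(10_000, 0.20, 2_000, years=10)  # holds at 10,000
growing = customers_after(10_000, 0.20, 3_000, years=10)
```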
8. Law of Large Numbers
One of the fundamental underlying assumptions of probability is that as the number of instances grows, the actual results converge on the expected ones. For example, if I know that the average man is 5 feet 10 inches tall, I am far more likely to get an average of 5’10” by selecting 500 men at random than by selecting 5. The flipside of this model is the law of small numbers: Small samples can and should be regarded with great skepticism.
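A quick simulation sketch (assuming, for illustration, heights normally distributed around 70 inches with a 3-inch spread):

```python
import random

random.seed(0)  # deterministic for illustration

def average_height(n, mean=70.0, spread=3.0):
    """Average height (inches) of n men sampled from an assumed population."""
    return sum(random.gauss(mean, spread) for _ in range(n)) / n

small_sample = average_height(5)
large_sample = average_height(500)
# The 500-man average is expected to land far closer to the true mean of 70:
# its sampling error shrinks with the square root of the sample size.
```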
9. Bell Curve/Normal distribution
The normal distribution is a statistical distribution that produces the well-known graphical representation of a bell curve: A meaningful central average, with deviations from that average becoming increasingly rare, when correctly sampled. (Many aggregated quantities tend toward this shape, a result known as the central limit theorem.) Well-known examples include human height and weight, but it’s just as important to note that many common processes, especially in non-tangible systems like social systems, do not follow the normal distribution.
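A sketch of the central limit theorem at work, with illustrative parameters of my own: sums of many independent uniform draws pile up in a bell shape around their expected value, even though each individual draw is flat, not bell-shaped.

```python
import random

random.seed(1)  # deterministic for illustration

# Each trial sums 12 independent uniform(0, 1) draws: expected sum 6, std ~1.
sums = [sum(random.random() for _ in range(12)) for _ in range(10_000)]

mean = sum(sums) / len(sums)
# Roughly 68% of a normal distribution lies within one std of the mean.
within_one_std = sum(1 for s in sums if 5.0 <= s <= 7.0) / len(sums)
```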
10. Power Laws
One of the most common processes that does not fit the normal distribution is the power law, whereby one quantity varies with a power (exponent) of another rather than linearly. For example, the Richter scale describes the magnitude of earthquakes on a power-law scale: Each whole-number step represents a tenfold increase in measured amplitude, so an 8 is 10x a 7, and a 9 is 100x a 7. The central limit theorem does not apply, and there is thus no “average” earthquake. This is true of all power-law distributions.
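The multiplicative comparison takes one line (this assumes the tenfold-per-step amplitude convention of the Richter scale):

```python
def amplitude_ratio(mag_a, mag_b):
    """Each whole-number Richter step is a tenfold increase in measured amplitude."""
    return 10 ** (mag_a - mag_b)

eight_vs_seven = amplitude_ratio(8, 7)  # 10
nine_vs_seven = amplitude_ratio(9, 7)   # 100
```

Whole-number differences on a power-law scale compare by multiplication, not subtraction — which is exactly why an “average” earthquake is not a meaningful idea.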
11. Fat-tailed processes (Extremistan)
A process can often look like a normal distribution but have a large “tail” – seemingly outlier events are far more likely than they are in an actual normal distribution. If the fat tail is the negative tail, a strategy or process may be far more risky than a normal distribution is capable of describing, or far more profitable if the fat tail is on the positive side. Much of the human social world is said to be fat-tailed rather than normally distributed.
12. Bayesian Updating

The Bayesian method (named for Thomas Bayes) is a method of thought whereby one takes into account all prior relevant probabilities and then incrementally updates them as newer information arrives. This method is especially productive given the fundamentally non-deterministic world we experience: We must use prior odds and new information in combination to arrive at our best decisions. This is not necessarily how our intuitive decision-making engine works.
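A small sketch of the update rule via Bayes’ theorem, with illustrative numbers of my own (a 1% prior, a 90% true-positive rate, a 9% false-positive rate):

```python
def bayes_update(prior, true_positive_rate, false_positive_rate):
    """Posterior probability of the hypothesis after one piece of positive evidence."""
    evidence = prior * true_positive_rate + (1 - prior) * false_positive_rate
    return prior * true_positive_rate / evidence

# A rare condition plus a decent-but-imperfect test: the posterior is far
# lower than the test's accuracy suggests, because the prior was so low.
posterior = bayes_update(prior=0.01, true_positive_rate=0.90, false_positive_rate=0.09)
# posterior ~0.092, not 0.90
```

This is the classic base-rate result: new information updates the prior odds rather than replacing them.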
13. Regression to the Mean

In a normally distributed system, long deviations from the average will tend to return to that average as the number of observations increases: The so-called law of large numbers. We are often fooled by regression to the mean, as with a sick patient who improves spontaneously around the same time they begin taking an herbal remedy, or a poorly performing sports team that goes on a winning streak. We must be careful not to confuse statistically likely events with causal ones.
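A simulation sketch, with assumed numbers (skill and luck both normally distributed), shows the effect: select the top performers in round one, and their round-two average falls back toward the population mean even though nothing about them changed.

```python
import random

random.seed(7)  # deterministic for illustration

n = 10_000
skill = [random.gauss(100, 10) for _ in range(n)]
round_one = [s + random.gauss(0, 10) for s in skill]  # skill + luck
round_two = [s + random.gauss(0, 10) for s in skill]  # same skill, fresh luck

# Pick the 100 best round-one performers...
top = sorted(range(n), key=lambda i: round_one[i], reverse=True)[:100]
top_round_one = sum(round_one[i] for i in top) / 100
top_round_two = sum(round_two[i] for i in top) / 100
# ...and their re-test average drops toward 100: their luck didn't repeat.
```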
14. Order of magnitude
In many, perhaps most, systems, quantitative description down to a precise figure is impossible or not useful. For example, the distance between our galaxy and the next one over is not a matter of knowing the precise number of miles, but how many zeroes are “after the 1.” Is it about 1 million miles or about 1 billion? This thought habit can help us escape useless precision.
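The “zeroes after the 1” habit reduces to counting powers of ten; a sketch using Python’s standard math module:

```python
import math

def order_of_magnitude(x):
    """Roughly how many zeroes follow the 1 in x."""
    return math.floor(math.log10(x))

# Comparing two distances by scale rather than by precise mileage:
one_million = order_of_magnitude(1_000_000)      # 6
one_billion = order_of_magnitude(1_000_000_000)  # 9
# A thousandfold difference in scale, captured as a difference of 3.
```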
Systems

1. Scale

One of the most important principles of systems is that they are sensitive to scale. Properties (or behaviors) tend to change when you scale them up or down. In studying complex systems, we must always be roughly quantifying – in orders of magnitude, at least – the scale at which we are observing, analyzing, or predicting the system.
2. Law of Diminishing Returns
Related to scale, most important real-world results are subject to an eventual diminishment of incremental value. A good example would be a poor family: Give them enough money to thrive, and they are no longer poor. But additional money will not improve their lot after a certain point: There is a clear diminishing return of additional dollars at some roughly quantifiable point. Often, the law of diminishing returns veers into negative territory – i.e., too much money could destroy the poor family.
3. Pareto Principle
Named for the Italian polymath Vilfredo Pareto, who noticed that 80% of Italy’s land was owned by about 20% of its population, the Pareto principle states that a small proportion of causes often produces a disproportionately large share of effects. The Pareto principle is an example of a power-law type of statistical distribution – as distinguished from a traditional bell curve – and is demonstrated in phenomena ranging from wealth to city populations to important human habits.
4. Feedback loops (and Homeostasis)
All complex systems are subject to positive and negative feedback loops whereby A causes B, which in turn influences A (and C), and so on – with higher order effects frequently resulting from continual movement of the loop. In a homeostatic system, a change in A is often brought “back into line” by an opposite change in B to maintain the balance of the system, as with the temperature of the human body or the behavior of an organizational culture: Automatic feedback loops maintain a “static” environment unless and until an outside force changes the loop. A “runaway feedback loop” describes a situation whereby the output of a reaction becomes its own catalyst. (Auto-catalysis)
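A minimal sketch of a homeostatic (negative) feedback loop, with illustrative numbers of my own: each step, the system corrects a fixed fraction of its deviation from a setpoint, pulling a disturbance back into line.

```python
def settle(temperature, setpoint=98.6, gain=0.5, steps=20):
    """Negative feedback: each step corrects half the deviation from the setpoint."""
    history = [temperature]
    for _ in range(steps):
        temperature += gain * (setpoint - temperature)
        history.append(temperature)
    return history

# An outside shock knocks body temperature down 10 degrees;
# the feedback loop restores the "static" environment.
history = settle(98.6 - 10.0)
```

Flip the sign of the correction (a positive gain pushing *away* from the setpoint) and the same structure models a runaway loop instead of a homeostatic one.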
5. Chaos dynamics (Sensitivity to initial conditions)
In a world such as ours governed by chaos dynamics, small changes (perturbations) of initial conditions have massive downstream effects as near-infinite feedback loops occur: The so-called butterfly effect. This makes aspects of physical systems fundamentally unpredictable (like the weather more than a few days from now) as well as social systems (the behavior of a group of human beings over a long period).
6. Preferential Attachment (Cumulative Advantage)
A preferential attachment situation occurs when the current leader is given more of the reward than the laggards, which tends to preserve or enhance the status of the leader. A strong network effect is a good example of preferential attachment: A market with 10x more buyers and sellers than the next largest market will tend to have a preferential attachment dynamic.
7. Emergence

Higher-level behavior tends to “emerge” from the interaction of lower-order components. The result is frequently not linear – not a matter of simple addition – but non-linear, or exponential. An important property of emergent behavior is that it cannot be predicted simply by studying the component parts.
8. Irreducibility

We find that most systems have irreducible quantitative properties, such as complexity, minimums, time, and length. Below the irreducible level, the desired result simply does not occur: One cannot get several women pregnant to reduce the amount of time needed to have one child, and one cannot reduce a successfully built automobile to a single part. These results are, down to a defined point, irreducible.
9. Tragedy of the Commons

A concept introduced by the ecologist Garrett Hardin, the Tragedy of the Commons states that in a system where a common resource is shared, with no individual responsible for the wellbeing of the resource, the resource will tend to be depleted over time. The Tragedy is reducible to incentives: Unless they collaborate, each individual derives more personal benefit than the cost he or she incurs, and therefore depletes the resource for fear of missing out.
10. Gresham’s Law
Gresham’s Law, named for the financier Thomas Gresham, states that in a system of circulating currency, debased (“bad”) currency will tend to drive out sound (“good”) currency, as good currency is hoarded while bad currency is spent. We see a similar result in human systems, as with bad behavior driving out good behavior in a crumbling moral system, or bad practices driving out good practices in a crumbling economic system. Generally, Gresham’s Law-type results require regulation and oversight to prevent them.
11. Algorithms

While hard to define precisely, an algorithm is generally an automated set of rules – a “blueprint” – leading to a desired outcome through a series of steps or actions, often in the form of a series of if-then statements. Algorithms are best known for their use in modern computing, but they are a feature of biological life as well: Human DNA, for example, contains an algorithm for building a human being.
12. Fragility – Robustness – Antifragility
Popularized by Nassim Taleb, the sliding scale of fragility, robustness, and antifragility refers to the responsiveness of a system to incremental negative variability. A fragile system or object is one in which additional negative variability has a disproportionately negative impact, as with a coffee cup that shatters when dropped from 6 feet but receives no damage at all from a 1-foot fall, rather than 1/6th of the damage. A robust system or object tends to be neutral to additional negative variability, and an antifragile system benefits from it: A cup that grew stronger when dropped from 6 feet than from 1 foot would be termed antifragile.
13. Backup systems/Redundancy
A critical model from the engineering profession is that of backup systems. A good engineer never assumes the perfect reliability of the components of a system: He or she builds in redundancy to protect the integrity of the total system. Without this application of the robustness principle, tangible and intangible systems tend to fail over time.
14. Margin of safety
Similarly, engineers have also developed the habit of adding a margin for error into all calculations. In an unknown world, driving a 9,500 pound bus over a bridge built to hold precisely 9,600 pounds is rarely seen as intelligent. Thus, on the whole, few modern bridges ever fail. In practical life outside of physical engineering, we can often profitably give ourselves margins as robust as the bridge system.
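As a sketch with made-up figures, the habit amounts to designing for a multiple of the expected load rather than for the expected load itself:

```python
def design_capacity(expected_load, safety_factor=2.0):
    """Size the system for a multiple of the expected load, not the exact figure."""
    return expected_load * safety_factor

# A bridge expecting 9,500-pound buses is built for 19,000 pounds, not 9,600.
capacity = design_capacity(9_500)
margin = capacity - 9_500
```

The safety factor of 2.0 is illustrative; real engineering codes set factors per domain, but the structure of the calculation is the same.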
15. Criticality

A system becomes critical when it is about to jump discretely from one phase to another. The marginal utility of the last unit before the phase change is wildly higher than that of any unit before it. A frequently cited example is water turning from a liquid to a vapor when heated to a specific temperature. “Critical mass” refers to the mass needed for the critical event to occur, most commonly in a nuclear system.
16. Network Effects
A network tends to become more valuable as nodes are added to it: This is known as the network effect. An easy example is to contrast the development of the telephone system with that of the electricity system. If only one house has electricity, it has still gained immense value; if only one house has a telephone, its owner has gained nothing of use: Only with additional telephones does the network gain value. Network effects are widespread in the modern world and create immense value for organizations and customers alike.
17. Black Swan
Also popularized by Nassim Taleb, a “Black Swan” is a rare and highly consequential event that is invisible to a given observer ahead of time. It is a result of applied epistemology: If one has only ever seen white swans, they cannot categorically state that there are no black swans, but the inverse is not true: One black swan is enough to state that there are black swans. Black Swan events are necessarily unpredictable to the observer (as Taleb likes to say, Thanksgiving is a Black Swan for the turkey, not the butcher) and thus must be dealt with by addressing the fragility-robustness-antifragility spectrum rather than through better methods of prediction.
18. “Via negativa” – Omission/removal/avoidance of harm.
In many systems, improvement is best achieved – or at times only achieved – by removing bad elements rather than by adding good ones. This is a credo built into the modern medical profession: First, do no harm. Similarly, if one has a group of children behaving badly, removing the instigator is often much more effective than any form of punishment.
19. Lindy Effect
The Lindy Effect refers to the life expectancy of a non-perishable object or idea being related to its current lifespan. Conditional on having lasted X years, the idea or object would be expected (on average) to last another X years. While a human being who is 90 and lives to 95 does not add 5 years to his or her life expectancy, non-perishables lengthen their life expectancy as they continually survive. A classic text is a prime example: Conditional on humanity having read Shakespeare’s plays for 500 years, we can expect them to be read for another 500.
20. Renormalization Group
The renormalization group technique allows us to think about physical and social systems at different scales. An idea from physics, and a complicated one at that, the application of renormalization group to social systems allows us to understand why a small number of stubborn individuals can have a disproportionate impact if those around them follow suit on increasingly large scales.
21. Spring loading
A system is “spring loaded” if it is coiled in a certain direction, positive or negative. Positively spring loading systems and relationships is important in a fundamentally unpredictable world to help protect us against negative events. The reverse can be very destructive.
22. Complex Adaptive Systems

A complex adaptive system, as distinguished from a complex system in general, is one that can understand itself and change based on that understanding. Complex adaptive systems are social systems. The difference is best illustrated by contrasting weather prediction with stock market prediction. The weather will not change based on an important forecaster’s opinion, but the stock market might. Complex adaptive systems are thus fundamentally unpredictable.
The Physical World
1. Laws of Thermodynamics
The laws of thermodynamics describe energy in a closed system. They cannot be escaped: Energy can be neither created nor destroyed, and useful energy is constantly being lost. These laws underlie the physical world. Applying their lessons to the social world can be a profitable enterprise.
2. Reciprocity

If I push on a wall, physics tells me that the wall pushes back with equivalent force. In a biological system, if one individual acts on another, the action will tend to be reciprocated in kind. And of course, human beings demonstrate intense reciprocity as well.
3. Velocity

Velocity is not equivalent to speed, though the two are sometimes confused: Velocity is speed plus vector – how fast something gets somewhere. An object that moves 2 steps forward and then 2 steps back has moved at a certain speed but shows no velocity. The addition of the vector, that critical distinction, is what we should consider in practical life.
4. Relativity

Relativity has been used in several different contexts in the world of physics, but the important aspect to study is that an observer cannot truly understand a system of which he himself is a part. For example, a man inside an airplane does not feel he is moving, but an outside observer can see that he is. This form of relativity tends to affect social systems in a similar way.
5. Activation Energy

A fire is not much more than a combination of carbon and oxygen, yet the forests and coal mines of the world are not spontaneously combusting, because such a chemical reaction requires the input of a critical level of “activation energy” to get started. Two combustible elements alone are not enough.
6. Catalysts

A catalyst either kick-starts or maintains a chemical reaction without itself being a reactant. The reaction may slow or stop without the catalyst. Social systems take on many similar traits, and we can view catalysts in a similar light.
7. Leverage

Most of the engineering marvels of the world have been accomplished with applied leverage. As Archimedes famously stated, “Give me a lever long enough and I shall move the world.” With a small amount of input force, we can generate a great output force through leverage. Understanding where we can apply this model to the human world is a source of great success.
8. Inertia

An object in motion with a certain vector wants to continue moving in that direction unless acted upon. This is a fundamental physical principle of motion; however, individuals, systems, and organizations display the same effect. Inertia allows them to minimize the use of energy, but it can also cause them to be destroyed or eroded.
9. Alloying

When we combine various elements, we create new substances. This is no great surprise, but what can be surprising in the alloying process is that 2 + 2 can equal not 4 but 6: The alloy can be far stronger than the simple addition of the underlying elements would lead us to believe. This process allows us to engineer great physical objects, but we understand many intangibles the same way: Combining the right elements in a social system, or even in an individual, can create a similar 2 + 2 = 6 effect.
The Biological World
1. Incentives
All creatures respond to incentives to keep themselves alive; this is the basic insight of biology. Constant incentives will tend to produce roughly constant behavior. Humans are particularly good examples of the incentive-driven nature of biology, but they are complicated by the fact that their incentives can be hidden or intangible. The rule of life is to repeat what works and has been rewarded.
2. Cooperation (Incl. symbiosis)
Competition tends to describe most biological systems, but cooperation at various levels is just as important a dynamic. In fact, the cooperation of a bacterium and a simple cell probably created the first complex cell and all of the life we see around us. Without cooperation, no group survives, and the cooperation of groups gives rise to even more complex forms of organization. Cooperation and competition tend to co-exist at multiple levels.
3. Tendency to minimize energy output (mental & physical)
In a physical world governed by thermodynamics and competition for limited energy and resources, any biological organism that was wasteful with energy would be at a severe disadvantage for survival. Thus, we see in most instances that behavior is governed by a tendency to minimize energy usage when at all possible.
4. Adaptation
Species tend to adapt to their surroundings in order to survive, given the combination of their genetics and their environment, an always-unavoidable combination. However, adaptations made during an organism’s lifetime are not passed down in the genetic code, as was once thought. Populations of species adapt through the process of evolution by natural selection, another important model.
5. Evolution by natural selection
Once called “the greatest idea anyone ever had,” evolution by natural selection was realized independently by Charles Darwin and Alfred Russel Wallace in the 19th century: species evolve through random mutation and differential survival rates. If we call human intervention in animal breeding an example of “artificial selection,” we can call Mother Nature deciding the success or failure of a particular mutation “natural selection.” Those best suited for survival tend to be preserved. But of course, conditions change.
6. Red Queen Effect (Co-evolutionary arms race)
The evolution-by-natural-selection model leads to something of an arms race among species competing for limited resources. When one species evolves an advantageous adaptation, a competing species must respond in kind or fail. Standing pat can mean falling behind. This is called the Red Queen Effect after the character in Lewis Carroll’s Through the Looking-Glass who said, “Now, here, you see, it takes all the running you can do, to keep in the same place.”
7. Replication
A fundamental building block of diverse biological life is high-fidelity replication. The fundamental unit of replication seems to be the DNA molecule, which provides a blueprint from which offspring are built. There are a variety of replication methods, but most can be lumped into sexual and asexual.
8. Hierarchical/organizing instincts
Most complex biological organisms have an innate feel for how they should organize. While not all of them end up in hierarchical structures, many do, especially in the animal kingdom. Human beings like to think they are outside of this, but they feel the hierarchical instinct as strongly as any other.
9. Self-preservation instincts
Without a strong self-preservation instinct in an organism’s DNA, it would tend to disappear over time, thus eliminating that DNA. While cooperation is another important model, the self-preservation instinct is strong in all organisms and can cause violent, erratic, and/or destructive behavior for those around them.
10. Simple physiological reward-seeking
All organisms feel pleasure and pain from simple chemical processes in their bodies, which respond predictably to the outside world. On average, this is an effective survival-promoting mechanism. However, the same pleasure receptors can be co-opted to cause destructive behavior, as with drug abuse.
11. Exaptation
Introduced by the biologist Stephen Jay Gould, an exaptation is a trait developed for one purpose that is later used for another. This is one way to explain the development of complex biological features like the eye: in a more primitive form, it may simply have been used for something else. Once it was there, and once it developed further, 3D sight became possible.
12. Extinction
The inability to survive can cause an extinction event, whereby an entire species ceases to compete and replicate effectively. Once its numbers have dwindled to a critically low level, extinction can become unavoidable (and predictable), given the inability to replicate in large enough numbers.
13. Ecosystems
An ecosystem describes any group of organisms co-existing in the natural world. Most ecosystems show diverse forms of life taking different approaches to survival, with such pressures leading to varying behavior. Social systems can be seen in the same light as physical ecosystems, and many of the same conclusions can be drawn.
14. Niches
Most organisms find a niche: a method of competing and behaving for survival. Usually, a species will select the niche for which it is best adapted. Danger arises when multiple species begin competing for the same niche, which can cause an extinction; there can be only so many species doing the same thing before limited resources give out.
15. Dunbar’s Number
The primatologist Robin Dunbar observed that the number of individuals a primate can know and trust closely is related to the size of its neocortex. Extrapolating from his study of primates, Dunbar theorized that this number for a human being is somewhere in the 100–250 range, which is supported by certain studies of human behavior and social networks.
1. Trust
Fundamentally, the modern world operates on trust. Familial trust is generally a given (otherwise we’d have a hell of a time surviving), but we also choose to trust chefs, clerks, drivers, factory workers, executives, and many others. A trusting system tends to work most efficiently: the rewards of trust are extremely high.
2. Bias from Incentives
We are highly incentive-driven creatures, with perhaps the most varied and hardest-to-understand set of incentives in the animal kingdom. This causes us to distort our thinking when it is in our own interest to do so. A wonderful example is the salesman who truly believes his product will improve the lives of its users. It’s not merely convenient that he sells the product: the fact of his selling it causes a very real bias in his own thinking.
3. Pavlovian mere association
Ivan Pavlov demonstrated that animals can respond not just to direct incentives but also to associated objects; remember his famous dogs salivating at the ring of a bell. Human beings are much the same: we can feel positive or negative emotion toward intangible objects, with the emotion coming from past associations rather than direct effects.
4. Tendency to Feel Envy & Jealousy
Humans have a tendency to feel envious of those receiving more than they are, and a desire to “get what is theirs” in due course. The tendency toward envy is strong enough to drive otherwise irrational behavior, and it is as old as humanity itself. Any system ignorant of envy effects will tend to self-immolate over time.
5. Bias from Liking/Disliking
Based on past associations, stereotyping, ideology, genetic influence, or direct experience, humans have a tendency to distort their thinking in favor of people or things they like and against people or things they dislike. This leads us to over-rate the things we like and under-rate or broadly categorize the things we dislike, often missing crucial nuance in the process.
6. Denial Tendency
Anyone who has been alive long enough realizes that, as the saying goes, “denial ain’t just a river in Egypt.” This is powerfully demonstrated in situations like war or drug abuse, where denial has powerful destructive effects but allows for behavioral inertia. Denying reality can be a coping mechanism, a survival mechanism, or a purposeful tactic.
7. Availability Heuristic
One of the most useful findings of modern psychology is what Daniel Kahneman calls the availability heuristic: we most easily recall what is salient, important, frequent, and recent. The brain has its own energy-saving and inertial tendencies that we have little control over, and the availability heuristic is likely one of them; having a truly comprehensive memory would be debilitating. Sub-examples of the heuristic include the anchoring and sunk-cost tendencies.
8. Representativeness Heuristic
The three major psychological findings that fall under Representativeness, also defined by Kahneman and his partner Tversky, are:
a. Failure to account for base rates
An unconscious failure to look at past odds (the base rates) in determining current or future behavior.
b. Stereotyping Tendency
The tendency to broadly generalize and categorize rather than look for specific nuance. Like availability, this is generally a necessary trait for energy-saving in the brain.
c. Failure to see false conjunctions
Most famously demonstrated by the “Linda Test,” Kahneman and Tversky showed that students judged a vividly described individual more likely to fit a category than a broader, more inclusive, but less vivid description, even when the vivid example was a mere subset of the more inclusive set. The specific examples are seen as more “representative” of the category than the broader but vaguer descriptions, in violation of logic and probability.
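The logical error can be made concrete with a toy simulation. The trait probabilities below are invented purely for illustration; the point is only that a conjunction (“bank teller and feminist”) can never be more probable than either conjunct alone, however vivid the conjunction sounds:

```python
import random

random.seed(0)

# Hypothetical population: each person is (is_bank_teller, is_feminist),
# with made-up, independent trait frequencies.
population = [(random.random() < 0.05, random.random() < 0.3)
              for _ in range(100_000)]

p_teller = sum(t for t, f in population) / len(population)
p_teller_and_feminist = sum(t and f for t, f in population) / len(population)

# The conjunction is a subset of either conjunct, so it can never be
# more probable -- the intuition in the Linda problem violates this.
assert p_teller_and_feminist <= p_teller
```

No matter what frequencies you plug in, the assertion holds: every person counted in the conjunction is also counted in the conjunct.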
9. Social proof (Safety in numbers)
Human beings are one of many social species, along with bees, ants, and chimps, among others. We have a DNA-level instinct to seek safety in numbers, and we look to the group for guidance on our behavior. This instinct creates a cohesive sense of cooperation and culture that would not otherwise be possible, but it also leads us to do foolish things if our group is doing them as well.
10. Narrative Instinct
Human beings have aptly been called “the storytelling animal” because of our instinct to construct and seek meaning in narrative. It’s likely that long before we developed the ability to write or create objects, we were telling stories and thinking in stories. Nearly all social organizations, from religious institutions to corporations to nation-states, run on constructions of the narrative instinct.
11. Curiosity Instinct
We like to call other species curious, but we are the most curious of all. This instinct led us out of the savannah and drove us to learn a great deal about the world around us, knowledge we have then used to shape the world in our collective image. The curiosity instinct leads to uniquely human behavior and forms of organization like the scientific enterprise. Even before there were direct incentives to innovate, humans innovated out of sheer curiosity.
12. Language Instinct
The psychologist Steven Pinker calls our DNA-level instinct to learn grammatically constructed language the Language Instinct. The idea that grammatical language is not a simple cultural artifact was first popularized by the linguist Noam Chomsky. As with the narrative instinct, we use this instinct to create shared stories, as well as to gossip, solve problems, and fight, among other things. Grammatically ordered language can theoretically carry infinitely varied meaning.
13. First-Conclusion Bias
As Charlie Munger famously pointed out, the mind works a bit like a sperm and egg: the first idea gets in, and then the mind shuts. Like many other tendencies, this is probably an energy-saving device. Our bias toward settling on first conclusions leads us to accept many erroneous results and to cease asking questions; it can be countered with some simple and useful mental routines.
14. Tendency to Overgeneralize from Small Samples
It’s important for human beings to generalize: we need not see every instance to understand the general rule, and this works to our advantage. With generalizing, however, comes a subset of errors in which we forget about the law of large numbers and act as if it does not exist: we take a small number of instances and create a general category, even if we have no statistically sound basis for the conclusion.
15. Relative Satisfaction/Misreaction Tendencies
The envy tendency is probably the most obvious manifestation of the relative satisfaction tendency, but nearly all studies of human happiness show that happiness relates to a person’s state relative to their past or their peers, not to any absolute standard. These relative tendencies cause us great misery or happiness in a wide variety of objectively different situations, and they make us poor predictors of our own behavior and feelings.
16. Commitment & Consistency Bias
As psychologists have frequently and famously demonstrated, humans are biased toward keeping their prior commitments and staying consistent with their prior selves when possible. This trait is necessary for social cohesion: people who often change their conclusions and habits tend to be distrusted. Yet our bias toward staying consistent can become, as Emerson put it, a “hobgoblin of little minds”: combined with the first-conclusion bias, it leads us to land on poor answers and stand pat even in the face of great evidence.
17. Hindsight Bias
Once we know the outcome, it’s nearly impossible to turn back the clock mentally. Our narrative instinct leads us to reason that we “knew it all along,” when in fact we are often simply reasoning post-hoc with information that was not available to us before the event. The hindsight bias explains why it’s wise to re-examine the beliefs we’re sure we “knew all along,” and to keep a journal or diary of important decisions as an unaltered record.
18. Sensitivity to Fairness
Justice runs deep in our veins. In another illustration of our relative sense of well-being, we are careful arbiters of what is fair, and violations of fairness can be considered grounds for reciprocal action, or at least distrust. Yet fairness itself is a moving target: what is seen as fair and just in one time and place may not be in another. Consider that slavery has been seen as perfectly natural and perfectly unnatural in alternating phases of human existence.
19. Tendency to overestimate consistency of behavior (Fundamental Attribution Error)
We tend to over-ascribe the behavior of others to their innate traits rather than to situational factors, leading us to overestimate how consistent that behavior will be in the future. With such an assumption, predicting behavior seems not very difficult. Of course, in practice this assumption is consistently demonstrated to be wrong, and we are consequently “surprised” when others do not act in accordance with the innate traits we’ve endowed them with.
20. Influence from Authority
The famous Stanford Prison Experiment and the Milgram experiments demonstrated what humans had learned practically many years before: that we are biased toward being influenced by authority. In a dominance hierarchy such as ours, we tend to “look to the leader” for guidance on behavior, especially in situations of stress or uncertainty. Thus, authority figures have a responsibility to act well, whether they like it or not.
21. Influence from Stress (Incl. Breaking point)
Stress causes both a mental and physiological response in the body and tends to amplify the other biases. Almost all human mental biases become worse in the face of stress as the body goes into a sort of fight or flight response, relying purely on instinct without the emergency brake of Daniel Kahneman’s “System 2” type reasoning. Stress causes hasty decisions, immediacy, and a fall back to habit, thus giving rise to the elite soldiers’ motto: “In the thick of battle, you will not rise to the level of your expectations, but fall to the level of your training.”
22. Survivorship bias
A major problem with historiography, our interpretation of the past, is that history is famously written by the victors. We do not see what Nassim Taleb calls the “silent grave”: the lottery-ticket holders who did not win. Thus we over-attribute success to the things the successful agent did, rather than to randomness or luck, and we learn false lessons by studying only the victors without seeing the accompanying losers who acted the same way but were not lucky enough to succeed.
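A toy simulation makes the point. Assume, purely for illustration, 10,000 “fund managers” whose yearly results are fair coin flips; by chance alone, a handful will post perfect ten-year records and look like geniuses, and those are the only ones we would ever read about:

```python
import random

random.seed(42)

# 10,000 'managers' each flip a fair coin for 10 years; a 'win' each
# year is pure luck, yet some will look brilliant by chance alone.
managers = 10_000
years = 10

perfect = sum(
    all(random.random() < 0.5 for _ in range(years))
    for _ in range(managers)
)

# Expected number of perfect 10-year records: 10_000 / 2**10, roughly 10.
print(perfect)
```

Studying only those ten or so “winners” for their secrets would teach us nothing, because the process generating them was pure luck.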
23. Tendency to want to do something (Fight/Flight, Intervention, Demonstration of value, etc.)
We might term this the Boredom Syndrome: most humans have a tendency to act even when action is not needed, and to offer solutions even when we lack the knowledge to solve the problem.
24. Confirmation Bias
What a man wishes, he also believes. Similarly, what we believe is what we choose to see; this is commonly referred to as confirmation bias. It is a deeply ingrained mental habit, both energy-conserving and comfortable, to look for confirmations of long-held wisdom rather than violations. Yet the scientific process, including hypothesis generation, blind testing when needed, and objective statistical rigor, is designed to root out precisely the opposite, which is why it works so well when followed.
1. Opportunity Costs
Doing one thing means not being able to do another. We live in a world of tradeoffs, and the concept of opportunity cost rules all. Most aptly summarized as “there is no such thing as a free lunch.”
2. Creative Destruction
Coined by economist Joseph Schumpeter, creative destruction describes the capitalistic process at work in a functioning free-market system. Motivated by personal incentives (including but not limited to financial profit), entrepreneurs push to best one another in a never-ending game of creative one-upmanship, destroying old ideas in the process and replacing them with newer technology. Beware getting left behind.
3. Comparative Advantage
The English economist David Ricardo had an unusual and counterintuitive insight: two individuals, firms, or countries can benefit from trading with one another even if one of them is better at everything. Comparative advantage is best seen as applied opportunity cost: if it has the opportunity to trade, an entity gives up free gains in productivity by not focusing on what it does best.
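Ricardo’s own cloth-and-wine numbers make a handy sketch. In this toy setup Portugal is better at producing both goods, yet world output of both goods rises when each country specializes where its opportunity cost is lowest:

```python
# Hours of labor needed per unit of output (Ricardo's classic numbers):
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth": 90,  "wine": 80},
}

def output_if_specialized(country, good, total_hours=220):
    """Units produced if a country spends all its labor on one good."""
    return total_hours / hours[country][good]

# Before specializing, each country spends 220 hours making 1 unit of
# each good: world totals are 2 cloth and 2 wine.
cloth = output_if_specialized("England", "cloth")   # 2.2 units
wine = output_if_specialized("Portugal", "wine")    # 2.75 units

# After specializing: 2.2 cloth and 2.75 wine worldwide -- more of both,
# even though Portugal holds an absolute advantage in both goods.
print(cloth, wine)
```

Portugal gives up less wine per unit of cloth forgone than England does, so trade leaves both countries better off than self-sufficiency.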
4. Specialization (Pin Factory)
The Scottish economist Adam Smith highlighted the advantages of specialization in a free-market system. Rather than having each worker produce an entire item start to finish, Smith explained, it is usually far more productive to have each specialize in one aspect of production. He also cautioned that each worker might not enjoy such a life; that is a tradeoff of the specialization model.
5. Seizing the middle
In chess, the winning strategy is usually to “seize” control of the middle of the board, so as to maximize the potential moves that can be made and to control the movement of the maximum number of pieces. The same strategy can pay off in business, as demonstrated by John D. Rockefeller’s control of the refinery business in the early days of the oil trade and Microsoft’s control of the operating system in the early days of the software trade.
6. Trademarks, patents, and copyright
These three concepts, along with other related ones, protect the creative work produced by enterprising individuals, thus creating additional incentive for creativity and promoting the creative destruction model of capitalism. Without them, information and creative workers have no defense against their work being freely distributed.
7. Double-entry book-keeping
One of the marvels of modern capitalism has been the book-keeping system introduced in Genoa in the 14th century. The double-entry system requires that every entry, such as income, also be entered into a corresponding account. Correct double-entry bookkeeping acts as a “check” on potential accounting errors, allowing accurate records and thus more accurate behavior by the owner of a firm.
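A minimal sketch of the idea in code (the `Ledger` class and account names are invented for illustration): every posting debits one account and credits another by the same amount, so the accounts always sum to zero, and any one-sided entry error is immediately visible.

```python
from collections import defaultdict

class Ledger:
    """Toy double-entry ledger: debits positive, credits negative."""

    def __init__(self):
        self.accounts = defaultdict(float)

    def post(self, debit_account, credit_account, amount):
        # Every transaction touches two accounts with equal, opposite sums.
        self.accounts[debit_account] += amount
        self.accounts[credit_account] -= amount

    def in_balance(self):
        # The built-in "check": all accounts must net to zero.
        return abs(sum(self.accounts.values())) < 1e-9

ledger = Ledger()
ledger.post("Cash", "Sales Revenue", 500.0)  # a sale received in cash
ledger.post("Inventory", "Cash", 200.0)      # restocking paid in cash

assert ledger.in_balance()
print(ledger.accounts["Cash"])  # 300.0
```

A single mistyped amount breaks the zero-sum invariant, which is exactly the self-checking property the Genoese system introduced.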
8. Utility (Marginal, Diminishing, Increasing)
The usefulness of additional units of any good tends to vary with scale. Marginal utility lets us understand the value of one additional unit, and in most practical areas of life that utility diminishes at some point. In some cases, additional units are instead subject to a “critical point” where the utility function jumps discretely up or down. As an example, giving water to a thirsty man has diminishing marginal utility with each additional unit, and with enough units water can eventually kill him.
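A logarithmic utility function is a common way to sketch diminishing marginal utility; the functional form below is an assumption for illustration, not a law:

```python
import math

def total_utility(units):
    """Hypothetical logarithmic utility: each unit adds less than the last."""
    return math.log(1 + units)

def marginal_utility(units):
    # Value of one more unit, given how many we already have.
    return total_utility(units + 1) - total_utility(units)

# Each successive unit is worth strictly less than the one before:
mu = [marginal_utility(n) for n in range(5)]
assert all(a > b for a, b in zip(mu, mu[1:]))
print([round(x, 3) for x in mu])
```

The first glass of water carries almost all the value; by the fifth, the marginal gain is a fraction of it. Modeling the “critical point” (where utility jumps or turns negative) would require a different, non-logarithmic curve.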
9. Bottlenecks
A bottleneck describes the point at which a flow, whether tangible or intangible, is stopped and held back from continuous movement. As with a clogged artery or a blocked drain, a bottleneck in the production of any good or service can be small yet have disproportionate impact if it sits on the critical path.
10. Prisoner’s Dilemma
The Prisoner’s Dilemma is a famous application of game theory in which two prisoners are collectively better off if both cooperate, yet each is individually better off cheating no matter what the other does; hence the “dilemma.” The model shows up in economic life, in war, and in many other areas of practical human life. Though the Prisoner’s Dilemma theoretically leads to a poor result, in the real world cooperation is nearly always possible and must be explored.
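The structure of the dilemma can be checked mechanically. The sentence lengths below are one conventional choice (presentations vary); what matters is the ordering they produce:

```python
# Payoff matrix: years in prison (lower is better) for the row player.
# Keys are (my_move, other_move).
SENTENCE = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"):    10,
    ("defect",    "cooperate"): 0,
    ("defect",    "defect"):    5,
}

# Whatever the other prisoner does, defecting yields a shorter sentence,
# so defection is the dominant strategy...
for other in ("cooperate", "defect"):
    assert SENTENCE[("defect", other)] < SENTENCE[("cooperate", other)]

# ...yet mutual defection (5 years each) is worse for both than mutual
# cooperation (1 year each) -- hence the dilemma.
assert SENTENCE[("defect", "defect")] > SENTENCE[("cooperate", "cooperate")]
print("dilemma holds")
```

The two assertions together are the whole model: individually rational choices produce a collectively poor outcome.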
11. Bribery
Often ignored in mainstream economics, the concept of bribery is central to human systems: given the chance, it is often easier to pay a certain agent to “look the other way” than to follow the rules. The enforcer of the rules is then neutralized. This principal/agent problem can be seen as a form of arbitrage, which we will see in the next model.
12. Arbitrage
Given two markets selling an identical good, an arbitrage exists if the good can be bought in one market and sold in the other at a profit. The model is simple on its face, but can appear in disguised forms: the only gas station in a 50-mile radius is also an arbitrage, as it can (temporarily) buy gasoline and sell it at its desired margin without interference. Nearly all arbitrage situations eventually disappear as they are discovered and exploited.
13. Supply and Demand
The basic equation of biological and economic life is one of limited supply of necessary goods and competition for those goods. Just as biological entities compete for limited usable energy, so economic entities compete for limited customer wealth and limited demand for their products. The point at which supply and demand for a given good are equal is called an equilibrium; in practical life, however, equilibria tend to be dynamic and changing, never static.
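A toy pair of linear curves shows how an equilibrium is found; the coefficients below are invented purely for illustration:

```python
# Hypothetical linear curves: quantity as a function of price.
def demand(price):
    return 100 - 2 * price   # buyers want less as price rises

def supply(price):
    return 3 * price         # sellers offer more as price rises

# Equilibrium is where the two quantities are equal:
#   100 - 2p = 3p  =>  p = 100 / 5 = 20, quantity = 60
eq_price = 100 / (2 + 3)

assert demand(eq_price) == supply(eq_price) == 60
print(eq_price, demand(eq_price))  # 20.0 60.0
```

Real equilibria are found by the market, not by algebra, and both curves shift constantly, which is why the equilibrium point is dynamic rather than static.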
14. Game Theory
Game theory describes situations of conflict, limited resources, and competition. Given a certain situation and limited resources and time, what decisions are competitors likely to make, and which should they make? One important note: traditional game theory may describe humans as more rational than they really are. Game theory is theory, after all.
1. Seeing the Front
One of the most valuable military tactics is the habit of “personally seeing the front” – not always relying on advisors, maps, and reports to make decisions, all of which can be either faulty or biased. The Map/Territory model illustrates the problem with not seeing the front, as does the incentive model. Leaders of any organization can generally benefit from this habit, as not only does it provide first-hand information, it also tends to improve the quality of second-hand information.
2. Asymmetric Warfare
The asymmetry model leads to an application in warfare whereby one side seemingly “plays by different rules” than the other due to circumstance. Generally, this model is applied by an insurgency with limited resources. Unable to out-muscle their opponents, asymmetric fighters use other tactics, as with terrorism, which creates fear disproportionate to its actual destructive capability.
3. Two-front War
The Second World War was a good example of a two-front war: once Russia and Germany became enemies, Germany was forced to split its troops between separate fronts, weakening its impact on both. In practical life, opening a two-front war can often be a useful tactic, as can solving or avoiding one, as when an organization tamps down internal discord to focus on its competitors.
4. Counterinsurgency
Though asymmetric insurgent warfare can be extremely effective, over time competitors have developed counterinsurgency strategies. Recently and famously, General David Petraeus of the United States led the development of counterinsurgency plans that involved no additional force but yielded substantial additional gains. Tit-for-tat warfare or competition will often lead to a feedback loop that demands insurgency and counterinsurgency.
5. Mutually Assured Destruction
Somewhat paradoxically, the stronger two opponents become, the less likely they may be to destroy one another. This process of mutually assured destruction occurs not just in warfare, as with the development of global nuclear arsenals, but also in business, as with the avoidance of destructive price wars between competitors. In a fat-tailed world, however, it is also possible that mutually assured destruction scenarios simply make destruction more severe in the event of a mistake, pushing destruction into the “tails” of the distribution.
Originally published at www.farnamstreetblog.com