Mental Models
Measure risk: All investment evaluations should begin by measuring risk, especially reputational risk. That means incorporating an appropriate margin of safety, avoiding permanent loss of capital, and insisting on proper compensation for the risk assumed.
Be independent: Only in fairy tales are emperors told they’re naked. Remember that just because other people agree or disagree with you doesn’t make you right or wrong – the only thing that matters is the correctness of your analysis.
Prepare ahead: The only way to win is to work, work, work, and hope to have a few insights. If you want to get smart, the question you have to keep asking is “why, why, why?”
Have intellectual humility: Acknowledging what you don’t know is the dawning of wisdom. Stay within a well-defined circle of competence & identify and reconcile disconfirming evidence.
Analyze rigorously: Use effective checklists to minimize errors and omissions. Determine value apart from price; progress apart from activity; wealth apart from size. Think forwards and backwards – invert, always invert.
Allocate assets wisely: Proper allocation of capital is an investor’s No. 1 job. You should remember that good ideas are rare – when the odds are greatly in your favor, bet heavily. At the same time, don’t “fall in love” with an investment.
Have patience: Resist the natural human bias to act. “Compound interest is the eighth wonder of the world” (a line often attributed to Einstein); never interrupt it unnecessarily, and avoid unnecessary transaction taxes and frictional costs.
Be decisive: When proper circumstances present themselves, act with decisiveness and conviction. Be fearful when others are greedy, and greedy when others are fearful. Opportunity doesn’t come often, so seize it when it comes.
Be ready for change: Accept unremovable complexity. Continually challenge and willingly amend your “best-loved ideas,” and recognize reality even when you don’t like it – especially when you don’t like it.
Stay focused: Keep it simple and remember what you set out to do. Remember that reputation and integrity are your most valuable assets – and can be lost in a heartbeat. Face your big troubles; don’t sweep them under the rug.
80/20 Rule (Pareto’s Principle)
Theory of Constraints - Eli Goldratt
Important / Urgent / Not Important / Not Urgent - Stephen Covey
First Things First - Stephen Covey
Making decisions as if from your deathbed - Anon
Law of Attraction - Anon
Thoughts Become Things (Think Good Ones) - TUT
Sundown Rule (clean off your desk before the day ends) - Sam Walton
Charlie Munger (billionaire investor): Analyze what can go wrong instead of what can go right.
“Invert, always invert: Turn a situation or problem upside down. Look at it backward. What happens if all our plans go wrong? Where don’t we want to go, and how do you get there? Instead of looking for success, make a list of how to fail instead — through sloth, envy, resentment, self-pity, entitlement, all the mental habits of self-defeat. Avoid these qualities and you will succeed. Tell me where I’m going to die so I don’t go there.”
List the ways the project could fail
Assign a probability to each possibility
Prioritize actions that can be taken to avoid failure
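The three steps above can be sketched in a few lines of Python; the failure modes and probabilities are made-up placeholders for a hypothetical project:

```python
# Hypothetical failure modes and their estimated probabilities.
failure_modes = [
    ("key engineer leaves mid-project", 0.15),
    ("vendor API is deprecated", 0.30),
    ("budget is cut next quarter", 0.10),
]

# Rank by probability so mitigation effort goes to the likeliest failures first.
ranked = sorted(failure_modes, key=lambda m: m[1], reverse=True)
for mode, prob in ranked:
    print(f"{prob:.0%}  {mode}")
```

In practice the probabilities are rough guesses, but even rough ranking forces the conversation about where mitigation effort should go first.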
Warren Buffett (billionaire investor): Use checklists to avoid stupid mistakes.
“We try more to profit from always remembering the obvious than from grasping the esoteric.”
Brainstorm. Dream up as many possible solutions as you can. This helps you avoid availability bias, which often results in us choosing the first solution that comes to mind rather than the best solution.
Test. Test as many potential solutions as you can afford to. This avoids the confirmation bias of rationalizing the one solution you chose.
Evaluate. Have a minimum success criterion for each experiment. This allows you to avoid doubling down on bad ideas that aren’t working in an effort to recoup sunk costs.
Learn. Dive deeply into the data and learn from EVERY experiment, not just the one that worked best. Avoid taking mistakes personally and feeling shame over something that did not work.
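The brainstorm–test–evaluate–learn loop above can be sketched as code; the experiment names, lift numbers, and the 5% success threshold are all invented for illustration:

```python
MIN_SUCCESS = 0.05  # minimum lift an experiment must show to continue (assumed)

# Brainstorm: list many candidate solutions, not just the first one that comes to mind.
experiments = {
    "new headline": 0.02,
    "shorter signup form": 0.11,
    "free trial": 0.07,
}

# Evaluate: keep only experiments that met the minimum success criterion,
# so we don't double down on sunk costs.
winners = {name: lift for name, lift in experiments.items() if lift >= MIN_SUCCESS}

# Learn: review every result, including the ones being dropped.
for name, lift in experiments.items():
    status = "keep" if lift >= MIN_SUCCESS else "drop"
    print(f"{name}: {lift:+.0%} -> {status}")
```

The point of setting `MIN_SUCCESS` before running anything is that the bar is chosen when you are still objective, not after you have fallen in love with one option.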
Ray Dalio (billionaire investor): Learn how to think independently so you can be smarter than everyone else.
To Dalio, the key to having enduring, extraordinary performance is to do what others won’t or can’t AND to be right.
Thinking independently is more than one simple hack. Broadly speaking, it requires:
Courage to stand up against the herd when you’re right and everyone else is wrong
Access to or understanding of information that other people don’t have
Unique ways of analyzing that information
Ability #1: Stand Up Against The Herd
Ability #2: Develop An Information Advantage
Ability #3: Develop An Analytical Advantage
Jeff Bezos (Amazon founder): Invest in what will NOT change instead of only what will change
Jeff Bezos shows that big trends are only part of the story. It’s also about doing the exact opposite and focusing on what does not change.
To apply this principle to your business, identify a core customer need that will likely stay the same (even as technology and culture evolve) to which your company is uniquely positioned to cater.
Then build your company around it.
This is what Ohio-based entrepreneur Jason Duff did.
“It’s impossible to imagine a future 10 years from now where a customer comes up and says, ‘Jeff I love Amazon; I just wish the prices were a little higher,’ or ‘I love Amazon; I just wish you’d deliver a little more slowly.’”
Steve Jobs (Apple co-founder): Use storytelling to make your vision more compelling; not mission-speak.
According to academic studies on storytelling, great stories transport others into a whole other world and, in doing so, alter their beliefs, cause a loss of access to real-world facts, evoke emotions, and significantly reduce their ability to detect inaccuracies.
Retreat: First, grab a notebook and find a quiet space where you don’t have any distractions from your daily life.
Visualize: Transplant yourself five years into the future. See yourself looking around at your life and your business. Imagine that you’re really in that place where the future HAS already happened. For example, if you have a five-year-old child, imagine your child is now ten. Then, imagine yourself five years older.
Ask: Once you’ve transported yourself to that place, ask yourself some questions that will help you “crystal ball” the future. Here are some key questions to ask yourself:
- What is your top-line revenue?
- How many people are on your team?
- How would your people describe the culture of your company when talking to a family member?
- What is the press saying about your business? Be as specific as possible: what would your local paper say about your company? What would your favorite magazine say?
- What do your people love about your vision and where the company is headed?
- How would a customer describe their experience with you? What would they say to their best friend?
- What accomplishment are you most proud of? What accomplishment are your people most proud of?
- What do you do better than anyone else on the planet?
- Describe your office environment in detail.
- Describe your service area. Who are your customers and how do they feel?
Reid Hoffman (LinkedIn founder): Build deep, long-term relationships that give you insider knowledge.
If you reverse engineer the relationships of many successful entrepreneurs, as I have, you will realize that many people work with the same people over and over in their careers.
Hoffman goes so far as to say that the biggest mistake in his career was deciding that in order to be a product manager he needed to learn product management skills. In retrospect, he would have focused on placing himself in the right network by working at one of the fastest growing, futuristic companies at that time: Netscape.
Hoffman refers to the information that only exists in people’s heads as the ‘dark net.’ This includes information that is not searchable online, in any book, or in any classroom and never will be.
Getting access to this ‘dark net’ information from people who have accomplished what you want to accomplish is extremely valuable and will help you think independently. The ‘dark net’ includes people’s lessons learned and hacks, topics that are too sensitive to talk about because they make someone look bad, and tacit knowledge (knowledge that people have but aren’t able to articulate).
Hoffman explains the power of the ‘dark net’:
“Ten extremely informed individuals who are happy to share what they know with you when you engage them can tell you a lot more than a thousand people you only know in the most superficial way.”
Key #1: Be extremely picky about whom you spend a lot of time around.
Key #2: Invest the time.
Elon Musk (SpaceX and Tesla co-founder): Use decision trees to make better decisions.
Decision-tree spreadsheet template: https://docs.google.com/spreadsheets/d/1FiaWdb-i8ulxfvwHnbViZ6z0wo9NRZuEg-oEi3nn850/edit?usp=sharing
Decision trees are particularly useful for avoiding stupid risks and big bets that aren’t likely to succeed.
Making unlikely big bets.
Taking “Russian roulette” risks.
Understand the different outcomes that could happen (both positive and negative)
Calculate the expected return or loss of each outcome:
Attach a probability to each outcome
Understanding the magnitude of the return or loss
Multiply the probability by the magnitude: (probability of winning × value of win) − (probability of losing × cost of the loss)
Add up and subtract all of the expected returns and losses
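The expected-value arithmetic above fits in a few lines; the dollar figures below are hypothetical:

```python
def expected_value(p_win: float, win_value: float, cost_of_loss: float) -> float:
    """(probability of winning x value of win) - (probability of losing x cost of loss)."""
    return p_win * win_value - (1 - p_win) * cost_of_loss

# Illustrative bet: a 20% shot at a $500k payoff, risking $50k.
ev = expected_value(0.20, 500_000, 50_000)
print(ev)  # expected value of roughly $60k
```

For a full decision tree you compute this at each leaf and add the results up the branches, as the steps above describe.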
Prioritization
Dinner Plate Prioritization.
If you have too much to do, start from the assumption that you’ll do none of it rather than from the assumption that you’ll do all of it.
This is like an empty dinner plate. Next, if you could do only one of those tasks or projects, which would it be?
Put that on your plate. If you could do only two, what would the 2nd task or project be?
That goes on the plate. Keep going for a 3rd and 4th. Then stop: your plate is full. If your plate fills up earlier, stop earlier. Whatever doesn’t fit on the plate either isn’t that important, or is important but has to be delegated or mitigated instead of done by you. A similar idea appears in Kanban’s work-in-progress limits.
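One way to sketch the dinner-plate process; the task names and the plate size of four are invented, and the `most_important` helper is a stand-in for actually asking yourself “if I could do only one, which?”:

```python
PLATE_SIZE = 4  # assumed capacity; stop when the plate is full

backlog = ["ship v2", "hire designer", "refactor billing",
           "write blog post", "redo logo", "answer conference invite"]

def most_important(tasks):
    # Stand-in for the real judgment call; here we assume the backlog
    # is already in gut-feel priority order.
    return tasks[0]

plate = []
remaining = list(backlog)
# Start from the assumption you'll do NONE of it, then add one at a time.
while remaining and len(plate) < PLATE_SIZE:
    pick = most_important(remaining)
    remaining.remove(pick)
    plate.append(pick)

print("do:", plate)
print("delegate or drop:", remaining)
```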
Agile / Lean
If you’re not sure what product to build, but have only a rough general idea, don’t plan the whole project out and commit to final delivery.
Time-box yourself: take two weeks, not more, to build a prototype that’s simple, primitive, self-contained, and set up to elicit feedback from customers and stakeholders. Don’t write perfect code; just make it work (this is called technical debt, and now is the best time to take it on). Show it to them. Learn. If you’re going in the right direction, take another two weeks to improve the product; rinse and repeat. If you’re not, throw away your prototype and spend another two weeks on a new one; rinse and repeat. Eventually you’ll have a product that your customers and stakeholders love, plus a lot of technical debt. Once you’re making money, you can “pay off” the technical debt with reengineering.
What Would Have To Be True?
If you’re trying to change the world, look for cause and effect.
You want an affordable flying car? What would have to be true, for you to have an affordable flying car?
Make a list. Got it? Now look at the items on the list. What would have to be true, for the things on the list to be true?
Keep going until you get to something actionable — something you’re confident that you could make true given time and resources.
The First-Order Model
How do you begin to model something? With due respect to Mr. Musk and to critics of inductive reasoning, the first-order model (X is like Y, therefore reasoning about Y yields valuable insights into X) is the very foundation of critical thinking. Need to quickly reason about batteries? As a first-order model, a battery is an ideal fixed voltage source with an ideal series resistance. Need a first-order model of humans? They’re selfish but susceptible to mass hysteria. The first-order model is like an agile prototype: you learn from it in order to make a better model; rinse and repeat.
In short: drawing an analogy about X, then using that analogy to learn more about X.
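The battery example can be made concrete. A rough sketch of the first-order model (ideal voltage source plus series resistance), with illustrative component values for a typical AA-style cell:

```python
def terminal_voltage(emf: float, internal_r: float, load_r: float) -> float:
    """First-order battery: ideal source `emf` in series with `internal_r`."""
    current = emf / (internal_r + load_r)  # Ohm's law over the whole loop
    return emf - current * internal_r      # voltage remaining at the terminals

# Illustrative values: 1.5 V cell, 0.2 ohm internal resistance, 10 ohm load.
print(terminal_voltage(1.5, 0.2, 10.0))
```

The model is wrong in detail (real cells sag with temperature and charge state), which is exactly the point: it is a prototype you refine once it stops explaining what you observe.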
Occam’s Razor
How can we explain phenomena? Simpler explanations are better because every added assumption is one more thing that can be wrong, making the simplest adequate explanation the most probable starting point. Given several possible explanations for a phenomenon, start with the simplest (a form of first-order model) and stick with it until you can show that the simple explanation is insufficient. Why do apples fall to the ground? The earth exerts a huge attractive force; things not otherwise anchored are drawn to it. Call it gravity. Go with that until you find something it should explain but doesn’t.
The Five Whys
Why did a system fail? There’s a surface explanation, but often that explanation isn’t actionable. Dig deeper; why did the surface explanation happen? Even that might not be actionable. Deeper. When you get to five levels of depth (use as many as you need, five is just a mnemonic) you understand the failure well enough to mitigate and prevent it. Digging deeper like this is called root cause analysis. Why was the website down? The servers were overloaded with traffic. Why were the servers overloaded with traffic? It’s Final Four week - we had a lot more users than our capacity plan projected. Why did the capacity plan fail? Um..we accidentally used a traffic model from 2010 instead of 2017. How did that happen? We restored the model from the wrong backup after a power failure last week. Why did you pick the wrong backup? We have a policy that recycles backup IDs after seven years, and hadn’t yet made the 2017 backup. Aha! Actionable.
Scenario Planning / Roadmapping
How do we get the engineering department and our tech portfolio where it needs to be five years from now? To start with, let’s have the CEO and Product Management tell us their goals for that time frame. From that let’s identify key technologies and skills. Let’s look at historical data for key tech; maybe it follows Moore’s Law, maybe not; either way let’s project it out five years. Let’s look at our competitors, and at ourselves, and rank our and their strengths in key areas. Let’s look at the probability of disruptive innovation, and how it could derail our plans.
What are the risks? What are the warning signs and KPIs? How do we mitigate?
Now, let’s lay out a series of timelines for how we move, incrementally or in a revolutionary way, from where we are to where we need to be in five years. Here’s the part that blows your mind though: we invest a lot of time and energy in creating this roadmap, but we all know it’s BS. Nobody can see out even six months, much less five years. We’re going to do it all over again in a year.
The power of this technique, of this exercise, is threefold and counterintuitive: first, it gets everyone on the same page; second, it creates a kind of “pole star” that the organization can navigate toward but never reach; third, in striving for a better future the organization might solve some completely different problem and become the source of its own disruptive innovation.
Why 10,000 Experiments Beat 10,000 Hours
Perhaps the most popular current success formula is the 10,000-hour rule popularized by Malcolm Gladwell. The idea is that you need 10,000 hours of deliberate practice to become a world-class performer in any field.
Research now tells us, however, that this formula is woefully inadequate to explain success, especially in the professional realm. A 2014 review of 88 previous studies found that “deliberate practice explained 26% of the variance in performance for games, 21% for music, 18% for sports, 4% for education, and less than 1% for professions. We conclude that deliberate practice is important, but not as important as has been argued.”
This means that deliberate practice may help you in fields that change slowly or not at all, such as music and sports. It helps you succeed when the future looks like the past, but it’s next to useless in areas that change rapidly, such as technology and business.
What Edison and others (see more examples below) teach us is that we should maximize the number of experiments, not hours. Instead of the 10,000-hour rule, we need what I call the 10,000-experiment rule.
Throughout history, the scientific method has arguably produced more human progress than any other philosophy. At the heart of the scientific method is experimentation: develop a hypothesis, perform a test to prove the hypothesis right or wrong, analyze the results, and create a new hypothesis based on what you learned. The 10,000-experiment rule takes this proven power of experimentation out of the lab and into day-to-day life.
Following the 10,000-experiment rule means starting your day with not just a to-do list but a “to-test” list like Leonardo Da Vinci. According to Walter Isaacson, one of Da Vinci’s biographers, “Every morning his life hack was: make a list of what he wants to know. Why do people yawn? What does the tongue of a woodpecker look like?”
As you go through your day, following the 10,000-experiment rule means constantly looking for opportunities to collect data rather than just doing what you need to do. It means adding a deliberate reflection process based on reviewing data before the day ends.
For example, do you want to improve your sales results by asking a new question at the end of sales calls? Now every sales call becomes an opportunity to ask that question and collect data so that you can learn how to make better sales calls in the future. Do you want to sleep better so that you can have more energy during the day? You can research all the best practices for falling asleep, turn the most compelling ones into a routine, use a sleep tracker to get objective data on the quantity and quality of your sleep, and then make adjustments to your routine to improve the results.
To achieve 10,000 hours of deliberate practice requires three hours of deliberate practice per day for 10 years. I argue that the 10,000-experiment rule is just as difficult, yet doable, requiring three experiments per day.
In other words, the key to maximizing creative success, according to the theory, is producing more experiments.
THE COSTS OF INNOVATING
This is hardly a secret. Amazon chief Jeff Bezos has said, “Our success at Amazon is a function of how many experiments we do per year, per month, per week, per day.”
“If you can increase the number of experiments you try from a hundred to a thousand, you dramatically increase the number of innovations you produce.”
You only need a few big wins to make all those experiments worth it.
Given a ten percent chance of a 100 times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of ten. We all know that if you swing for the fences, you’re going to strike out a lot, but you’re also going to hit some home runs. The difference between baseball and business, however, is that baseball has a truncated outcome distribution. When you swing, no matter how well you connect with the ball, the most runs you can get is four. In business, every once in a while, when you step up to the plate, you can score 1,000 runs. This long-tailed distribution of returns is why it’s important to be bold. Big winners pay for so many experiments.
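The arithmetic behind the ten-percent, 100x bet, worked through (staking one unit per bet):

```python
# Risk 1 unit for a 10% chance at 100 units; lose the stake the other 90%.
p_win, payoff, stake = 0.10, 100.0, 1.0
ev = p_win * payoff - (1 - p_win) * stake
print(ev)  # strongly positive per bet, despite losing 9 times out of 10
```

Expected value per bet is 9.1 units: one rare 100x win more than covers nine lost stakes, which is the long-tail logic in numbers.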
Work toward building a platform or approach that makes it possible to run not just twice as many experiments as you already do but 10 to 100 times more.
Empower your most junior employees to conceive and perform their own experiments.
Come up with a standard methodology for validating results.
Share the results and lessons learned from experiments across the company, then use those to inform new ones; even failed experiments should help you ask better questions.
Select experiments to go forward with that have the highest potential ROI (in other words, stop testing the color of the package).
Set aside a percentage of your marketing and product-development budget strictly for experimentation. Most companies spend 0%–5% of their time and money on this, when it should be closer to 10%.
First, we live in a culture obsessed with productivity: getting more done in less time, systematizing, automating, and even outsourcing. If you have a short-term productivity mindset, then taking time out of your day to nurture a creative process with unpredictable results that don’t pay off immediately is extremely hard. What is productive in the long term often feels unproductive in the short term.
Also, performing experiments is time intensive. To squeeze out some deliberate learning every day requires at least 15 minutes, but even more challenging is that most experiments fail. While failure is increasingly celebrated in our society, most people still have a visceral feeling of shame and disappointment that comes from it.
It wasn’t until I understood the math behind experimentation that I was able to get past my fear of failure.
If you do enough experiments, the odds are in your favor. The quality of each subsequent experiment increases because you will tend to apply the lessons learned from previous experiments. Those lessons make your success curve exponential rather than linear.
One big winner pays more than enough for all the losing experiments.
Today’s tools allow anyone to increase their quantity of experimentation by an order of magnitude. A new breed of affordable apps, services, and trackers helps us learn what works for other people, collect data on ourselves, interpret it, stay accountable, and track our progress in real time. In the health space, for example, these new tools have led to the biohacking and quantified-self movements, where people use blood glucose levels, sleep, activity, heart rate, gut biome, and genetics to guide their experimentation. Similar experimentation explosions are happening in the worlds of relationships, sexuality, intelligence, happiness, productivity, and personal finance.
If so many people across so many fields can incorporate deliberate experimentation, so can you!
First, identify at least one “jackpot experiment” that could change your life. The road to deliberate experimentation starts with one experiment, but not all experiments are created equal. Some are extremely time- and money-intensive. Some create incremental changes, while others could be life-changing. Some have a 1% chance of success; others are a sure bet. For your first jackpot experiment, pick one that is cheap in time and money, has the potential to be life-changing, and carries a reasonable probability of paying off.
Second, I recommend running three experiment tests each day. When you start the day, identify three tests you want to perform. Collect data throughout the day, and before the day ends, analyze the results.
Try this for one month (roughly 90 tests at three per day) and see the difference it makes!
Interested in really taking action to become a deliberate experimenter? After studying how dozens of the world’s most prolific experimenters work, we spent dozens of hours creating a free mini-course that includes several email lessons and a webinar to help you be successful with the 10,000-experiment rule. It includes case studies of experimenters who improved their health, confidence, and lifestyle, tips on how to perform a valid experiment, details on the nuances of Blind Variation And Selective Retention and much more.
General Thinking Concepts (9)
The Map is not the Territory
The map of reality is not reality. Even the best maps are imperfect. That’s because they are reductions of what they represent. If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us. A map can also be a snapshot of a point in time, representing something that no longer exists. This is important to keep in mind as we think through problems and make better decisions.
Circle of Competence
When ego and not competence drives what we undertake, we have blind spots. If you know what you understand, you know where you have an edge over others. When you are honest about where your knowledge is lacking you know where you are vulnerable and where you can improve. Understanding your circle of competence improves decision making and outcomes.
First Principles Thinking
First principles thinking is one of the best ways to reverse-engineer complicated situations and unleash creative possibility. Sometimes called reasoning from first principles, it’s a tool to help clarify complicated problems by separating the underlying ideas or facts from any assumptions based on them. What remains are the essentials. If you know the first principles of something, you can build the rest of your knowledge around them to produce something new.
Thought Experiment
Thought experiments can be defined as “devices of the imagination used to investigate the nature of things.” Many disciplines, such as philosophy and physics, make use of thought experiments to examine what can be known. In doing so, they can open up new avenues for inquiry and exploration. Thought experiments are powerful because they help us learn from our mistakes and avoid future ones. They let us take on the impossible, evaluate the potential consequences of our actions, and re-examine history to make better decisions. They can help us both figure out what we really want, and the best way to get there.
Second-Order Thinking
Almost everyone can anticipate the immediate results of their actions. This type of first-order thinking is easy and safe but it’s also a way to ensure you get the same results that everyone else gets. Second-order thinking is thinking farther ahead and thinking holistically. It requires us to not only consider our actions and their immediate consequences, but the subsequent effects of those actions as well. Failing to consider the second and third order effects can unleash disaster.
Probabilistic Thinking
Probabilistic thinking is essentially trying to estimate, using some tools of math and logic, the likelihood of any specific outcome coming to pass. It is one of the best tools we have to improve the accuracy of our decisions. In a world where each moment is determined by an infinitely complex set of factors, probabilistic thinking helps us identify the most likely outcomes. When we know these our decisions can be more precise and effective.
Inversion
Inversion is a powerful tool to improve your thinking because it helps you identify and remove obstacles to success. The root of inversion is “invert,” which means to upend or turn upside down. As a thinking tool it means approaching a situation from the opposite end of the natural starting point. Most of us tend to think one way about a problem: forward. Inversion allows us to flip the problem around and think backward. Sometimes it’s good to start at the beginning, but it can be more useful to start at the end.
Occam’s Razor
Simpler explanations are more likely to be true than complicated ones. This is the essence of Occam’s Razor, a classic principle of logic and problem-solving. Instead of wasting your time trying to disprove complex scenarios, you can make decisions more confidently by basing them on the explanation that has the fewest moving parts.
Hanlon’s Razor
Hard to trace in its origin, Hanlon’s Razor states that we should not attribute to malice that which is more easily explained by stupidity. In a complex world, using this model helps us avoid paranoia and ideology. By not generally assuming that bad results are the fault of a bad actor, we look for options instead of missing opportunities. This model reminds us that people do make mistakes. It demands that we ask if there is another reasonable explanation for the events that have occurred. The explanation most likely to be right is the one that contains the least amount of intent.
Numeracy (14)
Permutations and Combinations
The mathematics of permutations and combinations leads us to understand the practical probabilities of the world around us: how things can be ordered, and how many ways events can unfold.
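These counts can be computed directly with Python’s standard library; the “five candidates, pick two” framing is an invented example:

```python
import math

# Five candidate features; ship two of them this sprint.
n, k = 5, 2

print(math.perm(n, k))  # 20 ordered arrangements (order matters)
print(math.comb(n, k))  # 10 unordered selections (order doesn't matter)
```

The gap between the two numbers (every combination corresponds to k! permutations) is the usual source of probability mistakes: counting ordered outcomes while dividing by unordered ones, or vice versa.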
Order of Magnitude
In many, perhaps most, systems, quantitative description down to a precise figure is either impossible or useless (or both). For example, estimating the distance between our galaxy and the next one over is a matter of knowing not the precise number of miles, but how many zeroes are after the 1. Is the distance about 1 million miles or about 1 billion? This thought habit can help us escape useless precision.
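A quick sketch of counting zeroes instead of chasing precision; both distance figures are deliberately rough (Andromeda at ~2.5 million light-years, a light-year at ~5.9 trillion miles):

```python
import math

# Deliberately rough inputs: precision is the wrong goal here.
distance_miles = 2.5e6 * 5.9e12  # galaxy distance x miles per light-year

# All we want is the exponent: how many zeroes after the 1?
print(math.floor(math.log10(distance_miles)))  # on the order of 10^19 miles
```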
Bayesian Updating
The Bayesian method is a method of thought (named for Thomas Bayes) whereby one takes into account all prior relevant probabilities and then incrementally updates them as new information arrives. This method is especially productive in the fundamentally non-deterministic world we experience: we must combine prior odds with new information to arrive at our best decisions. This is not how our intuitive decision-making engine naturally operates.
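A minimal Bayes update, using the classic rare-condition example with invented rates (1% prior, 90% true-positive rate, 9% false-positive rate):

```python
prior = 0.01            # base rate: 1% of people have the condition (assumed)
p_pos_given_yes = 0.90  # test catches 90% of real cases (assumed)
p_pos_given_no = 0.09   # test falsely flags 9% of healthy people (assumed)

# Bayes' rule: P(yes | positive) = P(positive | yes) P(yes) / P(positive)
evidence = prior * p_pos_given_yes + (1 - prior) * p_pos_given_no
posterior = prior * p_pos_given_yes / evidence
print(round(posterior, 3))
```

Even a positive result from a decent test leaves the probability near 9%, not 90%: the prior dominates when the condition is rare, which is exactly the non-intuitive part.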
Stochastic Processes (Poisson, Markov, Random Walk)
A stochastic process is a random statistical process and encompasses a wide variety of processes in which the movement of an individual variable can be impossible to predict but can be thought through probabilistically. The wide variety of stochastic methods helps us describe systems of variables through probabilities without necessarily being able to determine the position of any individual variable over time. For example, it’s not possible to predict stock prices on a day-to-day basis, but we can describe the probability of various distributions of their movements over time. Obviously, it is much more likely that the stock market (a stochastic process) will be up or down 1% in a day than up or down 10%, even though we can’t predict what tomorrow will bring.
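A random walk is the simplest stochastic process to sketch: any single path is unpredictable, but the statistical structure of paths is not. The step count and seed below are arbitrary choices for reproducibility:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Each step is +1 or -1 with equal probability.
position = 0
path = [position]
for _ in range(100):
    position += random.choice([-1, 1])
    path.append(position)

print(path[-1])  # where this particular walk happened to end up
```

No one can predict the final position of one walk, but the distribution of final positions over many walks is well described (centered at zero, spreading like the square root of the number of steps), mirroring the stock-price point above.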
Algebraic Equivalence
The introduction of algebra allowed us to demonstrate mathematically and abstractly that two seemingly different things could be the same. By manipulating symbols, we can demonstrate equivalence or inequivalence, the use of which led humanity to untold engineering and technical abilities. Knowing at least the basics of algebra can allow us to understand a variety of important results.
Randomness
Though the human brain has trouble comprehending it, much of the world is composed of random, non-sequential, non-ordered events. We are “fooled” by random effects when we attribute causality to things that are actually outside of our control. If we don’t course-correct for this fooled-by-randomness effect – our faulty sense of pattern-seeking – we will tend to see things as being more predictable than they are and act accordingly.
Compounding
It’s been said that Einstein called compounding a wonder of the world. He probably didn’t, but it is a wonder. Compounding is the process by which we add interest to a fixed sum, which then earns interest on the previous sum and the newly added interest, and then earns interest on that amount, and so on ad infinitum. It is an exponential effect, rather than a linear, or additive, effect. Money is not the only thing that compounds; ideas and relationships do as well. In tangible realms, compounding is always subject to physical limits and diminishing returns; intangibles can compound more freely. Compounding also leads to the time value of money, which underlies all of modern finance.
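The compounding arithmetic, with illustrative numbers, against its linear (simple-interest) counterpart:

```python
def compound(principal: float, rate: float, periods: int) -> float:
    """Value after `periods` of compounding at `rate` per period."""
    return principal * (1 + rate) ** periods

# $1,000 at 5% per year for 10 years: exponential vs. additive growth.
grown = round(compound(1000, 0.05, 10), 2)
linear = 1000 + 1000 * 0.05 * 10  # simple interest for comparison
print(grown, linear)
```

Compounded, the $1,000 grows to about $1,628.89 versus $1,500 with simple interest; the gap widens dramatically as the horizon lengthens, which is why interrupting it is costly.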
Multiplying by Zero
Any reasonably educated person knows that any number multiplied by zero, no matter how large the number, is still zero. This is true in human systems as well as mathematical ones. In some systems, a failure in one area can negate great effort in all other areas. As simple multiplication would show, fixing the “zero” often has a much greater effect than does trying to enlarge the other areas.
Churn
Insurance companies and subscription services are well aware of the concept of churn – every year, a certain number of customers are lost and must be replaced. Standing still is the equivalent of losing, as seen in the model called the “Red Queen Effect.” Churn is present in many business and human systems: A constant figure is periodically lost and must be replaced before any new figures are added over the top.
Law of Large Numbers
One of the fundamental underlying assumptions of probability is that as more instances of an event occur, the actual results will converge on the expected ones. For example, if I know that the average man is 5 feet 10 inches tall, I am far more likely to get an average of 5′10″ by selecting 500 men at random than 5 men at random. The opposite of this model is the law of small numbers, which states that small samples can and should be looked at with great skepticism.
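A quick simulation illustrates the point, assuming a population with a mean height of 70 inches (5′10″) and a standard deviation of 3 inches:

```python
import random
import statistics

random.seed(0)

# Assumed population: mean height 70 inches, standard deviation 3 inches.
def sample_mean(n):
    return statistics.mean(random.gauss(70, 3) for _ in range(n))

def avg_error(n, trials=2_000):
    # Typical distance between a sample mean and the true mean of 70.
    return statistics.mean(abs(sample_mean(n) - 70) for _ in range(trials))

e5, e500 = avg_error(5), avg_error(500)
print(f"typical error with n=5:   {e5:.2f} inches")
print(f"typical error with n=500: {e500:.2f} inches")
```

The 500-man sample lands far closer to the true average, which is exactly the skepticism the law of small numbers demands of the 5-man sample.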
Bell Curve/Normal Distribution
The normal distribution is a statistical distribution that produces the well-known graphical representation of a bell curve, with a meaningful central “average” and increasingly rare standard deviations from that average when correctly sampled – a consequence of the so-called central limit theorem. Well-known examples include human height and weight, but it’s just as important to note that many common processes, especially in non-tangible systems like social systems, do not follow the normal distribution.
Power Laws
One of the most common processes that does not fit the normal distribution is a power law, whereby one quantity varies with a power (exponent) of another rather than linearly. For example, the Richter scale describes earthquake magnitude on a power-law scale: an 8 registers 10x the measured wave amplitude of a 7, and a 9 registers 10x that of an 8, with the energy released growing even faster. There is no meaningful “average” earthquake, and the central limit theorem does not apply; the same is true of many other power-law distributions.
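The arithmetic can be sketched directly, using the standard seismological convention (an outside fact, not from the text above) that each whole magnitude step is a 10x increase in measured amplitude and roughly a 10^1.5 (~31.6x) increase in radiated energy:

```python
# Standard seismological convention: each whole magnitude step is a 10x
# increase in wave amplitude and ~10**1.5 (~31.6x) in radiated energy.
def amplitude_ratio(m1, m2):
    return 10 ** (m2 - m1)

def energy_ratio(m1, m2):
    return 10 ** (1.5 * (m2 - m1))

print(amplitude_ratio(7, 9))      # a 9 has 100x the amplitude of a 7
print(round(energy_ratio(7, 9)))  # and ~1000x the energy
```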
Fat-Tailed Processes (Extremistan)
A process can often look like a normal distribution but have a large “tail” – meaning that seemingly outlier events are far more likely than they are in an actual normal distribution. A strategy or process may be far more risky than a normal distribution is capable of describing if the fat tail is on the negative side, or far more profitable if the fat tail is on the positive side. Much of the human social world is said to be fat-tailed rather than normally distributed.
Bayesian Updating
The Bayesian method is a method of thought (named for Thomas Bayes) whereby one takes into account all prior relevant probabilities and then incrementally updates them as newer information arrives. This method is especially productive given the fundamentally non-deterministic world we experience: We must use prior odds and new information in combination to arrive at our best decisions. This is not how our intuitive decision-making engine tends to work; it is prone to neglecting the prior odds.
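A classic worked example, with hypothetical numbers for a medical test, shows how strongly the prior odds matter when updating on a positive result:

```python
# Hypothetical numbers: a condition with 1% prevalence, a test with 90%
# sensitivity and a 5% false-positive rate.
prior = 0.01
p_pos_given_sick = 0.90
p_pos_given_healthy = 0.05

# Total probability of a positive test, then Bayes' rule.
p_pos = prior * p_pos_given_sick + (1 - prior) * p_pos_given_healthy
posterior = prior * p_pos_given_sick / p_pos

print(f"P(sick | positive test) = {posterior:.1%}")  # ~15.4%
```

Intuition often says a positive test from a "90% accurate" test means ~90% odds of illness; combining it with the 1% prior shows the true odds are far lower.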
Regression to the Mean
In a normally distributed system, unusually extreme observations tend to be followed by observations closer to the average, and over many observations results converge on that average (the related Law of Large Numbers). We are often fooled by regression to the mean, as with a sick patient improving spontaneously around the same time they begin taking an herbal remedy, or a poorly performing sports team going on a winning streak. We must be careful not to confuse statistically likely events with causal ones.
Systems (22)
Optimal Stopping - When to Stop Looking?
When searching under a time or other resource constraint, the optimal strategy (from the classic “secretary problem”) is to spend the first 37% of the available time or options looking without committing, and then to choose the first option that is better than everything seen during that initial phase.
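The 37% rule can be checked with a small Monte Carlo sketch of the secretary problem (the candidate count and trial numbers below are arbitrary choices):

```python
import random

random.seed(1)

def pick_best_rate(n, k, trials=20_000):
    """Observe the first k of n candidates without committing, then take
    the first candidate better than all of them; count how often the
    overall best ends up chosen."""
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))   # 0 = worst, n - 1 = best
        random.shuffle(ranks)
        threshold = max(ranks[:k])
        choice = next((r for r in ranks[k:] if r > threshold), ranks[-1])
        wins += choice == n - 1
    return wins / trials

n = 100
r37 = pick_best_rate(n, 37)   # stop looking after ~37%
r10 = pick_best_rate(n, 10)   # stop looking too early
print(f"37% rule: {r37:.1%}  vs  10% rule: {r10:.1%}")
```

The 37% cutoff wins the best candidate roughly 37% of the time, which is the theoretical optimum; earlier or later cutoffs do worse.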
Explore/Exploit
In the explore/exploit trade-off, optimism minimizes regret. Suppose you’ve found some restaurants you really like. How often should you exploit that knowledge for a guaranteed good meal, and how often should you optimistically take a chance and explore new places to eat? The answer depends partly on the interval of time involved. When you’re new in town, explore like mad; if you’re about to leave a city, stick with the known favorites.
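One standard way to make this trade-off concrete (a sketch of an epsilon-greedy strategy, not the source's exact algorithm) is to explore heavily while much time remains and exploit more as the horizon closes; all numbers below are hypothetical:

```python
import random

random.seed(7)

# Hypothetical restaurant qualities, hidden from the chooser.
true_quality = [0.3, 0.5, 0.8]
estimates = [0.0] * 3   # running estimate of each restaurant's quality
visits = [0] * 3

def choose(nights_left, total_nights):
    # Explore more when plenty of time remains; exploit near the end.
    epsilon = nights_left / total_nights
    if random.random() < epsilon:
        return random.randrange(3)                        # explore
    return max(range(3), key=lambda i: estimates[i])      # exploit

total = 500
for night in range(total):
    arm = choose(total - night, total)
    reward = random.random() < true_quality[arm]          # good meal tonight?
    visits[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / visits[arm]

print("visits:", visits)
print("estimates:", [round(e, 2) for e in estimates])
```

Early nights scatter across all three restaurants; late nights concentrate on the one the running estimates rank highest.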
Scale
One of the most important principles of systems is that they are sensitive to scale. Properties (or behaviors) tend to change when you scale them up or down. In studying complex systems, we must always be roughly quantifying – in orders of magnitude, at least – the scale at which we are observing, analyzing, or predicting the system.
Criticality
A system becomes critical when it is about to jump discretely from one phase to another. The marginal utility of the last unit before the phase change is wildly higher than any unit before it. A frequently cited example is water turning from a liquid to a vapor when heated to a specific temperature. “Critical mass” refers to the mass needed to have the critical event occur, most commonly in a nuclear system.
Law of Diminishing Returns
Related to scale, most important real-world results are subject to an eventual decrease of incremental value. A good example would be a poor family: Give them enough money to thrive, and they are no longer poor. But after a certain point, additional money will not improve their lot; there is a clear diminishing return of additional dollars at some roughly quantifiable point. Often, the law of diminishing returns veers into negative territory – i.e., receiving too much money could destroy the poor family.
Pareto Principle
Named for Italian polymath Vilfredo Pareto, who noticed that 80% of Italy’s land was owned by about 20% of its population, the Pareto Principle states that a small amount of some phenomenon causes a disproportionately large effect. The Pareto Principle is an example of a power-law type of statistical distribution – as distinguished from a traditional bell curve – and is demonstrated in various phenomena ranging from wealth to city populations to important human habits.
Feedback Loops (and Homeostasis)
All complex systems are subject to positive and negative feedback loops whereby A causes B, which in turn influences A (and C), and so on – with higher-order effects frequently resulting from continual movement of the loop. In a homeostatic system, a change in A is often brought back into line by an opposite change in B to maintain the balance of the system, as with the temperature of the human body or the behavior of an organizational culture. Automatic feedback loops maintain a “static” environment unless and until an outside force changes the loop. A “runaway feedback loop” describes a situation in which the output of a reaction becomes its own catalyst (auto-catalysis).
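A toy negative-feedback loop (with an assumed set point and gain) shows how a homeostatic system pulls a perturbed variable back toward equilibrium:

```python
# Assumed numbers: a body-temperature-like set point and feedback gain.
set_point = 37.0
temp = 40.0    # variable A, perturbed above the set point
gain = 0.3     # strength of the corrective (negative) feedback B

history = []
for _ in range(20):
    temp += gain * (set_point - temp)   # B opposes the change in A
    history.append(round(temp, 2))

print(history[:5], "...", history[-1])  # converges toward 37.0
```

Flipping the sign of the correction turns this into a runaway positive loop, where each step amplifies the deviation instead of damping it.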
Chaos Dynamics (Butterfly Effect)/ (Sensitivity to Initial Conditions)
In a world such as ours, governed by chaos dynamics, small changes (perturbations) in initial conditions have massive downstream effects as near-infinite feedback loops occur; this phenomenon is also called the butterfly effect. This means that some aspects of physical systems (like the weather more than a few days from now) as well as social systems (the behavior of a group of human beings over a long period) are fundamentally unpredictable.
Preferential Attachment (Cumulative Advantage)
A preferential attachment situation occurs when the current leader is given more of the reward than the laggards, thereby tending to preserve or enhance the status of the leader. A strong network effect is a good example of preferential attachment; a market with 10x more buyers and sellers than the next largest market will tend to have a preferential attachment dynamic.
Emergence
Higher-level behavior tends to emerge from the interaction of lower-order components. The result is frequently not linear – not a matter of simple addition – but rather non-linear, or exponential. An important resulting property of emergent behavior is that it cannot be predicted from simply studying the component parts.
Irreducibility
We find that in most systems there are irreducible quantitative properties, such as complexity, minimums, time, and length. Below the irreducible level, the desired result simply does not occur. One cannot get several women pregnant to reduce the amount of time needed to have one child, and one cannot reduce a successfully built automobile to a single part. These results are, to a defined point, irreducible.
Tragedy of the Commons
A concept introduced by the economist and ecologist Garrett Hardin, the Tragedy of the Commons states that in a system where a common resource is shared, with no individual responsible for the wellbeing of the resource, it will tend to be depleted over time. The Tragedy is reducible to incentives: Unless people collaborate, each individual captures the full benefit of consuming the resource while the cost is spread across the whole group, so everyone is incentivized to take their share before others deplete it.
Gresham’s Law
Gresham’s Law, named for the financier Thomas Gresham, states that in a system of circulating currency, debased (“bad”) currency will tend to drive out sound (“good”) currency, as the good currency is hoarded and the bad currency is spent. We see a similar result in human systems, as with bad behavior driving out good behavior in a crumbling moral system, or bad practices driving out good practices in a crumbling economic system. Generally, regulation and oversight are required to prevent results that follow Gresham’s Law.
Algorithms
While hard to precisely define, an algorithm is generally an automated set of rules, a “blueprint” prescribing a series of steps or actions that leads to a desired outcome, often stated in the form of a series of “If → Then” statements. Algorithms are best known for their use in modern computing, but are a feature of biological life as well. For example, human DNA contains an algorithm for building a human being.
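Euclid's algorithm for the greatest common divisor is a compact illustration: two "If → Then" rules, applied repeatedly, always reach the desired outcome:

```python
# Euclid's algorithm, stated as a pair of repeated "If -> Then" rules.
def gcd(a, b):
    while b != 0:          # If b is not zero -> replace (a, b) with (b, a % b)
        a, b = b, a % b
    return a               # If b is zero -> a is the answer

print(gcd(48, 36))  # 12
```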
Fragility – Robustness – Antifragility
Popularized by Nassim Taleb, the sliding scale of fragility, robustness, and antifragility refers to the responsiveness of a system to incremental negative variability. A fragile system or object is one in which additional negative variability has a disproportionately negative impact, as with a coffee cup shattering from a 6-foot fall, but receiving no damage at all (rather than 1/6th of the damage) from a 1-foot fall. A robust system or object tends to be neutral to the additional negative variability, and of course, an antifragile system benefits: If there were a cup that got stronger when dropped from 6 feet than when dropped from 1 foot, it would be termed antifragile.
Backup Systems/Redundancy
A critical model of the engineering profession is that of backup systems. A good engineer never assumes the perfect reliability of the components of the system. He or she builds in redundancy to protect the integrity of the total system. Without the application of this robustness principle, tangible and intangible systems tend to fail over time.
Margin of Safety
Similarly, engineers have also developed the habit of adding a margin for error into all calculations. In an unknown world, driving a 9,500-pound bus over a bridge built to hold precisely 9,600 pounds is rarely seen as intelligent. Thus, on the whole, few modern bridges ever fail. In practical life outside of physical engineering, we can often profitably give ourselves margins as robust as the bridge system.
Network Effects
A network tends to become more valuable as nodes are added to the network: this is known as the network effect. An easy example is contrasting the development of the electricity system and the telephone system. If only one house has electricity, its inhabitants have gained immense value, but if only one house has a telephone, its inhabitants have gained nothing of use. Only with additional telephones does the phone network gain value. This network effect is widespread in the modern world and creates immense value for organizations and customers alike.
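The value of added nodes can be sketched with Metcalfe's heuristic (a rough rule of thumb, not from the text above): the number of possible pairwise connections among n nodes grows roughly with the square of n:

```python
# Metcalfe's heuristic: n nodes allow n*(n-1)/2 pairwise connections,
# so a network's potential value grows roughly with n**2.
def connections(n):
    return n * (n - 1) // 2

for n in (1, 2, 10, 100):
    print(f"{n} nodes -> {connections(n)} possible connections")
```

One telephone allows zero useful connections; a hundred allow nearly five thousand, which is why each new node benefits every existing one.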
Black Swan
Also popularized by Nassim Taleb, a Black Swan is a rare and highly consequential event that is invisible to a given observer ahead of time. It is a result of applied epistemology: If you have seen only white swans, you cannot categorically state that there are no black swans, but the inverse is not true: seeing one black swan is enough for you to state that there are black swans. Black Swan events are necessarily unpredictable to the observer (as Taleb likes to say, Thanksgiving is a Black Swan for the turkey, not the butcher) and thus must be dealt with by addressing the fragility-robustness-antifragility spectrum rather than through better methods of prediction.
Via Negativa – Omission/Removal/Avoidance of Harm
In many systems, improvement is at best, or at times only, a result of removing bad elements rather than of adding good elements. This is a credo built into the modern medical profession: First, do no harm. Similarly, if one has a group of children behaving badly, removal of the instigator is often much more effective than any form of punishment meted out to the whole group.
The Lindy Effect
The Lindy Effect refers to the life expectancy of a non-perishable object or idea being related to its current lifespan. If an idea or object has lasted for X number of years, it would be expected (on average) to last another X years. Although a human being who is 90 and lives to 95 does not add 5 years to his or her life expectancy, non-perishables lengthen their life expectancy as they continually survive. A classic text is a prime example: if humanity has been reading Shakespeare’s plays for 400 years, we can expect them to be read for another 400.
Renormalization Group
The renormalization group technique allows us to think about physical and social systems at different scales. An idea from physics, and a complicated one at that, the application of a renormalization group to social systems allows us to understand why a small number of stubborn individuals can have a disproportionate impact if those around them follow suit on increasingly large scales.
Spring-loading
A system is spring-loaded if it is coiled in a certain direction, positive or negative. Positively spring-loading systems and relationships is important in a fundamentally unpredictable world to help protect us against negative events. The reverse can be very destructive.
Complex Adaptive Systems
A complex adaptive system, as distinguished from a complex system in general, is one that can understand itself and change based on that understanding. Complex adaptive systems are social systems. The difference is best illustrated by thinking about weather prediction contrasted to stock market prediction. The weather will not change based on an important forecaster’s opinion, but the stock market might. Complex adaptive systems are thus fundamentally not predictable.
Physical World (9)
Laws of Thermodynamics
The laws of thermodynamics describe energy in a closed system. The laws cannot be escaped and underlie the physical world. They describe a world in which useful energy is constantly being lost, and energy cannot be created or destroyed. Applying their lessons to the social world can be a profitable enterprise.
Activation Energy
A fire is not much more than a combination of carbon and oxygen, but the forests and coal mines of the world are not spontaneously combusting, because such a chemical reaction requires the input of a critical level of “activation energy” in order to get started. Two combustible elements alone are not enough.
Reciprocity
If I push on a wall, physics tells me that the wall pushes back with equivalent force. In a biological system, if one individual acts on another, the action will tend to be reciprocated in kind. And of course, human beings demonstrate intense reciprocity as well.
Velocity
Velocity is not equivalent to speed; the two are sometimes confused. Velocity is speed plus direction: how fast something gets somewhere. An object that moves two steps forward and then two steps back has moved at a certain speed but shows no velocity. The addition of direction, that critical distinction, is what we should consider in practical life.
Relativity
Relativity has been used in several contexts in the world of physics, but the important aspect to study is the idea that an observer cannot truly understand a system of which he himself is a part. For example, a man inside an airplane does not feel like he is experiencing movement, but an outside observer can see that movement is occurring. This form of relativity tends to affect social systems in a similar way.
Catalysts
A catalyst either kick-starts or maintains a chemical reaction, but isn’t itself a reactant. The reaction may slow or stop without the addition of catalysts. Social systems, of course, take on many similar traits, and we can view catalysts in a similar light.
Leverage
Most of the engineering marvels of the world have been accomplished with applied leverage. As famously stated by Archimedes, “Give me a lever long enough and I shall move the world.” With a small amount of input force, we can make a great output force through leverage. Understanding where we can apply this model to the human world can be a source of great success.
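Archimedes' claim follows from the law of the lever, which balances effort times effort-arm against load times load-arm (the numbers below are illustrative):

```python
# Law of the lever: effort * effort_arm = load * load_arm.
def effort_needed(load, load_arm, effort_arm):
    return load * load_arm / effort_arm

# Illustrative numbers: a 1,000 N load 0.5 m from the fulcrum can be
# balanced by a much smaller force applied 5 m away.
print(effort_needed(1000, 0.5, 5.0), "N")  # 100.0 N
```

A tenfold longer arm means a tenfold smaller force, which is the mechanical meaning of leverage that the social analogy borrows.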
Inertia
An object in motion with a certain vector wants to continue moving in that direction unless acted upon. This is a fundamental physical principle of motion; however, individuals, systems, and organizations display the same effect. It allows them to minimize the use of energy, but can cause them to be destroyed or eroded.
Alloying
When we combine various elements, we create new substances. This is no great surprise, but what can be surprising in the alloying process is that 2+2 can equal not 4 but 6 – the alloy can be far stronger than the simple addition of the underlying elements would lead us to believe. This process leads us to engineering great physical objects, but we understand many intangibles in the same way; a combination of the right elements in social systems or even individuals can create a 2+2=6 effect similar to alloying.
The Biological World (15)
Self-Preservation Instincts
Without a strong self-preservation instinct in an organism’s DNA, it would tend to disappear over time, thus eliminating that DNA. While cooperation is another important model, the self-preservation instinct is strong in all organisms and can cause violent, erratic, and/or destructive behavior for those around them.
Incentives
All creatures respond to incentives to keep themselves alive. This is the basic insight of biology. Constant incentives will tend to cause a biological entity to have constant behavior, to an extent. Humans are included and are particularly great examples of the incentive-driven nature of biology; however, humans are complicated in that their incentives can be hidden or intangible. The rule of life is to repeat what works and has been rewarded.
Cooperation (Including Symbiosis)
Competition tends to describe most biological systems, but cooperation at various levels is just as important a dynamic. In fact, the cooperation of a bacterium and a simple cell probably created the first complex cell and all of the life we see around us. Without cooperation, no group survives, and the cooperation of groups gives rise to even more complex versions of organization. Cooperation and competition tend to coexist at multiple levels.
Tendency to Minimize Energy Output (Mental & Physical)
In a physical world governed by thermodynamics and competition for limited energy and resources, any biological organism that was wasteful with energy would be at a severe disadvantage for survival. Thus, we see in most instances that behavior is governed by a tendency to minimize energy usage when at all possible.
Adaptation
Species tend to adapt to their surroundings in order to survive, given the combination of their genetics and their environment – an always-unavoidable combination. However, adaptations made in an individual’s lifetime are not passed down genetically, as was once thought: Populations of species adapt through the process of evolution by natural selection, as the most-fit examples of the species replicate at an above-average rate.
Evolution by Natural Selection
Evolution by natural selection was once called “the greatest idea anyone ever had.” In the 19th century, Charles Darwin and Alfred Russel Wallace simultaneously realized that species evolve through random mutation and differential survival rates. If we call human intervention in animal-breeding an example of “artificial selection,” we can call Mother Nature deciding the success or failure of a particular mutation “natural selection.” Those best suited for survival tend to be preserved. But of course, conditions change.
The Red Queen Effect (Co-evolutionary Arms Race)
The evolution-by-natural-selection model leads to something of an arms race among species competing for limited resources. When one species evolves an advantageous adaptation, a competing species must respond in kind or fail as a species. Standing pat can mean falling behind. This arms race is called the Red Queen Effect for the character in Through the Looking-Glass who said, “Now, here, you see, it takes all the running you can do, to keep in the same place.”
Replication
A fundamental building block of diverse biological life is high-fidelity replication. The fundamental unit of replication seems to be the DNA molecule, which provides a blueprint for the offspring to be built from physical building blocks. There are a variety of replication methods, but most can be lumped into sexual and asexual.
Hierarchical and Other Organizing Instincts
Most complex biological organisms have an innate feel for how they should organize. While not all of them end up in hierarchical structures, many do, especially in the animal kingdom. Human beings like to think they are outside of this, but they feel the hierarchical instinct as strongly as any other organism.
Simple Physiological Reward-Seeking
All organisms feel pleasure and pain from simple chemical processes in their bodies which respond predictably to the outside world. Reward-seeking is an effective survival-promoting technique on average. However, those same pleasure receptors can be co-opted to cause destructive behavior, as with drug abuse.
Exaptation
Introduced by the biologist Stephen Jay Gould, an exaptation refers to a trait developed for one purpose that is later used for another purpose. This is one way to explain the development of complex biological features like an eyeball; in a more primitive form, it may have been used for something else. Once it was there, and once it developed further, 3D sight became possible.
Extinction
The inability to survive can cause an extinction event, whereby an entire species ceases to compete and replicate effectively. Once its numbers have dwindled to a critically low level, an extinction can be unavoidable (and predictable) given the inability to effectively replicate in large enough numbers.
Ecosystems
An ecosystem describes any group of organisms coexisting with the natural world. Most ecosystems show diverse forms of life taking on different approaches to survival, with such pressures leading to varying behavior. Social systems can be seen in the same light as the physical ecosystems and many of the same conclusions can be made.
Niches
Most organisms find a niche: a method of competing and behaving for survival. Usually, a species will select a niche for which it is best adapted. The danger arises when multiple species begin competing for the same niche, which can cause an extinction – there can be only so many species doing the same thing before limited resources give out.
Dunbar’s Number
The primatologist Robin Dunbar observed through study that the number of individuals a primate can get to know and trust closely is related to the size of its neocortex. Extrapolating from his study of primates, Dunbar theorized that the Dunbar number for a human being is somewhere in the 100–250 range, which is supported by certain studies of human behavior and social networks.
Human Nature & Judgment (23)
Trust
Fundamentally, the modern world operates on trust. Familial trust is generally a given (otherwise we’d have a hell of a time surviving), but we also choose to trust chefs, clerks, drivers, factory workers, executives, and many others. A trusting system is one that tends to work most efficiently; the rewards of trust are extremely high.
Bias from Incentives
Highly responsive to incentives, humans have perhaps the most varied and hardest to understand set of incentives in the animal kingdom. This causes us to distort our thinking when it is in our own interest to do so. A wonderful example is a salesman truly believing that his product will improve the lives of its users. It’s not merely convenient that he sells the product; the fact of his selling the product causes a very real bias in his own thinking.
Pavlovian Association
Ivan Pavlov very effectively demonstrated that animals can respond not just to direct incentives but also to associated objects; remember the famous dogs salivating at the ring of a bell. Human beings are much the same and can feel positive and negative emotion towards intangible objects, with the emotion coming from past associations rather than direct effects.
Tendency to Feel Envy & Jealousy
Humans have a tendency to feel envious of those receiving more than they are, and a desire to “get what is theirs” in due course. The tendency towards envy is strong enough to drive otherwise irrational behavior, and is as old as humanity itself. Any system ignorant of envy effects will tend to self-immolate over time.
Tendency to Distort Due to Liking/Loving or Disliking/Hating
Based on past association, stereotyping, ideology, genetic influence, or direct experience, humans have a tendency to distort their thinking in favor of people or things that they like and against people or things they dislike. This tendency leads to overrating the things we like and underrating or broadly categorizing things we dislike, often missing crucial nuances in the process.
Denial
Anyone who has been alive long enough realizes that, as the saying goes, “denial is not just a river in Egypt.” This is powerfully demonstrated in situations like war or drug abuse, where denial has powerful destructive effects but allows for behavioral inertia. Denying reality can be a coping mechanism, a survival mechanism, or a purposeful tactic.
Availability Heuristic
One of the most useful findings of modern psychology is what Daniel Kahneman calls the Availability Bias or Heuristic: We tend to most easily recall what is salient, important, frequent, and recent. The brain has its own energy-saving and inertial tendencies that we have little control over – the availability heuristic is likely one of them. Having a truly comprehensive memory would be debilitating. Some sub-examples of the availability heuristic include the Anchoring and Sunk Cost Tendencies.
Representativeness Heuristic
The three major psychological findings that fall under Representativeness, also defined by Kahneman and his collaborator Amos Tversky, are:
Failure to Account for Base Rates
An unconscious failure to look at past odds in determining current or future behavior.
Tendency to Stereotype
The tendency to broadly generalize and categorize rather than look for specific nuance. Like availability, this is generally a necessary trait for energy-saving in the brain.
Failure to See False Conjunctions
Most famously demonstrated by the Linda Test, the same two psychologists showed that students chose more vividly described individuals as more likely to fit into a predefined category than individuals with broader, more inclusive, but less vivid descriptions, even if the vivid example was a mere subset of the more inclusive set. These specific examples are seen as more representative of the category than those with the broader but vaguer descriptions, in violation of logic and probability.
Social Proof (Safety in Numbers)
Human beings are one of many social species, along with bees, ants, and chimps, among many more. We have a DNA-level instinct to seek safety in numbers and will look for social guidance of our behavior. This instinct creates a cohesive sense of cooperation and culture which would not otherwise be possible, but also leads us to do foolish things if our group is doing them as well.
Narrative Instinct
Human beings have been appropriately called “the storytelling animal” because of our instinct to construct and seek meaning in narrative. It’s likely that long before we developed the ability to write or to create objects, we were telling stories and thinking in stories. Nearly all social organizations, from religious institutions to corporations to nation-states, run on constructions of the narrative instinct.
Curiosity Instinct
We like to call other species curious, but we are the most curious of all, an instinct which led us out of the savanna and led us to learn a great deal about the world around us, using that information to create the world in our collective minds. The curiosity instinct leads to unique human behavior and forms of organization like the scientific enterprise. Even before there were direct incentives to innovate, humans innovated out of curiosity.
Language Instinct
The psychologist Steven Pinker calls our DNA-level instinct to learn grammatically constructed language the Language Instinct. The idea that grammatical language is not a simple cultural artifact was first popularized by the linguist Noam Chomsky. As we saw with the narrative instinct, we use these instincts to create shared stories, as well as to gossip, solve problems, and fight, among other things. Grammatically ordered language theoretically carries infinite varying meaning.
First-Conclusion Bias
As Charlie Munger famously pointed out, the mind works a bit like a sperm and egg: the first idea gets in and then the mind shuts. Like many other tendencies, this is probably an energy-saving device. Our tendency to settle on first conclusions leads us to accept many erroneous results and cease asking questions; it can be countered with some simple and useful mental routines.
Tendency to Overgeneralize from Small Samples
It’s important for human beings to generalize; we need not see every instance to understand the general rule, and this works to our advantage. With generalizing, however, comes a subset of errors when we forget about the Law of Large Numbers and act as if it does not exist. We take a small number of instances and create a general category, even if we have no statistically sound basis for the conclusion.
Relative Satisfaction/Misery Tendencies
The envy tendency is probably the most obvious manifestation of the relative satisfaction tendency, but nearly all studies of human happiness show that it is related to the state of the person relative to either their past or their peers, not absolute. These relative tendencies cause us great misery or happiness in a very wide variety of objectively different situations and make us poor predictors of our own behavior and feelings.
Commitment & Consistency Bias
As psychologists have frequently and famously demonstrated, humans are subject to a bias towards keeping their prior commitments and staying consistent with their prior selves when possible. This trait is necessary for social cohesion: people who often change their conclusions and habits are often distrusted. Yet our bias towards staying consistent can become, as Emerson put it, a foolish consistency that is the “hobgoblin of little minds” – when it is combined with the first-conclusion bias, we end up landing on poor answers and standing pat in the face of great evidence.
Hindsight Bias
Once we know the outcome, it’s nearly impossible to turn back the clock mentally. Our narrative instinct leads us to reason that we knew it all along (whatever “it” is), when in fact we are often simply reasoning post-hoc with information not available to us before the event. The hindsight bias explains why it’s wise to keep a journal of important decisions for an unaltered record and to re-examine our beliefs when we convince ourselves that we knew it all along.
Sensitivity to Fairness
Justice runs deep in our veins. In another illustration of our relative sense of well-being, we are careful arbiters of what is fair. Violations of fairness can be considered grounds for reciprocal action, or at least distrust. Yet fairness itself seems to be a moving target. What is seen as fair and just in one time and place may not be in another. Consider that slavery has been seen as perfectly natural and perfectly unnatural in alternating phases of human existence.
Tendency to Overestimate Consistency of Behavior (Fundamental Attribution Error)
We tend to over-ascribe the behavior of others to their innate traits rather than to situational factors, leading us to overestimate how consistent that behavior will be in the future. Under that assumption, predicting behavior seems straightforward. Of course, in practice this assumption is consistently demonstrated to be wrong, and we are consequently surprised when others do not act in accordance with the “innate” traits we’ve endowed them with.
Influence of Authority
The famous Stanford Prison Experiment and the Milgram experiments demonstrated what humans had long known from practical experience: the human bias towards being influenced by authority. In a dominance hierarchy such as ours, we tend to look to the leader for guidance on behavior, especially in situations of stress or uncertainty. Thus, authority figures have a responsibility to act well, whether they like it or not.
Influence of Stress (Including Breaking Points)
Stress causes both mental and physiological responses and tends to amplify the other biases. Almost all human mental biases become worse in the face of stress as the body goes into a fight-or-flight response, relying purely on instinct without the emergency brake of Daniel Kahneman’s “System 2” type of reasoning. Stress causes hasty decisions, immediacy, and a fallback to habit, thus giving rise to the elite soldiers’ motto: “In the thick of battle, you will not rise to the level of your expectations, but fall to the level of your training.”
Survivorship Bias
A major problem with historiography – our interpretation of the past – is that history is famously written by the victors. We do not see what Nassim Taleb calls the “silent grave” – the lottery ticket holders who did not win. Thus, we over-attribute success to things done by the successful agent rather than to randomness or luck, and we often learn false lessons by exclusively studying victors without seeing all of the accompanying losers who acted in the same way but were not lucky enough to succeed.
Tendency to Want to Do Something (Fight/Flight, Intervention, Demonstration of Value, etc.)
We might term this Boredom Syndrome: most humans have a tendency to act, even when their actions are not needed. We also tend to offer solutions even when we do not have enough knowledge to solve the problem.
Microeconomics & Strategy (14)
Opportunity Costs
Doing one thing means not being able to do another. We live in a world of trade-offs, and the concept of opportunity cost rules all. It is most aptly summarized as “there is no such thing as a free lunch.”
Creative Destruction
Coined by economist Joseph Schumpeter, the term “creative destruction” describes the capitalistic process at work in a functioning free-market system. Motivated by personal incentives (including but not limited to financial profit), entrepreneurs will push to best one another in a never-ending game of creative one-upmanship, in the process destroying old ideas and replacing them with newer technology. Beware getting left behind.
Comparative Advantage
The English economist David Ricardo had an unusual and counterintuitive insight: two individuals, firms, or countries could benefit from trading with one another even if one of them was better at everything. Comparative advantage is best seen as an applied opportunity cost: if it has the opportunity to trade, an entity gives up free gains in productivity by not focusing on what it does best.
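Ricardo's insight can be made concrete with a small worked example. The numbers below are hypothetical (not from the source): producer B holds an absolute advantage in both goods, yet the opportunity costs differ, so both parties gain from specialization and trade.

```python
# Comparative advantage with hypothetical numbers: hours of labor needed
# per unit of output for two producers, A and B. B is absolutely better
# at producing both goods, yet both still gain from trade.
hours = {
    "A": {"cloth": 4, "wine": 8},  # A needs 4h per cloth, 8h per wine
    "B": {"cloth": 1, "wine": 3},  # B is faster at everything
}

def wine_cost_in_cloth(producer):
    """Opportunity cost of one wine, measured in units of cloth forgone."""
    return hours[producer]["wine"] / hours[producer]["cloth"]

print(wine_cost_in_cloth("A"))  # 2.0 cloths forgone per wine
print(wine_cost_in_cloth("B"))  # 3.0 cloths forgone per wine
# A's wine is relatively cheaper (2 < 3), so A should specialize in wine
# and B in cloth; any trade ratio between 2 and 3 cloths per wine leaves
# both parties better off than self-sufficiency.
```

The comparison of opportunity costs, not raw productivity, is what identifies the gain from trade.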
Specialization (Pin Factory)
The Scottish economist Adam Smith highlighted the advantages gained in a free-market system by specialization. Rather than having a group of workers each producing an entire item from start to finish, Smith explained that it’s usually far more productive to have each of them specialize in one aspect of production. He also cautioned, however, that each worker might not enjoy such a life; this is a trade-off of the specialization model.
Seizing the Middle
In chess, the winning strategy is usually to seize control of the middle of the board, so as to maximize the potential moves that can be made and control the movement of the maximal number of pieces. The same strategy works profitably in business, as can be demonstrated by John D. Rockefeller’s control of the refinery business in the early days of the oil trade and Microsoft’s control of the operating system in the early days of the software trade.
Trademarks, Patents, and Copyrights
These three concepts, along with other related ones, protect the creative work produced by enterprising individuals, thus creating additional incentives for creativity and promoting the creative-destruction model of capitalism. Without these protections, creative and knowledge workers would have no defense against their work being freely copied and distributed.
Double-Entry Bookkeeping
One of the marvels of modern capitalism has been the bookkeeping system introduced in Genoa in the 14th century. The double-entry system requires that every entry, such as income, also be entered into another corresponding account. Correct double-entry bookkeeping acts as a check on potential accounting errors and allows for accurate records and thus, more accurate behavior by the owner of a firm.
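The balancing check described above can be sketched in a few lines. This is a minimal illustration with hypothetical accounts and amounts, not a real accounting system: every transaction posts a matched debit and credit, so the ledger always sums to zero, and any nonzero sum signals a recording error.

```python
# Minimal sketch of the double-entry invariant: every transaction posts
# equal debits and credits, so the whole ledger always sums to zero.
from collections import defaultdict

ledger = defaultdict(float)  # account name -> balance (debits positive)

def post(debit_account, credit_account, amount):
    """Record one transaction as a matched debit/credit pair."""
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

post("Cash", "Sales Revenue", 500.0)  # a cash sale
post("Inventory", "Cash", 200.0)      # buying stock with cash

# The books "balance" by construction: total debits equal total credits.
assert abs(sum(ledger.values())) < 1e-9
print(ledger["Cash"])  # 300.0
```

A production system would use exact decimal arithmetic rather than floats, but the structural check, debits always matching credits, is the same.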
Utility (Marginal, Diminishing, Increasing)
The usefulness of additional units of any good tends to vary with scale. Marginal utility allows us to understand the value of one additional unit, and in most practical areas of life, that utility diminishes at some point. On the other hand, in some cases, additional units are subject to a “critical point” where the utility function jumps discretely up or down. As an example, giving water to a thirsty man has diminishing marginal utility with each additional unit, and can eventually kill him with enough units.
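Diminishing marginal utility is easy to see numerically. The square-root utility function below is a standard textbook stand-in for any concave utility curve (an assumption for illustration, not from the source): each additional unit adds less than the one before.

```python
# Diminishing marginal utility sketch: with a concave utility function
# (sqrt here, a common textbook choice), each extra unit adds less
# total utility than the previous one did.
import math

def utility(units):
    return math.sqrt(units)

# Marginal utility of the 1st, 2nd, 3rd, and 4th unit.
marginal = [utility(n) - utility(n - 1) for n in range(1, 5)]
print(marginal)  # each successive value is smaller than the last
assert all(a > b for a, b in zip(marginal, marginal[1:]))
```

The “critical point” case from the text (utility jumping discretely down, as with too much water) would correspond to a utility function with a kink or drop rather than a smooth concave curve.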
Bottlenecks
A bottleneck describes the place at which a flow (of a tangible or intangible) is stopped, thus holding it back from continuous movement. As with a clogged artery or a blocked drain, a bottleneck in production of any good or service can be small but have a disproportionate impact if it is in the critical path.
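The disproportionate impact of a bottleneck follows from a simple rule: a pipeline’s throughput is capped by its slowest stage. The stage names and rates below are hypothetical.

```python
# A pipeline's throughput is capped by its slowest stage, however fast
# the others are. Hypothetical stage capacities, in units per hour.
stages = {"cut": 120, "assemble": 35, "paint": 90, "pack": 200}

throughput = min(stages.values())         # overall output rate
bottleneck = min(stages, key=stages.get)  # the stage that caps it
print(bottleneck, throughput)  # assemble 35
# Doubling any non-bottleneck stage changes nothing; only relieving
# "assemble" raises overall output.
```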
Prisoner’s Dilemma
The Prisoner’s Dilemma is a famous application of game theory in which two prisoners are both better off cooperating with each other, but if one of them cheats, the other is better off cheating. Thus the dilemma. This model shows up in economic life, in war, and in many other areas of practical human life. Though the prisoner’s dilemma theoretically leads to a poor result, in the real world, cooperation is nearly always possible and must be explored.
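The dilemma can be stated precisely with a payoff matrix. The sentence lengths below are the usual textbook values (hypothetical, lower is better): defection dominates for each prisoner individually, yet mutual defection is worse for both than mutual cooperation.

```python
# Classic prisoner's dilemma payoffs: years in prison (lower is better).
# Each entry maps (A's move, B's move) -> (A's years, B's years).
payoff = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

# Whatever B does, A serves less time by defecting (a dominant strategy)...
for b_move in ("cooperate", "defect"):
    assert payoff[("defect", b_move)][0] < payoff[("cooperate", b_move)][0]

# ...yet mutual defection (2, 2) leaves both worse off than mutual
# cooperation (1, 1) -- hence the dilemma.
assert payoff[("defect", "defect")][0] > payoff[("cooperate", "cooperate")][0]
print("defection dominates, but mutual cooperation pays more")
```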
Bribery
Often ignored in mainstream economics, the concept of bribery is central to human systems: given the chance, it is often easier to pay a certain agent to look the other way than to follow the rules. The enforcer of the rules is then neutralized. This principal/agent problem can be seen as a form of arbitrage.
Arbitrage
Given two markets selling an identical good, an arbitrage exists if the good can be bought in one market and sold in the other at a profit. This model is simple on its face, but can present itself in disguised forms: the only gas station in a 50-mile radius is also in an arbitrage-like position, as it can buy gasoline and resell it at its desired profit (temporarily) without competition. Nearly all arbitrage situations eventually disappear as they are discovered and exploited.
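The arithmetic of an arbitrage is just the price spread net of costs. The prices below are hypothetical; the arbitrage persists only while the net spread stays positive, which is why exploitation eventually erases it.

```python
# Arbitrage sketch with hypothetical prices for one identical good:
# buy where it is cheap, sell where it is dear, net of transaction costs.
buy_price = 98.50       # price in the cheap market
sell_price = 101.25     # price in the dear market
cost_per_unit = 0.40    # shipping/fees per unit

profit = sell_price - buy_price - cost_per_unit
print(round(profit, 2))  # 2.35 per unit; arbitrage exists while > 0
```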
Supply and Demand
The basic equation of biological and economic life is one of limited supply of necessary goods and competition for those goods. Just as biological entities compete for limited usable energy, so too do economic entities compete for limited customer wealth and limited demand for their products. The point at which supply and demand for a given good are equal is called an equilibrium; however, in practical life, equilibrium points tend to be dynamic and changing, never static.
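With linear curves, the equilibrium can be computed directly. The demand and supply functions below are hypothetical illustrations: equilibrium is simply the price at which quantity demanded equals quantity supplied.

```python
# Equilibrium of hypothetical linear curves:
#   demand: Qd = 100 - 2p   (buyers want less as price rises)
#   supply: Qs = 10 + 4p    (sellers offer more as price rises)
def demand(p):
    return 100 - 2 * p

def supply(p):
    return 10 + 4 * p

# Solve 100 - 2p = 10 + 4p  ->  6p = 90  ->  p = 15
p_eq = 90 / 6
assert demand(p_eq) == supply(p_eq)
print(p_eq, demand(p_eq))  # 15.0 70.0
```

In real markets the curves themselves shift over time, which is why, as the text notes, the equilibrium point is dynamic rather than static.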
Game Theory
Game theory describes situations of conflict, limited resources, and competition. Given a certain situation and a limited amount of resources and time, what decisions are competitors likely to make, and which should they make? One important note is that traditional game theory may describe humans as more rational than they really are. Game theory is theory, after all.
Military & War (5)
Seeing the Front
One of the most valuable military tactics is the habit of “personally seeing the front” before making decisions – not always relying on advisors, maps, and reports, all of which can be either faulty or biased. The Map/Territory model illustrates the problem with not seeing the front, as does the incentive model. Leaders of any organization can generally benefit from seeing the front, as not only does it provide firsthand information, but it also tends to improve the quality of secondhand information.
Asymmetric Warfare
The asymmetry model leads to an application in warfare whereby one side seemingly “plays by different rules” than the other side due to circumstance. Generally, this model is applied by an insurgency with limited resources. Unable to out-muscle their opponents, asymmetric fighters use other tactics, as with terrorism creating fear that’s disproportionate to their actual destructive ability.
Two-Front War
The Second World War was a good example of a two-front war. Once Germany and the Soviet Union became enemies, Germany was forced to split its troops and send them to separate fronts, weakening its impact on both fronts. In practical life, opening a two-front war can often be a useful tactic, as can solving a two-front war or avoiding one, as in the example of an organization tamping down internal discord to focus on its competitors.
Counterinsurgency
Though asymmetric insurgent warfare can be extremely effective, over time competitors have also developed counterinsurgency strategies. Recently and famously, General David Petraeus of the United States led the development of counterinsurgency plans that involved no additional force but substantial additional gains. Tit-for-tat warfare or competition will often lead to a feedback loop that demands insurgency and counterinsurgency.
Mutually Assured Destruction
Somewhat paradoxically, the stronger two opponents become, the less likely they may be to destroy one another. This process of mutually assured destruction occurs not just in warfare, as with the development of global nuclear warheads, but also in business, as with the avoidance of destructive price wars between competitors. However, in a fat-tailed world, it is also possible that mutually assured destruction scenarios simply make destruction more severe in the event of a mistake (pushing destruction into the “tails” of the distribution).