
Shane Parrish is the creator of Farnam Street, a platform dedicated to practical wisdom, decision-making, and other aspects of personal and professional improvement. Through articles, newsletters, podcasts, and multimedia content, Farnam Street shares ideas drawn from the knowledge and experience of accomplished people in many fields, ideas we can apply to our own development.

Shane and his team have published a series of books on mental models. In the first, “The Great Mental Models: General Thinking Concepts”, they cover general mental models that we can apply broadly to improve our decision-making, our productivity, and the way we see the world.

Our notes are taken from the original edition of the book.

“I don’t want to be a great problem solver. I want to avoid problems”

Peter Bevelin

Preface and introduction

  • Education does not prepare you for the real world
  • Charlie Munger –> mental models: chunks of knowledge from different disciplines that can be simplified and applied to better understand the world; a representation of how something works
  • Discipline of mastering the best of what other people have figured out
  • Remove blind spots –> think better: find simple processes that help us work through problems from multiple dimensions and perspectives, allowing us to better choose solutions
  • Thinking better isn’t about being a genius. It is about the processes we use to uncover reality and the choices we make once we do
  • Removing blind spots means thinking through the problem using different lenses or models
  • Multidisciplinary thinking: ability to shift perspective by applying knowledge from multiple disciplines. Most problems are multidimensional –> having more lenses often offers significant help
  • The chief enemy of good decisions is a lack of sufficient perspectives on a problem
  • Limits of our perception. We must be open to other perspectives
  • Simple ideas are of great value because they can help us prevent complex problems
  • Understanding only becomes useful when we adjust our behavior and actions accordingly –> mental models: actionable insights that can be used to effect positive change in your life
  • You will either understand and adapt to find success or you will fail
  • We rarely reflect on our decisions and the outcomes. Without reflection we cannot learn
  • We optimize for short-term ego protection over long-term happiness. Increasingly, our understanding of things becomes black and white rather than shades of grey
  • Understanding reality not only helps us decide which actions to take but helps us remove or avoid actions that have a big downside. Sometimes making good decisions boils down to avoiding bad ones
  • People see part of the situation, the one that makes sense to them. They don’t see the entire situation unless they are thinking in a multidisciplinary way – they have blind spots. “To the man with only a hammer, everything starts looking like a nail”
  • Three buckets of knowledge: (i) laws of math and physics (inorganic systems); (ii) biology (organic systems); (iii) human history (Peter Kaufman)

Understand the world better –> more success –> more time, less stress –> more meaningful life

The map is not the territory

The description of the thing is not the thing itself. The model is not reality. The abstraction is not the abstracted

  • The only way we can navigate the complexity of reality is through some sort of abstraction. But we don’t remember that our maps and models are abstractions and thus we fail to understand their limits
  • Risks of the territory that are not shown on the map. We do not understand a model, map, or reduction unless we understand and respect its limitations
  • Having a general map, we may assume that if a territory matches the map in a couple of respects it matches the map in all respects
  • Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful
  • Territories change, sometimes faster than the maps and models that describe them –> we should update the maps based on our own experiences in the territory. That’s how good maps are built: feedback loops created by explorers
  • A map captures a territory at a moment in time. Just because it might have done a good job at depicting what was, there is no guarantee that it depicts what is there now or what will be there in the future
  • Maps are not purely objective creations. They reflect the values, standards and limitations of their creators. Models, then, are most useful when we consider them in the context they were created
  • In using maps, abstractions and models, we must always be wise to their limitations. They are reductions of something far more complex

Circle of competence

Know what you understand, know where you are vulnerable. Deep knowledge

  • Operating in your circle of competence you will always have a better understanding of reality
  • Within our circles of competence, we know exactly what we don’t know
  • There is no shortcut to understanding. Building a circle of competence takes years of experience, of making mistakes, and of actively seeking out better methods of practice and thought
  • You can’t operate as if a circle of competence is a static thing, that once attained is attained for life. The world is dynamic. Knowledge gets updated, and so too must your circle
  • Practices to build a circle of competence:
      • Curiosity and a desire to learn. Approach your circle with curiosity, seeking out information that can help you expand and strengthen it. Learning = experience + reflection
      • Monitoring. Honest self-reporting. Keep a journal of your own performance
      • Feedback. Outside perspective
  • Operating successfully outside a circle of competence:
      • Learn at least the basics. Basic information is easy to obtain
      • Talk to someone whose circle of competence in the area is strong. Define questions you need to ask, and what information you need, to make a good decision
  • Warren Buffett –> stick to your area of special competence and be very reluctant to stray; when we stray too far, we get into areas where we don’t even know what we don’t know. We may not even know the questions we need to ask
  • Karl Popper –> falsification (vs. verification): try to show the theory is incorrect, and if you fail to do so, you actually strengthen it
  • Testable predictions / testable hypothesis: the theory has to be stated in a way to allow for experience to refute it
  • Bertrand Russell’s example of the chicken that gets fed every day until it gets its head chopped off

First principles thinking

Separating the underlying ideas or facts from any assumptions based on them

  • Plato, Socrates, Aristotle, Descartes: building knowledge from first principles –> foundational knowledge that would not change, elements that are non-reducible
  • Knowledge can only be built when we are actively trying to falsify it
  • Test our assumptions. Everything that is not a law of nature is just a shared belief
  • Socratic questioning –> reveal underlying assumptions and separate knowledge from ignorance:
      • Ideas: what do I think? why do I think this?
      • Assumptions: how do I know this is true?
      • Evidence: how can I back this up? what are the sources?
      • Alternative perspectives: what might others think?
      • Consequences and implications: what if I am wrong?
      • Original questions: what conclusions can I draw from the reasoning process?
  • The Five Whys –> systematically delving further into a statement or concept so that you can separate reliable knowledge from assumption
  • The way things are might not be the way they have to be. Reasoning from first principles allows us to step outside of history and conventional wisdom and see what is possible
  • Principles vs. methods: when you really understand the principles at work, you can decide if the existing methods make sense
  • The man who grasps principles can successfully select his own methods; the man who tries methods, ignoring principles, is sure to have trouble

Second-order thinking

Thinking further ahead and holistically. Not only consider our actions and their immediate consequences, but also the subsequent effects of those actions

  • “Law of Unintended Consequences”. E.g., cobras in Delhi: the snake problem ended up worse than before the intervention started
  • Any comprehensive thought process considers the effects of the effects
  • We operate in a world of multiple, overlapping connections; if you don’t consider “the effects of the effects,” you can’t really claim to be doing any thinking at all
  • High degrees of connections make second-order thinking all the more critical, because denser webs of relationships make it easier for actions to have far-reaching consequences
  • Second-order thinking involves asking ourselves if what we are doing now is going to get us the results we want
  • Going for the immediate payoff in our interactions with people, unless they are a win-win, almost always guarantees that interaction will be a one-off
  • By delaying gratification now, you will save time in the future: the short term is less spectacular, but the payoffs for the long term can be enormous
  • Arguments are more effective when we demonstrate that we have considered the second-order effects and put effort into verifying that these are desirable as well
  • When making choices, considering consequences can help us avoid future problems. “And then what?” A little time spent thinking ahead can save us massive amounts of time later

Probabilistic thinking

Estimate the likelihood of any specific outcome. Understanding the likelihood of events that could impact us

  • Thomas Bayes –> Bayesian thinking: take into account what we already know when we learn something new. What might I already know that I can use to better understand the reality of the situation?
  • Conditional probability: observe the conditions preceding an event you’d like to understand
  • Fat-tailed curves. Position ourselves to survive or even benefit from the wildly unpredictable future –> antifragility (Nassim Taleb)
  • For the rare and impactful events in our world, predicting is impossible! It’s more efficient to prepare. Try to create scenarios where randomness and uncertainty are your friends, not your enemies
  • Improve your odds of encountering opportunity
  • Learn how to fail properly: (i) never take a risk that will do you in completely – never get taken out of the game completely; (ii) learn from failures and start again. Those who are not afraid to fail (properly) have a huge advantage over the rest; what they learn makes them less vulnerable to the volatility
  • Identifying what matters, coming up with a sense of the odds, doing a check on our assumptions, and then making a decision
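The Bayesian-thinking bullet above can be made concrete with a worked example. This is a minimal sketch, not from the book; the base rate, sensitivity, and false-positive rate are hypothetical numbers chosen for illustration:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
def posterior(prior, likelihood, false_positive_rate):
    """Update a prior belief after observing positive evidence."""
    # P(E) = P(E|H)P(H) + P(E|not H)P(not H)
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a 1% base rate, a test that is 90% sensitive
# with a 5% false-positive rate.
p = posterior(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(p, 3))  # 0.154 -- far below the 90% many people intuit
```

Taking what we already know (the 1% base rate) into account is exactly the “What might I already know?” question: even a positive result from a fairly accurate test leaves the probability low when the prior is low.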

Inversion

Approaching a situation from the opposite end of the natural starting point. Flip the problem around and think backward

  • Starting with the endpoint
  • Assume what you’re trying to prove is either true or false – what else would have to be true? –> Sherlock Holmes, A Scandal in Bohemia
  • Avoiding stupidity is easier than seeking brilliance
  • Think about what you want to avoid and then see what options are left over
  • Inversion process:
      • Identify the problem
      • Define your objective
      • Identify the forces that support change towards your objective
      • Identify the forces that impede change towards the objective
      • Strategize a solution –> combine steps 3 & 4
  • It can be just as powerful to remove obstacles to change
  • “He wins his battles by making no mistakes” (Sun Tzu)
  • Move from “how do we fix this problem” to “how do we stop it from happening in the first place”

Occam’s razor

Simpler explanations are more likely to be true than complicated ones

  • We often spend lots of time coming up with very complicated narratives to explain what we see around us
  • Avoiding unnecessary complexity by identifying and committing to the simplest explanation possible
  • “When you hear hoofbeats, think horses, not zebras”

Hanlon’s razor

We should not attribute to malice what is more easily explained by stupidity

  • When we see something we don’t like happen and which seems wrong, we assume it’s intentional. But it’s more likely that it’s completely unintentional
  • People make mistakes. Don’t generally assume that bad results are the fault of a bad actor
  • “I need to listen well so that I hear what is not said”
  • The explanation most likely to be right is the one that contains the least amount of intent and energy to execute
  • Failing to prioritize stupidity over malice causes things like paranoia. For every act of malice, there is almost certainly far more ignorance, stupidity, and laziness
  • “Devil theory”: attribute conditions to villainy that simply result from stupidity
  • All humans make mistakes and fall into traps of laziness, bad thinking and bad incentives

The quality of our thinking is largely influenced by the mental models in our head

Supporting ideas

Necessity vs. sufficiency

  • The gap between what is necessary to succeed and what is sufficient is often luck, chance, or some other factor beyond your direct control [cf. Nassim Taleb, Fooled by Randomness]
  • Billionaire success takes all of those things (hard work, intelligence, capital, etc.) and more, plus a lot of luck
  • Without them you definitely won’t be successful, but on their own they are not sufficient for success

Causation vs. correlation

  • Confusion between causation and correlation often leads to a lot of inaccurate assumptions
  • Trying to invert the relationship can help you sort through claims to determine if you are dealing with true causation or just correlation
  • We often mistakenly attribute a specific policy or treatment as the cause of an effect, when the change in the extreme groups would have happened anyway
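The “extreme groups” point is regression to the mean. A minimal simulation (synthetic scores, no real data, no treatment applied) shows why the worst performers on a first measurement look “improved” on a second one even when nothing was done:

```python
import random

random.seed(42)

# Each observed score = stable ability + transient noise.
ability = [random.gauss(0, 1) for _ in range(10_000)]
day1 = [a + random.gauss(0, 1) for a in ability]
day2 = [a + random.gauss(0, 1) for a in ability]

# Select the bottom 10% on day 1 and re-measure them on day 2,
# with NO intervention in between.
cutoff = sorted(day1)[len(day1) // 10]
worst = [i for i in range(len(day1)) if day1[i] <= cutoff]

mean1 = sum(day1[i] for i in worst) / len(worst)
mean2 = sum(day2[i] for i in worst) / len(worst)
print(f"day 1: {mean1:.2f}, day 2: {mean2:.2f}")  # day 2 is markedly closer to 0
```

Because part of an extreme score is noise, the extreme group drifts back toward the average on re-measurement; attributing that drift to a policy or treatment is the causation/correlation mistake described above.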

Based on Farnam Street, Latticework Publishing (2019).