Risk Savvy: How to Make Good Decisions
by Gerd Gigerenzer (2014)
This blog introduces defensive decision-making and takes a look at a book that should be on everyone’s reading list. It presents a critical examination of our shared self-serving habits in decision-making: our propensity to do what comes naturally to us all – be selfish – and, in the name of a common good, ultimately cause wider problems. The blog ends with a question of how deeply embedded this concept may dwell.
Regardless of whether project, risk, or people management sits within the remit of your roles in life, we are all making daily decisions. As agents of time-bound intended change, I would argue our decisions are tightly connected within the bounds of projects, risk, and people. Projects | within projects.
Gerd Gigerenzer is a Professor of Psychology: formerly at the University of Chicago; former Director (and now Director Emeritus) of the Max Planck Institute for Human Development; and founder of Simply Rational – The Decision Institute, a name that echoes his 2015 book “Simply Rational: Decision Making in the Real World”.
Gerd Gigerenzer, if Wikipedia were your guide, is labelled a critical opponent of the Daniel Kahneman and Amos Tversky world of decision bias. To my mind that is a little too polarising: I have found plenty of room to apply the work of both. I am, however, minded to make more of this comparison at a future moment of blogging research interest.
Several key concepts within Risk Savvy are introduced in this blog. I recommend this book for its psychological intrigue, just as enthusiastically as the Professor of Project Management who first recommended it to me. All page references hereunder are from Gigerenzer (2014).
What is it to be “risk savvy”?
Gigerenzer presents the term “risk savvy” to mean our ability to actively apply risk literacy, coupled with a wider skill of bridging the inevitable gap between knowledge and the unknown – an unknown that is inevitable and therefore incalculable (p. 3). He contends that as a society we lack this literacy, and instead use flawed logic and language in an erroneous attempt to overcome the unknown.
…as a percentage of what?
Gigerenzer tells us that when we are told there is a percentage chance of an event, and the reference class is not explicitly offered, we will each supply our own. He offers a weather forecast example: “tomorrow there is a 30% chance of rain”. To some this will mean that 30% of the region in question will have rain; to others, that 30% of the day will be rain-affected (and how we each define rain may vary). Others still may read the percentage as a confidence level in whether it will rain at all, e.g. three forecasters have said it will, seven have said it will not.
To counter this reference class error, he advocates always asking for clarification of the reference class being framed, i.e. “as a percentage of what?” (p. 7). He also distinguishes “absolute” from “relative” comparisons of change from one state to another, with healthcare being particularly guilty in this regard. Consider the emotive response to being told the chance of side effects from a new drug is 100% greater than before, versus being told that where 1 in 10,000 people used to report side effects, 1 in 5,000 now do.
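The arithmetic behind that framing gap can be made explicit. This is a minimal sketch (not from the book) using the same illustrative figures of 1 in 10,000 rising to 1 in 5,000:

```python
# Relative vs absolute framing of the same change in side-effect risk.
# Figures follow the example above: 1 in 10,000 rising to 1 in 5,000.

before = 1 / 10_000   # old side-effect rate
after = 1 / 5_000     # new side-effect rate

relative_increase = (after - before) / before   # the headline-friendly framing
absolute_increase = after - before              # the change in actual frequency

print(f"Relative increase: {relative_increase:.0%}")   # 100%
print(f"Absolute increase: {absolute_increase:.4%}")   # 0.0100%, i.e. one extra person per 10,000
```

The same change is “100% greater” in relative terms but only one additional affected person per 10,000 in absolute terms, which is exactly why “as a percentage of what?” matters.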
A helpful rule, then: ask “as a percentage of what?”. Gigerenzer offers many pithy questions to pose throughout the book. These become tools in the decision-maker’s toolbox of heuristics, or the “adaptive toolbox” (pp. 115–117).

A contemporary example from our Covid19 era
I offer another healthcare example (click here). In it, a risk of viral infection is presented as a percentage but with no explanation of the reference class – no answer to “as a percentage of what?”. Our most contemporary scientific papers and government advice are shown presenting percentages without clarity as to what those percentages refer.
The fallacy of the plan
Gigerenzer offers us a joke. On page 18, data-driven certainty is presented as an illusion sold by readers of tarot cards disguised as algorithms. On page 20 he recites what he sources as an old Yiddish joke: “Do you know how to make God laugh? Tell him your plans.” There are comparisons I could make here between the High Reliability Organisation, focused upon training and an informed, adaptive, and empowered workforce, and the more typically hierarchical, business-continuity-planning approach to major event planning.
Instead, Gigerenzer spends thirty example-rich pages showing how decision-making by experienced people will out-perform decisions supported by the ill-defined parameters of detailed calculations. Rule-of-thumb intuitions (page 29) are what his adaptive toolbox later becomes the store for (page 115). The turkey illusion – being ever more certain of safety the longer all is well (page 39) – becomes the metaphorical explanation for why Value at Risk (VaR) turns fallacious in the face of events more significant than those the system within which it operates has defined.

Here is a selection of other helpful rule-of-thumb tools from pp. 116–121:
- “hire well, let them do their job”
- “decentralised operation and strategy”
- “promote from within”
- “Listen, then speak”
- “nothing else matters without honesty and trustworthiness.”
- “Encourage risks, empower decisions and ownership”
- “Innovate to succeed”
- “Judge the people not just the plan”
- “mirror pecking orders to sell based on past sales”
- “it’s never revenge”
- “the more precise, the less transferable the rule”
- “Less is more”
Luck and guess work
He brings our attention to Gestalt psychology, which keeps reformulating a problem until the solution becomes more easily found. From here he proceeds to the necessary guesswork and illusory clarity we use from a young age to short-cut, or simply make possible, the learning of language: not by word-for-word memory but by rules we learn via mistakes, slowly bettering our application in everyday use. He presents our innate ability to make guesses in other areas too. This section points out (page 49) that without error we have no learning; furthermore, without the possibility of risk bringing unexpected cross-overs, there is no serendipitous discovery.
Defensive Decision Making
These examples are the early groundwork for the concept of the defensive decision-maker.
If it’s life or death, make sure it includes your own
He presents the comparable cases of doctors and pilots: two similarly professional, skilled, and high-pressure jobs, and their respective interest in safety checks, a lessons-learnt culture, and scrutiny of cost-driven change. Various examples demonstrate the priority, insistence, and resistance to compromise attached to controls and procedures in the pre-action and post-action stages. His point being that, regardless of what we may think it is to be professional, decisions become more personal and effort more willingly expended when it is your welfare at risk too.
On page 50 we are introduced to blame culture and the premise that where no errors are flagged, no learning or early correction is possible. This is exemplified by the contrasting enthusiasm of pilots and doctors for checklists, and it becomes a question of motivation born out of self-interest. By page 55 this has been expanded into a wider set of defensive decision-making principles which I think we all know to be true from our own experiences and those we witness: “we need more data”; “don’t decide and so don’t get blamed”; or “recognition heuristics”, whereby choosing the bigger name is easier to defend even if it is the lesser choice. The point is that all of these self-serving decisions become the means to evade accountability. In leadership I think this is everywhere, and in the context of blame, we are all at fault every time we ignore the challenges faced and just demand the head of whoever was last to duck.
I have much to introduce on this concept. In Gigerenzer, the psychological reflections upon how this is inherently wound into risk, and into the self-serving behaviour of which we all find ourselves guilty, seem to me a powerful reflection of every headline in the news. That includes the motivations of those headline-chasing interests themselves, and every blame-transferring opportunity we each read them in the hope of finding.
How deep, or how low, can we go?
My questions are many. But one I am pondering right now is whether this comes a little closer to a universally applicable source of our failings as whole societies. In the project language I am attempting to introduce, it reflects our interfaces, our lack of being mode, the distance we try to create between ourselves and necessary action, and the separated motivations we then each stand behind. Every time we let our singular interest in visibility | behaviour | trust defend our own needs at the expense of others, we create a project of self-interest, with its own reasons to justify a truth. This project of self-interest sits primary and prior to any others to which we may subscribe. The more projects | within projects we permit by the self-serving interests of our controls, the more defensive decision-making we permit to stand.
visibility | behaviour | trust
To my way of thinking, this is precisely why we have no trust in each other; why visibility becomes centred upon ourselves; why it becomes our justification for behaving badly towards others. We divide ourselves by the singular interests of our individual projects. We selfishly allow controls to exist that support the same. We elect leaders who advocate more of the same, or we ignore them completely and just do as we please.
Perhaps the following contemporary examples can be related to this propensity to make defensively minded decisions, or to blame those who do when we would do the same: the current queues for petrol; the positions we take on whether wealth or health should be Covid19’s first response; the blame we put upon impotent government; the despair at a headline-chasing press; the divides in our society and across borders; the self-serving politics and back-biting distractions; the executive bonuses that go unchecked or the trade union disruptions on spurious grounds of safety; the constant erosion of interest in our schools, our hospitals, and our distant kin; the loss of interest by those who can afford it, and collective despair by those who cannot.
We are all defensive decision-making machines, and we are all playing the zero-sum game. As I return to university with psychology at my fingertips, I am wondering how deep this may go. Are we each even fooling ourselves, with defensive decision-making within that goes largely unseen?
About Me
In psychology we are required to look beneath the mask. This blog series is attempting to unmask some hidden parts of projects to engender a more collaborative way.