Uncertain relations

My visibility | behaviour | trust in you

My thanks to Chris Bragg for a line of questioning that prompted this prose; to Jason Hier for promoting the dialogue from which I repeat my part here; and to Dinah Turner, who generated the original visual prompt. All of which started {here}, in a discussion on LinkedIn. My thanks also to Bill Sherman: another LinkedIn thread {here}, one with a challenge I accepted yesterday, with my answer at the end of this blog.

Visibility of what? v | b | t in context

Visibility, for my project purposes i.e., Visibility | b | t, addresses transparency between parties. It is directed towards what is known and what remains uncertain. In our projects, how much visibility are we sharing from one party (project actor) to the next?

This is how visibility relates to behaviour i.e., v | behaviour | t, as transparency offered by one party to the next. This transparency reveals or hides certain behaviour: our intentions, motivations, or actions. Derived perhaps from something as simple as our hubris or belief that we are surely right, or from something more self-interested or malevolent. From these two variables we can ask whether we are affording the right level of trust i.e., v | b | trust, to the exchange. Assessing all three presents an indication of collaborative nature, as it relates to all parties supporting the intended change: project truth.

How are we safeguarding a project from what we do not know?

This sketch, from Dinah Turner, prompted the wider discussion I refer to above. If the dot was the minimal amount of necessary information, and Jason Hier teased us by asking what if it was as little as 1%, then let’s respond with the question: as a percentage of what? We need more awareness of the reality that some things are not knowable, and our processes need the adaptability to manage these later realisations.

Image used with permission from Dinah Turner

As a graphic reflecting our limited availability of information, what it prompted here was a discussion around making best use of the little information we have. From my perspective (as related to project knowledge), the diagram also presented a third area of interest: (1) the spot of what is known; (2) the assumed everything there is; (3) a challenge to the assumption that we can ever bound everything there is to know: the space beyond the circle. This is what Gigerenzer (2014) would reflect upon when comparing risk vs. uncertainty. It is the difference between working within a closed system vs. one that interrelates to more. Or Engwall’s “no project is an island”, which releases us from closed-system thinking in any project situation. Combining these two principles, we always have some uncertainty. I suggest the circle in the above graphic houses “the question we asked”, but outside the circle is “the question we wish we knew to pose”. From here we can hope to critically appraise the manner of any decisions being made, for what purpose, and from an information perspective we can ask “based upon what?“.

Being able to seek clarity on what the 1% represents enables better questions. Anyone who knows me will know that my most likely answer to a question is another question. This is because a question directs our attention to a set of assumptions and constraints. Are these parameters intended to facilitate an open dialogue, or are they intended to funnel and dissuade a wider perspective? Does this reflect the behaviour of the person posing the question, and do we trust them to have this right?

It is at these earliest of moments, in defining both problem and constraints, that we can begin to come unstuck. It is why we should all first be challenging the question, to see what visibility, behaviour, and trust is represented. See other blogs on these areas individually, including for example sensemaking and wider problem-solving perspectives.

Projects as time bound intended change

This is a dynamic position, and therefore change. In the modelling idea I have in mind, this is where my attempt to define everything by a project definition comes into play: as time-bound intended change. Any change, even one of enquiry, can be captured by this project definition.

Projects within projects

This also challenges us to consider whether our collaborative practices are ever actually aimed at the same project, or whether two project actors are working on their own projects and attempting to steer results towards their own intended outcome, even if that is at the expense of the other. I have in mind here the game theory models that represent zero-sum outcomes (winner:loser), or those where lesser outcomes emerge because of failures to cooperate (see the prisoner’s dilemma, or the tragedy of the commons, as examples).
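As a minimal illustration of that second case, here is a hedged sketch in Python of a one-shot prisoner’s dilemma between two project actors. The payoff values and the "cooperate"/"defect" labels are entirely illustrative and not drawn from any project data; the point is only that mutual defection leaves both actors worse off than mutual cooperation.

```python
# A minimal prisoner's dilemma sketch between two project actors.
# Payoff values are illustrative only; higher numbers mean a better outcome.
PAYOFFS = {
    # (actor_a_choice, actor_b_choice): (payoff_a, payoff_b)
    ("cooperate", "cooperate"): (3, 3),  # shared project, shared gain
    ("cooperate", "defect"):    (0, 5),  # A exposed, B exploits
    ("defect",    "cooperate"): (5, 0),  # A exploits, B exposed
    ("defect",    "defect"):    (1, 1),  # each guards their own project, both lose out
}

def outcome(choice_a: str, choice_b: str) -> tuple[int, int]:
    """Return the (A, B) payoffs for one exchange between the two actors."""
    return PAYOFFS[(choice_a, choice_b)]

if __name__ == "__main__":
    for a in ("cooperate", "defect"):
        for b in ("cooperate", "defect"):
            print(f"A {a:9s} | B {b:9s} -> {outcome(a, b)}")
    # Defection is each actor's individually 'safe' choice, yet (1, 1) < (3, 3):
    # the lesser outcome that emerges from a failure to cooperate.
```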

Other factors can then be introduced:

  • direction of influence: interests directed into the one project, or away from it and directed instead toward the party with the most momentary influence.
  • manner of project control: supporting the retained inward influence of both the one project aim and the protection of all project actors.

These are factors that relate to potential outcomes: the one shared outcome, if we are claiming to be in the one same project. Each factor (visibility, behaviour, trust, influence, and control) is itself an aggregation of contributing factors. So, if a question is asked with a hidden or misguided agenda in mind, the project of enquiry is immediately more likely to fail. Failure because it fails at least one participant, and probably the project overall. Or, if the intent was misdirection, there was never a single project with the two parties in mind.

At a bigger scale, this is why inevitable uncertainty erodes the collaborative endeavour if it is simply defined and offloaded in contract. That is not project outcome control; it is more simply a financial risk transfer with an increased likelihood of dispute. Arguably a later revelation that project truth never existed: only the roughly aligned interests of two separate projects, each party interested in and influencing its own outcome, and operating with suboptimal visibility, behaviour, and trust.

I would argue this is the default position in construction, as one example where a hidden agenda is almost always assumed, even if not shown. Low visibility, as data is filtered across commercial boundaries. Malevolent behaviours. No trust. Contracts attempting to replace trust, but therein failing to regain control.

If this observation is accepted, then it offers a rough guide to the likelihood of project success. If we see a project with inadequate control of its truth (the totality of visibility, behaviour, and trust), it is a riskier project than it needs to be. It represents a project at risk of unseen influences, permitting malevolent interests and abuses of the empowerment bestowed. Therein lies the prospect of increased potential for dispute, plus missed opportunity to intervene.

Are we one project? My ongoing hypothetical

I am yet to be convinced we can ever truly be one project. It is why this entire blogsite is called Projects | Within Projects. But I do think we can seek to ensure our own projects are more closely aligned. As well as all the other project assessments we undertake, I am suggesting this v | b | t assessment of the many influences, directed at the appropriateness of the controls containing them, can be one of those higher-level quick indications of the human-made threats to success.

This affords a simple question: “why say yes to this project?“. Why, as a potential project actor, agree to enter this enterprise if the divergent interests are not a central focus of control? Why insure it? Why invest in it? Why be party to it? Why approve it? If one can heuristically identify this increased chance of failure, the questions you ask can all be directed this way.

This is visibility | behaviour | trust as a rule of thumb. A heuristic tool, directed at the overall collaborative interest at a project’s core. A work in progress. One that keeps me returning to first principles, new discourse, and regular revisits to this hypothesis as I go.
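Purely as a hypothetical illustration of that rule of thumb, and not a defined method from this blog, one could imagine recording such an assessment as a rough scoring exercise that flags a project for closer scrutiny when any of the three factors looks weak. The 1–5 scale, the threshold, and all field names below are invented for the sketch.

```python
# Hypothetical sketch only: one way a v | b | t rule of thumb might be recorded.
# The 1-5 scale, threshold, and field names are all invented for illustration.
from dataclasses import dataclass

@dataclass
class VbtAssessment:
    visibility: int  # 1 (opaque) to 5 (fully transparent)
    behaviour: int   # 1 (self-interested) to 5 (aligned to the one project)
    trust: int       # 1 (none) to 5 (well placed and evidenced)

    def flags(self, threshold: int = 3) -> list[str]:
        """Return the factors that fall below the chosen threshold."""
        scores = {"visibility": self.visibility,
                  "behaviour": self.behaviour,
                  "trust": self.trust}
        return [name for name, score in scores.items() if score < threshold]

if __name__ == "__main__":
    assessment = VbtAssessment(visibility=2, behaviour=4, trust=2)
    weak = assessment.flags()
    if weak:
        print("Question the project before saying yes; weak factors:", weak)
    else:
        print("No obvious human-made threat flagged by this rough check.")
```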

Agreed or not agreed – is that my question?

And finally…to the challenge I am responding to.

Can you distil your best ideas down into a simple question?

asks Bill Sherman via LinkedIn

Bill Sherman is a writer on thought leadership and taking ideas to scale. His post yesterday was quite different. He compared NASA mission statements to those we each set ourselves, offering contemporary examples of the questions NASA set to define their missions. Each a single question: pithy, and capturing the imagination of any five-year-old, or anyone older still living with a sense of wonder. His challenge, which I accepted, is to set my idea into a single question. What is the essence of what this big idea is trying to do?

Bill advises us to be guided by the following:

What’s your big idea that you’re pursuing?

How are you staying connected to your sense of wonder?

Are you able to explain that wonder to others?

Here’s a quick way to check:

1. Write your big idea in one sentence that evokes joy/wonder.
2. Then, test it out. Ask people what they think.
3. Keep going until people say “wow.”

I will confess to writing this entire blog with this question in mind. So here goes, attempt number one.

Can our modelling of projects be linked, to better guide all scales of intended change?

Version one

Can success or failure be gauged by a simple assessment of external influences and resulting appropriateness of project controls?

Version two

Projects are jeopardised if rogue influence gains control: can we avoid the invitations to fail?

Version three

Can we be risk savvy and reference class forecast cost?

Another reflection of what it is to be Risk Savvy, in the context of RCF

This blog is a first look at the psychological aspects of Reference Class Forecasting and how it relates to Project Management. I link this blog to several papers and contemporary academic debates that sit at the centre of the direction in which project management betterment is being steered. These initial source flags simply highlight the contemporary nature of a debate which in some quarters may be represented as definitive truth.

This is prompted by a line in Gerd Gigerenzer’s 2014 book Risk Savvy, and a passing comment I am yet to source properly: the suggestion that his perspectives differ significantly from those of Daniel Kahneman and Amos Tversky. Given the central role Kahneman and Tversky play in the papers introducing Reference Class Forecasting to Project Management, these two perspectives may better guide my own research into whether one can inform the other or must necessarily dispute it.

Project Management and reference class forecasting – RCF

Whilst explaining some rudimentary mistakes in representing risk, Gigerenzer states the following, “left on their own people intuitively fill in a reference class to make sense for them” (pp3).

From a Project Management perspective, the contemporary discussion on cost estimating is often framed around the concept of “reference class forecasting”. The Infrastructure and Projects Authority (IPA) advocates this approach {click here and refer to slide 28}. Oxford Said have supported RCF and developed it into a meaningful betterment of government estimates of project cost; examples here include projects in Scotland and Hong Kong. RCF also has 21st-century and mainstream backing in psychology.
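For readers new to the idea, here is a minimal, purely illustrative sketch of the mechanics: instead of trusting a single “inside view” estimate, the estimate is placed within the distribution of outturn-to-estimate ratios observed on comparable past projects, and an uplift is read off at a chosen confidence level. The reference-class figures, the function name, and the nearest-rank percentile method below are all assumptions made for the sketch, not the published RCF guidance.

```python
# Purely illustrative sketch of reference class forecasting (RCF).
# The reference-class ratios below are invented for illustration only.
import statistics

def rcf_uplift(base_estimate: float, overrun_ratios: list[float], percentile: float) -> float:
    """Return an adjusted estimate at the given percentile of the reference class.

    overrun_ratios: outturn cost / original estimate for past comparable projects.
    percentile: e.g. 0.8 means roughly 80% confidence of not being exceeded.
    """
    ordered = sorted(overrun_ratios)
    # Simple empirical percentile (nearest-rank); real RCF guidance fits a distribution.
    index = min(len(ordered) - 1, max(0, round(percentile * (len(ordered) - 1))))
    return base_estimate * ordered[index]

if __name__ == "__main__":
    past_ratios = [0.95, 1.05, 1.10, 1.20, 1.25, 1.40, 1.60, 1.90]  # invented data
    estimate = 100.0  # base cost estimate, any currency unit
    print("P50 estimate:", rcf_uplift(estimate, past_ratios, 0.5))
    print("P80 estimate:", rcf_uplift(estimate, past_ratios, 0.8))
    print("Median ratio of reference class:", statistics.median(past_ratios))
```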

However, government advice has not been ubiquitous in its support. Note the reference here to a paper presented to a House of Commons select committee enquiry in 2019, sourced from the open records of an equivalent representative body in Newfoundland, Canada during the recent Muskrat Falls enquiry.

I remain undecided either way. I have had the privilege of attending several lectures by the Oxford Said Business School, one of which specifically outlined how RCF is being applied. The Gigerenzer perspective, and the RCF counter-narratives flagged here, present reason to keep asking what it is that drives our decisions. Is RCF sufficiently robust to enable defensive decision-making to be countered? Or are these two accounts compatible, particularly if they reflect separate sets of variables and influences beyond optimism bias?

In this regard I see Gigerenzer presenting different dynamics to those of Daniel Kahneman and Amos Tversky, and to the entire set of risks I believe RCF is intended to address. Both may therefore be correct, but neither complete. I will perhaps understand this better once a more complete review of the literature is undertaken.

v | b | t

Per my last blog, it is the Gigerenzer case that seems more compatible with what I am leading with as a possible root cause. I am of the view that many of our project failings do not result directly from the estimates of cost, but more from the divided motivations of employer and contractor that thereafter emerge. The human behaviour element is the unaccounted-for reality of colloquial decision-making motivations. This is my reason to think the Gigerenzer view at least as valid as the estimating bias being countered by RCF.


Defensive decision-making

Risk Savvy: how to make good decisions

by Gerd Gigerenzer (2014)

This blog introduces defensive decision-making and takes a look at a book that should be on everyone’s reading list. It presents a critical examination of our shared self-serving habits in decision-making: our shared propensity to do what comes naturally to us all, to be selfish, and ultimately to be the cause of wider problems in the name of a common good. The blog ends with a question of how deeply embedded this concept may be.

Regardless of whether project, risk, or people management sits within the remit of your roles in life, we are all making daily decisions. As agents of time-bound intended change I would argue our decisions are tightly connected within the bounds of projects, risk, and people. Projects | within projects.

Gerd Gigerenzer is a Professor of Psychology: formerly at the University of Chicago; formerly Director (and now Emeritus Professor) of the Max Planck Institute for Human Development; and founder of the Simply Rational decision institute, a name that corresponds to his 2015 book “Simply Rational: Decision Making in the Real World”.

Gerd Gigerenzer, if Wikipedia were to be your guide, is labelled as a critical opponent of the Daniel Kahneman and Amos Tversky world of decision bias. To my mind that is a little too polarising. I have found plenty of room to apply the work of both. I am however also minded to make more of this comparison at a future moment of blogging research interest.

Several key concepts within Risk Savvy are introduced in this blog. I recommend this book for its psychological intrigue, just as enthusiastically as the Professor of Project Management who first recommended it to me. All page references hereunder are from Gigerenzer (2014).

What is it to be “risk savvy”?

Gigerenzer presents the term “risk savvy” to mean our ability to actively apply risk literacy coupled with a wider skill to bridge the inevitable gap between knowledge and the unknown. An inevitable unknown, and therefore incalculable (pp3). He contends that as a society we lack this literacy, and use a flawed logic and language to erroneously overcome the unknown.

…as a percentage of what?

Gigerenzer tells us that when we are told there is a percentage chance of an event, we will each artificially add the subject matter to which this event refers, when it is not explicitly offered. He offers a weather forecast example: “tomorrow there is a 30% chance of rain”. He argues that to some this will mean 30% of the region in question will have rain; to others, that 30% of the day will be rain affected. How we define what counts as rain may also vary. Others may consider the percentage a confidence level in the certainty that it will or will not rain, e.g. three forecasters have said it will, seven have said it will not.

To counter the reference class error, he advocates always asking for clarification of the reference class being framed i.e., “as a percentage of what?” (pp7). He distinguishes “absolute” from “relative” comparisons, in the context of change from one state to another, healthcare being particularly guilty in this regard. By example, the emotive response to being told that the chance of side effects from a new drug is 100% greater than before, versus being told that 1 in 10,000 has become 1 in 5,000 people reported to have side effects.
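A quick worked check of that example (a hedged sketch in Python, using only the side-effect figures quoted above) shows why the two framings feel so different:

```python
# Relative vs absolute risk, using the side-effect figures quoted above.
before = 1 / 10_000   # baseline chance of side effects
after = 1 / 5_000     # new chance of side effects

relative_increase = (after - before) / before   # 1.0 -> "100% greater than before"
absolute_increase = after - before              # 0.0001 -> one extra case per 10,000 people

print(f"Relative increase: {relative_increase:.0%}")                       # 100%
print(f"Absolute increase: {absolute_increase:.4%}")                       # 0.0100%
print(f"Extra cases per 10,000 people: {absolute_increase * 10_000:.0f}")  # 1
```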

A helpful rule: ask “as a percentage of what?”. Gigerenzer offers many pithy questions to pose throughout the book. These become tools in the decision-maker’s toolbox of heuristics, or the “adaptive toolbox” (pp115-117).

🧰
Adaptive tool box

A contemporary example from our Covid19 era

I offer another healthcare example (click here). In this example a risk of viral infection is presented as a percentage, but with no explanation as to reference class: “as a percentage of what?”. Our most contemporary science papers and government advice are shown to be presenting percentages without clarity as to what these percentages refer to.

The fallacy of the plan

Gigerenzer offers us a joke. On page 18, data-driven certainty is presented as an illusion sold by readers of tarot cards disguised as algorithms. It is on page 20 that he recites what he sources as an old Yiddish joke: “do you know how to make God laugh? Tell him your plans”. There are comparisons I could make here between the High Reliability Organisation, focused upon training and an informed, adaptive, and empowered workforce, and the more typically hierarchical, business-continuity-planning approach to major event planning.

Instead, Gigerenzer spends thirty example-rich pages presenting how decision-making by experienced people will out-perform decisions supported by the ill-defined parameters of detailed calculations. Rule-of-thumb intuitions (page 29), for which his adaptive toolbox later becomes the store (page 115). The turkey illusion, of becoming more certain of safety the longer all is well (page 39), becomes the metaphorical explanation for why Value at Risk (VaR) turns fallacious in the face of events more significant than the system within which it operates has defined.

🧰

Here is a selection of other helpful rule-of-thumb tools from pp116-121:
  •  “hire well, let them do their job”
  • “decentralised operation and strategy”
  • “promote from within”
  • “Listen, then speak”
  • “nothing else matters without honesty and trustworthiness.”
  • “Encourage risks, empower decisions and ownership”
  • “Innovate to succeed”
  • “Judge the people not just the plan”
  • “mirror pecking orders to sell based on past sales”
  • “it’s never revenge”
  • “the more precise, the less transferable the rule”
  • “Less is more”

Luck and guess work

He brings our attention to Gestalt psychology, which reformulates problems until the solution becomes more easily found. This proceeds to the necessary guesswork and illusory clarity we use from a young age to short-cut, or simply make possible, the learning of language: not by word-by-word memory, but by rules we learn via mistakes, slowly bettering our application in everyday use. He presents our innate ability to make guesses in other areas too. This section points out (page 49) that without error we have no learning. Furthermore, without the possibility of risk bringing unexpected cross-overs there is no serendipitous discovery.

Defensive Decision Making

These examples are the introductory remarks that lead to the concept of the defensive decision-maker.

if it’s life or death, make sure it includes your own

He presents the comparable cases of doctors and pilots, and their interest in safety checks, a lessons-learnt culture, and scrutiny of change driven by cost, in two similarly professional, skilled, and high-pressure jobs. Various examples demonstrate the priority, the insistence, and the resistance to compromise, toward controls and procedures in the pre-action and post-action stages. His point is that regardless of what we may think it is to be professional, decisions become more personal and effort is more willingly expended when it is your welfare at risk too.

On page 50 we are introduced to blame culture and the premise that with no errors flagged, no learning or early correction is possible. This is exemplified by the typically differing enthusiasm of pilots versus doctors for checklists. It becomes a question of motivation born out of self-interest. By page 55 this has been expanded into a wider set of defensive decision-making principles, which I think we can all know as true from our own experiences and those we witness. The “we need more data”; the “don’t decide and so don’t get blamed”; or the “recognition heuristic”, for example choosing the bigger name because it is easier to defend, even if it is the lesser choice. The point is that all of these self-serving decisions become the means to evade accountability. In leadership I think this is everywhere, and in the context of blame, we are all at fault every time we ignore the challenges faced and just demand the head of whoever was last to duck.

I have much to introduce on this concept. In Gigerenzer, the psychological reflections upon how this is inherently wound into risk, and into the self-serving behaviour we all find ourselves guilty of, seem to me a powerful reflection of every headline in the news. That includes the motivations of those headline-chasing interests themselves, and every blame-transferring opportunity we each read them in hope of finding.

How deep, or how low, can we go?

My questions are many. But one I am pondering right now is whether this could be a little closer to a universally applicable source of our failings as whole societies. In the project language I am attempting to introduce, it reflects our interfaces, our lack of being mode, the distance we try to create between ourselves and necessary action, and the separated motivations we then each stand behind. Every time we let our singular interest in visibility | behaviour | trust defend our own needs at the expense of others, we create a project of self-interest, with its own reasons to justify a truth. This project of self-interest sits primary and prior to any others we may subscribe to. The more projects | within projects we permit by the self-serving interests of our controls, the more defensive decision-making we permit to stand.

visibility | behaviour | trust

To my way of thinking, this is precisely why we have no trust in each other. Why visibility becomes centred upon ourselves. It becomes our justification for behaving badly towards others. We divide ourselves, by the singular interests of our individual projects. We selfishly allow controls to exist that support the same. We elect leaders who advocate more of the same or we ignore them completely and just do as we please.

Perhaps the following contemporary examples can be related to this propensity to make defensively minded decisions, or to blame those who do when we would do the same? The current queues for petrol; the positions we take on whether wealth or health should be Covid19’s first response; the blame we put upon impotent government; the despair at a headline-chasing press; the divides in our society and across borders; the self-serving politics and back-biting distractions; the executive bonuses that go unchecked or the trade union disruptions on spurious grounds of safety; the constant erosions of interest in our schools, our hospitals, and our distant kin; the loss of interest by those who can afford it, and the collective despair of those who cannot.

We are all defensive decision-making machines and we are all playing the zero-sum game. As I return to university with psychology at my fingertips, I am wondering how deep this may go. Are we each even fooling ourselves, with defensive decision-making within that goes largely unseen?

About Me

In psychology we are required to look beneath the mask. This blog series is attempting to unmask some hidden parts of projects to engender a more collaborative way.

Find my professional mask here: