Risk Aversion

If you happen to be a CEO, have you noticed a certain sluggishness in your department managers? A reluctance to take decisive action? Perhaps not. The problem is pervasive, but it is more noticeable from the bottom of the organization than from the top. The reason is that even though the causes of a problem can often be found high up, the effects are usually felt further down.

Most organizations have a built-in resistance to taking action. It is generally safer for an individual, especially a manager, to take no action at all. The illustration below shows how it works:

The Current Reality Tree (CRT) shows that two root causes contribute to exaggerated risk aversion:

  • Detected mistakes are punished
  • Decisions resulting in no action are not recorded

Because of this, when faced with a problem, it is a much safer strategy for an individual to take no action than to act. This discourages managers from taking action. Employees at the lower levels of the corporate hierarchy hesitate to bring problems to their managers, because they know the manager won't like it. As a result, the organization becomes risk averse.
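
To make the incentive concrete, here is a toy expected-payoff calculation. Every number in it is made up; the only thing that matters is the asymmetry the CRT describes: mistakes from action are recorded and punished, while mistakes of inaction are never recorded at all.

    # Toy expected-payoff comparison. All numbers are invented for illustration;
    # only the asymmetry matters: detected action mistakes are punished, while
    # mistakes of inaction are never recorded.

    P_ACTION_FAILS = 0.3     # chance that taking action turns out to be a mistake
    P_INACTION_FAILS = 0.5   # chance that doing nothing turns out to be a mistake
    REWARD = 2               # personal payoff when an action works out
    PUNISHMENT = -10         # personal cost of a recorded, punished mistake

    # The individual's view: inaction failures are invisible, so they cost nothing.
    expected_action = (1 - P_ACTION_FAILS) * REWARD + P_ACTION_FAILS * PUNISHMENT
    expected_inaction = 0.0

    # The organization's view: a mistake costs the same whether it came from
    # action or from inaction.
    ORG_COST_OF_MISTAKE = -10
    org_cost_action = P_ACTION_FAILS * ORG_COST_OF_MISTAKE
    org_cost_inaction = P_INACTION_FAILS * ORG_COST_OF_MISTAKE

    print(f"Individual:   act {expected_action:+.1f}, do nothing {expected_inaction:+.1f}")
    print(f"Organization: act {org_cost_action:+.1f}, do nothing {org_cost_inaction:+.1f}")

With these made-up numbers, doing nothing is the rational choice for the individual even though it is the worse choice for the organization, which is exactly the conflict the tree describes.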

Because information about mistakes of inaction is not recorded, upper management will probably not even be aware of the problem. (There is an old saying that managers are like mushrooms: keep 'em in the dark and feed 'em horseshit. For an individual, this is certainly a viable tactic in a risk averse organization.)

The problem, from the organization's point of view, is that being overly risk averse can hurt it, or even kill it. Most business organizations that get into serious trouble do so because of what they did not do, rather than what they did. Even when they get into trouble because they did something wrong (or even because they did something right), the mistake can usually be traced back to something they failed to do earlier.

I once worked in an organization where the team I belonged to uncovered some serious problems in the way we developed software. We also worked out how to fix the problems so that they would never occur again. Just when we were set to go, our company merged with a very risk averse organization, and we got a new manager. His first directive was "don't change anything, for any reason". Partly because of this, the department collapsed. Most people in it left the company, and profits from software development dropped like a rock.
Still, it was a viable tactic for the departmental manager. He remained with the company, and the last I heard, he had gotten promoted.

How does one go about improving the way an organization handles risk? By removing the causes that make managers avoid taking action. The Future Reality Tree (FRT) below shows one way to do it:

To begin with, the organization must teach its members that making mistakes is OK. It is OK because mistakes are how we learn. Success brings no new knowledge; it only confirms what we already knew how to do. Failure increases knowledge (assuming we are willing to learn from our mistakes), and so has value. That value may often be greater than the cost of making the mistake in the first place, provided that the organization, not just the individual, learns from it and adapts to avoid similar mistakes in the future.

An organization can do this by rewarding mistakes that lead to learning something new. It is also important to ensure that mistakes are not repeated. One way to do this would be to punish repeated mistakes, but that does not work very well. A better way is to ferret out the root causes and fix those. By the way, root causes are almost always systemic, so it is very rare that an individual really is to blame.

It is also necessary to start recording decisions not to take action. Inaction probably causes more problems in your organization than action does. If mistakes due to inaction are not recorded, the organization cannot learn from them.
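
A decision log does not have to be elaborate. The sketch below is a hypothetical format (the field names are my own invention); its one essential feature is that "take no action" is recorded like any other decision, together with the predicted outcome, so that it can be reviewed later.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class DecisionRecord:
        """One logged decision, whether or not it led to action."""
        decided_on: date
        problem: str
        decision: str             # may well be "take no action"
        rationale: str
        predicted_outcome: str
        actual_outcome: str = ""  # filled in at review time

    decision_log = [
        DecisionRecord(
            decided_on=date.today(),
            problem="Integration builds break almost daily",
            decision="Take no action",
            rationale="Too close to the release to touch the build process",
            predicted_outcome="Breakages continue but stay manageable",
        ),
    ]

    # At review time, a decision not to act is evaluated exactly like any other.
    for record in decision_log:
        print(f"{record.decided_on}: {record.decision} -> predicted: {record.predicted_outcome}")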

There is one important point about evaluating mistakes: a common error is to judge a decision solely by its result. This does not work very well, because most decisions have a wide range of possible outcomes. Randomness plays a big part, much larger than we are usually comfortable admitting. Instead, decisions should be evaluated on whether they were strategically and tactically sound according to the currently approved management model. (The most famous example of a company that consistently does this is Toyota.)
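
A small simulation makes the role of randomness concrete. The success probabilities below are made up; the point is only that with outcomes this noisy, grading a decision by a single result misjudges it a large fraction of the time.

    import random

    random.seed(1)
    TRIALS = 100_000

    P_SUCCESS_SOUND = 0.7    # made up: a sound decision still fails 30% of the time
    P_SUCCESS_UNSOUND = 0.4  # made up: an unsound decision sometimes works anyway

    sound_graded_bad = sum(random.random() > P_SUCCESS_SOUND for _ in range(TRIALS))
    unsound_graded_good = sum(random.random() < P_SUCCESS_UNSOUND for _ in range(TRIALS))

    print(f"Sound decisions graded 'bad' on outcome alone:    {sound_graded_bad / TRIALS:.0%}")
    print(f"Unsound decisions graded 'good' on outcome alone: {unsound_graded_good / TRIALS:.0%}")

With these numbers, outcome-only grading punishes good judgement about 30% of the time and rewards luck about 40% of the time, which is why the evaluation has to look at the decision itself, not just at the result.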

When the outcome of a decision differs from the outcome predicted by the management model, the cause must be evaluated separately. Was it a random result (i.e. we don't know why, but we do not believe the model was at fault), was the model misapplied, or, horror of horrors, is the management model itself wrong? If it is, the model must be updated or replaced.

Be conservative. You should not adopt a management model without a lot of evidence that it is a good one, and you should not throw one out without a lot of evidence that it is a bad one.

Comments

Anonymous said…
The analysis is convincing, and dovetails with a number of other articles that appeared in the blogosphere around the same time. I collected some of them together here.
