Crises such as the current pandemic force leaders to make tough decisions. Governments were—and in many cases still are—faced with practical and moral questions like “To what extent can we stifle social interactions to limit the spread of the disease, but at the same time protect businesses from going bankrupt?”, “Where’s the break-even point beyond which the suffering caused by a recession surpasses the number of lives threatened by the pandemic itself?”, and “Is the number of lives saved or lost the right measurement to use at all?”

But governments are not the only ones who have to make those decisions: as critical members of the public, we also need to question them. Too often, though, that questioning is done in bad faith: opposing politicians and media outlets attack those in charge simply because they are on the other team, rather than for objective reasons.

Statements like “it would have been better if we had closed the schools a week earlier” or “we shouldn’t have mandated wearing face masks in shops” are easy to make with the benefit of hindsight, but they are unhelpful noise at best and counterproductive garbage at worst. That kind of after-the-fact judgement doesn’t take into account what was and wasn’t known at the time the decisions were made. It also neglects the fact that good decisions can still lead to bad outcomes, and (sometimes) vice versa.

How decisions are made

Competent leaders faced with deciding between different courses of action rely on models that predict what the future is likely to hold. Those models can be simple estimations or complex simulations; there can be high confidence or much uncertainty about them; their methodology can be transparent or opaque. But in any case they shine a light on a few key properties of a potential future state of affairs: “Strategy A will lead to a peak of 10,000 simultaneous infections but will require halting economic activity for two months; strategy B will peak at 50,000 infections and will impede the economy for one month,” and so forth.

But those models alone are only one factor influencing the decision. Another important, but subtle, one is which property to optimize for. To use an example from product management: let’s assume you have to decide whether to add feature A or feature B to your product, and you have high confidence in the estimations of the effort each will require and the revenue each will bring in. If revenue is the one and only property you’re optimizing for, the decision is easy: just pick the option with the higher number, regardless of the effort.

If things afterwards don’t work out as planned and you’re asked to justify your decision, you can point to the fact that, based on the given estimations and the fact that revenue was what you were trying to optimize for, it was the best possible option. So regardless of the outcome, you made an objectively good decision.
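The single-criterion logic of the product example above can be sketched in a few lines. The feature names and numbers here are hypothetical, purely for illustration:

```python
# Hypothetical estimations for two candidate features (made-up numbers).
options = {
    "feature_a": {"effort_weeks": 3, "revenue": 120_000},
    "feature_b": {"effort_weeks": 8, "revenue": 150_000},
}

def decide(options, optimize_for):
    """Pick the option scoring highest on the one property being optimized,
    ignoring every other property entirely."""
    return max(options, key=lambda name: options[name][optimize_for])

print(decide(options, "revenue"))  # picks feature_b, despite its higher effort
```

The point is that once the property to optimize for is fixed and agreed upon, the choice itself is mechanical; the consequential, and often implicit, decision is which argument gets passed as `optimize_for`.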

Completely ignoring properties such as effort, cost, or quality is of course hardly a good idea in reality. But the example illustrates that whenever there are conflicting properties one could strive for, the decision which of those to prioritize is in itself an important one. Often, though, that meta-decision isn’t clearly communicated and agreed upon: for a company’s top managers it might be obvious that they should maximize shareholder value first and foremost, whereas the company’s employees might not care about that at all.

The lesson to learn here is: be transparent about what you’re optimizing for. We tend to assume that others have the same priorities and values as we do ourselves, which is why we never openly discuss them. That leads to dissatisfaction and disengagement, but also to leadership decisions being questioned for reasons that have nothing to do with the decisions themselves.

Saving the most lives

To return to politics: in the pandemic situation, one would assume that the top property leaders try to optimize for is the overall number of lives saved.

The case in the UK…

So let’s assume that when the UK government initially decided to pursue the widely criticized herd-immunity strategy, their top priority was protecting the most lives. To judge whether or not that was a good decision, we would need to know if the best possible models available to them at the time predicted that this strategy would in fact result in the smallest number of deaths overall. Again, let’s assume that that was the case. (Given that the UK administration was quick to switch strategies once Imperial College London published a report that foretold a staggering number of deaths, one can at least assume that the prior decision, too, was made on reasonable grounds.)

If both those assumptions are true, we can hardly criticize them for making that decision at first and then changing to a different strategy later. Pointing out afterwards that it would have been better to decide otherwise, because of what we know now, is like complaining about spilled milk: sure, it would have been better not to spill the milk in the first place. But if I had known beforehand that my action would lead to spilling the milk, I would not have taken that action at all.

…and what happened in Tyrol

In Austria, questionable decisions were also made at the beginning of this crisis. Popular winter holiday destinations were at first not closed, quarantined, or evacuated despite reports of large numbers of infections originating from those areas.

Those decisions have been criticized harshly, especially because small changes in the officials’ behavior at the time would likely have saved hundreds or thousands of people all over Europe from getting infected at all. Let’s apply the same framework of judgement that we used for the UK example: did the officials, again at the time, have the right models? And what did they try to optimize for?

The first question, I think, is easy to answer: we have good reason to believe that Iceland and other countries informed the Tyrol officials about high numbers of tourists returning home carrying the infection. Making the mental leap from that warning to the prediction that many more hidden infections, and an escalation of the situation, were very likely should be easy for any qualified public health official, even without access to complex mathematical and statistical modeling tools.

What’s more questionable compared to the UK situation is the second aspect of good decision making: Did the Tyrol officials in fact try to do everything in their power to save the most lives? To limit the number of infections? To protect the vulnerable? Or did they look for a scenario in which they could keep the hotels, bars, and skiing resorts in operation for as long as possible? And if they did, what was the ulterior motive for that?

It is entirely possible that the latter is true, that they tried to continue business as usual, and that they did so with morally sound motives. If their top priority was protecting lives, they may have wanted to avoid panic among large crowds of departing tourists. Maybe they needed more time to ensure organized return journeys for all guests. Maybe they were afraid of food shortages during a complete lockdown.

The problem is: that scenario doesn’t seem likely. Given how influential the tourism lobby is in western Austria, it is hard to believe that the officials’ judgement was not influenced by the money each additional day of normal winter operations would yield, and by the long-term marketing damage one would have to expect from an early closure of the skiing season. And that’s the big difference between the UK and the Tyrol case: in the former, it is hard to insinuate bad motives behind the initial decision to attempt herd immunity, while in the Tyrol case it’s getting harder and harder to assume good motives for the decisions that were made.

Conclusion

When faced with decisions we have to make assumptions about what the future will look like if we take a certain course of action. But we also need to be clear about what we’re looking for in the future. Judgement of strategic decisions—political, economic, or otherwise—always has to take into account what the decision maker could know at the time, and what they were optimizing for.

If we want to increase acceptance of the decisions we make, and to avoid being judged bad decision makers, we need to be transparent about both aspects: which models did we use, and which properties did we try to optimize for?