StratChat 3 December 2020: Fake News and Fallacies

Strategies rely on evidence. But evidence can easily go wrong. We talked about some common mistakes people make, what causes them, and what you can do about it.

StratChat is a weekly virtual networking event for anyone with an interest in developing and executing better business strategies. It is an informal community-led conversation hosted by StratNavApp.com. To sign up for future events, click here.

StratChat is conducted under the Chatham House Rule. Consequently, the summary below is presented without attribution of who said what. As such, it is an amalgamation of views - not necessarily those of the writer or of any one individual.

On 3 December 2020, we talked about Fake News and Fallacies.

It is said that we live in a post-truth world. Fake News abounds. Positions polarise.

As business strategists, our role is to find the threads of logic on which businesses can prosper, and to bring people together and align them so that co-ordinated action becomes possible.

How do we spot fake news and fallacies and what do we do about them?

Common errors

Three of the most common errors that people make are:

  1. Mistaking correlation for causality. Ice-cream sales are highly correlated with sunglasses sales - but one does not cause the other.
  2. Misapplying the average. It's possible to drown in a lake which is only an inch deep on average - if the lake is very wide, mostly half an inch deep, and has a deep pit in the middle of it.
  3. Assuming data has a normal distribution when in fact it could be significantly skewed or even multi-modal (have multiple 'peaks').

We don't have to be expert statisticians to interpret data accurately, but we do need a basic working knowledge of statistics to avoid falling into such traps.
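To make the first and third of these errors concrete, here is a minimal Python sketch with made-up numbers (the temperatures, sales figures and call lengths are purely illustrative): two series that are strongly correlated only because hot weather drives both, and a bimodal data set whose mean describes almost nobody in it.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Correlation without causation: hot weather drives both series.
temperature = rng.normal(22, 5, 365)                       # hypothetical daily temperatures
ice_cream_sales = 50 + 10 * temperature + rng.normal(0, 20, 365)
sunglasses_sales = 30 + 8 * temperature + rng.normal(0, 20, 365)
print(f"Correlation: {np.corrcoef(ice_cream_sales, sunglasses_sales)[0, 1]:.2f}")

# 3. A bimodal distribution: the mean sits where almost no observations are.
call_lengths = np.concatenate([
    rng.normal(2, 0.5, 500),    # quick queries (~2 minutes)
    rng.normal(20, 3, 500),     # complex complaints (~20 minutes)
])
print(f"Mean call length: {call_lengths.mean():.1f} minutes")  # ~11 minutes - describes neither group
```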

The computer must be right

The availability of big data and powerful statistical software may be making the problem worse. More and more people are able to collect large data sets and apply statistical modelling at the push of a button. But they may not necessarily understand the theory behind it. As a result, they could produce very accurate but highly misleading analysis.

When such analysis is built into AI, the consequences become even more worrying.

We have a tendency to believe that data produced by a computer must be right. And so we stop questioning it. But even if it is right, without understanding how it was calculated, we may misunderstand what it means.

Some practical examples

The R-number

We see the problem of averages in the COVID-19 R-number. In the UK, we started by calculating the national R-number and putting the whole country into (or taking it out of) lockdown accordingly. Then we worked out that some areas were worse affected than others, so we started calculating regional R-numbers and applied a regional tiered system.

But both systems leave you with the problem that the R-number in a city-centre might be much higher than it is in a nearby small town. Switching the granularity from national to regional may reduce the impact of this averaging, but it does not eliminate it or the significant impact it has on people's lives.

Fundamentally, the R-number, being an average, glosses over the extreme variability in the infection mechanisms. A small number of super-spreaders (either people or events/circumstances) can infect very large numbers of people, while most infected people may infect no-one at all.
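As an illustration, here is a rough Python sketch (the parameter values are purely illustrative, not epidemiological estimates) of two outbreaks with exactly the same average R but very different spreading patterns:

```python
import numpy as np

rng = np.random.default_rng(42)
population = 10_000          # number of infected people we simulate
mean_r = 1.2                 # the headline 'average' R-number

# Scenario A: everyone infects roughly the same number of people
uniform = rng.poisson(mean_r, population)

# Scenario B: heavily over-dispersed spread - a few super-spreaders,
# most people infecting no-one (negative binomial with small dispersion k)
k = 0.1                      # illustrative dispersion parameter
overdispersed = rng.negative_binomial(k, k / (k + mean_r), population)

for name, sample in [("uniform", uniform), ("super-spreading", overdispersed)]:
    print(
        f"{name:16s} mean R = {sample.mean():.2f}, "
        f"infected no-one: {np.mean(sample == 0):.0%}, "
        f"top 10% caused {np.sort(sample)[-population // 10:].sum() / sample.sum():.0%} of onward infections"
    )
```

Both scenarios report roughly the same average R, but in the second one most people infect nobody while a small minority account for the bulk of transmission.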

Rush hour

In the book How to Make the World Add Up, author Tim Harford tells of his frustration with his daily commute across London. Every tube and every bus was always crammed to capacity - to the point that he sometimes had to let one pass and wait for the next.

He asked other people if they had a similar experience. They all said that they did. He even started collecting his own data by estimating and averaging the occupancy of each carriage he rode.

Yet the figures from Transport for London (TfL) showed that, on average, tubes and buses ran with very few people on them - far fewer than his own estimates suggested.

He was so frustrated with this disparity between his experience and TfL's data that he called them up and challenged their data gathering techniques. But he discovered that they were actually pretty good.

So what was the problem?

The problem is, of course, that most people travel during rush hour. Rush hour traffic tends to be directional: for every packed tube or bus travelling in one direction, there is another returning in the opposite direction with virtually nobody on it. And outside of rush hour, when everyone is either at work or back at home, there are very few people on the tubes and buses.

And asking more people wouldn't help. Most of them were also commuters having a similar experience. There are actually very few people who mostly travel outside of rush hour, so the chances of finding and asking one are relatively slim.

So TfL's statistics are completely accurate. But, at least for the purposes of understanding their customers' experiences of the services, also completely useless.

This is why methodologies like Six Sigma focus on data which measures processes on an end-to-end basis from the customer's perspective.
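A small Python sketch (with made-up numbers, not TfL data) shows how the average occupancy per service run and the occupancy the average passenger actually experiences can tell completely different stories:

```python
# Illustrative figures only - three hypothetical groups of service runs.
services = [
    {"name": "rush hour, inbound",  "runs": 10, "passengers_per_run": 800},
    {"name": "rush hour, outbound", "runs": 10, "passengers_per_run": 20},
    {"name": "off-peak, both ways", "runs": 80, "passengers_per_run": 30},
]

total_runs = sum(s["runs"] for s in services)
total_passengers = sum(s["runs"] * s["passengers_per_run"] for s in services)

# Operator's view: average occupancy per service run
per_run_average = total_passengers / total_runs

# Customer's view: the occupancy the average *passenger* experiences,
# because most passengers are on the crowded runs
per_passenger_average = sum(
    s["runs"] * s["passengers_per_run"] ** 2 for s in services
) / total_passengers

print(f"Average occupancy per run:        {per_run_average:.0f} passengers")   # ~106
print(f"Occupancy the average rider sees: {per_passenger_average:.0f} passengers")  # ~611
```

With these illustrative numbers the per-run average looks comfortable, while the typical rider is standing in a carriage carrying several hundred people.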

Call waiting time

Many call centres manage call waiting time. Some even set targets to reduce it.

In one call centre, an operator was seen picking up and putting down his handset in quick succession. When asked, he pointed at the average call waiting time figure visible on an electronic display. He explained that the number had been creeping up, but that by quickly hanging up on a few callers, he'd managed to get it down again.

This highlights two problems:

  1. Some customers were waiting longer than was desirable to get through. Other customers weren't getting through at all. They would just have to call back, at which point they'd probably end up waiting longer than was desirable or get cut off again.

    The average figure visible on the electronic display looked great, but no-one was actually getting what they wanted.

  2. Statistics are great for describing what's going on. If used correctly they can reveal valuable insights.

    But they can be dangerous if used to change behaviour. Every measure has a side-effect, so you need other measures to balance it out. Techniques such as the Balanced Scorecard approach can help to mitigate these problems.

We've all, no doubt, heard similar stories:

  1. Patients left in ambulances outside hospitals in order to keep hospital admission times within targets.
  2. Postal delivery services leaving 'we tried to deliver your parcel but no-one was home' notices, even when you were home, because it was quicker than actually delivering the parcel.
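A back-of-the-envelope Python sketch (again, with made-up numbers) shows how easily a headline average can be gamed like the call-centre example above, and why a balancing measure such as the cut-off rate is needed alongside it:

```python
# Seconds waited by callers who got through - illustrative figures only.
answered_waits = [30, 45, 60, 90, 120, 300, 360]

# Naive metric: average wait across all counted calls
print(f"Average wait, everyone answered:     {sum(answered_waits) / len(answered_waits):.0f}s")  # ~144s

# The operator hangs up on the three longest-waiting callers after 5 seconds.
# Those callers got nothing, but the 5-second 'calls' drag the average down.
gamed_waits = sorted(answered_waits)[:-3] + [5, 5, 5]
print(f"Average wait after gaming the metric: {sum(gamed_waits) / len(gamed_waits):.0f}s")       # ~34s

# A balancing measure exposes the trick: track the cut-off/abandonment rate too.
print(f"Callers cut off without being served: {3 / len(answered_waits):.0%}")                    # ~43%
```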

Cohort analysis

Cohort analysis provides one way of reducing the systemic effect of mixing distinctly different subgroups when calculating an average.

This can be particularly useful in measuring processes which take place over a long period of time. For example, when tracing 'address unknown' customers, it can take a while to find a new address for them, and even longer for them to respond and confirm their new address - sometimes up to six months, if indeed you ever find them at all.

A measure of success could be, for example, how many you'd correctly traced within 1, 2, 3 and 4 months of starting. You'd expect the number to go up each month.

However, if you average the number of people you've found, including both the ones you started looking for 1 month ago and the ones you started looking for 4 months ago, you lose valuable information.

In one organisation, part of the tracing process had stopped working. But because they were averaging the data across all time-cohorts, this failure remained hidden behind the 'average' performance for several months.
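A minimal Python sketch (with hypothetical figures) of the same idea: viewed by cohort, a failure in the most recent month is obvious; blended into a single average, it is barely visible.

```python
# Each cohort is the set of customers we started tracing in a given month;
# 'found_by_month' records how many were traced within 1, 2, 3 and 4 months.
cohorts = {
    "Jan": {"started": 100, "found_by_month": [40, 65, 80, 90]},
    "Feb": {"started": 100, "found_by_month": [40, 64, 79, 88]},
    "Mar": {"started": 100, "found_by_month": [38, 60, 75, None]},      # too recent for month 4
    "Apr": {"started": 100, "found_by_month": [10, None, None, None]},  # tracing step broke here
}

# Cohort view: success rate by months elapsed, per start-month
for month, c in cohorts.items():
    rates = [f"{f / c['started']:.0%}" if f is not None else "-" for f in c["found_by_month"]]
    print(f"{month}: " + ", ".join(rates))

# Blended view: total found so far divided by total started - the April failure
# is diluted by the older cohorts and barely moves the headline number.
total_started = sum(c["started"] for c in cohorts.values())
total_found = sum(max(f for f in c["found_by_month"] if f is not None) for c in cohorts.values())
print(f"Blended 'average' success rate: {total_found / total_started:.0%}")  # ~66%
```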

The threat of expertise

It requires a certain amount of expertise to spot and rectify these sorts of problems.

Unfortunately, many people just want simple answers. Attempts to produce more accurate analysis can be dismissed as "academic". Or busy decision-makers simply turn to the back page to read the conclusions without considering the analysis that went into them.

However, there is also a more fundamental communication gap in organisations, which can militate against the proper use of good evidence and analysis.

The Peter Principle suggests that every employee will eventually get promoted to their level of incompetence.

A technical expert may produce a brilliant piece of accurate and insightful analysis. However, in a large organisation, that analysis may need to be passed up through a long chain of command in order to reach the decision-maker. Any of those people it passes through could have reached their level of incompetence. This creates many opportunities for the analysis to be simplified, distorted or even just filtered out. So the brilliant piece of accurate and insightful analysis may never see the light of day.

Similarly, those at the top of the organisational hierarchy usually have the broadest understanding of all of the challenges an organisation faces. This information must also pass down through the organisation to reach the technical experts, and there are similar opportunities for that insight to be distorted or filtered out. As a result, the technical expert is working in the dark. No matter how technically good their analysis is, it is less likely to meet the needs of the decision-makers in either its content or form.

Finding your place in organisations

Our discussion about the Peter Principle led us to a broader discussion about people's roles in organisations.

There are many examples of organisations that have promoted their best salesperson to be the head of the sales division. Often, they end up losing their best salesperson and gaining a poor sales manager. Everyone is unhappy.

Some organisations develop parallel career tracks for managers and for professionals/technical experts to get around this. But, for many, the social expectation to climb the corporate ladder remains too great.

We see similar situations where an entrepreneur/founder stays on as the CEO of an organisation long after it has become established, successful and grown. The skillsets required to found an organisation are different from those required to run one.

In fact, organisations probably go through several stages of a life cycle. Each stage has different challenges and needs different kinds of people to meet them.

Participants: Chris Fox (host), David Winders, Mark Cardwell, Simon Krystman.

Next week

Next week, on 10 December 2020, we'll continue the conversation by looking at organisational life cycles. How do organisations change from inception, through growth and maturity, to commoditization? What impact does this have on what strategies they need and how they develop and execute them? How has strategic thinking evolved, and how does it map to this understanding?

To join us for a free, informal and invariably lively conversation, please sign up here.





