StratChat 26 November 2020: Evidence

Strategies without evidence are just wishful thinking. Join us as we get to the bottom of the role of evidence in business strategy.

StratChat is a weekly virtual networking event for anyone with an interest in developing and executing better business strategies. It is an informal community-led conversation hosted by StratNavApp.com. To sign up for future events, click here.

StratChat is conducted under the Chatham House Rule. Consequently, the summary below is presented without attribution of who said what. As such, it is an amalgam of views - not necessarily those of the writer, or indeed of any one individual.

On 26 November 2020, we talked about the importance of evidence in business strategy.

The power of evidence

One of the things that consultancies, especially the bigger ones, do so much better than organisations' own teams is produce evidence: thick decks packed full of charts and case studies.

Not everyone in a client organisation welcomes consultants' contributions. Perhaps they are worried that the consultants will uncover something inconvenient. Or that their interventions will result in unwelcome changes to a comfortable status quo. And most of them know at least parts of the business much better than the consultants do. So consultants need to back up everything they say with evidence.

(Commercial: The importance of building a strong evidence base was one of the initial motivations for building StratNavApp.com.)

But there is more to evidence in business strategy than justifying consulting bills.

Jim Barksdale famously said "If we have data, let’s look at data. If all we have are opinions, let’s go with mine."

Evidence brings objectivity. And objectivity increases alignment.

Evidence is the antidote to the hubris of the experienced industry insider who claims they know everything about how their industry works. Or of the evangelical startup founder who is convinced they have the idea the world has been waiting for.

Because if they are right, they should have no problem finding the evidence to confirm it. And if the evidence doesn't confirm it, then you have created an opportunity to learn. You may also have avoided investing a lot of time and money into an ill-conceived idea.

Phantom evidence

What do you do when someone claims to have evidence but doesn't produce it? How do you insist on seeing it without the conversation becoming adversarial? Without implying that you don't believe them?

We discussed a few techniques:

  1. If the person is pitching a proposal, you could note that it would be easier for more people to get behind their idea more quickly if the evidence were included in the presentation. This is invariably true, and so counts as good advice.
  2. You could similarly argue that, if they want you to invest your time and reputation, you need to see the evidence just as an investor would want to see it. (That is, make it clear that they are pitching you for your help.)
  3. You could present incompatible evidence - there is invariably some, as few proposals are ever that definitive - and ask them to help you understand what they think is wrong with it.
  4. You could present it as a deal-breaker. That is, without sight of the evidence, you're not in a position to assist them.
  5. Depending on what assistance or support you've been asked to provide, you could caveat it. That is, you could explicitly state that your input is contingent on the assumptions drawn from the unseen evidence being true.

But whichever route you choose, a failure to reveal the evidence should ring alarm bells.

A sample of one

Many startups are founded to solve problems that the founders have personally experienced. It is easy for them to assume that if they have experienced this problem then many others will also have experienced it.

That may well be true.

However, it also may not. So some external validation is required.

Furthermore, founders often conflate their problem experience with their solution idea. So they skip the step of validating the problem and progress straight to trying to validate the solution.

It is much better to validate the problem and the solution separately. If you don't, it becomes difficult to subsequently analyse the evidence you gather.

For example, if your solution doesn't validate, but your problem does, then you can probably fix your solution. But if your problem doesn't validate, then fixing the solution probably won't help.

Not all evidence is created equal

One of the reasons for wanting to see the evidence for yourself is that not all evidence is created equal.

Alex Osterwalder and David Bland recently produced a webinar in which a brave founder offered up his evidence and evidence-gathering processes for live critique. You can see a recording of the webinar here.

In it, they make several interesting observations:

  1. If your evidence and evidence-gathering process are not granular enough, you won't be able to discern causes and effects from the evidence. You will be left making assumptions about what the evidence means.

  2. It is much better to ask people about their past experiences than about their future intentions. For example, if you ask them about problems they've encountered in the past and how they solved them, you will get much more accurate and detailed answers. However, if you ask them whether they think they would like to use your solution, you're asking them to imagine the future. Their answers will likely be much less reliable.

    Osterwalder and Bland advocate the "Jobs to be Done" framework in this context. We've discussed this on StratChat previously, in our session on Metaphors and Business Strategy.

  3. The reliability and value of evidence are proportional to the investment the person made when giving it.

    For example, it costs a respondent next to nothing to answer a question. So the value of their answers is relatively low - even if you followed the advice in point 2 above.

    On the other hand, if someone pays a deposit to pre-order your product or service, then you can be much more confident that they think it is a good idea.

    Osterwalder and Bland arranged the credibility of evidence from various sources on a scale (from highest to lowest):
    • Mock sale
    • Presale
    • Deposit
    • Letter of intent
    • Follow-up physical meeting
    • Follow-up call
    • Ask to extend the interview time
    • Sign-up with an email
    • URL tracking

Only by seeing the evidence for yourself, in as much detail as possible, will you be able to determine what it really means and how reliable it is.

Evidence does not need to cost the earth

If you are a multi-billion pound corporation about to embark on a significant investment programme, then it makes sense to invest a substantial sum in getting it right. This is where one of the larger consulting firms may make sense: they have the depth of resources required, but you will pay a premium for them.

However, if you're a small startup, perhaps bootstrapping or relying on funding from friends and family, then this may be both unaffordable and unnecessary.

Fortunately, there are ways, such as those outlined above, to gather good evidence at low cost.

Digital technology enables evidence-gathering approaches that weren't possible in the past:

  • Services like Ideas Nest can help you to test new ideas very quickly and cheaply.
  • A/B testing can, in some circumstances, be cheaper and more reliable than focus groups or surveys (see the sketch below).
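
As a minimal sketch of what the analysis of such a test might look like, here is a short Python example. The visitor and sign-up numbers are entirely hypothetical, and the chi-square test on a 2x2 contingency table is just one common way of checking whether an observed difference in conversion rates is likely to be real:

```python
from scipy.stats import chi2_contingency

# Hypothetical results from a two-week A/B test of a landing page:
# variant A is the control; variant B carries a new value proposition.
visitors = {"A": 1000, "B": 1000}
signups = {"A": 42, "B": 61}

# 2x2 contingency table: sign-ups vs non-sign-ups for each variant.
table = [
    [signups["A"], visitors["A"] - signups["A"]],
    [signups["B"], visitors["B"] - signups["B"]],
]

chi2, p_value, dof, expected = chi2_contingency(table)

for variant in ("A", "B"):
    rate = signups[variant] / visitors[variant]
    print(f"Variant {variant}: {rate:.1%} conversion")

# A small p-value (conventionally below 0.05) suggests the difference
# is unlikely to be down to chance alone.
print(f"p-value: {p_value:.3f}")
```

The point is less the statistics than the economics: a test like this can run on a live page for a week or two at negligible cost, and it yields behavioural evidence rather than stated opinion.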

Unfortunately, the governance in many larger organisations militates against some of these more agile tactics. This is one of the reasons startups are often able to outcompete incumbents.

In fact, evidence gathered cheaply can, in some ways, be better than evidence gathered at great cost. If you run a quick test over a week or two and the results aren't favourable, then you've not really lost much. It is easy to recalibrate and carry on. If, however, you invest six months in a process, you are more likely to become emotionally invested and less likely to want to admit that you got it wrong.

Design thinking

Design thinking is very compatible with these sorts of agile evidence-gathering processes.

Design thinking advocates that you:

  1. Start by empathising with the problem the customer experiences.
  2. Draft a statement of the problem.
  3. Come up with some ideas.
  4. Build some kind of prototype (in the widest possible sense of the word - it could be a product prototype or even just some sort of form or mockup).
  5. Test this with customers.
  6. Iterate back through any or all of the preceding steps, depending on what you learn.

We discussed design thinking at one of our very first StratChats.

How evidence improves decision making

There are few, if any, absolute truths in the world. Evidence can take you close to them. But at the end of the day, there will always be some judgement required, and a decision to be made.

If you enter into a conversation on the basis that:

  1. Here is some evidence, and
  2. Here are the insights and conclusions that we've drawn from it

then your audience is left with three choices:

  1. Agree and accept what you're saying,
  2. Produce different evidence, or
  3. Draw different insights and conclusions from the evidence.

But the one thing they cannot easily do is just disagree without presenting something new and better.

(1) is obviously a win. But so are (2) and (3), because you will have learned something and increased the quality of the debate. You may need to go back to the drawing board, but you will go back with better evidence and/or deeper insight. All of which should lead to a better outcome.

Importantly, it moves you away from the situation where everyone around the table has an opinion and the only way to break the deadlock is for someone with authority to overrule everyone else.

Which comes first, the chicken or the egg?

As a general rule, is it better to:

  1. come up with a hypothesis first, and then look for data to confirm or disconfirm it
    OR
  2. look at the data and then see what hypotheses emerge?

There is a whole universe of data out there. Without at least an initial hypothesis, or even a hunch, what data would you look at? But once you start looking at the data, your hypothesis will inevitably evolve.

So the answer is probably that it's a bit of a mixture of both approaches.

When looking at the data, it is also important not to look only for data which confirms your hypotheses. You will inevitably find some! So you should also consider what data might disconfirm your hypotheses and look for that too. This second search can be particularly difficult if it's your own idea, especially if it's one you feel passionately about and have already invested a lot of time and money in. An objective outsider (like an external consultant) who can look at the evidence more dispassionately can be a great help here.

Looking at the competition can also provide clues. Not in order to copy them, but in order to make sure you don't. So, you might have a hypothesis that a particular market exists and is profitable. The existence of a competitor in that space might:

  • Confirm that hypothesis.
  • Help you work out if that market is already saturated or if there is still space.
  • Give you clues as to what you might need to do differently to win the remaining market share or capture market share from the existing competitor.

It's not all or nothing

Finding disconfirmatory data is not the end of the world. The decision is rarely as simple as either taking an idea forward or giving up on it.

Evidence can be used to shape, improve and sharpen an idea. Evidence may lead to:

  • A different, but equally if not more attractive, market segment.
  • A better product feature.
  • A cheaper way to produce it.

Of course, the ability to translate the evidence into a range of different ideas requires some creativity. This takes us back to an earlier StratChat we had on the subject of creativity in business strategy.

Everybody Lies

The book "Everybody Lies" by Seth Stephens-Davidowitz is full of warnings about the quality of evidence. The book is full of stories about data sets which would seem to contradict each other. In each case, the author then examines which of the two is more likely to be more reliable, and why.

Osterwalder and Bland score the quality of evidence based on the investment the respondent has made (see above). Stephens-Davidowitz concludes that evidence based on what people do, especially when they think no one is watching, is much more reliable than evidence based on what they say (to other people). Fortunately, there are now techniques like product analytics and A/B testing that we can use to observe customers' behaviour as they use our products.

This conclusion, and the evidence he presents in support of it, is very compelling. It's almost obvious in hindsight. Yet so much of what so many rely on when doing business strategy is still based on evidence from what people say in surveys, customer focus groups and interviews.
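
To make the "watch what they do" point concrete, here is a small hypothetical sketch in Python. The event log and event names are invented for illustration; the idea is simply that a product analytics event stream records what users actually did, which can then be set against what they say in surveys:

```python
from collections import defaultdict

# Hypothetical product analytics event log: (user_id, event) pairs
# recorded as customers actually use the product.
events = [
    ("u1", "opened_report"), ("u1", "exported_pdf"),
    ("u2", "opened_report"),
    ("u3", "opened_report"), ("u3", "exported_pdf"),
    ("u4", "opened_report"),
    ("u5", "opened_report"),
]

# Collect the distinct users who performed each action.
users_by_event = defaultdict(set)
for user, event in events:
    users_by_event[event].add(user)

openers = users_by_event["opened_report"]
exporters = users_by_event["exported_pdf"]

# Revealed behaviour: in a survey, far more users might *say* they
# want PDF export than ever actually use it.
print(f"{len(exporters)} of {len(openers)} report users "
      f"({len(exporters) / len(openers):.0%}) actually export to PDF")
```

None of this replaces judgement, but it does put what customers actually do on the table alongside what they say.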
