Abstract: The efforts to contain SARS-CoV-2 and reduce the impact of COVID-19 have been supported by Test, Trace and Isolate (TTI) systems in many settings, including the United Kingdom. The mathematical models underlying policy decisions about TTI make assumptions about behaviour in the context of a rapidly unfolding and changeable emergency. This study investigates the reported behaviours of UK citizens in July 2021, assesses them against how a set of TTI processes are conceptualised and represented in models and then interprets the findings with modellers who have been contributing evidence to TTI policy. We report on testing practices, including the uses of and trust in different types of testing, and the challenges of testing and isolating faced by different demographic groups. The study demonstrates the potential of input from members of the public to benefit the modelling process, from guiding the choice of research questions, influencing choice of model structure, informing parameter ranges and validating or challenging assumptions, to highlighting where model assumptions are reasonable or where their poor reflection of practice might lead to uninformative results. We conclude that deeper engagement with members of the public should be integrated at regular stages of public health intervention modelling.

This post is inspired by a question from one of our readers, Lukasz. I’m going to outline how I find and examine research on organisations, agile/lean, and culture. Hopefully this will inspire you to dig more into what stuff is true and what is just crap.

Finding the genuine facts among the huge volume of opinion is hard. It’s hard in politics, it’s hard in management, and it’s hard in social science. As a mathematician, I come from a world where things are either true or not, and I continue to find exploring ambiguous and opinion-rife research challenging.

Finding an interesting topic

First you need to know what you want to know. Inspiration for what to research can be found in case studies, papers, blogs, books, conversations, your own experience etc. I personally find my ways of thinking most easily challenged by experience, books, videos, and conferences (probably because these are accessible!).

Finding the research

Once you’ve something you want to know, and the vocabulary to describe it, I’d recommend googling with specific terms. For example, if you are interested in the impact of management on team members, try something like: “role hierarchy team impact”. Stay away from buzzwords like “management” or “agile”.

Google Scholar is good for finding paper titles, but the papers themselves often sit behind publisher paywalls. Once you know the title, if you search again specifically for those papers/authors you can often find a free version on the author’s academic page, or at least some related content.

Assessing research quality

Be cynical. Assume everyone is lying and check their “facts”.

Beware sweeping statements. It is hard to have good social science that is very general.

Use your noggin. E.g. is the sample size big enough? Is there a control group?

Beware research fashion

Just because something is popular to talk about (or highly cited) doesn’t make it good. A good example is the Myers-Briggs Type Indicator. Yes it is popular, and arguably helpful to some, but that doesn’t make it true or “the way to classify people”. Similarly, some leadership styles are more heavily researched than others. The weight of research can be tempting to give in to, but keep sifting through, especially when the research is about models to help understand a topic (rather than an absolute truth).

Finally

Once you’ve something you think looks solid, a good test is to try it yourself! Run an experiment relevant to your situation, and see if you get results in line with the theory. Then tell other people what you’ve learned. (Yes I’m ignoring confirmation bias etc.)

If you have other techniques, or questions or suggested improvements to my ways of researching, please do share them in the comments!

Bear with this post as it goes through some equations at the beginning, but it is worth it. We’ll be doing some of the calculations to get this picture:

This is the set of numbers “c” such that the sequence zₙ₊₁ = zₙ² + c, starting from z₀ = 0, is bounded. These z are complex numbers, which we’ll ignore for now. It is much easier to understand if we look at some examples:

Let’s say c = -1.

We start with z = 0. Then 0² + (−1) = −1, then (−1)² + (−1) = 0, then −1 again, giving the sequence 0, −1, 0, −1, …

This is repeating, and the numbers are bounded.

Let’s now try c = 0.5.

We start with z = 0. Then 0² + 0.5 = 0.5, then 0.5² + 0.5 = 0.75, then 0.75² + 0.5 = 1.0625, then roughly 1.629, then roughly 3.153, …

We can see that these numbers are getting bigger and bigger, and the sequence is not bounded.

One more: c = −1.9.

The sequence starts 0, −1.9, 1.71, 1.0241, −0.851…, and keeps bouncing around, never getting very big or very small, so it is bounded. It is kinda fun to sit with a calculator and try this.
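The three examples above can be checked with a few lines of Python. The escape radius of 2 is a standard fact about this iteration: once |z| exceeds 2, the sequence is guaranteed to grow without bound.

```python
# Iterate z -> z^2 + c starting from z = 0, and report whether the
# orbit stays bounded. If |z| ever exceeds 2, it must diverge.
def orbit(c, steps=50):
    z = 0
    values = []
    for _ in range(steps):
        values.append(z)
        if abs(z) > 2:          # escaped: the sequence is unbounded
            return values, False
        z = z * z + c
    return values, True         # still bounded after all the steps we tried

for c in (-1, 0.5, -1.9):
    values, bounded = orbit(c)
    print(c, "bounded" if bounded else "unbounded", values[:6])
```

Running this reproduces the hand calculations: c = −1 cycles between 0 and −1, c = 0.5 escapes after a handful of steps, and c = −1.9 bounces around without ever escaping.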

Mathematicians call this kind of system “chaotic”, as it is very sensitive to its starting conditions. Sometimes this is called the butterfly effect. Note that chaotic is not the same as random: in a chaotic system, if you know everything about the initial conditions you know what will happen, whereas in a random system, even if you knew everything about the initial conditions you wouldn’t know what was going to happen.

Benoit Mandelbrot was one of the first mathematicians to have access to a computer, and hopefully you can now see why he needed one to work these out. He repeated this calculation for lots of values of c. The pretty picture we started with is really a plot of the set of c values (called the Mandelbrot set), where the colours indicate what happens to the sequence (e.g. how quickly it escapes, if it does).
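As a sketch of how such a picture is produced, here is the classic escape-time approach, rendered as plain ASCII art rather than a colour image: for each grid point c we count how many iterations it takes to escape.

```python
# Escape-time rendering of the Mandelbrot set as ASCII art. For each
# point c on a grid, iterate z -> z^2 + c and count the steps until
# |z| > 2. Points that never escape are (as far as we can tell) in the set.
def escape_time(c, max_iter=30):
    z = 0
    for n in range(max_iter):
        if abs(z) > 2:
            return n          # escaped after n iterations
        z = z * z + c
    return max_iter           # never escaped within our iteration budget

rows = []
for i in range(-12, 13):                  # imaginary part from -1.2 to 1.2
    row = ""
    for r in range(-40, 11):              # real part from -2.0 to 0.5
        c = complex(r * 0.05, i * 0.1)
        row += "#" if escape_time(c) == 30 else " "
    rows.append(row)
print("\n".join(rows))
```

In a colour image, the escape count for each point outside the set is mapped to a colour instead of a blank space, which is where the bands of colour around the black set come from.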

You can zoom into the colourised picture to see how complex this is here. Lots of people (me included) think it is pretty cool. It is really worth taking a look to appreciate the complexity.

Other than being pretty, why does this matter?

Stepping back: this picture is made from the formula zₙ₊₁ = zₙ² + c. This is so simple, and yet gives rise to infinite complexity. In the words of Jonathan Coulton,

Infinite complexity can be defined by simple rules

Benoit Mandelbrot went on to apply this to the behaviour of economic markets, among other things. Later, people have applied it to fluid dynamics (video), medicine, engineering, and many other areas. Apparently there is even a Society for Chaos Theory in Psychology & Life Sciences!

Further reading

This article is good for more explanation of the maths.

Apologies to any Pure mathematicians for the simplifications in this article.

Here is my short, simple step-by-step guide for smart collection of data.

Step 1) Determine what matters, ideally in accordance with a Company or Product vision

Step 2) Come up with as many ways as possible of measuring the aspects that matter, or that impact what matters

Step 3) Collect data! Ideally set up easily repeatable ways of doing this, automated wherever possible

Step 4) Form hypotheses: how do you believe certain measures affect your vision? What do you expect the data to tell you?

Step 5) Collect more data

Step 6) Test your hypotheses

Step 7) Collect even more data. Quite simply, the more data the better.

Let’s look at an example. Suppose a government manager wishes to improve the innovation of her employees.

Step 1) Target: what matters here is “innovation” – which we define more precisely in…

Step 2) Measurement: Some of the ways in which innovation can be measured are volume of ideas, number of staff submitting ideas, percentage of staff submitting ideas, value delivered, employee perception of innovation produced, manager perception, and customer perception (in this case the public would be the customer), etc.

Step 3) Collection: This involves ensuring that things are centrally recorded and surveys are done to create a baseline.

Step 4) Hypothesis: It is suggested that an innovation rewards ceremony would help to improve morale. Note that it is important that the hypothesis is formed after the first data collection, as we want to be able to dig deeper into anything interesting we find. This means that we often need to collect more detailed data, specifically targeted towards proving or disproving our hypothesis.

Step 5) Collection: A more accurate, probably quantitative, measure of morale is added to the existing survey.

Step 6) Action: An innovation rewards ceremony is run.

Step 7) Collection: The survey is conducted again – morale is measured as having improved. Success! Note that the other measures (e.g. the volume of ideas produced) are now also being consistently measured and can be easily tracked throughout future experiments.
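As a minimal sketch of what the before/after comparison in steps 5–7 might look like, here is some Python using made-up illustrative survey scores (a 1–5 morale scale); these numbers are hypothetical, not real data.

```python
# Hypothetical sketch: compare mean morale scores (1-5 scale) from the
# baseline survey with those from the follow-up survey after the
# rewards ceremony. The scores below are invented for illustration.
from statistics import mean, stdev

baseline = [3, 2, 4, 3, 3, 2, 3, 4, 2, 3]   # before the ceremony
followup = [4, 3, 4, 4, 3, 3, 4, 5, 3, 4]   # after the ceremony

change = mean(followup) - mean(baseline)
print(f"baseline mean:  {mean(baseline):.2f} (sd {stdev(baseline):.2f})")
print(f"follow-up mean: {mean(followup):.2f} (sd {stdev(followup):.2f})")
print(f"mean change:    {change:+.2f}")
```

In practice you would want a proper significance test rather than eyeballing the means, but even this simple comparison only works because a baseline was collected before the intervention.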

After running through these steps we can ask ourselves the following questions.

What do we now know?

Key measures, and how they are changing with time

Whether the key measures remain the same, or if other aspects should be considered.

What can we not imply?

“Correlation does not imply causation”: just because a trend becomes apparent this does not mean that one workplace modification is the main contributor to a measured difference. For example, if morale improves during the summer months this may have been due to nicer, warmer weather rather than any particular managerial decisions.

We cannot assume that any trends apply in similar cases elsewhere: our sample is too small and too specific. Luckily, a full research paper is not the goal here!

As some of you may have noticed, this is very similar to the Six Sigma methodology of “Define, Measure, Analyse, Improve, Control”. It also mirrors the “Plan, Do, Check, Act” process found in many management handbooks.

The detail of the steps you yourself follow is not particularly important here; all I am really suggesting is to:

Ensure you are working on what really matters.

Add wider data collection before directing all your attention to one particular area. This way you will have a richer understanding of the problems and opportunities.