Evaluating policy using the flip of a coin

How a method that revolutionised the medical field can determine if a policy intervention works.

For decades, aid workers have looked for ways to increase attendance at voluntary child vaccinations. Even when vaccinations are free, many people don’t show up and so expose their children to life-threatening diseases. Perhaps an education programme on vaccinations and disease would help, or perhaps some people simply object to the procedure.

Whatever their reasons, a recent study showed people in India were twice as likely to go for vaccinations if given a small bag of lentils for doing so. Its impact was larger than that of other programmes offering cash rewards for attendance.

This is a particularly striking example of the need for a Randomised Controlled Trial (RCT) – the gold standard for determining whether an intervention works. RCTs are the only way to definitively establish cause and effect. This method revolutionised the field of medicine and we believe it has a similar role to play in public policy.

What is a Randomised Controlled Trial?

Almost every medicine prescribed has had its effects compared to a placebo. In most of these clinical trials, researchers will split a room of patients into two groups. It is important that this split is randomised: imagine the researcher asking each patient to come forward and then flipping a coin to determine whether they receive the new medicine.

The people who do not receive the medicine are known as the control. They will often receive a placebo without being told so. After waiting for the medicine to have its effect, the researchers compare the symptoms of the control group to those of the intervention group (who actually took the medicine) to determine whether it works.
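The coin-flip assignment described above can be sketched in a few lines of code. This is a minimal illustration, not a real trial protocol: the group names, function signature, and participant labels are all invented for the example, and real trials use more careful randomisation schemes (for instance, ones that balance group sizes).

```python
import random

def randomise(participants, seed=None):
    """Assign each participant to 'control' or 'intervention'
    by an independent fair coin flip."""
    rng = random.Random(seed)
    groups = {"control": [], "intervention": []}
    for person in participants:
        # The coin flip: heads -> intervention, tails -> control.
        if rng.random() < 0.5:
            groups["intervention"].append(person)
        else:
            groups["control"].append(person)
    return groups

assignment = randomise([f"patient-{i}" for i in range(20)], seed=42)
```

Because the flip is independent of everything about the patient, any difference in outcomes between the two groups can only come from the medicine or from chance, and chance can be quantified.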

In the first example, imagine that the aid agency had offered the lentils to everyone and seen no increase in attendance. Could they be certain that the lentils were ineffective? What if a change in the weather, or some other chance event, had coincided with their efforts? RCTs are the most rigorous way of isolating the exact causal impact of the lentils.

In government policy, we often use our business-as-usual services as a control group. Instead of a medicine, the intervention can be a small change in the wording of a letter or a totally new government programme.

Using RCTs in our work

Companies such as Amazon run hundreds of RCTs each year. They test the effects of different website layouts, ordering processes, menus and so on. Digital RCTs are often referred to as A/B tests. Typically, an A/B test will compare two contender designs for a web page. The best-performing design can then be repeatedly tested against new variants to gradually optimise the page.

We have also been using RCTs to test our Behavioural Insights work in Singapore. We partnered with the Public Service Division and the Housing & Development Board (HDB) to encourage residents to make payments online. The HDB identified several thousand residents in estates undergoing upgrade works. They randomly split the residents into two groups. The HDB sent the control group a standard request for housing upgrade payments. They sent a new behavioural insights letter to the other residents and tracked whether people paid online or by visiting their local HDB branch office. More than 3 in 10 people who received the new letter paid online, compared with 1 in 10 in the control group. Of course, because it was an RCT, we can be confident that the change in online payment was due to our improved letter, and not any other factor.
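That confidence can be checked with a standard statistical test. The sketch below uses a two-proportion z-test to ask whether a difference like 3 in 10 versus 1 in 10 could plausibly be chance. The group sizes are an assumption for illustration only: the article says "several thousand residents" but does not give exact numbers, so 1,000 per group is a made-up figure.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: is the gap between two rates
    larger than chance alone would plausibly produce?"""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Illustrative numbers only: 30% vs 10% paying online, with an
# assumed 1,000 residents per group (true sizes not stated above).
z, p = two_proportion_z(300, 1000, 100, 1000)
```

With a gap that large and groups that size, the p-value is vanishingly small, which is what lets an evaluator attribute the change to the letter rather than to luck.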

Uses for the Public Service

Policy officers often have a range of good ideas for how to make our policies and services more effective at achieving their goals. RCTs allow us to rigorously test our best ideas to determine which of them achieve the results we want.

The modern world has got used to a very high standard of evidence-based medical care that is built on thousands of RCTs. Hopefully one day, the same could be said about public policy and we will have fewer cases where we find out our policies have been ineffective many years after implementing them. Instead, we would have found more effective and cost-efficient alternatives to conventional policies, such as bags of lentils.

The authors are Samuel Hanes, Director of the UK government’s Behavioural Insights Team (Singapore office), and Leonard Chen, Manager (Service Strategy), PS21 Office in the Public Service Division.

Nov 1, 2016
Samuel Hanes and Leonard Chen