How do you know if your product is really working for users? You can guess, you can ask them, or best of all, you can watch them actually using it.

With OpenPrescribing, we want to make it easier for Clinical Commissioning Groups (CCGs) to monitor prescribing behaviour, and for GPs to prescribe in the best way. So it’s important that it really does help those users do those things. And the only way to verify that is with user research.

In this post, I’m writing about how we ran a recent user research session - we’re sharing it to show how we work, and to remind ourselves how we can do things better next time.

1. Work out what you want to test

We wanted to test the latest versions of the OpenPrescribing CCG and practice pages, plus some new email alert features we’re planning.

I built the first versions of these pages nearly a year ago. They just showed a few simple prescribing indicators, were only tested with a few users, and honestly weren’t that great.

More recently I rebuilt these pages to show the organisation’s performance compared to national deciles, which made unusual behaviour a lot easier to see.
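
For the curious, the decile comparison is conceptually simple: take every practice’s value for a measure in a given month, compute the national percentiles, and see where one organisation falls. Here’s a minimal sketch of that idea in Python - illustrative only, not OpenPrescribing’s actual code:

    import numpy as np

    # Illustrative sketch, not OpenPrescribing's actual code: compute
    # the national deciles for one measure in one month, then see where
    # a single practice's value sits relative to them.
    def national_deciles(values):
        return {p: np.percentile(values, p) for p in range(10, 100, 10)}

    all_practices = [0.12, 0.34, 0.08, 0.51, 0.22, 0.41, 0.19, 0.27, 0.30, 0.45]
    deciles = national_deciles(all_practices)

    practice_value = 0.51
    print(practice_value > deciles[90])  # True: above the 90th percentile

Plot those deciles as lines over time, with the organisation’s own values on top, and unusual behaviour stands out at a glance.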

But in both cases I designed and built them alone, with just a bit of high-level input from Ben. I tried hard to discover user needs and design iteratively, but inevitably spent most of the time wrangling code, servers and databases.

Thanks to new funding, we now have a team, more resource to design and test, and new features planned.

So, it was time for our first team user research session. Exciting!

2. Get a team, and a room

You’ll need:

  • a quiet room, a phone line, wifi
  • lots of wall space
  • a whiteboard
  • a printer, blu-tack, post-it notes in different colours, and pens.

Next, you need someone with user research skills to run the sessions, plus your team. Our sessions were led by the brilliant consultant Henry Hadlow; also on the team were me (tech lead), Lisa (community manager) and Seb (developer).

This might seem like a lot of people - and it was (probably too many when we interviewed in person, rather than over the phone). But it’s helpful for everyone on the team to see real users using the product, as often as possible.

3. Find your users

Next, you want to find your users. Ideally, these should be the people who actually use your site - in our case that’s GPs, CCGs and pharmacists.

Awesomely, Lisa found seven users to test on with just a few days’ notice: we talked to pharmacists, GPs, and CCG staff.

Most were outside Oxford, so Lisa set everyone up with screen-sharing sessions. Not everyone in the NHS has access to Google Hangouts or Skype, so we used Webex - you have to pay for this, but if you’re testing on NHS users it’s probably necessary.

4. Prepare your tests

We wanted to test two things:

  • the current CCG pages and practice pages
  • a new, as-yet-unbuilt feature: email alerts for users about changes or opportunities in prescribing.

With the current pages, we could just share URLs. But the email features don’t exist yet, so we had to make prototypes for users to try out - Henry drew the email signup process on paper, then mocked it up in Marvel.

Meanwhile, Seb and I hacked together a prototype of the content the new emails might contain, just using Google Docs.
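
For the curious, the kind of trigger logic behind an alert like this might look like the sketch below. This is entirely illustrative - we haven’t built or even designed this yet, and the names and threshold are made up:

    # Entirely illustrative - not a feature we've built. One possible
    # trigger for a prescribing alert: email when a practice newly
    # crosses above a high national percentile on a measure.
    def should_alert(this_month_percentile, last_month_percentile, threshold=90):
        was_outlier = last_month_percentile > threshold
        is_outlier = this_month_percentile > threshold
        return is_outlier and not was_outlier

    print(should_alert(95, 80))  # True: newly above the 90th percentile
    print(should_alert(95, 95))  # False: was already an outlier last month

The real feature would also need to cover “opportunities” as well as changes, but this captures the basic shape of what we were prototyping.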

We printed copies of everything we were testing, and put them on the wall. And we prepared the links ready for each testing session.

5. Write a script

It’s helpful to have a script for each session. Henry wrote ours, which went roughly like this:

  • what is your role / where do you work - set the scene and get the user talking
  • what do you do to monitor and improve prescribing at the moment - we’re interested in real, current practice
  • are you aware of the site / when did you last use it and what for - this is less awkward than asking “how often do you use the site”!
  • here’s a link to a page/prototype, please talk us through what you’re thinking as you use it - listen carefully
  • how do you think you’d do this task / that task - specific questions to check the user understands the feature
  • any questions for us!

Also, it’s good to reassure the user at the start that you’re testing your design, not their computer skills!

6. Test, test, test

Soon it was time for the first session. It’s best to ask permission to record the call, or take detailed notes - that way you can share raw observations, and refer back to exactly what the user said.

We started with Henry running the calls, then switched roles. I found running the calls the hardest part! Here are some tips that Henry gave us:

  • if the user goes silent for a long time while they’re looking at your test, gently nudge them: “what are you thinking now?”
  • try to explore what the user would really do, not what they might hypothetically do
  • try to get them to be as concrete as possible.

Thirty to sixty minutes between calls is ideal - this gives you breathing space to relax and write down observations.

7. Pink post-its: observations

Everyone got a block of pink post-it notes for jotting down interesting observations and quotes during the call, which we then stuck onto the relevant bit of the prototypes on the wall.

The top tip here is to write down the user’s exact quotes or actions - not your own interpretation of them. It’s reportage, not comment.

For example, we wrote: “Why on earth are they prescribing so much co-amoxiclav? Are they having a lot of dog bites?”, not “The user is surprised”.

8. Yellow post-its: problems

After the first few calls, we reviewed the observations, and looked for themes. We used yellow post-it notes to write down what seemed to be common problems.

It turned out that no-one had any problems understanding the email flow. But there were some things in the dashboards and email text that were confusing, so on day 2 we tried to explore those more.

9. Green post-its: solutions

Once testing was over, we grouped the yellow post-its and tried writing down some “reckons” for solutions on green post-its. Then we discussed as a team which ones we thought would work best.

Some of these were easy fixes: for example, we noticed that no-one read the intro text, and that many people mentioned cost pressures, so we thought shortening the text and highlighting the potential cost savings would help.

Some were longer-term problems: for example, some of our measures for GPs aren’t that meaningful when there are only a few prescriptions of a drug in a month. We decided we need to make better measures and/or do statistical work to sort that out.
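
To make the problem concrete: a practice that writes three prescriptions a month can only score 0%, 33%, 67% or 100% on a proportion measure, so ranking it against national deciles is mostly noise. One crude option (illustrative only - we haven’t decided on an approach, and the threshold here is made up) is to suppress values below a minimum denominator:

    # Illustrative only: suppress a measure when the denominator is too
    # small to be meaningful. The threshold here is made up.
    MIN_PRESCRIPTIONS = 10

    def measure_value(numerator, denominator):
        if denominator < MIN_PRESCRIPTIONS:
            return None  # too noisy to rank against national deciles
        return numerator / denominator

    print(measure_value(2, 3))    # None: 2 of 3 could easily be chance
    print(measure_value(20, 80))  # 0.25: enough volume to interpret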

10. Prioritise your solutions

Finally, we went through the list of problems and solutions, and decided which were the most pressing, and which were the most work to implement. This makes it easier for Anna and Seb to work out what to do next.

We prioritised these by moving post-it notes up and down: the top priorities went highest up! High-tech problems, high-tech solutions.

11. Share your findings

After two days, it was time to share our findings and reckons with Ben, our glorious leader. You can share with stakeholders by showing them the wall, or a brief slide deck.

Now we just need to design, build and iterate the solutions we came up with, but that’s the easy bit, right?

If you’ve found this post useful, a good resource for running user research and testing is the GDS service manual. I’d also recommend Steve Krug’s Rocket Surgery Made Easy.

If you work in prescribing and would like to be part of our next round of user testing, please get in touch - we’d love to hear from you.

Thanks to our testers, who generously gave up their time: Helen, Iain, Janice, Julian, Kamal, Samantha and Susanna.