
Using Skype for Moderated Remote Usability Testing


I used Skype and Ecamm's Call Recorder for Skype to conduct moderated remote usability testing around the world on a clickable Marvel App prototype. Here's how I got on.

I've been working for a Japanese sportswear company that wanted some usability testing done on a new feature they were planning for their fitness app. As the app has a healthy (pun intended) global user base, they were keen to test with users from around the world, which meant remote usability testing on a clickable prototype put together in Marvel App.

When I've conducted moderated remote usability testing in the past I've used screen-sharing services like Webex and GoToMeeting, but I've always found the requirement for the end user to install various plugins troublesome, and the cost (at the time) prohibitive.

My loose set of requirements this time around was:

  • No plugins to install
  • Low cost
  • Reliable
  • Record screen, audio and video of the participant

I settled on Skype in the end as it's pretty ubiquitous these days, and although the app would need installing and setting up, the chances were that a lot of respondents would already have it up and running. Because I had a few thousand potential participants, I took a gamble and was explicit in the screener recruitment survey that we would be using Skype on a desktop or laptop and that this was a requirement. Despite this I still got a 20% response rate, which was better than I normally get for face-to-face interviews (what also helped was a fairly loose set of requirements for participants and running the tests over the weekend).

Skype doesn't have an inbuilt recording capability, but there are a few plugins that add it. I used Call Recorder for Skype by Ecamm and, at $29.95, was pleased with the results.

What went well with the testing:

  •  The low barrier to entry meant I got a good response, and including Skype as a requirement didn't seem to put people off
  •  Not having to mess about with talking people through installing plugins on the day took a lot of the pressure off me and allowed me to focus on the testing
  •  The call and video quality was excellent so I was able to include excerpts in the final presentation deck for the client
  •  Conducting the testing at the weekend meant I got a good response, and because it was remote I could spread it out over a couple of days without having to hire meeting rooms
  •  People seemed more at ease with the test and opened up a lot quicker because they were at home
  • It was cheap to run compared with hiring meeting room space

What went less well:

  •  Skype wasn't quite as flawless as I'd hoped. Two out of the five participants couldn't share their screen without Skype crashing.
  •  I also had the call drop a few times for no particular reason, which was annoying and interrupted the flow

So would I use Skype again? Probably, yes. Despite its flaws it was pretty easy for the participants to use, and although it's no substitute for face-to-face interviews it was still pretty effective and I got lots of good feedback. Conducting it over the weekend wasn't ideal (for me!) but meant I could talk to more people and turn around the test results quickly.

How to use a user centred approach to prioritising features in your site


As part of my work at the BFI I've been thinking about how best we can integrate a more user centred approach to the prioritisation of the user stories in the project backlog. After all, we're designing the site for the user, not for us, so why shouldn't they have a say in what features and pages get built?

One solution to this problem is to use the Kano Model of Customer Satisfaction, which was developed by Professor Noriaki Kano in the 1980s. In a nutshell, Kano says that user preferences for functions or features can be distilled into five categories:

1) Attractive Quality – The 'nice to haves' or delighters: features and functions which add to the experience but are not essential and wouldn't be missed if not included. For example, the little bag of sweets that Firebox include in their parcels.

2) One Dimensional – Features which (when they work and are done well) add a great deal but cause frustration when they do not deliver as promised.

3) Must Be – The givens. These are the things we expect to be provided and which, while they do not add to the user's satisfaction, cause a great deal of frustration when missing or badly implemented.

4) Indifferent – Features which are neither good nor bad and result in neither satisfaction nor dissatisfaction for users.

5) Reverse – The result of assuming that all your users are alike: features whose presence actually puts some users off. For example, a gadget so overloaded with features that it comes at the expense of the user who finds your product overly complicated.

Kano illustrates this best in this diagram:

[Kano Model diagram]

Using the Kano Model in user centred design is nothing new; in fact it was Andrew Harder's excellent talk on Kano, part of his 'User Research as a generative partner with design' session at this year's UX People, that inspired me to look into it further.

We will be using a variation of the model to test users' appetite for the features we are proposing as part of our usability testing sessions. We conduct one round of user testing towards the end of each sprint (I'll talk about our approach to user testing in another post). At the end of the testing session we will ask users to rate the features of the wireframes (a feature could be a page, a section or a widget) and use an aggregate of those ratings as a guide for the Product Owner when prioritising User Stories (features) in the Project Backlog.

The rating question we will use will look something like this:

How would you feel if feature X were included in the new site?

  • I’d really like it
  • I’d expect it
  • I might use it
  • It's unlikely I'll ever use it
  • I really dislike it

These responses are then aggregated according to the persona (we recruit users for testing against personas; I'll talk further about how we use personas in another post). So, for example, if we recruited five people who matched our Nick persona and three of them responded very positively to feature X, the corresponding user story in the backlog might read: "Nick really likes feature X and would like to see it included", or, if we didn't have a clear majority, the story could read: "On the whole Nick likes feature X and would like to see it in the new site".
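To make that aggregation concrete, here's a minimal sketch in Python of how the ratings could be rolled up into a backlog note per persona. The persona, feature name, response counts and the 'majority' threshold are all hypothetical assumptions on my part rather than a description of our actual process:

```python
from collections import Counter

# Kano-style ratings collected at the end of a testing session, grouped by
# the persona each participant was recruited against. The persona name,
# feature name and responses below are hypothetical examples.
responses = {
    ("Nick", "feature X"): [
        "I'd really like it",
        "I'd really like it",
        "I'd really like it",
        "I might use it",
        "I'd expect it",
    ],
}

# Responses treated as "positive" for the backlog note (an assumption on my
# part; you may want to weight "I'd expect it" differently).
POSITIVE = {"I'd really like it", "I'd expect it"}


def backlog_note(persona, feature, ratings):
    """Summarise one persona's ratings for a feature as a short note the
    Product Owner can read alongside the user story."""
    counts = Counter(ratings)
    positive = sum(counts[r] for r in POSITIVE)
    if positive > len(ratings) / 2:  # clear majority in favour
        return f"{persona} really likes {feature} and would like to see it included"
    if positive > 0:  # mixed, but some interest
        return f"On the whole {persona} likes {feature} and would like to see it in the new site"
    return f"{persona} showed little interest in {feature}"


for (persona, feature), ratings in responses.items():
    print(backlog_note(persona, feature, ratings))
```

With the example counts above, a majority of Nick's responses are positive, so the sketch prints the "really likes" wording; fewer positive responses would fall back to the softer "on the whole" phrasing.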

So there you go. This model isn't intended to replace the Product Owner's backlog prioritisation function, but it should act as a sanity check when looking at the user stories.
