Gathering User Feedback to Help Prioritise Page Content
I’ve been working with the National Theatre recently on a redesign of their NT Live website. As part of this project we’ve been looking at the content on the production pages and wrestling with the age-old web design question:
‘What bit goes where on the page and how do we decide which bit is most important?’
I’ve been using this content strategist’s technique to help prioritise page content with content owners for a few years now. By adding in a bit of validation from end users, I’ve found it really helps guide the page content and prioritisation discussions, which has been invaluable, so I thought I’d share it with you.
Firstly, break all the content up into modules, give each one a label (e.g. title, overview, booking) and write them out on individual pieces of paper.
Next, gather the project team together and set them a challenge: prioritise the labels in order of importance for the end user to complete the primary goal of the page. For example, for the National Theatre the challenge was: ‘If the primary goal of this page is to persuade and enable users to buy tickets, put the labels in order of importance to them.’
Once we have the team’s view on page prioritisation, we can see what the users think and whether they agree.
To do this we go through a similar exercise and ask users to prioritise the labels according to their needs when it comes to completing the primary page function. Anything they feel doesn’t help them achieve their goal they can discard. When they’ve finished, take a quick picture of the labels for analysis later.
When all the results are in, create a spreadsheet with the labels in one column and a column for each user you tested with. Then refer back to your testing results and, for User 1, give each of their labels a score in descending order: the label they placed first gets the highest score, and work your way down the list from there. Do the same for the remaining users in their columns (if a user discarded a label, don’t give it a score). Next, add up the results to get an aggregate score for each label. You should then be able to see which labels the users ranked as most important (the highest scores) right through to the least important (the lowest scores).
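If you’d rather skip the spreadsheet, the same scoring can be done in a few lines of code. This is just a sketch: the labels and user rankings below are made-up examples, and it assumes the simple scheme described above, where the top-ranked label scores as many points as there are labels and discarded labels score nothing.

```python
# Hypothetical content-module labels for a production page.
LABELS = ["title", "overview", "booking", "trailer", "reviews"]

# Each user's labels in the order they placed them (most important first).
# Labels a user discarded simply don't appear in their list.
rankings = {
    "User 1": ["booking", "title", "overview", "trailer"],
    "User 2": ["title", "booking", "reviews", "overview", "trailer"],
    "User 3": ["booking", "overview", "title"],
}

def score_labels(labels, rankings):
    """Top-ranked label gets len(labels) points, the next one point fewer,
    and so on; discarded labels score nothing for that user."""
    totals = {label: 0 for label in labels}
    for ranked in rankings.values():
        for position, label in enumerate(ranked):
            totals[label] += len(labels) - position
    # Highest aggregate score first.
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for label, total in score_labels(LABELS, rankings):
    print(f"{label}: {total}")
```

With these example rankings, ‘booking’ comes out on top, which is the kind of ordered list you’d then compare against the team’s view.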
Once we have the users’ scores, we can compare what the users think is most important with what the team thinks and see if there are any discrepancies.
It’s not an exact science, but it should be enough to give your page content prioritisation conversations a bit of grounding. It’s a quick exercise which I normally add onto the end of a usability testing session, but you may want to put it to a wider audience as part of a card sorting exercise.
If you do give this method a try please do let me know how you get on and if you have any suggestions to improve it.