
5 Things People Get Wrong When Talking to Users

08.17.2009

I was talking to an engineer the other day who was describing his startup’s first experience in trying to get user feedback about their new product. Since it was a small company and the product didn’t exist in production yet, their goals for gathering user feedback were:

  • Get information about whether people thought the product was a good idea.
  • Identify potential customer types, both for marketing and further research purposes.
  • Talk to as many potential users as possible to get a broad range of feedback.
  • Keep it as cheap as possible!

He had, unsurprisingly, a number of stories about mistakes they had made and lessons they’d learned during the process of talking to dozens of people. As he was sharing the stories with me, the thought that kept going through my head was, “OF COURSE that didn’t work! Why didn’t you [fill in the blank]?” Obviously, the reason he had to learn all this from scratch was that he hadn’t moderated and viewed hundreds of usability sessions or had any training in appropriate user interview techniques. Many of the things that user researchers take for granted were brand new to him. Having spoken with many other people at small companies with almost non-existent research budgets, I can tell you that this is not an isolated incident. While it’s wonderful that more companies are taking user research seriously and understanding how valuable talking to users can be, it seems like people are relearning the same lessons over and over.

To help others who don’t have a user experience background avoid those same mistakes, I’ve compiled a list of 5 things you’re almost certainly doing wrong if you’re trying to get customer feedback without much experience. Even if you’ve been talking to users for years, you might still be doing these things, since I’ve seen these mistakes made by people who really should know better. Of course, this list is not exhaustive. You could be making dozens of other mistakes, for all I know! But just fixing these few small problems will dramatically increase the quality of your user feedback, regardless of the type of research you’re doing.

Don’t give a guided tour

One of the most common problems I’ve seen in customer interviews is inexperienced moderators wanting to give way too much information about the product up front. Whether they’re trying to show off the product or trying to “help” the user not get lost, they start the test by launching into a long description of what the product is, who it’s for, what problems it’s trying to solve, and all the cool features it has. At the end of the tour, they wrap up with a question like, “So, do you think you would use this product to solve this exact problem that I told you about?” Is there any other possible answer than, “ummm…sure?”

Instead of the guided tour, start by letting the user explore a bit on his own. Then, give the user as little background information as possible to complete a task. For example, to test the cool new product we worked on for Superfish, I might give them a scenario they can relate to, like, “You are shopping online for a new pair of pants to wear to work, and somebody tells you about this new product that might help. You install the product as a plug-in to Firefox and start shopping. Show me what you’d do to find that pair of pants.” The only information I’ve given the user is stuff they probably would have figured out if they’d found the product on their own and installed it themselves. I leave it up to them to figure out what Superfish is, how it works, and whether or not it solves a problem that they have.

Shut up, already

Remember, while you may have been staring at this design for weeks or months, this may be the first time your participant has even heard of your product. When you first share a screen or present a task, you may want to immediately start quizzing the participant about it. Resist that impulse for a few minutes! Give people a chance to get their bearings and start to notice things on their own. There will be plenty of time to have a conversation with the person after they’ve become a little more comfortable with the product, and you’ll get more in-depth comments than if you put them on the spot immediately.

Ask open-ended questions

When you do start to ask questions, never give the participant a chance to simply answer yes or no. The idea here is to ask questions that start a discussion.

These questions are bad for starting a discussion:

  • “Do you think this is cool?”
  • “Was that easy to use?”

These questions are much better:

  • “What do you think of this?”
  • “How’d that go?”

The broader and more open-ended you keep your questions, the less likely you are to lead the user, and the more likely you are to get interesting answers to questions you didn’t even think to ask.

Follow up

This conversation happens at least a dozen times in every test:
Me: What did you think about that?
User: It was cool.
Me: WHAT WAS COOL ABOUT IT?
User: [something that’s actually interesting and helpful.]

Study participants will often respond to questions with words that describe their feelings about the product but that don’t get at why they might feel that way. Words like “cool,” “intuitive,” “fun,” and “confusing” are helpful, but it’s more helpful to know what it was about the product that elicited that user reaction. Don’t assume you know what makes a product cool!

Let the user fail

This can be painful, I know. Especially if it’s your design or product that’s failing. I’ve had engineers observing study sessions grab the mouse and show the participant exactly what to do at the first sign of hesitation. But the problem is, you’re not testing to see if somebody can be SHOWN how to use the product. You’re testing to see if a person can FIGURE OUT how to use the product. And frequently, you learn the most from failures. When four out of four participants fail a task by attempting it in exactly the same way, maybe that means the product should change so that the task works the way they all expected.

Also, just because a participant fails to perform a task immediately doesn’t mean that they won’t discover the right answer with a little exploration. Watching where they explore first can be incredibly helpful in understanding the participant’s mental model of the application. So let them fail for a while, and then give them a small hint to help them toward their goal. If they still don’t get it, you can keep giving them stronger hints until they’ve completed the task.
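
One way to take the guesswork out of hinting (and to compare sessions afterward) is to decide on your hints ahead of time, so every participant gets the same hints in the same order, and to note how many hints each person needed. Here’s a minimal sketch in Python of what I mean; the hint text and session results are entirely made up:

    # A graded hint schedule: every participant gets the same hints in the
    # same order, and the moderator notes how many hints each person needed.
    HINTS = [
        "Is there anywhere else on the page you might look?",       # gentle nudge
        "Take another look at the toolbar at the top.",             # stronger hint
        "What do you think the icon next to the search box does?",  # near-giveaway
    ]

    def average_hints_needed(hints_used):
        """hints_used maps a participant ID to the number of hints they needed."""
        return sum(hints_used.values()) / len(hints_used)

    # Four hypothetical sessions: P1 succeeded unaided, P3 needed all three hints.
    print(average_hints_needed({"P1": 0, "P2": 2, "P3": 3, "P4": 1}))  # 1.5

Even a lightweight tally like this makes it obvious which tasks people simply can’t complete without help.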

Are those all the tricks to a successful user study? Well, no. But they’re solutions to mistakes that get made over and over, especially by people without much experience or training in talking to users, and they’ll help you get much better information than you would otherwise. Now get out there and start talking to your users!


6 Comments

  1. Some ‘cool’ advice here, Laura. Thanks :)

    posted by Andrew at 3:53 am on 08.19.09
  2. Keeping my mouth shut was the hardest skill to learn. Next hardest was letting the user fail on his/her own. Good article, Laura. Thanks!

    posted by Bill at 10:56 am on 09.17.09
  3. Thanks, Bill! It’s amazing how hard it is NOT to say anything, isn’t it? When working with observers in the room, I always need to spend some time with new observers before each session and explain that they must not, under any circumstances, grab the mouse from the participant or explain in detail what the person is doing wrong. I learned that the hard way.

    posted by Laura Klein at 12:50 pm on 09.17.09
  4. Some really good points, thanks. I’ve marked this up for new facilitators to take on board.
    Another related one about scenarios is unintentionally leading participants. For example, don’t ask “How would you find a Christmas gift for your pet dog Fido?” when “gifts for pets” is one of the options presented. Strive to avoid users latching onto any words you have used. Sounds obvious, but I’ve seen it happen.

    posted by Siherd at 7:35 am on 10.22.09
  5. Like you, I’ve observed all these behaviors on the part of new facilitators, and I agree with you on all of them, except I have a slightly different take on the last. I let users fail, wait for a while, and hint around; then I ask them to narrate the screen to me – right to left and top to bottom – which I find often jogs their minds into an idea they might not have had otherwise. But I believe it’s still important to note that point as a problem and to follow up with them after the task is over about what happened there: what did they expect, what did they not understand, etc. Also, in cases where the tester is completely stymied and unable to find a way out, I will take them to the next step. I do that because I now know that there’s a problem at point A (and, as in the first instance, I’ll follow up on it later), but I still need the information about where the problems are in the rest of the interaction. This has worked well for me as a method for getting as much information as possible from each test.

    posted by Katie Albers at 3:56 pm on 01.15.10
  6. Hi Katie,

    Excellent suggestions for recovering from complete participant failure. I do similar things once the user is well and truly lost. I’ve often found that inexperienced moderators err on the side of not letting the user fail for long enough, which is why I was so adamant about not helping out.

    But, of course, you’re right. At some point, the test must go on, and that often includes prompting the participant in various ways.

    On tasks where I expect people might fail or where people have failed in the past, I’ll frequently have a set of prompts that I use in a particular order so that everybody gets the same hints. Then I can figure out how much help people needed on average.

    posted by Laura Klein at 4:18 pm on 01.15.10
