Part 2: The PRACTICING Design Thinkers’ Top Ten FAQs

| 09.6.2016


Part 1 of this series dealt with overall questions that people ask me about design thinking. This second part deals with questions I hear about specific parts of the process after someone has tried it on their own. If you’re not sure what design thinking is, go back to Part 1. If you’ve learned about it, tried it, and are ready to get into specifics, read on.

1. How many people should I interview for the initial user research?

Start with 4-6 and move up from there. Interview more people if you have different kinds of target users, like new moms and empty nesters. You’ll find that as you get toward 10-12, the interviews become redundant; at that point, split your research into two studies: an initial study and a follow-on to answer new questions. A common mistake is interviewing too many people, which leads to unwieldy amounts of data. Start small.

2. I showed someone a demo of my stuff and they commented on it. Is that user research?

No, this is showing someone a demo of your work. User research involves less of you showing and more of you watching people actually using your stuff and asking questions.

3. I’m not a storyteller, but I hear that storytelling is important for design thinking. Why?

If you have ever told a friend a story about your day, you are a storyteller. Storytelling is a key to the empathy building that design thinking demands. People are hardwired to think in terms of stories, and only by hearing specific stories about problems, not generalizations, are we empowered to solve them. When conducting and sharing research, focus on the stories, not the generalizations of what is going on. Think about how to solve the specific problems and address the specific emotions that are emblematic in the stories. If you think about how to solve the problem in general, the solution you come up with will be as general and uninspired as the problem statement.

4. What’s next after I do some observations?

If you have done a strong job with your initial observations, by conducting solid needfinding interviews and unpacking insights, the next steps, ideation and prototyping, will flow easily from there. If you have not done a strong job, your path will seem fuzzy. You’ll need to do more research or reach out for help analyzing your research. Users will not give you the answer directly — your analysis will. Check out techniques like affinity diagrams and journey maps as tools for unpacking data to help you get the most out of your research.

5. What’s an insight, from a design thinking perspective?

An insight is a new perspective on the problem you are solving, not something you could have come up with sitting in your room, thinking about the problem really hard. For example, if I am working on a problem related to health, an insight would not be: people would like to talk to their doctor about small medical issues from home instead of coming in, because coming in is too time-consuming. This is obvious. Instead, an insight about doctors and patients would be some new story or statement about what is really happening for them that makes you think, “hmmm…I hadn’t thought about it that way before.” An insight is something you can preface with: “I was amazed to discover that….” It can be very small, but it can’t be super obvious. For example, a good small insight you might have uncovered after interviewing doctors is: “I was amazed to discover that doctors spend more time on remote video appointments with patients than on in-person appointments at the clinic, because it takes so long to get the technology working for each call.”

6. How do I know if I have a good “how might we” question for my brainstorm?

“How might we” (HMW) questions are a big stumbling block for new design thinkers because they must have the right level of granularity — not so broad that they are unsolvable and not so narrow that they describe the solution. The best “how might we” questions include your insight as part of the statement — so if your insight is weak, so is your HMW. A good way to check if your HMW question is good is if it has at least three elements from the “Who, What, When, Where, Why” set. For example:

Too broad: “How might we help people find health related information?” (Who are the people? Why are they looking for it? When are they looking for it?)

Too narrow: “How might we help people with diabetes find online information about diabetes classes?” (describes the solution in the question)

About right: “How might we help people newly diagnosed with diabetes feel supported as they have to make unfamiliar lifestyle changes?” (clear who, when, and what)

7. When I do brainstorms with my group, they always judge the ideas. What do I do?

Brainstorms often fail because they require you to set aside the norms we usually follow when sharing ideas and adopt a new set of rules that frees your thinking. That’s hard to do when someone in the room, like your manager, the in-house skeptic, or the CEO, is either making faces at bad ideas or sitting across the room with their arms crossed. A few ideas for combatting these brainstorm killers:

  1. Start with a review of the rules for the brainstorm even if everyone knows them. We do this every time. Read them out loud and post them up.
  2. Bring in a new person who is not a part of your team to moderate. Have them enforce the rules. This can be someone outside of your organization, or inside but not a part of your team.
  3. Go someplace new. New places mean new behaviors.
  4. Start by writing down ideas independently on pieces of paper, and then make sure everyone shares their ideas, with only positive comments after each one.

8. I’m stuck. How do I know this process is gonna work?

As you go through the process, there always comes a time when you become convinced that the problem is too hard or unsolvable, or that the process is not deterministic enough to hint at the solution early on. My students often get stuck because they don’t know where things are heading. Design thinking forces you to deal with ambiguity. Suspend your disbelief and continue to follow the process … you will get to the results.

9. How do I get better at design thinking?

Practice, practice, practice. Seek out coaching. Invite talented design thinkers to help you unpack your user research, brainstorm solutions, and figure out ways to test the efficacy of your ideas. Design thinking is about collaboration. Go collaborate. Ask experienced design thinkers to review your plans and give you feedback. Knowing the rules is just the beginning. You must get coached and practice to become awesome.

10. How do I get others to buy into following this process?

First of all, start by just trying out the design thinking mindset. You can continue following your current processes, but try out parts of the mindset. For example, ask your team in a meeting to avoid judging any idea. If they don’t like an idea, ask them to build on it instead. Or, invite someone new to a meeting who isn’t usually on your team to try out radical collaboration. Or, when someone comes up with an idea, ask them to draw it or prototype it in some way. Just try out parts of the mindset and see what happens. Then, try the process on something small. Use it to solve a small problem at work, like not enough seating at lunch, and amaze others with the success. Get others to learn by involving them in the doing. Baby steps.

Or, hire someone like Sliced Bread to show you the process and inspire others to buy in.


Top 10 Design Thinking FAQs

| 01.27.2016


Design thinking and Sliced Bread go back about 14 years. But, for the last five, I’ve been teaching design thinking at Stanford and, more recently, in the Computer Science department. The same questions about design thinking keep cropping up from clients and students, so I thought I’d set the story straight.

1. What is design thinking? 

Design thinking is a human-centered process for solving problems that results in effective, innovative solutions.

It includes a series of specific steps that must be done in a specific order, plus a set of core principles. The steps are observations, insights, ideas, and prototypes, which are followed cyclically. The principles are empathy, thinking by doing, iteration, and collaboration.

It is a way to radically increase the likelihood that you are going to have success when you’re trying to solve a problem or do something new.

2. Can you describe the steps in the process in detail?

There are many different diagrams of the design thinking process, but our favorite displays it as a circular, iterative workflow that starts at the top left:

Let’s break down the steps:


Design Thinking diagram based on an original design by Michael Barry.


Observations

This step is the foundation of design thinking: user research. Go out and understand what is happening with the problem you are trying to solve by observing and interviewing users. Gather data about the problem by understanding the human stories. The first time you cycle through this quadrant, the type of user research you’re doing is called needfinding, because it’s about understanding users’ needs. This is also the time to interview all the stakeholders involved in the problem — i.e., not just those who have the problem, but the folks who understand the business opportunity and the technology options. On subsequent passes through this quadrant, you’ll do different kinds of observation with your users, like rapid experimentation, usability testing, and co-creation sessions.


Insights

Once you’ve completed your observations, it’s time to unpack what you learned to find the insights that will drive the rest of the process. Initially, your insights will be focused on defining what you are solving for. What stories did you hear in your research that really stick out? What needs did you uncover? What frame will you take on the problem space? In subsequent iterations, insights will be focused on teasing out what you learned from user testing and rapid experimentation to evolve your idea or take it in a new direction.


Ideas

In this step, you take the insights you’ve gathered and use them to seed a brainstorm. In design thinking, brainstorming is taken to a new level through structured rules that encourage creativity and through the link to real user needs. This is also one of the best steps in which to incorporate radical collaboration, bringing in people from different backgrounds to help brainstorm solutions from new perspectives.


Prototypes

The final step in the design thinking cycle is about thinking by doing. Stop talking about the ideas and actually make something that people can evaluate and discuss! You might sketch a workflow, build a model, or create an HTML wireframe — it all depends on what questions you are answering. You might prototype to explore the idea space for yourself, to test some aspect of the idea in the next observation cycle with users, or to convince others to fund the idea. As you move through iterative cycles, the prototypes will become more and more refined, culminating in the final solution.

Those are the four steps. Now lather, rinse, repeat.

3. Why is design thinking so effective?

Two reasons:

One, design thinking has a laser focus on the actual, human roots of a given problem. By understanding and empathizing with the distinct human stories underlying a problem, you are able to solve for real needs from the beginning. And, by remaining in touch with users throughout the design cycles, you can stop guessing and make decisions based on actual human feedback.

Two, design thinking provides a defined, replicable approach to a creative process. When followed correctly by skilled practitioners, it virtually guarantees an effective, innovative solution, whether the problem is simple or fabulously complex and ill-defined. This has been proven many times over by studies at Stanford, by well-known companies, and in our own work at Sliced Bread. We don’t want to take on a problem without a guarantee that we will get somewhere great at the end. Design thinking gives us the confidence to offer that kind of guarantee.

4. Is design thinking the only way to solve problems and be innovative?

Of course not. There are many ways to solve problems including sitting at your desk and thinking really hard. This method happens to be extremely effective so we are going with it.

5. What kinds of problems can it be applied towards?

You can use design thinking to solve ANY problem. This includes business problems and personal problems. I used design thinking to help my client think through the process for server installation, to help my child deal with a mean kid at school, and to plan a party. It’s the same process…only the content differs.

6. Who can do it?

Anybody can learn the design thinking process and use it to solve problems. However, at the start, not everyone can do it well. For example, little kids can learn the rules and techniques of soccer, but it’s only through a lot of practice that someone becomes a soccer superstar like David Beckham. Just like anything else, the best way to improve at design thinking is through practice and coaching. Getting coached by an expert mentor is the best way to learn. Reading about it is not going to make you better…it will just help you understand the rules of the game.

7. Can I just do parts of it and be successful?

Sure, you can use the tools individually and be partially successful. However, if you are solving a problem soup to nuts, you need to follow the process soup to nuts.

8. Can you do design thinking without initial user research?


9. How do I get started? 

I recommend taking a class or hiring someone (like us) to help you do it. You can read a book to get an introduction, but many of the skills, like how to interview people effectively or how to pull insights out of your interviews, can’t be learned without watching others model the right techniques and without strong coaching. If there are no workshops locally, find an online class that at least gives you some examples of how it works that you can watch. Or, better yet, hire someone who can walk you through the process the first time and teach you along the way. If you hire someone who is going to follow the process to solve your problem, make sure they emphasize the need for you to be involved…otherwise they are breaking one of the rules of design thinking: collaboration.

10. How did you get so good at it?

Two reasons:

One, I have been doing it for a long time and have been working with the world’s best coaches: folks like Michael Barry and Pam Hinds, who also teach at Stanford; coworkers like Mia Silverman, Jenny Mailhot, Kim Ladin, and Molly Wilson, who are fabulous design thinkers; and astute clients who ask us why we do what we do and force us to explain our processes. All of this makes you awesome.

Two, I believe in the process. I believe in it so much that I am willing to follow it in many circumstances because I can see the power that it has. When a client tells us they don’t have time for user research, we don’t do the project because we have no way of guaranteeing the results. I wouldn’t want to spend a lot of money on something where I’m not sure if it’s going to work and so we are not going to accept our client’s money under those circumstances.

For more FAQs on design thinking, check out Part 2: Ten more FAQs for the PRACTICING design thinker. 

Innovative Health Technology Launches at Health 2.0, User Experience Still Lacking

| 10.27.2015

Our favorite solutions from this year’s Health 2.0 conference Launch session all covered very different angles of the healthcare industry. The following solutions, which launched at Health 2.0, are ones that, although their designs still need refining, we think have strong potential to be successful based on their innovative concepts.

  • Gliimpse – aggregates all your healthcare services (primary care physician, pharmacy, dentist, O.B., etc.) into a single health record. Think of it like a Mint for healthcare, a solution that gives you a comprehensive view of your health history. You can view results from tests in visual format and drill down to see the history. You can take notes on things you want to discuss with your doctor, and upload images or documents and then share your record with your doctor or a family member. Although we found their UI still needed work, the concept is one that we think the market is ready for.
  • Vivor – helps people with cancer find and connect with financial aid programs in order to fund cancer treatments. As medical care costs skyrocket and fewer people are able to afford expensive treatments, we think this service will give some patients peace of mind that financial help is out there, at their fingertips.
  • Sensentia – How many times have you had to scroll through endless PDF files to figure out if a treatment was included in your health plan? Sensentia is an interactive tool that not only gives a natural language explanation of your insurance benefits but also tracks your usage and deductibles. Users can search in natural terms, like “Is physiotherapy covered in my plan?” and get a relevant answer. We think this type of self-service solution will appeal to users and can succeed as long as the user interaction is smooth.
  • MedWand – In the realm of health-on-demand apps, we think MedWand is an interesting one to watch. Patients can communicate with their providers via video and, with the help of a device called MedWand that they keep at home, doctors can take their patients’ vitals and assess their eyes, ears, nose, and throat remotely. The key to their success will be to move away from focusing only on the hardware and to become a viable option for providers, who are still finding virtual visits less efficient. Nailing the UX for patients so that they use the device correctly will also be key.

In spite of these very innovative ideas, as Dr. Robert Wachter pointed out during his presentation, the lack of user centered design is the biggest issue in healthcare tech at the moment. Although some players like Accordion Health and Athena Health clearly get the importance of design, and it shows, most of the healthcare tech industry has some catching up to do.

Top Healthcare Tech Trends From Health 2.0 2015

| 10.21.2015

A team from Sliced Bread attended Health 2.0 2015 earlier this month, and the first thing that was obvious was a decreased presence of wearables and devices compared with last year’s conference. This year was more about innovative ideas for leveraging the data that comes from all these new devices, as well as consumer solutions that offer patients health-on-demand and solutions for providers that aim to alleviate the hassle of administrative tasks, such as entering information into medical records.

Some of the trends we observed at this year’s Health 2.0:

  • Big data…what now?

Big data was the big craze last year: at Health 2.0 2014, companies were focused on the technical discussions around how to handle all this new data and integrate it with existing healthcare systems. Providers were not enthusiastic about the prospect of more patient data, but tech companies were up for the challenge and went ahead and built a myriad of apps, such as Ayasdi, Sentrian, and Healthline, to analyze all of the healthcare data out there. The technical hurdles have indeed been overcome; however, we feel that the user’s perspective was missing from these solutions. What are users supposed to do with all this new, aggregated data? Accordion Health was one of the rare standouts in this space, taking a very user-focused approach to data analysis.

Also, the availability of new data sources raises a number of design (and ethical) questions. When a patient’s data from a wearable device is added to the chart, the patient will expect caregivers to be able to digest and make sense of all this new data alongside, say, an EKG. But who is responsible for this data, and who is looking at it? Should doctors be trained to analyze it? Is it too much information? Are some apps just collecting data for data’s sake?

  • Health-on-demand

This year we saw a surge in the number of apps that bring the care provider to you, either via text or video, like MedWand, Doctor on Demand, and VSee. While these services address a real patient need, convenience and timely access to care, the challenge ahead is on the provider side: providers report that virtual visits are, in fact, taking longer than office visits. With a 30% shortfall in primary care providers these days, these services could be putting stress on the system instead of relieving it.

The question is: why are remote visits taking longer, and how can they be improved? We also haven’t seen any data on adoption rates of these services, nor on how effective they are at addressing patient needs, aside from the increased convenience.

  • Provider focused apps

At a time when providers are spending 44% of their time recording notes and data in EHRs, it is high time that people look more closely at this problem. One band-aid solution we heard about was using Google Glass to record patient visits, which are then entered into the EHR by trained transcribers; Augmedix was a good example of this.

However, the real culprit here is horrible EHR UIs. In fact, Dr. Robert Wachter, the Day 2 keynote speaker, noted that one hospital advertising for a doctor listed not having an EHR as a value-add. A couple of our favorite EHRs at Health 2.0 2015 were EMA from Modern Medicine, an iPad app for visually entering patient data during visits, and Athena Health’s new secure texting app integrated into its Clinicals product. However, most of the products we saw demoed had so much potential to be better. The question here is when the time spent on bad technology interactions will become so egregious that mainstream companies like Epic and eClinicalWorks (who, by the way, demoed a new UI that was “eh” at best) will be forced to spend money on creating a great user experience because their customer base has rebelled.

Overall, while some companies have identified compelling solutions to real problems in the healthcare industry, we did leave this conference with the unsettling feeling that healthcare doesn’t get UX…yet!

User experience design, a practice that has been around for decades, applies research methodologies to figure out how best to design products that users will actually use: a simple but powerful concept. There are challenges, but I am optimistic that things are changing and that next-gen solutions will put the needs of patients and providers at the center of their products.


Read more on our Health 2.0 takeaways: Innovative Health Technology Launches at Health 2.0, User Experience Still Lacking


Finding the User Testing Sweet Spot

| 04.10.2014

At Sliced Bread, we use a technique called Fast Insight Testing to get lightweight feedback as we prototype. Compared to a lot of user testing, it’s pretty unscripted and casual – not to mention quick. We’re happy to get a rant or a rave, even if it means we’re deviating a bit from our plan.

But we don’t just let people run their mouths about whatever they feel like talking about. That’s not a user test – that’s a bull session.

So here are two of my favorite techniques for striking that balance between “customer survey” and “psychotherapy session.”

1. Let them know what kind of feedback you want.

This sounds obvious, but it’s surprisingly easy to forget. Most people have filled out forms, surveys, and questionnaires, but less scripted research is probably unfamiliar territory. Participants are generally eager to please, but they do need to know what sort of responses you’re looking for.

We begin Fast Insight tests by telling people we are interested in their unvarnished, unfiltered personal opinion. We’ll often add “I didn’t design this, so don’t worry about hurting my feelings.”

Okay, from time to time that’s a little white lie. We’re a small group, and we all do both research and design, sometimes on the same project. But it gets our message across: don’t hold back. (And it reminds us that, even if we did design the prototype, we need to let go of our attachments to it.)

2. Ask open-ended questions about specific things.

If your interviewees seem confused or unfocused, it doesn’t mean they don’t have opinions – they just may not know where to start.

The combination of asking an open-ended question, but asking it about a very specific thing, can work wonders.

Here are a few examples:

• “See that text at the bottom of the page – what do you think of it?”

• “What are your feelings about the sidebar?”

• “Tell me about the sliders on the left.”

As people are talking, we’ll frequently interject with follow-up questions. Asking “why?” over and over gets old (and can make you feel like an overly inquisitive toddler), so try one of these alternatives:

• “Tell me more about that.”

• “How does that make you feel?”

• “What makes you think/feel that?”

• “What’s going on with that?”

• “What is that like for you?”

• “I’d love to hear more about that.”

• “I’m curious about why that is.”

The reason this works so well is that it gives your users two important elements at the same time: a relevant starting point (the specific element you’re asking about) and a license to be honest and casual (the open-ended question).

Used together, these two techniques help me keep user tests focused but friendly. Give them a try, and let us know how it goes!

Which Metrics Equal Happy Users?

| 12.3.2009

One of the greatest tools available to me as an interaction designer is the ability to see real metrics. I’m guessing that’s surprising to some people. After all, many people still think that design all happens before a product ever gets into the hands of users, so how could I possibly benefit from finding out what users are actually doing with my products?

Well, for one thing, I believe that design should continue for as long as a product is being used by or sold to customers. It’s an iterative process, and there’s nothing that gives me quicker, more accurate insight into how a new product version or feature is performing than looking at user metrics.

But there’s something that I, as a user advocate, care about quite a lot that is really very hard to measure accurately. I care about User Happiness. Now, I don’t necessarily care about it for some vague, good karma reason. I care because I think that happy users are retained users and, often, paying users. I believe that happy users tell their friends about my product and reduce my acquisition costs. I truly believe that happy users can earn money for my product.

So, how can I tell whether my users are happy? You know, without talking to every single one of them?

Although I think that happy users can equal more registrations, more revenue, and more retention, I don’t believe the reverse holds. In other words, there are all sorts of things I can do to retain customers or get more money out of them that don’t actually make them happy. Here are a few of the important business metrics you might be tempted to use as shorthand for customer happiness, and why each one doesn’t always mean what you hope:


Retention

An increase in retention numbers seems like a good indication that your customers are happy. After all, happier customers stay longer, right?

But, do you mean retention or forced retention? For example, I can artificially increase my retention numbers by locking new users into a long contract, and that’s going to keep them with me for a while. Once that contract’s up, they are free to move wherever they like, and then I need to acquire a new customer to replace them. And, if my contract is longer than my competitors’, it can scare off new users.

Also, the retention metric is easy to affect with switching barriers, which may increase the number of months I keep a customer while making them less happy. Of course, if those switching barriers are removed for any reason – for example, cell phone number portability – I can lose my hold over long-time customers.

While retention can be an indicator of happy customers, increasing retention by any means necessary doesn’t necessarily make your customers happier.
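To make the retention numbers concrete, here is a minimal sketch of a cohort retention curve: the fraction of one signup cohort still active in each month after signup. The function and data shapes are illustrative assumptions, not from this post.

```python
def retention_curve(cohort, active_by_month):
    """Fraction of a signup cohort still active in each month after signup.

    cohort: set of user ids who signed up in month 0.
    active_by_month: list of sets of user ids active in months 0, 1, 2, ...
    Returns one fraction per month, e.g. [1.0, 0.75, 0.5, ...].
    """
    if not cohort:
        return []
    # Intersect each month's active users with the original cohort so that
    # later signups don't inflate the curve.
    return [len(cohort & active) / len(cohort) for active in active_by_month]
```

Comparing curves across cohorts (say, before and after introducing a long contract) is one way to separate organic retention from forced retention: lock-in props up the early months but shows a cliff when the contract ends.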


Revenue

Revenue’s another metric that seems like it would point to happy customers. Increased revenue means people are spending more, which means they like your service!

There are all sorts of ways I can increase my revenue without making my customers happier. For example, I can rope them into paying for things they didn’t ask for or use deceptive strategies to get them to sign up for expensive subscriptions. This can work in the short term, but it’s likely to make some customers very unhappy, and maybe make them ex-customers in the long run.

Revenue is also tricky to judge for free or ad-supported products. Again, you can boost ad revenue on a site simply by piling more ads onto a page, but that doesn’t necessarily enhance your users’ experience or happiness.

While increased revenue may indicate that people are spending more because they find your product more appealing, it can also be caused by sacrificing long term revenue for short term gains.

NPS – Net Promoter Score

The net promoter score is a measure of how many of your users would recommend your product to a friend. It’s actually a pretty good measure of customer happiness, but the problem is that it can be tricky to gauge accurately. It generally needs to be obtained through surveys and customer contact rather than simple analytics, so it suffers from relying on self-reported data and small sample sizes. Also, it tends to be skewed in favor of the type of people who answer surveys and polls, which may or may not be representative of your customer base.

While NPS may be the best indicator of customer happiness, it can be difficult to collect accurately. Unless your sample size is quite large, the variability from week to week can make it tough to see smaller changes that may warn of a coming trend.
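The NPS arithmetic itself is simple; it’s the sampling that is hard. A sketch, assuming the standard 0-10 “would you recommend us?” scale:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 survey ratings.

    Promoters rate 9-10, detractors 0-6; 7-8 are passives and count
    only in the denominator. NPS = %promoters - %detractors.
    """
    if not scores:
        raise ValueError("need at least one survey response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)
```

With only a handful of responses, a single rating swings the score by many points, which is exactly the week-to-week variability problem described above.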

Conversion to Paying

For products using the freemium or browsing model, this can be a useful metric, since it lets you know that people like your free offering enough to pay for it. However, it can take a while to collect the data after you make a change to your product, because you have to wait for enough new users to convert to payers.

Also, it doesn’t work well on ad-supported products or products that require payment upfront.

Most importantly, it doesn’t let you know how happy your paying customers are, since they’ve already converted.

Conversion to Paying can be useful, but it is limited to freemium or browsing models, and it tends to skew toward measuring the free part of the product rather than the paid product.
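One reason this metric is slow: you have to pick a conversion window and wait for it to fully elapse before a cohort’s number is final. A sketch of the calculation, with illustrative names and dates (not from this post):

```python
from datetime import date

def cohort_conversion(signups, first_payments, window_days=30):
    """Share of a signup cohort that paid within `window_days` of signing up.

    signups: {user_id: signup date}
    first_payments: {user_id: date of first payment}; users who never
    paid are simply absent from this dict.
    """
    if not signups:
        return 0.0
    converted = sum(
        1
        for user, signed_up in signups.items()
        if user in first_payments
        and (first_payments[user] - signed_up).days <= window_days
    )
    return converted / len(signups)
```

Note that the number for a cohort only stabilizes `window_days` after the last signup in it, which is why product changes take a while to show up here.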


Engagement

Engagement is an interesting metric to study, since it tells me how soon and how often users are electing to come back to interact with my product, and how long they’re spending. This can definitely be one of the indicators of customer happiness for ecommerce, social networking, or gaming products that want to maximize the time spent by each user. However, increasing engagement for a utility product, like processing payroll or managing personal information, might actually be an indicator that users are being forced to do more work than they’d like.

Also, engagement is one of the easiest metrics to manipulate in the short run. One-time efforts like marketing campaigns, special offers, or prize giveaways can temporarily increase engagement, but unless they’re sustainable and cost-effective, they’re not going to contribute to the long-term happiness of your customers.

For example, one company I worked with tried inflating their engagement numbers by offering prizes for coming back repeatedly for the first few days. While this did get people to return after their first visit, it didn’t actually have any effect on long term user happiness or adoption rates.

Engagement can be one factor in determining customer happiness, but this may not apply if you don’t have an entertainment or shopping product. Also, make sure your engagement numbers are being driven by actual customer enjoyment of your product and not by artificial tricks.
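One way to separate sustained engagement from a one-day promotional spike is a DAU/MAU-style stickiness ratio: average daily actives over a window divided by unique actives in that window. A sketch, assuming a hypothetical event log of `(user_id, date)` activity records:

```python
from collections import defaultdict
from datetime import date, timedelta

def stickiness(events, as_of, window=30):
    """DAU/MAU-style engagement: average daily actives over the window
    divided by unique actives in the window.

    A sustained habit raises this ratio; a one-off prize giveaway spikes
    a single day's actives without moving it much.
    """
    start = as_of - timedelta(days=window - 1)
    active_by_day = defaultdict(set)
    for user, day in events:
        if start <= day <= as_of:
            active_by_day[day].add(user)
    monthly_active = set().union(*active_by_day.values()) if active_by_day else set()
    if not monthly_active:
        return 0.0
    avg_dau = sum(len(u) for u in active_by_day.values()) / window
    return avg_dau / len(monthly_active)

# User "a" returns every day; user "b" shows up once
events = [("a", date(2009, 11, d)) for d in range(1, 6)] + [("b", date(2009, 11, 3))]
print(stickiness(events, date(2009, 11, 5), window=5))  # → 0.6
```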

Registration
While registration can be the fastest metric to show changes, it’s basically worthless for figuring out how happy your users are, since they’re not interacting with the product until after they’ve registered. The obvious exception is products with delayed (i.e. lazy) registration, in which case it can act like a lower barrier-to-entry version of Conversion to Paying. When you allow users to use your product for a while before committing, an increase in registration can mean that users are finding your product compelling enough to take the next step and register.

Registration is only an indicator of happy customers when it’s lazy, and even then it’s only a piece of the puzzle, albeit an important one.

Customer Service Contacts

You’d think that decreasing the number of calls and emails to your customer service team would give you a pretty good idea of how happy your customers are. Unfortunately, this one can be manipulated aggressively by nasty tactics like making it harder to get to a representative or find a phone number. A sudden decrease in the number of support calls might mean that people are having far fewer problems. Or, it might mean that people have given up trying to contact you and gone somewhere else.

Decreased Customer Service Contacts may be caused by happier customers, but that’s not always the case.

So which is it?

While all of these metrics can be extremely important to your business, no single one can tell you if you are making your customers happy. However, looking at trends in all of them can certainly help you determine whether a recent change to your product has made your customers happier.

For example, imagine that you introduce a new element to your social networking site that reminds users of their friends’ birthdays and then helps them choose and buy the perfect gifts. Before you release the feature, you decide that it is likely to positively affect:

  • Engagement – every time you send a reminder of a birthday, it gives the user a reason to come back to the product and reengage.
  • Revenue – assuming you are taking a cut of the gift revenue, you should see an increase when people find and buy presents.
  • Conversion to Paying – you’re giving your users a new reason to spend money.
  • (Lazy) Registration – if you only allow registered users to take advantage of the new feature, this can give people a reason to register.
  • Retention – you’re giving users a reason to stay with you and keep coming back year after year, since people keep having birthdays.

Once the feature is released, you look at those numbers and see a statistically significant positive movement in all or most of those metrics. As long as the numbers aren’t being inflated by tricks or unsustainable methods (for example, you’re selling the gifts at a huge loss, or you’re giving people extra birthdays), you can assume that your customers are being made happy by your new feature and that the feature will have a positive impact on your business.
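For the "statistically significant positive movement" part, one standard check for a conversion-style metric is a two-proportion z-test comparing before and after the launch. A sketch (the counts below are made up for illustration):

```python
from math import sqrt

def two_proportion_z(success_a, total_a, success_b, total_b):
    """z statistic for the difference between two proportions, e.g.
    conversion rate before vs. after a feature launch.

    For large samples, |z| > 1.96 is significant at roughly the 95% level.
    """
    p_a = success_a / total_a
    p_b = success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_b - p_a) / se

# 400/10,000 users converted before the birthday feature, 480/10,000 after
z = two_proportion_z(400, 10000, 480, 10000)
print(round(z, 2))  # → 2.76, above the 1.96 threshold
```

An 0.8-point lift on a 4% baseline clears the bar here only because the samples are large; with a few hundred users per side, the same lift would be indistinguishable from noise.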

Of course, while you’re looking at all of your numbers and metrics and analysis, some good old fashioned customer outreach, where you actually get out and talk directly with users, can also do wonders for your understanding of WHY they’re feeling the way they’re feeling. But that’s another post.

Interested? You should follow me on Twitter.

For more information on the user experience, check out:

6 Reasons Users Hate Your New Feature

| 11.13.2009

You spend months on a new feature for your existing product: researching it, designing and building it, launching it. Finally, it’s out in the world, and you sit back and wait for all those glowing comments to come in about how happy your users are that you’ve finally solved their biggest problems. Except, when the emails, forum posts, and adoption data actually come in, you realize that they hate it.

There is, sadly, no single reason why your new feature failed, but there are a number of possibilities. The failure of brand new products is its own complicated subject. To keep the scope narrow, I’m just going to concentrate on failed feature additions to current products with existing users.

Your Existing Product Needs Too Much Work

Ah, the allure of the shiny new feature! It’s so much more exciting to work on the next big thing than to fix bugs or improve the user experience of a boring old existing feature.

While working with one company, I spoke with and read forum posts written by thousands of users. I also used the product extensively myself. One of the recurring themes of the complaints I heard was that the main product was extremely buggy and slow. The problem was, fixing the bugs and the lagging was really, really hard. It involved a significant investment in infrastructure change and a serious rewrite of some very tricky code.

Instead of buckling down and making the necessary improvements, management spent a long time trying to build new features on top of the old, buggy product. Unfortunately, the response to each new, exciting feature tended to be, “Your product still crashes my computer.  Why didn’t you make it stop doing that instead of adding this worthless thing that I can’t use?”

Now, you obviously don’t need to fix every last bug in your existing offering before you move on and add something new. You do, however, need to be sensitive to the actual quality of your product and the current experience of your users before adding something new. You wouldn’t build a second story on a house with a shaky foundation. Don’t tack brand new features onto a product that has an unacceptably high crash rate, severe usability problems, or that runs too slowly for a significant percentage of your users.

Before you add a new feature to a product, ask yourself, “Have I fixed the major bugs, crashes, and UX issues that are currently preventing my users from taking advantage of core features?”

Your Product Interface Is a Giant Mess

Remember the old Saturday Night Live spoof commercial that advertised, “It’s a floor wax! It’s a dessert topping!”? It’s not as funny when it’s true. Products cannot do everything, and when they try, they end up with interfaces far too complicated for the average user to navigate.

I see this happen all the time, especially with startups looking for a way to make their product appeal to more people. Instead of improving their core product and adding features that enhance that experience, they add unrelated feature after unrelated feature, often stolen directly from more successful companies with larger user bases. Their goal is to find something that makes them blow up huge, but they just end up with an overly complicated product that tries to do too many things and doesn’t do any of them well.

It’s not just startups that suffer from this. Products that have been around for many years often get bogged down with feature after feature, all of which have to be supported because some fraction of the user base still uses it. These products then become vulnerable to new challengers with more focused, easy to use interfaces and smaller feature sets.

Of course, there are times when companies have to take their products in a new direction. For example, Flickr started as a set of tools for an MMORPG called Game Neverending. The game has ended, but Flickr lives on as an entirely different business.  PayPal began as a way to make PDA to PDA mobile payments, but that feature got killed years ago when they realized that web payments were a far better business model.

When you do find that killer feature that’s going to change your whole business model, commit to it and make it a serious focus rather than burying it under dozens of less popular features. Don’t try to be all things to all people, or you will end up with nothing but a giant, unusable mess.

Before adding a new feature, ask yourself, “Does this enhance my current product experience or just add to an already confusing and cluttered interface?” And, if it doesn’t fit with your current product offering but you still want to do it, ask, “Am I prepared to cut other features to make this part of my core offering and simplify my experience?”

You Didn’t Build What They Asked For

Let’s face it, sometimes your priorities are different from your users’. For whatever reason, be it a new business partnership, a need for a new revenue stream, or the desire to attract a different group of users, sometimes you’re going to build something that your current users don’t want and didn’t ask for.

This isn’t always a bad thing. For example, something that annoys your current non-paying users but attracts a whole slew of new, paying users is worth a few nasty emails to your customer service department. Just make sure that the new feature is really going to do what you think it will. It sucks to piss off your current, paying customers to build a feature that never really fulfills its initial promise. Trust me on this.

Before building a feature that potentially has more benefits to your company than your current users, ask yourself, “Am I prepared to deal with the fact that this is going to annoy some of my customers, and what is the real likelihood that I will get more out of this than I will lose?”

You Built EXACTLY What They Asked For

I know. It doesn’t seem fair. They’re angry if you don’t do what they want, and they’re angry if you do what they want.

The truth is that users will often ask you for a solution when it would really be more helpful to tell you that they have a problem. I’ve written more extensively about when you shouldn’t be listening to your users, but the upshot is that users aren’t great predictors of which brand new features will be big hits. Sometimes users will tell you that they want a toaster in their car, when what they really mean is that they don’t have time to make breakfast in the morning.

Before building a new feature that your customers are demanding, ask yourself, “What known user problems is this solving, and is this the best way to solve it for everybody?”

The Feature’s Not Finished

Now, I’m all for building the minimum viable product, getting it out in front of users, and then iterating on it to improve it, but some features just aren’t ready for prime time. By launching a half baked feature without key functionality, you’re running the risk of turning a lot of people off on the idea before they ever get a chance to really try it out.

Remember, your customers aren’t in the conference room with you when you come up with your grand vision. They don’t know where you’re going with this neat new idea. They’re judging the feature based on their first experience with it. Make sure that the first version is at least usable and hopefully that it’s far enough along that users can see the same promise that you do.

Also, good enough to ship doesn’t necessarily mean good enough to remain in your product long term. A big part of shipping early is continuing to improve the feature once it’s been out for a while. One company I worked with had a tendency to ship early versions of features and then let them sit gathering dust rather than iterating on them until they were truly high quality. What they ended up with was an enormous product in which everything seemed about half finished, and a lot of unhappy customers who didn’t believe features would ever improve past version 1.0.

Before shipping a new feature, ask, “Is this good enough that users will get why they should care about it? And, if they do care about it, am I committed to improving it?”

The Feature’s Not Finishable

At many of the companies I’ve worked with, features have tended to evolve before they even get built. What generally happens is this: you have an idea based on something you’ve heard from users; that idea gets brainstormed and grows based on internal input; UX and visual designers spec out the whole idea, often expanding on the original idea; then engineering gets involved and gives a time estimate of how long the feature will take to build; finally the whole thing gets cut back by about 80% based on the estimates.

Unfortunately, the 20% you end up implementing may not solve the original problem. That means, when you finally announce your great new feature, users who originally asked to have that particular problem solved are justifiably upset.

Before drastically cutting your new feature back, ask yourself, “Does this still solve the original problem I was trying to solve?” If it doesn’t, ask, “Can this problem be solved with a reasonable level of investment?” There’s no shame in answering no to either or both of these questions, as long as you decide not to go forward with the new feature.

The Secret to Making Everybody Love Everything You Make

I’m joking. There’s no secret. The truth is that it’s almost certainly impossible. But by asking yourself the right questions during your feature development phase, you can dramatically cut back on time spent creating things your users hate.

And never forget, when you do build something they hate, acknowledge it, apologize for it, and fix it. It will go a long way toward making your users happy again, and it might even get them to like that neat new feature you just shipped.

Interested? You should follow me on Twitter.

For more information on the user experience, check out:

Is Continuous Deployment Good for Users?

| 11.4.2009

The recent release of Windows 7 got me thinking about development cycles. For those of us who suffered through the last 2+ years of Vista, Windows 7 has been a welcome relief from the lagging, bugs, and constant hassle of a failed operating system. Overall, as a customer, I’m pretty happy with Windows 7. But, at least on my part, there is still some latent anger – if Windows 7 hadn’t been quite as good as it seems to be, they would have lost me to Apple. They still might.

A big part of my unhappiness is the fact that I had to wait for more than two years before they fixed my problems. That’s a lot of crashes and frustration to forget about.

One approach that many software companies have been adopting to combat the huge lag time built into traditional software releases is something called continuous deployment. This sort of deployment means that, instead of having large, planned releases that go through a strict process and may take months or years, engineers release new code directly to users constantly, sometimes multiple times a day. A “release” could include almost anything: a whole new feature, a bug fix, or a text change on the landing page.

I worked with a software development organization that practiced continuous deployment on a very large, complicated code base, and I can definitely say, the engineers loved it. From the point of view of the employees, continuous deployment was a giant win.

But how was it for the users? The fact is, some decisions that seem like they only affect engineering (or marketing, business, PR, etc.) can actually have a huge impact on end users. So, whenever organizations make decisions, they should always be asking, “how might this affect my customers, and how can I make it work best for them?”

Is Continuous Deployment Good For Users?

As with so many decisions, the answer is yes and no. Continuous deployment has some natural pros and cons for the customer experience, but knowing about them can help you fix the cons and benefit even more from the pros.

Big Customer Wins

Fast Bug Fixes

Perhaps the biggest win for users is that bugs can get addressed immediately. Even Microsoft ships out-of-cycle patches for its worst security holes, but there is a whole class of non-critical yet still important bugs that have to wait until the next major release to get addressed. That means weeks, months, or even years of your users dealing with something broken, even if the fix is simple. With continuous deployment, a fix can be shipped as soon as it’s done.

Fast Things vs. Slow Things

Similar to the first point, continuous deployment lets you get everything, not just bug fixes, to users as soon as they’re ready to go. Small features, easy changes, and UI tweaks don’t have to wait for larger, unrelated features to be released to customers. After all, should a new design for the splash screen really have to wait on the implementation of a whole new payment system?

More Opportunities for Community Involvement

If you’re having a constant dialog with your customers (you are, right?), they’re probably making some pretty good suggestions about problems they’re having or ways to improve the product. A by-product of the first two benefits is that those users are going to feel even more involved in the development process when their suggestions or concerns are dealt with quickly, rather than if they have to wait months or even years for the next major release.

(Mostly) Avoidable Customer Problems

As with anything, continuous deployment can also cause some problems for users. Of course, some of these problems can exist in big, staged deployments as well, but these are a few things in particular to watch for.

Constant Change

Imagine if, every day, the layout of your car’s controls changed, sometimes slightly, other times drastically. You want to drive to the store, but the steering wheel is on the other side and you’ve suddenly got an extra pedal. It would make it a lot harder to get where you were going, wouldn’t it?

Well, presumably your users are using your product to get something done, and they’ve got a certain way that they’ve learned to do it. Continuous deployment can mean that the interface for your product can change at any moment, even several times a day. If features constantly appear and disappear, it can be very disruptive to your users’ process.

There are a few things you can do to minimize the disruption. First, make sure that you’re testing your biggest changes on small cohorts of people. Iterate on a subset of your user base, rather than hitting every single user with every single change. This will limit the change that any individual sees while still giving you the benefit of constantly pushing code to customers.  In fact, use this as an opportunity to do the A/B testing you should be doing anyway.
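A common way to get stable cohorts for this kind of rollout is hash-based bucketing: hash the user id together with the experiment name so each user lands in the same bucket every time, and different experiments get independent slices of the user base. A minimal sketch (the function name and percentages are illustrative):

```python
import hashlib

def in_rollout(user_id, experiment, rollout_percent):
    """Deterministically assign a user to an experiment's rollout cohort.

    Hashing user_id together with the experiment name gives each user a
    stable bucket (0-99) per experiment, so the same user always sees the
    same variant across sessions.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Roll a hypothetical new checkout flow out to 10% of users
exposed = [u for u in range(1000) if in_rollout(u, "new-checkout", 10)]
print(len(exposed))  # roughly 100 of the 1,000 users
```

Because assignment is deterministic, you can bump `rollout_percent` from 10 to 50 to 100 as confidence grows, and the original 10% keep seeing the change the whole way.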

Also, and this should be obvious, try to limit your truly disruptive user interface changes so that things don’t feel like they’re in constant flux. You can still change things, but be aware of how frequently you’re making major changes and stay in contact with your users to make sure they’re not feeling dizzy.

Inconsistent UIs

When a big new release is planned, often there is a comprehensive design phase where all the new changes are mapped out and discussed. This means that any inconsistencies in the UI can be found and addressed.

In continuous deployment, different pieces of the product are getting built and shipped to customers all the time, and there is rarely a time when the entire UI is reevaluated as a single entity. This means that sometimes UI standards can tend to…oh, let’s say evolve.

This problem can be controlled by having a UX team member embedded in the development team and constantly working with the engineers to enforce standards before things ship to customers. It can also be improved by providing wireframes, visual designs, and tools like templates to developers so that the look and feel of the product doesn’t shift too dramatically over time.

Avoiding QA

I’m certainly not claiming that every single thing in a traditional release process gets a full QA pass, but I do find that continuous deployment makes it easier for code to slip out to users without any human testing at all. Any time you give engineers the ability to ship code directly into production, you’re tempting fate. Somebody’s going to say, “oh, it’s just a tiny change,” and it’s going to get out without any testing. I’m not naming any names, but you only have to have a tiny change break the entire product once before you realize that there’s no such thing as a tiny change.

Also, continuous deployment can make certain types of testing much less likely to happen. While large release cycles tend to have a code freeze and weeks or months devoted to testing, hopefully including regression, unit tests, and end to end testing, continuous deployment doesn’t necessarily have that baked into the process.

You can always add periodic end to end testing of the product to your own process, of course, and it can be quite helpful in improving code quality, especially when your engineers occasionally slip through a “tiny change.”

Communication Issues

When you’re constantly releasing new features and bug fixes, communication with your users can be a challenge. You don’t have one big release with new help docs, a big roll out plan, and an advertising campaign. Instead, stuff is coming out all the time, and users can get overwhelmed by keeping up with the changes.

Context sensitive help and inline information for each feature can help users get quickly oriented.  Also, clearly marking new features as alpha or beta can let users know when a feature is still being developed so they can set their expectations accordingly.

Documentation can also easily get out of date when things are being released constantly. Big, staged development cycles often have built-in time for creating documents: typically, manuals, help documentation, and FAQs go through QA sometime after code freeze and before release. But since you’re not necessarily doing a big, monolithic release with continuous deployment, this material may never get any sort of end-to-end editing to make sure it stays current. Make time for this. It’s good for both your customer experience and your customer support team.

Frequent Downloads and Updates

While continuous deployment is quite natural with web applications, even downloaded products can be constantly updated. However, you should always be aware of the burden you’re placing on your user base. If you’re forcing people to download a large file and go through an installation process too often, you’re going to annoy people.  As a personal example, iTunes appears to have a new version every week, and I’ve started to flinch every time I open it.

There are a few things you can do to make downloads easier on your users. First, you can ask the user to allow the update to download in the background and install automatically the next time the product opens. This means that the updating happens with very little user annoyance. Also, it’s best to keep the update quick by making the downloads incremental. For example, your virus protection software probably updates its virus information daily without asking you to reinstall the entire product every time.

So, Is It Good for Users or Not?

Continuous deployment can be done in a way that’s good for both engineering teams and users, but you do need to take some precautions. By taking care when you introduce changes that your users will really notice and making sure that you make time for important processes like QA, you can get features out faster and constantly improve your product. And that is very good for users.

Interested? You should follow me on Twitter.

For more information on the user experience, check out:

A Faster Horse – When Not To Listen To Users

| 10.2.2009

Henry Ford reportedly said that, if he’d asked his customers what they wanted, they’d have asked for a faster horse. In the high tech industry, this quote is often used to justify not talking to users. After all, if customers don’t know what they want, why bother talking to them?

You need to talk to users because, if you ask the right questions, they will help you build a better product. The key is figuring out the right questions.

For starters, users are great at telling you when there’s something wrong with your product. They can tell you exactly which parts of the product are particularly confusing for them or are keeping them from being happy, repeat customers. Figuring out what to do about those problems is your job.

In general, users are not going to be able to answer the following types of questions:

  • What new technical innovation is going to revolutionize a particular industry?
  • What’s the next cool gadget that you’d like to buy?
  • Do you think that people like you would buy this new cool gadget that you’ve just learned about?
  • What new features would make this product more interesting/compelling/fun/easy to use? (although this question becomes more answerable when the user is presented with some options for which features they might prefer)
  • How exactly should we change the product to make it easier for you to use?

They are fantastic at answering questions like these:

  • What do you most love or hate about this product?
  • Do you find anything about this product hard to use or confusing?
  • Does this product solve your problem better or worse than what you’re currently doing?
  • How are you currently solving a particular problem that may or may not be addressed by this product?
  • What don’t you like about your current solutions for a particular problem?
  • Why did you choose this particular solution as opposed to another solution?

Obviously, there are innumerable other questions that you might want to ask your users, so how do you decide which ones they’ll be able to answer with any degree of accuracy?

Problems vs. Solutions

Users are much better at telling you about problems that they’re having than solutions that they want. In Ford’s example, when people asked for a faster horse, what they were really saying was that the horses they had were too slow. They didn’t specifically want a faster horse. They wanted a faster means of transportation that was no worse than a horse in other respects.

Frequently in user tests or customer feedback sessions, customers will tell you very clearly, “I want x!” Your job is to understand why they want x and then to determine whether or not x is really the right solution. It’s not that they never have good solutions, but users tend to only look at the product from their own perspective and usage patterns, while you should be talking to lots of different types of users with lots of different types of problems. They’re not thinking about the product as a whole or how to fix things for the other several million people who might have slightly different problems.

When users try to give you solutions, encourage them to talk about their problems instead. Then figure out what they’re really asking for, and give it to them.

Past vs. Future Events

It’s much easier for people to answer questions or give opinions about something specific that has already happened than about something that might happen in the future.

Consider the question, “What do you want to eat tonight?” vs. the question, “What did you think of the meal you just ate?” For the vast majority of us, the second one is much easier to answer. It simply asks you to formulate a concrete opinion about a single event that has recently happened. The first question asks you to imagine all the various available options for food and make a decision about what you might like in the future based on probably imperfect information.

This is true with products, as well. It will be much easier for your user to explain how performing a particular task went than to predict how he would like to perform that particular task in the future. That’s why, when you’re doing your preliminary research to determine product direction or early feature development, it’s very important to give users hands on tasks that they can perform for you and then give opinions on rather than to talk abstractly about the solution you’re considering providing for them with your product.

Users vs. Other People

Unless you’re really lucky, you’ve probably realized that people are terrible at figuring out what other people want. Perhaps you came to this realization on some birthday or other gift giving holiday. Users suffer from the same problems as gift givers. They’re almost always terrible at telling you how other people will react to a product.

And yet, talk to just a few customers or user test participants, and you’re guaranteed to hear one of them say, “Well, it’s not for me, but my mom/friend/boss/brother would be really into this…” Another one you hear a lot is, “My mom/friend/boss/brother would never be able to use this. It’s way too complicated.”

This information can be marginally useful if you’re trying to find the right customer segment, but take it with a grain of salt. Reassure the user that you’re also going to talk to people like their mom/friend/boss/brother, and that what you’re really interested in right now is the user’s opinion. Then talk to the mom/friend/boss/brother to find out their real feelings. Chances are, the person you’re talking to doesn’t know what anybody else wants as well as they think they do.

The Right Questions

So, what should Henry Ford have been asking his customers? Instead of, “What do you want?” he could have asked, “Is there anything you particularly like or don’t like about your horse and wagon?” If they chose not to buy a car, he could ask, “Why didn’t you buy that car?” Once they bought a car, he could have asked, “What made you decide to buy a car?” or “Was there anything you found particularly confusing or hard to use about your new car?” He could even have gone for a drive with some of them and observed the various problems that they encountered.

In fact, there were dozens of things he could have done that might have helped him improve the design and marketing of his product. He just couldn’t ask them, “What do you want?” because they almost certainly would have said, “a faster horse.”

For more information on our approach to getting customer feedback, check out:

Improving the ROI for Your User Research

| 09.23.2009

So, you decided to do some user research in order to find out where you can make improvements. After a few hours of user interviews, you ended up with a notebook full of scribbled information that all seemed really critical. How in the world do you figure out what to do with all that information?

If your answer is “talk about it all abstractly with everybody in the company or write a huge paper that nobody will read and then go on with business as usual,” you’re in good (bad?) company.

But you have to DO something with all that data. You have to analyze it and turn it into actionable items that your engineering department can use to fix your product. It’s not always easy, but I’m going to give you an approach that should make it a little easier. This isn’t the only way to do your test analysis, but it’s one of the quickest and easiest that I’ve found when you are concerned with key metrics.

When to use this method:

  • You have an existing product with a way to measure key metrics, and you’re interested in improving in particular areas related to your bottom line
  • You have a limited research and development budget and want to target your changes specifically to move key metrics
  • You are looking for the “low hanging fruit” that is getting in the way of your users performing important tasks with your product
  • You are working in an agile development environment that is constantly tweaking and improving your product and then testing the changes

When not to use this method:

  • You have an existing product that you are planning to completely overhaul, and you want to understand all of the major problems before you do your redesign
  • You are trying to create an overall awesome, irresistible user experience that is not related to a specific metric
  • You are designing a new product or feature and are observing people using other products to identify opportunities for innovation

If you fall into the first bucket, read on…

The Five Basic Steps:

  • Identify key metrics you’d like to improve
  • Identify the tasks on your site that correlate with improvement in those metrics
  • Observe people performing the appropriate tasks
  • Identify the barriers preventing people from completing or repeating the tasks
  • Develop recommendations that address each specific barrier to task completion

Step 1: Identify Key Metrics

First, identify the metrics that you care about, and look at the distinct steps that your users are most likely to take to change those metrics. This way, you can plan your qualitative testing to show you exactly what is getting in the way of your users reaching each subsequent step.

As an example, let’s imagine that you have a social networking site that earns revenue when people give virtual gifts to each other to display on their profile pages. Because your revenue depends on lots of users purchasing lots of things for each other’s pages, you might care about the following metrics: how many people register, how many people customize their profile pages, how many people make friends, how many friends they make, and how many people purchase one or more gifts. Obviously, that last one is the most important, since that’s where your revenue comes from, but the others are also important, since improving those numbers should increase everything downstream, if done in the correct way.

Your goal is to get people from point A to point D as quickly and pleasantly as possible. To do that, you want to look at all of the barriers you have erected in the way of your users accomplishing those tasks.

Here are the pieces of the user flow that change your key metrics:

Registration > Profile Customization > Finding a Friend > Purchasing a Gift
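
The funnel above lends itself to a quick conversion calculation. Here’s a minimal sketch of finding the leakiest step; the step names match the example, but the counts are made-up numbers standing in for whatever your analytics reports per step:

```python
# Hypothetical per-step totals from analytics for the funnel:
# Registration > Profile Customization > Finding a Friend > Purchasing a Gift
funnel = [
    ("Registration", 10000),
    ("Profile Customization", 6200),
    ("Finding a Friend", 3100),
    ("Purchasing a Gift", 450),
]

def step_conversion(funnel):
    """Return (step, conversion-rate-from-previous-step) pairs so you can
    see which transition loses the most users."""
    rates = []
    for (prev_name, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rates.append((name, n / prev_n))
    return rates

for name, rate in step_conversion(funnel):
    print(f"{name}: {rate:.1%} of the previous step")
```

The step with the lowest conversion from the previous one is usually where your qualitative observation time pays off most.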

Step 2: Identify Individual Task Flows

Now that you know what your key metrics are, you need to make sure that you observe your test participants trying to accomplish the tasks that will eventually lead to changing those metrics. To plan your tasks and analyze your data, it helps to break down your individual tasks into smaller user flow steps based on your user interface.

For example, while the number of successful registrations may be a single metric, there may be several screens or discrete interactions involved in your entire registration process. Make a list of what they are now.

As an example, I ran a test recently for a product that had a four-step registration process. The user flow for registration on the site looked like this:

Landing Page > User name > Personal Info > Download

Step 3: Observe People Performing Tasks

So let’s say that, based on the metrics you want to change in your hypothetical social networking site, you’ve determined that you need to gather data about registration, profile customization, finding friends, and purchasing a gift. You’ve scheduled 5 people in your target demographic to come in for user test sessions. I’m not going to teach you everything you need to know about running a test, but here are some basics.

First, let the user begin using your product however they want and note all of the things that they do that aren’t the tasks you’ve identified. Remember, any time spent not doing your target tasks is time that is not being spent improving your metrics, so you want to know what these other things are! If you give your participant a targeted task right away, you will never learn what other things they are doing instead. Finding these distractions allows you to eliminate them or change them so that they also improve your key metrics.

Watch everything your user does with the perspective of the ideal flow for your metric. Do your participants wander off track or get confused during registration? Do they fail to find their own profile? Do they not have the information they need to find a friend? Do they fail to understand what gifts are or how to purchase them? Do they get distracted by a totally different part of your site that doesn’t contribute to improving your key metrics?

Once you’ve allowed the participant some free exploration time, you may need to move them to a particular task. In the social networking example, you might need to ask the participant to try to purchase a gift, since that’s something they probably wouldn’t do on their own in a study environment. You can move them along by saying something casual like, “Did you notice that there is a way to purchase gifts? What do you think about that?”

Meanwhile, record what the participant is doing and encourage them to talk about their impressions of the product.

Step 4: Identify Barriers

Once you’ve gathered data by watching 4 or 5 people repeat the same tasks, you need to analyze your data to figure out where the barriers are and communicate them effectively to the team. Determining the barriers should be pretty easy. Just ask yourself the following questions:

  • Where did participants seem confused or distracted and stop performing a task?
  • Which things took longer than they should have?
  • Why did participants fail to accomplish a task?
  • What tasks did the participants perform that would not improve the key metrics?
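
One lightweight way to answer those questions across sessions is to code each observation as a (step, barrier) pair and tally them. This is a sketch, not a prescribed tool; the session notes below are invented for illustration:

```python
from collections import Counter

# Hypothetical coded observations from 5 test sessions: each entry is a
# (step, barrier) pair noted while a participant worked through registration.
observations = [
    ("User name", "missed taken-name error"),
    ("User name", "missed taken-name error"),
    ("User name", "couldn't find a free name"),
    ("Personal Info", "privacy concern"),
    ("User name", "missed taken-name error"),
    ("Download", "virus worry"),
]

# Count how often each barrier appeared; the most frequent ones are the
# best candidates for the annotated flow described below.
tally = Counter(observations)
for (step, barrier), count in tally.most_common():
    print(f"{step}: {barrier} ({count} of 5 sessions)")
```

With only 4 or 5 participants the counts are rough, but they make it obvious which barriers showed up repeatedly rather than once.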

In the product I tested with the four-step registration process, the metrics showed significant drop off on each screen. On screen number 2, the user was asked to choose a unique user name and enter some information. It had a surprising amount of drop off considering how simple it was. After watching a few user tests, I noticed that many people were trying to select user names that were not available, but they were failing to notice the error message telling them to choose a different one. Even if they noticed the error, several people had a tough time finding a user name they liked and would spend minutes typing in word after word, getting frustrated, and finally giving up.

By watching people try to register for the product, we identified several significant barriers to successful registration in just one step of the process. In fact, we found similar problems – some large, some small – in each of the steps.

Once we’d identified the barriers, organizing and explaining them to the team was easy with a simple, annotated flow of the steps leading up to the goal along with the problems associated with each step. For example, the flow for the registration test might have looked like this:

Landing Page

  • Didn’t understand the product
  • Felt the product was aimed at teens
  • Thought the network was very small

User name

  • Didn’t know when a user name was taken
  • Couldn’t come up with a decent name

Personal Info

  • Were concerned about privacy
  • When they failed the Captcha, all of their info had to be re-entered


Download

  • Were uncomfortable downloading because of viruses
  • Didn’t know what they were downloading

Overall

  • Were bored during the registration process or complained that it was taking a long time

Step 5: Develop Recommendations

Now that you know what barriers are keeping your customers from accomplishing the goals you’ve set for them, you need to generate recommendations for ways to remove those barriers. You can do this by looking at exactly what the barrier is and what the users’ reactions were to it, and then brainstorming ideas to help the user overcome that barrier. Yes, this is the tough, somewhat creative part. You don’t want to just take any recommendation that the user gives you. You need to understand what problem they’re having and come up with a way to fix it that doesn’t cause any other problems.

Going back to my example, once we found the problems on the user name page, we looked at several solutions. We definitely needed to make it more obvious to the user when they selected a name that was already taken. We came up with a few simple ways to solve this particular problem: we could suggest other user names when people tried one that was already taken; we could make the error more noticeable; we could make the Next button obviously disabled so that they couldn’t even try to move on; we could free up some names that had been taken but weren’t currently in use so that the selection of user names was better. We then selected a couple of these solutions based on expected ROI calculations.

Even the lowest effort of these suggestions, improving the visual design of the error, caused an immediate and statistically significant decrease in drop offs for that step in the registration flow, and eventually improved revenue by increasing the number of successful sign ups. The barrier was lowered, if not entirely removed.
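
If you want to check whether a before/after change in drop off like this is statistically significant, a standard two-proportion z-test is one common approach. The sketch below uses invented before/after counts, not the actual data from this example:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, built from erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: completions of the user-name step out of 1000
# registration attempts, before and after the error was made more visible.
z, p = two_proportion_z(620, 1000, 700, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the improvement isn’t just noise, though with small samples you’ll want more traffic before declaring victory.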

You may have noticed that the last section is labeled “Overall.” This is where you look at the process as a whole, rather than a sum of its parts. For example, one comprehensive solution for drop off at each step might be to reduce the number of steps it takes to register (almost never a bad idea!). Doing this wouldn’t necessarily mean that you wouldn’t have to address the other problems individually, but you might get fewer drop offs simply because it would take less time for users to finish the process.

How Is This Different?

You might be wondering how this is different from more traditional ways of running user tests and analyzing data. In any user test, you’ll need to come up with tasks, observe users, figure out where they’re having problems, and come up with solutions for the problems.

There is a difference though. My experience has been that companies do not fix every single UX problem that they find in user research. I don’t love this, but I do understand the need to prioritize important changes.

So, if you’re only going to fix a few problems, you should make an effort to identify the most important ones. The counterintuitive part is that the most important problems might not be the worst problems from a user experience standpoint, but they will definitely be the ones that are keeping your users from reaching the goals that improve your most critical metrics.

Using this framework will help you identify your key user flows, find and communicate the major barriers to success, and propose targeted solutions that will improve both your user experience and your business.

For more information on our approach to getting customer feedback, check out: