
2 minute guide to creating the perfect dashboard

10.12.2016

At Sliced Bread we have seen and designed A LOT of dashboards. Dashboards for CRM, energy, IT, farming, social — you name it. Unfortunately, most dashboards we see out there fail the snooze test. They are just presentations of whatever data happens to be available, rather than presentations of actionable data that is actually useful.

So, how do you create a useful, usable, meaningful dashboard? When you design your dashboard, it must do THREE things really well to be successful.

1. Answer a real question

Don’t display data just because you have it. This is the number one problem with most dashboards. Data should only be displayed on a dashboard if it is answering a real question that users of this dashboard actually have.

For example, “You have walked 2,520 steps today” is useless data. Have you ever met anyone who has actually asked the question “How many steps have I walked today?” unless they are a professional shoe tester? The real question is probably “Have I had a good amount of exercise today for someone in my physical condition?” or “Have I walked enough today to overcome the chocolate cake I ate for lunch?” or “Has my walking today helped me reach my exercise goals?” Providing answers to those questions might involve analyzing data related to steps, but the steps data on its own is meaningless and boring.

Most dashboards we see are more about data trivia than useful information because they are not actually presenting the right data about the right question. In fact, data is usually only interesting when compared to other known facts in the world that position the data contextually — for example, steps compared to the average for someone your age, steps compared to your exercise goal for the day, calories burned in terms of chocolate cake, etc. Before you start any design work, be crystal clear on the top questions users have when they arrive at your dashboard and write those questions down somewhere big on your wall. And, by the way, the question “What data is available?” doesn’t count.

2. Explain what is going on with the data

Fancy charts are great and can be very compelling for showing trends. However, fancy charts are even more effective if you pair them with a plain English explanation of what is going on. No one wants to think. One sentence that says what is happening is all you need.

And, don’t forget to tie your explanation directly to the chart displayed. I used to work at a startup that provided investment advice about 401(k)s to regular folks. The company came up with a chart to show the likelihood of having enough money to reach your goal. It was a fancy histogram that exactly 0% of people understood in user testing.

financial-engines

A chart that no-one understands.

We added a simple bracket and plain text next to the histogram. Suddenly 100% of people we showed this to understood what was going on. Magic? No, just basic communication.  

Financial Engines with a bracket.

The bracket tied the forecast to the chart, and the text communicated the bottom line. Everyone in testing understood this chart.

3. Provide next steps

Actionable dashboards spur you to ACTION.  You should predict what the most common actions might be after someone views your data and offer gateways to those actions.  For example, truly interesting dashboards often tell you about something you didn’t expect. So, the next step after seeing the data might be to understand why this is happening. You could start answering the “why” question right there for your user or provide links for further exploration and explanation.

For example, if you had a dashboard showing that energy use was going up and was higher than the same time last year, you might also explain that the weather was warmer than usual or that the air conditioner was possibly broken. These threads would help your user decide on next steps to address the data. You might offer links to other areas of your product, buttons to start workflows in response to the data, or drill-ins that open up more information inline. Whatever. Give your user some options — OR ideally, do some user research and give your user the most common ONE option to help them down the path to action.

Does your dashboard make the cut?

Take a critical look at your dashboard. Does it look like this?

Typical complex dashboard

A dashboard with an overwhelming mishmash of data forces users to work hard to reach their own conclusions and makes it difficult to take any meaningful action.

Or like this: 

Clear dashboard.

This dashboard excerpt answers a clear question, explains what is going on, offers hooks to answer why, and provides a link to do the next step.

Make sure your dashboard answers real questions, explains the data, and provides next steps, and you will be well on your way to empowering your users with data.

Part 2: The PRACTICING Design Thinkers’ Top Ten FAQs

09.6.2016

stickies

Part 1 of this series dealt with overall questions that people ask me about design thinking. This second part deals with questions I hear about specific parts of the process after someone has tried it on their own. If you’re not sure what design thinking is, go back to Part 1. If you’ve learned about it, tried it, and are ready to get into specifics, read on.

1. How many people should I interview for the initial user research?

Start with 4-6. Move up from there. Interview more people if you have different kinds of target users, like new moms and empty nesters. You’ll find that as you get towards 10-12, the interviews become redundant, and you should split your research into two studies — an initial and a follow-on to answer new questions. A common mistake is interviewing too many people, which leads to unwieldy amounts of data. Start small.

2. I showed someone a demo of my stuff and they commented on it. Is that user research?

No, this is showing someone a demo of your work. User research involves less of you showing and more of you watching people actually using your stuff and asking questions.

3. I’m not a storyteller, but I hear that storytelling is important for design thinking. Why?

If you have ever told a friend a story about your day, you are a storyteller. Storytelling is a key to the empathy building that design thinking demands. People are hardwired to think in terms of stories, and only by hearing specific stories about problems, not generalizations, are we empowered to solve them. When conducting and sharing research, focus on the stories, not the generalizations of what is going on. Think about how to solve the specific problems and address the specific emotions that are emblematic in the stories. If you think about how to solve the problem in general, the solution you come up with will be as general and uninspired as the problem statement.

4. What’s next after I do some observations?

If you have done a strong job with your initial observations, by conducting solid needfinding interviews and unpacking insights, the next steps, ideation and prototyping, will flow easily from there. If you have not done a strong job, your path will seem fuzzy. You’ll need to do more research or reach out for help analyzing your research. Users will not give you the answer directly — your analysis will. Check out techniques like affinity diagrams and journey maps as tools for unpacking data to help you get the most out of your research.

5. What’s an insight, from a design thinking perspective?

An insight is a new perspective about the problem you are solving that is not something you could have come up with sitting in your room, thinking about the problem really hard. For example, if I am working on a problem related to health, an insight would not be: people would like to talk to their doctor about small medical issues from home instead of coming in, because it’s too time consuming. This is obvious. Instead, an insight about doctors and patients would be some new story or statement about what is really happening for them that makes you think, “hmmm…I hadn’t thought about it that way before.” An insight is something you can preface with: “I was amazed to discover that….” It can be very small, but it can’t be super obvious. For example, a good small insight you might have uncovered after interviewing doctors is: “I was amazed to discover that doctors spend more time on remote video appointments with patients than on in-person appointments at the clinic because it takes so long to get the technology working for each phone call.”

6. How do I know if I have a good “how might we” question for my brainstorm?

“How might we” (HMW) questions are a big stumbling block for new design thinkers because they must have the right level of granularity — not so broad that they are unsolvable and not so narrow that they describe the solution. The best “how might we” questions include your insight as part of the statement — so if your insight is weak, so is your HMW. A good way to check if your HMW question is good is if it has at least three elements from the “Who, What, When, Where, Why” set. For example:

Too broad: “How might we help people find health related information?” (Who are the people? Why are they looking for it? When are they looking for it?)

Too narrow: “How might we help people with diabetes find online information about diabetes classes?” (describes the solution in the question)

About right: “How might we help people newly diagnosed with diabetes feel supported as they have to make unfamiliar lifestyle changes?”  (clear who, when, and what)

7. When I do brainstorms with my group, they always judge the ideas. What do I do?

Brainstorms often fail because they require you to set aside the norms we usually follow when sharing ideas and adopt a new set of rules that free your thinking. That’s hard to do when someone in the room, like your manager, the in-house skeptic, or the CEO, is either making faces at bad ideas or sitting across the room with their arms crossed. A few ideas for combating these brainstorm killers:

  1. Start with a review of the rules for the brainstorm even if everyone knows them. We do this every time. Read them out loud and post them up.
  2. Bring in a new person who is not a part of your team to moderate. Have them enforce the rules. This can be someone outside of your organization, or inside but not a part of your team.
  3. Go someplace new. New places mean new behaviors.
  4. Start with writing down ideas independently on pieces of paper and then make sure that everyone shares the ideas with only positive comments after each shared idea.

8. I’m stuck. How do I know this process is gonna work?

As you are going through the process, there always comes a time when you become convinced that the problem is too hard or unsolvable, or that the process is not deterministic enough to hint at the solution early on. My students often get stuck because they don’t know where things are heading. Design thinking forces you to deal with ambiguity. Suspend your disbelief and continue to follow the process … you will get to the results.

9. How do I get better at design thinking?

Practice, practice, practice. Seek out coaching. Invite talented design thinkers to help you unpack your user research, brainstorm solutions, and help you figure out ways to test the efficacy of your ideas.  Design thinking is about collaboration. Go collaborate. Ask experienced design thinkers to review your plans and give you feedback. Knowing the rules is just the beginning. You must get coached and practice to become awesome.

10. How do I get others to buy into following this process?

First of all, start with just trying out the design thinking mindset. You can continue following your current processes, but try out parts of the mindset. For example, ask your team in a meeting to avoid judging any idea. If they don’t like an idea, ask them to build on it instead. Or, invite someone new to a meeting who isn’t usually on your team to try out radical collaboration. Or, when someone comes up with an idea, ask them to draw it or prototype it in some way. Just try out parts of the mindset and see what happens. Then, try the process on something small. Use it to solve a small problem at work like not enough seating at lunch and amaze others by the success. Get others to learn by involving them in the doing. Baby steps.

Or, hire someone like Sliced Bread to show you the process and inspire others to buy in.

 

If you’re hiring a UX firm, you probably need a therapist

06.21.2016

therapist

No, you’re not crazy to be hiring a UX firm. That’s a smart move! But, we’ve noticed that when companies hire us, they tend to be in a state of intense organizational flux and team drama. While some drama is predictable, normal, and addressable, too many companies ignore critical organizational issues until it’s too late to address them.

Clients — that is, you — are usually in one of three states when they hire us:

1. The product sucks, and the folks who built it are still there.

The feeling when it hits you: your core idea is good, but your product sucks — and the people who created it are still sitting next to you. Awkward. In addition to hiring us to fix your product, you are planning to fire and replace some of the people who built it. Everyone is on edge, and then we come waltzing in all smiles and bright ideas.  How will your team dynamics affect the success of this endeavor?

2. You’ve been brought in to provide new leadership and direction.

This state is an advanced version of the previous one. You’ve just been hired to shake up a team that needs it. You’ve brought on some folks who you’ve worked with before, and you’re in the process of weeding out rot. Your arrival has polarized the team: some people are excited, while others are grumpy and may soon quit or be fired unless they can change their attitudes and adapt to your new vision. Meanwhile, you’re still hiring, so people parachute in intermittently as the project is progressing. People with all kinds of disruptive ideas want to quickly “make their mark” (read: pee on something). The team dynamics create awkward politics and whiplash in decision making.

3. You’re a new team or company.

You’re a brand spanking new team that’s come together to build a brand new product that you’re really excited about. But reality is setting in and you’re realizing you don’t agree with everything in the marketing roadmap, and things aren’t gelling magically the way everyone had in mind. Are you going to find a way to come together as a cohesive team? Or, more likely, is someone going to end up suddenly leaving in a huff?

In all three cases, your product needs help…

but so does your team. And guess what? Your UX design firm is not a group of therapists, organizational change consultants, or team builders. We know a lot about product innovation and the design process, but we can only help you if your team is receptive to being helped.

I can’t tell you how many companies have spent a lot of money with us, only to be derailed by core, human, internal team dynamics issues. The good news is that, once you recognize the monkey on your back, you can do something about it.

So look around and assess the psychological healthiness of your team. Be honest, and if it looks like things are dicey, don’t just plow forward hoping it will work. Address the issues directly and consider hiring an organizational consultant. If you can’t afford a consultant to help your team work together more effectively, read some of the great articles in the Harvard Business Review on team dynamics.

Whatever you do, don’t put your head in the sand and hope that fixing your product will fix everything. It won’t. Before you fix your product, fix your team. With or without a therapist.

 

Top 10 Design Thinking FAQs

01.27.2016

People looking at a map together.

Design thinking and Sliced Bread go back about 14 years. But, for the last five, I’ve been teaching design thinking at the Stanford d.school and more recently, in the Computer Science department. The same questions about design thinking keep cropping up from clients and students so I thought I’d set the story straight.

1. What is design thinking? 

Design thinking is a human-centered process for solving problems that results in effective, innovative solutions.

It includes a series of specific steps that must be done in a specific order and a set of core principles. The steps are observations, insights, ideas, and prototypes — which are followed cyclically. The principles are empathy, thinking by doing, iteration, and collaboration.

It is a way to radically increase the likelihood that you are going to have success when you’re trying to solve a problem or do something new.

2. Can you describe the steps in the process in detail?

There are many different diagrams of the design thinking process, but our favorite displays it as a circular, iterative workflow that starts at the top left:

Let’s break down the steps:

design thinking cycle

Design Thinking diagram based on an original design by Michael Barry.

Observations

This step is the foundation of design thinking — user research. Go out and understand what is happening with the problem you are trying to solve by observing and interviewing users. Gather data about the problem by understanding the human stories. The first time you cycle through this quadrant, the type of user research you’re doing is called Needfinding because it’s about understanding users’ needs. This is also the time to interview all the stakeholders involved in this problem — i.e., not just those who have the problem, but the folks who understand the business opportunity and the technology options. Subsequent times when you cycle through this quadrant, you’ll do different kinds of observation of your users, like rapid experimentation, usability testing, co-creation sessions, etc.

Insights

Once you’ve completed your observations, it’s time to unpack what you learned to find the insights that are going to drive the rest of the process. Initially, your insights will be focused on defining what you are solving for. What stories did you hear in your research that really stick out? What needs did you uncover? What frame will you take on the problem space? In subsequent iterations, insights will be focused on teasing out what you learned from user testing and rapid experimentation to evolve your idea or take it in a new direction.

Ideas

In this step, you take the insights that you’ve gathered and use them to seed a brainstorm. In design thinking, brainstorming is taken to a new level through structured rules which encourage creativity and through the link to real user needs. This is also one of the best steps to incorporate radical collaboration, bringing in people from different backgrounds to help brainstorm solutions from new perspectives.

Prototypes

The final step in the design thinking cycle is about thinking by doing. Stop talking about the ideas and actually make something that people can evaluate and discuss! You might sketch a workflow, build a model, create an HTML wireframe — it all depends on what questions you are answering. You might prototype to explore the idea space for yourself. You might prototype to test some aspect of the idea in the next observation cycle with users. Or you might prototype to convince others to fund the idea. As you move through iterative cycles, the prototypes will become more and more refined culminating in the final solution.

Those are the four steps. Now lather, rinse, repeat.

3. Why is design thinking so effective?

Two reasons:

One, design thinking has a laser focus on the actual, human roots of a given problem. By understanding and empathizing with the distinct human stories underlying a problem, you are able to solve for real needs from the beginning. And, by remaining in touch with users throughout the design cycles, you can stop guessing and make decisions based on actual human feedback.

Two, design thinking provides a defined, replicable approach for a creative process. When followed correctly by skilled practitioners, it virtually guarantees an effective, innovative solution to problems that are simple or fabulously complex and ill-defined. This has been proven many times over by studies at Stanford, well-known companies, and in our own work at Sliced Bread. We don’t want to take on a problem without a guarantee that we will get somewhere great at the end. Design thinking gives us the confidence to offer that kind of guarantee.

4. Is design thinking the only way to solve problems and be innovative?


Of course not. There are many ways to solve problems including sitting at your desk and thinking really hard. This method happens to be extremely effective so we are going with it.

5. What kinds of problems can it be applied towards?

You can use design thinking to solve ANY problem. This includes business problems and personal problems. I used design thinking to help my client think through the process for server installation, to help my child deal with a mean kid at school, and to plan a party. It’s the same process…only the content differs.

6. Who can do it?

Anybody can learn the design thinking process and use it to solve problems. However, at the start not everyone can do it well. For example, little kids can learn the rules and techniques of soccer. However, it’s only through a lot of practice that someone can become a soccer superstar like David Beckham. Just like anything else, the best way to improve in design thinking is through practice and coaching. Getting coached by an expert mentor is the best way to learn. Reading about it is not going to make you better…it will just help you understand the rules of the game.

7. Can I just do parts of it and be successful?

Sure, you can use the tools individually and be partially successful. However, if you are solving a problem soup to nuts, you need to follow the process soup to nuts.

8. Can you do design thinking without initial user research?

No.

9. How do I get started? 

I recommend taking a class or hiring someone (like us) to help you do it. You can read a book to get an introduction, but many of the skills like how to interview people effectively or how to pull out the insights from your interviews can’t be learned without watching others model the right techniques and without strong coaching. If there are no workshops locally, find an online class that will at least give you some examples of how it works that you can watch. Or, better yet, hire someone who can help you walk through the process the first time and teach you along the way. If you hire someone who is going to follow the process to solve your problem, make sure they emphasize the need for you to be involved…otherwise they are breaking one of the rules of design thinking, collaboration.

10. How did you get so good at it?

Two reasons:

One, I have been doing it for a long time and have been working with the world’s best coaches: folks like Michael Barry and Pam Hinds, who also teach at the d.school; coworkers like Mia Silverman, Jenny Mailhot, Kim Ladin, and Molly Wilson, who are fabulous design thinkers; and astute clients who ask us why we do what we do and force us to explain our processes. All of this makes you awesome.

Two, I believe in the process. I believe in it so much that I am willing to follow it in many circumstances because I can see the power that it has. When a client tells us they don’t have time for user research, we don’t do the project, because we have no way of guaranteeing the results. I wouldn’t want to spend a lot of money on something that might not work, so we are not going to accept our clients’ money under those circumstances.


For more FAQs on design thinking, check out Part 2: Ten more FAQs for the PRACTICING design thinker. 

A Designer’s Quick-Start Guide to CSS Preprocessors

11.13.2014

CSS can be a hot mess to write and read. I am here to repair your relationship with it by introducing you to your new secret weapon.

I once hand-wrote over 300 lines of CSS to style one lousy navbar, and I know I’m not alone. Writing CSS by hand, as many designers do, is an exercise in repetition. Reading CSS is a real workout for your scrollbar and your patience. Nevertheless, we’re stuck with it. How else could we round those corners or drop those shadows?

Writing code to style a website does not have to be (quite) this tedious. Preprocessors are the missing ingredient in your HTML/CSS workflow.

Lots of articles about preprocessors target developers, but this one goes out to UX and UI designers. A lot of designers I know – good ones! – have a solid basic understanding of HTML and CSS, but are cautious and awkward when it comes to improving their workflow. No wonder – with all the frameworks and libraries available, it’s hard to know where to start. Many developer tools are overkill for designers, not to mention that their documentation assumes a lot of prior knowledge and a high level of server access.

While they may look like developer tools, preprocessors are a fabulous addition to a designer’s workflow. Even if your HTML/CSS is pretty basic, there’s something for you here. I’ll tell you about my favorite features and how they help me as a designer.

(This article assumes that you’re somewhat familiar with CSS. If you’re not there yet, try a tutorial at treehouse.com, lynda.com, or Codecademy.)

What’s a preprocessor?

A preprocessor lets you write CSS in a language that feels like a better, more sensible version of CSS. Then it creates, or compiles, a browser-readable CSS file. You don’t touch that compiled CSS file. You work on a nice tidy file with the extension “scss”. SCSS files are written in a preprocessing language.

preprocessor-wide

Naming note: Many people call this language both SASS and SCSS, but that’s not quite accurate. The language, called SASS, actually has two syntax modes. SCSS, the newer syntax mode, uses brackets around chunks of code. SASS, the older syntax mode, has the same name as the language and uses indentation and whitespace instead of brackets. Of the two syntax modes, I prefer SCSS – the brackets are foolproof and don’t get screwed up in different text editors.

Which preprocessor should you use?

I use SCSS, but you might also want to use a preprocessor called LESS. If your coworkers use one or the other, just follow their lead – both SCSS and LESS are great. If you’re on the fence, here’s why I recommend SASS (Chris Coyier at Treehouse agrees):

  • It has more logic built into it. You can do for-each loops and if-then statements right in your CSS (there’s a small sketch of this right after the list). As long as you’re using a preprocessor, why not pick the one with the most features?
  • Adding the Compass library to SCSS adds a ton of useful reusable patterns.
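
For example, here is a small sketch of that kind of logic in SCSS (the section names, selectors, and variables are made up):

// Generate a themed banner class for each section of the site
$sections: news, sports, weather;

@each $section in $sections {
	.banner-#{$section} {
		background-image: url('/images/#{$section}.png');
	}
}

// Flip a whole color scheme with one flag
$high-contrast: true;

body {
	@if $high-contrast {
		color: #000;
		background-color: #fff;
	} @else {
		color: #333;
		background-color: #fafafa;
	}
}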

So what can you do with SCSS?

Nest Now, Thank Yourself Later

Have you ever written CSS that looks like this?

.navbar {
	// styles
}
.navbar ul li a {
	color: red;
}
.navbar ul li a:hover {
	// styles
}
.navbar ul li {
	// styles
}
.navbar ul {
	// styles
}

You have to cram selectors into your CSS in order to make sure your rules have the right scope; you don’t want to make every link red, only the links in the navigation bar. The only thing more annoying than writing this structureless, repetitive code is trying to read it later.

With a preprocessor, you can write this instead:

.navbar {
	// styles
	ul {
		// styles
		li {
			// styles
			a {
				color: red;
				&:hover {
					// styles
				}
			}
		}
	}
}

In SCSS, “&” stands in for the full parent selector — everything outside the surrounding bracket. So, in the example above, &:hover compiles to .navbar ul li a:hover.
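
To see the payoff, here is roughly what that nested block compiles back out to (the same flat CSS as the first example):

.navbar { /* styles */ }
.navbar ul { /* styles */ }
.navbar ul li { /* styles */ }
.navbar ul li a { color: red; }
.navbar ul li a:hover { /* styles */ }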

The nested code is far less repetitive and easier to read. Just don’t nest hundreds of lines deep, or (if you’re anything like me) you risk misplacing a bracket or two. I try to keep nested chunks of code small enough to fit on my screen.

Use Variables as a Style Guide

We designers truly want to be open to feedback at every stage of the process. But it’s hard not to groan when responding to feedback means hunting through thousands of lines of CSS. Where’d you put that color? How wide was that column? Why is one button’s border radius refusing to change?

If you think ahead and define those properties in variables, making major visual changes is a breeze. You can define yourself a style guide that dynamically updates your entire site.

In SCSS, you define variables with $ signs. Once you’ve defined a variable, you can use it anywhere you’d use the text you’ve stored in it.

$orange: #f64212;
$subtle-light-gray: #efefef;

a {
	color: $orange;
	background-color: $subtle-light-gray;
}

Colors are only the most obvious way to use variables. These are all variables I’ve written in SCSS:

$orange: #f64212;
$help-text-size: .8em;
$total-width: 1024px;
$menu-width: 25px;
$main-content-width: $total-width - $menu-width;
$mobile-breakpoint-width: 468px;
$logo-height: 2em;

SCSS knows how to do math even with quantities that aren’t plain numbers, and it converts between compatible units (like pixels and inches) for you. To mix incompatible units such as pixels and ems, pass the expression through to the browser with calc() instead.

width: $total-width - $menu-width; // 1024px - 25px = 999px
color: $orange - rgb(25,14,14); // yes, you can do math with colors!

“Make the logo bigger”? No problem.
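
To see how this plays out, here is a small made-up example: give the logo’s size its own variable, and a single edit ripples through every rule that depends on it.

$logo-height: 2em; // bump this one value and everything below follows

.logo img {
	height: $logo-height;
}

.navbar {
	// keep the navbar tall enough for the logo plus a little breathing room
	min-height: $logo-height + 1em;
}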

Mixins and Partials: Variables on Steroids

A mixin takes the idea of a variable one step further. It’s a collection of rules that you know you’ll want to reuse. I often use them for UI elements that appear frequently, like labels, tooltips, buttons, and dropdowns.

Define a mixin with @mixin, followed by the name you want to give the mixin.

@mixin tooltip {
	border-radius: 5px;
	border: 1px solid $lightgray;
	background-color: #fff;
	box-shadow: 2px 2px 3px 0px rgba(20, 20, 20, 0.16);
	cursor: pointer;
	padding: 10px 19px;
	font-size: 90%;
}

 

Then include the mixin with @include, followed by the name of the mixin.

.onboard-tooltip {
	@include tooltip;
	color: $orange;
}
.premium-user-tooltip {
	@include tooltip;
	color: $darkgrey;
}

You could define these mixins at the beginning of your SCSS file. But if you have more than one SCSS file and you’re copying and pasting mixins and variables into each of them, you’re living dangerously.

Partials will help you keep your mixins and variables in one place. A partial is a snippet of SCSS that gets included before a file is compiled. To mark a SCSS file as a partial, give it a name starting with an underscore, such as _variables.scss. Then, to include it in your main file, write @import ‘variables’ at the beginning of the document. That partial will not get processed into its own CSS file. A sketch of how the files fit together is below.
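
Here’s a minimal sketch (the file names and selectors are just examples):

// _variables.scss: a partial. The leading underscore tells the preprocessor
// not to compile it into a standalone CSS file.
$orange: #f64212;
$total-width: 1024px;

// main.scss: the file that gets compiled to main.css
@import 'variables';

.header {
	width: $total-width;
	border-bottom: 2px solid $orange;
}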

Partials are also great for chunking out well-defined bits of CSS, like my aforementioned 300-line navbar.

Are you processing locally or remotely?

The preprocessor can work its magic either on your local computer or on a server.

You’ll probably want to set it up on your computer, so that it creates an upload-ready CSS file right next to your SCSS file. If your processing is happening locally, keep the SCSS file and its compiled CSS together at all times. Upload them together, download them together, and if you delete one, delete the other. (And, if you’re using git, add the CSS files to .gitignore.)


If your workplace is technical, you may have a preprocessor already installed on the server, so that this conversion happens after you upload your files, not before. Ask around if you think this might be the case.

Setting up your text editor

If you’re working locally, you need to set something to “watch” your SCSS file and, when you hit save, turn the SCSS into matching CSS.

Coda 2:
Install this plugin, and any .scss files you create will be automatically compiled to a .css file of the same name in the same folder whenever you save the file.

Sublime Text:
This is a little trickier; there’s no one add-on that does it all. Here’s how to get Sublime Text supporting SCSS.

Text editor not supported?

If you’re using Dreamweaver, BBEdit, or another text editor that doesn’t have a SCSS plugin or add-on, no worries. You can either use the command line, or you can use a standalone app that watches your SCSS files for you. At Sliced Bread, a lot of us use Koala (free). Other popular options are CodeKit ($29) and LiveReload ($9.99).


 

That’s it! A preprocessor does take a little time to set up, but it’s eminently worth it. SCSS took me 30 minutes to figure out, but it saved me hours in the first month I was using it.

The one downside? I can’t live without it. On the rare occasion when I do need to write vanilla CSS, it feels like I’ve downgraded from a Tesla to a Buick LeSabre: slow, unattractive, and retro (not in a good way).

So, make your next CSS file a SCSS file, and join me in writing faster stylesheets – let’s make time for the fun part of design.

Ditch the Rainbows

10.1.2014

Color is becoming more important than ever. With the increasing popularity of flat design, designers are less able to rely on 3D buttons, drop shadows, and other crutches that used to make functionality apparent.

Of all the formal elements of design, color hits the fastest and hardest. Color communicates the mood of a website or application in a fraction of a second, long before we grasp the semantic content. Color has the power to clarify or confuse how a user is meant to interact with a digital environment, and with great power comes great responsibility.

I see a lot of websites and apps where the designers just went overboard with color, either splashing it all over the place like Jackson Pollock, or pulling an Ellsworth Kelly and filling the entire page with a garish monochrome. Using just the right amount of color is harder than it might seem. Here, I’ll share some examples of effective and ineffective color use to help you make wise chromatic choices in your own designs.

Why Less is More

We’ll get to websites in a moment, but first let’s think about information design more broadly. Information design is the art of foregrounding what’s essential, while backgrounding or removing what isn’t. Take maps, for example. Like websites, maps present a lot of information in a small space, and cartographers must make very shrewd color choices to maximize signal and minimize noise. But, like web designers, they don’t always get it right. Here’s a map that visual information guru Edward Tufte singled out for its egregious abuse of color in his book Envisioning Information (Graphics Press, 1990):

What’s wrong with this map, aside from locating Alaska just to the left of Juárez? Whoever did the inking took the “more-is-more” approach. All colors are equally saturated, and the background blue is the dominant hue, pulling the eye away from the central information at every opportunity. Within the landmass itself, every swatch is bright and loud, and, aside from the inherent attraction of red, no single element stands out from the others. This map has failed to foreground anything.

Tufte, the visual information expert, advocates a sparing use of color, ideally against a muted background. See how the structure and function of the buildings and fields in this city plan leap out to announce themselves to the viewer:
good-map

When color is used with precision and purpose it brings focus to pertinent information, while leaving enough breathing room to temper the distraction of background noise.

Using Color in Interaction Design

Judicious use of color is even more important when it comes to designing screens a user is meant to interact with. When we are looking at a web page, typically we are not just soaking in its aesthetic beauty; we are looking for something. The job of a designer is to figure out what things the user is most likely looking for and make those things as apparent as possible, or in the case of marketing sites, to direct the user where you want them to look.

Let’s look at an example from Target’s homepage.

Now this is not really fair because Target’s branding is all red, which presents considerable challenges. I’d like to pause for a moment and point out that all colors are not created equal. From the bright arterial blood coursing from a wounded centurion’s neck to the vermillion glow of fires in the night, the color red is permanently imprinted in our collective psyche as a sign of urgency and danger.

The UI designers at Target do not appear to have gotten that memo.
target

See how the search bar and side nav are superimposed over the header bar in shades of crimson and burgundy that are only slightly off from the carmine red of the top navigation bar. This red abuse is exacerbated by the excessive prevalence of highlighted text, “free shipping”, “free returns”, “15% off”, “TV sale.” There are fires everywhere! The eye is drawn hither and thither with nowhere to land. I have no idea where to click. I’m paralyzed.

Old Navy, bless their hearts, does a much better job.
old-navy

Like the city plan we looked at before, this site avoids clutter with plenty of whitespace. Best of all, they use highlights in a meaningful way. The exact same shade of red, an unsaturated scarlet, is consistent throughout the design, creating a harmonious visual relationship between elements that are actually related. In this case the large color block explains the details of the sale “5 Must-Haves to fall for”, which are then referenced with the red numbers to the right of the model. The subtle highlight of “BOYS” on the main nav stands out in sharp relief from the surrounding elements and reiterates the relationship between the selected item and the displayed content. Good job team!

The Perils of Flatland

A flatter design aesthetic can be a beautiful thing. When done well, it accentuates the expansive space afforded by the widescreen display and offers a distinctly modern functionalism to web design. But the impulse to overuse color presents real danger to a site’s legibility.

When thinking about how to use color in your interface designs, I urge you to consider how you are drawing the eye around the page. Like a well-designed workflow, good use of color whisks a user through an interface so smoothly that they barely even notice it is there.

Jesse Day is an intern at Sliced Bread and the author of Line Color Form: The Language of Art and Design.


Finding the User Testing Sweet Spot

04.10.2014

At Sliced Bread, we use a technique called Fast Insight Testing to get lightweight feedback as we prototype. Compared to a lot of user testing, it’s pretty unscripted and casual – not to mention quick. We’re happy to get a rant or a rave, even if it means we’re deviating a bit from our plan.

But we don’t just let people run their mouths about whatever they feel like talking about. That’s not a user test – that’s a bull session.

So here are 2 of my favorite techniques for striking that balance between “customer survey” and “psychotherapy session.”

1. Let them know what kind of feedback you want.

This sounds obvious, but it’s surprisingly easy to forget. Most people have filled out forms, surveys, and questionnaires, but less scripted research is probably unfamiliar territory. Participants are generally eager to please, but they do need to know what sort of responses you’re looking for.

We begin Fast Insight tests by telling people we are interested in their unvarnished, unfiltered personal opinion. We’ll often add “I didn’t design this, so don’t worry about hurting my feelings.”

Okay, from time to time that’s a little white lie. We’re a small group, and we all do both research and design, sometimes on the same project. But it gets our message across: don’t hold back. (And it reminds us that, even if we did design the prototype, we need to let go of our attachments to it.)

2. Ask open-ended questions about specific things.

If your interviewees seem confused or unfocused, it doesn’t mean they don’t have opinions – they just may not know where to start.

The combination of asking an open-ended question, but asking it about a very specific thing, can work wonders.

Here are a few examples:

• “See that text at the bottom of the page – what do you think of it?”

• “What are your feelings about the sidebar?”

• “Tell me about the sliders on the left.”

As people are talking, we’ll frequently interject with follow-up questions. Asking “why?” over and over gets old (and can make you feel like an overly inquisitive toddler), so try one of these alternatives:

• “Tell me more about that.”

• “How does that make you feel?”

• “What makes you think/feel that?”

• “What’s going on with that?”

• “What is that like for you?”

• “I’d love to hear more about that.”

• “I’m curious about why that is.”

The reason this works so well is that it gives your users two important elements at the same time: a relevant starting point (the specific element you’re asking about) and a license to be honest and casual (the open-ended question).

Used together, these two techniques help me keep user tests focused but friendly. Give it a try, and let us know how it goes!

Be Experiment Smart

01.30.2014

Experimentation has hit the mainstream. It’s become de rigueur to sprinkle references to “A/B testing” into job descriptions, business plans, and company websites. But let’s look past the buzz for a moment. How you experiment is as important as whether you’re experimenting. Productive experimentation means picking the right experiment for the right phase in your project.

I’m teaching a class at the Stanford d.school this quarter that’s all about experimentation: we’re calling it Prototyping and Rapid Experimentation Lab. Along with my co-teacher Pam Hinds and our course assistant Nik Martelaro, we’re helping advanced design thinking students learn their way around how and why to experiment.

There are two core principles at work here. First, test any idea that you’re considering with as little commitment as possible. You want to learn on the cheap, before you’ve invested a lot of time and money going down the wrong path.

Second, experiment along multiple axes. You’ve heard of “A/B testing” – but there are many, many kinds of A/B tests. What exactly are you hoping to learn? You need to know how your idea stacks up according to these dimensions:

  • Interest – would people want this?
  • Use – does this solve a real need in a way that people want it solved?
  • Usability – is this easy, sensible, and efficient to use?
  • Implementation – can this be done?

 

I’m borrowing from some ideas proposed by Houde and Hill in their seminal paper What do Prototypes Prototype? (1997), and giving these ideas a spin to be more relevant to what and how we design today.

Here, I’m focusing on the first three axes (interest, use, and usability). Keep in mind, though, that you’ll eventually need to launch experiments about feasibility as well.

Let’s take a look at each axis independently to understand how to dive in.

Interest: would people want this?

Asking people “would you buy this?” is not going to give you an accurate sense of what they think of your idea. You need to be sneakier about assessing their interest. Create a situation where somebody happens upon your idea passively, and let them tell you whether they’re interested in this first taste. This is a great place for quantitative data – you want to prove objectively that there is enough potential to push forward.

Advertising

Run an ad (or several) and see if people click. Try Google ads, Facebook ads, or ads on community sites that your target user might visit (a foodie message board if your product is for restaurant diners, say). If you’re feeling especially sly, start chatting about the prospective product in forums where your users hang out and see what people think!

Landing pages and painted doors

To get more detailed feedback on your concept, create an actual landing page that describes it, and link the ads described above to that page. Create a button or other affordance on your site that is a doorway to the functionality you’re considering, but don’t build out the functionality yet. Gauge visitor interest by tracking page views, clicks, and/or sign-ups to learn more.

These quantitative approaches will show you what’s not working, but not why. For more qualitative feedback, go to a cafe (or wherever your potential users hang out) and show your design around for some quick guerilla feedback.

Create content

If your idea has to do with a certain kind of content, try starting a bare-bones blog, twitter feed, or pinboard. Driving and observing your traffic will teach you a lot about what readers are interested in. For example, if you’re hoping to make an app related to vegan eating, you might experiment with recipes, photos, restaurant reviews, travel guides, and nutrition advice, just to see what seems to catch on. Alternatively, try Hidden Radio’s approach – submit a description of your product idea to a well-trafficked blog and see if people seem interested. The team learned, in a week, that they’d hit on an idea worth pursuing.

Us or them?

Create a landing page for your product. Show your product to users, then show a competitor’s product (switch up which product you show first). Ask people to compare them. Which do they prefer? Which do they think looks most practical, most fun, most valuable?

What about surveys?

Surveys are notoriously poor at gauging interest. They can work well for getting factual, quantitative, objective data about real events (as long as respondents don’t feel social pressure to pick a particular answer). But you can’t learn much from a survey if you’re asking open-ended or hypothetical questions – people will say yes to anything. If you must ask a survey question about a nonexistent offering, create a quick prototype and ask about people’s reaction to the prototype.

Use: Are we solving the right problem?

So people are interested in your idea, and you’ve built a prototype. Now you need to figure out if you’re really solving their problem – and if you’re doing it in a way that makes sense for them.

Testing in context

Your product doesn’t exist in a vacuum. To get a sense of how context impacts how people use your product, build a prototype using one of a million tools out there and test it in context – or fake the context as closely as you can for testing purposes. For example, say you’re making an app that helps people troubleshoot their internet at home. To test it in context, you might start by instructing your user to do something engaging on the internet. Next, you, the tester, could artificially cause the network to fail, and see if your user can successfully use your product to fix it.

Wizard of Oz (WOZ) Tests

WOZ tests use low-tech workarounds to make it look like a computer is doing work. Create a working front end, but then set up a human-powered kludge for the backend. If you’re building an experience for online takeout, build the interface, complete with a “place your order” button – but have people behind the scenes call in orders to restaurants. You’ll learn a ton about your product without investing time or money in an order automation system.

In her book UX for Lean Startups, Laura Klein describes how Airbnb used WOZ testing: they had a sense that better photos increased a rental property’s desirability, so they hired a few pros to take photos of a few selected properties. The properties with the professional photos did much better – so much better that Airbnb now offers professional photography as a free service to people listing properties and has an automated backend to support it.

source: airbnb.com

Usability: Can people use it?

After interest and use have been assessed to determine whether you have a good idea to begin with, you can start to “debug” your prototype. What’s unclear, what’s hard to find, what needs tweaking?

Moderated usability testing

I’m sure you’ve seen this technique before: make a prototype, ask people to complete a task using your prototype, and see if they can do it. If they can do it, congratulations – your prototype is easy to use. But don’t forget to test its usefulness as well as its usability. I’ve seen plenty of products that are beautifully simple, exceptionally clear, and completely pointless.

At Sliced Bread, we conduct a variation on the classic usability test that we call a Fast Insight Test. We usually run 3 to 5 tests of around 30 minutes each, and we test early sketches, not just finished prototypes. These quick tests give us a treasure trove of information, without all the expense and overhead of a usability testing lab (we often test in context or through screensharing) or a long study. Often we squeeze even more value out of tests by iterating between sessions, or even on the fly. For more info on how to do it yourself, see our previous post on Fast Insight Testing.

Unmoderated usability testing

A whole industry has sprung up around automating user testing. You’ll have no trouble finding a service that will recruit users and run them through a task for you, then give you results: try Usabilla, usertesting.com, Mouseflow, Five Second Test, or Loop11. These tools will help you gather quantitative feedback on the usability of very specific aspects of your site. They’ll show you where the fire is, but not why it’s burning.

Now go out there and do it

There’s a lot of subtlety to experimentation, but at its core, it’s about doing. Whether you’re looking for information about interest, use, or usability, you can frame an experiment to help you out – you can get started this very afternoon. In my next post, I’ll talk more about how to develop the questions and hypotheses that drive a good experiment.

Ten Ways to Improve Your Demand Response Program

02.22.2010

demand response image

While grid energy portals are an important area for user centered design, there is an often overlooked design challenge in helping utilities craft a demand response (DR) program that really works. For readers unfamiliar with the term, demand response is a program, currently being explored by utilities, that asks customers to reduce electricity use during peak times in exchange for financial incentives. Utilities have recently launched DR programs with the basic assumption that providing access to energy usage data and an economic incentive would motivate users to change their behavior. Turns out, encouraging behavior change is not so easy. With that challenge in mind, I decided to look at what’s been done in the past to motivate energy behavior change and see how learnings from past efforts can be applied to the design of demand response systems – from a consumer perspective.

Based on my literature review, the following are ten ideas to consider when crafting your demand response program to create an effective user experience:

1. Carefully craft and explain rate structures
Construct the rates and program carefully, with consideration of more than just the economics. A 2008 study of a time-of-use pricing pilot found that suggestions for behavior change were highly sensitive to key family patterns such as mealtimes and did not work if they were disruptive to the household. To make sure you create a structure that is within the capabilities of your target audience, consider conducting a user study to understand how household behaviors align with specific time periods. Then you can craft a program with realistic expectations for consumption management and provide users with actionable advice that they can follow without changing their family patterns.

2. Create a goal – get commitment – provide feedback
Consider structuring the DR program so that participants get a specific difficult goal for participation, commit to the goal, and then get feedback on their goal. This type of structure has proven repeatedly to be one of the strongest approaches for motivating energy behavior change. In one study, researchers gave households a difficult goal (20% energy reduction), easy goal (2% reduction), or no goal for energy use. All groups (including the no goal control) were then given information on which appliances used the most energy. The goal was also combined with feedback or not. Households who received a difficult goal + feedback conserved the most (15.1%) and were the only group to significantly differ from the control. Participants with the easy goal did not differ in behavior from the control at all. To make an even stronger program, consider an extra reward if the goal is reached.

3. Provide frequent feedback
The more continuous the feedback, the more effective the intervention. In a seminal study conducted over 30 years ago, in 1979, households were given continuous feedback over a period of 11 months about the monetary cost of their electricity use by means of a monitor displaying electricity use in cents per hour. On average, households that had a monitor installed reduced electricity use by 12%. Although hourly, daily, weekly, and monthly feedback all create savings effects, the more frequent the feedback, the more effective it is. Consider creative ways to deliver that feedback via web portals, in-home devices, smart phones, or SMS (intermittently).

4. Emphasize choice and control
One study considered people’s resistance to installing automatic day/night thermostats. Once the thermostat was redesigned to allow residents to override the system temporarily, the thermostat was much more attractive to residents – even though in actual use most people never overrode them. Similarly, a DR program should emphasize choice and control – people can opt into the program and still have full control over their consumption.

5. Tap into the power of the group
One energy conservation program that had a lot of success enrolled people in groups where they discussed and compared conservation behavior with their social group on a long-term basis. Similarly, virtual networks of known groups can be set up to motivate participation in DR programs – for example by tapping into existing social networks of friends on Facebook to encourage participation.
In addition, consider a structure that offers additional savings if everyone in a group or neighborhood participates and reaches a set goal (see point 2 above). One study indicated that an incentive offered at both the individual and group level – in this case to all residents of one apartment building – was more effective than a solely individual incentive.

6. Frame program benefits as avoiding loss rather than emphasizing gain
The amount of joy that someone experiences when winning $100 is not equal to the consternation suffered when losing the same amount. Most people are more willing to take risks to avoid or minimize a loss than to increase their fortune. So, focus on showing residents how much money they are losing every month by not enrolling in demand response. Once the loss is obvious, people will take action.

7. Integrate complex information
When calculating energy savings, people usually can’t take into account all the elements such as rising fuel costs, the real long-term benefit, etc. So, do the math for them! Give price information that shows the full savings, presented as avoidance of a negative consequence of non-action (see the point above). Use the actual data you already know about the consumer’s energy usage to make the information actionable and real.

8. Present information using vivid personal stories and videos
Statistical data summaries and impersonal information are less effective than case studies and colorful stories for motivating participation. For example, imagine that you are considering a new car and are choosing between a Volvo and a Saab. Consumer Reports informs you that the consensus of its studies is that the Volvo has a better repair record. That evening, you go to a party and run into an acquaintance who tells you a horrific story about a Volvo. Although the Consumer Reports article is based on hundreds of repair records and your friend’s story is just one additional data point, most people will be swayed by their friend not to buy that car.
When communicating the benefits of a demand response program, demonstrate benefits with concrete stories about real people who save more energy than average but are “just like you.” To be even more effective, present the content in videos. Numerous studies have also shown that videos of people modeling the desired actions are more effective in getting people to change their behavior than written information or lectures.

9. Use a foot in the door strategy
Individuals who agree to a small initial task are much more likely to agree to a larger request. So, instead of asking people to enroll in the full DR program immediately, first ask them to take a small step, such as filling out a survey, and then later ask them to consider signing up for demand response as a follow-up to that first request. For example, one representative study showed that the percentage of people agreeing to have an unattractive sign encouraging careful driving placed on their front lawn increased dramatically (from 17% to 55%) if they had first been given the opportunity to sign a petition favoring safe driving.

10. Communicate trust
One key differentiator for successful energy programs is marketing that gets people to even consider trying them. We’ve found in our research that people inherently don’t trust their utility, so partner with a local organization people do trust to market your program. In a marketing experiment conducted in Minnesota, a county government contracted with a private company to install energy-saving equipment in homes in exchange for a percentage of the value of the energy saved. To market the program, households received one of three types of letters: one was sent on company letterhead with no mention of cooperation with the county, one went out on company letterhead and mentioned the county’s role, and the third went out on county letterhead and was signed by the County Board of Commissioners. The source of the information had a profound effect on consumer response – requests for energy audits came from 6%, 11%, and 26%, respectively, of households receiving the three types of letters.

Next Steps

We’re continuing to do more research in this area and will publish more insights expanding on some of the areas mentioned above. In the meantime, here’s a partial list of references…happy reading!

  • Abrahamse, W., Steg, L., Vlek, C., & Rothengatter, T. (2005). A review of intervention studies aimed at household energy conservation. Journal of Environmental Psychology, 25, 273-291.
  • Krantz, D. H., & Kunreuther, H. C. (2007). Goals and plans in decision making. Judgment and Decision Making, 2(3), 137-168.
  • Geller, E. S. (1992). It takes more than information to save energy. American Psychologist, 814-815.
  • Geller, E. S., Winett, R. A., & Everett, P. B. (1982). Preserving the environment: New strategies for behavior change. Elmsford, NY: Pergamon Press.
  • Lutzenhiser, S. et al (2009) Beyond the Price Effect in Time-of-Use Programs: Results from a Municipal Utility Pilot, 2007-2008. Presented at the International Energy Program Evaluation Conference, Portland, OR, August 12-14, 2009. http://drrc.lbl.gov/pubs/lbnl-2750e.pdf
  • McKenzie-Mohr, D. and Smith, W. (1999) Fostering Sustainable Behavior. Gabriola Island, B.C., Canada: New Society Publishers.
  • Swim, Janet et al. (2009) Psychology and global climate change: addressing a multi-faceted phenomenon and set of challenges. A report by the American Psychological Association Task Force on the Interface between psychology and global climate change. http://www.apa.org/science/about/publications/climate-change.aspx
  • Winett, R. A., Hatcher, J. W., Fort, T. R., Leckliter, I. N., Love, S. Q., Riley, A. W., et al. (1982). The effects of videotape modeling and daily feedback on residential electricity conservation, home temperature and humidity, perceived comfort, and clothing worn: Winter and summer. Journal of Applied Behavior Analysis, 15(3), 381-402.
  • Winett, R.A. and Geller, E.S. (1981) Comment on “Psychological research and energy policy”. American Psychologist, 425-426.
  • Yates, S. and Aronson, E. (1983) A social psychological perspective on energy conservation in residential buildings. American Psychologist, 435-444.
  • Stern, P. C., Aronson, E., Darley, J. M., Hill, D. H., Hirst, E., Kempton, W., et al. (1986). The effectiveness of incentives for residential energy conservation. Evaluation Review, 10(2), 147-176.


Which Metrics Equal Happy Users?

| 12.3.2009

One of the greatest tools available to me as an interaction designer is the ability to see real metrics. I’m guessing that’s surprising to some people. After all, many people still think that design all happens before a product ever gets into the hands of users, so how could I possibly benefit from finding out what users are actually doing with my products?

Well, for one thing, I believe that design should continue for as long as a product is being used by or sold to customers. It’s an iterative process, and there’s nothing that gives me quicker, more accurate insight into how a new product version or feature is performing than looking at user metrics.

But there’s something that I, as a user advocate, care about quite a lot that is really very hard to measure accurately. I care about User Happiness. Now, I don’t necessarily care about it for some vague, good karma reason. I care because I think that happy users are retained users and, often, paying users. I believe that happy users tell their friends about my product and reduce my acquisition costs. I truly believe that happy users can earn money for my product.

So, how can I tell whether my users are happy? You know, without talking to every single one of them?

Although I think that happy users can mean more registrations, more revenue, and more retention, I don’t believe the reverse necessarily holds. In other words, there are all sorts of things I can do to retain customers or get more money out of them that don’t actually make them happy. Here are a few of the important business metrics you might be tempted to use as shorthand for customer happiness, and why each one can mislead you:

Retention

An increase in retention numbers seems like a good indication that your customers are happy. After all, happier customers stay longer, right?

But, do you mean retention or forced retention? For example, I can artificially increase my retention numbers by locking new users into a long contract, and that’s going to keep them with me for a while. Once that contract’s up, they are free to move wherever they like, and then I need to acquire a new customer to replace them. And, if my contract is longer than my competitors’, it can scare off new users.

Also, the retention metric is easy to inflate with switching barriers, which may increase the number of months I keep a customer while making them less happy. Of course, if those switching barriers are removed for any reason – for example, cell phone number portability – I can lose my hold over long-time customers.

While retention can be an indicator of happy customers, increasing retention by any means necessary doesn’t necessarily make your customers happier.

Revenue

Revenue’s another metric that seems like it would point to happy customers. Increased revenue means people are spending more, which means they like your service!

There are all sorts of ways I can increase my revenue without making my customers happier. For example, I can rope them into paying for things they didn’t ask for or use deceptive strategies to get them to sign up for expensive subscriptions. This can work in the short term, but it’s likely to make some customers very unhappy, and maybe make them ex-customers in the long run.

Revenue is also tricky to judge for free or ad-supported products. Again, you can boost ad revenue on a site simply by piling more ads onto a page, but that doesn’t necessarily enhance your users’ experience or happiness.

While increased revenue may indicate that people are spending more because they find your product more appealing, it can also come from sacrificing long-term revenue for short-term gains.

NPS – Net Promoter Score

The Net Promoter Score is a measure of how many of your users would recommend your product to a friend. It’s actually a pretty good measure of customer happiness, but the problem is that it can be tricky to gauge accurately. It generally has to be gathered through surveys and direct customer contact rather than simple analytics, so it suffers from self-reported data and small sample sizes. It also tends to be skewed toward the type of people who answer surveys and polls, who may or may not be representative of your customer base.

While NPS may be the best indicator of customer happiness, it can be difficult to collect accurately. Unless your sample size is quite large, the variability from week to week can make it tough to see smaller changes that may warn of a coming trend.
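
For what it’s worth, the arithmetic behind NPS is trivial once you have the responses; the hard part is collecting enough of them. A quick sketch with made-up survey data (scores are answers to the standard 0-10 “would you recommend us?” question):

    # Standard NPS arithmetic: percent promoters (9-10) minus percent detractors (0-6).
    def net_promoter_score(scores):
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return round(100 * (promoters - detractors) / len(scores))

    responses = [10, 9, 9, 8, 7, 7, 6, 5, 9, 10, 3, 8]   # hypothetical survey results
    print(net_promoter_score(responses))   # 17 -- with only 12 responses, one answer shifts it ~8 points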

Conversion to Paying

For products using the freemium or browsing model, this can be a useful metric, since it tells you that people like your free offering enough to pay for it. However, it can take a while to collect the data after you make a change to your product, because you have to wait for enough new users to convert to paying customers.

Also, it doesn’t work well on ad-supported products or products that require payment upfront.

Most importantly, it doesn’t let you know how happy your paying customers are, since they’ve already converted.

Conversion to Paying can be useful, but it is limited to freemium or browsing models, and it tends to skew toward measuring the free part of the product rather than the paid product.

Engagement

Engagement is an interesting metric to study, since it tells me how soon and often users are electing to come back to interact with my product and how long they’re spending. This can definitely be one of the indicators of customer happiness for ecommerce, social networking, or gaming products that want to maximize the amount of time spent by each user. However, increasing engagement for a utility product like processing payroll or managing personal information might actually be an indicator that users are being forced to do more work than they’d like.

Also, engagement is one of the easiest metrics to manipulate in the short run. One-time efforts, like marketing campaigns, special offers, or prize giveaways, can temporarily increase engagement, but unless they’re sustainable and cost-effective, they’re not going to contribute to the long-term happiness of your customers.

For example, one company I worked with tried inflating their engagement numbers by offering prizes for coming back repeatedly for the first few days. While this did get people to return after their first visit, it didn’t actually have any effect on long term user happiness or adoption rates.

Engagement can be one factor in determining customer happiness, but this may not apply if you don’t have an entertainment or shopping product. Also, make sure your engagement numbers are being driven by actual customer enjoyment of your product and not by artificial tricks.

Registration

While registration can be the fastest metric to show changes, it’s basically worthless for figuring out how happy your users are, since they’re not interacting with the product until after they’ve registered. The obvious exception is products with delayed (i.e. lazy) registration, in which case it can act like a lower-barrier-to-entry version of Conversion to Paying. When you allow users to use your product for a while before committing, an increase in registration can mean that users find your product compelling enough to take the next step and register.

Registration is only an indicator of happy customers when it’s lazy, and even then it’s only a piece of the puzzle, albeit an important one.

Customer Service Contacts

You’d think that decreasing the number of calls and emails to your customer service team would give you a pretty good idea of how happy your customers are. Unfortunately, this one can be manipulated aggressively by nasty tactics like making it harder to get to a representative or find a phone number. A sudden decrease in the number of support calls might mean that people are having far fewer problems. Or, it might mean that people have given up trying to contact you and gone somewhere else.

Decreased Customer Service Contacts may be caused by happier customers, but that’s not always the case.

So which is it?

While all of these metrics can be extremely important to your business, no single one can tell you if you are making your customers happy. However, looking at trends in all of them can certainly help you determine whether a recent change to your product has made your customers happier.

For example, imagine that you introduce a new element to your social networking site that reminds users of their friends’ birthdays and then helps them choose and buy the perfect gifts. Before you release the feature, you decide that it is likely to positively affect:

  • Engagement – every time you send a reminder of a birthday, it gives the user a reason to come back to the product and reengage.
  • Revenue – assuming you are taking a cut of the gift revenue, you should see an increase when people find and buy presents.
  • Conversion to Paying – you’re giving your users a new reason to spend money.
  • (Lazy) Registration – if you only allow registered users to take advantage of the new feature, this can give people a reason to register.
  • Retention – you’re giving users a reason to stay with you and keep coming back year after year, since people keep having birthdays.

Once the feature is released, you look at those numbers and see a statistically significant positive movement in all or most of those metrics. As long as the numbers aren’t being inflated by tricks or unsustainable methods (for example, you’re selling the gifts at a huge loss, or you’re giving people extra birthdays), you can assume that your customers are being made happy by your new feature and that the feature will have a positive impact on your business.
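
If you want to put a number on “statistically significant positive movement,” here is one minimal way to check whether a conversion-style metric really moved after a release. It’s a plain two-proportion z-test with made-up before/after counts, not a full experimentation framework.

    # Illustrative sketch: did conversion genuinely improve after the release, or is it noise?
    from math import sqrt

    def conversion_moved(conv_before, n_before, conv_after, n_after, z_crit=1.96):
        p1, p2 = conv_before / n_before, conv_after / n_after
        pooled = (conv_before + conv_after) / (n_before + n_after)
        se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
        z = (p2 - p1) / se
        return p2 - p1, z, z > z_crit   # lift, z-score, significant at roughly 95%?

    # 4.0% of 10,000 users converted before the feature; 4.6% of 10,000 after.
    print(conversion_moved(400, 10000, 460, 10000))   # (0.006, ~2.09, True)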

Of course, while you’re looking at all of your numbers and metrics and analysis, some good old-fashioned customer outreach, where you actually get out and talk directly with users, can also do wonders for your understanding of WHY they’re feeling the way they’re feeling. But that’s another post.

Interested? You should follow me on Twitter.
