Task-Centered User Interface Design
A Practical Introduction
by Clayton Lewis and John Rieman
Copyright ©1993, 1994: Please see the "shareware notice" at the front of the book.

Appendix M: Managing User Interface Development

We've covered the key methods required for developing good user interfaces. We've seen in a general way how these pieces fit into an overall process. But how do you make this process happen in your organization?

In this chapter we'll write as if you are the manager of your development group: if you get to make the decisions, here's how you ought to make them. If you aren't the manager you ought to read this stuff anyway. You may be able to influence the people you work with, or you may be able to recognize that you need to find a job in a better organization.

M.1 Staffing

Of course you want the right people in your group, but what makes people right for user interface development does not seem to be obvious to a lot of managers. We'll start with two don'ts.

DON'T get high-powered technical people who think computers and the things they can make them do are the most interesting and important things in life, and who think people are an unfortunate, but temporary, faltering in the march of evolution. Clayton once worked with a user interface designer who said that if he could get the results of user testing of his designs immediately and at no cost he would ignore them, because he KNEW his interface designs were the best possible and if users didn't like them they were wrong. This guy was a great programmer, and popular with management for that reason. But he was hopeless as an interface designer because he was interested in his designs just as pieces of computing, not as things actual people could or could not do their actual work with.

On the other hand, DON'T get psychologists or human factors people who literally or figuratively want to wear white lab coats as symbols of their status as serious behavioral scientists. These folks may seem to be interested in people, but they really aren't: at any rate they're interested in them only as objects of study rather than as living beings trying to get things done. The key symptom to be concerned about: the person refuses to offer a judgement on anything without running a big experiment.

You have to be careful here, because some of the very best user interface designers are in fact psychologists. Psychologists know a lot about how to do user testing, and about how people solve problems, how they learn, and other matters crucial to good interface design. If they are willing to use this knowledge in the service of building useful systems, and stay focussed on that goal, they can be invaluable. But if doing a good study is more important to them than building a good system, they can't help you much.

Another point of caution: a person who refuses to offer a judgement about anything without visiting some users may seem just as unhelpful as the person hung up on experiments. But in fact this may be just the person you want. There's no virtue in being free and easy with judgements just because a quick judgement lets people get on with the job. You want people who care enough about the success of your system to get the information needed to do things right. As we've stressed here all along, that information centers on users and their work.

More abstractly, the people you want are interested in the richness and detail of human life. They like to know what people do and how they do it, and what problems they encounter. They're more excited about seeing their system help somebody do real work than about the logic of their design. In our opinion this trait is more important than technical skills, whether in computing or psychology, because it's harder to acquire.

M.2 Organization

Traditional organizational structures often segregate people by technical specialty, so that planners, designers, programmers, writers, usability people, and quality control people find themselves in different groups. We urge you to avoid this setup if you possibly can. Aim instead for an integrated organization in which the people responsible for design, implementation, evaluation, and documentation of your user interface all belong to your development group.

There are three crucial issues here. One is integration of design: you can't separate user interface design from specification of functions and from documentation without loss of quality. The following example gives Clayton's favorite illustration of this point.

Example: The Worst Interface Ever and How It Came About

When Clayton was working for a large computer company he got a call from a planner down south who wanted people to come and see a great new financial analysis product. He said it was better than the spreadsheet (a hot new concept at the time) and people around the company needed to come and see it.

Clayton and a bunch of other interested people showed up for what turned out to be a kind of mass user trial, with a room full of people sitting in pairs at terminals trying and failing to make the thing work. He and his partner had trouble getting anywhere, and were especially baffled by what seemed to be unrepeatable errors. They would run into trouble, back up, and try to do the very same thing again, only to get different results. The developers were hovering around, looking puzzled and hurt, and Clayton called one over for consultation.

After some discussion the developer explained the problem. This system ran on a type of terminal in which most keys produced input that was buffered in the terminal, but some special keys, including the function keys, communicated immediately with the host computer. The ENTER key was one of these special keys, and the developers had the bright idea of using it as an extra function key. They arranged the system so that when you hit ENTER you got whatever function was associated with the last function key you had pressed. Clayton and his partner were getting baffling results because in fooling around between attempts at their task they hit different function keys and hence set up different bindings for ENTER. The developer was surprised they hadn't figured that out.

"Is there some way we can tell what we'll get if we use ENTER?" Clayton asked. "I'm surprised you didn't figure that out either," said the developer. "See that list of function key bindings at the bottom of the screen? The function you'll get is the one that's missing from that list."

Also hovering in the room was a technical writer who came over to join the discussion. "I'm so delighted you're having all these problems," she said. "I keep trying to tell them there's no way in the world I can describe this so it seems sensible, but they won't listen. They say they know there are rough spots but I should just explain them in the manuals."

This is the type specimen of the "peanut butter theory of usability," in which usability is seen as a spread that can be smeared over any design, however dreadful, with good results if the spread is thick enough. If the underlying functionality is confusing, then spread a graphical user interface on it. (In fact, that was exactly the origin of this system: it was an existing financial modelling package to which a new user interface had been fitted.) If the user interface still has some problems, smear some manuals over it. If the manuals are still deficient, smear on some training which you force users to take.

Of course the theory doesn't work, as this system showed so dramatically. The original design has to consider usability, and the problem of how to explain things to users has to be dealt with up front, not as an afterthought.

The trial session was a great success for the guy who invited Clayton. He knew all along the system was a disaster, but he knew he would need help to kill it. He also knew no-one would come if he told them the truth about it. So he lied, lots of people came from around the company, and the project was quietly shelved.

The second issue is avoiding what we call "the doer-kibitzer split". In one common setup, usability people sit in a support group empowered to review the designs the developers come up with (with or without testing them) and suggest usability improvements. Consistently this setup leads to the development of two bad attitudes. The developers come to see the usability people as a drag on their progress, outsiders who just sit and snipe while the developers try nobly to press on with the real work of the organization. The usability people complain that they get no respect, and that the developers are insisting on shipping rubbish that will bankrupt the company.

Some organizations have responded to this by allowing developers to choose freely whether or not to call on the usability people for advice, and whether or not to pay any attention to the advice they get. This takes some of the poison out of the air, though usability people can still feel like fifth wheels. But in our view it doesn't give usability the central focus it needs to have.

Another organizational variation is to entrust not just usability critiquing but all of user interface design to a separate support group. This again takes some of the poison out but it moves user interface design out of the center of power in development and off to one side. The main development group correctly views getting a good user interface as somebody else's job.

We think user interface development should be just as much a core responsibility of the main development group as any other aspect of function, implementation or performance. Members of the development group should be encouraged to respond professionally to that responsibility, rather than to pass it off to somebody else. Just as developers make it a point of professional pride to be knowledgeable about programming languages and tools, so they should demand of themselves and their co-workers that they be knowledgeable about their users and the work they do. Just as they hold themselves to high standards regarding good choices of data representations and algorithms, they should set high standards for the fit between their user interface and user needs. All these things should be seen as parts of their professional contribution, all demanding professional knowledge and hard work of them, not of somebody else.

Are we saying everybody has to be a usability specialist? No, no more than everybody in a group has to be an algorithms specialist. In any group there are people who focus more on some aspects of the job and less on others. That will be as true for usability as it is for algorithms or anything else. What we think should be avoided is an organizational structure that puts up a barrier with usability on one side and other issues on the other, and with usability people on one side and everybody else on the other. That makes it too easy for most people in the organization to ignore their own responsibility for usability.

One argument you hear in favor of segregating usability people organizationally is that it supports their professional development. Since many usability people have training in psychology, rather than in computer science, the argument goes, you need to create an environment in which they work with other people with similar background and interests. This will keep them from feeling isolated in a sea of programmers. It may also create a career path for them, since there will need to be more senior usability people, managers of usability groups, and so on. Usability managers will recognize that the usability people are different from programmers and treat them better.

The problems this argument describes are real enough: usability people with a weak background in computing do have a hard time making it. But segregation doesn't avoid the problems, and in fact can make them worse by creating official second-class citizens instead of unofficial ones. Individual usability people can hope to broaden their knowledge and be accepted as regular systems people, only with added value, if they are not trapped in a limited role. Similarly, regular systems people can develop usability skills, if usability isn't made out of bounds for them.

M.3 Resource Allocation

One of your key functions as a manager is to decide how much effort to spend on various aspects of development. This saddles you with one of the toughest problems in user interface development: when should you stop iterating your design?

If you demand a scientific answer to this question you probably won't be a very good manager: you're paid to make lots of decisions like this, without having a good basis for them. When is performance good enough? When is the bug rate low enough? You have to make calls like this on the basis of ideology and instinct, not science. (Six-sigma quality, a defect rate around ten to the minus 7, isn't something companies aspire to because they have data showing it's the economic rate; they aspire to it because it says something about their view of themselves. It's a way they can be the best.)

Well, you say, I AM a good manager, and I still demand a scientific answer on when my user interface is good enough. We're still going to give you a hard time. Do you have scientific answers to those other questions, the ones about performance and bug rate? No? Then why do you insist on one for usability? We think it's because you don't want to be held responsible for usability: you want to pass the buck to some scientific decision process. You don't do this for performance and bug rates because you accept them as part of your territory as a professional. Like it or not, usability is part of your professional territory too and you will only get grief by trying to pretend it's not.

If you're still interested after that sermon, we'll tell you how to get that scientific answer. Read the HyperTopic on quantitative usability targets.

HyperTopic: Quantitative Usability Targets

Warning! Unrecommended method!

[We'll expand this topic in a future version. For now, the idea is that you set quantitative usability goals for the product: times for test tasks, number of errors, rated test user satisfaction. Then you keep iterating until you get test results showing you have met your objectives. Sounds simple, but problems are many: how to handle statistical uncertainty, how to set the targets to begin with, how to avoid adjusting the targets when you don't live up to your ambitions.]
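If you do go down this road anyway, the target check itself can be sketched in a few lines. The following Python fragment is our illustration, not anything the method above prescribes: the task times, the 120-second target, and the one-sided normal-approximation bound are all assumptions made up for the example.

```python
import statistics

def meets_time_target(task_times, target_seconds, z=1.645):
    """Is mean task time credibly below the target?

    Uses a one-sided 95% confidence bound with a normal
    approximation; for the small samples typical of user
    tests, a t-based bound would be more conservative.
    """
    n = len(task_times)
    mean = statistics.mean(task_times)
    sem = statistics.stdev(task_times) / n ** 0.5  # standard error of the mean
    return mean + z * sem <= target_seconds

# Hypothetical times (seconds) for eight test users on one benchmark task:
times = [95, 110, 102, 88, 120, 99, 105, 92]
print(meets_time_target(times, target_seconds=120))  # prints True
```

Note that the statistical machinery does nothing about the hard problems flagged above: where the 120-second target comes from in the first place, and the temptation to move it when you miss.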

So if science won't help you, how do you decide about those iterations? Here are the main factors you'll be juggling.

How do you feel about the interface? Are you proud of it? Does it work smoothly on your sample tasks? If you don't feel good about your interface you have to do more work. Users will feel the same way.

Can you afford to work longer? Why not? If your answer to the last question was negative, meaning you think the interface is still crummy, you may not be able to afford NOT to work longer. If the interface is probably good enough, you may still easily be able to afford more work on it, especially if other aspects of the system are slipping.

You'll have an easier time with these questions and the decision that hangs on them if you've done some preparation at the time you planned out your project. First, you should have included in your original schedule at least two iterations, so you don't have to make any tough decisions before the design has a chance to be in reasonable shape. Second, you should have gotten interface design started early, and overlapped it with other development work. If you are lucky this means interface design will not be on the critical path for the project as a whole, meaning that you can take a little longer with it without extending the overall schedule. Third, you should have adopted software tools that minimize the time required to make interface changes.

Here's one more idea for taking some of the stress off the iteration decision. Make a plan that includes a short period, say a week or two, just for user interface improvements, as late in the schedule as you can tolerate (you have to worry about redoing pictures in manuals, for example, so changes can't be literally at the last moment). Get everybody to agree to spend that time just on polishing up the user interface. If you do this then whenever you decide to stop iterating there'll still be a chance for some final finish work.

HyperTopic: What If Nobody's Willing to Hold Back the Product for Usability Work?

Peter Conklin of Digital, drawing on earlier work from Hewlett Packard, has developed a useful way to increase willingness to invest in product improvements of all kinds, including usability improvements, by getting people to think differently about ship dates and their significance (In M. Rudisill, T. McKay, C. Lewis, and P.G. Polson (Eds.), "Human-Computer Interaction Design: Success Cases, Emerging Methods, and Real-World Context." Morgan Kaufmann. In press.). The idea is to replace emphasis on TIME TO MARKET by emphasis on TIME TO BREAK EVEN.

Many companies measure projects by how quickly they ship. A project that gets to market sooner is rated better than one that takes longer. Conklin points out that the point of getting to market is to make money, and so a more important target date is the time the product recovers its development costs and starts to earn a profit: time to break even. Anything that increases the rate of product acceptance, that is, the growth of sales volume, will shorten time to break even, and if the increase in acceptance is big enough, time to break even may be shorter even if time to market is longer. It's smart to take time to produce a better product if the impact on acceptance is big enough.
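To make the arithmetic concrete, here's a toy break-even comparison in Python. All the numbers (development cost, ship dates, the profit ramp) are invented for illustration; the only point is that a later ship with faster acceptance can break even sooner.

```python
def months_to_break_even(dev_cost, ship_delay_months, monthly_profits):
    """Months from project start until cumulative profit covers dev cost.

    monthly_profits: projected profit in each month after shipping.
    Returns None if the projection never breaks even.
    """
    cumulative = 0
    for month, profit in enumerate(monthly_profits, start=1):
        cumulative += profit
        if cumulative >= dev_cost:
            return ship_delay_months + month
    return None

# Plan A: ship at month 12; profit ramps up by 20,000 a month.
plan_a = months_to_break_even(600_000, 12, [20_000 * m for m in range(1, 25)])
# Plan B: two extra months of usability work (costing 50,000 more),
# but acceptance grows two and a half times as fast.
plan_b = months_to_break_even(650_000, 14, [50_000 * m for m in range(1, 25)])
print(plan_a, plan_b)  # prints 20 19
```

Plan B ships two months later yet breaks even a month earlier, which is exactly Conklin's point.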

Nothing in Conklin's approach makes decisions for you, but it does help the tone of group discussions. If you focus on time to market, any effort that delays shipment is bad, period. Somebody holding out for taking more time for usability improvements looks like they're just standing in the way of progress and making the whole project look bad. If I've implemented my routines on time, I'll resent giving the user interface people more time, because I'll be just as late as they are if the product slips. If you focus on time to break even, added development time can be good, if it's important enough (and there aren't overriding timing considerations, like an impending competitive release that could freeze you out). Anybody proposing added development has a clear shot at persuading everybody else in a rational discussion. If the user interface people manage to move up the time to break even, everybody looks good.

Here are a few last management suggestions.

HyperTopic: I can't get my management to do things right

Lots of usability people have tried to make careers of "educating" managers about the importance of usability, user testing, etc. etc. Don't waste time on this. If you are proposing concrete, practical work and management won't listen, quit and get a job working with people who are smart enough to do what's in their own interest. If you think your organization has a bright future without worrying about usability, and you want to stay, then don't you worry about usability either: get out of usability work.

M.4 Product Updates

No matter how wonderful your product is you'll have to upgrade it over time. There'll be changes in the platform and in user expectations that will affect some of your existing users as well as new users you hope will buy the product. This poses a big dilemma: how do you improve the user interface without turning off your loyal existing users, who have gotten used to the thing the way it is?

Marcy Telles of WordStar, a product which has faced this issue in a big way, argues that updating is harder than creating a new interface, because all the same problems arise with the added constraint of working around the existing interface ("Updating an older interface," Proc. CHI'90 Conference on Human Factors in Computing Systems. New York: ACM, 1990, pp. 243-247.) She recommends trying to work as much as possible by adding options to the existing interface, so that the habits of existing users will still work. In the case of WordStar it was possible to move to a modern menu-based interface without disturbing the old keystroke commands very much. Telles also recommends discussing possible changes thoroughly with existing users, so you know what features of the old interface they are really dependent upon.

Copyright © 1993,1994 Lewis & Rieman