In the dim distant past, some of us used to distinguish our work from the masses by declaring proudly that we were ‘user-centered’. At one time this actually meant you did things differently and put a premium on the ability of real people to exploit a product or service. While the concern remains, and there are many examples of designs that really need to revisit their ideas about users, I find the term ‘user-centered’ to have little real meaning anymore. It is not just that everyone claims this label; after all, who in their right mind would ever declare their work not user-centered and still expect to have an audience? It is more that truly understanding the user seems beyond both established methods and established practices.
I will leave aside here any argument about the term ‘user’. Some people have made careers out of dismissing that term and proposing the apparently richer ‘person’ or ‘human’, but the end result is the same (though I prefer to talk of human-centered rather than user-centered myself). The real issue is methodological.
First, claiming adherence to user-centered methods and philosophies is too easy; anyone can do it. Ask people what they would like to see in a re-design and you have ‘established’ user-requirements. Stick a few people in front of your design at the end and you have ‘conducted’ a usability test. Hey presto, instant user-centered design process. If only!
Second, and more pernicious, the set of methods employed by most user-centered professionals fails to deliver truly user-centric insights. The so-called ‘science’ of usability which underlies user-centeredness leaves much to be desired. It rests too much on anecdote, assumed truths about human behavior, and an emphasis on performance metrics that serve the perspective of people other than the user. ISO-defined usability metrics refer to ‘efficiency’, ‘effectiveness’ and ‘satisfaction’. These do not correlate, so one needs to capture all three. But who gets to determine what constitutes effective and efficient anyhow? In many test scenarios this is a determination made by the organization employing the user, or by the design team’s view of what people should do, not by the user herself. Maybe this should be called organization-centric or work-centric design. If I wanted to start a new trend I could probably push this idea into an article and someone might think I was serious.
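To make the measurement point concrete, here is a minimal sketch, using entirely hypothetical session data and a made-up target time, of how the three ISO 9241-11-style measures are often operationalized in a test report. Every name and number below is illustrative, not drawn from any particular study.

```python
# Hypothetical sketch of how ISO-style effectiveness, efficiency and satisfaction
# are often operationalized in a usability test report. The key point: the success
# criterion and the target time are set by the analyst/organization, not by the users.

from statistics import mean

# Hypothetical session data: (task completed?, time in seconds, 1-7 satisfaction rating)
sessions = [
    (True, 95, 6),
    (True, 140, 5),
    (False, 300, 2),
    (True, 110, 6),
]

TARGET_TIME_S = 120  # decided by the design team or the commissioning organization

effectiveness = mean(1.0 if done else 0.0 for done, _, _ in sessions)       # completion rate
efficiency = mean(TARGET_TIME_S / t for done, t, _ in sessions if done)     # speed relative to an imposed target
satisfaction = mean(rating for _, _, rating in sessions)                    # the only user-authored number

print(f"effectiveness: {effectiveness:.0%}")
print(f"efficiency vs. target: {efficiency:.2f}")
print(f"mean satisfaction (1-7): {satisfaction:.1f}")
```

Even in this toy example, the only number that comes straight from the participants is the satisfaction rating; the other two are scored against criteria someone else chose.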
What is often overlooked is that the quality of any method is determined far too much by the quality of the evaluator who employs it. Evaluation methods are all flawed, that much is a given, but it is the unwillingness of many people to recognize these shortcomings that should give us all real concern. Here’s but one example. The early Nielsen work on heuristic evaluation has given rise to the ‘fact’ that evaluators find about 35% of usability problems following his method, and that if you pool several reviewers you can get a better hit rate. What many people overlook is that the 35% figure is not calibrated against real user problems but is based on Nielsen’s own interpretation of the problems users will face. So the 35% claim is really a claim that, following his method, you will probably find a third of the problems that Nielsen himself estimates are real problems for users. This is a very different thing. It is interesting that in my own tests with students this 35% figure holds pretty firm, which is impressive, but you cannot lose sight of what that percentage relates to or you will misunderstand what is going on.
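For readers who want the arithmetic behind the ‘pool several reviewers’ claim, here is a minimal sketch assuming the simple independence model usually cited alongside Nielsen’s work, with the nominal 35% single-evaluator rate plugged in. Bear in mind that the percentages it produces are relative to the estimated problem set, not to problems calibrated with real users.

```python
# Sketch of the pooling arithmetic often cited with the 35% claim, assuming a
# simple independence model: proportion found by n evaluators = 1 - (1 - p)**n,
# where p is the nominal single-evaluator hit rate.
# Note: "problems found" here is measured against the expert's own problem set.

def pooled_hit_rate(p: float, n: int) -> float:
    """Expected proportion of the reference problem set found by n independent evaluators."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 0.35  # nominal single-evaluator rate from the heuristic-evaluation literature
    for n in (1, 3, 5):
        print(f"{n} evaluator(s): ~{pooled_hit_rate(p, n):.0%} of the *estimated* problems")
    # 1 -> ~35%, 3 -> ~73%, 5 -> ~88%, all relative to the estimated set
```

Pooling more reviewers raises the percentage quickly, but the denominator never changes: it is still an estimate of what users’ problems are, not an observation of them.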
Now of course, there are great evaluators out there, but even if all evaluators were great, that would not change the problem with user-centeredness as it currently exists. Too much evaluation occurs too late to matter. OK, this is an old story, but what’s changed since this story was first heard? Not enough. If user-centered design really is limited to evaluating and designing for a narrowly construed definition of usability then there is little prospect of change. For a limited range of tasks where I want to be efficient (finding a number in my cell-phone, for example), current practices are fine, as long as I can prototype quickly; but for the type of deeply interactive tasks that I might spend large parts of my day engaged in (reading, communicating, exploring data, etc.), talk of ‘effective’ and ‘efficient’ rings more hollow. But it is precisely this next generation of application opportunity that we need to explore if we are to augment human capabilities. The old usability approach is fine for applications where we are making digital that which used to be performed with physical resources (text editing, mailing, phoning, calculating) but it’s not a great source of help for imagining new uses.
If we could de-couple user-centered design and usability then there might be some benefit but I don’t think this is as important as it might first appear. More important is the very conception we have of users and uses for which we wish to derive technologies and information resources. Designing for augmentation is a very real problem and a great challenge for our field theoretically and practically.
From Chad:
I wholeheartedly agree that the term user-centered (user-centric, ugh) is reaching buzzword status. It’s becoming a feel-good word managers can throw around to gain credibility. I also agree that its practice is somewhat… “impoverished.” But this impoverishment is not due to the UCD methods themselves, or even (usually) the skill of those employing UCD. In my experience, the problem is one of stakeholder expectations about the amount of research and UCD involvement required to ship a good product.
In my experience, most stakeholders understand that good design requires some contact with real users, but there are always pressures to deliver products more rapidly and more cheaply. I like the old adage of FAST, CHEAP, GOOD – pick two; you can’t have all three. I have seen UCD failures occur most frequently because the design team was not able to sell the stakeholders on performing adequate research to inform the design. In this scenario, some UCD is done, but not enough, and so assumptions and guesses are made and the resulting design suffers, not because of UCD, but because of the lack of it. Meanwhile, the stakeholders will of course talk up how their product was usability tested, and what a difference it made, further watering down the notion of UCD.
It is up to us as designers and UCD practitioners to battle this – to make the case for more complete and holistic UCD activities in our projects and to take ownership of the whole user experience, not just the “usability” aspect. During the last couple of years, I have seen consideration of the “whole experience” really catching on. I try to think of my role as helping to make a product usable, useful, and desirable, and I structure my work towards those ends. I harp on this “trinity” of design endlessly, and after a while, people catch on. This approach can open doors to doing the kind of user research and UCD needed to gain those insights that lead to great products.
I should add that it takes great leaders to structure teams (people) and work so that research and design efforts flow into and feed one another resulting in good design and rapid turnaround without sacrificing quality. This is truly a daunting task.
Hello
Great article. UCD is now a word without sense, practice, or content. Usability is a duty; we never use this word for a car, a pen, or food. The IT world thinks it invented design and innovation, but not at all. That a product or service is not only about tech is right. That a product or service must focus only on usability and the user is wrong: it is not enough. Remember that people are not only users. Think about look, symbolism, signs, and practices versus uses. Think with a systemic, human, functional, and aesthetic approach.
I concur with the author. But what changes are suggested?
(Everyone complains about the weather, but nobody does anything about it.)
Where I see the weakness is in neglecting UX in the project scoping and budgeting phase of any project – especially IT projects. Products, food, and services are fairly rigorously field-tested, but there seems to be a vast cultural divide between those who fund, those who code, and those who use.
It is up to us (designers, usability experts, quality directors, etc.) to advocate and demonstrate the business case for such qualitative and quantitative UCD studies.
In the medical field, the “User” works in a complex environment, a mosh pit of gadgets, applications, pumps, infusers, monitors, and alerting systems, each with its own unique UI and each seemingly designed without consideration of the others.
In spite of this, funding for ethnographic research is typically left out of the design process, with random focus groups, surveys, and anecdotal user stories substituted in its place.
With the current stimulus package infusing Health IT with incentives to develop more, faster, we fear that poor user experiences will increase as cluttered workflows become models for what–and how–to automate.
See the National Research Council’s 2009 report, Computational Technology for Effective Health Care: Immediate Steps and Strategic Directions:
http://books.nap.edu/openbook.php?record_id=12572&page=R1
Folks — thanks for all the comments. I want to respond a little to some of the comments. I don’t see my entry as a complaint about the practice of UCSD (though I could do so and I can see how some might read it as such) — I am taking aim at the very concept of user-centeredness, which has had a long shelf-life but to which it is too easy to claim adherence. Who would claim NOT to be user-centered and remain employed? Consequently, the term has little real value anymore and is applied no matter how poorly the techniques and methods are used.
In the interests of being constructive, let me suggest a couple of ways things could be improved.
1) Distinguish between usability and learnability. Most usability tests are short run tests of people’s initial reactions to a technology. This privileges the earliest phases of human reactions, the making sense of something new. This is important but it is not the full story. I have shown repeatedly in my own work that people’s later reactions take time to emerge and often run counter to their initial ones. Since humans always adjust over time, we might want to think about factoring that natural process into our evaluations. We need longitudinal studies. One of these is worth 10 quick user tests, in my view, all other things being equal.
2) Let’s measure more than effectiveness, efficiency and satisfaction. These are fine baseline measures for some tasks but not for all. Further, since I am interested in tools that can aid creativity and discovery, I don’t find the ISO measures to be the most insightful when it comes to characterizing the interactive practices and outcomes I wish to identify. Furthermore, there is a deep assessment issue built into the classic task measures: they assume that people other than the task performers can best set expectations for efficiency and the like. This may be beyond most UCSD practitioners’ professional roles, but any organizational design that fails to recognize and compensate for such a bias is hardly worthy of the name ‘user-centered’.
3) If we are serious about our methods, let’s have more comparative analysis of their relative strengths and weaknesses. Far too much of what UCSD folks do is based on the desire to get it done quickly and on fairly limited meta-analysis of our involvement in design.
In sum, if you are centered on a user then what you do reflects your assumptions about these users. If all you care about is initial learning and satisfaction, then fine, just say so. Just don’t tell me that this is really putting people (with all their temporally-extended and rich emotional aspects) at the center of designs. And one final thing: you don’t avoid this just by calling yourself a user-experience professional either.
Excellent post. This is what I was trying to say in my IA Summit keynote last year.
I’ve come to the conclusion that one big problem with UCD is that none of the practices lumped under it are design. They are only evaluation. Evaluation without design is just complaining, and the world doesn’t need more complainers.
Another big issue is that, in today’s world, design isn’t just the province of trained, experienced designers. 99% of the things designed today are created by people with no previous design background. Standard UCD techniques and practices are not accessible to these folks, even if they knew they existed. Therefore, most of today’s design is uninformed and poorly done.
A third big issue is that the UCD community has never made their work reliable (two practitioners evaluating the same design will produce radically different interpretations of what needs changing). There is no science behind jumping from observation to inference. The inference one chooses will lead to a different recommendation than any other inference. Therefore, you get radically different recommendations each time. Until this is solved, the practice is very dependent on the practitioner and needs to be treated as a craft, not an engineering discipline.
Personally, I think UCD has outlived its usefulness. It was very helpful back in the ’70s when everything was either self-designed or unintentionally designed. (See my article on 5 Design Styles to learn more about what I mean by these.)
We should be taking the evaluation practices we know (and researching more, to fill in the glaringly missing gaps, such as how to evaluate for experiences that delight or how to evaluate in social situations, like Twitter) and helping everyone who is designing (trained designers and untrained designers alike) to integrate evaluation as part of their design process.
I would not be sad if the term user-centered design disappeared tomorrow.
Jared
I confess to having a certain amount of disappointment and impatience with our profession’s propensity to heavily promote terminology, only to turn around and toss it away as “dead” once it becomes mainstream. UCD and User Experience have become well-recognized calls to adopt a mindset of “placing the user first” in all phases of the creation of any product. If the methodology or implementation is flawed, why not improve those aspects instead of simply abandoning the terminology? Is it not true that every established discipline has had new methodologies that began their existence with flaws in place? Did those disciplines abandon the names of their methodologies so quickly, or just continue to improve them? I haven’t studied it, but I suspect they continually improved them and kept the names.
In my experience, the biggest problem we face is that the belief that the user should come first is often talked about and promoted, but rarely implemented in the “we can’t afford research” environment of IT. I submit to my colleagues that the stakeholders and decision makers in business will not take us seriously if we continue to change the name of what we do. In my experience, these folks are just beginning to get the message of UCD and User Experience. When they finally do understand that UCD and User Experience are cost-effective, are we to say, “We don’t call it that any more”? I concur that the points you (and Jared) make about the flaws of UCD are valid. If that is the case, what do we do to make UCD live up to its name? I realize that my experience is anecdotal. Nonetheless, I wish to raise it.
Andrew, Jared, (Gordon?), you are so right. And while we’re at it, what about this “gravity” nonsense? The concept has become so commonplace, even people unschooled in physics are leaning on it to explain things. (And how many of them REALLY understand it?!) And even if we did wish to continue to rely on this “gravity” thing (hey, it’s not just a good idea; it’s the LAW), it doesn’t really explain how light travels, now does it? So as to be maximally constructive in my critique, I propose a new term for gravity. I think from now on we should call it “Down eXperience” (DX). Of course, only people who are good at designing and explaining down experiences will be able to use this term. Until the great unwashed masses discover it and then we’ll have to invent a new term. But that’s a concern for next year. In the meantime . . .
1 – No, usability evaluation doesn’t help you design the FIRST prototype (though there ARE various UCD methods that WILL help with that, e.g., contextual inquiry).
2 – Yes, some people do a poor job of choosing which UCD methods to apply, or of applying them, dang it. I wish they would stop doing that. (And I spent 90 minutes debating Krug at UPA last year, offering ideas on how to reduce this, and spend at least 30 weeks a year sharing those ideas with master’s students.)
3 – Yes, usability comprises learnability (AND discoverability) in addition to day-to-day usability, and we will do well to distinguish among them (and educate others to do so).
4 – Yes, “truly understanding the user seems beyond both established methods and established practices.” That’s why I’m working with you (Andrew) to improve old methods and add new ones. And,
5 – yes, our entire discipline is weak on quantifying the value of our methods: which are best applied when, and by whom. Some of us are trying to work on this problem. Given what I’ve read so far, I believe I’ll continue to do so (whether it be under the rubric of UCD, or UX, or IxD, or Twas Brillig), and to do all I can to make sure the UCD baby doesn’t get thrown out with the bath water.
Randolph.
In many ways I’m with Randolph on this one – just because people hijack your buzzword doesn’t mean you should abandon it in disgust. Do that and in 18 months I guarantee that you’ll need to find a new buzzword, since they’ll all catch on again and start jumping on your bandwagon. (Why do you think we have moved from User-friendly through Usability to UCD to UX in less than a decade?)
But let’s not be too pessimistic here. Sure, there are plenty of people saying they “do” UCD when we, the blessed adherents of the One True Faith, know that they are mere heathens praying to False Prophets (or should that be Fast Profits?).
That’s just too extreme. Firstly, anyone who’s been in this industry as long as Bias, Spool, and Dillon have must admit that things are a *lot* better now than they were even 5 years ago. And secondly, even if UCD isn’t done perfectly, the fact that it is done at all should be a cause for guarded optimism.
The other point I want to make is that, sure, we could spend our whole lives complaining about how project planners and stakeholders don’t understand what we do and don’t factor in enough time or budget for us to do our jobs perfectly. But where will that get us? Unfortunately, what happens in the real world – the one where someone is paying you by the hour for your work – is that only artists and monks (and maybe academics with tenure) ever get all the time they need to do their job as perfectly as they’d like.
For an industry that makes money telling others how to make their products to fit with user needs, I find it both ironic and rather sad that our *own* products often seem so poorly fitted to the needs of *our* users. Instead, we complain that stakeholders don’t use our products in the ways that we would like them to. Sound familiar?
So what’s the answer? Surely it’s to understand our own users better, and then iterate a few new ideas until we find something that works.
For me, the issue of what we call it is irrelevant. We all understand what we are talking about here. The goal is to bring real evidence from real users into the design process. Jared touched on this and I agree – way too much of what gets done isn’t “user-centred design”, it’s “opinion-based design”. By this I mean that it relies on the opinion of someone (the designer? the stakeholder? the marketing dude? the UCD expert?) and not on evidence from real users, in real contexts.
At my company we’ve been on this path for a while and have started calling it “evidence-based design”. Sure, it’s another buzzword, which brings me right back to where we started.
But I don’t care. I see it as our job to develop the tools that allow us to bring evidence from users into every stage of the design process. It’s a tough job sometimes, but that’s why they pay us and that’s why I love it.
Call it whatever you like – but keep the focus of your attention on meeting the needs of *your* users. You know – the ones who pay you for your work?
The intention of user-centered design is no doubt to make the user interface as intuitive as possible for the widest variety of user expertise levels. Order input on a website, for example, must be easy to understand for anyone who finds the site and wants to make a purchase. Theoretically, anyone in the world has access to such sites, and the user interface should be focused on helping the consumer enter the order correctly and with ease. The design of more specialized, industry-specific software seems to require less emphasis on user-centered design and more emphasis on perfecting the ability to track inventory, orders, profits, etc., while keeping the learning curve reasonable so as not to make the product costly for customers in training hours. The audience must always be considered in design.