Paul Otlet in the Times

Interesting article this week in the Science section of the NYT on Paul Otlet in the context of modern web design. There is even a video clip from the documentary “The man who wanted to classify the world”. The tone suggests Otlet is forgotten — maybe by CS types, but he was considered foundational to the HCI folks studying hypertext and to generations of LIS faculty interested in the organization of materials. The dreams of a better knowledge world are not new, and it would be churlish to complain when a leading newspaper actually takes the time to learn this before committing another ‘Web-story’ to print. So I won’t!

Innovation diffusion at work: HD DVD dies before our eyes

When I used to teach students the classic innovation diffusion model of Rogers, I would try to bring up examples of technologies that were more meaningful to them than the agrarian and medical techniques that fill the textbook. The potato famine just doesn’t have the same resonance for non-Irish learners, I discovered. The trouble was that ideal tests of competing innovations don’t happen in the space of time that fits easily within a semester. I was reminded of all this when I learned of Toshiba announcing it was giving up on HD DVD, having been outfought by Sony and its Blu-ray technology for control of the home video market. Clearly Sony learned a thing or two from its VHS/Betamax battle twenty years ago. Of course the issue is probably more complicated now, and the influence of big sellers such as WalMart on the market battle might cause us to re-think Rogers’ classic model, which postulates victory for the technology with the greater relative advantage, better compatibility, less complexity, and more trialability and observability. Or maybe not – the model is so, shall we say, flexible that it can usually accommodate all data after the fact, a point noted by my more observant students. So we could just explain WalMart’s influence on the diffusion as increasing, say, observability or trialability, or perhaps it was Sony’s backdoor into home theatre through gaming consoles. But if this makes simple sense to you, will someone explain how we fit Microsoft’s support for HD DVD (which, lest we forget, was the cheaper of the two technologies, was launched earlier, and broke free of some of the region constraints that frustrate other formats) into the Rogers model without wrinkles?
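Purely to illustrate how malleable the model is, here is a minimal sketch in Python, with invented ratings rather than data from the actual format war, of how one might score two competing formats on Rogers’ five attributes. The catch, as my more observant students would note, is that the analyst assigns the numbers, so the ‘prediction’ can always be tuned to match the outcome.

# A toy scoring of two competing formats on Rogers' five attributes of
# innovations. The ratings below are invented for illustration only;
# the exercise shows how easily the analyst's own numbers decide the
# "predicted" winner after the fact.

ATTRIBUTES = [
    "relative_advantage",  # perceived benefit over what it replaces
    "compatibility",       # fit with existing values, hardware, habits
    "simplicity",          # the inverse of complexity
    "trialability",        # how easily it can be tried before adopting
    "observability",       # how visible the results are to others
]

def diffusion_score(ratings):
    """Average the 1-5 ratings across Rogers' five attributes."""
    return sum(ratings[a] for a in ATTRIBUTES) / len(ATTRIBUTES)

# Hypothetical ratings (1 = poor, 5 = strong), not empirical data.
blu_ray = {"relative_advantage": 4, "compatibility": 4, "simplicity": 3,
           "trialability": 4, "observability": 4}
hd_dvd = {"relative_advantage": 3, "compatibility": 3, "simplicity": 4,
          "trialability": 3, "observability": 3}

for name, ratings in (("Blu-ray", blu_ray), ("HD DVD", hd_dvd)):
    print(f"{name}: {diffusion_score(ratings):.2f}")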

All this is not new to anyone who has given thought to buying a new TV or DVD player recently – the choices are annoyingly overcomplicated and mirror an earlier ‘battle’ that petered out over the next-generation sound medium, post CD. Sony pushed SACD for a while, others pushed DVD-A, and the net result was that Sony won again, but you’d never notice since they largely gave up on the format straight after, though you can still buy hardware and software which is, to my ears, a step up from regular Red Book CDs. The prediction of pundits now is that with the format war over, everyone will be buying Blu-ray, but it seems just as likely to me that most people won’t care and will live happily with the quality they have with existing DVDs. CD and DVD are comparatively old technologies, but for many consumers this is as good as they want, and the next great challenge is not a new disk format but a whole new way of obtaining video and audio of sufficient quality without any need for disks. Of course, as there remain regular buyers and users of LP records, I can see HD DVD being with us for some time to come. Maybe the assumptions developers are making about the human need for these new media are just a little off track? But I am sure both Toshiba and Sony would tell you they really followed a user-centered design process. I just hope for a simpler time when I don’t have to buy new copies of old items made obsolete through technological ‘advances’.

The poverty of user-centered design

In the dim distant past, some of us used to distinguish our work from the masses by declaring proudly that we were ‘user-centered’. At one time this actually meant you did things differently and put a premium on the ability of real people to exploit a product or service. While the concern remains, and there are many examples of designs that really need to revisit their ideas about users, I find the term ‘user-centered’ to have little real meaning anymore. It is not just that everyone claims this label as their own; after all, who in their right mind would ever declare their work as not user-centered and still expect to have an audience? It is more that truly understanding the user seems beyond both established methods and established practices.

I will leave aside here any argument about the term ‘user’. Some people have made careers out of dismissing that term and proposing the apparently richer ‘person’ or ‘human’, but the end result is the same (though I prefer to talk of human-centered rather than user-centered myself). The real issue is methodological.

First, claiming adherence to user-centered methods and philosophies is too easy; anyone can do it. Ask people what they would like to see in a re-design and you have ‘established’ user-requirements. Stick a few people in front of your design at the end and you have ‘conducted’ a usability test. Hey presto, instant user-centered design process. If only!

Second, and more pernicious, the set of methods employed by most user-centered professionals fails to deliver truly user-centric insights. The so-called ‘science’ of usability which underlies user-centeredness leaves much to be desired. It rests too much on anecdote, on assumed truths about human behavior, and on performance metrics that serve the perspective of people other than the user. ISO-defined usability metrics refer to ‘efficiency’, ‘effectiveness’ and ‘satisfaction’. These do not correlate well, so one needs to capture all three. But who gets to determine what constitutes effective and efficient anyhow? In many test scenarios this is a determination made by the organization employing the user, or by the design team’s view of what people should do, not by the user herself. Maybe this should be called organization-centric or work-centric design. If I wanted to start a new trend I could probably push this idea into an article and someone might think I was serious.
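For concreteness, here is a minimal sketch, using invented session data, of how the three ISO-style measures might be computed separately from one usability test. Note that the evaluator, not the user, has already decided what counts as task completion and which timings matter.

# Toy computation of the three ISO-style usability measures from a
# single, invented test session. What counts as "completed" and how
# time is judged are set by the evaluator, which is exactly the
# problem discussed above.
from statistics import mean

# Hypothetical session data: (task completed?, time in seconds).
sessions = [(True, 42.0), (True, 61.5), (False, 90.0), (True, 38.2)]

# Hypothetical post-test satisfaction ratings on a 1-7 scale.
satisfaction_ratings = [5, 6, 4, 5]

effectiveness = mean(1.0 if done else 0.0 for done, _ in sessions)  # completion rate
efficiency = mean(t for done, t in sessions if done)                # mean time on successful tasks
satisfaction = mean(satisfaction_ratings)                           # mean rating

print(f"effectiveness: {effectiveness:.0%}")
print(f"efficiency: {efficiency:.1f} s per completed task")
print(f"satisfaction: {satisfaction:.1f} / 7")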

What is often overlooked is that the quality of any method is determined far too much by the quality of the evaluator who employs it. Evaluation methods are all flawed, that much is a given, but it is the unwillingness of many people to recognize these shortcomings that should give us all real concern. Here’s but one example. The early Nielsen work on heuristic evaluation has given rise to the ‘fact’ that evaluators find about 35% of usability problems when following his method, and that if you pool several reviewers you can get a better hit rate. What many people overlook is that the 35% figure is not calibrated against real user problems but is based on Nielsen’s own interpretation of the problems users will face. So the 35% claim is really a claim that, following his method, you will probably find a third of the problems that Nielsen himself estimates are real problems for users. This is a very different thing. It is interesting that in my own tests with students this 35% figure holds pretty firm, which is impressive, but you cannot lose sight of what that percentage relates to or you will misunderstand what is going on.
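The pooling claim usually traces to the Nielsen and Landauer curve, in which the proportion of problems found by n evaluators is estimated as 1 - (1 - λ)^n, with λ the single-evaluator detection rate. A quick sketch, taking λ = 0.35 to match the figure above, shows why adding reviewers looks so attractive, provided you remember that the baseline set of ‘all problems’ is itself the method author’s own list.

# Expected proportion of problems found by n independent evaluators,
# using the commonly cited curve 1 - (1 - lambda)^n, where lambda is
# the detection rate of a single evaluator (0.35 here, to match the
# figure discussed above). The baseline "all problems" set remains
# the method author's own problem list, not calibrated user data.
def proportion_found(n_evaluators, single_rate=0.35):
    return 1.0 - (1.0 - single_rate) ** n_evaluators

for n in (1, 3, 5):
    print(f"{n} evaluator(s): {proportion_found(n):.0%}")
# Roughly 35%, 73%, and 88% respectively.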

Now of course, there are great evaluators out there, but even if all evaluators were great, that would not change the problem with user-centeredness as it currently exists. Too much evaluation occurs too late to matter. OK, this is an old story, but what’s changed since this story was first heard? Not enough. If user-centered design really is limited to evaluating and designing for a narrowly construed definition of usability, then there is little prospect of change. For a limited range of tasks where I want to be efficient (finding a number in my cell phone, for example), current practices are fine, as long as I can prototype quickly; but for the type of deeply interactive tasks that I might spend large parts of my day engaged in (reading, communicating, exploring data, etc.), talk of ‘effective’ and ‘efficient’ rings more hollow. But it is precisely this next generation of application opportunity that we need to explore if we are to augment human capabilities. The old usability approach is fine for applications where we are making digital that which used to be performed with physical resources (text editing, mailing, phoning, calculating), but it’s not a great source of help for imagining new uses.

If we could decouple user-centered design from usability then there might be some benefit, but I don’t think this is as important as it might first appear. More important is the very conception we have of the users and uses for which we wish to derive technologies and information resources. Designing for augmentation is a very real problem and a great challenge for our field, theoretically and practically.

Discovery: the real purpose of information?

Much of the ferment over libraries, information, technology and digital life springs from rigid divisions drawn between people, professions and purposes. In our discussions here in Texas on the future of our own program and our plans for the future, we have given serious thought to advancing discovery as the true purpose of the information field. In this framing we can envisage a role for curated collections, access mechanisms, technologies of storage, retrieval and presentation, human meaning-making, and records management. Rather than defining the field around old or new roles for its professional members, there is a real preference for conceiving of information as the fuel of learning, creativity and discovery, regardless of subject matter or application domain. When viewed this way, it makes no sense to think in terms of libraries versus computers, digital versus paper, catalog versus folksonomy; each is a component in the larger purpose of augmenting human capabilities.

I have spoken of this recently in several talks, and the feedback has given me some belief that the study of discovery has generally been undervalued. There are few deep studies of the process, and each domain has taken its own research methods as the model of how discovery occurs. Information, as a field, would serve a valuable purpose by offering a more unified understanding.

Information: A New Discipline for Accelerating Discovery
 
We live in an era characterized by information technologies powerful and cheap enough to be used anywhere and anytime. Massive amounts of data, once physically bound to a location, are now shared, freed from time and space constraints on use. The mechanisms of scholarly communication are being challenged by open access, self-publishing peer networks. The emerging cyber infrastructure will enable new forms of collaborative research and data analysis that cross disciplinary divides. Education is no longer tied to a classroom. Communities are no longer tied geographically. Resources are no longer only physical.
 
All disciplines are affected, and the result is a blurring and crossing of subject boundaries. This is not the same old story of progress – history offers us few lessons when the changes enabled by technologies of information are so all-powerful. This is a new Gutenberg era and, like that earlier period, it will change the world quickly, permanently, and in ways that we do not easily anticipate.
 
There is a vital lesson to grasp about these changes. Data is stored, but information is experienced. How we design, manage, and share information will affect the experiences of all members of our society. I believe the ultimate goal of information experiences is discovery. This belief provides the basis for a new field of information studies that contributes insights and knowledge to the human and social processes of discovery.
 
Discovery can be formal or informal, significant or trivial, personal or shared. It is an experience for all, young or old, expert or novice, professional or amateur. Acts of discovery are lifelong and, in their most refined form, are defining characteristics of our species.
 
The process of discovery requires the meeting of an enquiring mind with a world, real or virtual, present or represented. Libraries and collections, physical and virtual, provide rich representational spaces for discovery. Digital tools offer interfaces to information for visualization, manipulation, and analysis. The emerging cyber infrastructure unites people and practices with layers of technology and resources. There is no turning back. The field of information serves to facilitate the engagement of minds in acts of discovery through the gathering, organization and presentation of vast data sets, and through the tools for exploration and innovation.
 
It is happening already. There are 1bn Internet users today. There will be 2bn by 2015. The Internet has changed society, had more than a trillion dollars of impact on the global economy, and this is only the beginning. Despite popular images, Internet use is not just about email and Google. 40m Americans report having used the Internet to find scientific information, and more impressively, 80% of these state they have checked the quality of this information against other sources. The need for curation and stewardship has never been greater. 99.99% of all new data is born digital. The ability to store data on a tiny chip exceeds the capability of the Library of Congress to store paper equivalents. All professions involve digital technology use somewhere in the set of tasks they perform. We have witnessed in a decade the emergence of a new socio-technical infrastructure in which we routinely live and work, make purchases and perform services, learn and communicate, create and share, without pause or concern for distance. Those born in the last decade will never know a world without the Internet. More than half of teenage users have created and shared digital materials. The longer life expectancy of people now anticipates extended or multiple careers, which will demand more fluid and individual educational opportunities than ever before.
 
What lies ahead should be studied. It should not be left to business or technological forces alone but should be planned and shaped with human and social concerns at the forefront. It unites the arts and sciences, it involves design and creativity, and it will require legal processes and economic insights to understand and to manage. Ultimately, we need to create a new field, one that can make sense of the data smog we live in, helping people to leverage meaning from information, be they scientists or citizens, adults or children, rich or poor. This is the field of Information, and its mission is to enable, and even accelerate, discovery for the benefit of all.
 
A new field requires a new kind of school. And this is why we have schools of information.

New NEA study suggests further declines in reading

A couple of years ago the NEA produced a study entitled Reading at Risk, which suggested literary reading among the US adult population was diminishing at a worrying rate. Last week they released a follow-up, entitled To Read or Not to Read: A Question of National Consequence, which suggests that not only is reading continuing to decline but that reading levels and abilities are following suit. I’m struggling to digest the 90-page report filled with tables and summary data, but the claims being made for national consequences hinge on academic achievement, employability, and civic engagement.

I do wonder how accurately anyone answers questions about reading; it is a form of automatic behavior many of us engage in without consideration of time costs, so we might treat some of these data points with caution. The social desirability of reading might also inflate responses to some questions. Measures of reading ability, however, are less prone to this type of error (though they have their own specific types), so it’s not easy to dismiss the final results, even if the tone of the report is somewhat sensational. But here are some reported data points that are noteworthy:

- Percentage of adults who read a work of literature (a novel, short story, play or poem) within the past year: 47%
- Literary reading declined in both genders, across all education levels, and in all age groups, with declines steepest among young adults; first-year college students report extremely low levels of reading for pleasure (no mention of reading lists here!)
- More than half of 7th- to 12th-graders multitask while reading. The report calls them Generation M.

There are a few hoary old chestnuts in the report, such as the suggestion that there are no studies yet showing whether following hyperlinked information on screen has cognitive impact (hello! there are 20 years of studies on this!), but in general this is an exhaustive (and exhausting) effort. I’m not entirely comfortable with the links made between being a good reader and being a model citizen, attending jazz concerts, visiting museums and having a great job, but there is no doubt that reading is a profoundly important part of the human condition and we would do well to take note of the trends reported here. But, one must wonder, is literary reading (as defined by the NEA) the real yardstick?

The use of technology in schools to be studied (at last?)

Indiana University’s School of Education has received a federal grant of $3m to study how technology is used in the classroom and to what effect. I am pleased there will be more data on this, since some of us have conducted significant reviews over the last decade that raised serious doubts about the claims made for improved learning through hypermedia tools. What’s surprising with this latest award are the comments to the effect that this is the first national study of the topic. According to an investigator leading the project, “No national study has ever been undertaken to figure out how teachers use technology in lessons and how students learn from that technology.” Can it be so? After decades of proclaiming the benefits, of pushing a technological agenda for classrooms, of soliciting millions of grant dollars to support new learning environments, of gaining tenure on the basis of papers and books espousing the power of hypermedia to enhance the construction of meaning, educational researchers are now saying there’s never been a national study of this? And are we to presume that a national study is somehow better or more authoritative than well-designed studies at a class, state or multi-state level? Or is it the case, as some of us pointed out a decade ago, that well-controlled studies of the effects of technology on learning are pretty scarce in the trendy world of educational research?

Brian Shackel

I learned last night that Brian Shackel died on May 9th in England. For anyone even remotely familiar with the literature on usability and HCI, Brian was a foundational figure who led the development of operational definitions and measures of usability. I worked with Brian for eight years at the HUSAT Research Institute in Loughborough, a group he founded in 1970 to advance human and social research in the area of software design. He was an incredible character, full of commitment to user-centered design and the development of the science of human-computer interaction. To those who worked with him he was always known as “Prof”, and his eye for detail in proofreading the many reports we produced was legendary – he would pull out his multi-colored pen, push it to red, and invariably find constructions or wordings that irked him (you could never say ‘overall’ in a report without Brian inking it and noting in the margins: “Overalls are what workmen wear!”). Brian could be a somewhat combative character when discussing issues of importance, and he was tenacious in ensuring his points were noted, but he had the remarkable skill of all great leaders in being able to argue a point heatedly without ever allowing disagreements to interfere with his treatment of you as a person. Brian opened many doors for me in my career, and he took great pride in seeing the influence of HUSAT spread across the globe through its projects and its people. Unlike many academics, Brian keenly understood the importance of influencing practice, and he committed a large part of his life to ensuring the development of international standards that have shaped technology design for millions. There is a formal obituary at http://www.ergonomics.org.uk/page.php?s=7&p=101 which outlines his diverse career highlights. Brian was a one-off; we’ll not see his kind in academia again, and I miss him already.

Information in a time of war

I am simultaneously heartened and horrified by reading the online diary of Saad Eskander, Director of the Iraqi National Library and Archive (http://www.bl.uk/iraqdiary.html). The library has been extensively looted, causing some to liken the current situation to the 13th century sacking of Baghdad by the Mongols. In the abstract this is depressing but Eskander’s diary puts a much more human face on recent events. He writes of the fear that drives people from the library, the threats that his staff routinely receive, the difficulties in tracing collections that are known to have been stolen and distributed, including rare treasures traded on the black market. His library has 39 armed guards, and part of his job is to track down staff who are kidnapped. What is heartening in any of this? The fact that he lives and he continues to work on developing this precious resource. The fact that he can communicate to the outside world what is happening so that we cannot claim ignorance of the plunder in the years ahead. Libraries appear quaint to many people in the ‘developed’ world, but we lose sight of the value and role of curated knowledge and free exchange of ideas at our peril. History shows that those who seek to control always want to limit both the flow of information and the accurate recording of events. You can learn how the ALA is responding at http://www.ala.org/ala/iro/iraq.htm

Killam lecture 2007

I just returned from Dalhousie University where I delivered the final lecture in the Killam Lecture series (http://dalgrad.dal.ca/killam/lectures/2006/). The trip was fraught with travel difficulties that make one wonder at the false confidence provided by technology. My return flight was cancelled when I arrived at the airport, fully 45 minutes (and a $55 taxi ride) after I had checked the web site which reported it as on schedule. The lack of any human representative at the United desk meant passengers were forced to call a helpline and wait over 60 minutes to get help – if you had no cell phone, who knows what you would do? That one hour turned into news that it would be 2 days and 4 flights before I could get home again, though it was only 65 minutes before I made my views on this particular socio-technical structure known to its designers in less than technical language. Just how hard can it be to leverage the interconnectedness of everyone to provide clear instructions on what is happening and how it will be resolved? Very hard, one imagines, but very much harder when the attitude of the airlines seems to be one of shoulder-shrugging indifference and the knee-jerk locking down of all spare seating capacity to control its allocation in the days ahead for maximum profit.

Travel woes aside, the Killam lecture was a marvellous event and allowed me to speak fairly directly about third force issues of design ethics, human and social values, and the need for new perspectives that transcend simple disciplinary divisions. The Q&A at the end took us even further into such territory and I was delighted to have the chance to hear so many voices raised in favor of us taking greater control over our information architecture.

Continuous partial attention syndrome

The Chicago Sun Times explored the idea that we are overwhelmed with data and cognitively suffering from multitasking in an interesting piece this week: http://tinyurl.com/24ajsp. And for once, the journalist actually reported what I said accurately. The angle taken seems to be that too much digital information use is in danger of dumbing us down and while that plays well as a story, the various people interviewed don’t actually offer much support for this. The real point is that humans are limited information processors (‘skinny pipes’ rather than broadband, as my colleague Randolph Bias likes to say) and multitasking can carry a cost.
