Vanquishing vanishing Berries

John Berry’s opinion piece on the Vanishing Librarians has created a tornado of hot air on JESSE, a discussion list for educators in LIS. Most are reacting to his trotting out of the tired argument about LIS schools being ‘invaded’ by faculty from other disciplines, using it as a launch pad for ill-informed attacks on new information tools and knee-jerk defenses of some imagined glorious past when an LIS degree was pure. Most of this seems a misreading of Berry’s piece, which I interpret as an attack on what he views as the deskilling of library professionals at the hands of managers, vendors and a lazy public more satisfied with Amazon than they have any right to be (apparently), with a pro-forma dig at the LIS world added for good measure. In the battle between logic and emotion there is usually only one outcome, and while it is apparently easy to blame the LIS programs and their faculty for this state of affairs, Berry’s piece suggests he is really criticizing the leaders of libraries for dumbing down services and jobs. Canute-like protestations are not unusual in this world, but it’s clear the desire for libraries as bastions of education above all, and at all costs, lives on among those who never have to pay the costs.

My own students raised this in class last week and I attempted to show them that skills are not the same as labels, and that arguing about the label rather than the skills would not provide much insight, though it might make you feel better. After all, being disgruntled and hiding behind the line that one is just an old-fashioned standard-bearer is a cheap rhetorical device long beloved of those who don’t want to deal with change. Librarians are not vanishing, but I suspect the idea of what constitutes a librarian is now less agreed upon among employers, educators, and professionals than at any time in the recent past. Berry does raise a valid point, though: the data from recent labor studies I heard at ALISE suggested the largest growth in employment within libraries was for positions filled by people without ALA-accredited degrees. Perhaps the employers need to weigh in more on this. Is it just possible that some programs are not producing the right level of professional, or that some graduates care more about the credential than the education? Perhaps LIS programs are to blame after all, but not in the way most people have concluded.

Clinton and Obama in Texas

I had the pleasure of attending the debate here on campus last night (one of the privileges of being dean) and was most impressed with both candidates – their intelligence and ability to debate on their feet in front of a large audience suggested to me that both these folks are very able. Equally impressive was the CNN operation that turned a recreation hall into a TV venue in a matter of days, setting up everything from the stage to the security process. That process, I hasten to add, was more efficient than any airport I’ve flown through in recent years and just as unforgiving of inappropriate items (the guy in front of me had to hand over his cell phone, which we had been warned to leave in our cars or face confiscation; no idea if he ever got it back, but he did not seem too perturbed at the loss anyhow). The ability of the organizers to move a crowd in and keep it quiet on demand, while allowing for breaks, was a model of organizational efficiency. That several thousand people would sit rapt through discussions of economic policy and possible plagiarism also gave one hope that there is intelligence left in our society. There was much cheering for many points, and Barack really does have charisma as a speaker, but it being a campus, the biggest cheer in my section was for Hillary asking for an end to the attacks on science! Now there’s a political agenda some of us can get behind. All told, it was a stirring example of the power and relevance of political discourse to our lives, even if I only came away thinking either of them would make a great choice as president.

IA Summit 2008 and the Third iConference

Registration is closing soon for what must now be the 9th summit (not bad for an idea that was supposed to be the basis of a one-off hot topic meeting for ASIST in Boston in 2000). I’ve not attended the last couple due to an overloaded conference and meeting schedule, but I always enjoyed the sheer energy and irreverence (though not so much the occasional irrelevance) of the gathering. The theme this year is Experiencing Information (hmm, where have I heard that before?) but it’s sure to be well-attended as usual. Of course the danger of any innovative grouping is falling prey to the standard behavior of conferences. The iSchool community is now running its third annual conference at UCLA next week. I wonder if there is something inherent in intellectual groups that demands the conference structure as a means of establishing a shared reality? Can you be a discipline without a conference? If you have a conference, are you a field? No wonder interdisciplinarity is difficult; it just adds to your calendar!

Innovation diffusion at work: HD DVD dies before our eyes

When I used to teach students the classic innovation diffusion model of Rogers, I would try to bring up examples of technologies that were more meaningful to them than the agrarian and medical techniques that fill the textbook. The potato famine just doesn’t have the same resonance for non-Irish learners, I discovered. The trouble was that ideal tests of competing innovations don’t happen in the space of time that fits easily within a semester. I was reminded of all this when I learned of Toshiba announcing it was giving up on HD DVD, having been outfought by Sony with their Blu-ray technology for control of the home video market. Clearly Sony learned a thing or two from the VHS/Betamax battle twenty years ago. Of course the issue is probably more complicated now, and the influence of big sellers such as WalMart on the market battle might cause us to re-think Rogers’ classic model, which postulates victory for the technology with greater relative advantage, better compatibility, less complexity, and more trialability and observability. Or maybe not – the model is so, shall we say, flexible that it can usually accommodate all data after the fact, a point noted by my more observant students. So we could just explain WalMart’s influence on the diffusion as increasing, say, observability or trialability, or perhaps it was Sony’s backdoor into home theatre through gaming consoles. But if this makes simple sense to you, will someone explain how we fit Microsoft’s support for HD DVD into the Rogers model without wrinkles? HD DVD, lest we forget, was the cheaper of the two technologies, was launched earlier, and broke free of some of the region constraints that frustrate other formats.
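For the technically inclined, that flexibility is easy to demonstrate. Here is a minimal sketch in Python, with entirely invented attribute scores (none of these numbers are real market data), showing how freely the five attributes can be weighted after the fact to ‘predict’ either winner:

```python
# Toy scoring of Rogers' five attributes for the two formats.
# Every number here is invented for illustration -- not market data.
ATTRIBUTES = ["relative_advantage", "compatibility", "complexity",
              "trialability", "observability"]

# Higher = more adoption-friendly; 'complexity' is scored inverted,
# so a high value means the technology is LESS complex.
blu_ray = dict(relative_advantage=8, compatibility=9, complexity=5,
               trialability=7, observability=8)
hd_dvd = dict(relative_advantage=6, compatibility=7, complexity=7,
              trialability=8, observability=5)

def adoption_score(tech, weights):
    """Weighted sum over the five attributes."""
    return sum(weights[a] * tech[a] for a in ATTRIBUTES)

# Weight everything equally and Blu-ray 'wins'...
equal = dict.fromkeys(ATTRIBUTES, 1.0)
print(adoption_score(blu_ray, equal), adoption_score(hd_dvd, equal))  # 37.0 33.0

# ...but privilege the attributes HD DVD scored well on (price and
# early launch folded into trialability, simplicity into complexity)
# and the 'prediction' reverses. The weights are free parameters.
tuned = dict(relative_advantage=0.5, compatibility=0.5, complexity=2.0,
             trialability=2.0, observability=0.5)
print(adoption_score(blu_ray, tuned), adoption_score(hd_dvd, tuned))  # 36.5 39.0
```

The point is not the numbers but that the weights are unconstrained, which is exactly what makes post hoc accommodation of any outcome so easy.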

All this is not new to anyone who has given thought to buying a new TV or DVD player recently – the choices are annoyingly overcomplicated and mirror an earlier ‘battle’ that petered out over the next-generation sound medium, post CD. Sony pushed SACD for a while, others pushed DVD-A, and the net result was that Sony won again, but you’d never notice since they largely gave up on the format straight after, though you can still buy hardware and software which is, to my ears, a step up from regular Red Book CDs. The prediction of pundits now is that with the format war over, everyone will be buying Blu-ray, but it’s just as likely, it seems to me, that most people won’t care and will live happily with the quality they have with existing DVDs. CD and DVD are comparatively old technologies, but for many consumers this is as good as they want, and the next great challenge is not a new disk format but a whole new way of obtaining video and audio of sufficient quality without any need for disks. Of course, as there remain regular buyers and users of LP records, I can see HD DVD being with us for some time to come. Maybe the assumptions developers are making about the human need for these new media are just a little off track? But I am sure both Toshiba and Sony would tell you they really followed a user-centered design process. I just hope for a simpler time when I don’t have to buy new copies of old items made obsolete through technological ‘advances’.

The poverty of user-centered design

In the dim distant past, some of us used to distinguish our work from the masses by declaring proudly that we were ‘user-centered’. At one time this actually meant you did things differently and put a premium on the ability of real people to exploit a product or service. While the concern remains, and there are many examples of designs that really need to revisit their ideas about users, I find the term ‘user-centered’ to have little real meaning anymore. It is not just that everyone claims this label as representative; after all, who in their right mind would ever declare their work as not user-centered and still expect to have an audience? It is more that truly understanding the user seems beyond both established methods and established practices.

I will leave aside here any argument about the term ‘user’. Some people have made careers out of dismissing that term and proposing the apparently richer ‘person’ or ‘human’, but the end result is the same (though I prefer to talk of human-centered rather than user-centered myself). The real issue is methodological.

First, claiming adherence to user-centered methods and philosophies is too easy; anyone can do it. Ask people what they would like to see in a re-design and you have ‘established’ user requirements. Stick a few people in front of your design at the end and you have ‘conducted’ a usability test. Hey presto, instant user-centered design process. If only!

Second, and more pernicious, the set of methods employed by most user-centered professionals fails to deliver truly user-centric insights. The so-called ‘science’ of usability which underlies user-centeredness leaves much to be desired. It rests too much on anecdote, assumed truths about human behavior, and an emphasis on performance metrics that serve the perspective of people other than the user. ISO-defined usability metrics refer to ‘efficiency’, ‘effectiveness’ and ‘satisfaction’. These do not correlate, so one needs to capture all three. But who gets to determine what constitutes effective and efficient anyhow? In many test scenarios this is a determination made by the organization employing the user, or by the design team’s view of what people should do, not by the user herself. Maybe this should be called organization-centric or work-centric design. If I wanted to start a new trend I could probably push this idea into an article and someone might think I was serious.
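For the concretely minded, here is a minimal sketch, with hypothetical session data and an evaluator-defined success criterion, of how the three ISO-style measures are typically computed; the rows are contrived to show why the three need not correlate:

```python
# Hypothetical usability-test sessions: (task_completed, seconds, rating).
# All data, tasks, and the success criterion are invented; note that
# 'completed' is the evaluator's call, not the user's.
sessions = [
    (True, 45, 6),
    (True, 90, 3),   # succeeded, slowly, and hated the experience
    (False, 30, 5),  # failed quickly, yet reported being fairly happy
    (True, 60, 4),
]

# Effectiveness: share of tasks completed (per the evaluator's criterion).
effectiveness = sum(ok for ok, _, _ in sessions) / len(sessions)

# Efficiency: mean time on task for successful attempts -- the premise
# that faster is better belongs to the organization, not the user.
success_times = [t for ok, t, _ in sessions if ok]
efficiency = sum(success_times) / len(success_times)

# Satisfaction: mean self-reported rating (1-7). The second and third
# rows show why the three measures need not move together.
satisfaction = sum(s for _, _, s in sessions) / len(sessions)

print(f"effectiveness={effectiveness:.2f}, "
      f"efficiency={efficiency:.0f}s, satisfaction={satisfaction:.1f}/7")
```

Every number that comes out of this is shaped by definitions the user never made, which is the heart of the complaint.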

What is often overlooked is that the quality of any method is determined far too much by the quality of the evaluator who employs it. Evaluation methods are all flawed, that much is a given, but it is the unwillingness of many people to recognize these shortcomings that should give us all real concern. Here’s but one example. The early Nielsen work on heuristic evaluation has given rise to the ‘fact’ that evaluators find about 35% of usability problems following his method, and that if you pool several reviewers you can get a better hit rate. What many people overlook is that the 35% figure is not calibrated against real user problems but is based on Nielsen’s own interpretation of the problems users will face. So the 35% claim is really a claim that, following his method, you will probably find a third of the problems that Nielsen himself estimates are real problems for users. This is a very different thing. It is interesting that in my own tests with students this 35% figure holds pretty firm, which is impressive, but you cannot lose sight of what that percentage relates to or you will misunderstand what is going on.
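To make the pooling arithmetic concrete: the aggregation Nielsen described assumes each evaluator independently finds a fixed share λ of the problem set, so n evaluators together find 1 − (1 − λ)^n of it. A short sketch, assuming his published λ of 0.35:

```python
# Pooling arithmetic behind the heuristic evaluation 'hit rate' claim.
# Assumes each evaluator independently finds a fixed share lam of the
# problem set -- and remember, that set is Nielsen's own estimate of
# the problems users face, not one calibrated against observed user
# failures.
def pooled_hit_rate(lam: float, n: int) -> float:
    """Expected share of the problem set found by n pooled evaluators."""
    return 1 - (1 - lam) ** n

for n in range(1, 6):
    print(f"{n} evaluator(s): {pooled_hit_rate(0.35, n):.0%}")
# 1: 35%, 2: 58%, 3: 73%, 4: 82%, 5: 88% -- of Nielsen's list,
# not necessarily of the problems real users will actually hit.
```

The curve looks reassuring, but the denominator is the thing to keep your eye on.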

Now of course, there are great evaluators out there, but even if all evaluators were great, that would not change the problem with user-centeredness as it currently exists. Too much evaluation occurs too late to matter. OK, this is an old story, but what’s changed since this story was first heard? Not enough. If user-centered design really is limited to evaluating and designing for a narrowly construed definition of usability, then there is little prospect of change. For a limited range of tasks where I want to be efficient (finding a number in my cell phone, for example), current practices are fine, as long as I can prototype quickly; but for the type of deeply interactive tasks that I might spend large parts of my day engaged in (reading, communicating, exploring data, etc.), talk of ‘effective’ and ‘efficient’ rings more hollow. But it is precisely this next generation of application opportunity that we need to explore if we are to augment human capabilities. The old usability approach is fine for applications where we are making digital that which used to be performed with physical resources (text editing, mailing, phoning, calculating), but it’s not a great source of help for imagining new uses.

If we could de-couple user-centered design and usability then there might be some benefit but I don’t think this is as important as it might first appear. More important is the very conception we have of users and uses for which we wish to derive technologies and information resources. Designing for augmentation is a very real problem and a great challenge for our field theoretically and practically.

Info careers make the US News Top Job list for 2008

US News put out another list of top careers this week. Apparently they determined there are 31 careers with exciting futures. Among those listed were:

  • librarian
  • usability/user experience specialist
  • curriculum and training specialist

They say average librarians make over $50k a year and are steeped in technology and research activities. If you get into usability (and Library Science is listed under that description as a relevant qualification) you can expect an average salary of over $90k and a warm glow from watching people happily interact with your products. I would not get too excited by this list (or any list, really); they think hairstylists and locksmiths are top careers too. By your equals, ye shall be known. That said, it’s certainly encouraging to see these types of positions actually considered futuristic and rewarding. More importantly, it’s encouraging that the compilers noticed that a degree in LIS might actually set you up for more than one great career.

Information is a deep problem in computing?

Back from the cattle-market that is the ALISE hiring conference (please people, don’t think that cornering a faculty member and thrusting your resume upon them is a good way of getting a job… someone must be telling you otherwise, or there is no other way to explain this behavior), I note that there is now a digital edition of the CACM, which looks good, and in which an article by Jeannette Wing entitled ‘Five Deep Questions in Computing’ includes the following:

What is Information?
What is Intelligence?
How can we build complex systems simply?

Funny, I thought these were information questions. In fact, coupled with the previous month’s issue, where Ben Shneiderman spoke of computing as being in the game of accelerating discovery, you might be forgiven for wondering what truly are the differences between our fields. I make no broad claims to have the answer, but I don’t wish to be thought of as a computer scientist. Maybe the information field should be asking: ‘What is computing?’

Discovery: the real purpose of information?

Much of the ferment over libraries, information, technology and digital life springs from rigid divisions drawn between people, professions and purposes. In our discussions here in Texas on the future of our own program and our plans for the future, we have given serious thought to advancing discovery as the true purpose of the information field. In this framing we can envisage a role for curated collections, access mechanisms, technologies of storage, retrieval and presentation, human meaning-making, and records management. Rather than defining the field around old or new roles for its professional members, there is a real preference for conceiving of information as the fuel of learning, creativity and discovery, regardless of subject matter or application domain. When viewed this way, it makes no sense to think in terms of libraries versus computers, digital versus paper, catalog versus folksonomy: each is a component in the larger purpose of augmenting human capabilities.

I have spoken of this recently in several talks and the feedback has given me some belief that the study of discovery has generally been undervalued. There are few deep studies of the process, and each domain has taken its own research methods as the model of how discovery occurs. Information, as a field, would serve a valuable purpose by offering a more unified understanding.

Information: A New Discipline for Accelerating Discovery
     
We live in an era characterized by information technologies powerful and cheap enough to be used anywhere and anytime. Massive amounts of data, once physically bound to a location, are now shared, freed from time and space constraints on use. The mechanisms of scholarly communication are being challenged by open access, self-publishing peer networks. The emerging cyber infrastructure will enable new forms of collaborative research and data analysis that cross disciplinary divides. Education is no longer tied to a classroom. Communities are no longer tied geographically. Resources are no longer only physical.
     
All disciplines are affected, and the result is the blurring and crossing of subject boundaries. This is not the same old story of progress – history offers us few lessons when the changes enabled by technologies of information are so all-powerful. This is a new Gutenberg era and, like that earlier period, it will change the world quickly, permanently, and in ways that we do not easily anticipate.
     
There is a vital lesson to grasp about these changes. Data is stored, but information is experienced. How we design, manage, and share information will affect the experiences of all members of our society. I believe the ultimate goal of information experiences is discovery. So conceived, this provides the basis for a new field of information studies that contributes insights and knowledge to the human and social processes of discovery.
     
Discovery can be formal or informal, significant or trivial, personal or shared. It is an experience for all, young or old, expert or novice, professional or amateur. Acts of discovery are lifelong, and in their most refined form are defining characteristics of our species.
     
The process of discovery requires the meeting of an enquiring mind with a world, real or virtual, present or represented. Libraries and collections, physical and virtual, provide rich representational spaces for discovery. Digital tools offer interfaces to information for visualization, manipulation, and analysis. The emerging cyber infrastructure unites people and practices with layers of technology and resources. There is no turning back. The field of information serves to engage minds in acts of discovery through the gathering, organization and presentation of vast data sets, and through tools for exploration and innovation.
     
It is happening already. There are 1bn Internet users today; there will be 2bn by 2015. The Internet has changed society, generated more than a trillion dollars of impact on the global economy, and this is only the beginning. Despite popular images, Internet use is not just about email and Google. 40m Americans report having used the Internet to find scientific information and, more impressively, 80% of these state they have checked the quality of this information against other sources. The need for curation and stewardship has never been greater. 99.99% of all new data is born digital. The ability to store data in a tiny chip exceeds the capability of the Library of Congress to store paper equivalents. All professions involve digital technology use somewhere in the set of tasks they perform. We have witnessed in a decade the emergence of a new socio-technical infrastructure in which we routinely live and work, make purchases and perform services, learn and communicate, create and share, without pause or concern for distance. Those born in the last decade will never understand a world without the Internet. More than half of teenage users have created and shared digital materials. Longer life expectancies now anticipate extended or multiple careers, which will demand more fluid and individualized educational opportunities than ever before.
     
What lies ahead should be studied. It should not be left to business or technological forces alone but should be planned and shaped with human and social concerns at the forefront. It unites the arts and sciences, it involves design and creativity, and it will require legal processes and economic insights to understand and to manage. Ultimately, we need to create a new field, one that can make sense of the data smog we live in, helping people to leverage meaning from information, be they scientists or citizens, adults or children, rich or poor. This is the field of Information, and its mission is to enable, and even accelerate, discovery for the benefit of all.
     
A new field requires a new kind of school. And this is why we have schools of information.

Bill Aspray on the emergence of internet studies

Bill Aspray of IU’s School of Informatics delivered the 2007 Schneider Distinguished Lecture at the School of Information here in Austin yesterday, providing a detailed overview of the status of internet studies as a professional field of enquiry in the academy. I don’t care for the term ‘internet studies’, but as he noted, think ‘informatics’ or ‘information studies’ when you hear the term and the rest of his analysis applies. The Daily Texan coverage highlights the curricular aspects of his talk, but he spoke widely of the contested terrain we are witnessing in the intellectual and scholarly response to the social and technical phenomena surrounding the ‘net. What struck me most clearly from his presentation was the rather confused manner in which universities are responding. The tension between people in each discipline slicing off their own ‘relevant’ concerns and those who want to create a new interdisciplinary field of enquiry is palpable. As Bill noted, interdisciplinary programs don’t tend to do well in most universities, which of course should make us wonder whether disciplines have taken on the role of distinguishing themselves to the exclusion of asking and answering important research questions.