Research Areas
My overall research goal is to examine the impact, value, and outcomes of information behavior and interaction, focusing on how information interaction improves human capabilities across work, school, and everyday life contexts. My research interests fall broadly into four areas:
- Supporting Creativity in Searching
- Search as Learning
- Credibility Assessment in Information Seeking and Creation
- Information Literacy
Supporting Creativity in Searching
In recent years, my research has focused on the intersection of search behavior and creativity, examining people’s creative thinking processes as they search for new ideas. The purpose of this research program was two-fold: (1) to understand how people search for information to generate ideas; and (2) to investigate how to design creativity support tools for idea generation during searching. For the first purpose, we conducted an exploratory study investigating the search processes people engage in while completing creative tasks. We were also interested in identifying the creative thinking strategies people employ when searching to generate new ideas (Chavula, Choi, & Rieh, 2024). We conceptualized creativity as a process in which people generate novel ideas in their everyday lives. We collected data in two different settings for each participant: their recollection of previous search episodes in which they engaged in creative tasks, and their performance and experience on creative tasks given to them during the session. Based on the data analysis, we developed a framework of search phases for creative tasks. The framework consists of four discrete and iterative phases of creative thinking processes: planning for creative search tasks, searching for new ideas, synthesizing search results, and organizing ideas (Chavula, Choi, & Rieh, 2022). We identified several distinct behaviors within each phase. For example, when searching for new ideas, our participants took exploratory search approaches, embracing opportunistic exploration from multiple perspectives. During the synthesis phase, they actively compared and selected information while combining ideas. To organize ideas, they demonstrated behaviors such as note-taking, grouping, and labeling. By developing a framework grounded in empirical findings, we were able to provide a theoretical foundation for studying creativity within search systems and insights for future research on the kinds of search behaviors associated with creative tasks.
For the second purpose, my research team developed an idea generation tool to support creativity in academic search contexts. This web-based interactive visual tool, called SearchIdea, was designed to facilitate prioritizing search results, eliciting keywords and ideas, and organizing ideas. SearchIdea consists of three main interface features: the search results, SearchMapper, and IdeaMapper. With SearchMapper, users can save search results for comparison and rearrangement. IdeaMapper enables users to create a mind map to elicit words and phrases from the search results, add new words or phrases, and organize ideas visually. As a baseline tool, we also designed a text editor called IdeaPad, which allows users to take notes and keep track of ideas. The results of our evaluation study demonstrated that introducing visual features in search tools facilitates active engagement with search results (Chavula, Choi, & Rieh, 2023). We also found that active user engagement enhances creative idea generation, and that the mind mapping features helped users identify associations between topics and organize their ideas more effectively.
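To make this design concrete, the sketch below models the relationship between saved search results and mind-map nodes that IdeaMapper supports. It is a minimal illustration under my own assumptions: the class names, fields, and example data are invented for exposition and do not reflect SearchIdea’s actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SearchResult:
    """A result that SearchMapper lets users save, compare, and rearrange."""
    title: str
    url: str
    snippet: str

@dataclass
class IdeaNode:
    """A node in the IdeaMapper mind map: a word or phrase, optionally
    elicited from a saved search result, linked to related ideas."""
    label: str
    source: Optional[SearchResult] = None  # None when the user adds it freely
    children: List["IdeaNode"] = field(default_factory=list)

    def add_child(self, child: "IdeaNode") -> "IdeaNode":
        self.children.append(child)
        return child

# Example: elicit a phrase from a saved result, then branch new ideas from it.
saved = SearchResult("Mind mapping in design education", "https://example.org", "...")
root = IdeaNode("creative search tools")
elicited = root.add_child(IdeaNode("mind mapping", source=saved))
elicited.add_child(IdeaNode("grouping related topics"))
```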
Search as Learning
Over the past decade, I have been one of the leading researchers in starting and expanding a research initiative on “searching as learning.” There is now a fast-growing community of researchers who see great opportunities to leverage and extend current search systems to foster learning, reconfiguring them from information retrieval tools into rich learning spaces in which search experiences and learning experiences are intertwined and even synergistic.
I was involved in organizing the first Searching as Learning (SAL) workshop, held in conjunction with the Information Interaction in Context (IIiX) Conference in Regensburg, Germany, in August 2014. There have been three additional workshops on this theme since then: SAL 2016, co-located with the SIGIR Conference; the Dagstuhl Seminar on Search as Learning (2017); and SAL 2018 at the ASIST Annual Meeting. I also co-edited a special issue on “Recent advances on searching as learning” for the Journal of Information Science and co-authored the introduction to the special issue (Hansen & Rieh, 2017).
I have contributed to the formulation of a research agenda for searching as learning with two distinct goals. The first goal is to develop a conceptual framework that demonstrates how learning and searching intersect in the search process. For instance, I co-authored a comprehensive review article (Rieh, Collins-Thompson, Hansen, & Lee, 2016) in which we proposed the concept of ‘comprehensive search’ to describe iterative, reflective, and integrative search sessions that facilitate individuals’ critical thinking abilities (critical learning) and directly support the development of new ideas (creative learning). We made distinctions between search activities supporting critical learning and those supporting creative learning. Examples of such activities include critically evaluating the usefulness of information, assessing the credibility of information, differentiating the value of information, analyzing multiple sources of information, and prioritizing information through comparisons. In this framework, we demonstrated how activities related to comprehensive searching could nurture a new capacity for explorative and integrative thinking as a result of comparing diverse perspectives. Together with Catherine Smith, I expanded on this framework by introducing the concept of “knowledge-context,” defined as meta-information that searchers utilize to interpret information presented on a search engine results page (SERP). We argued that enriching the knowledge-context in SERPs has great potential for facilitating human learning, critical thinking, and creativity by expanding searchers’ information-literate actions such as comparing, evaluating, and differentiating between information sources. We then proposed the development of learning-centric search systems designed to support information-literate actions. This paper was recognized with an Honorable Mention at the 2019 ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR).
The second goal is to develop methods, measures, and indicators that demonstrate learning experiences and outcomes in search systems. In a paper presented at the first CHIIR conference (Collins-Thompson, Rieh, Haynes, & Syed, 2016), we reported a lab-based user study of an interactive search system in which searchers were asked to perform two types of learning tasks that differed in difficulty. We developed and analyzed a rich set of explicit and implicit learning measures based on four different data sources: analysis of written summaries submitted by subjects across the pre-search and post-search phases; a pre-search questionnaire; a post-search questionnaire; and search interaction logs. The data analysis revealed a number of explicit and implicit indicators potentially useful for measuring learning in web searching. For instance, we found that perceived learning outcomes measured through questionnaires were positively correlated with both lower-level cognitive learning scores gathered from written summaries (measuring remembering, understanding, and applying) and higher-level cognitive learning scores (measuring analyzing, evaluating, and creating). This result implies that subjects were able to assess their own learning outcomes reasonably well, and that perceived learning outcome scores could be used as a measure of learning in searching. In terms of implicit indicators, we found a strong positive correlation between perceived learning outcomes and actual knowledge gain, particularly when subjects were introduced to search results with more diverse topics. Overall, the results of this study provide deeper insights into how learning can be assessed reliably during web searching, and which aspects of search interaction more effectively indicate learning outcomes.
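As a hedged illustration of this style of analysis (not the study’s actual code, measures, or data), the sketch below computes Pearson correlations between questionnaire-based perceived learning scores and cognitive learning scores coded from written summaries; all variable names and values are invented.

```python
# Illustrative only: hypothetical scores for a handful of subjects.
from scipy.stats import pearsonr

# Perceived learning outcomes from post-search questionnaires (e.g., 1-7 scale).
perceived = [5.0, 3.5, 6.0, 4.0, 6.5, 2.5]

# Lower-level cognitive scores coded from written summaries
# (remembering, understanding, applying).
lower_level = [4.2, 3.0, 5.5, 3.8, 6.0, 2.8]

# Higher-level cognitive scores (analyzing, evaluating, creating).
higher_level = [3.9, 2.7, 5.8, 3.5, 6.2, 2.2]

for name, scores in [("lower-level", lower_level), ("higher-level", higher_level)]:
    r, p = pearsonr(perceived, scores)
    print(f"perceived vs. {name}: r={r:.2f}, p={p:.3f}")
```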
Credibility Assessment in Information Seeking and Creation
I am one of the pioneering researchers who have advanced research on credibility assessment of online information. My contribution to the area of information credibility is threefold.
First, I have contributed to developing two novel models of credibility assessment. The first model, on “judgment of information quality and cognitive authority,” was based on my dissertation research. While the primary purpose of my research was to identify the factors influencing people’s judgments of quality and authority in web searching and the effects of those judgments on selection behaviors, I also found that people made two distinct kinds of judgments, predictive judgments and evaluative judgments, and that the factors influencing each kind are distinct (Rieh, 2002). My research has demonstrated that people make predictive judgments about which websites contain credible information and then follow through with evaluative judgments in which they express the quality of the information encountered. Predictive judgments are based on people’s knowledge and experience, recommendations from others, or other characteristics of information objects or sources. Once people open a new web page, they make an evaluative judgment of how good, useful, trustworthy, and accurate the information is. If people find that their evaluative judgments do not match the expectations set by their earlier predictive judgments, they may return to a previous page or decide to start with a new page. By iterating this process, people can reach a point at which their predictive judgments and evaluative judgments match, and they will proceed to use that information.
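This cycle can be summarized schematically. The sketch below is a deliberately simplified rendering of the predictive/evaluative iteration, not a cognitive model from Rieh (2002); the scoring functions and the matching rule are my own illustrative assumptions.

```python
from typing import Callable, Iterable, Optional

def select_information(pages: Iterable[str],
                       predict: Callable[[str], float],
                       evaluate: Callable[[str], float]) -> Optional[str]:
    """Schematic sketch of the predictive/evaluative judgment cycle:
    predict which page looks credible, open it, evaluate the content,
    and keep iterating until the two judgments match."""
    for page in sorted(pages, key=predict, reverse=True):
        expectation = predict(page)    # predictive judgment before opening
        assessment = evaluate(page)    # evaluative judgment after reading
        if assessment >= expectation:  # judgments match: use this information
            return page
    return None                        # no match: start over with a new search
```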
Another credibility model, “a unifying framework of credibility,” was developed based on the results of an empirical study of people’s credibility assessments in a variety of everyday life information seeking contexts. We identified three distinct levels of credibility judgments: construct, heuristics, and interaction (Hilligoss & Rieh, 2008). The construct level involves how people define the notion of credibility that influences their judgments. The heuristics level pertains to general rules of thumb for credibility assessment, applicable to a variety of information seeking situations. The interaction level refers to credibility judgments based on cues from particular sources or content. These three levels of credibility assessment are interlinked; for instance, people’s constructions of credibility influence the kinds of heuristics they use in selecting a website from which to begin a search. Credibility heuristics can, in turn, influence the ways in which people pay attention to certain characteristics of information and sources. As people gain more experience with a certain source of information, credibility heuristics can be changed or extended. Should a heuristic prove consistent over time, it becomes a construct of credibility in people’s minds. This model also demonstrates that context influences all three levels of credibility assessment. Context includes the social, relational, and dynamic frames surrounding people’s information seeking processes, creating boundaries around an information-seeking activity or a credibility judgment itself. The context of credibility judgments can either guide the selection of resources or limit the applicability of judgments.
The second contribution I have made to the field is introducing the concept of “credibility heuristics.” Heuristics provide ways of conveniently finding information and quickly making credibility judgments (Hilligoss & Rieh, 2008). In two qualitative studies (Hilligoss & Rieh, 2008; St. Jean, Rieh, Yang, & Kim, 2011), we found ample evidence that people rely extensively on heuristic judgments when they make credibility assessments. The most commonly described heuristic was simply staying on familiar websites rather than taking the risk of using unfamiliar ones. A majority of participants returned to sites that they knew very well, and as a result, they did not need to be greatly concerned with information credibility; this helped them minimize the effort of examining the content on those websites. Selecting information from reputable sources was another common credibility heuristic. When participants described using these heuristics, they often remembered site names, organizations sponsoring sites, and URLs.
Third, I have contributed to diversifying research designs for credibility research. Most credibility researchers have used surveys and focus group interviews as their research methods. I have instead employed lab-based user studies (Rieh, 2002; Rieh, 2014), diaries and interviews (Hilligoss & Rieh, 2008), experience sampling methods (Rieh, Kim, Yang, & St. Jean, 2010), and interviews (St. Jean, Rieh, Yang, & Kim, 2011; Rieh, Jeon, Yang, & Lampe, 2014). I have avoided survey research because it makes it difficult to capture the context of credibility assessments, including elements such as user goals, intentions, search tasks, and information behavior. On one hand, I have controlled the topics (e.g., health, product, travel, research interests, and news) or user goals (e.g., information search task, content creation task) involved in participants’ searches. This approach enabled me to investigate the relationship between topics or user goals and credibility assessment. On the other hand, I have asked study participants to first describe particular activities they engaged in, and then investigated their credibility assessments using the mixed methods of diaries and interviews. As a result, I have been able to relate multiple dimensions of credibility assessment to levels of user participation on the Internet, types of digital media, and topics of online content. For instance, the results of one study showed that people conceptualize credibility along multiple dimensions and consider certain dimensions more important than others, depending on the types of digital media they use. When people use social networking sites, currency, trustworthiness, and truthfulness are the most important constructs; when they use online news sites, they consider currency, accuracy, and reliability more important than trustworthiness or truthfulness. According to our online activities diary study, expertise, which has long been understood as a primary construct for credibility assessment, is considered important only when people use articles or e-books.
I have also expanded credibility constructs for social media research and demonstrated the role of credibility perceptions in the online activities of content contributors. In one study (St. Jean, Rieh, Yang, & Kim, 2011), we investigated how content contributors assess credibility when gathering information for online content creation and mediation activities, as well as the strategies they use to establish the credibility of the content they create. These contributors reported engaging in content creation activities such as posting or commenting on blogs or online forums, rating or voting on online content, and uploading photos, music, or videos. From this study, we identified three distinctive ways of establishing credibility, applied during different phases of content contribution: ensuring credibility during the content creation phase; signaling credibility during the content presentation phase; and reinforcing credibility during the post-production phase. We also discovered that content contributors tend to carry over the strategies they use for assessing credibility during information gathering to their strategies for establishing the credibility of their own content.
In another study (Rieh, Jeon, Yang, & Lampe, 2014), we examined how bloggers establish and enhance the credibility of their blogs through a series of blogging practices. Based on an analysis of interviews with independent bloggers who blog on a range of topics, we presented audience-aware credibility as a theoretical construct: how bloggers signal their credibility based on who they think their audience is and how they provide value to that perceived audience. The analysis of bloggers’ credibility constructs, conceptualizations of their audience, and perceived blog value identified four types of bloggers who construct audience-aware credibility in distinctive ways: Community Builder, Expertise Provider, Topic Synthesizer, and Information Filterer. We were able to discern the blogging practices each type used to establish credibility and the strategies they used to interact with their audience to enhance it. The findings revealed that the multi-dimensional construct of audience-aware credibility serves as a driving factor shaping the blogging practices of all four types of bloggers.
Information Literacy
I have conducted several studies on developing information literacy programs and assessing the information literacy competencies such programs foster. For instance, I collaborated with Karen Markey on an IMLS-funded project in which we designed, deployed, and evaluated an information literacy game called BiblioBouts. This project led us to publish a book entitled “Designing Online Information Literacy Games Students Want to Play” (Markey, Leeder, & Rieh, 2014). In this book, we presented how the game’s design evolved in response to student input and how students played the game in dozens of college classrooms, including their attitudes toward playing games to develop information literacy skills and concepts specifically, and toward playing educational games generally. My contributions to this project focused on providing theoretical frameworks of information literacy and a research design for the evaluation study, in which we examined the effectiveness of BiblioBouts for teaching students how to conduct library research.
For another IMLS-funded project, I collaborated with academic librarians and master’s students to develop a novel performance-based method to assess college students’ information literacy competencies. We developed a librarian role-playing method and identified its benefits and drawbacks based on the results of two empirical studies (Rieh et al., 2022). The first study assessed students’ information literacy competencies while searching for information in an academic library system, and the second study investigated their competencies in evaluating online sources. Using this method allowed us to gather more accurate data about students’ thinking processes and strategies. The results showed that the librarian role-playing method provided researchers with opportunities to observe how students externalize their critical thinking skills, making their thought processes visible.