Workshop: Where have all the participants gone? | Abstracts

What motivates or incentivizes participation in research studies?

Valerie Keppenne, Penn State University

In this lightning talk, I present results from an informal survey investigating student participation in research. The survey was distributed to students after they participated in one of three studies in 2021 (online) or 2022 (in-person). It included questions about students' reasons and motivations for participating in research studies, the impact that study modality and length may have on the likelihood of participation, and the kinds of incentives that might increase the probability of participation. Results from 17 participants (5 online, 12 in-person) indicate that motivations to participate are often personal and related to academic success, that students are slightly more likely to participate in online studies, and that participation is best incentivized with increased compensation or extra credit. The survey therefore provides important insights into what motivates participation and shows which incentives are most likely to increase it.

Recruiting Language Faculty with Experience in Teaching Visually Impaired Students Online

Liane She, University of North Carolina at Charlotte

For my dissertation, I examine language faculty's experiences teaching visually impaired students online in higher education. I am in the process of conducting 10 to 12 phenomenological, semi-structured interviews over Zoom. After editing and revising the interview transcripts and conducting member checking, I will read the transcripts closely and upload them to the qualitative research software NVivo. I then hope to identify inductive codes and subcodes that will help identify themes and, ultimately, a common phenomenon. To thank my participants, and in hopes of recruiting more, the study is incentivized: all participants are entered into a drawing for a $50 gift card. Initially, I used purposeful sampling, emailing all the language department chairs in North Carolina and requesting that they forward my email to language faculty who taught or are currently teaching visually impaired students online. As very few faculty responded, I amended my initial IRB application to interview language faculty nationwide through different professional associations, using both purposeful and snowball sampling. Recruitment is still in progress, and I will share other strategies that will hopefully assist in recruiting more participants.

L2 Listeners’ comprehension of accented speech: Using Prolific to successfully recruit bilingual participants

Gregory Costanzo, Penn State University

The present study examines how listeners with Dutch as their native language (L1) and English as their second language (L2) comprehend English sentences produced in Dutch-accented English, Southern American English, Chinese-accented English, and unmarked American English. Sentences were presented in both noisy and quiet conditions, as previous studies suggest that noisy environments can exacerbate the difficulty of comprehending unfamiliar accents. The study was conducted online, using LabVanced to run the experiment and Prolific to recruit participants. Prolific is an online recruitment service that connects researchers with participants in a timely and convenient fashion. Through this service, 40 Dutch-English bilinguals were recruited in a period of two months. Given that the COVID-19 pandemic has made it harder for social and behavioral scientists to find participants, Prolific appears to be a highly effective solution.

Using Qualtrics and QR codes to enhance in-person and online recruiting

Nick Henry, The University of Texas at Austin

In this lightning talk, I compare two approaches that use QR-linked Qualtrics surveys to recruit for a study conducted both in person and online. In the first approach, participants enrolled in the study by completing a Qualtrics survey that contained the first part of the experiment (a consent form and background questionnaire). In the second approach, participants initially provided contact information via Qualtrics and later completed the same surveys used in the first approach. Anecdotal evidence suggests that the second approach was more effective.

Leveraging Colleagues for Multi-site Online Studies

Carrie Jackson, Penn State University

In this lightning talk I will present several examples of ways students and I have distributed links for online studies to colleagues at other universities, as a way to expand the potential participant pool. The presentation will include information about following IRB guidelines for recruiting across institutions, compensating participants at remote locations, and taking into account factors that might impact the overall variability in proficiency and language background across participants. I will also discuss strategies for minimizing the additional burden on colleagues at other institutions when pursuing multiple sites for data collection.

What makes a language training study successful? Listening to your community

Vika Tkacikova, University of Pittsburgh

Language training studies have been considerably impacted by pandemic fatigue, and the Psycholinguistic Underpinnings of Multilingualism (PLUM) Lab at the University of Pittsburgh has faced an exceptionally challenging year since launching a multi-visit ERP experiment in November of 2021. The biggest challenges we faced involved participant recruitment, retention, and community interest. Given that our study involves five laboratory sessions over the course of six weeks, retention rates were initially low. To address low retention and recruitment rates, we focused our efforts on community needs: with guidance from community-engagement experts at the University (Pitt+Me), we expanded our recruiting efforts to include residents of the greater Pittsburgh area, provided free parking for participants, and fostered an inclusive and welcoming environment in the lab to encourage participants to return. We will discuss these and other tools we plan to implement during the workshop.

Remote recording of conversational data: A guide for participants

Sarah Rose Bellavance, New York University

During the fall of 2020, I collected conversational data from children for my Master's thesis. Given the COVID-19 pandemic, I adjusted my protocol to minimize related health risks. Participant recruitment shifted to community-based "snowball" networking, and speech recordings were made in the participants' homes without my presence. I chose this method over video-conferencing recordings because of the superior audio quality of the smartphones on which conversations were recorded. Parents downloaded the Voice Recorder Pro (Dayana Networks Ltd.) phone app, chosen because it lets users select bit depth, sampling rate, and sound file type, and recorded a conversation with their child. After the recordings were completed, parents sent the audio files to me via email. In this workshop, I will present the participant packet that I created to guide parents through recording a sociolinguistic conversation with their child.

Visualizing and modeling data from studies with low participant numbers

Valerie Keppenne, Penn State University

In this lightning talk, I present ways of analyzing, and in particular visualizing, data from research studies with low participant numbers to supplement and bolster statistical analyses. Specifically, I present graphs from an online study that is part of my dissertation work. Data from 46 participants are presented in violin plots, line graphs, scatterplots, and interaction plots (with continuous variables), created with different visualization packages in R (R Core Team, 2020). In addition to presenting results using these visualization techniques, I calculated confidence intervals on model estimates to better understand significant and nonsignificant effects, their direction, and their range. In this talk, I hope to show that data visualization, together with an understanding of the reliability of model outputs, is important for more confidently presenting potentially underpowered research findings.
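The abstract does not specify how the confidence intervals on model estimates were computed; a minimal sketch of one common approach, a Wald-style 95% interval built from a coefficient estimate and its standard error, might look like this (the numbers are illustrative only, not values from the study):

```python
import math

def wald_ci(estimate, std_error, z=1.959964):
    """Return (lower, upper) bounds of an approximate 95% Wald confidence interval."""
    margin = z * std_error
    return (estimate - margin, estimate + margin)

# Hypothetical model coefficient of 0.42 with a standard error of 0.15.
lower, upper = wald_ci(0.42, 0.15)
# An interval spanning zero would suggest a nonsignificant effect.
crosses_zero = lower < 0 < upper
print(round(lower, 3), round(upper, 3), crosses_zero)  # 0.126 0.714 False
```

Because the interval excludes zero in this illustrative case, the hypothetical effect would be interpreted as reliably positive; the width of the interval conveys the precision of the estimate, which is especially informative with small samples.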
