As technologies improve, developers are increasingly turning toward interactive interfaces in gaming, the arts, museum studies, and data visualization. Interactivity has been shown to improve connection, understanding, and above all, engagement with complex topics or materials.
As simulation data grows more complex and dense, interactivity beyond the traditional screen-and-mouse setup has the potential to drastically improve both data exploration through visualization and science communication to a lay public or an audience of stakeholders.
Our vislab is interested in developing both, and toward this end it is working to more closely couple bodily movement, physical artifacts, and screen-based content to provide semi-immersive experiences for researchers using big-data simulations.
This work is ongoing and is based at the TACC HCI lab on UT’s Pickle Research Campus. Using the HCI lab’s Rattler screen setup—a group of 15 interconnected and interchangeable high-definition 8K screens—together with computer vision technology including Microsoft’s Azure Kinect, high-fidelity sensing towers, and physicalized data registered via touchpad, we are experimenting with new methods for interacting with and exploring dense, multivariate environmental data at a large scale.
V-MAIL
V-Mail is a framework of cross-platform applications, interactive techniques, and communication protocols for improved multi-person correspondence about spatial 3D datasets. V-Mail was inspired by the daily use of traditional e-mail to correspond with ease via whatever device happens to be handy—between team members who are often separated by distance and/or schedule. With V-Mail, we seek to enable a similar style of rapid, multi-person communication accessible on any device, but accomplish this goal for the first time in the context of spatial 3D communication, where large datasets and limited access to 3D graphics hardware are typically prohibitive.
The approach extends visual data storytelling by adding multi-user snapshots and annotations situated in a common 3D data space as well as a keyframe-style story timeline with animated transitions. To enable asynchronous, cross-platform co-authoring for large-scale 3D data, we introduce a client-server architecture and communication protocol that relies on a standard video file as a story token.
Such videos can be viewed on every modern computing device and thus establish a baseline of access. The power of V-Mail, however, comes from a series of complementary client applications and plugins that enable different styles of story co-authoring, adjusting automatically to the capabilities of the current device.
A lightweight phone-based V-Mail app used by one team member while walking through the airport, for instance, makes it possible to annotate the data by adding captions to the video. Since the client-server approach automatically associates these captions with the underlying spatial 3D context, the annotations are then immediately accessible to team members working in a science lab with a high-end 3D graphics visualization system that includes a V-Mail plugin, or even in an immersive CAVE-like VR environment.
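To illustrate the story-token idea, the following is a minimal sketch, not the actual V-Mail protocol: all class names, fields, and the keyframe-snapping behavior are our own assumptions. It models a video file as the shared token, with a sidecar mapping from video timestamps to saved 3D view states, so that a caption added on a phone at some video time can later be re-registered in the full 3D data space by a more capable client.

```python
# Hypothetical sketch of a video-as-story-token scheme (all names and
# fields are illustrative assumptions, not the real V-Mail protocol).
from dataclasses import dataclass, field

@dataclass
class ViewState:
    """Camera and data state captured when a story keyframe was rendered."""
    camera_pos: tuple
    camera_target: tuple
    dataset_id: str

@dataclass
class StoryToken:
    """The video plus the metadata needed to rebuild the 3D context."""
    video_file: str
    keyframes: dict = field(default_factory=dict)   # video time (s) -> ViewState
    annotations: list = field(default_factory=list)

    def nearest_keyframe(self, t: float) -> float:
        # Snap an arbitrary video timestamp to the closest stored keyframe.
        return min(self.keyframes, key=lambda k: abs(k - t))

    def add_caption(self, t: float, text: str) -> None:
        # A lightweight client knows only the video time; the server side
        # resolves it to a saved 3D view state so heavier clients can
        # situate the annotation in the shared data space.
        kf = self.nearest_keyframe(t)
        self.annotations.append(
            {"time": kf, "text": text, "view": self.keyframes[kf]}
        )

# Usage: two keyframes are registered, then a caption typed at video
# time 4.2 s is snapped to the nearest keyframe's 3D view state.
token = StoryToken("story.mp4")
token.keyframes[0.0] = ViewState((0, 0, 10), (0, 0, 0), "antarctica_sim")
token.keyframes[5.0] = ViewState((5, 2, 8), (1, 0, 0), "antarctica_sim")
token.add_caption(4.2, "Grounding line retreats here")
```

The design point this sketch tries to capture is that the video itself stays a plain, universally playable file; only clients with 3D capability ever dereference the view-state metadata.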
In addition to the system design, interaction techniques, and communication protocol, we report on user feedback and lessons learned from a first deployment within a real team-science context where V-Mail was used as a cross-disciplinary and general public communication tool for visual analysis of supercomputer simulations of the Antarctic ice sheets under different future climate scenarios.