April 1, 2016

Visit Professor Dan Keefe’s Interactive Visualization Lab (IV/Lab) and what you’ll see is a place that is equal parts advanced technology and art gallery.  Dominating the room is an interactive virtual reality space called “the Cave,” in which users immerse themselves in environments not unlike Star Trek’s holodeck.  Surrounding the Cave is an array of computers on which no less remarkable research is being carried out, from visualizing the human body in motion, to creating artistically rendered climate maps, to building data-precise 3D models of Minnesota forests.

Keefe’s IV/Lab sits at the intersection of human experience and current advancements in digital immersion.  The mission he and his students pursue is to perform fundamental research that enables these tools to be deployed for the benefit of society, without regard for present-day constraints.  If they can imagine a way to represent data with their impressive set of technological tools, they go after their vision, attempting to create more mindful interactions between humans and computers.

Here are a few examples of how Professor Keefe and his students are creating and utilizing interactive computer graphics, data-processing algorithms, and radical new computer technologies to address and understand problems through data, design, and art.

Speaking to thousands in “The Cave”

Imagine going back in time to 450 BCE, looking out over a crowd of thousands of Greeks, and trying to convince them to go to war to expand the empire.  What would it look like?  Would a crowd this size even hear the message without modern amplification technology?  Where would a political discussion like this take place, and why would ancient people choose that spot?

These are just a few of the questions Ph.D. student Volcano Silver is trying to help answer in collaboration with CS&E Professor Stephen Guy, postdoctoral research associate Ioannis Karamouzas, Writing Studies Professor Richard Graff, and collaborators at Pennsylvania State University.  Their project, called “Visualizing Ancient Greek Rhetoric,” is part of a long-term study to catalog and classify structures from the late Archaic, Classical, and Hellenistic periods (ca. 500–100 BCE) that staged performances of political and legal oratory.

Researchers and anthropologists have had difficulty answering many questions about these performances and the structures in which they were staged because, until now, there has been no good way to model them based on the archaeological evidence left behind.

“There’s no way to understand these questions without creating the scene virtually and running models,” said Silver.  “For instance, there’s no way to see how humans interacted with the environment or how far a speaker’s voice would carry if there were, say, a strong wind.”

To answer these questions, the IV/Lab has developed a completely immersive virtual reality (IVR) application for visualizing the physical setting of ancient Greek oratorical performances.  The visualization extends current research on comparative data analysis.  Here, the comparisons involve multiple historical phases as well as multiple possible arrangements for the virtually created crowd of assemblymen.  The crowd is created using cutting-edge crowd simulation techniques developed by Professor Guy and postdoc Karamouzas.  An ensemble of simulations covering three historical phases and assembly sizes ranging from 1,000 to 14,000 citizens was run.  The immersive visualization provides a way to analyze these data in a spatial context.  A multitouch display is linked to the immersive Cave display and is used for navigating the massive site and adjusting the visualization of the historical phases.  Finally, a speech-based interface (the team playfully calls it a “yelling interface”) provides an innovative, real-time, experiential understanding of just how loud you would have to speak in order to be heard by the crowd: assemblymen in the 3D visualization turn blue if they can hear you and red if not.
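The article does not describe the acoustic model behind the yelling interface, but the blue/red coloring can be approximated with a textbook free-field spreading model, in which sound pressure level (SPL) falls by 20·log10(d) dB relative to its value one meter from the speaker.  The sketch below is illustrative only; the 45 dB noise floor and all function names are assumptions, not the lab’s code:

```python
import math

def audible(speaker_spl_1m, listener_dist_m, noise_floor_db=45.0):
    # Free-field spherical spreading: SPL falls 20*log10(d) dB
    # relative to the level measured 1 m from the speaker.
    spl = speaker_spl_1m - 20.0 * math.log10(max(listener_dist_m, 1.0))
    return spl > noise_floor_db

def color_crowd(speaker_spl_1m, distances_m):
    # Blue if a simulated assemblyman can hear the speaker, red if not.
    return ["blue" if audible(speaker_spl_1m, d) else "red"
            for d in distances_m]
```

A real acoustical simulation would also account for terrain, occlusion, wind, and crowd noise; this only captures the basic distance falloff that makes large assemblies so hard to address.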

The project has already proven useful for evaluating the accuracy of site reconstructions and has furthered the understanding of oratorical performances through acoustical simulations, using color maps to visualize the areas where audiences were likely able to hear.  The technique illustrates just how loud a speaker needed to be to be heard by all—and so far, Silver has been unable to find any volunteer who could speak loudly enough for the venue.

“We’ve had very large people try to speak loud enough for a crowd of 14,000,” said Silver.  “They just can’t seem to project far enough.”

It’s helping researchers rethink how the Greeks used the space and whether it fulfilled its purpose, or whether they relied on other accommodations to get their message across.

Before the Cave, there was no way to test this, because it would be impossible to gather thousands of people in a single place to test a theory.  Now, it’s just a matter of running a simulation.

Turning climate data into a Monet

One of the fundamental challenges of the IV/Lab is taking extremely complex data sets and finding a way to present them visually in a clear way.  To put it another way: how does one take a data set with many variables and hundreds to thousands of data points and display it in a comprehensible visual field?

“It becomes a very difficult design problem,” said Prof. Keefe.  “But we took that as a research challenge.”

One of the Ph.D. students confronting this challenge is Seth Johnson, through an NSF-funded project called “Visualization by Sketching, Analogy, and Computational Creativity.”  The project continues the work of former Ph.D. student David Schroder, and building on it, Johnson’s goal has been to keep making scientific visualization tools accessible to artists and illustrators.

Using a sketch-based interface, artists are able to make hand-drawn marks to create images that remain consistent with the underlying data, which allows for much more distinctive and eye-catching data maps and models.

“We want to move away from the generic, eight-color, unhelpful, built-in color maps,” said Johnson.  “You just can’t see as much with those limitations.”

The example Johnson demonstrated was a climate map.  But to call it just a climate map is to imagine a nightly news weather person standing in front of a green-screened, boldly colored representation of the U.S., and that image overlooks one of the project’s most stunning aspects.

“They have this painterly quality,” Prof. Keefe added.  “You won’t see this on the Channel 5 weather map.  Something very special and different is happening in this.”

This was not by accident.  Johnson has been working with Francesca Samsel, a professional artist from the University of Texas at Austin.  Samsel worked with each layer in the map, assigning colors and blending it all with care into something much more appealing.  Gone are the standard 8-bit color schemes of a bygone Atari era.  In their place is something more like an ever-changing, impressionistic painting that is still scientifically accurate.

“We realized that researchers are great with data, but are not that artistic,” said Johnson.  “So we wanted to provide a tool where artists, who may not have an in-depth knowledge of the technical data, can still use artist-friendly tools to create scientifically accurate visualizations.”

The results of the project are not just better design.  Johnson is not simply making beautiful, data-precise pictures.  Rather, the Lab’s openness to better design has led to better results.

“We see more data in the visualizations than we saw before,” said Prof. Keefe.  “Everything in the data is the same, but the variety of colors and the way these artists present it bring out much more.”

However, for those simply interested in looking at something pretty, the visualizations have that to offer, too.

Diving into the human heart

Back in the ’80s, the only way to get inside the human heart physically was going to the movies—perhaps to see the science fiction comedy Innerspace, in which a submersible pod is shrunk to microscopic size and injected into Martin Short.  Nowadays, that fiction is accessible, safer, and much less cumbersome.  The tools are a little different, however.  In place of the submarine, syringe, and shrink ray are the Cave’s four display screens, along with motion-tracking cameras, multiple projectors, and two handheld wands.  Together, these tools let users enter an entirely unique virtual reality environment.

Ph.D. student Dan Orban demonstrated the project, which allows users to explore an interactive, holographic heart and track blood flow, visualized by little graphic droplets that represent speed and oxygenation of the blood tumbling within the model.

“You can look under the heart or walk around it to the side,” Orban said.  “We have cameras that track your head, showing you where you are and giving you an accurate depiction of motion in the environment.”

Sure enough, when you moved as Orban suggested, the heart moved in perfect relation to your view and body position.  But that wasn’t all:

“You can also use wands to scale up or down the heart or rotate it,” said Orban.  To demonstrate, he deftly used the wands like a symphony conductor to flip, rotate, resize, and draw guiding lines through the visualization.  It was similar to another science fiction film, Minority Report, where Tom Cruise manipulates holographic images with a wave of his hand.

“What’s really cool about this work is that since this is VR, you can dive into the heart,” Orban added, and with a quick abracadabra of the wand, he brought the view right into the atrium of the heart.

From that surreal vantage, Orban described what this kind of technology could do, emphasizing the importance of flow data in medical device design.  With such an immersive view, patterns quickly started to emerge—areas where the blood flow was slow, quick, stagnant, or turbulent.  All of this information can help medical device designers see where potential stressors could arise when implanting a device.

“What we want to show is what would happen if you put a device in this model—a valve or a lead—and see how it reacts to the blood flow,” Orban explained.  “We can look at areas of potential stress—such as lower blood flow areas, which may collect plaque and build up forces on medical device implants.”
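The article doesn’t detail how the droplets are computed, but flow visualizations of this kind are often built on particle advection: seed points are stepped through a precomputed velocity field, and their speed drives the coloring.  The toy sketch below (a uniform stand-in field, an invented threshold) only illustrates the idea, including flagging the low-flow samples Orban describes:

```python
def advect(seed, velocity_at, dt, steps):
    # Trace one droplet through a steady velocity field with
    # forward-Euler steps, recording its trail and per-step speed.
    trail, speeds = [seed], []
    x, y, z = seed
    for _ in range(steps):
        vx, vy, vz = velocity_at(x, y, z)
        speeds.append((vx * vx + vy * vy + vz * vz) ** 0.5)
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        trail.append((x, y, z))
    return trail, speeds

def low_flow(speeds, threshold):
    # Flag samples slow enough to be potential plaque-buildup
    # or device-stress sites.
    return [s < threshold for s in speeds]

# A toy uniform field standing in for a simulated flow solution.
uniform = lambda x, y, z: (0.5, 0.0, 0.0)
```

A production system would use the simulated, patient-specific velocity field and a physically motivated threshold rather than these placeholders.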

The applications of this project are groundbreaking: medical professionals could potentially determine the internal geometry of a patient’s anatomy, run blood flow simulations, or draw out pathways through anatomical structures for surgical procedures, all without invasive medical exploration.

“We’re able to do a lot more artificially with these models,” Orban said.  “Imagine if we could paint on some plaque to see how it reacts to the artificial lead we’ve built within the model.  This could allow for a virtual, real-time test—we hope to speed the analysis process up a lot.”

Orban’s project is part of the IV/Lab’s overarching big data visualization initiative, a collaboration with the Medical Devices Center and Visible Heart Lab at the University of Minnesota as well as the Research Computing Center at the University of Chicago.  Co-funded by the National Science Foundation (NSF) and the National Institutes of Health (NIH), the big data project seeks to transform the way scientists, engineers, and medical professionals make use of big data through high-tech, immersive visualization tools.

Stretching time to illustrate movement

When Prof. Dan Keefe introduced Devin Lang, he joked that Devin is an “undergraduate who ought to be a Ph.D.”  Judging from Lang’s project, “Multivariate Trajectory Visualizer (MulTraVis),” Prof. Keefe wasn’t joking.

The goal of Lang’s project is to create a visualization tool so that any researcher who studies complex motion can load in data and visually analyze it with ease, in hopes of gaining new insight.

Lang demonstrated this using footage from motion capture (“mo-cap” for short), the process of recording the movement of objects or people.

“You ever see actors in front of a green screen with ping pong balls all over them?” Lang explained.  “That’s what we use.”

To show his project in action, Lang pulled up a mo-cap video of a person swinging a golf club that he had previously loaded into his MulTraVis tool.  What you see is a kind of stick figure repeatedly going through the golf swing, with color signatures and the entire trail of motion visualized.

“You can correspond the representations to anything in the data,” Lang clarified.  “In this case we visualized speed.  Fast is red, blue is slow.”

Working with different visual cues to map the data, Lang was able to project it onto a 3D plane within the program and then stretch the swing out—literally stretching time.

“You can see time moving in this direction and this allows you to really analyze the swing in a new way and hone in on different aspects you might be interested in,” said Lang.
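A minimal sketch of what such a time-stretch might look like: each mo-cap frame’s joint positions are offset along one spatial axis in proportion to the frame’s timestamp, and a per-frame speed is attached to drive the red/blue coloring.  The data layout and names are assumptions for illustration, not MulTraVis code:

```python
def time_stretch(frames, dt, stretch=1.0):
    # Unroll a looping motion along a "time axis": shift each frame's
    # joints in x by stretch * t, and attach the fastest joint speed
    # in that frame (used for the fast=red / slow=blue mapping).
    out = []
    for i, joints in enumerate(frames):
        t = i * dt
        if i == 0:
            speed = 0.0
        else:
            prev = frames[i - 1]
            speed = max(
                ((x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2) ** 0.5 / dt
                for (x, y, z), (px, py, pz) in zip(joints, prev)
            )
        shifted = [(x + stretch * t, y, z) for (x, y, z) in joints]
        out.append((shifted, speed))
    return out
```

Laying the frames side by side like this turns a motion that overlaps itself in space into a single readable ribbon through time.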

He illustrated this time-stretch function further by loading another visualization he created into the program.

“This data set was really entertaining because here we have a person wearing a motion capture outfit pretending to be a hummingbird,” he said.  “But what’s interesting is that this tool allows you to see subtle fatigue over time.  By the 11th flap, his flap is lower and slower.”
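Spotting that kind of fatigue can be as simple as finding the peaks in one joint’s height trace and comparing them over time.  A hypothetical sketch of the idea, not the lab’s analysis code:

```python
def flap_peaks(heights):
    # Local maxima in a 1-D trace of (say) hand height over time;
    # each peak is one "flap". Returns (frame index, height) pairs.
    return [(i, h) for i, (prev, h, nxt) in
            enumerate(zip(heights, heights[1:], heights[2:]), start=1)
            if h > prev and h >= nxt]

def tiring(heights):
    # A drop from the first flap peak to the last suggests fatigue.
    peaks = flap_peaks(heights)
    return peaks[-1][1] < peaks[0][1]
```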

It’s just this kind of precise analysis that gets Lang excited about the project, because he foresees a number of industries that could benefit from more exacting visualization tools, such as biomechanics and physical therapy, arthroscopic surgery, and mechanical product testing.

“The most exciting thing about this is that there is motion everywhere,” said Lang.  “The way we’ve done this technically is that we’ve created one of the simplest ways to represent motion so you could mo-cap anything.”

Growing a VR forest to combat climate change

Ph.D. student Jung Nam has been working with Dr. Charles Perry and researcher Barry Wilson of the U.S. Forest Service’s Forest Inventory and Analysis unit, using technology similar to the engines behind popular first-person shooter video games to grow a virtual forest representative of the forests that spread across the northern U.S. landscape.

Like all of the projects in the IV/Lab, Nam’s is data-accurate, using information the Forest Service has collected since the 1920s to create a 3D environment of these forests that can be surveyed and analyzed virtually, without heading back into the woods.

He is, in effect, bringing the forests into research offices and finding ways for interested parties to analyze crucial variables more easily.

“For every tree, they collect over a hundred variables,” said Nam.  “What we do is map the height, diameter, biomass and other things that can be done with 3D modeling.  What’s very exciting is that they also collect above ground and below ground carbon content.”

Nam further explained that since the U.S. Forest Service collects carbon content, scientists hope to use that data to determine the amount of carbon being pulled out of the atmosphere and, from there, answer a number of important questions concerning climate change, such as which trees can store the most carbon, which can store it the quickest, and which species would be best for pulling carbon out of the atmosphere.
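The article doesn’t say how the carbon numbers are derived, but forest inventories commonly estimate a tree’s dry biomass from its diameter with an allometric power law and take carbon as roughly half of that biomass.  The coefficients below are purely illustrative placeholders, not Forest Service values:

```python
def tree_carbon_kg(dbh_cm, a=0.1, b=2.4, carbon_fraction=0.5):
    # Toy allometric estimate: dry biomass ~ a * dbh^b, where dbh is
    # diameter at breast height. Coefficients are species-specific;
    # these are illustrative only. Carbon is commonly taken as about
    # half of dry biomass.
    biomass_kg = a * dbh_cm ** b
    return carbon_fraction * biomass_kg

def stand_carbon_kg(dbh_list_cm):
    # Total carbon stock across the measured trees of a plot.
    return sum(tree_carbon_kg(d) for d in dbh_list_cm)
```

Summing per-tree estimates like this across inventory plots is what lets a forest’s total carbon storage be tracked and compared across species.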

“We are very interested in what kind of trees are best at controlling carbon,” said Nam.  “This can affect forest management and policy for climate change.”

The difficulty, however, is modeling the carbon content of trees within his program.

“At this level our project is pure visualization. Height and diameter can be done with 3D modeling,” Nam said, “but carbon content?  How do you visualize that?”

Another challenge Nam faces is the way the U.S. Forest Service collects data.  For obvious reasons, they don’t take a rigorous tally of every tree within a forest; it would be too time-consuming.  Nam is looking at ways to fill in these gaps in the data virtually.

“I’m working with a lot of prediction methods to fill in the gaps in information,” said Nam.  “There’s uncertainty in how to visualize it, but there are also methods to address these challenges.”
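One standard family of prediction methods in forest inventory is nearest-neighbor imputation: a missing attribute at an unsampled location is borrowed from the measured plots whose auxiliary features (elevation, climate, satellite indices) look most similar.  A toy sketch of the idea, not necessarily the approach Nam uses:

```python
def knn_impute(target_features, plots, k=3):
    # Fill in a missing attribute for an unsampled location by
    # averaging that attribute over the k measured plots whose
    # auxiliary features are closest in Euclidean distance.
    def dist(features):
        return sum((a - b) ** 2
                   for a, b in zip(features, target_features)) ** 0.5
    nearest = sorted(plots, key=lambda p: dist(p["features"]))[:k]
    return sum(p["value"] for p in nearest) / k
```

The imputed values carry uncertainty, of course, which is exactly the visualization challenge Nam mentions.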

Ultimately, the team plans to develop two versions of the forestry visualization.  One will be scientist-facing, using interactive data visualization to help forestry and climate scientists to answer specific current research questions; and one will be public-facing, helping the forest service to tell the story of change in the climate and forests to K-12 students and the general public.

To find out more about the Interactive Visualization Lab (IV/Lab) and the exciting and groundbreaking research projects coming out of it, be sure to visit their website.