This semester UT Computer Science welcomed Amy Pavel as a new assistant professor. Pavel's work sits at the intersection of accessibility and computer science. Her research at UT Austin expands on these themes by exploring how people with disabilities, as well as people in different situations and with different preferences, interact with emerging forms of media such as virtual reality and augmented reality.
While Pavel’s previous projects often centered on making visual content accessible to people who are blind or visually impaired, her research at UT Austin extends beyond video to more complex media such as VR and AR environments. Pavel is building on established techniques for making videos accessible, continuing that pursuit while refashioning those solutions to fit new forms of media like virtual reality and augmented reality, along with tangibles and robots. For decades, our main forms of online communication were text-based and therefore straightforward to convey to people with visual impairments through screen readers; VR and AR environments present new challenges for creators trying to communicate content effectively to a wide audience. VR offers full environments, so automating a description becomes more complex when there are 360 degrees of content to cover. Augmented reality adds further difficulty by requiring the communication of both digital and physical stimuli. Both forms of media also often demand certain motor abilities to interact fully with the content. All of these challenges are part of the exciting frontier that Pavel is investigating through her research.
While increasing accessibility is an important impact of her research, it also aims to alleviate obstacles that hinder effective communication across a variety of situations. She described this perspective by noting that whenever a piece of media is created, its audience is assumed to have certain abilities, such as sight, hearing, or even the ability to pay full attention without distraction. With modern digital communication reaching larger audiences than ever before, however, it has become less likely that the real audience will match the default audience assumed when the content was made. For example, an audience member could be a commuter who cannot watch visual content while driving, or someone nearsighted who needs more detailed descriptions until they find their glasses. There are many ways in which media currently isn’t well suited to its real-life audience.
When asked for a specific finding that surprised her in her research so far, Pavel explained: "blind users that I've talked to mentioned that no one ever wants the exact same thing.” She elaborated that, for example, one viewer might be interested in hearing about the events in a video clip while another might care more about the costume design or scenery. Shifting the focus to automated solutions that adjust to viewers' abilities, situations, and interests is a cornerstone of her current research.
Pavel summed it up succinctly: “the goal is to make the media we are creating more accessible to a much broader range of audiences and allow people to efficiently accomplish their goals with that media."
This semester she taught a Human-Computer Interaction (HCI) course and is excited to continue introducing computer science students to the field. The course examines how to create effective user interfaces, and her research expands on these topics to investigate interaction techniques for augmented and virtual reality and, more broadly, what human interaction with computers will look like in the future.