Vision naturally occurs in the context of voluntary information gathering movements involving the eyes, head, and hand. However, we have only limited understanding of the consequences of eye and head movements for vision and visuo-motor coordination. The technology to look at performance in more natural circumstances now exists, and I have developed a human sensory-motor lab for measuring unconstrained performance in both real and virtual environments. My objective is to understand the demands placed on vision and motor systems by natural behavior and the nature of the representations that are required for visually guided tasks.
PhD, University of California, San Diego
Affiliation: Department of Psychology, Center for Perceptual Systems, Institute for Neuroscience
hayhoe AT utexas.edu
(512) 475-9338
SEA 4.234
CV; Webpage; Google Scholar
My main research interest is in computational theories of the brain, with an emphasis on human vision and motor control. In 1985 Chris Brown and I led a team that designed and built a high-speed binocular camera control system capable of simulating human eye movements. The system was mounted on a robotic arm that allowed it to move at one meter per second in a two-meter-radius workspace. This system has led to an increased understanding of the role of behavior in vision, in particular that visual computations can be simpler when interacting in the 3D world. Currently I am interested in pursuing this research using high-DOF models of humans' natural behavior in virtual reality environments.
PhD, University of California, Irvine
Affiliation: Department of Computer Science; Center for Perceptual Systems
dana AT cs.utexas.edu
(512) 471-9750
GDC 3.510
CV; Webpage; Google Scholar
Matthew Tong (Ph.D., University of California, San Diego) joined Mary Hayhoe's lab in July 2012. His graduate studies focused on models of eye movements and salience during both free viewing and real-world tasks. At the Center for Perceptual Systems, his work has examined eye movements and sensorimotor behavior during tasks performed in virtual reality. He has been studying and modeling the roles that uncertainty and task reward play in determining where we look.
PhD, University of California, San Diego, 2015
Affiliation: Center for Perceptual Systems
mhtong AT utexas.edu
SEA 4.128G
ResearchGate
Jon’s research focuses primarily on the visual control of human walking, with an emphasis on the way that the biomechanics of bipedal gait shape the use of visual information during locomotion over real-world rough terrain. To this end, he has developed an apparatus that accurately records full-body motion capture and eye-tracking data from people walking outdoors over rocky terrain. Using these data, he hopes to explain how humans use eye movements to extract information from their environment in order to achieve stable and efficient locomotion over complex and difficult terrain.
PhD, Rensselaer Polytechnic Institute, 2014
Affiliation: Center for Perceptual Systems
matthis AT utexas.edu
SEA 4.130
ACADEMIA; ResearchGate
Oran joined the lab in May 2015. His graduate work focused on sound-source localization in the auditory system of the guinea pig, where he developed a novel model of fast neural coding and studied the effect of correlation on pooling information across neural populations. At present, he is helping develop a model of how gaze is directed in natural surroundings to extract visual information; to that end, he is examining the model's basic assumptions about cognitive structures. In the near future he plans to focus on the nature and effects of memory, reward, and learned priors on gaze behavior.
PhD, Ben-Gurion University, 2015
Affiliation: Center for Perceptual Systems
oran_zohar AT utexas.edu
SEA 4.128G
Lijia received her B.S. in Computer Science from Case Western Reserve University (CWRU). Her research area is dynamic motion planning for character animation. She loves painting, swimming, and playing the piano.
Affiliation: Department of Computer Science
lijialiu AT cs.utexas.edu
GDC 3.518A
LinkedIn
Ruohan Zhang joined the UTCS PhD program in 2014. His research goal is to design intelligent agents; he focuses on understanding the nature of intelligence and how to model intelligent behavior. He is also interested in applying biologically inspired models to robots and learning agents. His research draws on graphical models, deep learning, imitation learning, and reinforcement learning, which he applies to problems in both modeling human behavior and controlling robots.
Affiliation: Department of Computer Science
zharu AT utexas.edu
GDC 3.518B
Website
Sariel received her B.S. in Life Science and a Master of Veterinary Medicine from National Taiwan University. Since joining the lab in 2012, she has been working on a "virtual apartment" project investigating how spatial memory guides attention during visual search in realistic environments. By integrating virtual reality and eye-tracking techniques, her work also aims to compare visual attention and memory use across 2D and 3D settings. She loves music, cats, and yoga.
Affiliation: Institute for Neuroscience
sariel.cl.li AT utexas.edu
SEA 4.128D
LinkedIn