Computer scientists at The University of Texas at Austin say the day is coming when computers will automatically produce short video digests of a day in our lives, a kind of video journal.
Kristen Grauman, her postdoctoral researcher Zheng Lu, and doctoral student Yong Jae Lee presented their technique, called "story-driven" video summarization, at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) this summer.
The method was developed largely because wearable camera technologies such as Google Glass and Looxcie generate immense amounts of video that need summarizing.
Grauman, an associate professor of computer science in the College of Natural Sciences, said, "The amount of what we call 'egocentric' video, which is video that is shot from the perspective of a person who is moving around, is about to explode."
She explained, "We're going to need better methods for summarizing and sifting through this data."
Grauman and her colleagues’ method automatically analyzes the recorded video and then creates a shorter "story" from it.
Military analysts could use the technology to quickly scan footage from a soldier's camera, investigators could sift through cellphone video more easily in the wake of a disaster, and senior citizens could review video summaries of their days to compensate for memory loss, Grauman explained.
Grauman said, "There's research showing that if people suffering from memory loss wear a camera that takes a snapshot once a minute, and then they review those images at the end of the day, it can help their recall."
The team of researchers combined several cues to weight individual frames by their importance, then built the summary from the frames that mattered most.
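The paper spells out the exact cues and the optimization behind them; as a rough, hypothetical sketch of the general idea, a frame-weighting summarizer might combine per-cue scores into a single importance weight and then pick important frames that are spread out in time. Every function name, cue, and weight below is an assumption for illustration, not the team's actual algorithm.

```python
# Illustrative sketch only, not the authors' method. Assumes each frame
# already carries per-cue scores (e.g., faces, salient objects, motion);
# the cues and weights here are made up for the example.
import random

CUE_WEIGHTS = {"faces": 0.5, "objects": 0.3, "motion": 0.2}

def frame_weight(cues):
    """Combine hypothetical per-cue scores into one importance weight."""
    return sum(w * cues.get(name, 0.0) for name, w in CUE_WEIGHTS.items())

def summarize(frames, k=10, min_gap=30):
    """Pick up to k high-weight frames at least min_gap frames apart.

    frames: list of (frame_index, cue_score_dict) for the whole video.
    Returns the chosen frame indices in chronological order.
    """
    ranked = sorted(frames, key=lambda f: frame_weight(f[1]), reverse=True)
    chosen = []
    for idx, cues in ranked:
        if all(abs(idx - c) >= min_gap for c in chosen):
            chosen.append(idx)
        if len(chosen) == k:
            break
    return sorted(chosen)

# Demo on 1,000 frames with random cue scores.
random.seed(0)
video = [(i, {c: random.random() for c in CUE_WEIGHTS}) for i in range(1000)]
print(summarize(video))
```

The temporal-gap constraint is a stand-in for whatever the real method does to keep the summary from clustering on one moment; the point is only that weighting and selection are separate steps.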
The team then ran human “taste tests” to see how their method performed against previous methods developed for the same purpose. Between 75 and 90 percent of the people who evaluated the summaries judged Grauman and her team’s method superior.
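For a sense of how such a figure arises: in a blind pairwise comparison, each evaluator sees two summaries of the same video without knowing which method produced which, and the preference rate is simply the fraction of judgments favoring one method. A minimal sketch with made-up judgments:

```python
# Hypothetical blind pairwise judgments: which summary each evaluator
# preferred for one video ("new" = the story-driven method).
judgments = ["new", "new", "baseline", "new", "new",
             "new", "baseline", "new", "new", "new"]

rate = judgments.count("new") / len(judgments)
print(f"Evaluators preferred the new method {rate:.0%} of the time")  # 80%
```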