Georgia Tech

Robotics and Intelligent Machines Center (RIM) Seminar - Kristen Grauman

Event Details


  • Wednesday, March 6, 2013 12:00 pm - 1:00 pm
Location: Marcus Nanotechnology Building, Room 1116
Phone: (404) 385-3300

For More Information Contact

Josie Giles
RIM Communications Officer

Unless otherwise noted, all seminars are held in room 1116 in the Marcus Nanotechnology Building from 12-1 p.m. Seminars are open to the public.

Kristen Grauman, associate professor of computer science at UT-Austin, presents "Visual Search and Summarization" as part of the RIM Seminar Series.

Widespread visual sensors and unprecedented connectivity have left us awash in visual data: online photo collections, home videos, news footage, medical images, and surveillance feeds. How can we efficiently browse image and video collections based on semantically meaningful criteria? How can we bring order to the data, beyond manually defined keyword tags? We are exploring these questions in our recent work on interactive visual search and summarization.

I will first present a novel form of interactive feedback for visual search, in which a user helps pinpoint the content of interest by making visual comparisons between his envisioned target and reference images. The approach relies on a powerful mid-level representation of interpretable relative attributes to connect the user's descriptions to the system's internal features. Whereas traditional feedback limits input to coarse binary labels, the proposed "WhittleSearch" lets a user state precisely what about an image is relevant, leading to more rapid convergence to the desired content.

Turning to issues in video browsing, I will then present our work on automatic summarization of egocentric videos. Given a long video captured with a wearable camera, our method produces a short storyboard summary. Whereas existing summarization methods define sampling-based objectives (e.g., to maximize diversity in the output summary), we take a "story-driven" approach that predicts the high-level importance of objects and their influence between subevents. We show this leads to substantially more accurate summaries, allowing a viewer to quickly understand the gist of a long video. This work is being conducted with Adriana Kovashka, Yong Jae Lee, Devi Parikh, and Zheng Lu.
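The comparative-feedback loop described above can be illustrated with a minimal sketch: each user statement ("my target is more X than this reference image") becomes a constraint on a relative-attribute score, and the candidate pool is whittled down to images that satisfy it. The attribute name, image identifiers, and scores below are hypothetical placeholders, not values from the talk or the actual WhittleSearch system; in practice the scores would come from a learned ranking function over relative attributes.

```python
# Illustrative sketch of the "whittling" idea: comparative feedback on a
# relative attribute prunes the candidate pool. All names/scores are made up.

def whittle(candidates, scores, attribute, reference, direction):
    """Keep candidates whose attribute score is above ("more") or below
    ("less") the reference image's score, per the user's comparison."""
    ref_score = scores[reference][attribute]
    if direction == "more":
        return [c for c in candidates if scores[c][attribute] > ref_score]
    return [c for c in candidates if scores[c][attribute] < ref_score]

# Hypothetical relative-attribute scores for four images.
scores = {
    "img1": {"formal": 0.2}, "img2": {"formal": 0.5},
    "img3": {"formal": 0.8}, "img4": {"formal": 0.9},
}
pool = ["img1", "img2", "img3", "img4"]
# User: "my target is more formal than img2"
pool = whittle(pool, scores, "formal", "img2", "more")
# User: "my target is less formal than img4"
pool = whittle(pool, scores, "formal", "img4", "less")
print(pool)  # ['img3']
```

Each round of feedback intersects a new constraint with the surviving set, which is why comparative statements converge on the target faster than coarse relevant/irrelevant labels.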
