Semantic Analysis of Large Multimedia Data Sets
This research addresses interactivity and scalability in the automatic analysis of large video collections and of multiple video streams processed continuously. The work develops mechanisms for real-time, interactive video search over user-defined concepts using intelligent, active processing clusters, together with methods for high-accuracy semantic video analysis of large amounts of weakly labeled video on distributed computing resources. The methods leverage modern cluster file systems in which data is stored on the local disks of the compute servers and data locations are exposed to the runtime system, allowing compute and storage to be co-located.
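The co-location idea can be sketched as a simple locality-aware scheduler: given a map of which node stores each data block, tasks are preferentially placed on the node that already holds their input. This is only an illustrative sketch; the function and node names below are hypothetical, not the project's actual runtime API.

```python
# Minimal sketch of locality-aware task placement (illustrative names only,
# not the project's actual runtime). Each task reads one data block; we
# prefer scheduling it on the node that stores that block locally.

def assign_tasks(block_locations, node_slots):
    """block_locations: {block_id: node}, node_slots: {node: free slots}.
    Places each task on its data's home node when a slot is free there,
    otherwise on the least-loaded node (remote read)."""
    placement = {}
    for block, home in block_locations.items():
        if node_slots.get(home, 0) > 0:   # local read: compute meets storage
            placement[block] = home
            node_slots[home] -= 1
        else:                             # fall back to a remote read
            node = max(node_slots, key=node_slots.get)
            placement[block] = node
            node_slots[node] -= 1
    return placement

locations = {"b1": "node0", "b2": "node0", "b3": "node1"}
slots = {"node0": 1, "node1": 2}
print(assign_tasks(locations, slots))
```

Here "b2" cannot run on its home node (no free slot), so it falls back to a remote read on "node1"; real cluster schedulers such as Hadoop's apply the same preference with more elaborate tie-breaking.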
The specific research objectives are to enable this co-location of compute and storage through a runtime for parallel stream processing that parallelizes data-processing and machine-learning tasks across a cluster of multi-core compute nodes. The project also extends distributed versions of graphical-model algorithms to speed up both the basic low-level signal-processing steps and the semantic analysis based on weakly labeled video data as currently available on the web. The main outcome is to demonstrate vastly accelerated, end-to-end processing of parallel live video streams into a retrieval database with immediate search capability, with cluster resources also accessible during interactive search. The goal of this work is to develop principles for interactive applications driven by real-time processing of high-rate streaming data. The processing architecture and modules developed here enable computer vision and multimedia developers to efficiently apply and test their own methods within the framework.
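The frame-level parallelism described above can be sketched as fanning per-frame analysis out to a pool of workers. A thread pool is used here only to keep the demo self-contained; the actual runtime distributes work across processes and cluster nodes, and `extract_feature` is a toy stand-in, not one of the project's analysis modules.

```python
# Sketch of parallel per-frame processing (thread pool for a self-contained
# demo; the real system spreads work over multi-core nodes in a cluster).
from concurrent.futures import ThreadPoolExecutor

def extract_feature(frame):
    # Toy stand-in for a low-level signal-processing step on one frame.
    return sum(frame) / len(frame)

def process_stream(frames, workers=4):
    # Map the per-frame computation over the stream in parallel,
    # preserving frame order in the results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extract_feature, frames))

frames = [[i, i + 1, i + 2] for i in range(4)]
print(process_stream(frames))
```

Because `map` preserves input order, downstream indexing into the retrieval database stays aligned with the original frame sequence even though frames finish out of order.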
Work shown on this web site is supported in part by NSF Grant No. 0917072.
Contact Alex Hauptmann for more information.