Augmented Virtual Environments: Virtual Campus

This project addresses several research topics related to virtual reality, augmented reality, geometric modeling, motion tracking, and visualization. The primary objective of this research effort is to devise new and innovative approaches and technologies that improve the extraction, fusion, interpretation, and visualization of information from multiple sensor sources and datasets, suitable for a variety of time-critical applications.

Scene modeling: While current sensing and modeling technologies offer many methods suitable for modeling a single object or a small number of objects, detailed wide-area models remain costly and difficult to produce. This problem is the main impetus for our work. Our long-term goal is to develop the scientific knowledge needed to create detailed 3D models of large urban areas that include the internal and external features of buildings, surface streets, subsurface infrastructure, and vegetation, as well as moving people and vehicles, and data from multiple sensors. This research will benefit many applications, including urban planning, geoinformation systems, virtual reality, and military operations.
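As a toy illustration of the simplest kind of urban geometry creation, the sketch below extrudes a 2D building footprint into a prism mesh. It is only illustrative; the function name, interface, and the fan-triangulation assumption (convex footprints) are ours, not part of this project's modeling pipeline.

import numpy as np

def extrude_footprint(footprint_xy, height):
    """Extrude a 2D building footprint (counter-clockwise polygon) into a
    simple prism mesh: returns (vertices, triangular faces)."""
    footprint_xy = np.asarray(footprint_xy, dtype=float)
    n = len(footprint_xy)

    # Bottom ring at z = 0, top ring at z = height.
    bottom = np.column_stack([footprint_xy, np.zeros(n)])
    top = np.column_stack([footprint_xy, np.full(n, height)])
    vertices = np.vstack([bottom, top])

    faces = []
    # Side walls: two triangles per footprint edge.
    for i in range(n):
        j = (i + 1) % n
        faces.append((i, j, n + j))
        faces.append((i, n + j, n + i))
    # Roof: fan triangulation (valid for convex footprints).
    for i in range(1, n - 1):
        faces.append((n, n + i, n + i + 1))
    return vertices, np.array(faces)

# Example: a 20 m x 10 m rectangular building, 15 m tall.
verts, tris = extrude_footprint([(0, 0), (20, 0), (20, 10), (0, 10)], 15.0)
print(verts.shape, tris.shape)  # (8, 3) (10, 3)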

Motion tracking: This research front develops innovative multi-sensor fusion technology, primarily for outdoor motion tracking. The aim is to produce technology that is general purpose, operates in unprepared environments, and is feasible with current or near-term hardware. In particular, we develop techniques that address the system requirements we consider essential for a hybrid tracking system. This research effort is complementary to ongoing work on NRL's BARS system. Other areas of potential use include virtual reality, HCI, and robot navigation.
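For readers unfamiliar with multi-sensor fusion, the sketch below shows one generic technique: a complementary filter that blends a drifting gyroscope with a noisy but drift-free absolute heading sensor (e.g., a compass or GPS course). It is a textbook illustration under assumed sensor models, not the hybrid tracking algorithm developed in this project or in BARS; the function name and parameters are hypothetical.

import math

def fuse_heading(prev_heading, gyro_rate, abs_heading, dt, alpha=0.98):
    """One step of a complementary filter for heading (yaw), in radians.

    prev_heading : previous fused estimate
    gyro_rate    : angular rate from a gyroscope (rad/s), low noise but drifts
    abs_heading  : absolute heading from a compass or GPS course, noisy but drift-free
    alpha        : weight on the integrated gyro path (close to 1.0)
    """
    # Dead-reckon with the gyro, then pull gently toward the absolute sensor.
    predicted = prev_heading + gyro_rate * dt
    # Blend on the circle so wrap-around at +/- pi is handled correctly.
    error = math.atan2(math.sin(abs_heading - predicted),
                       math.cos(abs_heading - predicted))
    fused = predicted + (1.0 - alpha) * error
    # Normalize back to (-pi, pi].
    return math.atan2(math.sin(fused), math.cos(fused))

# Example: a 100 Hz gyro with a small bias, corrected by an absolute heading sensor.
heading = 0.0
for step in range(200):
    true_heading = 0.5 * (step + 1) * 0.01   # ground truth turns at 0.5 rad/s
    heading = fuse_heading(heading, gyro_rate=0.51, abs_heading=true_heading, dt=0.01)
print(round(heading, 3))  # stays close to the true final heading of 1.0 rad despite the gyro bias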

Data fusion and visualization: This research front focuses on the dynamic visualization of multiple image, video, and data streams from different sources and platforms. The core problem is giving people a way to digest and understand all of this information easily, with a sense of its spatial, temporal, and truthful nature. The results of this research will benefit many applications in commerce, law enforcement, security surveillance, environmental monitoring, traffic measurement, and military operations. We are developing an integrated prototype system that combines these research efforts to illustrate the utility and benefits of our technologies.
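One basic ingredient of such a system is bookkeeping that keeps observations from many sensors indexed by time and pose so they can be replayed together. The sketch below is a minimal, hypothetical illustration of that idea in Python; the class names and fields are assumptions, not the prototype's actual data structures.

from bisect import insort, bisect_left, bisect_right
from dataclasses import dataclass, field

@dataclass(order=True)
class StampedFrame:
    """A single sensor observation tagged with when and where it was taken."""
    timestamp: float                      # seconds since a common epoch
    sensor_id: str = field(compare=False)
    pose: tuple = field(compare=False)    # (x, y, z, yaw, pitch, roll) of the sensor
    frame: object = field(compare=False)  # image, video frame, or data record

class StreamRegistry:
    """Keeps frames from many sensors in one time-ordered list so a viewer can
    ask: what did every sensor see between t0 and t1?"""
    def __init__(self):
        self._frames = []

    def add(self, frame: StampedFrame):
        insort(self._frames, frame)       # keep sorted by timestamp

    def window(self, t0: float, t1: float):
        lo = bisect_left(self._frames, StampedFrame(t0, "", (), None))
        hi = bisect_right(self._frames, StampedFrame(t1, "", (), None))
        return self._frames[lo:hi]

# Example: two cameras reporting at different times.
reg = StreamRegistry()
reg.add(StampedFrame(10.0, "cam_north", (0, 0, 10, 0, 0, 0), "frame_a"))
reg.add(StampedFrame(10.5, "cam_south", (50, 0, 12, 3.1, 0, 0), "frame_b"))
print([f.sensor_id for f in reg.window(9.9, 10.4)])  # ['cam_north']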

An Augmented Virtual Environment (AVE) provides a means of fusing dynamic imagery on a 3D model substrate. The AVE approach allows users to visualize and comprehend multiple streams of imagery (video and still images) in a four-dimensional context. The addition of projected live or recorded imagery to an otherwise static virtual environment creates an AVE. Our methods are described in the context of a prototype system for visualizing activities on the USC campus.
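Projecting camera imagery onto model geometry is commonly done with projective texture mapping. The sketch below shows the core coordinate computation in NumPy under assumed OpenGL-style matrix conventions; it is a generic illustration, not the AVE system's rendering code, and a real system also needs an occlusion (depth) test.

import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (an assumed convention)."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = 2.0 * far * near / (near - far)
    m[3, 2] = -1.0
    return m

def project_to_texture(points_world, view, proj):
    """Map 3D model points into the [0,1]^2 image space of a video camera,
    so the camera's frame can be applied to the model as a projected texture."""
    pts = np.hstack([points_world, np.ones((len(points_world), 1))])
    clip = pts @ (proj @ view).T            # world -> clip space (row vectors)
    w = clip[:, 3:4]
    ndc = clip[:, :3] / w                   # perspective divide
    uv = ndc[:, :2] * 0.5 + 0.5             # [-1, 1] -> [0, 1] texture coordinates
    # Points behind the camera or outside its frustum receive no imagery.
    visible = (w[:, 0] > 0) & np.all(np.abs(ndc) <= 1.0, axis=1)
    return uv, visible

# Example: a camera at the origin looking down -z, and one model point 5 m ahead.
view = np.eye(4)
proj = perspective(60.0, 4.0 / 3.0, 0.1, 100.0)
uv, vis = project_to_texture(np.array([[0.0, 0.0, -5.0]]), view, proj)
print(uv, vis)  # the point maps to the image center (0.5, 0.5) and is visible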

White Paper
NSF Report (Year 8)