Department of Computer Science
University of Southern California
Project Award Information
We intend to complete and integrate three major research projects that we have been pursuing successfully for the past decade: 1) the design and implementation of the second phase of Yima, a large-scale, high-performance, real-time media server; 2) the analysis and enhancement of Super-Streaming, a technique that utilizes idle system resources to improve media delivery performance; and 3) the design and implementation of GMeN (Global Media Network), a distributed CM server architecture for cost-efficient CM delivery to geographically distributed clients.
This architecture impacts both server design and online multimedia content. Furthermore, realizing such an infrastructure enables large-scale applications such as video-on-demand, news-on-demand, distance learning, and scientific exploration and visualization. These applications, in turn, will promote teaching, training, and learning, as well as enhance scientific and technological understanding.
Publications and Products
The following is a list of publications that appeared after this grant was received; the underlying research was largely initiated before the grant was approved. This ITR grant is acknowledged in each of these publications. Additional research manuscripts have been submitted for publication or are under preparation.
We achieve efficient, online disk scalability through a technique we developed in conjunction with Prof. Ashish Goel called Scaling Disks for Data Arranged Randomly, or SCADDAR. SCADDAR allows data blocks that are placed randomly across several disks to be efficiently redistributed across the disks after a disk addition or removal operation. For efficient redistribution, block movement must be minimized. Random data block placement is determined by a reproducible pseudo-random sequence. With this technique, we do not need to generate a new random sequence (which would cause every data block to move) upon disk additions or removals. Instead, block movement is minimized, allowing the Yima server to stay online without a temporary shutdown.
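The idea of deriving each block's location from a reproducible per-block random sequence, and moving only the minimum fraction of blocks on a scaling operation, can be illustrated with the following simplified Python sketch. This is an illustration of the general principle, not the published SCADDAR remapping functions; the function name and interface are hypothetical.

```python
import random

def place(block_id: int, disk_counts: list) -> int:
    """Return the disk holding block_id after a series of scaling
    operations. disk_counts lists the total disk count after each
    operation, e.g. [4, 6] means: start with 4 disks, then add 2."""
    rng = random.Random(block_id)          # reproducible per-block sequence
    disk = rng.randrange(disk_counts[0])   # initial random placement
    for prev, curr in zip(disk_counts, disk_counts[1:]):
        if curr > prev:
            # Disks added: a block moves only with probability
            # (curr - prev) / curr, and only onto one of the new disks,
            # so placement stays uniform while movement is minimized.
            if rng.random() < (curr - prev) / curr:
                disk = prev + rng.randrange(curr - prev)
        else:
            # Disks removed: only blocks on removed disks are
            # redistributed, uniformly over the surviving disks.
            if disk >= curr:
                disk = rng.randrange(curr)
    return disk
```

Because the sequence is reproducible, any server node can recompute a block's current disk from its id alone; after adding two disks to four, only about one third of the blocks move, compared with roughly 100% if a fresh random placement were generated.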
We have conducted a number of end-to-end video streaming playback tests between the Yima server in our lab and a client approximately 40 kilometers away. The client was linked to the Internet through an ADSL connection. The end-to-end raw bandwidth achieved was about 1 Mb/s. Tests were performed using an MPEG-4 encoded movie with a frame size of 720 x 576 pixels at 25 frames per second (fps). The stream required an average of 105 KB/s (840 Kb/s) of bandwidth for the video and audio layers combined.
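The reported rates can be checked with a line of arithmetic: the combined audio and video stream at 105 KB/s corresponds to 840 Kb/s, which fits within the roughly 1 Mb/s raw bandwidth measured over the ADSL link.

```python
# Sanity-check the reported stream rate against the measured link capacity.
stream_kB_per_s = 105                    # measured average, kilobytes/s
stream_kb_per_s = stream_kB_per_s * 8    # kilobits/s
link_kb_per_s = 1000                     # ~1 Mb/s raw end-to-end bandwidth
print(stream_kb_per_s)                   # 840
print(stream_kb_per_s < link_kb_per_s)   # True
```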
The second phase of Yima involves a complete overhaul of the architectural design, incorporating several new key concepts: a true distributed server architecture, the development of our own MPEG-4 hinting process, and online scalability and fault-tolerance capabilities.
Three research manuscripts on this project have been submitted or are under preparation for publication. We will include them in next year's report. In addition, we are in the process of adding a live demonstration of MPEG-4 video playback through our server to our website. Currently, our client runs on the Linux operating system and is not trivial to disseminate through the Web. The project website includes pictures of Yima demonstrations given to various visitors.
A growing number of immersive and multimedia applications store, maintain, and retrieve large volumes of real-time data that must be available online. We denote these data types collectively as "continuous media", or CM for short. Continuous media is distinguished from traditional textual and record-based media in two ways. First, the retrieval and display of continuous media are subject to real-time constraints; if these constraints are not satisfied, the display may suffer from disruptions and delays termed hiccups. Second, continuous media objects are large. A two-hour MPEG-2 video with a 4-megabit-per-second (Mb/s) bandwidth requirement is 3.6 gigabytes (GB) in size. Popular examples of CM are video and audio objects, while less familiar examples are haptic, avatar, and application coordination data. We focus on those CM objects of very high quality, requiring bandwidths on the order of megabits per second.
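The size figure above follows directly from the duration and bandwidth of the example object, as the following arithmetic shows:

```python
# Size of the two-hour, 4 Mb/s MPEG-2 example object.
duration_s = 2 * 60 * 60             # two hours in seconds
rate_bits_per_s = 4 * 10**6          # 4 megabits per second
size_bytes = duration_s * rate_bits_per_s // 8
print(size_bytes / 10**9)            # 3.6 (gigabytes)
```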
S. Ghandeharizadeh and C. Shahabi, "Distributed Multimedia Systems," in Wiley Encyclopedia of Electrical and Electronics Engineering, J. G. Webster, Ed., Vol. 5. John Wiley & Sons, Inc., 1999.