
MICHELLE EFFROS: RESEARCH

DATA COMPRESSION LAB


applications / research projects

interesting links

1999 NSF-Sponsored Workshop on Joint Source Channel Coding



The Data Compression Lab was founded by Professor Michelle Effros in 1994. Professor Effros' graduate research focused on the theory of universal lossy and lossless source coding, and universal source coding has remained an area of keen interest in the Data Compression Lab. Topics of investigation include both theoretical analyses and practical code designs. New techniques for lossy image coding demonstrate how universal coding tools can be incorporated into practical DCT- and wavelet-based image compression algorithms. Work in lossless coding includes studies of the Burrows-Wheeler transform and its performance in universal compression of finite-memory sources. We also extend many of the ideas developed for universal coding in point-to-point networks to achieve universal codes for multinode networks.
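The Burrows-Wheeler transform studied in the lossless coding work above can be sketched in a few lines. This is a naive rotation-sort illustration (O(n^2 log n)), not the suffix-array construction used in practical compressors, and the "$" sentinel is an assumption of this sketch:

```python
def bwt(s, eos="$"):
    """Burrows-Wheeler transform via sorted rotations (illustrative only)."""
    s = s + eos  # append a unique end-of-string sentinel so the transform is invertible
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))  # "annb$aa" -- repeated characters cluster together
```

The clustering of identical symbols in the output is what makes the transformed string easy to compress with simple move-to-front and entropy coding stages.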

The investigation of network source coding treats many questions beyond the universal code design problems mentioned above. Network source coding treats data compression in any environment comprising more than one transmitter, more than one receiver, or both. Examples of network source codes include multiresolution, multiple description, multiple access, side information, and broadcast codes. Recent results from the Data Compression Lab include theoretical analyses and code design algorithms for both lossless and lossy codes in a variety of network environments. Theoretical results include properties of optimal lossless and near-lossless codes for multiterminal systems, rate-distortion bounds for multiresolution source coding on stationary ergodic and stationary nonergodic source distributions, rate loss bounds for the multiresolution, multiple description, and multiple access source coding scenarios, and rate-distortion bounds for networks with feedback.
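The rate loss and rate-distortion bounds mentioned above are measured against classical point-to-point baselines. For example, the rate-distortion function of a memoryless Gaussian source under squared-error distortion, R(D) = max(0, (1/2) log2(sigma^2 / D)), can be computed directly; this is a standard textbook formula offered here for orientation, not a result specific to the Lab's work:

```python
import math

def gaussian_rd(variance, distortion):
    """Rate-distortion function of a memoryless Gaussian source under
    squared-error distortion: R(D) = max(0, 1/2 log2(variance / D)) bits/sample."""
    if distortion >= variance:
        return 0.0  # the source mean alone meets the distortion target at zero rate
    return 0.5 * math.log2(variance / distortion)

print(gaussian_rd(1.0, 0.25))  # 1.0 bit per sample
```

Network rate loss results bound how far a multiresolution or multiple description code must sit above such single-user curves.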

These theoretical investigations into network source code performance provide valuable insight for code design. Results for lossless code design include fast multiple access source codes designed using channel coding techniques. Early lossy coding results include globally optimal, polynomial-time algorithms for scalar code design in a limited collection of network coding environments. Unfortunately, globally optimal design of lossy vector codes for general network environments is NP-hard, so other avenues must be pursued in the search for good practical network source codes. One popular approach in the point-to-point source coding literature is to develop iterative descent techniques for locally optimal code design. Results in this area from the Data Compression Lab include a locally optimal design algorithm for vector codes in general network scenarios. Later work develops both additive and multiplicative algorithms for approximating the optimal performance using low-complexity codes. Additive approximation algorithms include extensions of entropy-constrained dithered quantizers to a variety of network coding environments; the results are low-complexity algorithms whose rate-distortion performance lies within an additive constant of the rate-distortion bound. Multiplicative algorithms guarantee performance within a factor (1 + epsilon) of the optimum for any epsilon > 0; recent work includes a new deterministic linear-time multiplicative approximation algorithm for a wide class of network source coding problems.
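Iterative descent design in the point-to-point setting is typified by the Lloyd algorithm, which alternates a nearest-codeword partition step with a centroid update and converges to a locally (not globally) optimal quantizer. The following is a minimal one-dimensional sketch of that classical iteration; the function name and the simple mean-centroid update are illustrative assumptions, not the Lab's network design algorithm:

```python
def lloyd(samples, codebook, iters=100):
    """Lloyd iteration for fixed-rate scalar quantizer design:
    alternate nearest-codeword partitioning and centroid updates."""
    for _ in range(iters):
        # Partition: assign each sample to its nearest codeword.
        cells = [[] for _ in codebook]
        for x in samples:
            j = min(range(len(codebook)), key=lambda k: (x - codebook[k]) ** 2)
            cells[j].append(x)
        # Centroid: replace each codeword by the mean of its cell
        # (keep the old codeword if the cell is empty).
        codebook = [sum(c) / len(c) if c else codebook[j]
                    for j, c in enumerate(cells)]
    return codebook

data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
print(sorted(lloyd(data, [0.0, 1.0])))  # codewords settle near the cluster means
```

Each step can only decrease the mean squared error, which is why the method descends to a local optimum; network versions must generalize both the partition rule and the centroid condition.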

Another area of current research in the Data Compression Lab is the new field of network coding. Network coding is an alternative to routing for communication over networks like the Internet. It generalizes routing by allowing the output of each node in the network to be an arbitrary function of the data arriving on its incoming links. This generalization can increase capacity and robustness while maintaining very low complexity. Research in this area is moving forward in active collaboration with researchers from MIT, UIUC, and Microsoft Labs. Results include centralized deterministic design algorithms, distributed randomized design algorithms, and analyses of the relationships of network coding to source and channel coding.
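The canonical illustration of network coding's capacity gain is the two-source, two-sink butterfly network: routing alone can carry only one of the two source bits across the shared bottleneck link per use, but forwarding the XOR of the two bits lets both sinks recover both bits. A minimal sketch (node and variable names are assumptions of this illustration):

```python
def butterfly(b1, b2):
    """Butterfly network with sources s1, s2 and sinks t1, t2.
    The bottleneck node forwards b1 XOR b2 instead of routing one bit."""
    coded = b1 ^ b2            # coded bit sent across the shared middle link
    t1 = (b1, coded ^ b1)      # t1 hears b1 on a direct edge, recovers b2 = coded ^ b1
    t2 = (coded ^ b2, b2)      # t2 hears b2 on a direct edge, recovers b1 = coded ^ b2
    return t1, t2

print(butterfly(1, 0))  # ((1, 0), (1, 0)): both sinks decode (b1, b2)
```

One XOR at the bottleneck thus doubles the multicast throughput relative to routing, with per-node complexity of a single bit operation.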


 

Affiliated Groups

Caltech Communication Group

Caltech Lee Center for Advanced Networking

Caltech Electrical Engineering Department

Caltech Center for the Mathematics of Information

Communications Related Websites

IEEE Information Theory Society Home Page

Data Compression Conference Home Page

Compression FAQ

Stanford Signal Compression and Classification Group

Information Theoretic Inequality Prover (ITIP)

MPEG Home Page

Other Links of Interest

The National Science Foundation

Caltech Home Page

Caltech Library System

