In this paper, we investigate a method to find the cell loss probability versus buffer size for different types of video traffic in a statistical multiplexer.

Storing high-resolution screened images on disk or transmitting them to a remote imagesetter is an expensive and time-consuming task, which makes lossless compression desirable. Since a screened photographic image may be viewed as a rotated rectangular grid of large half-tone dots, each made up of a number of microdots, we suspect that compression results obtained on the CCITT test images might not apply to high-resolution screened images, and that the default parameters of many existing compression algorithms may not be optimal. In this paper we compare, on high-resolution screened images, the performance of lossless one-dimensional general-purpose byte-oriented statistical and dictionary-based coders with that of lossless coders designed for the compression of two-dimensional bilevel images. The general-purpose coders are GZIP (LZ77, by GNU), TIFF LZW, and STAT (an optimized PPM compressor by Bellard). The non-adaptive two-dimensional black-and-white coders are TIFF Group 3 and TIFF Group 4 (formerly published fax standards by CCITT). The adaptive two-dimensional coders are BILEVEL coding (by Witten et al.) and JBIG (the latest fax standard). First we compared the methods without tuning their parameters. We found that in both compression ratio (CR) and speed, JBIG (CR 7.3) was best, followed by STAT (CR 6.3) and BILEVEL coding (CR 6.0). Some results are remarkable: STAT works very well despite its one-dimensional approach; JBIG beats BILEVEL coding on high-resolution images, though BILEVEL coding is better on the CCITT images; and TIFF Group 4 (CR 3.2) and TIFF Group 3 (CR 2.7) cannot compete with any of these three methods. Next, we fine-tuned the parameters of JBIG and BILEVEL coding, which increased their compression ratios to 8.0 and 6.7 respectively.
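As a rough illustration of the kind of measurement reported above, the sketch below builds a synthetic bilevel "screened" image (a regular, unrotated grid of round halftone dots, a simplified stand-in for real screened data) and measures the compression ratio achieved by a general-purpose LZ77 coder (zlib, the library behind GZIP). The image size, cell size, and dot radius are invented for the demo; the CR it prints is not one of the figures quoted above.

```python
import zlib

# Synthetic bilevel "screened" image: a regular grid of round halftone
# dots. Real screened images use a rotated grid at much higher
# resolution; this is only a simplified stand-in for the demo.
W = H = 512          # image size in pixels (assumption for the demo)
CELL = 8             # halftone cell size in pixels
RADIUS = 3           # dot radius -> controls the apparent gray level

def screened_image(w, h, cell, radius):
    pix = bytearray(w * h)      # one byte per pixel: 0 = white, 1 = black
    for y in range(h):
        for x in range(w):
            # distance from the centre of the current halftone cell
            dx = (x % cell) - cell / 2
            dy = (y % cell) - cell / 2
            if dx * dx + dy * dy <= radius * radius:
                pix[y * w + x] = 1
    return bytes(pix)

def pack_bits(pix):
    """Pack 8 pixels per byte, as a bilevel file format would store them."""
    out = bytearray()
    for i in range(0, len(pix), 8):
        b = 0
        for bit in pix[i:i + 8]:
            b = (b << 1) | bit
        out.append(b)
    return bytes(out)

raw = pack_bits(screened_image(W, H, CELL, RADIUS))
compressed = zlib.compress(raw, 9)
print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes, "
      f"CR: {len(raw) / len(compressed):.1f}")
```

A one-dimensional byte-oriented coder like this sees the periodic dot pattern only as repeating byte strings, which is exactly why the paper asks whether two-dimensional, context-modelling coders such as JBIG do better on such data.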
These experiments are motivated by the data itself: screening of color-separated continuous-tone photographic images produces large high-resolution black-and-white images (up to 5000 dpi).

This paper describes algorithms that were developed for a stereoscopic videoconferencing system with viewpoint adaptation. The system identifies foreground and background regions and applies disparity estimation to the foreground object, namely the person sitting in front of a stereoscopic camera system with a rather large baseline. A hierarchical block-matching algorithm is employed for this purpose, which takes into account the positions of high-variance feature points and of the object/background borders. Using the disparity estimator's output, it is possible to generate arbitrary intermediate views from the left- and right-view images. We have developed an object-based interpolation algorithm for this task, which produces high-quality results. It takes into account the fact that a person's face has a more or less convex surface: interpolation weights are derived both from the position of the intermediate view and from the position of a specific point within the face. The algorithms have been designed for a real-time videoconferencing system with a telepresence illusion, so an important constraint during development was hardware feasibility, while sufficient quality of the intermediate-view images still had to be retained.
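To make the disparity-estimation and view-interpolation pipeline concrete, the sketch below shows a minimal, single-level version of the idea: exhaustive block matching with a sum-of-absolute-differences (SAD) criterion on a synthetic stereo pair, followed by synthesis of an intermediate view by shifting each pixel by a fraction of its block's disparity. The paper's estimator is hierarchical and object-based with feature-point and border handling; none of that is reproduced here, and all sizes, names, and the fabricated stereo pair are assumptions of the demo.

```python
# Synthetic stereo pair: the right view is the left view shifted
# horizontally by a known amount, so the correct disparity is known.
W, H = 64, 16
BLOCK = 8          # block size for matching
MAX_D = 8          # disparity search range
TRUE_D = 4         # horizontal shift used to fabricate the right view

def pixel(x, y):
    # arbitrary textured pattern so that blocks are distinguishable
    return (x * 7 + y * 13) % 256

left  = [[pixel(x, y) for x in range(W)] for y in range(H)]
right = [[pixel(x + TRUE_D, y) for x in range(W)] for y in range(H)]

def sad(bx, by, d):
    """Sum of absolute differences between the left block at (bx, by)
    and the candidate right block displaced by disparity d."""
    s = 0
    for y in range(by, by + BLOCK):
        for x in range(bx, bx + BLOCK):
            s += abs(left[y][x] - right[y][x - d])
    return s

def block_disparity(bx, by):
    # exhaustive search over the disparity range (a hierarchical
    # estimator would refine this coarse-to-fine instead)
    return min(range(0, MAX_D + 1), key=lambda d: sad(bx, by, d))

# one disparity estimate per block (blocks near the left edge are
# skipped so every candidate displacement stays inside the image)
disp = {(bx, by): block_disparity(bx, by)
        for by in range(0, H, BLOCK)
        for bx in range(MAX_D, W - BLOCK, BLOCK)}

def intermediate(alpha):
    """Intermediate view at position alpha in [0, 1] between the left
    view (alpha = 0) and the right view (alpha = 1): each left pixel is
    shifted by alpha times the disparity of its block."""
    view = [[0] * W for _ in range(H)]
    for (bx, by), d in disp.items():
        shift = round(alpha * d)
        for y in range(by, by + BLOCK):
            for x in range(bx, bx + BLOCK):
                view[y][x - shift] = left[y][x]
    return view

mid_view = intermediate(0.5)   # view halfway between the two cameras
```

On this fabricated pair every block recovers the true disparity, and `intermediate(1.0)` reproduces the right view wherever blocks were matched; a real system additionally has to handle occlusions, the object/background border, and the convex shape of the face, as described above.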