Degree Type

Dissertation

Date of Award

1996

Degree Name

Doctor of Philosophy

Department

Electrical and Computer Engineering

First Advisor

John F. Doherty

Abstract

Data compression is arguably the single most important factor in every information service being envisioned and proposed by engineers. The effectiveness of such services depends on the achievable compression of real-time speech and video signals. Several approaches to signal encoding have been proposed and realized, each with its unique advantages and costs. Large compression ratios can be achieved only through lossy source encoding methods; one such method is Vector Quantization (VQ).

The lossy nature of such encoders implies that the encoding process is not invertible. At low bit rates, lossy compression with conventional decoders (realized as a simple 'inverse' of the encoder) results in severe subjective and objective distortion. The thrust of research is therefore to build 'intelligent' decoders that use a priori knowledge of human visual properties during decoding. In this scenario, signal decoding becomes a recovery problem based on known a priori information. This is a study of the use of image recovery methods in lossy image encoding; specifically, of the problems and costs associated with low-bit-rate coding of photographic grayscale images and of recovery approaches that alleviate those problems.

This study investigates applying the theory of Convex Projections (CP) to problems in image recovery. The target lossy compression method is the standard Vector Quantization approach. In particular, the study examines an implementation of VQ with single-codebook encoding and multiple-codebook decoding. The method uses a CP-based algorithm that, during decoding, iteratively projects a coarsely encoded image onto one or more better codebooks subject to certain a priori constraints. The objective is to make encoding independent of the edge regions of an image, which drastically reduces the number of edge-vector representations at the encoder and hence yields fast searches. Such an approach should also perform better on images outside the training set, since encoding is less dependent on edges.
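
Decoder Sketch (illustrative)

The abstract describes the decoder only at a high level, so the following Python sketch illustrates the general idea: image blocks are VQ-encoded against a small coarse codebook, and the decoder then alternates projection onto a larger codebook with a simple convex range/smoothness constraint, in the spirit of projections onto convex sets. All function names, codebook sizes, and the smoothness constraint here are illustrative assumptions, not the dissertation's actual algorithm or constraint sets.

import numpy as np

def blockify(img, b=4):
    """Split a grayscale image into non-overlapping b x b blocks (as row vectors)."""
    h, w = img.shape
    return img.reshape(h // b, b, w // b, b).swapaxes(1, 2).reshape(-1, b * b)

def unblockify(blocks, shape, b=4):
    """Inverse of blockify: reassemble row-vector blocks into an image."""
    h, w = shape
    return blocks.reshape(h // b, w // b, b, b).swapaxes(1, 2).reshape(h, w)

def vq_encode(blocks, codebook):
    """Nearest-codeword search: one index per block (the lossy, non-invertible step)."""
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def pocs_decode(indices, coarse_cb, fine_cb, shape, b=4, iters=10):
    """Iterative decoding: start from the coarse reconstruction, then alternate
    (1) projection onto the finer decoder-side codebook and (2) a convex
    range/smoothness constraint standing in for a priori visual knowledge."""
    img = unblockify(coarse_cb[indices], shape, b)
    for _ in range(iters):
        # Projection 1: snap each block to its nearest codeword in the finer codebook.
        blocks = blockify(img, b)
        img = unblockify(fine_cb[vq_encode(blocks, fine_cb)], shape, b)
        # Projection 2: mild neighborhood averaging plus clipping to [0, 255]
        # (a hypothetical placeholder for the dissertation's edge constraints).
        pad = np.pad(img, 1, mode="edge")
        img = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:] + img) / 5.0
        img = np.clip(img, 0.0, 255.0)
    return img

# Toy usage with random data (in practice both codebooks would be trained,
# e.g. with the LBG algorithm, on a training set of images).
rng = np.random.default_rng(0)
image = rng.uniform(0, 255, size=(64, 64))
coarse = rng.uniform(0, 255, size=(16, 16))   # small codebook: fast encoder search
fine = rng.uniform(0, 255, size=(256, 16))    # larger codebook used only at the decoder
idx = vq_encode(blockify(image), coarse)      # transmitted indices (4 bits/block here)
recon = pocs_decode(idx, coarse, fine, image.shape)

In the dissertation's setting, the decoder-side constraints would encode a priori knowledge of human visual properties (for example, edge structure); the averaging step above is only a stand-in for such constraint sets.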

DOI

https://doi.org/10.31274/rtd-180813-10232

Publisher

Digital Repository @ Iowa State University, http://lib.dr.iastate.edu/

Copyright Owner

Ajai Narayan

Language

en

ProQuest ID

AAI9620982

File Format

application/pdf

Number of Pages

93 pages
