E-ISSN: 2583-2468

Research Brief


Applied Science and Engineering Journal for Advanced Research

2023 Volume 2 Number 2 March
Publisher: www.singhpublication.com

Image Compression Using Wavelets and Vector Quantization Techniques

Varsha Purohit A1*
DOI: 10.54741/asejar.2.2.3

1* Varsha Purohit A, Assistant Professor, Department of Mathematics and Statistics, Banasthali University, Jaipur, Rajasthan, India.

Digital image processing refers to the handling of an image by means of a processor. The main elements of a digital image processing system are image acquisition, image storage, image processing and display. Digital image compression has been the focus of a large amount of research in recent years. A survey and study of the image compression techniques used in various image processing applications is presented here. Image compression plays a vital role in image processing; it is also very important for efficient transmission and storage of images. A calculation of the number of bits per image resulting from typical sampling rates and quantization methods shows why image compression is needed. The development of an efficient technique for image compression has therefore become a challenging problem. On the basis of an analysis of the various image compression techniques, this paper presents a survey of the existing methods of image compression.

Keywords: image compression, lossless, lossy, huffman coding, fractal coding

Corresponding Author: Varsha Purohit A, Assistant Professor, Department of Mathematics and Statistics, Banasthali University, Jaipur, Rajasthan, India. Email:
How to Cite this Article: Varsha Purohit A. Image Compression Using Wavelets and Vector Quantization Techniques. Appl. Sci. Eng. J. Adv. Res. 2023;2(2):14-18.
To Browse: Available from https://asejar.singhpublication.com/index.php/ojs/article/view/46

Manuscript Received: 2023-02-12 | Review Round 1: 2023-10-27 | Accepted: 2023-03-20
Conflict of Interest: None | Funding: Nil | Ethical Approval: Yes | Plagiarism X-checker: 14.18

© 2023 by Varsha Purohit A and published by Singh Publication. This is an Open Access article licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/.

Introduction

Images are significant documents in today's day-to-day life, and working with them in many applications creates a need for compression. Image compression plays a very important role in the transmission and storage of an image. The main objective of image compression is to represent an image in the fewest number of bits without losing the content of the original image. The application areas for compression today range from mobile devices and the medical field to satellite research, television, and high-definition broadcasting. This drives growing interest in tools and algorithms for lower bit rate image encoding. An image is a two-dimensional signal processed by the human visual perception system. Generally, images are captured in analog form and then converted to digital form for the purpose of processing, storage, and transmission. Typically, image data is a two-dimensional array of picture elements.

Suganya et al. developed a new method for lossless compression and efficient reconstruction of colour medical images using the curvelet transform. Tushar Jadhav et al. introduced mean-removed and multistage vector quantization in the wavelet domain. Thaneswar Kumar et al. developed a hybrid technique for image compression based on DWT, DCT and Huffman coding, with the purpose of achieving a good quality image. Saravanan introduced an adaptive image coding algorithm for medical image compression, which combines the Haar transform, a modified curvelet transform and SPIHT encoding. Jenny et al. developed a modified Embedded Zero-tree Wavelet method for medical image compression. Amandeep Kaur et al. introduced the compression of medical images based on a Region Of Interest (ROI): a region-growing algorithm is applied to the image, the region of interest is selected with a mouse click, and first- and second-level DWT and IDWT (Inverse Discrete Wavelet Transform) are applied. Ruchika et al. developed a hybrid technique for compressing medical images: first the DWT is applied to the image, then Huffman encoding constructs the compressed image; Huffman decoding and the IDWT are applied to reconstruct it.

Somasundaram et al. introduced a hybrid scheme for medical image compression using the SPIHT and DEFLATE techniques; the higher-magnitude bit planes, whose corresponding thresholds are more than 8, are encoded by the SPIHT coder. Gurjar et al. introduced a medical image compression technique using hybrid wavelets and vector quantization for telemedicine applications. Sivakumar et al. introduced vector quantization based image compression: first a three-level DWT is applied to the image, then vector quantization is applied, which gives better compression ratio and PSNR values.
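As an illustration of the codebook idea that underlies vector quantization (and not of any specific scheme by the cited authors), the following minimal Python sketch builds a codebook with k-means and maps image blocks to their nearest code vectors. It assumes NumPy and SciPy are available; the block size, codebook size and function names are purely illustrative.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def vector_quantize(image, block=4, codebook_size=64):
    """Minimal vector quantization sketch: split a grayscale image into
    block x block tiles, learn a codebook with k-means, and replace each
    tile by its nearest code vector."""
    h, w = image.shape
    h, w = h - h % block, w - w % block            # crop to a multiple of the block size
    tiles = (image[:h, :w]
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2)
             .reshape(-1, block * block)
             .astype(float))
    codebook, _ = kmeans(tiles, codebook_size)      # train the codebook on the tiles
    indices, _ = vq(tiles, codebook)                # map every tile to its nearest code vector
    decoded = (codebook[indices]                    # reconstruct the image from code vectors
               .reshape(h // block, w // block, block, block)
               .swapaxes(1, 2)
               .reshape(h, w))
    return indices, codebook, decoded
```

Only the indices and the codebook need to be stored or transmitted, which is where the compression comes from.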

Jibanananda Mishra et al. proposed an intelligent method for medical image compression: subband decomposition by the wavelet transform is applied first, followed by vector quantization. The codebook is formed using SOFM, a neural network concept; the index vectors are then mapped and transmitted together with the code vectors, and finally the subbands are arranged in the proper order. The method provides better image quality.

Basics of Image Compression

There are two main types of image compression techniques: lossless image compression and lossy image compression. Run-length encoding, Huffman encoding and arithmetic coding are some examples of lossless compression. In lossy image compression, data is discarded during compression and cannot be recovered; however, lossy compression gives much greater compression than lossless compression. Wavelet, fractal and vector quantization techniques are examples of lossy image compression. There are three basic steps:

A) The Transformation: It may separate vital components of the image so that they are directly accessible for analysis. The transformation may also put the image data into a more compact form so that it can be stored and transmitted efficiently. The DCT, for example, divides the image data into blocks of 64 pixels (8×8) and processes each block separately.

B) Quantization: It reduces the accuracy of the mapper's output according to a pre-established fidelity criterion. This process is irreversible and aims to eliminate irrelevant information from the image. When lossless compression is required, this step must be omitted. The values of each block are then divided by a quantization coefficient. This is the compression step in which information loss occurs.
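A minimal Python sketch of this uniform quantization step, under the simplifying assumption of a single quantization coefficient per block (the value 16 is only an example):

```python
import numpy as np

def quantize_block(coeff_block, q=16):
    """Uniform quantization: divide the transform coefficients of a block by a
    quantization coefficient and round. The rounding is where information is lost."""
    return np.round(coeff_block / q).astype(int)

def dequantize_block(q_block, q=16):
    """Approximate reconstruction: multiply back; the rounding error cannot be undone."""
    return q_block * q
```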

C) Encoding: An encoder reduces the entropy, which means decreasing the average number of bits required to represent the image. The quantized coefficients are then encoded, typically with Huffman coding. The entropy of the source is typically computed as the zero-order entropy:

H = -\sum_{i \in S} p_i \log_2 p_i \qquad (1)

where S is the set of source symbols and p_i is the probability of occurrence of symbol i.
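A minimal Python sketch of Eq. (1), with the symbol probabilities estimated from the pixel histogram (assuming NumPy is available; the function name is illustrative):

```python
import numpy as np

def zero_order_entropy(image):
    """Estimate H = -sum_i p_i log2(p_i) from the grayscale pixel histogram (Eq. 1)."""
    values, counts = np.unique(image, return_counts=True)
    p = counts / counts.sum()                 # empirical probability of each intensity
    return float(-np.sum(p * np.log2(p)))     # average number of bits per pixel (lower bound)
```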

Figure 1: Basic Steps of Image Compression

Methodology

Lossless Compression Methods

The name lossless compression points out that the original image can be perfectly recovered. The following techniques are some of the lossless compression methods:

a) Run-Length Encoding (RLE)

b) Huffman Encoding

c) Lempel-Ziv-Welch Encoding (LZW)

d) Area Encoding

a) Run Length Encoding

It is a simple compression technique used for sequential data. It is most helpful in the case of redundant data. This method replaces sequences of identical symbols, called runs, by shorter symbols.

Figure 2: Run Length Encoding

The run-length encoding of a grayscale image is represented by a series of pairs {Vp, Rp}, where Vp is the intensity of a pixel and Rp is the number of successive pixels with the intensity Vp.
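A minimal Python sketch of this {Vp, Rp} representation (the function names are illustrative):

```python
def rle_encode(row):
    """Encode a sequence of pixel values as (Vp, Rp) pairs:
    Vp is the intensity and Rp the number of successive pixels with that intensity."""
    runs = []
    for v in row:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, r) for v, r in runs]

def rle_decode(runs):
    """Expand (Vp, Rp) pairs back to the original sequence."""
    return [v for v, r in runs for _ in range(r)]

# Example: a run of identical intensities compresses to a single pair.
assert rle_encode([7, 7, 7, 7, 2, 2, 9]) == [(7, 4), (2, 2), (9, 1)]
```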

b) Huffman Encoding

In computer science and information theory, Huffman encoding is an entropy encoding scheme used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol, where the code table has been derived in a particular way based on the estimated probability of occurrence of each possible value of the source symbol. The picture elements of the image data are treated as symbols. A symbol which occurs more frequently is assigned a relatively small number of bits. Typically, it is a prefix code: the (binary) code of any symbol is not the prefix of the code of any other symbol.
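A minimal Python sketch of Huffman code construction using a heap of partial trees (standard library only; names are illustrative). It demonstrates the property described above: more frequent symbols receive shorter codewords.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code: frequent symbols get short codewords, rare ones long."""
    freq = Counter(symbols)
    # Each heap entry: (total frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)     # two least-frequent subtrees
        f2, i2, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, i2, merged))
    return heap[0][2]

# Example: the most frequent pixel value receives the shortest codeword.
table = huffman_code([5, 5, 5, 5, 9, 9, 3])
assert len(table[5]) <= len(table[9]) <= len(table[3])
```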

c) LZW Encoding

It is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. LZW (Lempel-Ziv-Welch) is an entirely dictionary-based coding scheme. LZW coding is normally divided into static and dynamic variants. A static dictionary is fixed for the duration of the encoding and decoding operation, whereas in dynamic dictionary coding the dictionary is updated as required.
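A minimal Python sketch of dynamic-dictionary LZW encoding over a byte sequence (decoding and the packing of indices into bits are omitted; the function name is illustrative):

```python
def lzw_encode(data):
    """Dynamic-dictionary LZW: the dictionary starts with all single symbols
    and grows as longer patterns are seen, emitting one index per pattern."""
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    current = b""
    output = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                 # keep extending the current pattern
        else:
            output.append(dictionary[current])  # emit the longest known pattern
            dictionary[candidate] = next_code   # learn the new, longer pattern
            next_code += 1
            current = bytes([byte])
    if current:
        output.append(dictionary[current])
    return output

# Example: repeated substrings are emitted as single dictionary indices.
print(lzw_encode(b"ABABABA"))  # [65, 66, 256, 258]
```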

Figure 3: Example of LZW Coding

d) Area Coding

This method is an improved form of run-length encoding, and it has some significant advantages over other lossless methods. In constant area coding, special code words are used to identify large contiguous areas of 1's and 0's. The image data is divided into groups of pixels, and the partitions are classified as blocks that contain only black pixels, only white pixels, or mixed intensities. Another variant of constant area coding employs an iterative approach in which the binary image is decomposed into successively smaller blocks.

The subdivision stops when a block reaches some built-in size, or when all pixels of the block have the same value. The nodes of the resulting tree are then coded.

For compressing mostly white text images, a simpler method called white block skipping is used: blocks containing solid white areas are coded as 0, and all other blocks are coded as 1 followed by the block's bit pattern.
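A minimal Python sketch of white block skipping, under the assumption that the binary image stores white pixels as 1 (the block size and names are illustrative):

```python
import numpy as np

def white_block_skipping(binary_image, block=4):
    """All-white blocks are coded as a single '0' bit; any other block is coded
    as '1' followed by its raw bit pattern (block*block bits).
    Assumes white pixels are stored as 1 in the binary image."""
    h, w = binary_image.shape
    bits = []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            tile = binary_image[r:r + block, c:c + block]
            if tile.all():                     # solid white block: one bit
                bits.append("0")
            else:                              # mixed or black block: prefix + bit pattern
                bits.append("1" + "".join(str(int(b)) for b in tile.ravel()))
    return "".join(bits)
```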

Lossy Compression Methods

The second class of methods is lossy compression. Lossy compression techniques provide a higher compression ratio. The following are lossy image compression techniques:

a) Transform Coding

b) Discrete Wavelet Transform (DWT)

c) Discrete Cosine Transform (DCT)

d) Fractal Image Compression

a) Transform Coding

The DFT and DCT are transforms used to shift the picture elements of the original image into frequency-domain coefficients. These coefficients have several useful properties; one is the energy compaction property, which is the fundamental basis for achieving image compression.

b) Discrete Wavelet Transform

The Discrete Wavelet Transform is a mathematical tool that has aroused great interest in the area of image processing due to its good features. Some of these characteristics are: 1) it allows multiresolution representation of an image in a natural way, because additional wavelet subbands progressively refine the low-frequency content; 2) it supports examination of the wavelet coefficients in both the spatial and frequency domains, so the interpretation of a coefficient is not restricted to its frequency behaviour and a better study of image vision and segmentation can be performed; and 3) for common images, the Discrete Wavelet Transform achieves high compaction of energy in the lower-frequency subbands, which is extremely helpful in applications such as image compression.

The introduction of the Discrete Wavelet Transform made it possible to improve several specific image processing applications by replacing the existing tools with this new mathematical transform.
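As an illustration of the energy-compaction property described above, the following minimal Python sketch assumes the PyWavelets package (pywt) is available and reports the fraction of signal energy collected in the lowest-frequency subband after a multi-level 2-D DWT; the wavelet and level are illustrative choices.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed: pip install PyWavelets

def dwt_energy_compaction(image, wavelet="haar", levels=2):
    """Multi-level 2-D DWT; returns the fraction of signal energy that ends up
    in the lowest-frequency (approximation) subband."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    approx = coeffs[0]                                   # LL subband at the coarsest level
    detail = [d for lvl in coeffs[1:] for d in lvl]      # all detail subbands
    total = sum(float(np.sum(np.square(a))) for a in [approx] + detail)
    return float(np.sum(np.square(approx))) / total      # close to 1 for smooth natural images
```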

c) Discrete Cosine Transform

The Discrete Cosine Transform helps to break the image up into parts of differing importance. It expresses a finite sequence of data points in terms of cosine functions oscillating at different frequencies. In particular, the DCT is a Fourier-related transform similar to the discrete Fourier transform, but using only real numbers. The discrete cosine transform is widely used in digital image processing for encoding and decoding.
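A minimal Python sketch of the 2-D DCT of an 8×8 block, assuming SciPy is available; it shows that most of the energy collects in the low-frequency (top-left) coefficients, which is what compression exploits.

```python
import numpy as np
from scipy.fft import dctn, idctn  # SciPy assumed available

def dct_block_demo(block8x8):
    """2-D DCT of an 8x8 block and the share of energy in its low-frequency corner."""
    coeffs = dctn(block8x8.astype(float), norm="ortho")
    reconstructed = idctn(coeffs, norm="ortho")      # inverse transform is exact (no loss yet)
    low_freq_energy = np.sum(np.square(coeffs[:4, :4]))
    return coeffs, low_freq_energy / np.sum(np.square(coeffs))
```

Loss only appears once these coefficients are quantized, as described in the quantization step earlier.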

d) Fractal Compression

Fractal image compression coding establishes the idea of decomposing an image into segments using standard methods of image processing such as colour separation, edge detection and texture analysis. Each segment is stored in a library of fractals [21]. This scheme is efficient for compressing images that have a high degree of regularity and self-similarity.

Parameters of Image Compression

Different performance parameters are used in the literature for the analysis of image compression techniques; some of them are given below. These parameters are defined to measure the fitness of a given compression algorithm for an application. The performance measurement parameters are described in the following sub-sections:

a) Compression Ratio (CR)

b) Bits Per Pixel (BPP)

c) Mean Square Error (MSE)

d) Peak Signal-to-Noise Ratio (PSNR)

a) Compression Ratio

It is the ratio of the size of the original (uncompressed) image to the size of the compressed image. The compression ratio is closely related to picture quality: normally, a higher compression ratio results in poorer quality of the reconstructed image.

Compression Ratio = Size of uncompressed image / Size of compressed image

b) Bits Per Pixel

Bits per pixel is a good measure of image compression. It measures the average number of bits used to represent each pixel of the image in compressed form, that is, the number of bits needed to store one pixel of the image data. For an uncompressed grayscale image the bits per pixel value is 8, and for a colour image it is 24.
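A minimal Python sketch computing the compression ratio and bits per pixel from file sizes; the file paths and image dimensions are hypothetical inputs supplied by the caller.

```python
import os

def compression_metrics(original_path, compressed_path, width, height):
    """Compression ratio = uncompressed size / compressed size;
    bits per pixel = total compressed bits / number of pixels."""
    uncompressed = os.path.getsize(original_path)   # bytes
    compressed = os.path.getsize(compressed_path)   # bytes
    cr = uncompressed / compressed
    bpp = compressed * 8 / (width * height)
    return cr, bpp
```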

c) Mean Square Error

Mean Square Error, also called the average prediction error, determines the fidelity of an image. It is calculated as the average of the squared differences between the decompressed and the original image. A higher value of MSE indicates a poorer quality image.

MSE = \frac{1}{mn} \sum_{i=0}^{m-1} \sum_{j=0}^{n-1} \left[ I(i,j) - K(i,j) \right]^2

where I is the original image, K is the decompressed approximation, and m × n is the size of the image in pixels. A lower value indicates better picture quality.

d) Peak Signal Noise Ratio

Peak Signal-to-Noise Ratio is a measure of the peak error. It is usually expressed on the logarithmic decibel (dB) scale. MSE and PSNR are very helpful parameters for comparing image compression quality.

PSNR = 10 \log_{10} \left( \frac{MAX_I^2}{MSE} \right)

where MAX_I is the maximum possible pixel value (255 for an 8-bit image).

A higher PSNR value indicates better quality of the reconstructed image.
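A minimal Python sketch of the MSE and PSNR definitions above, assuming 8-bit images (peak value 255) and NumPy:

```python
import numpy as np

def mse(original, decompressed):
    """Mean square error between the original image I and its approximation K."""
    diff = original.astype(float) - decompressed.astype(float)
    return float(np.mean(np.square(diff)))

def psnr(original, decompressed, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit pixels (peak = 255)."""
    error = mse(original, decompressed)
    if error == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(peak ** 2 / error)
```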

Conclusion

This paper has presented different kinds of image compression techniques, which fall into two basic categories: lossless and lossy. Lossless techniques allow the original image to be recovered exactly, whereas lossy image compression provides a higher compression ratio than lossless compression. The survey makes clear that the field will continue to interest researchers in the days to come.

References

1. Vijayvargiya, Silakari S., & Pandey R. (2013). A survey: Various techniques of image compression. International Journal of Computer Science and Information Security, 11(10).

2. Vrindavanam J., Chandran S., & Mahanti G.K. (2012). A survey of image compression methods. International Conference and Workshop on Recent Trends in Technology, (TCET), pp. 12-17.

3. Neelam, & Bansal A. (2014). Image compression: A learning approach. International Journal of Computer Science Trends and Technology, 2(4), 60–66.

4. Suganya M., Ramachandran A., Venugopal D., & Sivanantha Raja A. (2014). Lossless compression and efficient reconstruction of colour medical images. International Journal of Innovative Research in Computer and Communication Engineering, 2(Special Issue 1), 1271–1278.

5. Jadhav T., Patil M., & Dandawate Y. (2015). Image compression using mean removed and multistage vector quantization in wavelet domain. International Journal of Modern Trends in Engineering and Research, 1299–1306.

6. Kumar T., & Kumar (2015). Medical image compression using hybrid techniques of DWT, DCT and Huffman coding. International Journal of Innovative Research in Electrical, Electronics, Instrumentation and Control Engineering, 3(2), 54-60.

7. Saravanan (2013). Medical image compression using curvelet transform. International Journal of Engineering Research and Technology, 2(12), 2196–2202.

8. Jenny C.T., & Muthulakshmi G. (2010). A modified embedded zero-tree wavelet method for medical image compression. ICTACT Journal on Image and Video Processing, 02, 87-91.

9. Kaur A., & Goyal M. (2014). ROI based image compression of medical images. International Journal of Computer Science Trends and Technology, 2(5), 162-166.

10. Ruchika, Singh M., & Singh A.R. (2012). Compression of medical images using wavelet transforms. International Journal of Soft Computing and Engineering, 2(2), 339-343.

11. Mahmudul Hassan, & Wang Xuefeng. (2022). The challenges and prospects of inland waterway transportation system of Bangladesh. International Journal of Engineering and Management Research, 12(1), 132-143.