Image processing compression
Marwa Adeeb Al-jawaherry and Saja Younis Hamid, Department of Computer Science, College of Computer Sciences and Mathematics, University of Mosul, Mosul, Iraq. Email: sata@uomosul.edu.iq
ARTICLE INFO

Article history:
Received: 12/10/2021
Revised: 10/11/2021
Accepted: 19/12/2021
Available online: 21/12/2021

Keywords:

DOI: https://doi.org/10.29304/jqcm.2021.13.4.860

ABSTRACT

With the development of modern communications technology, data compression is becoming more important as a way to save storage space and reduce transmission costs. Because of this, various types and strategies of image compression have been proposed by several researchers, and some of these studies are discussed in this review. The two main types of image compression are lossless and lossy compression, with many methods for each. This review also describes various lossless and lossy compression algorithms used in the studies reported in this literature. Lastly, conclusions are drawn from the results of the conducted survey.
1. Introduction
The digital image is a collection of pixel values that requires a large amount of storage space and transmission
bandwidth. Most images are characterized by the fact that neighboring pixels are associated and thus contain
duplicate information. [1]. The main goal now would be to find a less correlated visual representation. [2]. Image
compression aims to minimize the size of a graphics file without sacrificing image quality, allowing more images to
be saved in a given amount of memory space and reducing the number of times images must be delivered or
downloaded over the Internet in a more efficient manner. [3].
Compression reduces the amount of data necessary to represent and store a digital image by removing redundant or excess bits from the image. Generally, three main types of redundancy can be identified: coding redundancy, which occurs when more bits are used than are required and fewer code words are used than are available; spatial and temporal redundancy (interpixel redundancy), which results from correlations between adjacent pixels in an image, causing some information to be duplicated unnecessarily between associated pixels; and irrelevant data (psychovisual redundancy), which arises because the human visual system ignores visually unimportant data. In previous years, many image compression methods have been developed, and they can be broadly categorized into two primary classes: lossy compression and lossless compression [2], [4].
2. Lossless Compression Technique: In lossless compression, the original image can be reconstructed exactly from the compressed data; the following methods are commonly used.
a. Chain codes: efficiently define the boundary of rasterized shapes and can be compressed further; they stand for a sequence of instructions that regulate the walk through the border pixels of an examined shape [9].
b. Run-length encoding (RLE): a lossless compression method that relies on the occurrence of repeated data rather than on its statistics [10], [11]. A (length, value) pair is used to replace data in this coding, where "value" is the recurring value and "length" is the number of repetitions [3] (a minimal encode/decode sketch is given after this list).
c. Predictive coding: the goal is to eliminate redundancy in image patterns. The error pattern can be regarded as fully random if the predictor does a good job of removing redundancy [12]. In predictive coding, previously delivered or available data is used to predict future values, and only the difference is coded. This is done in the image (spatial) domain, which makes it somewhat more complicated [2].
d. Bit plane coding: an obvious choice for a simple data partitioning approach. It has the characteristic of being immune to context dilution and of assessing probabilities quickly [13].
e. Adaptive dictionary algorithms: employed to achieve a suitable balance between compression efficiency and computational complexity [14].
f. Entropy encoding: a lossless compression method that is applied to an image after it has been quantized. It allows for a more efficient representation of an image, using less memory for transmission or storage [15].
g. Area coding: a more advanced form of lossless run-length coding. It is extremely effective and can yield higher compression ratios (CR), but, mostly because of its non-linear nature, it cannot easily be implemented in hardware [16].
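As a concrete illustration of the run-length idea in (b), the following is a minimal Python sketch using plain lists; the function names rle_encode and rle_decode are illustrative and not taken from any of the surveyed papers.

```python
def rle_encode(values):
    """Encode a sequence as (length, value) pairs, as in classic RLE."""
    pairs = []
    for v in values:
        if pairs and pairs[-1][1] == v:
            pairs[-1][0] += 1          # extend the current run
        else:
            pairs.append([1, v])       # start a new run
    return [(length, value) for length, value in pairs]

def rle_decode(pairs):
    """Expand (length, value) pairs back into the original sequence."""
    out = []
    for length, value in pairs:
        out.extend([value] * length)
    return out

row = [255, 255, 255, 0, 0, 17, 17, 17, 17]
encoded = rle_encode(row)              # [(3, 255), (2, 0), (4, 17)]
assert rle_decode(encoded) == row
```

Long runs of identical pixel values, common in binary and cartoon-like images, shrink to a handful of pairs, which is why RLE also appears as a back-end stage in several of the schemes surveyed below.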
3. Lossy Compression Technique [4], [5]: Lossy compression reduces a file by permanently eliminating redundant information, so that when the file is decompressed only part of the original information remains. This technique is generally used when a certain amount of data loss will mostly go unnoticed by the user, as with video and sound files; for images on the Web, JPEG compression is commonly adopted. Lossy schemes are widely used because the quality of the reconstructed images is sufficient for most applications: the decompressed image is not identical to the original, but it is quite close [17], and lossy schemes provide much higher compression ratios than lossless ones. Figure (2) shows the result of compressing an image with a lossy compression technique [18].
4. Subject review
In the year 2008 [22], Somasundaram and Domnic proposed a still image compression technique with a low bit rate that generates a residual codebook and compresses the VQ indices. It is a novel grayscale image compression strategy that improves image quality while keeping the bit rate low. The system uses vector quantization (VQ), with a residual codebook to improve image quality and compression of the VQ indices to reduce the bit rate. On the standard images, this technique provides superior PSNR values and is less expensive than GSMVQ and JPVQ. This approach also compresses data faster than the other two, since it uses a smaller codebook.
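To make the basic VQ idea concrete, the sketch below builds a small codebook with a plain k-means loop over 2×2 image blocks and replaces each block by the index of its nearest codeword. It is a simplified, generic illustration only; it does not include the residual codebook or index compression used by Somasundaram and Domnic, and the function names are invented for this example.

```python
import numpy as np

def blocks_2x2(img):
    """Split an H×W image (H, W even) into flattened 2×2 blocks."""
    h, w = img.shape
    return (img.reshape(h // 2, 2, w // 2, 2)
               .transpose(0, 2, 1, 3)
               .reshape(-1, 4)
               .astype(float))

def train_codebook(vectors, k=16, iters=20, seed=0):
    """Plain k-means: returns k codewords approximating the training vectors."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook

img = np.random.randint(0, 256, (64, 64))
vecs = blocks_2x2(img)
cb = train_codebook(vecs)
indices = np.linalg.norm(vecs[:, None, :] - cb[None, :, :], axis=2).argmin(axis=1)
# The per-block indices (4 bits each for k = 16) plus the codebook are what would
# be stored or transmitted instead of the raw pixel values.
```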
In the year 2009 [23], Sadashivappa and AnandaBabu performed a study to characterize a larger collection of wavelet functions for use in a SPIHT-based still image compression system. The study examines the key aspects of wavelet functions and filters, using MATLAB to convert images into wavelet coefficients for sub-band coding. To test image quality objectively, the peak signal-to-noise ratio (PSNR) and its variation with bit rate were used.
The impact of various parameters on the different wavelet functions is investigated, and the results serve as a useful guide for developers of wavelet-based coders.
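Since PSNR is the objective quality measure used in this and most of the following studies, a small helper is sketched here for reference; it is the generic definition for 8-bit images, not code from the cited paper.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit images."""
    original = original.astype(float)
    reconstructed = reconstructed.astype(float)
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.random.randint(0, 256, (64, 64))
b = np.clip(a + np.random.randint(-5, 6, a.shape), 0, 255)
print(f"PSNR = {psnr(a, b):.2f} dB")   # higher means the reconstruction is closer
```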
Kharate and Patil provided an appropriate selection of mother wavelets based on the nature of the images in a paper published in 2010 [6], which significantly improved image quality and compression ratio. They propose a compression technique based on the wavelet packet best tree with improved run-length encoding, which is based on threshold entropy. Because the whole tree is not decomposed, the suggested technique reduces the time complexity of wavelet packet decomposition. Based on threshold entropy, the algorithm identifies the sub-bands that contain meaningful information. The improved run-length encoding technique is reported to outperform standard RLE. According to the results obtained on a set of natural and synthetic images, the compression ratio is good for low-frequency (smooth) images and very high for gray images. For high-frequency images such as Mandrill and Barbara, the compression ratio is good and the image quality is preserved. These findings are compared with those obtained using the JPEG-2000 application, and the results achieved with the suggested algorithm are superior.
In the year 2010 [24], Somasundaram and Vimala proposed a novel approach called Efficient Block Truncation Coding (EBTC). The proposed method is a lossy image compression technique that exploits inter-pixel redundancy to reduce the bit rate further. It is a well-known fact that the intensity values of adjacent pixels are more or less the same. After the image is divided into small 4 × 4 pixel blocks, the blocks are classified into two categories: low-detail blocks and high-detail blocks. A block is referred to as a high-detail block when the intensity values of neighboring pixels differ, and as a low-detail block when the difference between the intensity values is small. Compared with traditional BTC, the proposed approach provides excellent performance in terms of PSNR values and bit rate.
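The classic BTC step that this and the following studies build on can be sketched as follows: each 4×4 block is reduced to a one-bit-per-pixel mask plus two reconstruction levels derived from the block mean and standard deviation. This is the generic textbook version, not the exact EBTC or AMBTC variants discussed above.

```python
import numpy as np

def btc_block(block):
    """Encode one 4x4 block: bit plane + two reconstruction levels (classic BTC)."""
    block = block.astype(float)
    mean, std = block.mean(), block.std()
    bitmap = block >= mean                    # 1 bit per pixel
    q = bitmap.sum()                          # number of "high" pixels
    n = block.size
    if q in (0, n):                           # flat block: one level suffices
        return bitmap, mean, mean
    low = mean - std * np.sqrt(q / (n - q))   # level for pixels below the mean
    high = mean + std * np.sqrt((n - q) / q)  # level for pixels at/above the mean
    return bitmap, low, high

def btc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)

block = np.random.randint(0, 256, (4, 4))
bitmap, low, high = btc_block(block)
recon = btc_decode(bitmap, low, high)         # 16 bits + two levels instead of 16 bytes
```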
In the year 2011 [25], Mohammed and Abou-Chadi carried out a study of image compression using block truncation coding, which is regarded as a lossy image compression technique. Two algorithms were chosen: the original block truncation coding (BTC) and absolute moment block truncation coding (AMBTC). Both algorithms employ a two-level quantizer and divide the image into non-overlapping blocks. Different grey-level test images with 512×512 pixels and 8 bits per pixel (256 grey levels) were used to apply the approaches. The bit rate of the reconstructed images is 1.25 bits per pixel, which equates to a compression of 85 percent. Image quality was assessed using the bit rate (BR), peak signal-to-noise ratio (PSNR), weighted peak signal-to-noise ratio (WPSNR), and structural similarity index (SSIM). According to the results, the AMBTC algorithm outperforms the BTC algorithm: at the same bit rate, image compression using AMBTC gives better image quality than compression using BTC. Furthermore, AMBTC is much faster than BTC.
An improved lossy compression technique that works on grayscale images to eliminate correlation and spatial redundancy between pixels, based on Block Truncation Coding (BTC) and Enhanced Block Truncation Coding (EBTC), was suggested by Kumar and Singh in the year 2011 [26]; it is useful for preserving both the compression ratio and the quality of an image. According to the results, the EBTC algorithm outperforms the BTC algorithm: at the same bit rate, image compression using EBTC gives better image quality than image compression using BTC. The algorithm was tested on a variety of grayscale images of various sizes. Image quality was evaluated using the weighted peak signal-to-noise ratio, peak signal-to-noise ratio, bit rate, and structural similarity index. The bit rate of the reconstructed images is 1.25 bpp, which equates to a compression of 85%.
In the year 2014 [27], Bhavana Patil and Asharani Patil carried out research to develop a computationally efficient and effective image compression algorithm based on the DCT and the wavelet transform. The work focuses on wavelet image compression using the Haar transformation, with the aim of reducing processing requirements by applying different compression thresholds to the wavelet coefficients and obtaining results in a matter of seconds, while improving the quality of the reconstructed image. They investigate key design challenges using a reduced model of a sub-band coder. The Haar wavelet achieves a higher compression ratio and PSNR than the DCT, and a higher PSNR indicates better image quality. In addition to the Haar wavelets, the high-frequency sub-bands with better resolution are adaptively quantized. Owing to separable wavelet filters and clustering with spatial constraints, these two compression approaches produce well-structured directional edges and large homogeneous regions. The bit rate of sub-band coding is substantially lower than that of the original sub-band images.
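For reference, one level of the 2-D Haar decomposition used in that study can be sketched in a few lines; this is a generic single-level transform, and the thresholding of small detail coefficients shown at the end is what yields the compression.

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar transform: returns LL, LH, HL, HH sub-bands."""
    x = img.astype(float)
    # transform rows: averages and differences of neighbouring pixel pairs
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # transform columns of each intermediate result
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh

img = np.random.randint(0, 256, (8, 8))
ll, lh, hl, hh = haar2d_level(img)
# Compression comes from quantizing or zeroing small detail coefficients (lh, hl, hh).
threshold = 10.0
hh_compressed = np.where(np.abs(hh) < threshold, 0.0, hh)
```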
In the year 2015 [28], Zhou, Bai, and Wang conducted a study proposing an image compression approach based on the discrete cosine transform (DCT). This approach combines differential pulse code modulation (DPCM) and vector quantization in a hybrid scheme. In this system, the DCT is used to transfer the image from the spatial domain to the frequency domain. The block data is then translated into a vector in zigzag order and truncated. Following that, the vector is divided into DC and AC coefficients. The DC coefficient is coded using DPCM after scalar quantization, while the AC coefficients are coded using multistage vector quantization (MSVQ). Entropy encoding is then applied independently to the index tables and the DC portions. The proposed algorithm outperforms the standard VQ algorithm as well as the hybrid DCT-VQ technique. The codebook design procedure, which is improved by using multiple small codebooks instead of one huge codebook, is the method's only complicated operation compared with the JPEG scheme. The suggested technique has a higher PSNR value than the JPEG standard, as demonstrated by the experimental results.
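The DCT-and-zigzag front end described above is common to many block coders; a minimal numpy sketch of that front end is given below. It is generic, not the authors' exact DPCM/MSVQ pipeline, and the helper names are invented for this illustration.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix of size n×n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2.0 / n)

def zigzag_order(n=8):
    """Indices of an n×n block in JPEG-style zigzag order (low to high frequency)."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

C = dct_matrix(8)
block = np.random.randint(0, 256, (8, 8)).astype(float) - 128.0
coeffs = C @ block @ C.T                      # 2-D DCT of the block
zz = np.array([coeffs[r, c] for r, c in zigzag_order(8)])
dc, ac = zz[0], zz[1:]                        # DC coded by DPCM, AC by (MS)VQ
```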
In the year 2017 [7], Pabi et al. proposed Singular Value Decomposition (SVD) as a fast compression technique that compresses images by using a smaller rank to approximate the original matrix. At low compression ratios, SVD provides good PSNR values, but encoding time increases when SVD is employed for distinct singular values with an acceptable PSNR. A new fast compression strategy called SVD-BPSO is therefore developed, which uses SVD and butterfly particle swarm optimization to reduce encoding time. The use of the BPSO idea in singular value decomposition decreases encoding time and increases transmission speed. The simulation results indicated that the strategy delivers a high PSNR while requiring the least amount of encoding time.
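The SVD part of this idea is easy to demonstrate: the sketch below keeps only the k largest singular values of an image matrix. The BPSO step the authors use to choose parameters is omitted, so this is only a minimal low-rank approximation, not their full method.

```python
import numpy as np

def svd_compress(img, k):
    """Rank-k approximation of a grayscale image via truncated SVD."""
    u, s, vt = np.linalg.svd(img.astype(float), full_matrices=False)
    approx = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]
    # storage cost ~ k * (rows + cols + 1) values instead of rows * cols
    return np.clip(approx, 0, 255)

img = np.random.randint(0, 256, (128, 128))
recon = svd_compress(img, k=20)
mse = np.mean((img - recon) ** 2)   # lower rank -> smaller storage, larger error
```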
The work of Kong, Sun, Han, and Guo [29] in 2017 proposed an image compression and transmission strategy based on non-negative matrix factorization (NMF); the concept of the NMF algorithm is studied first. The image capture, blocking, compression, and transmission mechanisms are then carried out collaboratively. Camera nodes take images and send them to ordinary nodes, which compress them using the NMF method, and the cluster head node sends the compressed images on to the station. Distinct functions, such as data processing and long-distance data transmission, are assigned to different nodes. As a result, the energy usage of the entire system becomes homogeneous, and finally the image restoration is handled by the station. According to the simulation results, this mechanism can reduce the energy consumption of the camera nodes, which play a critical role in the network. At the same time, it can balance the network's energy usage and lengthen its lifetime, and it can also efficiently remove common noise and enhance image restoration quality.
In the year 2017 [30], Ahmed and George proposed a low-cost lossy compression scheme for color images. The RGB image data is transformed to the YUV color space, and the U and V bands are then down-sampled in the propagation step. Each color sub-band is decomposed separately using the biorthogonal wavelet transform. The low-low (LL) sub-band is then encoded using the DCT, while scalar quantization is used to code the remaining wavelet sub-bands. The quadtree coding method is used to code the results of the DCT and quantization procedures. Finally, adaptive shift coding is employed as a high-order entropy encoder to remove any remaining statistical redundancy and boost compression efficiency. The system was tested on a set of standard color images, and the compression results revealed that it was capable of reducing the size while keeping fidelity levels above the acceptable level, with compression ratios of around 1:30 for color Barbara and 1:40 for color Lena.
In the year 2017 [31], Mander and Jindal proposed a novel technique for image compression that combines the BTC and DWT algorithms with spline interpolation. It helps shrink the image so that it takes up less memory and is easier to transmit. Grayscale images are compressed using BTC and, after compression, the images are reconstructed using the discrete wavelet transform (DWT) with spline interpolation. The observed PSNR values are reasonable, and the alterations have a favorable impact on the visual quality of the compressed images. This image compression approach takes care of all of the image's edges, and the method is used because it is less complicated than others and simple to implement. After applying these strategies, the results obtained were found to be over 43 percent better than the techniques used by others for compression; these results were derived by comparing the findings of this study with previous implementations by various researchers. The recommended method was found to be effective, exceeding the most commonly used existing procedures and providing outcomes that are 49 percent better.
In 2017, Abood [32] used three composite color image compression methods: the composite stationary wavelet technique (S), the composite wavelet technique (W), and the composite multi-wavelet technique (M). In each composite technique, the compression parameters are derived for the third-level high-energy sub-band of each composite transform. Color image compression is used in these methods to produce high compression, no loss of the original image, higher performance, and good image quality. The three-level multi-wavelet transform (MMM) in the M
technique is the best composite transformation among the 27 types, with the highest energy and compression ratio values and the lowest bits per pixel (bpp), time in seconds, and rate-distortion values. The compression of a color image is nearly the same as the average of the compression parameter values for the three bands of the same image. This work is beneficial for images requiring high compression, no loss of the original image, improved performance, and good image quality.
Kumar R. et al. [33] devised in 2019 an effective matrix completion technique for image compression and quality retrieval. The suggested method uses thresholding and singular value reduction to complete low-rank matrices. Singular value decomposition (SVD) is used to decompose an image and obtain a low-rank representation of the image data that can be approximated in compressed form. The singular value thresholding approach is then used to recover the visual quality of the compressed image. The proposed method is easily applicable to various visual characteristics of the image at various compression efficiencies, and the comparative analysis also provides evidence of the suitability of the proposed method in comparison with state-of-the-art and standard techniques such as JPEG2000. Visual quality can also be improved using an SVT-based quality retrieval procedure, depending on the application. The simulation results show that the proposed method is capable of compressing images at high rates, and a complete examination of the method's efficiency in terms of compression and quality retrieval is presented. Experiments show that a maximum compression of 80% can be achieved while maintaining acceptable visual quality for the human visual system (HVS).
Li and Jia published a paper in 2019 [34] proposing a model of the coding bit rate at high bit rates in terms of the mean absolute difference and the coding quantization parameters for predictive coding. The model is then used to create a rate-control approach for near-lossless compression with JPEG-LS. To manage the bit rate during the coding of a given image, the quantization parameters are altered piecewise based on the model. Experiments demonstrate that, with the proposed strategy, the final code rate can be close to a target rate. Because of the exact bit-rate model, it is possible to avoid the quantization parameters varying over a large range, which is not possible with other methods. Consequently, the suggested approach can achieve rate-distortion performance that is close to ideal.
Ariatmanto and Ernawan [35] in the year 2020 proposed new scaling factors for selected discrete cosine transform (DCT) coefficients in image watermarking, where these factors follow particular rules to reduce distortion. Image blocks with the lowest pixel variances are chosen as embedding locations, and the best image quality is used to determine the ideal scaling factors for the selected DCT coefficients on the middle frequencies. The scaling factors are then used to carry out the embedding procedure. The results indicate that the proposed method achieves higher normalized cross-correlation (NC) values for watermark recovery under various attacks than existing schemes, while maintaining watermarked images with a PSNR value of 45 dB.
In the year 2020 [36], Aljaz Jeromel and Borut Zalik proposed a modern lossy approach for compressing cartoon images. First, the image is divided into regions of approximately the same color, and the chain codes for all regions are determined. The sequence of acquired chain-code symbols is transformed using the Burrows-Wheeler transform, RLE, and move-to-front transformations. Finally, in the last stage, an arithmetic encoder can be employed to compress the output binary stream even further. The suggested technique is asymmetric, which means that it does not reverse all of the compression steps during decompression. According to the experimental results, the given method yields significantly better compression ratios than JPEG2000, WebP, JPEG, PNG, SPIHT, and two algorithms specialized in cartoon image compression: the quad-tree algorithm and the RS-LZ algorithm.
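Of the transforms chained in that pipeline, the move-to-front step is the simplest to illustrate. The sketch below is a generic MTF over byte symbols, not the authors' full BWT+RLE+arithmetic chain; it shows how repeated symbols become runs of small indices that later stages compress well.

```python
def mtf_encode(symbols, alphabet_size=256):
    """Move-To-Front: frequently repeated symbols map to small indices (often 0)."""
    table = list(range(alphabet_size))
    out = []
    for s in symbols:
        idx = table.index(s)
        out.append(idx)
        table.pop(idx)
        table.insert(0, s)          # move the symbol to the front of the table
    return out

# A run of identical chain-code symbols becomes 0,0,0,... which RLE shrinks further.
print(mtf_encode([7, 7, 7, 3, 3, 7]))   # -> [7, 0, 0, 4, 0, 1]
```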
In the year 2020 [37], Peto et al. developed the compressed adaptive integration scheme (C-AIS), a method for computing stiffness and mass matrices in fictitious domain approaches that involve the integration of discontinuous functions. The new approach adds a compression step to the standard quadtree-decomposition-based adaptive integration scheme (AIS), and C-AIS has several benefits. To begin with, the compression of the sub-cells invariably saves significant time in the numerical integration computations. Second, the compression technique is simple to integrate into existing applications because it runs directly after the quadtree-decomposition procedure. Third, C-AIS produces the same level of precision as traditional AIS in the case of polynomial integrands. Fourth, C-AIS can easily be combined with other approaches aimed at reducing the number of integration points, such as the Boolean-FCM. The C-AIS method is demonstrated to be efficient in the context of a fictitious domain method, the finite cell method (FCM), based on Cartesian meshes and applied to linear elastostatics and modal analysis problems, but it is also suitable for quadrature in other fictitious domain approaches, such as CutFEM and cgFEM.
J. Wang and colleagues in 2020 [38] proposed a new approach termed CDMD (Compressing Dense Medial Descriptors), an end-to-end method for compressing color and grayscale images using dense medial descriptors; it adapts the existing DMD method, originally introduced for image segmentation and simplification, to the problem of image compression. To achieve this, an enhanced layer-selection approach, a lossless MAT-encoding scheme, and an all-layer lossless compression scheme were presented. They make two major contributions in this study. First, effective layer-selection heuristics, a modified skeleton pixel-chain encoding, and a post-processing compression approach improve the encoding power of dense skeletons. Second, a benchmark over a wide range of natural and synthetic color and grayscale images was proposed to calculate ideal parameters for dense skeletons and to assess their encoding capability. Because it achieves greater compression ratios at similar quality to the well-known JPEG technique, this new method (CDMD) suggests that skeletons can be an attractive choice for lossy image encoding.
In the year 2020 [39], Al-khassaweneh and AlShorman suggested a new lossy method for image compression. The proposed algorithm has two stages: the Frei-Chen bases stage and the RLE stage. The method's main purpose is to increase the compression factor while lowering decompression distortion. The Frei-Chen stage already achieves an increased compression factor with strong correlation values, and in the second stage RLE is employed to improve the compression factor even further while reducing distortion in the decompressed image. To increase the compression factor, the Frei-Chen bases are thus combined with the well-known RLE. The test results showed that the proposed approach is efficient in terms of compression factor and MSE, and in terms of performance it outperforms other image compression algorithms.
In 2020 [40], Lone proposed a compression technique based on spatial orientation block trees. To encode an image, it primarily uses two small lists and two state tables. The main goal is to create a memory-efficient and fast method that achieves modest lossless to perceptually lossless compression performance.
In the year 2021 [18], Ragmi Mustafa, Basri Ahmedi, and Kujtim Mustafa conducted a study on lossy image compression using neural networks. They examined the BEP-SOFM algorithm, which uses the backward error propagation algorithm to quickly obtain initial weight values for the self-organizing feature map (SOFM) algorithm. The compressed image was created both by dividing the image into equal-sized parts and by using quadtree segmentation. The testing revealed that employing quadtree segmentation with the BEP-SOFM method produces better error results than dividing the image into blocks of the same size. The image size is a significant factor in the compression process: compared with the simple splitting approach, quadtree segmentation did not improve, or only marginally improved, the quality of small images, whereas the quality of larger images is improved. This is because the input vector components have the same value after the training image is broken into smaller blocks by changing the pixel values to their average value, which means the color value in the decompression process will be the same; for a larger image, however, these blocks lack detail. The results are presented in terms of mean square error (MSE) and peak signal-to-noise ratio (PSNR).
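A minimal version of the variance-driven quadtree segmentation mentioned above might look like the following. This is a generic recursive split, not the exact BEP-SOFM preprocessing; the threshold and block-size parameters are illustrative assumptions.

```python
import numpy as np

def quadtree_blocks(img, threshold=100.0, min_size=4):
    """Recursively split an image into blocks until each block is nearly uniform.

    Returns a list of (row, col, size, mean) tuples describing the leaf blocks.
    """
    leaves = []

    def split(r, c, size):
        block = img[r:r + size, c:c + size].astype(float)
        if size <= min_size or block.var() <= threshold:
            leaves.append((r, c, size, block.mean()))
            return
        half = size // 2
        for dr in (0, half):
            for dc in (0, half):
                split(r + dr, c + dc, half)

    split(0, 0, img.shape[0])        # assumes a square image with power-of-two side
    return leaves

img = np.random.randint(0, 256, (64, 64))
blocks = quadtree_blocks(img)
# Smooth areas end up as a few large blocks; detailed areas as many small ones.
```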
Zhang et al. [41] in 2021 developed a wavelet-based compressed sensing method for remotely sensed astronomical images, introducing a new wavelet-based CS framework. An improved measurement matrix with a dual scaling-rate assignment provides the ideal scaling-rate allocation, and at low measurement rates this enhanced measurement matrix retains the most relevant frequency-domain information. The process starts with a two-dimensional discrete wavelet transform (DWT), which provides the frequency information of the image. The parent-child relationship between the sub-bands determines how the wavelet coefficients are reorganized in a new ordered fashion. They offer an optimized measurement matrix with a double assignment of the scaling rate and construct scanning modes for the high-frequency sub-bands based on trend information. With a single scaling matrix, higher scaling rates can be assigned simultaneously to sparse vectors carrying more information and to higher-energy coefficients within sparse vectors, and image sampling can be improved by using this two-assignment technique. Orthogonal matching pursuit (OMP) and the inverse discrete wavelet transform (IDWT) are employed to reconstruct the image in the decoding phase. This technique can accomplish high-quality reconstruction at a low measurement rate, yielding a high-performance remote-sensing astronomical image compression methodology.
In 2021, Ko, H.-H. [42] proposed an improved binary MQ arithmetic coder that uses a look-up table (LUT) for (A × Qe) to improve coding performance, with quantization on several levels using 2-level, 4-level, and 8-level look-up tables. Rather than employing a uniform quantized value of (A × Qe), experiments were carried out with the quantization parameters varied at each of the 2, 4, and 8 levels; by modifying these parameters, non-uniform quantization of (A × Qe) is applied. Positive results were obtained when the method was applied with the JBIG2 and JPEG2000 coding standards, and the best LUT was discovered through a series of experiments. The higher the quantization level, the better the compression
performance of JBIG2. The best-chosen parameters at the 4-level and 8-level are 1.0, while at most quantization levels the best compression performance of JPEG2000 was attained at a value of 1.05.
A study in 2021 by Svynchuk et al. [20] highlights a new image compression approach that relies on a finite number of parameters and employs a class of non-monotonic singular functions with fractal features. These characteristics enable high compression ratios for digital data and quick decoding. Because a class of continuous functions that depend on a finite number of parameters and exhibit fractal qualities is explored, an algorithm for image encoding and decoding is investigated. Unlike conventional functions, fractal functions aid in the effective encoding of data and in the solution of complicated problems in a variety of human endeavors. The mathematical model used in fractal image compression is a system of iterated functions. Because it involves a huge number of transformations and mathematical calculations, the encoding process takes a long time; however, this results in a high level of image compression. Decoding an image requires knowing the fractal codes that allow the raster image to be reconstructed; in this situation, unpacking the image is easier because most of the work was already done during encoding. The results obtained allow for the creation of a mathematical basis that is sufficiently reliable for the compression of varied graphic information, as well as for the improvement of existing approaches.
To accomplish lossless image encryption and compression at the same time, Zhang M. in 2021 [43] proposed a joint lossless image compression and encryption strategy based on the context-based adaptive lossless image codec (CALIC) and a hyper-chaotic system. Taking advantage of CALIC's characteristics, four encryption locations are designed to realize joint image compression and encryption: encryption of the predicted pixel values based on gradient-adjusted prediction (GAP), encryption of the final prediction error, encryption of the two lines of pixel values required by the prediction mode, and encryption of the entropy-coding file. Furthermore, to improve security, a new four-dimensional hyper-chaotic system and plaintext-related encryption based on table lookup are implemented. According to the test results, the proposed approach offers a high level of security and good lossless compression performance.
5. Studies description
In this section, a summary of the image compression techniques that were explained in the subject review is presented in Table 1.
Table 1: Studies description.

No. | Author | Compression Technique | Compression Method | Description
5 | A. Kumar and P. Singh [26] | Lossy | Block truncation coding | EBTC works on grayscale images to eliminate correlation and spatial redundancy between pixels of an image.
6 | Doaa Mohammed, Fatma Abou-Chadi [25] | Lossy | Block truncation coding | The original BTC and AMBTC algorithms were chosen; both employ a two-level quantizer and divide the image into non-overlapping blocks.
7 | Bhavana Patil, Asharani Patil [27] | Lossy | Sub-band coding | Uses the Haar transformation with the aim of minimizing computational requirements by applying different compression thresholds to the wavelet coefficients to improve the quality of the reconstructed image.
8 | Xiao Zhou, Yunhao Bai, and Chengyou Wang [28] | Lossy + Lossless | Hybrid method: VQ and DPCM | Based on the JPEG standard, established novel image-coding methods based on VQ and DCT.
9 | D. J. Ashpin Pabi, N. Puviarasan, P. Aruna [7] | Lossy | Transformations | In the proposed SVD-BPSO method, singular value decomposition (SVD) is used as the image compression technique, and butterfly particle swarm optimization is incorporated to find a better-quality reconstructed image via the entropy of the symbols.
10 | Ali H. Ahmed, Loay E. George [30] | Lossy | Transformation | Introduces a low-cost lossy color image compression. After converting RGB image data to the YUV color space, the chromatic bands U and V are down-sampled in the disseminating step.
11 | Kong et al. [29] | Lossy | Transform coding | Uses the NMF approach to compress and transfer images, based on the collaboration of nodes throughout the system.
12 | Zainab Ibrahim Abood [32] | Lossy | Transform coding | Color image compression using the (S), (W), and (M) techniques; compression parameters are derived for the third-level high-energy sub-band of each composite conversion.
13 | Kuldeep Mander and Himanshu Jindal [31] | Lossy | Block truncation coding | The approach combines the BTC and DWT algorithms with spline interpolation.
14 | Shigao Li and Liming Jia [34] | Lossless | Prediction coding | For predictive coding, first investigated a model of the coding bit rate under high bit rates in terms of the mean absolute difference (MAD) and the coding quantization parameters.
15 | Kumar R. et al. [33] | Lossy | Transformations | Singular value truncation and thresholding are performed to complete low-rank matrices.
16 | Mohd Rafi Lone [40] | Lossless | Bit plane coding | Proposed a lossless and perceptually lossless medical image compression technique.
17 | J. Wang et al. [38] | Lossless | Chain codes | The CDMD method was used to produce greater compression ratios with quality equivalent to the well-known JPEG technique, demonstrating that skeletons can be a viable lossy image encoding solution.
18 | Aljaz Jeromel and Borut Zalik [36] | Lossless | Chain codes | The image is first split into parts with similar colors; the acquired chain-code symbol sequence is then transformed and compressed using RLE, and finally an arithmetic encoder can be used for further compression.
19 | Marton Peto, Fabian Duvigneau and Sascha Eisentrager [37] | Lossless | — | Developed a novel method for computing stiffness and mass matrices in fictitious domain approaches that require the integration of discontinuous functions, known as the compressed adaptive integration scheme (C-AIS).
20 | Dhani Ariatmanto and Ferda Ernawan [35] | Lossy | Transform coding | In image watermarking, employs proposed scaling factors for selected DCT coefficients, following particular principles that result in reduced distortion.
21 | Mahmood Al-khassaweneh and Omar AlShorman [39] | Lossless | Run-length encoding | The Frei-Chen bases stage and the RLE stage are the two steps of the algorithm.
22 | Ragmi Mustafa, Basri Ahmedi and Kujtim Mustafa [18] | Lossy | Block truncation | Image compression using a neural network, which uses the backward error propagation algorithm to quickly obtain initial weight values for the SOFM algorithm.
23 | Y. Zhang et al. [41] | Lossy | Transform coding | A new wavelet-based CS framework is introduced, which uses a wavelet-based sensing approach to compress remotely sensed astronomical images.
24 | Ko, H.-H. [42] | Lossy and Lossless | Entropy encoding and MQ arithmetic coding | A strategy for reducing approximation artifacts while keeping the binary MQ arithmetic coder's probability estimation table is proposed.
25 | Svynchuk et al. [20] | Lossy | Fractal compression | Illustrates how this set of non-monotonic singular functions might be used to fractally encode images.
6. Conclusion
In previous years, image compression has become a dazzling and vibrant field in which many researchers have presented different types and techniques of image compression. Some of these works were discussed in this review, and the conclusion is that they are all useful in this area, which is constantly evolving and producing new research with better results; its main goal is to reduce the cost of transmission and storage. This paper also gave a studies description that abstracts the technique, method, and work of each piece of research in this survey, in order to make a valuable contribution for other scientists working on this topic.
References
[1] K. P. Chandresh K Parmar, “A REVIEW ON IMAGE COMPRESSION TECHNIQUES,” J. INFORMATION, Knowl. Res. Electr. Eng., vol. 2, no. 2,
pp. 281–284, 2013.
[2] S. Dhawan, “A Review of Image Compression and Comparison of its Algorithms,” International J. Electron. Commun. Technol., vol. 2, no. 1, pp. 22–26, 2011.
[3] R. Kaur and P. Choudhary, “A Review of Image Compression Techniques,” Int. J. Comput. Appl., vol. 142, no. 1, pp. 8–11, 2016.
[4] S. P. Amandeep Kaur, Sonali Gupta, Lofty Sahi, “COMPREHENSIVE STUDY OF IMAGE COMPRESSION TECHNIQUES,” J. Crit. Rev., vol. 7, no.
17, pp. 2382–2388, 2020.
[5] P. B. Khobragade and S. S. Thakare, “Image Compression Techniques- A Review,” Int. J. Comput. Sci. Inf. Technol., vol. 5, no. 1, pp. 272–
275, 2014.
[6] G. K. Kharate and V. H. Patil, “Color Image Compression Based On Wavelet Packet Best Tree,” Int. J. Comput. Sci. Issues, vol. 7, no. 2, pp.
31–35, 2010.
[7] D. J. A. Pabi, N. Puviarasan, and P. Aruna, “Fast Singular value decomposition based image compression using butterfly particle swarm optimization technique (SVD-BPSO),” Int. J. Comput. Eng. Res. Trends, vol. 4, no. 4, pp. 128–135, 2017.
[8] M. Singh, S. Kumar, S. Singh, and M. Shrivastava, “Various Image Compression Techniques: Lossy and Lossless,” Int. J. Comput. Appl., vol.
142, no. 6, pp. 23–26, 2016, doi: 10.5120/ijca2016909829.
[9] K. R. Žalik, B. Žalik, D. Mongus, and N. Luka, “Efficient chain code compression with interpolative coding,” Inf. Sci. (Ny)., vol. 439, pp. 39–
49, 2018.
[10] I. M. Pu, Fundamental Data Compression. Oxford, UK,: Butterworth-Heinemann, 2005.
[11] A. Rahman and M. Hamada, “Lossless Image Compression Techniques: A State-of-the-Art Survey,” Symmetry (Basel)., vol. 11, no. 10,
2019, doi: 10.3390/sym11101274.
[12] H. Kobayashi and L. R. Bahl, “Image Data Compression By Predictive Coding - 1. Prediction Algorithms.,” IBM J. Res. Dev., vol. 18, no. 2,
pp. 164–171, 1974, doi: 10.1147/rd.182.0164.
[13] H. Kikuchi, R. Abe, and S. Muramatsu, “Simple bitplane coding and its application to multi-functional image compression,” IEICE Trans.
Fundam. Electron. Commun. Comput. Sci., vol. E95-A, no. 5, pp. 938–951, 2012, doi: 10.1587/transfun.E95.A.938.
[14] B. Carpentieri, “Dictionary Based Compression for Images,” vol. 6, no. 3, pp. 187–195, 2012.
[15] M.-S. Ong, Entropy encoding in wavelet image compression, Representations, Wavelets, and Frames. Birkhäuser Boston: Springer, 2008.
[16] A. P. Singh and A. Kumar, “A review on latest techniques of image compression,” pp. 727–734, 2016.
[17] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd Ed. Prentice Hall, 2004.
[18] R. Mustafa, B. Ahmedi, and K. Mustafa, “Compression of Monochromatic and Multicolored Image with Neural Network,” vol. 9, no. 1, pp.
39–45, 2021, doi: 10.9734/AJRCOS/2021/v9i130213.
[19] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd Ed. Prentice Hall, 2004.
[20] O. Svynchuk, O. Barabash, J. Nikodem, R. Kochan, and O. Laptiev, “Image compression using fractal functions,” Fractal Fract., vol. 5, no. 2,
pp. 1–14, 2021, doi: 10.3390/fractalfract5020031.
[21] A. Jacquin, “Image coding based on a fractal theory of iterated contractive image transformations,” IEEE Trans. Image Process., vol. 1, no. 1, pp. 18–30, 1992, doi: 10.1109/83.128028.
[22] K. Somasundaram and S. Domnic, “Modified Vector Quantization Method for Image Compression,” World Acad. Sci. Eng. Technol. 19, vol.
19, pp. 222–227, 2008.
[23] G. Sadashivappa and K. V. S. Anandababu, “Evaluation of Wavelet Filters for Image Compression,” World Acad. Sci. Eng. Technol. 19, vol.
51, pp. 131–137, 2009.
[24] K. Somasundaram and S. Vimala, “Efficient Block Truncation Coding,” Int. J. Comput. Sci. Eng., vol. 2, no. 6, pp. 2163–2166, 2010.
[25] D. Mohammed and F. Abou-chadi, “Block Truncation Coding,” no. 3, pp. 9–13, 2011.
[26] A. Kumar and P. Singh, “Enhanced Block Truncation Coding for Gray Scale Image,” Int. J. Comp. Tech. Appl, vol. 2, no. 3, pp. 525–530,
2011.
[27] B. Patil and A. Patil, “Image Compression Using HAAR Wavelet Transform , DCT and Sub-Band Coding,” Int. J. Ethics Eng. Manag. Educ.,
vol. 1, no. 4, pp. 244–249, 2014.
[28] X. Zhou, Y. Bai, and C. Wang, “Image Compression Based on Discrete Cosine Transform and Multistage Vector Quantization,” Int. J. Multimed. Ubiquitous Eng., vol. 10, no. 6, pp. 347–356, 2015, doi: 10.14257/ijmue.2015.10.6.33.
[29] S. Kong, L. Sun, C. Han, and J. Guo, “An image compression scheme in wireless multimedia sensor networks based on NMF,” Inf., vol. 8,
no. 1, pp. 1–14, 2017, doi: 10.3390/info8010026.
[30] A. H. Ahmed and L. E. George, “The Use of Wavelet, DCT & Quadtree for Images Color Compression,” Iraqi J. Sci., vol. 58, no. 1C, pp. 550–561, 2017.
[31] K. Mander and H. Jindal, “An Improved Image Compression- Decompression Technique Using Block Truncation and Wavelets,” Image,
Graph. Signal Process., vol. 8, pp. 17–29, 2017, doi: 10.5815/ijigsp.2017.08.03.
[32] Z. I. Abood, “Composite Techniques Based Color Image Compression,” J. Eng., vol. 23, no. 3, pp. 80–93, 2017.
[33] R. Kumar, U. Patbhaje, and A. Kumar, “An efficient technique for image compression and quality retrieval using matrix completion,” J.
King Saud Univ. - Comput. Inf. Sci., no. xxxx, 2019, doi: 10.1016/j.jksuci.2019.08.002.
[34] S. Li and L. Jia, “Rate Allocation with Near-optimal Rate-distortion Performance for JPEG-LS,” Tenth Int. Conf. Signal Process. Syst., vol.
11071, p. 110710M, 2019, doi: 10.1117/12.2521483.
[35] D. Ariatmanto and F. Ernawan, “Adaptive scaling factors based on the impact of selected DCT coefficients for image watermarking,” J.
King Saud Univ. - Comput. Inf. Sci., no. xxxx, 2020, doi: 10.1016/j.jksuci.2020.02.005.
[36] A. Jeromel and B. Zalik, “An efficient lossy cartoon image compression method,” Multimed. Tools Appl., vol. 79, pp. 433–451, 2020,
[Online]. Available: https://doi.org/10.1007/s11042-019-08126-7.
[37] M. Petö, F. Duvigneau, and S. Eisenträger, “Enhanced numerical integration scheme based on image-compression techniques: application to fictitious domain methods,” Adv. Model. Simul. Eng. Sci., vol. 7, no. 21, 2020, doi: 10.1186/s40323-020-00157-2.
[38] J. Wang, M. Terpstra, J. Kosinka, and A. Telea, “Quantitative evaluation of dense skeletons for image compression,” Inf., vol. 11, no. 5, pp.
1–18, 2020, doi: 10.3390/INFO11050274.
[39] M. Al-khassaweneh and O. AlShorman, “Frei-Chen bases based lossy digital image compression technique,” Appl. Comput. Informatics, 2020, [Online]. Available: https://www.emerald.com/insight/2210-8327.htm.
[40] M. R. Lone, “A high speed and memory efficient algorithm for perceptually-lossless volumetric medical image compression,” J. King Saud
Univ. - Comput. Inf. Sci., no. xxxx, 2020, doi: 10.1016/j.jksuci.2020.04.014.
[41] Y. Zhang, J. Jiang, and G. Zhang, “Compression of remotely sensed astronomical image using wavelet-based compressed sensing in deep
space exploration,” Remote Sens., vol. 13, no. 2, pp. 1–16, 2021, doi: 10.3390/rs13020288.
[42] H. H. Ko, “Enhanced binary mq arithmetic coder with look-up table,” Inf., vol. 12, no. 4, 2021, doi: 10.3390/info12040143.
[43] M. Zhang, X. Tong, Z. Wang, and P. Chen, “Joint Lossless Image Compression and Encryption Scheme Based on CALIC and Hyperchaotic
System,” Entropy, vol. 23, no. 8, p. 1096, 2021, doi: 10.3390/e23081096.
[44] A. BRISAM and Q. MOSA, “Compression Techniques for the JPEG Image Standard by Using Image Compression Algorithm”, JQCM, vol. 13,
no. 2, pp. Comp Page 1 -, Apr. 2021.
[45] A. Noori Mohammed and A. Falih, “A Proposed Method for Image Compression Using Discrete Wavelet Transform and Absolute Moment
Block Truncation Coding”, JQCM, vol. 3, no. 1, pp. 297-305, Sep. 2017.
[46] A. Abdulelah, S. Abed Hamed, M. RASHEED, S. SHIHAB, T. RASHID, and M. Kamil Alkhazraji, “The Application of Color Image Compression
Based on Discrete Wavelet Transform”, JQCM, vol. 13, no. 1, pp. Comp Page 18 -, Feb. 2021.
[47] A. M. Hadi and A. A. Abdulrahman, “Multi Discrete Laguerre Wavelets Transforms with The Mathematical aspects”, JQCM, vol. 12, no. 1,
pp. Comp Page 26-37, Mar. 2020.