Color Image Compression by Using Absolute Moment Block Truncation Coding and Hadamard Transform

This paper investigates image data compression as it applies to different fields of image processing, with the aim of reducing the volume of pictorial data that must be stored or transmitted. The research modifies a method for image data compression based on the two-component code. In this coding technique, the image is partitioned into regions of slowly varying intensity; the contours separating the regions are coded by the Hadamard transform, while the remaining image regions are coded by Absolute Moment Block Truncation Coding (AMBTC).


INTRODUCTION
Remote sensing sensors generate useful information about climate and the Earth's surface and are widely used in resource management, agriculture, and environmental monitoring. Compression of remote sensing data aids long-term storage and transmission [1]. Image data compression addresses the problem of reducing the amount of data required to represent a digital image. It is a process intended to yield a compact representation of an image, thereby reducing the storage and transmission requirements, both of which are relatively expensive; these factors establish the need for image compression. The basic idea of image data compression is to remove the redundancy present within an image so as to reduce its size without affecting its essential information [2].
Image compression is an important task in digital image processing. Applications of image compression can be seen, for example, in multimedia, image databases, data communications, and remote sensing. The common factor in all of these applications is that, without compression, the data to be handled is too large. In image databases, many images must be stored and retrieved, and in data communication applications, the image must be small enough to be transferred quickly. Compression is achieved by the removal of one or more of the three basic data redundancies [3]:


1. Coding redundancy, which is present when less-than-optimal code words are used.
2. Inter-pixel redundancy, which results from correlations between the pixels of an image.
3. Psychovisual redundancy, which is due to data that are ignored by the human visual system [4].
Image compression techniques reduce the number of bits required to represent an image by taking advantage of these redundancies. An inverse process called decompression (decoding) is applied to the compressed data to obtain the reconstructed image [2]. Fig. (1) represents the block diagram of image compression, in which f(x, y) is a digital image. It is compressed using a suitable compression algorithm and then transmitted over a channel to the receiver. At the receiver end, the data is retrieved from the storage device for decompression. The decompressed data yields f^(x, y), where f(x, y) is the original image and f^(x, y) is the reconstructed image [5]. A digital image can be defined as a rectangular array of dots, or pixels, arranged in M rows and N columns; hence, a digital image can be represented as an (M × N) array [6], where (0, 0) denotes the pixel at the top-left corner of the array and (M − 1, N − 1) the pixel at the bottom-right corner.
A grey-scale image, also referred to as a monochrome image, contains values ranging from 0 to 255, where 0 is black, 255 is white, and values in between are shades of grey. In color images, each pixel of the array is constructed by combining three different channels: Red (R), Green (G), and Blue (B). Each channel holds a value from 0 to 255. In a color image, each pixel is stored in three bytes, while in a grey image it is represented by only one byte; therefore, color images take three times the size of grey images [7].
Image data compression techniques are broadly classified into two categories, depending on whether or not an exact replica of the original image can be reconstructed from the compressed image: lossless techniques and lossy techniques. In lossless techniques, the original data and the data after reconstruction are exactly the same, because the compression and decompression algorithms are exact inverses of each other: no part of the data is lost in the process. Redundant data is removed during compression, and exactly the inverse process is carried out during decompression. These techniques are also called noiseless, since they do not add noise to the signal (image). In contrast, if lossy compression is applied to an image, the image cannot be recovered exactly as it was before compression: when the compressed image is decoded, it does not give back the original image, because data has been lost. Hence lossy compression is not a good choice for critical data, such as textual data; it is most useful for Digitally Sampled Analog Data (DSAD) [8].
The remainder of this paper is organized as follows. Section 2 gives a brief background on image compression techniques. Absolute Moment Block Truncation Coding and the Hadamard transform are presented in Section 3. Our proposed method is presented in Section 4. Section 5 explains the simulations of our proposed method. Finally, conclusions and discussion are presented in Section 6.

Related Works
A large number of data compression algorithms have been developed and used over the years. Some are of general use, i.e., they can compress files of different types (text files, image files, video files, etc.); others are developed to compress a particular type of file efficiently, according to the representation form of the data on which compression is performed. Below we review some of the literature in this field.
Howard et al. (1991) presented lossless image compression with four modular components: pixel sequence, prediction, error modeling, and coding. They used two methods that clearly separate the four modular components, called the Multi-Level Progressive method (MLP) and the Partial Precision Matching Method (PPMM), both involving linear prediction, modeling of prediction errors by estimating the variance of a Laplace (symmetric exponential) distribution, and coding using arithmetic coding applied to precomputed distributions [9].
Kapde et al. (2012) investigated image compression with two selected algorithms, Block Truncation Coding (BTC) and Absolute Moment Block Truncation Coding (AMBTC), and performed a comparative study. BTC and AMBTC are lossy image compression algorithms that are simple and easy to implement. The basic BTC and AMBTC algorithms are lossy fixed-length compression methods that use a Q-level quantizer to quantize a given region of the image. Both techniques divide the image into non-overlapping blocks; they differ in the way the quantization levels are selected in order to remove redundancy [10].
Franti et al. (1994) studied BTC and its improvements by dividing the algorithm into three separate tasks: performing quantization, coding the quantization data, and coding the bit plane. Each phase of the algorithm was analyzed separately; on the basis of the analysis, a combined BTC algorithm was proposed and compared to the standard JPEG algorithm [7]. In the Improved AMBTC (IAMBTC) method, the number of quantizers, and hence the number of bits used to represent the quantizing levels, was raised. The input image is divided into blocks of size 4 × 4 pixels; for each block, the high mean and the low mean, called the quantizers, are computed as in AMBTC, and while encoding, the average mean (aMean) of the high mean (hMean) and low mean (lMean) is calculated [11].
Prameelamma et al. (2012) proposed a lossless, compressed-domain steganography technique for AMBTC-compressed images based on the interchange of the two quantization levels. The hiding capacity is independent of the compressed codes. Although reversible data hiding can recover the original image after the extraction of the secret data, the embedding distortion needs to be kept as low as possible in order to remain perceptually invisible. The process of data embedding does not introduce any image distortion, which is the best case for steganography [12].
Shashikumar and Santosh (2013) proposed AMBTC for gray and color image compression, respectively; this compression technique reduces computational complexity and achieves the optimum minimum mean square error and PSNR. It is an improved version of BTC, obtained by preserving absolute moments. AMBTC is an encoding technique that preserves the spatial details of digital images while achieving a reasonable compression ratio [13,14].
In comparison with the previous works explained above, this paper proposes a new technique for compressing three color images using Absolute Moment Block Truncation Coding (AMBTC) and the Hadamard transform. Our method achieves good image quality together with a high compression ratio for different block sizes.

Implementing of Absolute Moment Block Truncation Coding
Lema and Mitchell [15] presented a simple and fast variant of BTC named AMBTC. In this method, the higher mean x_H and lower mean x_L are preserved instead of the mean and standard deviation values. Pixels in an image block are then classified into two groups of values: one group (the higher range) comprises the gray levels that are greater than or equal to the block mean, and the remaining gray levels are put into the other group (the lower range). The mean of the image block and the lower and higher ranges are calculated using Eqs. (1), (2), and (3), respectively, in [5].
Here, m is the number of pixels in each block and x_i is the original value of the i-th pixel of the block. If a pixel value is greater than or equal to the mean, it is represented by "1"; if it is less than the mean, it is represented by "0". The collection of 1s and 0s for each block is called a bit plane.
Further, q is the number of pixels whose gray levels are greater than or equal to the mean, and (m − q) is the number of pixels whose gray levels are less than the mean.
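The bodies of Eqs. (1)-(3) are not reproduced in the extracted text. As a hedged reconstruction, the standard AMBTC formulas (with m the number of pixels per block, x_i the pixel values, and q the number of pixels at or above the block mean) are:

```latex
\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i \qquad (1)
\qquad
x_L = \frac{1}{m-q}\sum_{x_i < \bar{x}} x_i \qquad (2)
\qquad
x_H = \frac{1}{q}\sum_{x_i \ge \bar{x}} x_i \qquad (3)
```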
In this section, color image data compression using the absolute moment block truncation coding (AMBTC) scheme is presented, giving insight into the performance of AMBTC for various block sizes. An improvement over BTC can be obtained by preserving absolute moments; this method is called absolute moment block truncation coding (AMBTC). Both computational speed and reconstructed image quality are improved by preserving absolute moments instead of standard moments. The new method has the same general characteristics as BTC, including low storage requirements and an extremely simple coding and decoding technique.
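As an illustration of the AMBTC steps described above, the following sketch encodes and reconstructs a single block (function names are ours, not the paper's):

```python
import numpy as np

def ambtc_encode(block):
    """Encode one image block with AMBTC.

    Returns the low mean, high mean, and the bit plane
    (1 where a pixel is >= the block mean, 0 otherwise)."""
    block = block.astype(np.float64)
    mean = block.mean()
    bitplane = block >= mean
    q = bitplane.sum()
    m = block.size
    # Guard against uniform blocks, where one group would be empty.
    high = block[bitplane].mean() if q > 0 else mean
    low = block[~bitplane].mean() if q < m else mean
    return low, high, bitplane

def ambtc_decode(low, high, bitplane):
    """Reconstruct the block from the two quantizers and the bit plane."""
    return np.where(bitplane, high, low)

block = np.array([[121, 114,  56,  47],
                  [ 37, 200, 247, 255],
                  [ 16,   0,  12, 169],
                  [ 43,   5,   7, 251]])
low, high, bp = ambtc_encode(block)
recon = ambtc_decode(low, high, bp)
```

Only the two means and the bit plane are stored per block, which is the source of AMBTC's low storage requirement.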

Implementing of Hadamard Transform
The elements of the basis vectors of the Hadamard transform take only the binary values ±1 and are, therefore, well suited for digital signal processing. A Hadamard transform matrix is an N × N matrix, where N = 2^n, n = 1, 2, 3, .... These matrices can be easily generated from the core matrix, as shown in Eq. (4).
The Kronecker product recursion is represented by Eq. (5). As an example, for n = 3 the Hadamard matrix becomes an 8 × 8 matrix.
The inverse transform is given by Eq. (8).
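The bodies of Eqs. (4), (5), and (8) are likewise missing from the extracted text; a standard Sylvester-type reconstruction consistent with the surrounding description (unnormalized matrices, for which H_N H_N = N I, and forward transform T = (1/N) H_N X H_N for an N × N block X) is:

```latex
H_2 = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \quad (4)
\qquad
H_{2N} = H_2 \otimes H_N \quad (5)
\qquad
X = \frac{1}{N}\, H_N T H_N \quad (8)
```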
In this section, three image samples have been selected to assess the performance of the described technique. A comparative study has been applied to the test images with different block sizes: 4×4, 8×8, and 16×16.
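The Kronecker construction and the 2-D transform pair can be sketched numerically as follows (a minimal illustration; function names are ours):

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester construction H_{2N} = H_2 (kron) H_N; n must be a power of 2."""
    h2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.kron(h2, h)
    return h

def hadamard_2d(block):
    """Forward 2-D Hadamard transform of an N x N block."""
    n = block.shape[0]
    h = hadamard_matrix(n)
    return h @ block @ h / n

def inverse_hadamard_2d(coeffs):
    """Inverse transform; since H H = N I, the same kernel inverts."""
    n = coeffs.shape[0]
    h = hadamard_matrix(n)
    return h @ coeffs @ h / n
```

Because the matrix entries are only ±1, the transform needs no multiplications in a fixed-point implementation, which is why it suits fast block coding.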

Proposed Method
In order to develop an efficient coding method for digital images, the characteristics of digital images and the wide range of the image data must be taken into consideration [18]. Consequently, the inter-pixel correlation among adjacent pixels should be considered. In fact, digital images often consist of different regions with different degrees of detail; therefore, coding all image regions with the same number of bits is not an efficient coding method, because it produces high distortion in highly detailed regions. To overcome this problem, many adaptive coding techniques have been introduced; our proposed method is among the most successful of these techniques.
The method adopted in our work is a relatively new image compression technique in which the image is partitioned into regions of slowly varying intensity and the edges (contours) separating those regions. The actual performance of this method depends highly on the edge detection algorithm used.
For coding purposes, the edge detection algorithm should have the property that the contours (edges) are smooth, so that they can be coded efficiently. At high compression ratios, the new method yields better subjective image quality than block transform coding (BTC) methods, because the objectionable block distortion is avoided [19].
For this reason, this method is ideal for very low bit-rate coding and progressive transmission of images, and it was considered a possible candidate for inclusion in the MPEG-4 standard [20].
In what follows, we explain the procedures involved in our new method.

Compression Method procedures
A. Edge Detection.
The first step in most image processing applications is to detect sharp transitions, called edges, and then connect these edges to outline the desired boundaries. In short, extracting edge information and constructing boundaries by connecting edges and line segments can be considered the essential process in most pictorial analysis and pattern recognition problems, since subsequent measurements of shape, size (area or perimeter), and texture can then be taken [21].
Moreover, image data compression by separating the image into two parts, i.e., the edges and all the rest, can also be achieved as a process subsequent to edge detection. Two edge detection techniques were selected and adopted in this paper: the Sobel operator technique and the Laplacian operator technique.
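A minimal sketch of the Sobel step, producing the binary edge image used later for block classification (the threshold value is illustrative, not taken from the paper):

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """Return a binary edge map from the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float64)
    ky = kx.T
    img = img.astype(np.float64)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Convolve the two 3x3 kernels over the interior pixels.
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    mag = np.hypot(gx, gy)
    return (mag >= thresh).astype(np.uint8)
```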

B. Image Block Subdividing Step.
The next step in our coding algorithm is to group the binary edge image elements obtained in the first step into equal-size blocks, each considered an individual image region. In the present work, different block sizes (N × N) are used, i.e., 4×4, 8×8, and 16×16. A suitable threshold value is then compared with the number of edge points in each block to decide whether the block is an edge region, as follows [11]:

1. If the number of edge points ≥ Th, then the block is an edge region.
2. If the number of edge points < Th, then the block is not an edge region.
Experiments showed that a larger threshold value results in a significant loss of edge information, which must be avoided.
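The subdividing and threshold test above can be sketched as follows (block size and threshold are illustrative values):

```python
import numpy as np

def classify_blocks(edge_map, block_size=4, thresh=3):
    """Tile a binary edge map and flag each tile as an edge region
    when its edge-pixel count reaches `thresh`."""
    h, w = edge_map.shape
    rows, cols = h // block_size, w // block_size
    flags = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            tile = edge_map[r * block_size:(r + 1) * block_size,
                            c * block_size:(c + 1) * block_size]
            flags[r, c] = tile.sum() >= thresh
    return flags
```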

C. The Encoding Process:
Up to this point, the segmented image regions are classified into two types: edge regions and rest regions. Our main aim was to develop a coding scheme based on segmenting the digital image. In the present work, the encoding process uses two image compression techniques to encode the same digital image: Hadamard transform coding is applied to the edge regions, while absolute moment block truncation coding (AMBTC) is used to encode the rest regions. Both compression techniques are applied to the original image; the binary (edge-detected) image is used only to decide whether a region is an edge region or a rest region. Samples of three images with different block sizes have been chosen to demonstrate the coding results.
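The routing decision at the heart of the encoder can be sketched as follows. This shows only the per-block choice between the two coders; the paper's bit allocation and coefficient quantization details are not reproduced:

```python
import numpy as np

def encode_block(block, is_edge):
    """Route one block to the Hadamard coder (edge region)
    or the AMBTC coder (rest region)."""
    block = block.astype(np.float64)
    if is_edge:
        # Edge region: 2-D Hadamard transform coding.
        n = block.shape[0]
        h2 = np.array([[1.0, 1.0], [1.0, -1.0]])
        h = h2
        while h.shape[0] < n:
            h = np.kron(h2, h)
        return "hadamard", h @ block @ h / n
    # Rest region: AMBTC quantizers and bit plane.
    mean = block.mean()
    bp = block >= mean
    high = block[bp].mean() if bp.any() else mean
    low = block[~bp].mean() if (~bp).any() else mean
    return "ambtc", (low, high, bp)
```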

D. Decoding procedures:
In the decoder, the coding processes are inverted in order to recover the original image. We demonstrate and discuss the results obtained with the described encoding algorithm in the next section.

Image quality measurement.
There are several kinds of parametric measures of image quality, which play an important role in various image-processing applications. They are broadly classified into two classes: subjective measures and objective measures. In this work we concentrate on objective measures [2][3][4] such as Peak Signal-to-Noise Ratio (PSNR), Weighted Peak Signal-to-Noise Ratio (WPSNR), Bit Rate (BR), Bits Per Pixel (BPP), and the Structural Similarity Index (SSIM). In our proposed scheme we use only PSNR, the most common objective measure of image quality [12].
A high PSNR value implies good compression, because it means a high signal-to-noise ratio; the signal represents the original image, while the noise represents the reconstruction error. A compression scheme with a high PSNR can therefore be recognized as the better one, and such performance measures have been established to compare different compression algorithms [10]. Practically, let us denote the pixels of the original image by P_i and the pixels of the reconstructed image by Q_i (where 1 ≤ i ≤ n); we first define the mean square error (MSE) between the two images as in Eq. (9) in [13].
It is the mean of the squared differences between the pixel values of the original and reconstructed images. The root mean square error (RMSE) is defined as the square root of the MSE, as in Eq. (10) in [14]. Hence, the PSNR can be defined by Eq. (11):

PSNR = 20 log10(255 / RMSE) (11)

The compression ratio may be defined as the ratio of the size of the original image to that of the compressed image. Suppose P and Q are two units of a set of data representing the same information; the compression ratio, CR, is given by Eq. (12) in [13].
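The quality measures above can be computed directly; a short sketch (the peak value 255 assumes 8-bit channels, as in the images described earlier):

```python
import numpy as np

def mse(p, q):
    """Mean square error between original p and reconstruction q (Eq. 9)."""
    p = np.asarray(p, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    return np.mean((p - q) ** 2)

def psnr(p, q, peak=255.0):
    """Peak signal-to-noise ratio in dB (Eqs. 10-11)."""
    m = mse(p, q)
    if m == 0:
        return float("inf")  # identical images
    return 20.0 * np.log10(peak / np.sqrt(m))

def compression_ratio(original_size, compressed_size):
    """CR = size of original data / size of compressed data (Eq. 12)."""
    return original_size / compressed_size
```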

Simulation Result
Simulations were carried out in MATLAB R2011a (version 7.12.0) for our proposed method as explained in the previous section; we computed the MSE, PSNR, and CR and show the reconstructed images. Our method uses three color images: the first and second are of Baghdad, the capital of Iraq, and the third is of Stockholm, the capital of Sweden.

Conclusions
The objective of this work was to present a new digital compression method that is fast and yields as high a compression ratio as possible with little distortion. The digital processing techniques covered in this work range from extracting edge information to image data compression, including encoding the edges with one compression technique and the remaining image regions with another. We began by looking for an edge detector that satisfies the above requirement, i.e., one that can correctly outline the different regions within an image.
This work included a demonstration of some well-known compression methods usually encountered in the literature. Among these were the absolute moment block truncation coding method (AMBTC) and transform coding techniques (based on the Hadamard transform). These coding techniques were adopted in our new coding method to improve the final reconstructed image. The decoding results showed good image quality for small block sizes, with lower compression ratios, while higher compression ratios were obtained with larger block sizes. The method yields higher compression ratios than either the block truncation coding method or the Hadamard transform coding method used alone. The obtained results showed that our coding technique yields higher compression ratios and, at the same time, better quantitative and qualitative measures on color images; this is because the distortion introduced by each technique is normally distributed randomly across the color bands (RGB).