A New Algorithm for Image Fusion to Reduce Computational Time



Introduction
Image fusion is the process of combining two or more images into a single image that is clearer and more informative than any of the inputs. The fused image is better suited to both human and computer vision. In remote sensing, image fusion combines data from multiple sources through image processing; its main purposes are to improve image definition, enhance the imagery, and support classification. Image fusion has many applications, such as object identification [1]. Several methods exist for fusing images: the most basic is high-pass filtering, but wavelet-transform techniques and Laplacian pyramid decomposition are now the most widely used.

Depending on the level at which it is performed, fusion is of three types: pixel-level fusion, feature-level fusion, and decision-level fusion. Pixel-level fusion merges the images pixel by pixel and is used to combine different parameters. Feature-level fusion is used to recognize objects from various data sources. In decision-level fusion, information is extracted from each image separately and a decision is then made. Pixel-level fusion is the most widely used because of its simplicity, linearity, and low complexity: it fuses the images directly, pixel by pixel, can detect undesirable noise, and gives a truer image with more detailed information than the other methods.

The basic principle of image fusion is to retain only the required information. For this reason, wavelet transforms are used to split the image into low- and high-frequency components so that unneeded high-frequency components can be discarded. Each wavelet has its own advantages and disadvantages, and the wavelet transform can be applied over several decomposition levels, which reduces unwanted information to a large extent [2]. Fusing images also reduces the amount of data to store, so the memory requirement decreases as well.
The main problem in fusing two or more images, however, is computational time: satellite and other high-resolution images require heavy processing. This paper focuses entirely on reducing that computational time. The proposed algorithm proves efficient in reducing both the computational time and the complexity.

Wavelet Transform
The wavelet transform converts an image from the time (spatial) domain to the frequency domain, so that both time and frequency information can be analysed [3]. A wavelet is a waveform of limited duration with zero average value, i.e. a signal that dies out after a particular time. The wavelet transform is used to decompose an image into its frequency components. First a row operation is applied to the image, followed by a decimation factor for down-sampling; column operations then yield the approximation subband LL and, similarly, the detail subbands LH, HL, and HH [4].
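The row/column decomposition described above can be illustrated with a single-level 2D Haar transform (a minimal sketch using the Haar wavelet as the simplest concrete case; the paper does not fix a particular wavelet):

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar wavelet decomposition.

    Rows are low-/high-pass filtered and down-sampled by 2, then the
    same is done along columns, yielding the approximation subband LL
    and the detail subbands LH, HL, HH.
    """
    img = np.asarray(img, dtype=float)
    # Row operation: sum (low-pass) and difference (high-pass) of
    # adjacent pixels; 1/sqrt(2) makes the transform orthonormal.
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    # Column operation on each half gives the four subbands.
    LL = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    LH = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    HL = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    HH = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return LL, LH, HL, HH
```

On a constant image the detail subbands LH, HL, and HH come out zero, confirming that all the information ends up in the approximation LL.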

Fusion Algorithms
After the wavelet transform is applied, fusion algorithms are used. For fusion, three methods are common: averaging, maximum pixel replacement, and minimum pixel replacement. The averaging method simply takes the average of corresponding pixels of the two images; maximum pixel replacement selects, at each position, the pixel with the greater value, and minimum pixel replacement likewise selects the smaller one. Maximum pixel replacement is suitable only for images with white (bright) shades, while minimum pixel replacement is suitable only for images with dark shades [5].
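The three fusion rules can be written compactly as element-wise operations on the (transformed) images; a minimal sketch, with the function name `fuse` chosen here for illustration:

```python
import numpy as np

def fuse(a, b, rule="average"):
    """Pixel-wise fusion of two equally sized coefficient arrays.

    'average' takes the mean of corresponding values;
    'max' keeps the larger value (suits bright/white-shade images);
    'min' keeps the smaller value (suits dark-shade images).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    if rule == "average":
        return (a + b) / 2
    if rule == "max":
        return np.maximum(a, b)
    if rule == "min":
        return np.minimum(a, b)
    raise ValueError(f"unknown fusion rule: {rule}")
```

The same routine works whether it is given raw pixel blocks or wavelet subbands, since all three rules operate element by element.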

Proposed Algorithm
As the name "blocking" suggests, this algorithm operates on blocks of the images instead of the complete image. First, each image is divided into blocks of size m × m; each block is then treated as a separate image, giving n blocks in total. The DWT and the fusion rules are then applied block by block: each block of the first image is fused with the corresponding block of the second image. This reduces computational time and complexity, since m × m pixels are processed at a time instead of a single pixel. The steps of the blocking algorithm are given below:

I. Take two input images of size m x n; both images are of the same scene.

II. Convert the images to greyscale, as the algorithm operates on greyscale images only.

III. Resize both images so that their dimensions are multiples of 8 or 16.

IV. Divide both images into blocks of size 8 x 8 or 16 x 16 and store the blocks in cells.

V. Compute the mean square error (MSE) between each pair of corresponding blocks of the two images.

VI. Apply a threshold value to the calculated MSE of each block.

VII. Check the MSE condition: if the MSE is less than the threshold, no fusion is needed for that block.

VIII. If the MSE is greater than the threshold, apply the wavelet transform to that particular block pair.

IX. Apply the fusion method to each transformed block.

X. Take the inverse wavelet transform of the fused blocks.

XI. Store each block back into its cell.

XII. Reassemble the complete image from the cells.

XIII. The result is the final fused image, which is more informative than the input images.
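The steps above can be sketched as follows. This is a minimal outline under stated assumptions: the per-block DWT → average-fusion → inverse-DWT step is folded into a direct average, which is equivalent for the averaging rule because an orthonormal wavelet transform is linear; the block size and MSE threshold are tunable parameters, and the specific values here are illustrative only.

```python
import numpy as np

def block_fuse(img1, img2, block=8, threshold=100.0):
    """Blockwise fusion with an MSE threshold.

    Block pairs whose MSE is below the threshold already agree and are
    copied unfused; only the remaining blocks are fused (here by
    averaging, standing in for DWT -> fuse -> IDWT, which it equals
    for the averaging rule since the transform is linear).
    """
    img1 = np.asarray(img1, float)
    img2 = np.asarray(img2, float)
    h, w = img1.shape
    # Step III: crop both images to a multiple of the block size.
    h, w = h - h % block, w - w % block
    img1, img2 = img1[:h, :w], img2[:h, :w]
    out = img1.copy()
    fused_blocks = 0
    for r in range(0, h, block):
        for c in range(0, w, block):
            b1 = img1[r:r+block, c:c+block]
            b2 = img2[r:r+block, c:c+block]
            mse = np.mean((b1 - b2) ** 2)   # step V
            if mse >= threshold:            # steps VII-IX: fuse only
                out[r:r+block, c:c+block] = (b1 + b2) / 2
                fused_blocks += 1
    return out, fused_blocks
```

The time saving comes from the threshold test: blocks that already agree skip the transform and fusion entirely, so only a fraction of the image incurs the full per-block cost.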

[Figure: decomposition of a 2D signal by row and column operations]

Results
where ft = total blocks, fp = unfused blocks, and fs = fused blocks (so ft = fp + fs).

Conclusion
From the theory and results above, it can be concluded that the proposed algorithm, which works on blocks of the images, is well suited to reducing the computational time for high-resolution images. The image fused by the blocking algorithm can be of slightly lower quality than one produced by whole-image fusion, but the loss is small and can be reduced further either by using a higher-level transform or by increasing the block size. The quality of the fused image can therefore be improved further, while the computational-time problem is reduced to a large extent: the proposed method takes only about 1 msec, whereas existing methods take about 66 sec.