IJRIT International Journal of Research in Information Technology, Volume 2, Issue 1, January 2014, Pg: 25-33

International Journal of Research in Information Technology (IJRIT) www.ijrit.com

ISSN 2001-5569

Review of Image Fusion Techniques

Shriniwas Budhewar(1), P. L. Paikrao(2)

(1) M-Tech Student, Electronics and Telecommunication, Government College of Engineering Amravati, Maharashtra, India
[email protected]

(2) Assistant Professor, Electronics and Telecommunication, Government College of Engineering Amravati, Maharashtra, India
[email protected]

Abstract

Digital image processing is a rapidly growing area in the field of engineering, and various new technologies have been emerging for image acquisition, registration and processing. There is an increasing demand for clearer and more realistic images that contain every fine aspect of the captured scene. Depending upon the requirement, various operations such as image de-noising, filtering, feature extraction, segmentation and compression are performed based on mathematical and logical algorithms. Image fusion is one such operation: it is the technique of merging two or more images to form a single image, such that the resulting image contains much more of the information of the scene. Mathematical transforms, viz. the wavelet and curvelet transforms, are used in fusion techniques. This paper highlights the concept of image fusion and its different techniques. In addition, the techniques are compared based on various performance metrics.

Keywords: Image Fusion, Spatial and Transform Domain Techniques, Performance Metrics.

1. Introduction

Image fusion is the process of combining two or more images of the same scene, captured with different focus planes, into a single image that carries more detailed information about the scene [6]. It is a technique that results in a sparse representation and clearer pictures. Generally, image fusion is employed to combine a number of images of the same scene or object, which is known as 'multifocus image fusion'. The need for this technique arises from the limited depth of field of optical imaging devices: a device can focus on only one object of the scene at a time, so the other objects appear blurred. Multifocus image fusion solves this problem by capturing the scene in several images, each with a different object in focus, and then merging them into a synthetic image that contains every fine aspect of the scene [13]. Besides combining images of the same scene, one can also combine images of different scenes.

Generally, image fusion techniques can be classified as [5][7]:
1. Spatial domain based
2. Transform domain based

Details of these are discussed in the second and third sections respectively. Another way of classifying the techniques is based on the processing level at which fusion takes place, as shown in fig. 1:
1. Signal level
2. Feature level
3. Decision level


In signal-level techniques, fusion takes place at the pixel level, i.e. on the most basic elements of the image; averaging and maximum select are examples of this approach. In feature-level fusion, parts of the images or objects are extracted and fused, and the fused objects are then assembled to form the complete image. Decision-level image fusion is performed at the highest level: it represents the fusion of probabilistic decision information obtained by local decision-makers operating on the results of feature-level processing of image data produced from individual sensors [2][14].

Figure 1. Classification of fusion techniques based on processing level

Including the introduction, this paper contains five sections. The second section deals with spatial domain techniques, whereas transform domain methods are discussed in the third section. Performance metrics and a comparison are included in the fourth section, and the paper is concluded at the end.

2. Spatial Domain Techniques

In these techniques, fusion takes place in the spatial domain; they operate directly on the intensity values of the images. Compared with transform domain techniques they are simple. Some of the popular techniques are discussed below.

2.1 Methods based on Arithmetic functions

A. Simple maximum select: A well-focused region of an image contains high intensity values, and this principle can be used for fusing two images. The two images are compared pixel by pixel with respect to their intensity values, and the greater value is assigned to the corresponding pixel of the fused image. Mathematically,

F(i, j) = max{X(i, j), Y(i, j)}                (1)

This is not a precise method of fusion; it can be regarded only as an approximation.

B. Simple averaging: It is a well-documented fact that the fused result can be improved by taking the simple arithmetic average of the pixel intensity values of all the registered images, which is given by:

Z(i, j) = (1/N) Σ_{n=1..N} P_n(i, j)                (2)


In this method, however, all pixels are given equal weights, so edges and boundaries, which carry more information, are treated the same as redundant regions. This drawback can be reduced by using a weighted aggregation of the images, given by:

Z(i, j) = [ Σ_{n=1..N} W_n(i, j) I_n(i, j) ] / [ Σ_{n=1..N} W_n(i, j) ]                (3)

where W_n(i, j) is the weight factor and I_n(i, j), n = 1, ..., N, are the images to be fused. In spite of this modification, drawbacks remain: since the method works purely on individual pixels, unwanted changes appear in some pixels that should not be affected at all.
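As an illustration of these arithmetic rules, the following is a minimal sketch in Python with NumPy (an assumed toolchain, not one used in the paper) of maximum select, simple averaging and weighted averaging applied to pre-registered grayscale images; the function names and array shapes are illustrative assumptions.

```python
import numpy as np

def fuse_max(x, y):
    """Simple maximum select, eq. (1): keep the larger intensity at each pixel."""
    return np.maximum(x, y)

def fuse_average(images):
    """Simple averaging, eq. (2): pixel-wise mean of N registered images."""
    return np.mean(np.stack(images, axis=0).astype(np.float64), axis=0)

def fuse_weighted(images, weights):
    """Weighted aggregation, eq. (3): per-pixel weighted mean of the images."""
    imgs = np.stack(images, axis=0).astype(np.float64)   # shape (N, H, W)
    w = np.stack(weights, axis=0).astype(np.float64)     # shape (N, H, W)
    return (w * imgs).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-12)
```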

2.2 Spatial frequency Method

Spatial frequency measures the overall intensity variation of an image. For an image P of size X by Y, with P(i, j) being the intensity value at position (i, j), its spatial frequency is given by [11]:

S.F. = √( (R.F.)² + (C.F.)² )                (4)

where R.F. is the row frequency, given by

R.F. = √( (1/(X·Y)) Σ_{i=1..X} Σ_{j=2..Y} [P(i, j) − P(i, j−1)]² )                (5)

and C.F. is the column frequency, given by

C.F. = √( (1/(X·Y)) Σ_{i=2..X} Σ_{j=1..Y} [P(i, j) − P(i−1, j)]² )                (6)

It is a well-documented fact that as an image becomes more blurred, its spatial frequency decreases; thus, spatial frequency can be used to reflect image clearness. The algorithm is outlined in fig. 2. For the sake of simplicity only two images are considered here, although it extends straightforwardly to multiple images.

i. The complete image is decomposed into blocks of uniform size, say X by Y. Let A_n and B_n denote the nth block of images A and B respectively.

ii. The spatial frequency of each block is computed; let SF_A^n and SF_B^n denote the spatial frequencies of blocks A_n and B_n respectively.

iii. The two blocks are compared with respect to their spatial frequency, and the corresponding block of the fused image is constructed as:

F_n = A_n,              if SF_A^n > SF_B^n + Th
      B_n,              if SF_A^n < SF_B^n − Th                (7)
      (A_n + B_n)/2,    otherwise

where Th is a user-defined threshold.

iv. The fused blocks are then verified with a 3×3 windowed majority filter. This filter compares the central block with its neighbouring blocks and assigns the majority type; e.g. if the central block comes from image B but the majority of its neighbours come from image A, the filter replaces the central block with the corresponding block of A.


This algorithm is simple, and hence it is often implemented in real-time applications; however, there is further scope for work on block-size selection and threshold choice. A minimal sketch of the block-based procedure is given below.
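The following is a minimal Python/NumPy sketch of the block-based rule of eqs. (4)-(7), assuming two pre-registered grayscale arrays of equal size; the block size, the threshold value and the omission of the 3×3 majority filter of step iv are simplifying assumptions.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency of a block, eqs. (4)-(6)."""
    b = block.astype(np.float64)
    rf = np.sqrt(np.sum(np.diff(b, axis=1) ** 2) / b.size)  # row frequency
    cf = np.sqrt(np.sum(np.diff(b, axis=0) ** 2) / b.size)  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_by_spatial_frequency(a, b, block=8, th=0.5):
    """Block-wise selection rule of eq. (7); the majority filter is omitted."""
    fused = np.empty(a.shape, dtype=np.float64)
    rows, cols = a.shape
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            sl = (slice(i, min(i + block, rows)), slice(j, min(j + block, cols)))
            sfa, sfb = spatial_frequency(a[sl]), spatial_frequency(b[sl])
            if sfa > sfb + th:
                fused[sl] = a[sl]
            elif sfa < sfb - th:
                fused[sl] = b[sl]
            else:
                fused[sl] = (a[sl].astype(np.float64) + b[sl]) / 2.0
    return fused
```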

Figure 2. Image fusion by spatial frequency

3. Transform Domain Techniques

As the name indicates, these methods are based on mathematical transforms, e.g. the discrete cosine, wavelet and curvelet transforms. When an image is subjected to such a transform, it is decomposed into sub-band components, which may be regarded as a frequency-domain or wavelet-domain representation; such techniques are therefore also termed 'multiresolution analysis'. All transform domain techniques can be realized by the generic scheme given in fig. 3 [13]. The images to be fused first undergo a mathematical transform. The outcome of the transformation is a set of coefficients that are used for fusion. For the fusion, a criterion, the 'fusion rule', is fixed, e.g. maximum, minimum, mean or random selection [3]. This block compares the coefficients and, based on the fusion rule, yields the fused coefficients. The inverse transform is then applied to obtain the synthetic, fused image in the spatial domain. Note that image fusion in the transform domain requires nearly perfect reconstruction of the spatial domain information.
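As an illustration of the fusion-rule block in the generic scheme, the following is a minimal sketch of coefficient-level rules (maximum, minimum, mean) operating on two equally shaped coefficient arrays; the names are illustrative assumptions, not part of any particular toolbox.

```python
import numpy as np

# Coefficient-level fusion rules for the generic scheme of fig. 3.
FUSION_RULES = {
    "max":  lambda c1, c2: np.where(np.abs(c1) >= np.abs(c2), c1, c2),
    "min":  lambda c1, c2: np.where(np.abs(c1) <= np.abs(c2), c1, c2),
    "mean": lambda c1, c2: (c1 + c2) / 2.0,
}

def fuse_coefficients(c1, c2, rule="max"):
    """Apply the chosen rule to one pair of transform-coefficient arrays."""
    return FUSION_RULES[rule](np.asarray(c1, dtype=float), np.asarray(c2, dtype=float))
```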

Figure 3. Generic scheme of Image fusion using Transform

Nowadays many mathematical transforms are available, viz. the wavelet, curvelet, contourlet and ridgelet transforms, which can be used in image processing. Of these, the wavelet transform is the most widely used because of its inherent ability to represent a signal in the space-frequency domain. Image fusion using the wavelet transform and the curvelet transform is discussed below.


3.1 Discrete Wavelet Transform (DWT) based Image Fusion

Wavelets are finite-duration oscillatory functions with zero average value and finite energy. They are well suited to the analysis of transient signals, and their irregularity and good localization properties make them a better basis for analysing signals with discontinuities. Wavelets are described by two functions, viz. the scaling function φ(t), also known as the 'father wavelet', and the wavelet function ψ(t), or 'mother wavelet'. The mother wavelet ψ(t) undergoes translation and scaling operations to give self-similar wavelet families, and the wavelet transform of a signal x(t) is given by [12]:

ψ_x(τ, s) = (1/√|s|) ∫ x(t) ψ*((t − τ)/s) dt                (8)

Note that equation (8) represents the continuous wavelet transform; for practical applications the discrete wavelet transform (DWT) is used. For a one-dimensional signal, the DWT can be implemented by means of sub-band coding, and this extends straightforwardly to images as the 2-D DWT. Sub-band coding makes use of a low-pass filter (H1) and a high-pass filter (H0); the filtering is applied first on the rows and then on the columns, as shown in fig. 4. After one level of decomposition there are four frequency bands, namely low-low (LL), low-high (LH), high-low (HL) and high-high (HH). The next level of decomposition is applied only to the LL band of the current stage, which yields a recursive decomposition procedure. Thus an N-level decomposition finally gives 3N+1 frequency bands, namely 3N high-frequency bands and a single LL band [9].

Figure 4. Filter bank structure of the DWT analysis

For fusion, the generic scheme detailed in fig. 3 is adopted. It can be implemented in either of two ways, as shown in fig. 5: either corresponding coefficients or fixed-size windows of coefficients are compared. This type of algorithm can easily be realized in Matlab [3]. The main advantage of this method is that it is relatively simple compared with other transform-based methods and it removes artefacts, if any; moreover, better resolution and signal-to-noise ratio can be achieved with the DWT. In spite of these advantages there are shortcomings, such as limited directional sensitivity: the DWT cannot represent edges faithfully [10], so there is scope for improvement.
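A minimal sketch of DWT-based fusion following the generic scheme of fig. 3, assuming the PyWavelets library (the paper itself refers to a Matlab realization); the wavelet, decomposition level and the mean/max-absolute fusion rules are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def dwt_fuse(a, b, wavelet="db2", level=2):
    """Fuse two registered grayscale images in the wavelet domain.

    The approximation (LL) band is fused by averaging; the detail bands
    (LH, HL, HH) are fused by keeping the coefficient of larger magnitude.
    """
    ca = pywt.wavedec2(a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(np.float64), wavelet, level=level)

    fused = [(ca[0] + cb[0]) / 2.0]                       # LL band: mean rule
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(x) >= np.abs(y), x, y)        # detail bands: max-abs rule
            for x, y in ((ha, hb), (va, vb), (da, db))))

    return pywt.waverec2(fused, wavelet)
```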


Figure 5. DWT based image fusion

3.2 Image fusion based on the Curvelet Transform

To overcome the drawbacks of wavelets, a new transform has emerged and gained acceptance: the curvelet transform is highly directional and therefore gives a better representation of edges. Its key principle is the use of a parabolic scaling law, which obeys the relation width ≈ length² [10]. This technique also follows the generic scheme shown in fig. 3; the only difference lies in the implementation of the curvelet transform. Software for this transform is available at http://www.curvelet.org [1].

4. Performance Metrics

Image quality is a characteristic of an image that measures the perceived image degradation (typically compared to an ideal or perfect image). Imaging systems, including fusion algorithms, may introduce some amount of distortion or artefacts into the signal, so quality assessment is an important problem. Image quality assessment methods can be broadly classified into two categories: full-reference (FR) methods and no-reference (NR) methods. In FR methods, the quality of an image is measured by comparison with a reference image which is assumed to be of perfect quality; NR methods do not employ a reference image. The image quality metrics considered and implemented here fall into the FR category. In the following subsections we discuss SSIM and the other image quality metrics implemented to assess the quality of the fused image [4].

A. Entropy (EN): Entropy is used to evaluate the quantity of information contained in an image. A higher entropy value implies that the fused image carries more information than the reference image. Entropy is defined as [8]:

EN = − Σ_{i=0..L−1} P_i log2(P_i)                (9)

where L is the total number of grey levels and P = {P_0, P_1, P_2, ..., P_{L−1}} is the probability distribution of each level.

B. Peak Signal-to-Noise Ratio (PSNR): The ratio between the maximum possible power of the signal and the power of the corrupting noise that distorts the image. The peak signal-to-noise ratio can be expressed as [4]:


PSNR = 20 log10( 255·√(M·N) / √( Σ_{i=1..M} Σ_{j=1..N} [A(i, j) − B(i, j)]² ) )                (10)

where A is the fused image, B is the perfect (reference) image, i is the pixel row index, j is the pixel column index, and M and N are the numbers of rows and columns respectively.

C. Mean Squared Error (MSE): The mean squared error is a measure of image quality; a large value of MSE means that the image is of poor quality. The mean squared error between the reference image and the fused image is [8]:

MSE = (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} [A(i, j) − B(i, j)]²                (11)
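A minimal Python/NumPy sketch of eqs. (9)-(11), assuming 8-bit grayscale images stored as NumPy arrays; the function names are illustrative.

```python
import numpy as np

def entropy(img, levels=256):
    """Entropy EN of eq. (9) for an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist.astype(np.float64) / hist.sum()
    p = p[p > 0]                              # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

def mse(fused, reference):
    """Mean squared error of eq. (11)."""
    diff = fused.astype(np.float64) - reference.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(fused, reference, peak=255.0):
    """Peak signal-to-noise ratio of eq. (10), in dB."""
    m = mse(fused, reference)
    return float("inf") if m == 0 else 20.0 * np.log10(peak / np.sqrt(m))
```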

D. Structural Similarity Index Measure (SSIM): The structural similarity index is based on the idea that a measure of structural information change provides a good approximation to perceived image distortion. SSIM compares local patterns of pixel intensities that have been normalized for luminance and contrast, and it is an improvement over traditional measures such as PSNR and MSE. The SSIM index is a decimal value between 0 and 1: a value of 0 means no structural similarity with the original image, and 1 means the images are identical [4]. A minimal usage sketch of SSIM follows; then, for the set of images shown in fig. 6 (where (a) is the left-focused and (b) the right-focused image), the techniques discussed above are applied and compared with respect to PSNR and EN in table 1.
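A minimal usage sketch of SSIM, assuming the scikit-image library (an assumption; the paper does not name a particular implementation):

```python
import numpy as np
from skimage.metrics import structural_similarity  # scikit-image, assumed available

def ssim_index(fused, reference):
    """Mean SSIM between a fused image and its reference (typically in [0, 1])."""
    return structural_similarity(
        fused.astype(np.float64),
        reference.astype(np.float64),
        data_range=255.0,  # assumes 8-bit intensity range
    )
```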

Figure 6. Set of images to be fused: (a) left-focused image, (b) right-focused image

Table 1. Comparison of fusion techniques

Sr. no. | Fusion technique | Domain    | Measuring parameters     | Advantages                                                                | Disadvantages
1       | Simple average   | Spatial   | PSNR = 25.48, EN = 7.22  | This is the simplest method of image fusion.                              | Does not guarantee clear objects from the set of images.
2       | Simple maximum   | Spatial   | PSNR = 26.86, EN = 7.20  | Results in a highly focused output image obtained from the input images.  | Affected by a blurring effect, which directly affects the contrast.
4       | DWT              | Transform | PSNR = 54.06, EN = 7.42  | Minimizes spectral distortion; better SNR.                                | Lower spatial resolution; edge information may be lost.
5       | CT (curvelet)    | Transform | PSNR = 30.08, EN = 7.16  | Image edges are represented efficiently.                                  | Challenged by small image features and details.

5. Conclusion

Note that for spatial domain techniques the PSNR values are significantly smaller than those of transform domain techniques, while the entropy values are almost the same. Spatial domain techniques cannot preserve the frequency information of an image, and conversely for transform domain techniques. The smaller PSNR values of the spatial domain techniques indicate that their fused images may contain more noise than those of the transform domain techniques. The wavelet transform has gained acceptance because of its ability to decompose an image over multiple levels, which provides a good approximation of image detail in the fused images. Recently the curvelet transform has also been applied to image fusion; as shown in table 1, its PSNR is lower than that of the wavelet approach, but edge information is better preserved [10]. From the above discussion we can conclude that each technique has merits and demerits, which leaves scope for further development. Spatial domain techniques need less memory for implementation than transform-based techniques, as they avoid processing data at multiple scales or levels (e.g. 2-level or 4-level DWT). Transform domain techniques demand near-perfect reconstruction of the spatial domain information, whereas no such step is involved in spatial domain techniques. In contrast, the wavelet and other transform domain techniques remove the artefacts and distortions in the fused image that are the main drawback of spatial domain techniques. A newer approach to obtaining optimum results is to combine two techniques; for example, a combination of the wavelet and curvelet transforms improves the results [13].

6. References

[1] CurveLab. [Online]. Available: http://www.curvelet.org.
[2] K. Shivsubramani and K. P. Soman, "Implementation and Comparative Study of Image Fusion Algorithms," International Journal of Computer Applications (0975-8887), vol. 9, no. 2, Nov. 2010.
[3] MathWorks Corporation, "Matlab and Simulink." [Online]. Available: http://www.mathworks.in/matlabcentral/answers/.
[4] M. Deshmukh and U. Bhosale, "Image Fusion and Image Quality Assessment of Fused Images," International Journal of Image Processing (IJIP), vol. 4, no. 5.
[5] A. Saha, G. Bhatnagar and Q. M. J. Wu, "Mutual spectral residual approach for multifocus image fusion," Digital Signal Processing, Elsevier, vol. 23, pp. 1121-1135, March 2013.
[6] D. K. Sahu and M. P. Parsai, "Different Image Fusion Techniques - A Critical Review," International Journal of Modern Engineering Research, vol. 2, no. 5, pp. 4298-4301, Sep.-Oct. 2012.
[7] R. Sharma and K. Rani, "Study of Different Image Fusion Algorithms," International Journal of Emerging Technology and Advanced Engineering, vol. 3, no. 5, 2013.
[8] S. S. Bedi et al., "Image Fusion Techniques and Quality Assessment Parameters for Clinical Diagnosis: A Review," International Journal of Advanced Research in Computer and Communication Engineering (ISSN 2319-5940), vol. 2, no. 2, Feb. 2013.
[9] R. C. Gonzalez and R. E. Woods, "Wavelets and Multiresolution Processing," in Digital Image Processing, New Jersey: Prentice Hall, 2002, pp. 350-386.
[10] J. Ma and G. Plonka, "The Curvelet Transform: A Review of Recent Applications," IEEE Signal Processing Magazine, 2010.
[11] S. Li, J. T. Kwok and Y. Wang, "Combination of images with diverse focuses using the spatial frequency," Information Fusion, Elsevier, vol. 2, pp. 169-176, 2001.
[12] R. Polikar, "The Wavelet Tutorial," 2001. [Online]. Available: http://users.rowan.edu/~polikar/WAVELETS/WTpart1.html.
[13] S. Li and B. Yang, "Multifocus image fusion by combining curvelet and wavelet transform," Pattern Recognition Letters, Elsevier, vol. 29, pp. 1295-1301, 2008.
[14] D. Hall and J. Llinas, "An introduction to multisensor data fusion," Proceedings of the IEEE, vol. 85, no. 1, pp. 6-23, 1997.

