IJRIT International Journal of Research in Information Technology, Volume 1, Issue 11, November, 2013, Pg. 365-372

International Journal of Research in Information Technology (IJRIT)


ISSN 2001-5569

Medical Image Fusion Using Cross-Scale Coefficient Selection


Student, Prathyusha Institute of Technology and Management, Thiruvallur, Tamil Nadu, India [email protected]

Assistant Professor, Prathyusha Institute of Technology and Management, Thiruvallur, Tamil Nadu, India [email protected]


Assistant Professor, Prathyusha Institute of Technology and Management, Thiruvallur, Tamil Nadu, India [email protected]

Abstract

The aim of image fusion is to integrate corresponding information from different sources into one new image, lessening uncertainty and minimizing redundancy in the output while maximizing the information relevant to a particular application or task. Image fusion techniques, which provide an efficient way of combining and enhancing information, have therefore drawn increasing attention from the medical community. In this paper, we propose a novel cross-scale fusion rule for multiscale-decomposition-based fusion of volumetric medical images that takes into account both intrascale and interscale consistencies. An optimal set of coefficients from the multiscale representations of the source images is determined by effective exploitation of neighborhood information. An efficient color fusion scheme is also proposed. Experiments demonstrate that our fusion rule generates better results than existing rules.

Index Terms—3-D image fusion, fusion rule, medical image fusion, multiscale analysis.

I. INTRODUCTION

Medical imaging has become a vital component of routine clinical applications such as diagnosis and treatment planning. However, because each imaging modality provides information only in a limited domain, many studies prefer joint analysis of imaging data collected from the same patient using different modalities. The goal of image fusion is to provide a single fused image that conveys more accurate and reliable information than any individual source image and in which features may be more distinguishable. Due to its compact and enhanced

G.Purna Chandra, IJRIT


representation of information, image fusion has been employed in many medical applications. Computed tomography (CT) and MRI images have been fused for neuronavigation in skull base tumor surgery. Fusion of positron emission tomography (PET) and MRI images has proven useful for hepatic metastasis detection and intracranial tumor diagnosis. Single-photon emission computed tomography (SPECT) and MRI images have been fused for abnormality localization in patients with tinnitus. Multiple fetal cardiac ultrasound scans have been fused to reduce imaging artifacts. In addition, the advantages of image fusion over side-by-side analysis of nonfused images have been demonstrated in lesion detection and localization in patients with neuroendocrine tumors and in patients with pretreated brain tumors.

A straightforward multimodal image fusion method is to overlay the source images by manipulating their transparency attributes, or by assigning them to different color channels. This overlaying scheme is a fundamental approach in color fusion, a type of image fusion that uses color to expand the amount of information conveyed in a single image, but it does not necessarily enhance image contrast or make image features more distinguishable. Image fusion can be performed at three different levels, i.e., the pixel/data level, the feature/attribute level, and the symbol/decision level, each of which serves different purposes. Compared with the others, pixel-level fusion directly combines the original information in the source images and is more computationally efficient.
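The overlaying schemes described above can be sketched as follows. This is a minimal illustration, assuming grayscale images stored as 2-D lists of floats in [0, 1]; the function names are ours, not from the paper:

```python
def blend_overlay(img_a, img_b, alpha=0.5):
    """Transparency overlay: each output pixel is a weighted average
    of the two source pixels, controlled by the transparency alpha."""
    return [[alpha * a + (1.0 - alpha) * b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

def channel_overlay(img_a, img_b):
    """Channel assignment: source A drives the red plane, source B the
    green plane; blue is left empty. Colors then encode provenance."""
    return [[(a, b, 0.0) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Toy 2x2 "CT" and "MRI" slices (hypothetical values).
ct  = [[0.2, 0.8], [0.4, 0.6]]
mri = [[0.6, 0.2], [0.9, 0.1]]
blended = blend_overlay(ct, mri)   # ≈ [[0.4, 0.5], [0.65, 0.35]]
```

As the text notes, such overlays convey both sources in one image but do not by themselves enhance contrast; that motivates the multiscale fusion rules developed later.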



Fig. 1: Processing levels of image fusion (N input images undergo pixel/block fusion, followed by evaluation of the results).

II. IMAGE FUSION BY WAVELET TRANSFORM

The wavelet multiresolution expansion maps an image to different levels of a pyramid structure of wavelet coefficients based on scale and direction. To implement a wavelet-transform image fusion scheme: first, construct the wavelet coefficient pyramid of each input image; second, combine the coefficient information at corresponding levels; finally, apply the inverse wavelet transform to the fused coefficients.
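The three steps above can be sketched with a single-level 1-D Haar transform. This is a minimal illustration of the decompose/combine/invert pipeline; practical schemes use multilevel 2-D wavelets, and the fusion rule here (average approximations, keep the larger-magnitude detail) is a common simple choice, not the paper's CS rule:

```python
import math

S = math.sqrt(2.0)

def haar_decompose(x):
    """Single-level 1-D Haar transform: pairwise scaled sums (approximation)
    and differences (detail). Assumes len(x) is even."""
    approx = [(x[i] + x[i + 1]) / S for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / S for i in range(0, len(x), 2)]
    return approx, detail

def haar_reconstruct(approx, detail):
    """Inverse single-level Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / S, (a - d) / S])
    return out

def fuse(x1, x2):
    # Step 1: decompose both inputs.
    a1, d1 = haar_decompose(x1)
    a2, d2 = haar_decompose(x2)
    # Step 2: combine coefficients at corresponding levels.
    # Average the approximations (low-pass content); keep the
    # larger-magnitude detail coefficient (salient edges).
    a = [(p + q) / 2.0 for p, q in zip(a1, a2)]
    d = [p if abs(p) >= abs(q) else q for p, q in zip(d1, d2)]
    # Step 3: invert the transform on the fused coefficients.
    return haar_reconstruct(a, d)
```

Fusing an image with itself reconstructs it (up to floating-point error), which is a quick sanity check of the pipeline.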





Image fusion combines multiple-source imagery using advanced image processing techniques. Specifically, it integrates disparate or complementary images in order to enhance the information apparent in the respective source images and to increase the reliability of interpretation. This leads to a more accurate image, increased confidence (and thus reduced ambiguity), and improved classification. This paper focuses on the pixel-level fusion process, where a composite image is built from two or more input images. A general framework of image fusion can be found elsewhere. In image fusion, some general requirements, for instance pattern conservation and distortion minimization, need to be met. To measure image quality, quantitative evaluation of the fused imagery has to be considered so that an objective comparison of the performance of different fusion algorithms can be carried out. In addition, a quantitative measurement may potentially be used as feedback to the fusion algorithm to further improve the fused image quality. Through the wide application of image fusion in medical imaging, remote sensing, nighttime operations, and multispectral imaging, many fusion algorithms have been developed. Two common fusion methods are the discrete wavelet transform (DWT) and various pyramids (such as the Laplacian, contrast, gradient, and morphological pyramids). As with any pyramid method, the wavelet-based fusion method is a multiscale analysis method.

Block Diagram 1: Fusion technique.



Fig. 2: (a) CT of a brain, (b) MRI of a brain, (c) fused image.



III. CROSS-SCALE COEFFICIENT SELECTION FOR IMAGE FUSION

We propose a novel cross-scale (CS) fusion rule for multiscale-decomposition-based fusion of volumetric medical images that takes into account both intrascale and interscale consistencies. An optimal set of coefficients from the multiscale representations (MSRs) of the source images is determined by effective exploitation of neighborhood information, and an efficient color fusion scheme is also proposed. The fusion rule blends the pixel values in the monochrome source images to combine information while preserving or enhancing contrast. In addition, we show how color fusion can benefit from the monochrome fusion results. The effectiveness of this new fusion rule is validated through experiments on 3-D medical image fusion. Although it is possible to fuse the individual 2-D slices of 3-D images/volumes separately, the results are not of the same quality as those of 3-D fusion due to the lack of between-slice information in the fusion process. The basic steps are:
1) Pass salient information from a lower level to a higher level in an MSR until the approximation subband (APX) is reached.
2) Calculate the membership of each fused coefficient at the APX using the passed salient information.
3) Use these memberships to guide coefficient selection in the detail subbands (DETs).
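The exact membership computation of step 2 is not fully specified in this excerpt. As a hypothetical sketch, assume salience is the absolute coefficient magnitude and membership is its normalized ratio between the two sources; coefficients are then selected from the source with higher membership (function names and the 0.5 threshold are our assumptions):

```python
def memberships(c1, c2, eps=1e-12):
    """Membership of source 1 at each coefficient position, taken here
    as its normalized salience (absolute magnitude); eps avoids 0/0."""
    m = []
    for a, b in zip(c1, c2):
        s1, s2 = abs(a), abs(b)
        m.append(s1 / (s1 + s2 + eps))
    return m

def select(c1, c2, m):
    """Pick each fused coefficient from the source whose membership
    dominates (>= 0.5 favors source 1, otherwise source 2)."""
    return [a if w >= 0.5 else b for a, b, w in zip(c1, c2, m)]
```

In the paper's rule, such memberships computed at the APX would be propagated down to guide selection consistently across the DETs, rather than being recomputed independently per level.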

CT Image → Divide RGB Components → Transformed Image
MRI Image → Divide RGB Components → Transformed Image
→ Fused Image

Block Diagram 2: CS fusion rule.

ALGORITHM: DWT+CS-BASED FUSION
1) Apply N-level DWT to each source image.
2) Apply band-pass filtering to the APX of each source image.
3) Compute APX for DET levels 1 to N.
4) Compute memberships for DET levels N to 1.
5) Select coefficients for the fused APX.
6) Select coefficients for the fused DETs.
7) Apply the inverse DWT to the fused MSR.
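The coefficient-selection steps exploit neighborhood information. As a simplified, hypothetical stand-in for the paper's rule, the following picks each detail coefficient from the source whose local window carries more energy, which enforces a degree of intrascale consistency (function names and the window size are our assumptions):

```python
def local_energy(img, r, c, radius=1):
    """Sum of squared coefficients in a (2*radius+1)^2 window around
    (r, c), clipped at the image borders."""
    rows, cols = len(img), len(img[0])
    e = 0.0
    for i in range(max(0, r - radius), min(rows, r + radius + 1)):
        for j in range(max(0, c - radius), min(cols, c + radius + 1)):
            e += img[i][j] ** 2
    return e

def select_by_neighborhood(d1, d2, radius=1):
    """Choose each detail coefficient from the source whose neighborhood
    has more energy, so isolated noisy coefficients do not flip the
    decision the way a pointwise max-abs rule would."""
    rows, cols = len(d1), len(d1[0])
    return [[d1[r][c]
             if local_energy(d1, r, c, radius) >= local_energy(d2, r, c, radius)
             else d2[r][c]
             for c in range(cols)]
            for r in range(rows)]
```

The design point is that the selection map becomes spatially smoother than pointwise selection, which is one way to realize the intrascale consistency the algorithm calls for.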

G.Purna Chandra, IJRIT


A. Color Fusion

In this section, we introduce an efficient color fusion scheme for the case of two monochrome source images. The scheme, which utilizes the fusion result from the previous section to further enhance image contrast, is inspired by the color opponency theory in physiology [40], which states that human perception of achromatic and chromatic colors occurs in three independent dimensions, i.e., black–white (luminance), red–green, and yellow–blue. Contrast sensitivity in these three dimensions has been studied by many researchers. The contrast sensitivity function of luminance shows bandpass characteristics, while the contrast sensitivity functions of both red–green and yellow–blue show low-pass behavior. Therefore, luminance sensitivity is normally higher than chromatic sensitivity except at low spatial frequencies. Hence, the fused monochrome image, which provides combined information and good contrast, should be assigned to the luminance channel to exploit luminance contrast. In addition, the color-fused image should also provide good contrast in the red–green and/or yellow–blue channels in order to fully exploit human color perception. To achieve this, consider red, green, yellow, and blue arranged on a color circle as in [40], where the red–green axis is orthogonal to the yellow–blue axis and color (actually its hue) transits smoothly from one to another in each quadrant. Then, in order to maximize the color contrast/dissimilarity between an object and its local neighborhood in the color-fused image, their hues should come from two opposite quadrants, or at least from two orthogonal hues on the color circle.

With these considerations in mind, we have developed the following scheme. Let I1 and I2 denote the two source images and Ī the monochrome fused image. Ī is considered the luminance image of the color-fused image Īc; therefore, in the YUV color space, Ī is the Y component. Let Īcr, Īcg, and Īcb denote the red, green, and blue color planes of Īc, respectively. The source images are assigned to the red and blue planes of the RGB color space, and the green plane is derived by reversing the calculation of the Y component from RGB:

Īcg = (Ī − 0.299 Īcr − 0.114 Īcb) / 0.587.

This scheme provides more contrast enhancement than the overlaying schemes because it fully utilizes color opponency in human perception. A visual comparison of slices from two directions, with an inset below each slice, clearly shows the improved contrast of our scheme, as indicated by the white arrows (i.e., the sarcolemma in the T1W scan and the mastoid air cells in the T2W scan in the upper row, and the orbital apex in the T1W scan and the sulcus in the T2W scan in the lower row). The color characteristics of the color-fused images may be further adjusted according to a user's preference using methods such as color transfer. Researchers have previously studied opponent-color fusion, which is essentially based on opponent processing. After intermediate fused and/or enhanced grayscale images are generated by opponent processing, they are either directly assigned to different color planes (in the case of two source images) or assigned in a way that emphasizes chromatic color contrast (in the case of three or more source images) to form a color-fused image. This differs from our scheme, which aims to maximize both achromatic and chromatic color contrasts in the color-fused image.
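The green-plane derivation above can be written directly in code. A minimal sketch, assuming images as 2-D lists of floats in [0, 1], with the green plane clipped to the valid range (the clipping and function name are our additions):

```python
def color_fuse(i1, i2, y):
    """Assign source 1 to red and source 2 to blue, then derive green so
    that the luminance of the color-fused image equals the monochrome
    fused image y:  Y = 0.299 R + 0.587 G + 0.114 B."""
    rows, cols = len(y), len(y[0])
    rgb = []
    for r in range(rows):
        row = []
        for c in range(cols):
            red, blue = i1[r][c], i2[r][c]
            green = (y[r][c] - 0.299 * red - 0.114 * blue) / 0.587
            # Clip to [0, 1] in case the inversion leaves the gamut.
            row.append((red, min(max(green, 0.0), 1.0), blue))
        rgb.append(row)
    return rgb
```

When no clipping occurs, recomputing Y from the resulting RGB recovers the monochrome fused image exactly, which is the property the scheme relies on.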

IV. SIMULATION RESULTS

Now that the performance of our CS fusion rule has been validated, analyzed, and compared with other fusion rules on T1W/T2W MRI fusion, we further demonstrate its effectiveness in the fusion of other modalities. The registered 3-D images used in the experiments in this section were retrieved from an online database, and LPT+CS was applied with the same parameter settings as in Section IV-A unless otherwise mentioned.
1) Fusion of CT/MRI: This dataset contains one CT scan and one T1W MRI scan of a patient with cerebral toxoplasmosis. Each scan contains 256 × 256 × 24 voxels with 8-bit precision. Four decomposition levels were applied because the depth of the third dimension is only 24 voxels. As displayed in Fig. 3, the calcification captured in the CT scan and the soft tissue structures captured in the MRI scan are successfully transferred to the fused image. With our color fusion scheme applied, different features stand out even better.


2) Fusion of SPECT/MRI: This dataset contains one color-coded SPECT scan and one T2W MRI scan of a patient with anaplastic astrocytoma. Each scan contains 256 × 256 × 56 voxels with 8-bit precision in the luminance channel. When one source image contains color (e.g., the color-coded SPECT scan), a common procedure is to fuse its luminance channel with the other monochrome source image using a monochrome fusion method. As displayed in Fig. 1, our method combines the high thallium uptake shown in the SPECT scan with the anatomical structures shown in the MRI scan for better determination of the extent of the tumor, while preserving high image contrast.
3) Fusion of PET/MRI: This dataset contains one color-coded PET scan and one T1W MRI scan of a normal brain. Each scan contains 256 × 256 × 127 voxels with 8-bit precision in the luminance channel. As demonstrated in Fig. 2, the metabolic activity revealed in the PET scan and the anatomical structures revealed in the MRI scan are combined in the fused image, providing better spatial relationships.





Figure 1. Fusion of SPECT and T2W MRI images. (a) SPECT image (color-coded). (b) T2W MRI image. (c) Fused image. (d) Fused image (luminance channel).





Figure 2. Fusion of PET and T1W MRI images. (a) PET image (color-coded). (b) T1W MRI image. (c) Fused image. (d) Fused image (luminance channel).





Figure 3. Fusion of CT and T2W MRI images. (a) CT image. (b) T2W MRI image. (c) Fused image. (d) Fused image (luminance channel).


V. VALIDATION RESULTS

The performance of our CS fusion rule was evaluated on volumetric image fusion of T1W and T2W MRI scans using both synthetic and real data. After this validation, we demonstrate the capability of our fusion rule to fuse other modalities. In addition, we have consulted a neurosurgeon and a radiologist. In their opinion, our method not only provides enhanced representations of information, which is useful in applications such as diagnosis and neuronavigation, but also offers the flexibility of combining modalities of their choice, which is important because the required data types are normally application dependent.

TABLE 1: Entropy and PSNR values of fused images.
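Entropy and PSNR, the measures reported in Table 1, can be computed as follows. This is a sketch using the standard definitions; the paper's exact evaluation protocol is not given in this excerpt. Images are 2-D lists, with integer gray levels for the entropy:

```python
import math

def entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, in bits per pixel.
    Higher entropy indicates more information content in the fused image."""
    hist = [0] * levels
    n = 0
    for row in img:
        for v in row:
            hist[v] += 1
            n += 1
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image; infinite when the two images are identical."""
    mse = 0.0
    n = 0
    for row_r, row_i in zip(ref, img):
        for a, b in zip(row_r, row_i):
            mse += (a - b) ** 2
            n += 1
    mse /= n
    return float('inf') if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

Note that PSNR requires a reference; in fusion evaluation it is typically computed against each source image (or a synthetic ground truth), since no true fused reference exists.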












VI. CONCLUSION AND FUTURE WORK

In this paper, we proposed a CS fusion rule that selects an optimal set of coefficients for each decomposition level and guarantees intrascale and interscale consistencies. Experiments on volumetric medical image fusion demonstrated the effectiveness and versatility of our fusion rule, which produced fused images of higher quality than existing rules. An efficient color fusion scheme that effectively utilizes the monochrome fusion results was also proposed. In future work, we will explore the possibility of extending our technique to 4-D medical images. A full-scale clinical evaluation tailored to individual medical applications is also valuable future work that would facilitate the adoption of our technique.

VII. REFERENCES

[1] V. D. Calhoun and T. Adali, "Feature-based fusion of medical imaging data," IEEE Trans. Inf. Technol. Biomed., vol. 13, no. 5, pp. 711–720, Sep. 2009.
[2] B. Solaiman, R. Debon, F. Pipelier, J.-M. Cauvin, and C. Roux, "Information fusion: Application to data and model fusion for ultrasound image segmentation," IEEE Trans. Biomed. Eng., vol. 46, no. 10, pp. 1171–1175, Oct. 1999.
[3] C. S. Pattichis, M. S. Pattichis, and E. Micheli-Tzanakou, "Medical imaging fusion applications: An overview," in Proc. 35th Asilomar Conf. Signals, Syst. Comput., vol. 2, 2001, pp. 1263–1267.
[4] M. C. Valdés Hernández, K. Ferguson, F. Chappell, and J. Wardlaw, "New multispectral MRI data fusion technique for white matter lesion segmentation: Method and comparison with thresholding in FLAIR images," Eur. Radiol., vol. 20, no. 7, pp. 1684–1691, 2010.
[5] M. J. Gooding, K. Rajpoot, S. Mitchell, P. Chamberlain, S. H. Kennedy, and J. A. Noble, "Investigation into the fusion of multiple 4-D fetal echocardiography images to improve image quality," Ultrasound Med. Biol., vol. 36, no. 6, pp. 957–966, 2010.


