Image Fusion Technology - A Pixel-Based Technique for Medical, Multi-focus and Satellite Images

Mrs.R.Maruthi, Dr.R.M.Suresh

Abstract:
Image fusion integrates images of the same target or scene from multiple sensors to produce a composite image that inherits the most salient features of the individual images. The fused image should carry more complete information, which is more useful both for human perception and for computer processing tasks such as segmentation, feature extraction and object recognition. Image fusion is an important technique in many fields, such as remote sensing, robotics and medical applications. This paper discusses image fusion technology for medical images, multi-focus images and satellite (remote sensing) images; simple fusion methods are developed for fusing these images and are illustrated on different pairs of images using Java technology.
Keywords: image fusion, medical, multi-focus, remote sensing.

1. Image Fusion Methodology - Overview

Simple image fusion attempts - Primitive fusion schemes perform the fusion directly on the source images, which often has serious side effects such as reduced contrast.

Pyramid-decomposition-based image fusion methods - With the introduction of the pyramid transform in the mid-1980s, more sophisticated approaches began to emerge, as it was found to be better to perform the fusion in a transform domain. The basic idea is to construct the pyramid transform of the fused image from the pyramid transforms of the source images; the fused image is then obtained by taking the inverse pyramid transform. The pyramid transform has two major advantages: it provides information on sharp contrast changes, to which the human visual system is especially sensitive, and it provides localization in both the spatial and the frequency domain. Several types of pyramid decomposition have been used or developed for image fusion, such as the Laplacian pyramid, the ratio-of-low-pass pyramid and the gradient pyramid.
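To make the transform-domain idea concrete, the following is a minimal one-level sketch in Java (a full Laplacian pyramid would repeat the split over several resolutions). The 3x3 box blur, the choose-larger-detail rule and the grayscale row-major int[] inputs are simplifying assumptions made here for illustration, not part of the paper's listings.

// One-level sketch of pyramid-style fusion: each source is split into a
// smooth base (3x3 box blur) and a detail layer (source - base); the detail
// layers are fused by choosing the larger magnitude (the sharper contrast
// change), the bases by averaging, and the result is rebuilt as base + detail.
// Inputs are assumed to be grayscale intensities (0-255) in row-major int[].
class PyramidFusionSketch {

    static int[] boxBlur(int[] img, int w, int h) {
        int[] out = new int[w * h];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sum = 0, n = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                            sum += img[yy * w + xx];
                            n++;
                        }
                    }
                }
                out[y * w + x] = sum / n;
            }
        }
        return out;
    }

    static int[] fuse(int[] a, int[] b, int w, int h) {
        int[] baseA = boxBlur(a, w, h), baseB = boxBlur(b, w, h);
        int[] fused = new int[w * h];
        for (int i = 0; i < w * h; i++) {
            int detailA = a[i] - baseA[i];   // high-frequency content of a
            int detailB = b[i] - baseB[i];   // high-frequency content of b
            // Keep the stronger detail and average the smooth bases.
            int detail = Math.abs(detailA) >= Math.abs(detailB) ? detailA : detailB;
            int base = (baseA[i] + baseB[i]) / 2;
            fused[i] = Math.min(255, Math.max(0, base + detail));
        }
        return fused;
    }
}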

Image fusion involves two steps: registration and the subsequent fusion. Registration deals with the proper geometric alignment of the images so that corresponding pixels or regions of both images map to the same region being imaged. A significant amount of research work has been done in developing various registration algorithms, and accurate registration is required before images can be fused. Fusion itself can be performed at a number of levels. The lowest is the pixel level, which refers to the merging of measured physical parameters (the intensity values of pixels). One step higher is feature-level fusion, which operates on characteristics such as size, shape, edge, contrast and texture. The highest level of abstraction, called decision-level fusion, deals with symbolic representations of images. A common apparatus in image fusion is a multiscale transform (MST), such as the Laplacian pyramid, contrast pyramid, gradient pyramid or wavelet decomposition.

In recent years, image fusion has become an important and useful technique for image analysis and computer vision, medical diagnosis, remote sensing, concealed weapon detection (CWD) and night vision applications. The majority of image fusion research can be classified into two categories: pixel-level image fusion and feature-level image fusion. Pixel-level fusion generates a fused image by considering individual pixels, or associated local neighborhoods of pixels, in the fusion decision; currently most attention is focused on pixel-level fusion, and a number of pixel-level fusion methods have been proposed. Feature-level fusion generally involves two steps: first, feature extraction is applied separately to each source image, and then the fusion is performed on the extracted features. Edges and regions of similar intensity or texture are typical features for fusion. This theme deals with combining different sources of information for intelligent systems, where the information consists of signals delivered by different sensors and images from various modalities. The fusion methods draw on tools such as the weighted average, neural networks, sub-band filtering and rule-based knowledge; more recently, fuzzy logic and graph pyramids have also been used.
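As an illustration of the simplest of these tools, a weighted-average fusion can be written in a few lines of Java. The grayscale row-major int[] inputs and the single global weight w are assumptions of this sketch:

// Weighted-average pixel-level fusion: f = w*a + (1 - w)*b.
// Grayscale intensities (0-255) in row-major int[] arrays are assumed;
// a fixed global weight w is the simplest possible choice, and it is this
// global weighting that tends to reduce contrast in the fused image.
class WeightedAverageFusion {
    static int[] fuse(int[] a, int[] b, double w) {
        int[] f = new int[a.length];
        for (int i = 0; i < a.length; i++)
            f[i] = (int) Math.round(w * a[i] + (1.0 - w) * b[i]);
        return f;
    }
}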

2. Image Fusion - Medical Images

The evolution of imaging technology has led to an increasing number of image modalities, such as CT, SPECT, PET and MRI. Radiologists and surgeons often have to interpret a large number of images from different types of scanners, and to support diagnosis or to manage a patient's therapy there is a need for image fusion. Modern medical technology provides a wide range of scanning and measurement systems, and each modality has its strengths and weaknesses. For example, SPECT is able to image the functional behavior of organs but has low resolution with diffuse boundaries, which makes it difficult to identify specific organs or tissues. On the other hand, X-ray Computed Tomography (CT) and MRI provide images with high resolution and sharp boundary information. To preserve all the complementary information provided by the different modalities in a single image, image fusion is performed, and the images are presented to the physicians in such a way that they can be easily and correctly interpreted without any loss of information.

2.1 Medical Multi-modalities and Objects

  • CT-MRI - CT and MRI provide anatomic (structural) detail and can help identify abnormal masses or distortion of normal structures by disease. Fusion of CT and MRI has been described using multi-resolution analysis methods, in a varied list of applications.
  • PET-MRI - Fusion of PET and MRI has been described in a variety of applications.
  • CT-SPECT - SPECT is the imaging modality with the highest accuracy in detecting bone inflammation; the fusion of these two modalities has been performed in several applications.
  • MRI-US - The registration of MRI and ultrasound (US) has been performed in various applications.

Objects - The object in medical image registration is the part of the anatomy involved.

Brain/Head - A large part of the literature on image fusion concerns head or brain images. Liver - Image fusion of the two modalities CT and SPECT acquires both structural and functional information, and is therefore very useful for clinical diagnosis. Various modalities and objects other than those mentioned above are also involved in image fusion.

2.2 Experimental Results - Medical Images
The simplest method of image fusion operates at the pixel level, taking the maximum or minimum. Each pixel of an image stored inside a computer has a pixel value which describes how bright that pixel is and/or what color it should be. In the simplest case of binary images, the pixel value is a 1-bit number indicating either foreground or background. For grayscale images, the pixel value is a single number that represents the brightness of the pixel. The most common pixel format is the byte image, where this number is stored as an 8-bit integer giving a range of possible values from 0 to 255; typically zero is taken to be black and 255 to be white, with the values in between making up the different shades of gray. In the proposed method, the source images a and b are a pair of registered images of size M x N and f is the fused image. The pixel-level fusion compares the intensity values of the two source images pixel by pixel, and the higher value is taken into the fused image.
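Written out, for registered source images a and b of size M x N, the fusion rule used here is simply

f(x, y) = max{ a(x, y), b(x, y) },   0 ≤ x < M, 0 ≤ y < N,

so each fused pixel is the brighter of the two corresponding source pixels.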

A 2-D fusion of CT and MR human head images is presented in Figure 1. These head images were taken from a 42-year-old woman; the brain images showed a large mass with surrounding edema and compression of adjacent midbrain structures. The CT and MR images were already registered with each other. Figure 1 shows the result of the fusion.

Listing 1.1 shows the implementation of the fusion of the CT and MRI images in the Java language.

Listing 1.1
import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;

// Listing 1.1 - Pixel-level fusion of two registered head images
// (CT in head1.gif, MRI in head3.gif) using the AWT toolkit.
class head5 extends Frame {
    Image rawImage, rawImage1;        // the two registered source images
    int rawWidth, rawHeight;
    Image m, modImage, m2, n5;        // intermediate and fused images

    public static void main(String[] args) {
        head5 obj = new head5();
        obj.repaint();
    }

    public head5() {
        rawImage  = Toolkit.getDefaultToolkit().getImage("head1.gif");
        rawImage1 = Toolkit.getDefaultToolkit().getImage("head3.gif");

        // Block until both images are fully loaded.
        MediaTracker tracker = new MediaTracker(this);
        tracker.addImage(rawImage, 1);
        tracker.addImage(rawImage1, 1);
        try {
            if (!tracker.waitForID(1, 10000)) {
                System.out.println("Load error.");
                System.exit(1);
            }
        } catch (InterruptedException e) {
            System.out.println(e);
        }

        rawWidth  = rawImage.getWidth(this);
        rawHeight = rawImage.getHeight(this);
        this.setVisible(true);

        int size = rawWidth * rawHeight;
        int[] pix  = new int[size];   // ARGB pixels of the first image
        int[] pi   = new int[size];   // ARGB pixels of the second image
        int[] pix3 = new int[size];   // bright pixels of the first image
        int[] pix6 = new int[size];   // mask-based fused pixels
        int[] n3   = new int[size];   // averaged fused pixels

        // Grab the pixels of the first image and keep only its bright
        // pixels; everything else becomes background (0). For grayscale
        // images the red channel equals the gray value, so this explicit
        // threshold replaces the string-truncation trick of the original.
        try {
            PixelGrabber pgObj = new PixelGrabber(rawImage, 0, 0, rawWidth, rawHeight, pix, 0, rawWidth);
            if (pgObj.grabPixels()) {
                for (int cnt = 0; cnt < size; cnt++) {
                    int red = (pix[cnt] >> 16) & 0xff;
                    pix3[cnt] = (red > 200) ? pix[cnt] : 0;
                    pix6[cnt] = pix3[cnt];
                }
            } else {
                System.out.println("Pixel grab not successful");
            }
        } catch (InterruptedException e) {
            System.out.println(e);
        }
        m = this.createImage(new MemoryImageSource(rawWidth, rawHeight, pix3, 0, rawWidth));

        // Grab the pixels of the second image, fill the background of the
        // mask with them, and also compute a pixel-wise average image.
        try {
            PixelGrabber pg = new PixelGrabber(rawImage1, 0, 0, rawWidth, rawHeight, pi, 0, rawWidth);
            if (pg.grabPixels()) {
                for (int cnt = 0; cnt < size; cnt++) {
                    if (pix6[cnt] == 0)
                        pix6[cnt] = pi[cnt];
                    // Average each ARGB channel separately (averaging the
                    // packed ints directly would mix the channels).
                    int a1 = pix[cnt], a2 = pi[cnt];
                    int r = (((a1 >> 16) & 0xff) + ((a2 >> 16) & 0xff)) / 2;
                    int gr = (((a1 >> 8) & 0xff) + ((a2 >> 8) & 0xff)) / 2;
                    int b = ((a1 & 0xff) + (a2 & 0xff)) / 2;
                    n3[cnt] = 0xff000000 | (r << 16) | (gr << 8) | b;
                }
            } else {
                System.out.println("Pixel grab not successful");
            }
        } catch (InterruptedException e) {
            System.out.println(e);
        }
        modImage = this.createImage(new MemoryImageSource(rawWidth, rawHeight, pi, 0, rawWidth));
        m2 = this.createImage(new MemoryImageSource(rawWidth, rawHeight, pix6, 0, rawWidth));
        n5 = this.createImage(new MemoryImageSource(rawWidth, rawHeight, n3, 0, rawWidth));

        this.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            } // end windowClosing()
        }); // end addWindowListener
    } // end constructor

    public void paint(Graphics g) {
        if (modImage != null) {
            g.drawImage(rawImage, 10, 10, this);    // first source image
            g.drawImage(modImage, 100, 10, this);   // second source image
            g.drawImage(m2, 200, 10, this);         // mask-based fusion
            g.drawImage(n5, 300, 10, this);         // pixel-averaged fusion
        }
    } // end paint()
} // end head5 class
 

Medical image fusion helps physicians to extract features that may not be normally visible in images from a single modality; by combining these contrasting and complementary features into one image, it also reduces storage cost. Medical imaging is a $15-20 billion global industry whose products affect the quality of life of all of us, and new products and technologies have great implications for medical practice and healthcare organizations.

3. Image Fusion - Multi-focus Images

A wide variety of data acquisition devices is available at present, yet there are sensors which cannot generate images of objects at various distances with equal clarity (e.g. a camera with limited depth of field). Thus several images of a scene are captured, each focused on a different part of it. The captured images are complementary in many ways, and no single one of them is sufficient in terms of information content; however, viewing such a series of images separately and individually is not very useful.
The advantages of multi-focus data can be fully exploited by integrating the sharply focused regions seen in the different images. The multi-focused images are combined into a single image through a judicious selection of regions from the different images; this process is known as multi-focus image fusion. The fused data also lends itself more successfully to subsequent processing such as object recognition, feature extraction and segmentation.

There are a number of techniques for multi-focus image fusion. Simple techniques, such as the weighted average method, often have serious side effects such as a reduction in the contrast of the fused image. Other approaches include image fusion using a controllable camera, probabilistic methods, the image gradient method with majority filtering, multi-scale and multi-resolution approaches, and multi-decision fusion based on the wavelet transform. Some of these methods involve heavy computation and thus require a lot of time and memory. The same pixel-level fusion described for fusing medical images is applied to multi-focus images, with a slight variation. Listing 1.2 shows the fusion of two images, one left-focused and one right-focused; the results are shown in Figure 2.

 

Listing 1.2

import java.awt.*;
import java.awt.event.*;
import java.awt.image.*;

// Listing 1.2 - Pixel-level maximum fusion of two multi-focus images
// (cath1.jpg and cath2.jpg are focused on different parts of the scene).
class sence extends Frame {
    Image rawImage, rawImage1;   // the two differently focused source images
    Image m;                     // the fused image
    int rawWidth, rawHeight;

    public static void main(String[] args) {
        sence obj = new sence();
        obj.repaint();
    }

    public sence() {
        rawImage  = Toolkit.getDefaultToolkit().getImage("cath1.jpg");
        rawImage1 = Toolkit.getDefaultToolkit().getImage("cath2.jpg");

        // Block until both images are fully loaded.
        MediaTracker tracker = new MediaTracker(this);
        tracker.addImage(rawImage, 1);
        tracker.addImage(rawImage1, 1);
        try {
            if (!tracker.waitForID(1, 10000)) {
                System.out.println("Load error.");
                System.exit(1);
            }
        } catch (InterruptedException e) {
            System.out.println(e);
        }

        rawWidth  = rawImage.getWidth(this);
        rawHeight = rawImage.getHeight(this);
        this.setVisible(true);

        int size = rawWidth * rawHeight;
        int[] pix1 = new int[size];   // pixels of the first source
        int[] pix2 = new int[size];   // pixels of the second source
        int[] pix5 = new int[size];   // fused pixels

        try {
            PixelGrabber pg1 = new PixelGrabber(rawImage, 0, 0, rawWidth, rawHeight, pix1, 0, rawWidth);
            PixelGrabber pg2 = new PixelGrabber(rawImage1, 0, 0, rawWidth, rawHeight, pix2, 0, rawWidth);
            if (pg1.grabPixels() && pg2.grabPixels()) {
                // Compare the two sources pixel by pixel and keep the higher
                // value in the fused image (for opaque grayscale pixels the
                // packed-ARGB comparison matches the intensity order).
                for (int cnt = 0; cnt < size; cnt++)
                    pix5[cnt] = (pix1[cnt] >= pix2[cnt]) ? pix1[cnt] : pix2[cnt];
            } else {
                System.out.println("Pixel grab not successful");
            }
        } catch (InterruptedException e) {
            System.out.println(e);
        }

        m = this.createImage(new MemoryImageSource(rawWidth, rawHeight, pix5, 0, rawWidth));

        this.addWindowListener(new WindowAdapter() {
            public void windowClosing(WindowEvent e) {
                System.exit(0);
            } // end windowClosing()
        }); // end addWindowListener
    } // end constructor

    public void paint(Graphics g) {
        g.drawImage(rawImage, 10, 20, this);    // first source
        g.drawImage(rawImage1, 150, 20, this);  // second source
        g.drawImage(m, 300, 20, this);          // fused result
    } // end paint()
} // end sence class

4. Image Fusion - Remote Sensing

A wide variety of applications of image fusion is seen in geoscience and remote sensing, where satellite images from different bands and at different resolutions are combined to extract more useful information about the ground terrain. An important defense-related application of image fusion is change detection, where images acquired over a period of time are fused to detect changes.

The Ministry of Land and Resources of China initiated a 12-year project for dynamic monitoring of land use using image fusion technology. The latest image processing technology and the integration of computers and networks will be used to monitor land-use changes over important cities with populations of 500,000 and above, as well as over other areas.

The field of remote sensing is a continuously growing market, with applications such as vegetation mapping and observation of the environment. Due to the demand for higher classification accuracy and the need for enhanced positioning precision (geographic information systems), there is always a need to improve the spectral and spatial resolution of remotely sensed imagery. In the remote sensing domain, image fusion is a technique which deals with the limitations of sensors in capturing multispectral images at high spatial and spectral resolution. Image fusion serves many objectives, including image sharpening, improving registration and classification accuracy, temporal change detection and feature enhancement.

NASA alone has approximately 18 satellites with over 80 sensors, all of which continuously collect a tremendous amount of data from around the globe. Data fusion is an important step in modern data processing applications, where data gathered from multiple sources are combined to achieve refined and improved information for decision making; image fusion is the subset of data fusion in which the data being fused are images. High-resolution panchromatic (PAN) images provide better spatial quality than multispectral (MS) images, while MS images provide better spectral quality than PAN images. Object recognition and feature extraction require methods to integrate complementary data sets such as PAN, MS or Lidar data, and many pixel-based image fusion techniques have been developed to combine the spatial and spectral characteristics of images. Figure 3 illustrates this: the middle image is a natural colour image with a spatial resolution of 2.4 m (resampled 400%), and the left image is a panchromatic image with a spatial resolution of 0.6 m; by combining these inputs, a high-resolution colour image is produced. In the fused output, the spectral signatures of the input colour image and the spatial features of the input PAN image, the best attributes of both inputs, are (almost) retained. Listing 1.2 shows the fusion of these images.

Figure 3 (image source: 2004 DigitalGlobe)
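As a concrete illustration of pixel-based pan-sharpening, the sketch below implements the well-known Brovey transform in Java. This is offered as one representative technique, not as the exact method behind Figure 3; it assumes the MS bands have already been resampled to the PAN grid and that all bands are 8-bit, row-major int[] arrays.

// Brovey-transform pan-sharpening sketch: each multispectral band is
// rescaled by the ratio of the panchromatic intensity to the mean MS
// intensity, injecting the PAN spatial detail while (approximately)
// preserving the spectral ratios between bands.
class BroveyFusion {
    static int[][] fuse(int[] red, int[] green, int[] blue, int[] pan) {
        int n = pan.length;
        int[][] out = new int[3][n];   // sharpened R, G, B bands
        for (int i = 0; i < n; i++) {
            double intensity = (red[i] + green[i] + blue[i]) / 3.0;
            double ratio = intensity > 0 ? pan[i] / intensity : 0.0;
            out[0][i] = clamp((int) Math.round(red[i] * ratio));
            out[1][i] = clamp((int) Math.round(green[i] * ratio));
            out[2][i] = clamp((int) Math.round(blue[i] * ratio));
        }
        return out;
    }

    static int clamp(int v) {
        return Math.min(255, Math.max(0, v));
    }
}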

5. Recommendations and Discussion

Image fusion techniques have shown good progress in recent years and are expected to play significant roles in many applications. In real applications, information from different sensors is not likely to be treated as equally important; that is, information from some sensors is emphasized more than information from others. This suggests that the performance of image fusion algorithms depends on the images of the specific application. Furthermore, since fused images are used to enhance visual information for human users, the performance of image fusion should first be judged by the users, based on the mission of the specific application.

An adaptive fusion method has yet to be developed, and a number of issues remain to be addressed. A survey should be conducted in this area to establish whether image fusion really helps diagnosis in the case of medical images, or whether it leads to misinterpretation.

Various quantitative criteria are used to estimate the quality of fused images: the Root Mean Square Error (RMSE), the Normalized Least Square Error (NLSE), the Mutual Information (MI), the Standard Deviation (SD), the Entropy (E), the Difference Entropy (DE), the Cross Entropy (CE), the Spatial Frequency (SF), etc. Two of these measures are sketched below. Quantitative measures should only serve as a tool to assist human users in making difficult judgments whenever necessary.
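The sketch below computes the RMSE of a fused image f against a reference image r, and the entropy of f; the 8-bit grayscale row-major int[] representation is an assumption of this sketch:

// Quality metrics for fused images, assuming 8-bit grayscale values
// (0-255) stored in row-major int[] arrays of equal length.
class FusionMetrics {

    // Root Mean Square Error against a reference image:
    // RMSE = sqrt((1/MN) * sum over (x,y) of (r(x,y) - f(x,y))^2); lower is better.
    static double rmse(int[] r, int[] f) {
        double sum = 0.0;
        for (int i = 0; i < r.length; i++) {
            double d = r[i] - f[i];
            sum += d * d;
        }
        return Math.sqrt(sum / r.length);
    }

    // Entropy: E = -sum over g of p(g) * log2 p(g), over the 256 gray levels g;
    // a higher value is usually read as higher information content.
    static double entropy(int[] f) {
        int[] hist = new int[256];
        for (int v : f) hist[v]++;
        double e = 0.0;
        for (int c : hist) {
            if (c > 0) {
                double p = (double) c / f.length;
                e -= p * (Math.log(p) / Math.log(2.0));
            }
        }
        return e;
    }
}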

A new quality metric that deals with both grayscale and colour images has yet to be proposed. A combined qualitative and quantitative assessment approach seems to be the best way to determine which fusion technique is most appropriate for an application. The performance assessment of image fusion should continue to be shared between qualitative and quantitative methods, with increasing weight placed on new quantitative assessment techniques. Human users will continue to be the sole final decision-makers, while improved image fusion techniques will continue to relieve their workloads and help them make quicker and more accurate decisions.








