AccScience Publishing / ARNM / Volume 1 / Issue 2 / DOI: 10.36922/arnm.0870

The significance of image fusion in nuclear medicine and molecular imaging

Xiangxing Kong1,2 Hua Zhu1,2,3* Zhi Yang1,2,3*
1 Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), NMPA Key Laboratory for Research and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Peking University Cancer Hospital and Institute, Beijing, 100142, China
2 Institute of Medical Technology, Peking University Health Science Center, Beijing, 100191, China
3 Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, Guangdong, 518055, China
Submitted: 27 April 2023 | Accepted: 20 July 2023 | Published: 17 August 2023
© 2023 by the Author(s). This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution 4.0 International License.

Nuclear medicine molecular imaging (NMMI) typically employs radioactive isotopes to label cells or molecules and then uses imaging devices such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) to generate images. However, the images produced by these devices often suffer from high noise, low spatial resolution, and poor soft-tissue contrast. To address these limitations, image fusion technology merges images from different imaging modalities, combining the complementary information obtained through various imaging techniques into a single, more comprehensive and accurate image. This improves image quality, reduces noise, and ultimately enhances diagnostic accuracy and treatment effectiveness. Image fusion has found widespread application in NMMI and has achieved significant results across many fields. This review provides an overview of the development of image fusion technology, introduces traditional image fusion techniques, explores deep learning-based image fusion methods, and finally discusses the challenges and future directions of image fusion in NMMI.
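To make the idea of merging modalities concrete, the following is a minimal, illustrative sketch (not taken from the article) of one of the simplest traditional fusion rules: pixel-level weighted averaging of two co-registered images, where a functional image (e.g., PET uptake) and an anatomical image (e.g., CT) are normalized and blended. Real fusion pipelines use far more sophisticated rules (multiscale transforms, sparse representation, deep networks), and the function names here are hypothetical.

```python
import numpy as np

def normalize(img):
    """Scale an image to [0, 1]; a constant image maps to all zeros."""
    img = img.astype(np.float64)
    span = img.max() - img.min()
    return (img - img.min()) / span if span > 0 else np.zeros_like(img)

def fuse_weighted(functional, anatomical, alpha=0.5):
    """Pixel-level weighted-average fusion of two co-registered images.

    alpha controls the trade-off between functional and anatomical
    information; both inputs are intensity-normalized first so that
    neither modality dominates purely by dynamic range.
    """
    f = normalize(functional)
    a = normalize(anatomical)
    return alpha * f + (1.0 - alpha) * a

# Toy 2x2 example with co-registered arrays standing in for PET and CT.
pet = np.array([[0.0, 10.0], [10.0, 0.0]])
ct = np.array([[0.0, 0.0], [100.0, 100.0]])
fused = fuse_weighted(pet, ct, alpha=0.5)
# Each fused pixel is the mean of the two normalized inputs,
# e.g., fused[1, 0] == 1.0 (both inputs maximal there).
```

Weighted averaging assumes the images are already spatially registered; in practice, registration is a prerequisite step for any fusion method, which is why hybrid PET/CT and PET/MRI scanners, with hardware-aligned acquisitions, simplify the problem considerably.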

Keywords: Nuclear medicine molecular imaging; Image fusion; Multimodal medical image
Funding: Beijing Hospitals Authority Dengfeng Project; Pilot Project (4th Round) to Reform Public Development of Beijing Municipal Medical Research Institute; Third Foster Plan in 2019 "Molecular Imaging Probe Preparation and Characterization of Key Technologies and Equipment" for the Development of Key Technologies and Equipment in Major Science and Technology Infrastructure in Shenzhen, China.
Conflict of interest
The authors declare no conflicts of interest.
Advances in Radiotherapy & Nuclear Medicine, Electronic ISSN: 2972-4392 Published by AccScience Publishing