
Image fusion

Image fusion is the process of combining multiple input images from diverse sensors or modalities into a single output image that integrates complementary information to enhance interpretability, visual quality, and informational content beyond what is available in the individual source images. This technique aims to preserve salient features while reducing redundancies and artifacts, resulting in an output that better supports human perception and machine analysis. Image fusion methods are broadly categorized into spatial domain, transform (frequency) domain, and deep learning-based approaches. Spatial domain techniques operate directly on pixel intensities through methods like principal component analysis (PCA), intensity-hue-saturation (IHS) transformation, and guided filtering, offering simplicity and computational efficiency but potentially introducing color distortions or blurring. Transform domain methods, such as the discrete wavelet transform (DWT), Laplacian pyramid, and curvelet transform, decompose images into multi-resolution representations to capture edges and textures more effectively, though they often require higher computational resources. More recently, deep learning frameworks including convolutional neural networks (CNNs) and generative adversarial networks (GANs) have emerged for automated feature extraction and fusion, achieving superior performance in complex scenarios at the cost of increased training demands. Fusion can also occur at different levels: pixel-level for direct intensity combination, feature-level for edge or texture integration, and decision-level for higher-level symbolic merging.

Key applications of image fusion span multiple fields, driven by the need for enhanced data representation. In remote sensing, it merges panchromatic and multispectral satellite images to improve spatial resolution and detail detection for tasks such as land-cover mapping and change detection. Medical imaging benefits from fusing modalities like computed tomography (CT) and magnetic resonance imaging (MRI) to provide comprehensive anatomical and functional insights, aiding in precise diagnosis and treatment planning. Other domains include surveillance, where infrared and visible light images are combined for robust detection in low-visibility conditions, and multi-focus photography to produce all-in-focus composites. In security applications, such as X-ray luggage screening, fusion enhances threat identification by integrating multi-view or multi-energy scans.

Performance evaluation of image fusion relies on metrics like mutual information for information content, average gradient for edge sharpness, and the structural similarity index for perceptual quality, with recent studies highlighting the strengths of hybrid models in achieving balanced results across these measures. Originating from early works like the Brovey transform, introduced in 1987 for pansharpening multispectral imagery, the field has seen exponential growth, with over 21,000 research papers published by 2019, reflecting its evolving role in interdisciplinary technologies.

Fundamentals

Definition and Principles

Image fusion is the technique of combining two or more images from different sources or modalities, such as visible light, infrared, magnetic resonance imaging (MRI), or computed tomography (CT), into a single fused image that ideally contains all relevant information from the inputs while reducing redundancy and noise. This process merges salient features to produce an output with enhanced visual quality, interpretability, and informational content compared to any individual source image. The key principles underlying image fusion involve the integration of complementary data from diverse sources to enhance overall scene understanding, the reduction of redundant information to streamline storage and processing, and the preservation of critical details such as spatial structure and spectral fidelity. These principles ensure that the fused result provides a more comprehensive representation without introducing distortions or loss of essential features.

Image fusion can be categorized into types based on input characteristics: multi-sensor fusion, which combines data from different imaging sensors (e.g., optical and infrared); multi-temporal fusion, involving images captured at different times by the same sensor to detect changes; and multi-view fusion, which integrates multiple perspectives of the same scene for depth estimation or resolution enhancement. At its mathematical foundation, fusion is modeled as I_f = F(I_1, I_2, \dots, I_n), where I_f denotes the fused image, I_k (for k = 1 to n) are the input images, and F represents the fusion operator that applies rules or transformations to combine the inputs. High-level approaches include pixel-based fusion, which directly operates on individual pixel values for straightforward combination; transform-domain methods, which decompose images into frequency or multiresolution components before recombination; and sparse representation techniques, which use overcomplete dictionaries to sparsely encode and merge features. Fusion outputs are tailored to specific purposes, such as generating images optimized for human perception through enhanced contrast and detail for intuitive interpretation, or for machine analysis via improved feature extraction that supports automated tasks like object detection or segmentation.
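To make the operator F concrete, the following minimal sketch (Python with NumPy; the function name and weight choice are illustrative, not a standard API) implements pixel-based fusion as a weighted sum of pre-registered, equally sized 8-bit input images.

    import numpy as np

    def fuse_weighted(images, weights):
        """Simple instance of the fusion operator F: a pixel-wise weighted
        sum of pre-registered, equally sized 8-bit input images."""
        images = [img.astype(float) for img in images]
        weights = np.asarray(weights, dtype=float)
        weights = weights / weights.sum()          # enforce sum-to-one weights
        fused = sum(w * img for w, img in zip(weights, images))
        return np.clip(fused, 0, 255).astype(np.uint8)

With equal weights this reduces to simple averaging; unequal weights emphasize the source judged more informative.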

Historical Development

Early concepts of sensor integration in military applications date back to the 1960s and 1970s, laying groundwork for later fusion techniques. Digital image fusion techniques emerged in the 1980s, with early methods applied to multispectral remote sensing, including principal component analysis (PCA) for merging spectral bands. One of the earliest methods was the Brovey transform, introduced in 1987 for merging multispectral data. In the 1990s, advancements shifted toward multi-resolution techniques, building on foundational pyramid decompositions introduced by Burt and Adelson in 1983, which were adapted for fusion tasks to preserve edges and details across scales. Burt's 1992 gradient pyramid method further enabled pattern-selective fusion. Wavelet-based methods gained popularity in the mid-1990s, with Li et al.'s 1995 work on multi-sensor fusion using wavelet transforms providing a robust framework for handling multi-scale features in satellite and medical images. Remote sensing data processing advanced during this era, supported by Landsat imagery and related satellite programs.

The 2000s saw expansion into hybrid methods incorporating machine learning, such as the neural network-based fusion proposed by Li et al. in 2002 for multifocus images. Key contributions included Piella's 2003 quality metrics, which introduced saliency-weighted structural preservation measures to evaluate fusion performance without reference images. Sparse representation models emerged in the late 2000s, with early work in 2009 applying them to multifocus image fusion for robust feature extraction in noisy environments. Defense agencies advanced military applications through funded research on tactical image fusion, emphasizing real-time integration for navigation, surveillance, and guidance systems. From the 2010s onward, the deep learning era transformed image fusion, with convolutional neural network (CNN)-based methods like DenseFuse in 2019 enabling end-to-end learning for infrared-visible fusion via dense blocks for feature preservation. Transformer models, post-2020, introduced attention mechanisms for global context capture, as in FuseFormer (2024), improving fusion in multi-modal and autonomous systems. Trends in the 2020s emphasize real-time processing for autonomous vehicles and robotics, driven by ongoing research and industry initiatives in sensor integration.

Motivation and Benefits

Enhancing Information Content

Image fusion addresses the inherent limitations of single-modality images, such as low contrast in visible light imagery under poor illumination or high noise in infrared sensors, by integrating complementary data to preserve salient features while reducing artifacts like blurring or false positives. This synergy enables the fused output to capture essential details that individual inputs cannot, such as thermal signatures combined with structural edges, thereby enhancing overall data interpretability without introducing distortions. From an information theory perspective, fusion maximizes mutual information between input modalities to quantify and exploit statistical dependencies, ensuring the fused image retains non-redundant details. Entropy-based measures further demonstrate this gain, where higher values indicate increased information richness and reduced uncertainty in the output. For instance, Tsallis entropy generalizations of Shannon entropy have been used to evaluate fusion quality. These theoretical enhancements benefit human observers by aligning with the visual system's sensitivity to contrast and edges, where fusion improves boundary definition through multi-modal reinforcement and enhances scene representation via combined spectral details. In theoretical models, such as those employing discrete wavelet transforms, fusion yields quantitative improvements in signal-to-noise ratio (SNR) and detail retention, establishing superior fidelity over single inputs.
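As a rough illustration of the entropy-based measures described above, the sketch below computes the Shannon entropy of an image histogram with NumPy; comparing the entropy of a fused image against its sources gives a simple, assumption-laden indicator of information gain (it ignores spatial structure and noise).

    import numpy as np

    def shannon_entropy(img, bins=256):
        """Shannon entropy (in bits) of an image's intensity histogram."""
        counts, _ = np.histogram(img.ravel(), bins=bins)
        p = counts[counts > 0] / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    # entropy gain of a fused image over its sources (illustrative usage)
    # gain = shannon_entropy(fused) - max(shannon_entropy(a), shannon_entropy(b))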

Practical Advantages

Image fusion offers significant efficiency gains by reducing data volume through the elimination of redundancies inherent in multi-sensor inputs. For instance, fusing multi-spectral bands into a single RGB image preserves essential visual information, enabling more efficient storage and transmission in bandwidth-constrained environments. This facilitates faster processing in real-time systems, such as autonomous vehicles, where fusion supports efficient data handling compared to separate streams.

Usability improvements from image fusion enhance operator decision-making by providing clearer, more intuitive visuals that integrate complementary data sources. In low-light conditions, fusing visible and infrared imagery results in enhanced contrast and detail visibility, allowing personnel to identify threats with greater accuracy and reduced cognitive workload. Additionally, fused outputs are compatible with standard displays and software tools, eliminating the need for specialized hardware and streamlining operational workflows.

Cost and resource benefits arise from the reduced reliance on multiple sensors or repeated acquisitions, as fusion leverages existing data streams to achieve comprehensive coverage. In embedded systems, such as portable medical devices, image fusion contributes to lower power consumption through optimized processing pipelines that minimize redundant computations. This efficiency translates to extended battery life and lower operational costs in resource-limited deployments.

Reliability aspects of image fusion include increased robustness to sensor failures by incorporating complementary data from redundant sources, ensuring continuous operation even if one sensor degrades. For example, in drone-based surveillance, fusing optical and thermal images allows fault-tolerant navigation and target detection, maintaining high accuracy despite partial sensor outages. Such redundancy is critical for safety-critical applications, where fusion mitigates single-point failures and enhances overall system dependability.

Methods and Techniques

Pixel-Level Fusion

Pixel-level fusion operates directly on the intensity values of pixels from multiple source images to produce a single composite image that integrates complementary information without prior feature extraction or segmentation. This approach is foundational in image fusion, as it preserves raw spatial details and is suitable for applications requiring high-resolution outputs, such as medical diagnostics and remote sensing. The process typically involves aligning the input images via registration and then applying fusion rules to combine pixel data, resulting in an output that enhances overall informativeness for human observers or machine analysis.

In the spatial domain, methods manipulate pixel intensities directly, offering simplicity and low computational overhead. Basic techniques include simple averaging, where the fused pixel value is the arithmetic mean of corresponding pixels from the source images, though this often leads to reduced contrast and blurred details. Weighted summation improves upon this by assigning coefficients to each input image based on their relative importance, expressed as F(x,y) = W_1 I_1(x,y) + W_2 I_2(x,y), where I_1 and I_2 are the input images, W_1 and W_2 are weights summing to 1, and (x,y) denotes pixel coordinates; this allows emphasis on salient regions but requires careful weight selection to avoid artifacts. Principal component analysis (PCA) extends these by decorrelating the input data through orthogonal transformation and fusing the principal components (e.g., selecting the highest-variance component as the base and averaging others), which efficiently captures variance in multispectral images while maintaining spatial fidelity.

Transform-domain methods decompose images into frequency components before fusion, enabling selective integration of low- and high-frequency information to better preserve edges and textures. The discrete wavelet transform (DWT) is a prominent example, where source images are decomposed into approximation (low-frequency) and detail (high-frequency) subbands; fusion rules such as maximum absolute value selection for high-frequency bands and averaging for low-frequency bands are applied, followed by inverse transformation to reconstruct the output. This approach effectively reduces blurring compared to spatial methods. Pyramid decompositions, including the Laplacian pyramid, represent images as multi-resolution layers capturing band-pass details; fusion combines corresponding layers (e.g., via maximum selection for detail levels and averaging for the Gaussian base), originating from compact image coding techniques adapted for integration of multi-sensor data. The gradient pyramid variant emphasizes edge information by decomposing images into gradient magnitudes across scales, fusing them to enhance sharpness while mitigating blurring in the final reconstruction.

Hybrid spatial-transform methods combine domain-specific strengths, such as gradient-based weighting within wavelet or pyramid frameworks, to prioritize edge preservation during fusion. For instance, gradients from source images guide weight maps that modulate contributions in a multi-scale decomposition, ensuring robust detail transfer without excessive smoothing. These techniques bridge the gap between direct pixel operations and transform-domain analysis for improved artifact suppression.

Deep learning-based methods have recently gained prominence in pixel-level fusion, leveraging convolutional neural networks (CNNs) and generative adversarial networks (GANs) for end-to-end learning of fusion mappings. These approaches automatically extract and combine features from input images, often outperforming traditional methods in preserving details and reducing artifacts, particularly in medical and remote sensing applications. For example, CNN-based frameworks can learn pixel-wise fusion weights directly from data, while GANs generate realistic fused outputs through adversarial training. As of 2024, these methods demonstrate superior performance in multimodal fusion tasks.

Pixel-level fusion excels in computational efficiency, particularly spatial methods like averaging and PCA, which process data rapidly without decomposition overhead, making them ideal for real-time applications. However, it is susceptible to noise amplification, especially in averaging-based approaches where uncorrelated noise from the inputs accumulates, and to misalignment errors, as even slight registration inaccuracies propagate distortions across the fused output; transform methods mitigate some noise through subband selection but increase computational cost.
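The DWT fusion rule described above (average the approximation subband, take the maximum-absolute detail coefficients) can be sketched as follows, assuming two pre-registered grayscale NumPy arrays and the PyWavelets (pywt) library; the wavelet choice and decomposition level are illustrative defaults, not prescribed values.

    import numpy as np
    import pywt

    def dwt_fuse(img1, img2, wavelet="db2", level=2):
        """Fuse two pre-registered grayscale images with a DWT rule:
        average the low-frequency approximation, keep the max-absolute
        coefficient in each high-frequency detail subband."""
        c1 = pywt.wavedec2(img1.astype(float), wavelet, level=level)
        c2 = pywt.wavedec2(img2.astype(float), wavelet, level=level)

        fused = [(c1[0] + c2[0]) / 2.0]                      # approximation: averaging
        for bands1, bands2 in zip(c1[1:], c2[1:]):
            fused.append(tuple(
                np.where(np.abs(a) >= np.abs(b), a, b)       # detail: max-abs selection
                for a, b in zip(bands1, bands2)
            ))
        rec = pywt.waverec2(fused, wavelet)
        return rec[:img1.shape[0], :img1.shape[1]]           # crop padding from reconstruction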

Feature-Level and Decision-Level Fusion

Feature-level fusion involves extracting salient features from input images, such as edges or textures, and then combining these intermediate representations to form a unified feature set for subsequent processing. This approach operates at an abstraction level higher than direct pixel manipulation, allowing for the integration of meaningful structural information from multiple sources. Common feature extraction techniques include the Canny edge detector, which identifies boundaries by applying Gaussian smoothing followed by gradient computation and non-maximum suppression, and Gabor filters, which capture texture patterns through oriented wavelet-like kernels tuned to specific frequencies and directions. These features are then merged using methods like Kalman filtering for dynamic scenarios, where the filter recursively estimates feature states over time by predicting motion and correcting with new observations, or support vector machines (SVMs) for classification-based merging, which optimize a separating hyperplane to combine feature vectors from different modalities.

Decision-level fusion, in contrast, processes each input image independently to derive high-level decisions, such as classifications or detections, before integrating these outputs into a final consensus. This level emphasizes semantic interpretation and is particularly suited to scenarios involving uncertainty or conflicting evidence. A key method is Bayesian inference, which updates posterior probabilities by marginalizing over features:

P(\text{class} \mid \text{data}) = \int P(\text{class} \mid \text{feature}) \, P(\text{feature} \mid \text{data}) \, d\text{feature}

This formulation combines probabilistic decisions from individual analyses to yield a joint likelihood, enhancing reliability in multi-sensor environments. Another prominent technique is Dempster-Shafer theory, which handles uncertainty through belief functions that assign masses to subsets of hypotheses, allowing the combination of partial or conflicting evidence without assuming completeness, as demonstrated in medical applications where it merges diagnostic probabilities from varied scans.

In multi-modal integration, feature-level and decision-level fusion enable the combination of segmented regions or object detections from disparate sensors, such as aligning contours from infrared and visible imagery to refine target boundaries or aggregating bounding boxes from radar and optical detections to improve localization accuracy in surveillance tasks. For instance, segmented tumor regions from CT and MRI can be fused at the decision level to produce a unified probabilistic map for clinical assessment. These fusion levels offer advantages including greater noise immunity, as abstracted features or decisions are less sensitive to low-level artifacts, and enhanced semantic understanding, which supports complex interpretations in dynamic or occluded environments. However, they introduce higher computational complexity due to the need for robust feature extraction and decision algorithms, and there is potential for information loss if key details are overlooked during abstraction.
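As an illustration of decision-level combination, the sketch below applies Dempster's rule of combination to two hypothetical basic probability assignments (say, from an infrared and an optical detector); the hypothesis names and mass values are invented for the example and not drawn from any cited system.

    def dempster_combine(m1, m2):
        """Combine two basic probability assignments (dicts mapping frozensets
        of hypotheses to masses) with Dempster's rule of combination."""
        combined, conflict = {}, 0.0
        for a, ma in m1.items():
            for b, mb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + ma * mb
                else:
                    conflict += ma * mb                      # mass assigned to the empty set
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # hypothetical masses from two detectors over the frame {target, clutter}
    theta = frozenset({"target", "clutter"})
    m_ir  = {frozenset({"target"}): 0.6, frozenset({"clutter"}): 0.1, theta: 0.3}
    m_opt = {frozenset({"target"}): 0.5, frozenset({"clutter"}): 0.2, theta: 0.3}
    print(dempster_combine(m_ir, m_opt))

The combined masses concentrate on hypotheses supported by both sensors while the normalization discounts conflicting evidence, which is the behaviour described in the paragraph above.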

Applications

Medical Imaging

In medical imaging, image fusion plays a crucial role in integrating complementary data from multiple modalities to enhance diagnostic accuracy and treatment precision. Anatomical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) provide detailed structural information, while functional modalities like positron emission tomography (PET) and single-photon emission computed tomography (SPECT) reveal metabolic and physiological activities, allowing for improved tumor localization when fused. For instance, fusing PET with CT or MRI enables precise identification of tumor boundaries and metabolic hotspots, which is essential in oncology, where standalone modalities may miss subtle abnormalities. This is particularly valuable for pre-surgical planning, where multi-modal overlays combine anatomical and functional data to create comprehensive visualizations for guiding interventions. In radiotherapy, such overlays facilitate better delineation of the gross tumor volume (GTV) and clinical target volume (CTV), leading to more accurate treatment plans compared to single-modality imaging. A historical milestone in this domain was the development of PET-CT hybrid scanners in the late 1990s, proposed by Townsend and colleagues, which integrated PET and CT in a single device to streamline workflow and improve clinical diagnosis.

Registration of images from different modalities presents unique challenges, particularly due to patient motion such as breathing or organ shifts, which can misalign anatomical and functional data. Mutual information-based algorithms address these issues by maximizing statistical dependence between images, enabling robust non-rigid alignment even with deformable tissues. An example is the fusion of ultrasound with MRI for real-time biopsy guidance, where elastic registration compensates for probe-induced deformations to target suspicious lesions accurately, as demonstrated in targeted prostate biopsy procedures.

The benefits of image fusion in medical applications include enhanced accuracy in radiotherapy planning, with studies showing improved target delineation that can modify patient management in 20-30% of cases by refining tumor margins and sparing healthy tissue. Additionally, fusion techniques reduce overall patient burden by minimizing the need for multiple separate scans, as hybrid systems and software registration allow comprehensive assessments from fewer acquisitions. Case studies highlight these advantages; for example, fusing diffusion-weighted MRI (DWI), which detects infarcted tissue, with perfusion-weighted imaging (PWI), which identifies hypoperfused but salvageable areas, improves acute stroke assessment by automatically segmenting lesions and predicting outcomes within 4.5 hours of onset. This multimodal approach supports timely treatment decisions, outperforming individual modalities in diagnostic accuracy.

Remote Sensing and Earth Observation

In remote sensing and Earth observation, image fusion plays a crucial role in integrating data from multiple sensors to overcome limitations such as cloud cover, varying spatial resolutions, and differing spectral ranges, enabling robust environmental monitoring. A primary application is land cover classification, where optical imagery from satellites like Landsat is fused with synthetic aperture radar (SAR) data to provide all-weather analysis. For instance, fusing Landsat-8 multispectral bands with SAR texture features enhances urban mapping by combining spectral information with structural details unaffected by atmospheric conditions. Another key application is disaster monitoring, particularly flood mapping, achieved through the fusion of SAR and optical data. This integration allows for rapid detection of inundated areas by leveraging SAR's cloud-penetrating capabilities together with optical imagery's high spatial resolution, facilitating near-real-time assessment during events like heavy rainfall or hurricanes. For example, fusing Sentinel-1 SAR and Sentinel-2 optical imagery has been used to delineate permanent and temporary water bodies with high accuracy, supporting emergency response in regions prone to flooding.

Techniques in this domain include pansharpening for resolution enhancement, such as the Gram-Schmidt method, a pixel-level approach that transforms multispectral bands into a higher-resolution panchromatic equivalent while preserving spectral fidelity. Originally developed for Landsat data, pansharpening is widely applied to sharpen moderate-resolution imaging spectroradiometer (MODIS) or Landsat images for detailed land surface analysis. Temporal fusion methods further enable change detection over time by aligning and merging multi-date images from the same or different sensors, capturing subtle environmental shifts such as deforestation or urban expansion.

The benefits of these fusions include enhanced derivation of vegetation indices, such as the normalized difference vegetation index (NDVI), from combined spectral bands, which improves monitoring of crop health and ecosystem productivity. Fused datasets also boost accuracy in crop yield prediction; multi-sensor integration has shown improvements over single-sensor approaches, aiding agricultural management decisions. Notable examples include NASA's fusion of MODIS and ASTER data for high-resolution thermal mapping of land surface temperatures, which supports studies on urban heat islands and volcanic activity. Similarly, the European Union's Copernicus programme has incorporated image fusion workflows post-2015, such as Sentinel-1 and Sentinel-2 integrations, to enhance land monitoring and emergency management services across Europe and beyond.
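For a concrete, if simplified, view of pansharpening, the sketch below implements the Brovey transform mentioned in the article's introduction rather than the Gram-Schmidt method (whose full procedure is more involved); it assumes the multispectral bands have already been resampled to the panchromatic grid.

    import numpy as np

    def brovey_pansharpen(ms, pan, eps=1e-6):
        """Brovey-transform pansharpening: rescale each (resampled) multispectral
        band by the ratio of the panchromatic band to the per-pixel band mean.
        ms:  (H, W, B) multispectral image already resampled to the pan grid
        pan: (H, W) higher-resolution panchromatic band"""
        ms = ms.astype(float)
        intensity = ms.mean(axis=2) + eps          # crude intensity estimate per pixel
        return ms * (pan[..., None] / intensity[..., None])

This injects the panchromatic spatial detail into every band while roughly preserving the ratios between bands, which is why Brovey-type methods are fast but can distort absolute spectral values.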

Military and Surveillance

In military and surveillance applications, image fusion plays a critical role in enhancing situational awareness and operational effectiveness through multi-sensor integration on platforms such as unmanned aerial vehicles (UAVs) and ground-based systems. A primary use is night vision enhancement, where infrared (IR) and visible light images from drones are fused to provide operators with comprehensive views in low-light conditions, combining thermal signatures for heat detection with visible details for context and identification. This approach is particularly valuable for tactical operations, enabling pilots and ground forces to navigate and engage targets under darkness or adverse weather. Similarly, in urban environments, radar-optical fusion facilitates threat identification by merging radar's ability to penetrate obstacles and detect motion with optical imagery's high-resolution details, allowing for precise localization of potential adversaries amid cluttered cityscapes.

Key techniques in these contexts include decision-level fusion for command-and-control systems, where processed data from multiple sensors are combined at a higher level to support rapid decision-making on dynamic battlefields. For instance, electro-optical/infrared (EO/IR) fusion in UAVs supports automatic target recognition (ATR) by aligning and integrating data streams, often using algorithms such as multi-scale transforms or region-based segmentation to prioritize salient features such as vehicle heat profiles or human movement. Feature-level methods are occasionally used for initial target extraction, fusing mid-level representations like edges or textures before higher-level decisions. These techniques enable seamless integration into existing sensor suites, processing multispectral inputs at rates sufficient for live feeds.

The benefits of such fusions are substantial, including improved detection rates in low-visibility scenarios; for example, multispectral IR-visible fusion has demonstrated up to a 9% increase in target detection accuracy compared to single-modality imaging. In border surveillance, sensor fusion reduces false alarms by up to 95% through complementary validation of detections across thermal, RGB, and motion sensors, minimizing operator overload and enhancing response times to genuine intrusions. Developments in this domain trace back to DARPA's early initiatives, such as the Dynamic Database (DDB) program, which focused on signal-level change detection and time-critical target identification using multi-sensor data. Post-2015, integration with artificial intelligence has advanced predictive analytics, leveraging machine learning for proactive threat forecasting from fused datasets, as explored in efforts to revolutionize military information processing.

Other Applications

Image fusion extends to multi-focus photography, where multiple images with varying focal depths are combined to generate an all-in-focus output, improving clarity in photography, microscopy, and visual inspection tasks. In security applications, such as airport luggage screening, fusion of multi-energy or multi-view X-ray images enhances the detection of concealed threats by differentiating materials and revealing hidden structures more effectively than single-energy scans.
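A minimal multi-focus fusion sketch, assuming two pre-registered grayscale NumPy arrays and SciPy: each output pixel is taken from whichever source has the stronger local focus measure (here, a smoothed absolute Laplacian response); the window size is an illustrative choice rather than a standard value.

    import numpy as np
    from scipy import ndimage

    def multifocus_fuse(img1, img2, window=9):
        """Per-pixel selection of the sharper source image, using a smoothed
        absolute Laplacian as the local focus measure."""
        def focus(img):
            lap = np.abs(ndimage.laplace(img.astype(float)))   # high response at in-focus detail
            return ndimage.uniform_filter(lap, size=window)    # smooth to reduce speckle in the map
        mask = focus(img1) >= focus(img2)
        return np.where(mask, img1, img2)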

Evaluation and Challenges

Performance Metrics

Evaluating the quality of fused images requires a combination of metrics that quantify information retention, structural similarity, and edge preservation, as well as subjective assessments to align with human perception. Objective metrics are particularly valuable in image fusion because they provide reproducible measures without relying on subjective judgment, though they often assume ideal fusion outcomes in the absence of reference images.

Mutual information (MI) serves as a key objective metric for assessing information gain in the fused image relative to the input images, quantifying the shared information content to evaluate how effectively complementary details are combined. Introduced as a fusion performance measure, MI is computed as the sum of the mutual information between the fused image and each source image, with higher values indicating greater information transfer and fusion quality. This metric is grounded in information theory and has been widely adopted for its ability to capture entropy-based improvements in fused outputs. The structural similarity index (SSIM) evaluates perceptual quality by comparing luminance, contrast, and structural components between source and fused images, emphasizing how well the fusion preserves visually salient features akin to human visual system responses. SSIM values range from -1 to 1, with values closer to 1 signifying high structural fidelity, making it suitable for applications where visual coherence is critical. While originally developed for general image quality assessment, SSIM has become a standard in fusion evaluation due to its correlation with perceived quality. For edge preservation, the metric Q_{AB/F} measures the degree to which edge information from input images A and B is transferred to the fused image F, using a weighted combination of edge strength and orientation preservation across local regions. Defined as an average of local quality values, Q_{AB/F} ranges from 0 to 1, with higher scores reflecting better retention of salient edges without artifacts; it is particularly effective for no-reference scenarios in multi-sensor fusion. This gradient-based approach highlights the importance of sharp feature integration in fused results.

No-reference metrics are essential when a ground-truth reference is unavailable, as in most real-world tasks. Gradient-based measures like Q_E, which assess saliency through edge strength and preservation, provide insights into how well the fused image maintains prominent features without input comparisons. Similarly, visual information fidelity (VIF) quantifies the fidelity of information channels in the fused image relative to the sources, modeling human perception via Gaussian scale mixtures to predict information losses. VIF scores above 1 indicate enhanced visual information, while values below 1 suggest degradation, offering a robust perceptual no-reference tool. Subjective evaluation complements objective metrics through human observer studies, where mean opinion scores (MOS) aggregate ratings from multiple viewers on a fixed scale (e.g., 1-5) to gauge overall fusion acceptability, visual clarity, and absence of artifacts. MOS studies reveal correlations with metrics like SSIM and VIF but highlight perceptual nuances that automated measures may overlook, such as naturalness in medical or surveillance applications.

A primary challenge with these metrics is the lack of ground truth for real-world data, complicating validation and leading to reliance on simulated benchmarks. The 2006 Eden Project dataset, featuring multi-modal sensor pairs recorded under varying illumination conditions, serves as a standard benchmark for testing fusion performance across objective and subjective evaluations.
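The MI-based fusion measure described above can be sketched as follows, using joint histograms with NumPy; the bin count is an illustrative parameter, and this simplified version omits the normalization variants found in the literature.

    import numpy as np

    def mutual_information(x, y, bins=64):
        """Mutual information (bits) between two images, estimated from their
        joint intensity histogram."""
        hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0                                   # avoid log(0) terms
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    def fusion_mi(fused, src_a, src_b):
        """Fusion MI: information the fused image shares with each source,
        summed over the two sources (higher is taken as better transfer)."""
        return mutual_information(fused, src_a) + mutual_information(fused, src_b)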

Limitations and Future Directions

Image fusion techniques face several inherent limitations that can compromise their effectiveness. Misregistration errors, arising from inaccuracies in aligning images captured at different times or by varying sensors, are particularly problematic in dynamic scenes, leading to artifacts and reduced fusion quality. Computational demands pose another significant challenge, especially when processing high-resolution data, where deep learning-based methods require substantial resources, often hindering real-time applications. Handling heterogeneous modalities, such as fusing color images with depth maps, introduces further difficulties due to differences in spatial resolution, spectral characteristics, and noise profiles, which can result in information loss or distortion during integration.

Ethical and security concerns also underscore the need for cautious deployment of image fusion technologies. In surveillance applications, fusing multi-sensor data raises privacy issues, as combined imagery can enable detailed tracking of individuals, potentially violating data protection regulations. Additionally, AI-driven fusion methods are susceptible to biases inherited from training datasets, which may amplify discriminatory outcomes in applications like security screening or medical diagnostics.

Looking ahead, future directions in image fusion emphasize innovative integrations to address these challenges. The incorporation of generative adversarial networks (GANs) has enabled synthetic data generation for training fusion models, improving robustness in data-scarce scenarios. Edge computing architectures are emerging to facilitate fusion on mobile devices, reducing latency and enabling on-device processing for applications like autonomous vehicles. Quantum-inspired algorithms offer promise for ultra-high efficiency, leveraging quantum principles to handle complex computations in multi-modal fusion tasks more rapidly than classical methods.

Persistent research gaps highlight opportunities for advancement. The absence of standardized benchmark datasets for evaluating deep learning-based fusion limits comparability and reproducibility across studies. Furthermore, developing adaptive fusion techniques that dynamically adjust to variable environments, such as changing lighting or motion, remains an underexplored area for enhancing versatility in real-world deployments.
