
Data fusion

Data fusion is the process of combining data originating from multiple sources to produce information that is more consistent, accurate, and useful than could be achieved by the use of a single source alone. This integration enhances the quality, relevance, and reliability of the resulting information, often addressing challenges such as uncertainty, noise, and conflicts among inputs. Originating primarily from defense applications in the late 20th century, data fusion was formalized through models like the Joint Directors of Laboratories (JDL) framework in 1991, which describes it as a multi-level process involving the association, correlation, and combination of data from single and multiple sources to achieve refined assessments of situations and threats. In practice, data fusion operates at various levels depending on the application context, including low-level fusion (e.g., raw data association from sensors), mid-level fusion (e.g., state estimation for tracking objects), and high-level fusion (e.g., threat and situation assessment). Common techniques encompass probabilistic methods like the Kalman filter for state estimation, fuzzy logic for handling uncertainty, and Dempster-Shafer theory for evidential reasoning in decision fusion. In database and information integration contexts, it focuses on merging records representing the same real-world entities into a unified, clean representation, resolving conflicts through relational operators and advanced algorithms. Data fusion finds broad applications across domains such as multisensor networks for target tracking in defense and surveillance, image processing for enhanced detection, and machine learning systems where early fusion (combining features pre-classification) or late fusion (merging classifier outputs) improves robustness against noisy data. Recent advancements incorporate deep learning and hybrid approaches, such as copula-based methods for correlated decisions, enabling scalable fusion in environments like autonomous vehicles and healthcare diagnostics. Overall, it plays a critical role in enabling informed decision-making by leveraging complementary strengths from heterogeneous sources.

Fundamentals

Definition and Principles

Data fusion is defined as the process of combining data from multiple disparate sources, such as sensors and databases, to achieve improved accuracy, consistency, and comprehensiveness in the resulting information compared to what any single source can provide alone. This integration leverages diverse inputs to generate inferences that are more reliable and informative, often in real-time environments like surveillance or autonomous systems. At its core, data fusion operates on several key principles related to the relationships among data sources. Complementarity arises when sources provide unique, non-overlapping information, filling gaps that individual inputs cannot address. Redundancy involves overlapping data from multiple sources, which enables error detection and reduction by cross-verifying information for greater reliability. Cooperation, or more precisely cooperative fusion, accounts for interdependencies between sources, allowing fusion algorithms to exploit these relationships for enhanced estimation and prediction. The process is typically structured across hierarchical levels, as outlined in foundational frameworks like the JDL model, which serves as a prerequisite for understanding fusion operations. These levels include: Level 0 for sub-object data assessment (e.g., signal refinement); Level 1 for object assessment (e.g., tracking and identification); Level 2 for situation assessment (e.g., relational context); Level 3 for impact or threat assessment (e.g., evaluating consequences); Level 4 for process refinement (e.g., resource optimization); and Level 5 for user refinement (e.g., human-in-the-loop adjustments). By progressing through these levels, fusion systematically builds from raw data to high-level insights. The primary benefits of data fusion include enhanced accuracy through combined measurements, reduced uncertainty via redundancy and noise handling, and improved decision-making in complex scenarios. Unlike data integration, which primarily merges datasets for unified storage and querying, data fusion emphasizes real-time synthesis specifically tailored for inference and actionable outcomes.

Historical Overview

Data fusion originated in the 1970s as a U.S. defense effort to integrate data from multiple sensors, such as radar and sonar, for improved target detection and tracking in military systems. This approach addressed the need to combine disparate sensor inputs to counter threats, for example detecting enemy submarines through multi-sonar signal integration. The term "data fusion" was formally coined in 1985 by F. E. White in a lexicon developed for the Joint Directors of Laboratories (JDL) to standardize terminology in multisensor integration. During the 1980s, defense programs advanced data fusion through initiatives like the Tri-Service Data Fusion Symposium, fostering collaboration on surveillance systems across U.S. military branches. In the early 1990s, the JDL formalized an influential functional model to structure data fusion processes, emphasizing levels of abstraction from raw data to decision support. The 2000s marked an expansion to civilian applications, particularly in robotics, where fusion techniques enabled collaborative exploration and precise navigation in unstructured environments. By the 2010s, data fusion integrated with big data and machine learning paradigms, leveraging distributed computing for handling heterogeneous datasets in real-time analytics. Key drivers of this evolution included rapid advances in computing power, which facilitated complex algorithms; sensor miniaturization, enabling deployment in compact devices; and the post-2000 surge in data volume from proliferating sources. Post-2015, a notable shift occurred toward AI-enhanced fusion, with deep learning methods combining sensor data for robust perception in dynamic settings. This was exemplified by Uber ATG's 2016 testing of self-driving Ford Fusions equipped with radar, laser scanners, and cameras for fused environmental mapping. In 2021, the ISO 23150 standard emerged (revised in 2023) to define interfaces for sensor-to-fusion communication in automated driving, promoting interoperability and standardization. These developments underscore data fusion's transition from military roots to interdisciplinary tool, grounded in principles of complementarity and redundancy for reliable decision-making.

Fusion Models and Architectures

JDL/DFIG Model

The Joint Directors of Laboratories (JDL) Data Fusion Model, originally developed in 1985 by the U.S. Department of Defense's JDL Data Fusion Sub-Panel, provided an initial framework for categorizing data fusion processes in military applications. This model evolved through revisions, notably in 1999 by Alan N. Steinberg, Christopher L. Bowman, and Franklin E. White, which expanded its scope beyond tactical scenarios to include broader fusion contexts and introduced dynamic feedback mechanisms. Further updates by the Data Fusion Information Group (DFIG) in the 2000s, particularly around 2004-2005, incorporated user refinement and addressed emerging technologies, while criticisms highlighted its initially static, sequential interpretation that limited adaptability. The JDL/DFIG model structures data fusion as a hierarchical process with six levels (0 through 5), progressing from raw signal processing to high-level decision support, emphasizing iterative refinement across levels. Level 0 (sub-object refinement or source preprocessing) focuses on estimating states from pixel-level or signal data, such as calibrating sensor inputs for accuracy. Level 1 (object assessment) involves correlating observations to estimate entity states, including kinematics, identity, and attributes, often through multi-sensor tracking algorithms. Level 2 (situation assessment) evaluates relationships among entities, such as force structures or spatial configurations, to form a contextual understanding. Level 3 (threat or impact assessment) predicts outcomes of situations, including potential threats or effects of planned actions on entities and scenarios. Level 4 (resource management or process refinement) optimizes data collection and processing, adapting sensor selection and fusion parameters based on mission needs. Level 5 (user refinement), added in the early 2000s by DFIG contributors like Erik Blasch, addresses human-centric aspects, refining information presentation for cognitive decision-making, trust, and situation awareness. Recent extensions as of 2022 incorporate AI and machine learning for enhanced Levels 4-5 in dynamic environments. In defense applications, the model has been widely adopted for multi-sensor tracking systems, such as integrating radar, infrared, and electronic intelligence data in command, control, communications, computers, and intelligence (C4I) environments to enhance situational awareness in combat scenarios. Textually, the model's diagram depicts a vertical flow: raw data from sources enters at Level 0, flows upward through sequential processing blocks for Levels 1-3, branches to Level 4 for feedback loops optimizing lower levels, and culminates in outputs to users, with bidirectional arrows illustrating iterative interactions rather than strict linearity. Despite its influence, the model's static partitioning has been criticized for blurring boundaries between levels and struggling with big data volumes or non-hierarchical processes, prompting extensions like dynamic feedback loops to better handle large-scale, distributed systems.
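
The level structure and feedback flow described above can be made concrete in code. The following Python sketch is not part of the JDL specification: the level functions, the detection threshold, and the feedback rule are invented stand-ins for real estimation, assessment, and resource-management algorithms, shown only to illustrate the iterative Level 0-4 cycle.

```python
import random

def level0_preprocess(raw):
    """Level 0: sub-object refinement -- smooth raw samples with a 3-point mean."""
    out = []
    for i in range(len(raw)):
        window = raw[max(0, i - 1):i + 2]
        out.append(sum(window) / len(window))
    return out

def level1_objects(signal, threshold):
    """Level 1: object assessment -- declare detections above a threshold."""
    return [i for i, v in enumerate(signal) if v > threshold]

def level2_situation(detections):
    """Level 2: situation assessment -- summarize relations among detections."""
    return {"num_objects": len(detections)}

def level3_impact(situation):
    """Level 3: impact assessment -- crude threat score from object density."""
    return min(1.0, situation["num_objects"] / 10)

def level4_refine(threat, threshold):
    """Level 4: process refinement -- feed back a lower detection threshold
    when the assessed threat is high, adapting Level 1 on the next cycle."""
    return 0.9 * threshold if threat > 0.5 else threshold

threshold = 0.8
raw = [random.random() for _ in range(100)]
for cycle in range(3):  # iterative refinement across fusion cycles
    signal = level0_preprocess(raw)
    detections = level1_objects(signal, threshold)
    situation = level2_situation(detections)
    threat = level3_impact(situation)
    threshold = level4_refine(threat, threshold)
    print(f"cycle {cycle}: objects={situation['num_objects']} "
          f"threat={threat:.2f} next threshold={threshold:.3f}")
```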

Alternative Frameworks

While the JDL/DFIG model remains a dominant paradigm for structuring data fusion processes, several alternative architectures have emerged to address its limitations in flexibility, integration with modern computing paradigms, and adaptability to dynamic environments. These alternatives offer distinct approaches to organizing fusion activities, often prioritizing modularity, iterative processing, or service-oriented designs suitable for specific domains like robotics or cloud-based applications. Early contributions include hierarchical structures like that proposed by R.C. Luo and M.G. Kay in their 1992 chapter on data fusion in robotics and machine intelligence, which describes sequential abstraction from raw data to symbolic levels without rigid feedback, suiting static scenarios. Building on such foundations, the Omnibus Model, proposed by Bedworth and O'Brien in 1999, integrates elements of the JDL framework with cyclic processing drawn from Boyd's OODA loop to enable adaptive fusion processes. It features a dual-perspective structure, pairing a cyclic view of operational flow with a layered view for conceptual organization, allowing dynamic reconfiguration of fusion tasks based on contextual feedback, which enhances adaptability in complex, uncertain environments. This model is particularly useful over JDL when fusion must incorporate expert rules or evolve in real time, as demonstrated in its application to multi-agent fusion workstations. The waterfall model, described by Harris around 1997, represents another unidirectional, hierarchical progression from raw sensing to decision-making, with data flowing sequentially through levels of signal processing, feature extraction, and situation assessment without feedback, suiting static, well-defined scenarios like early sensor systems where predictability is prioritized. In contrast, models with feedback mechanisms, such as the intelligence cycle model and Boyd's OODA loop (adapted in fusion contexts from the 1990s onward), introduce iterative loops to refine fusion outputs based on higher-level assessments, enabling continuous adaptation in dynamic settings such as fault diagnosis or environmental monitoring. These mechanisms, often visualized as cyclic networks, outperform unidirectional approaches in responsiveness for real-time applications by allowing re-calibration of sensors or priorities mid-process. More recent alternatives leverage cloud computing for distributed fusion, exemplified by concepts like data fusion as a service, as explored in a 2014 framework for enterprise-scale integration. Google Cloud Data Fusion, with a beta launch in April 2019 and general availability in December 2019, embodies this paradigm as a fully managed service that orchestrates data pipelines from diverse sources using serverless execution, reducing infrastructure overhead and enabling enterprise-scale fusion without custom hardware. Such models excel over traditional frameworks like JDL in big data environments, offering elasticity for geospatial or IoT applications where fusion demands vary dynamically. Additional modern frameworks, such as the ONTology-based COmmon Operating Picture (ONTCOP) model (c. 2016 onward), emphasize semantic integration for high-level fusion in collaborative systems. The table below compares these frameworks.
| Framework | Modularity | Scalability | Domain Focus |
| --- | --- | --- | --- |
| Luo/Kay hierarchical (1992) | Moderate (sequential levels) | Moderate (hierarchical design) | Robotics, multisensor |
| Omnibus (1999) | High (knowledge integration) | High (adaptive reconfiguration) | Defense, multi-agent systems |
| Waterfall (Harris, c. 1997) | Low (linear flow) | Low (no iteration) | Static military, fault diagnosis |
| Feedback mechanisms (e.g., intelligence cycle/OODA, 1990s+) | Moderate (cyclic loops) | High (iterative refinement) | Dynamic monitoring, environmental |
| Data Fusion as a Service (2014+) | High (service-oriented) | Very high (cloud elasticity) | Geospatial, IoT, enterprise |

Techniques and Methods

Sensor and Low-Level Fusion

Sensor and low-level fusion, also known as source preprocessing in the JDL data fusion model (Level 0), involves combining raw signals or pixel-level data from multiple sensors to generate refined raw outputs, such as enhanced signals or images, often through techniques like averaging to reduce noise. This level operates at the earliest stage of the fusion process, focusing on direct integration of unprocessed sensor measurements to improve data quality without extracting higher-level features. Two primary architectural approaches are employed: centralized fusion, where all raw data from sensors are transmitted to a single processing unit for combination, and decentralized fusion, where local processing occurs at individual sensors or nodes before aggregated results are shared. In centralized methods, the fusion unit handles alignment and integration comprehensively, which can be computationally intensive but allows for globally consistent estimates. Decentralized approaches, by contrast, enable distributed computation, reducing communication needs but potentially introducing inconsistencies in local estimates. A common application is in remote sensing, where pixel data from satellite or aerial sensors are aligned spatially to create composite images for analysis. Key algorithms at this level include pixel-level averaging, which combines corresponding pixels or signal values from multiple sources to mitigate noise and enhance signal-to-noise ratios. Another technique is principal component analysis (PCA), used for dimensionality reduction by transforming high-dimensional raw sensor data into a lower-dimensional space while preserving variance, particularly useful when fusing multi-spectral images or time-series signals. A foundational method is the weighted average, where the fused output y is computed as y = \frac{\sum w_i x_i}{\sum w_i}, with x_i representing individual sensor measurements and w_i as weights derived from sensor reliability, such as inverse variance. In autonomous vehicles, an illustrative example is multi-camera video stabilization, where raw image frames from multiple onboard cameras are fused at the pixel level to compensate for motion-induced distortions, producing smoother video feeds for downstream perception. This fusion level offers advantages in preserving raw information fidelity, enabling higher-resolution outputs than single-sensor data, and providing a robust foundation for subsequent processing stages. However, challenges include registration errors due to sensor miscalibration or differing resolutions, which can propagate inaccuracies if not addressed through precise geometric transformations.
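
As a concrete instance of the weighted average above, the following sketch fuses scalar readings with inverse-variance weights; the measurement values and noise variances are invented for illustration.

```python
import numpy as np

def fuse_weighted(measurements, variances):
    """Fuse scalar sensor measurements with inverse-variance weights:
    y = sum(w_i * x_i) / sum(w_i), with w_i = 1 / sigma_i^2."""
    x = np.asarray(measurements, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * x) / np.sum(w)
    fused_var = 1.0 / np.sum(w)  # variance of the fused estimate
    return fused, fused_var

# Three sensors observe the same quantity; the least noisy one dominates.
measurements = [10.2, 9.8, 10.6]
variances = [0.04, 0.25, 1.0]   # illustrative sensor noise variances
fused, var = fuse_weighted(measurements, variances)
print(f"fused = {fused:.3f}, variance = {var:.4f}")
```

Because the weights are inverse variances, the fused variance 1/\sum w_i is never larger than the variance of the best single sensor, which is the formal sense in which low-level fusion improves on any individual source.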

Feature and Mid-Level Fusion

Feature and mid-level fusion involves the integration of extracted features from multiple data sources to create more robust and informative representations, typically corresponding to Level 1 (Object Assessment) in the JDL data fusion model, where signal and feature reports are combined to estimate object states. This process operates on intermediate representations, such as detections from images or spectral signatures from sensors, rather than raw data, enabling the formation of higher-level object hypotheses. For instance, features like motion vectors from video streams can be fused with acoustic signatures to refine target tracking. Key techniques in feature and mid-level fusion include feature selection methods, such as those based on mutual information, which quantify the relevance and redundancy between features to select the most informative subsets for fusion. Common fusion strategies encompass concatenation of feature vectors, transformation via linear or nonlinear mappings, and evidence-based combination to handle uncertainties. Dempster-Shafer theory is particularly effective for uncertainty management, allowing the combination of belief masses from disparate sources through the rule: m(A) = \sum_{B \cap C = A} m_1(B) m_2(C) where m(A) is the combined mass for hypothesis A, and the sum is over all pairs (B, C) from sources 1 and 2 whose intersection is A, normalized to account for conflict. Algorithms like Support Vector Machines (SVMs) are widely used for feature fusion, leveraging kernel functions to map fused features into higher-dimensional spaces for improved classification boundaries in multisensor scenarios. A representative example is the fusion of color and texture features in background modeling tasks, where color histograms are combined with texture descriptors like local binary patterns to enhance discrimination of objects in complex scenes, improving recognition accuracy by capturing complementary visual cues. This approach has been applied in background subtraction for video surveillance, yielding more stable models against illumination changes. The primary benefits of feature and mid-level fusion include dimensionality reduction through selective integration, which mitigates computational overhead while preserving essential information, and enhanced robustness to noise by compensating across modalities. However, challenges such as feature misalignment, arising from temporal or spatial discrepancies between sources, can degrade fusion quality if not addressed through alignment techniques.
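
Dempster's rule above can be implemented in a few lines. In this sketch, hypotheses are represented as frozensets, the mass assignments for the two sources are invented, and normalization by 1 - K redistributes the conflicting mass as described.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule: m(A) = sum over B & C == A of m1(B) * m2(C),
    normalized by 1 - K, where K is the mass on conflicting (empty) pairs."""
    combined, conflict = {}, 0.0
    for (B, mB), (C, mC) in product(m1.items(), m2.items()):
        A = B & C
        if A:  # non-empty intersection supports hypothesis A
            combined[A] = combined.get(A, 0.0) + mB * mC
        else:  # empty intersection is conflicting evidence
            conflict += mB * mC
    norm = 1.0 - conflict
    return {A: m / norm for A, m in combined.items()}

# Two sources report beliefs over hypotheses {car, truck} (illustrative masses).
CAR, TRUCK = frozenset({"car"}), frozenset({"truck"})
EITHER = CAR | TRUCK
m1 = {CAR: 0.6, EITHER: 0.4}
m2 = {CAR: 0.5, TRUCK: 0.3, EITHER: 0.2}
print(dempster_combine(m1, m2))
```

Running the example concentrates the combined mass on "car" (about 0.76), since both sources lean that way while the conflicting mass is renormalized away.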

Decision and High-Level Fusion

Decision and high-level fusion, also known as decision-level fusion, involves combining symbolic decisions or hypotheses from multiple sources to produce a unified, higher-level inference, typically corresponding to Levels 2 through 4 of the JDL data fusion model, where situation assessment, impact assessment, and process refinement occur. This approach operates at the output stage, integrating classifications or alerts rather than raw data or features, such as applying majority voting to aggregate category labels from distributed classifiers. Common methods include rule-based systems using if-then logic to resolve discrepancies, consensus techniques like the Borda count for ranking-based aggregation, and hybrid approaches that blend these for robustness. In threat detection, for instance, rule-based fusion might trigger an alarm if two or more sensors indicate intrusion, while the Borda count ranks potential threats by averaging positions across detectors to prioritize responses. Key algorithms encompass fuzzy logic for handling soft decisions with uncertainty, where membership functions quantify confidence in hypotheses before aggregation. A foundational example is majority voting, which computes confidence as the proportion of agreeing decisions: p = \max_k \frac{\sum_i I(d_i = k)}{n} where I is the indicator function, d_i is the decision from source i, k indexes classes, and n is the number of sources; this yields a probability-like score for the dominant class. Practical examples include fusing alerts from security sensors in perimeter protection, where decisions from video, infrared, and seismic detectors are combined via voting or fuzzy rules to confirm threats and reduce false positives. This fusion level offers advantages in interpretability, as symbolic decisions facilitate human oversight and explanation of outcomes. However, it faces limitations in managing conflicts, such as when sources provide contradictory soft evidence, potentially leading to suboptimal resolutions without advanced modeling.
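
A direct implementation of the majority-voting confidence score defined above; the classifier outputs are hypothetical.

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse symbolic decisions by majority vote and report a confidence
    score p = max_k (1/n) * sum_i I(d_i == k)."""
    counts = Counter(decisions)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(decisions)

# Five classifiers label the same event (illustrative outputs).
decisions = ["intrusion", "intrusion", "benign", "intrusion", "benign"]
label, confidence = majority_vote(decisions)
print(f"fused decision: {label} (confidence {confidence:.2f})")
```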

Probabilistic and Machine Learning Methods

Probabilistic methods in data fusion leverage statistical frameworks to model uncertainty and propagate beliefs across multiple data sources. Bayesian networks, which represent joint probability distributions through directed acyclic graphs, enable efficient inference for fusing heterogeneous sensor data by updating posterior probabilities based on incoming evidence. These networks are particularly effective in handling incomplete or noisy information, as demonstrated in sensor network applications where they facilitate target tracking by integrating received signal strength measurements. The Kalman filter, a cornerstone of probabilistic state estimation, recursively fuses predictions with observations to minimize estimation error in linear dynamic systems. Its update equation is given by \hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (z_k - H \hat{x}_{k|k-1}), where \hat{x}_{k|k} is the updated state estimate, \hat{x}_{k|k-1} is the predicted state, K_k is the Kalman gain, z_k is the measurement, and H is the observation matrix; this formulation has been extended for multi-sensor fusion in tracking scenarios, improving accuracy over individual sensor outputs. Machine learning approaches extend probabilistic foundations by learning fusion parameters directly from data, addressing nonlinearities and high-dimensional inputs. Neural networks enable end-to-end data fusion by jointly optimizing feature extraction and combination layers, as seen in multimodal settings where they outperform traditional modular pipelines in tasks like image-text integration. Gaussian processes, nonparametric Bayesian models relying on kernel functions for covariance, support regression-based fusion by providing uncertainty quantification in multi-fidelity data scenarios, such as combining low- and high-resolution simulations for predictive modeling. Deep learning variants further refine these capabilities for complex multimodal fusion. Convolutional neural network-recurrent neural network (CNN-RNN) hybrids process spatial and temporal data streams separately before merging representations, enhancing performance in multimodal tasks by capturing both local patterns and sequential dependencies across modalities like images and time-series. Autoencoders contribute to dimensionality reduction in fusion pipelines by learning compressed latent spaces that preserve essential information from high-dimensional inputs, facilitating efficient integration of multi-sensor streams in industrial monitoring. Semiparametric estimation techniques, incorporating kernel methods, offer flexible fusion for scenarios with partial model assumptions. Kernel-based approaches smooth nonparametric components while estimating parametric effects, enabling efficient inference in data fusion under semiparametric models by reducing variance through integrated and profiled likelihood. Recent advances incorporate transformer models for sequence-aware fusion, leveraging self-attention mechanisms to handle long-range dependencies in temporal or sequential multimodal data. Post-2022 developments, such as transformer architectures for multi-omics integration, achieve superior predictive accuracy by dynamically weighting contributions from diverse sequences, as evidenced in disease prognosis tasks with AUC improvements up to 0.89.
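
The Kalman update equation above maps directly to code. This sketch applies the measurement update sequentially to fuse two position sensors into a position-velocity state; the prediction (time-propagation) step is omitted for brevity, and the state, covariance, and noise values are illustrative rather than drawn from any cited system.

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """One Kalman measurement update:
    x = x_pred + K (z - H x_pred), with K = P_pred H^T (H P_pred H^T + R)^-1."""
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)     # fused state estimate
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred  # updated covariance
    return x, P

# Fuse two sensors' position readings into a [position, velocity] state.
x, P = np.array([0.0, 1.0]), np.eye(2)   # predicted state and covariance
H = np.array([[1.0, 0.0]])               # both sensors observe position only
for z, r in [(0.9, 0.5), (1.2, 0.1)]:    # (measurement, noise variance) pairs
    x, P = kalman_update(x, P, np.array([z]), H, np.array([[r]]))
print(x, P)
```

Processing the measurements one after another in this way is the standard sequential form of multi-sensor Kalman fusion: the second, less noisy sensor pulls the estimate harder because its gain is larger.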

Applications

Geospatial and Environmental

Data fusion plays a pivotal role in geospatial and environmental applications, particularly in remote sensing, where integrating observations from multiple sources enhances the accuracy and reliability of mapping and monitoring efforts. One key use is the fusion of optical imagery from satellites like Landsat with synthetic aperture radar (SAR) to improve land cover classification. For instance, combining Landsat multispectral data with L-band SAR observations has demonstrated superior performance in delineating land cover types in tropical regions, achieving classification accuracies up to 93.8%, compared to 91.2% using individual sensors alone. This approach leverages the complementary strengths of optical imagery for spectral detail and SAR for all-weather penetration, enabling robust assessments of vegetation, soil, and water features. Another critical application involves multi-spectral integration for disaster response, where integrating data from various sensors facilitates rapid damage assessment and emergency planning. Techniques such as fusing SAR with multispectral imagery, as applied after the 2015 Gorkha (Nepal) earthquake, provide comprehensive scene coverage by combining structural information from SAR with color and texture details from optical sources, leading to more reliable mapping of affected areas for emergency operations. Pixel-level fusion methods, including component substitution and multi-scale decomposition, are commonly employed for SAR-optical integration, as they preserve spatial and spectral fidelity at the finest resolution. Recent 2024 reviews highlight the efficacy of these multi-platform fusion approaches in environmental contexts, emphasizing hybrid methods that balance computational efficiency with output quality. In climate monitoring, data fusion supports deforestation detection through the integration of coarse-resolution satellite data like MODIS with higher-resolution sources or in-situ measurements. For example, fusing MODIS vegetation indices with Landsat imagery enables near real-time forest disturbance alerts, improving detection rates in cloud-prone areas and supporting 2020s global initiatives for forest conservation. Similarly, urban planning benefits from fusing GIS vector data with remote sensing imagery to model land use dynamics, such as identifying green spaces and impervious surfaces for sustainable development. These applications yield benefits like enhanced spatiotemporal resolution (blending 250 m MODIS with 30 m Landsat to achieve daily 30 m monitoring) and broader coverage, though challenges such as temporal misalignment between datasets require advanced spatiotemporal alignment techniques. Recent advancements incorporate deep learning to enhance data fusion for biodiversity assessment. Post-2023 studies have developed frameworks that fuse optical, radar, and LiDAR data, enabling precise habitat mapping and species distribution modeling with improved accuracy over traditional methods. For instance, convolutional neural networks applied to multi-sensor inputs have boosted ecological indicator detection in diverse ecosystems, addressing data scarcity and variability in environmental monitoring. These AI-driven fusions not only amplify predictive capabilities but also facilitate scalable solutions for global conservation efforts.

Transportation and Autonomous Systems

In transportation and autonomous systems, data fusion integrates heterogeneous sensor data to enable robust perception, localization, and control, particularly in dynamic environments where single-sensor limitations can compromise safety and efficiency. Advanced driver-assistance systems (ADAS) commonly employ fusion of LiDAR and cameras for obstacle detection, where LiDAR provides precise 3D point clouds for geometric mapping, while cameras add semantic context for object classification, achieving improvements in detection accuracy under adverse weather conditions. Similarly, traffic management benefits from fusing video feeds with connected vehicle data, allowing estimation of congestion and vehicle densities with reduced error rates compared to isolated sources. Key techniques in this domain include Kalman filter-based methods for vehicle tracking, which recursively fuse position, velocity, and acceleration data from radar and cameras to predict trajectories with minimal latency, essential for maintaining tracking continuity in occluded scenarios. Recent 2025 reviews highlight multi-source fusion approaches, such as switching methods that select the optimal source based on environmental conditions and weighted fusion schemes that assign dynamic coefficients to sources like GNSS and inertial measurements, improving localization accuracy in urban canyons (illustrated in the sketch after this paragraph). Probabilistic methods, such as those extending Kalman filters, briefly address uncertainty in these fusions by modeling noise distributions. Case studies demonstrate practical impacts, including post-2020 V2X communication trials where data fusion of vehicle-to-infrastructure messages with onboard sensors enables collision avoidance, reducing reaction times by fusing roadside detections with shared positional data across networks. In smart city applications, fusing inductive loop detectors embedded in roads with drone imagery has optimized traffic signal timing, as shown in urban forecasting models that predict flow speeds with improved precision by integrating ground-level counts and aerial overviews. These applications yield significant benefits, such as enhanced safety through fused perceptions that lower accident risks in autonomous driving simulations, while challenges persist in handling high-velocity data streams from diverse sources, necessitating scalable algorithms to avoid processing delays. Emerging trends leverage deep learning for predictive fusion in autonomous vehicles, where neural networks integrate historical trajectory data from radar, LiDAR, and cameras to forecast vehicle behaviors, as evidenced in 2024 reviews showing gains in long-term prediction horizons for highway merging scenarios.
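
A rough sketch of the dynamically weighted fusion mentioned above: two position sources (for example, GNSS and inertial dead reckoning) are reweighted at each step from their currently reported error variances, so a degraded source is automatically discounted. All values are invented for illustration.

```python
import numpy as np

def fuse_position(estimates, variances):
    """Weighted fusion of position estimates from several sources, with
    coefficients set dynamically from each source's current error variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()  # normalize dynamic weights to sum to 1
    fused = np.sum(w[:, None] * np.asarray(estimates, dtype=float), axis=0)
    return w, fused

# Illustrative 2D positions; GNSS degrades in an "urban canyon" at step 2.
steps = [
    ([[10.0, 5.0], [10.2, 5.1]], [1.0, 2.0]),   # (positions, variances): GNSS good
    ([[11.3, 6.4], [11.0, 6.0]], [25.0, 2.5]),  # GNSS noisy -> inertial dominates
]
for estimates, variances in steps:
    w, fused = fuse_position(estimates, variances)
    print("weights:", np.round(w, 2), "fused position:", np.round(fused, 2))
```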

Healthcare and Biomedical

In healthcare and biomedical applications, data fusion integrates diverse sources such as multimodal imaging, electronic health records (EHRs), wearables, and genomic data to enhance diagnostics, patient monitoring, and treatment planning. For instance, fusing magnetic resonance imaging (MRI) with positron emission tomography (PET) scans enables precise tumor detection by combining anatomical detail from MRI with metabolic activity from PET, improving early cancer identification and localization. Similarly, integrating EHRs with data from wearable devices allows continuous patient monitoring, where physiological signals like heart rate and activity levels from wearables are combined with historical clinical records to predict health deteriorations in real-time. Feature-level fusion techniques are particularly prominent in genomic-imaging integration, where extracted features from genomic sequences and medical images are combined to uncover disease mechanisms, such as linking genetic variants to imaging phenotypes in disease classification. Recent 2024 reviews highlight multimodal data fusion advancements, including convolutional neural networks (CNNs) for Alzheimer's disease diagnosis, where CNNs process fused MRI and PET features to achieve higher accuracy compared to single-modality approaches. Decision fusion at the output level can further refine clinical outcomes by aggregating probabilistic predictions from these models. Case studies from the COVID-19 pandemic (2020-2022) demonstrated data fusion's role in fusing lung CT scans with clinical biometric data, such as vital signs and laboratory results, to develop predictive models for disease severity and patient outcomes, enhancing triage in overwhelmed healthcare systems. In drug discovery, multi-omics fusion integrates genomics, transcriptomics, and proteomics to identify novel therapeutic targets, accelerating the identification of drug candidates by revealing interconnected biological pathways. These applications yield benefits like enhanced predictive accuracy (for example, multimodal fusion models have improved Alzheimer's detection rates over unimodal methods) while addressing challenges such as data privacy under HIPAA regulations, which mandate secure handling of fused sensitive health information to prevent breaches. Post-2023 developments include transformer-based models for wearable data fusion, which leverage attention mechanisms to process sequential sensor streams, enabling robust real-time health trajectory predictions with reduced computational overhead compared to traditional recurrent networks.

Defense and Security

Data fusion plays a pivotal role in defense and security by integrating disparate sensor data streams to enhance threat detection, situational awareness, and operational decision-making in adversarial environments. In military contexts, it enables the synthesis of information from radar, infrared (IR), and electromagnetic (EM) sensors to track maneuvering targets, reducing vulnerability to countermeasures and improving accuracy in dynamic scenarios. For instance, multi-sensor fusion has been applied in ballistic missile defense systems, where radar provides early detection and IR sensors offer precise tracking during boost phases, allowing for multi-target discrimination. This approach has demonstrated robust performance in simulations, achieving high tracking accuracy for multiple threats simultaneously. Cyber-physical data fusion further extends these capabilities by combining network traffic data with physical sensor inputs to detect anomalies in critical infrastructure, such as power systems or weapon platforms. Advanced models, including AI-driven frameworks, fuse cyber logs and physical telemetry to identify evolving attacks, with reported detection rates exceeding 95% in controlled environments by leveraging sequential modeling and hybrid intrusion detection systems. The Joint Directors of Laboratories (JDL) model structures this process into levels, where Level 1 focuses on object assessment for basic tracking, Level 2 on situation assessment for contextual understanding, and Level 3 on impact prediction to evaluate threats, thereby supporting comprehensive situational awareness in cyber-defense operations. High-level fusion, operating at JDL Levels 4 and 5, integrates fused data with human expertise for intelligence analysis, producing entity-based insights from multi-intelligence (multi-INT) sources to inform strategic decisions. In practical deployments, data fusion has been instrumental in managing drone swarms during the 2020s Ukraine conflict, where fusion systems coordinate real-time intelligence from multiple UAVs, ground sensors, and satellite feeds to optimize strikes and evade defenses, enabling autonomous group decisions that have neutralized high-value targets. For border security, fusion of video surveillance with biometric data, such as facial features and gait patterns, enhances identity verification and threat screening at entry points, with multimodal techniques improving accuracy by up to 20% over unimodal systems in operational trials. These applications yield benefits like rapid response times, reducing decision cycles from minutes to seconds, and heightened operational resilience, though challenges persist in secure data sharing across coalition networks to prevent compromise. Recent advancements in defense systems incorporate artificial intelligence for enhanced data fusion, as outlined in U.S. strategies through 2025, which emphasize cloud-enabled integration of multi-domain sensors to support joint operations. These efforts, including AI-powered workflows for sensor fusion in drone detection, have driven projected spending of approximately $58.5 billion in 2025, fostering automation and predictive analytics for threat anticipation. Data fusion's military origins trace back to Cold War-era efforts in the 1970s to integrate radar and sonar for submarine detection, laying the foundation for modern systems.

Challenges and Advances

Integration and Quality Challenges

One of the primary challenges in data fusion is the heterogeneity of data sources, which often differ in formats, scales, resolutions, and modalities, complicating the integration process. For instance, combining structured numerical data from sensors with unstructured textual or image data requires alignment of disparate representations to avoid inconsistencies. This heterogeneity can lead to misalignment and reduced fusion accuracy if not addressed. Data quality issues further exacerbate these problems, including noise from environmental interference, missing values due to sensor failures, and outliers that skew representations. In multi-sensor environments, such imperfections can propagate errors, undermining the reliability of the fused output. To mitigate these challenges, preprocessing techniques are essential, such as normalization to standardize scales across sources and imputation methods to handle missing values, often using statistical models like mean substitution or more advanced algorithms like k-nearest neighbors (see the sketch below). Quality metrics, including the fusion error rate, which quantifies the discrepancy between fused results and ground truth, provide benchmarks for evaluating integration effectiveness, with lower rates indicating successful error reduction. Conflict resolution in multi-source databases exemplifies these solutions; iterative probabilistic models resolve discrepancies by estimating source reliability and weighting contributions accordingly, improving truth discovery in conflicting datasets. Sensor calibration techniques, such as cross-calibration, offer partial mitigation for systematic errors but must integrate with broader preprocessing pipelines.
In real-time systems, synchronization poses a critical issue, as temporal misalignments between data streams can introduce delays or inaccuracies, particularly in dynamic environments like autonomous driving. Scalability challenges have intensified with big data since the 2010s, where volume and velocity overwhelm traditional algorithms, necessitating distributed architectures to process petabyte-scale inputs without performance degradation. Recent 2025 reviews highlight stochastic errors in navigation fusion, where random error models in inertial systems amplify positioning inaccuracies, requiring adaptive estimation for robust integration. Ethically, data fusion can amplify biases present in individual sources, such as demographic skews in training data, leading to unfair outcomes in downstream applications and necessitating transparency in fusion processes to uphold fairness.
Recent advancements in data fusion have increasingly incorporated edge computing to enable distributed processing, particularly post-2020, allowing fusion of data from decentralized sources such as IoT sensors in smart ecosystems. This approach reduces latency and bandwidth demands by performing fusion closer to the data origin, enhancing scalability in applications like autonomous systems. Explorations into quantum-inspired methods have emerged around 2025 to better handle uncertainty in multimodal data fusion, leveraging concepts like superposition to model probabilistic ambiguities more efficiently than classical techniques. For instance, quantum-inspired algorithms have been applied to fuse satellite and UAV data, improving robustness in dynamic environments. The expansion of machine learning and artificial intelligence in data fusion emphasizes federated learning paradigms for privacy-preserving integration, where models are trained across distributed datasets without centralizing sensitive information, as demonstrated in traffic state estimation frameworks. Recent 2024-2025 reviews highlight deep learning's role in fusing heterogeneous data for predictive maintenance and digital twins, enabling condition monitoring through combined sensor and operational inputs to optimize industrial processes.
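As a concrete illustration of the preprocessing steps discussed above, the following sketch performs mean imputation and z-score normalization with NumPy; the sensor columns and missing-value pattern are invented for the example.

```python
import numpy as np

def preprocess(X):
    """Prepare heterogeneous sensor columns for fusion: impute missing
    values (NaN) with the column mean, then z-score normalize each
    column so differing scales cannot dominate the fused result."""
    X = np.array(X, dtype=float)
    col_mean = np.nanmean(X, axis=0)             # per-column mean, ignoring NaN
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_mean[cols]               # mean imputation
    return (X - X.mean(axis=0)) / X.std(axis=0)  # normalization

# Two sources on very different scales, with a dropout in the second column.
X = [[10.1, 1013.0],
     [10.4, float("nan")],  # missing value from a sensor failure
     [ 9.9, 1009.5]]
print(np.round(preprocess(X), 3))
```
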
Looking toward future directions up to 2030, explainable AI integration in data fusion is anticipated to address interpretability in complex systems, such as combining radiological images and clinical data for healthcare predictions, fostering trust in automated decisions. Fusion techniques are evolving to integrate with 6G and edge networks, supporting real-time applications by enabling seamless, low-latency aggregation in edge-intelligent environments for applications like smart cities. Surveys from 2025 predict a shift toward model-data fusion in digital engineering, where physical models are dynamically updated with real-time data streams to create adaptive digital twins, enhancing design and simulation accuracy. In environmental applications, sustainable data fusion practices are gaining traction, such as merging sensor observations and dispersion models for urban air quality monitoring, promoting resource-efficient and eco-friendly decision-making. Current research gaps include the lack of standardization for interoperable fusion protocols across domains, hindering scalable deployment, and ethical concerns in AI-driven systems, particularly around bias amplification and accountability in automated decision-making. Addressing these through global frameworks will be crucial for the ethical evolution of data fusion technologies.

References

  1. [1]
    A Review of Data Fusion Techniques - PMC - PubMed Central
    Briefly, we can define data fusion as a combination of multiple sources to obtain improved information; in this context, improved information means less ...
  2. [2]
    Data fusion | ACM Computing Surveys - ACM Digital Library
    Data fusion is the process of fusing multiple records representing the same real-world object into a single, consistent, and clean representation.Abstract · Information & Contributors · Publication History
  3. [3]
    A Comparative Study on Recent Automatic Data Fusion Methods
    Dec 30, 2023 · Data fusion techniques attempt to combine multiple sources of information to achieve accuracy and precision in decision-making that would not be ...Data Fusion Concepts · 3. Early Fusion From Sensor... · 4. Late Fusion From Scores...<|control11|><|separator|>
  4. [4]
    (PDF) An Introduction to Multisensor Data Fusion - ResearchGate
    Aug 5, 2025 · Data fusion is a process which integrates multiple data or information to produce more efficient and valuable data (Hall & Llinas, 1997) .
  5. [5]
    [PDF] Information Fusion in WSNs: A Review - IJISET
    According to the relationship among the sources, information fusion can be classified as complementary, redundant, or cooperative [Durrant-Whyte 1988]. Thus,.
  6. [6]
    [PDF] Revisions to the JDL Data Fusion Model - DTIC
    1. Abstract. The Data Fusion Model maintained by the JDL Data Fusion Group is the most widely-used method for categorizing data fusion-related functions.
  7. [7]
    Data integration and data fusion approaches in self-driving labs
    Oct 29, 2025 · Within SDLs, data integration and data fusion are distinct yet complementary processes that play a critical role in constructing robust, machine ...<|control11|><|separator|>
  8. [8]
    Integrated Sensor Systems and Data Fusion for Homeland Protection
    The first data fusion algorithms employed in real systems in the radar field go back to the early seventies, when they had been developed for multi-radar ...
  9. [9]
    Review Article Multi-source information fusion: Progress and future
    To meet the needs of processing multiple independent sonar signals for information fusion, the US military proposed a method for enemy submarine detection based ...
  10. [10]
    [PDF] Chapter 2: Revisions to the JDL Data Fusion Model - DSP-Book
    The data fusion model, developed in 1985 by the U.S. Joint Directors of Laboratories (JDL) Data Fusion. Group*, with subsequent revisions, is the most ...
  11. [11]
    [PDF] DFS-88, 1988 Tri-Service Data Fusion Symposium. Volume I - DTIC
    Mar 7, 1988 · The 1988 Tri-Service Data Fusion Symposium (DFS-88) was held at Laurel, Maryland on 17-19 May 1988 under the joint sponsorship of the Data ...
  12. [12]
    Revisiting the JDL model for information exploitation - IEEE Xplore
    Abstract: The original Joint Directors of Laboratories (JDL) model was developed in the early 90's, with revisits in 1998, and 2004.Missing: 1990s | Show results with:1990s
  13. [13]
    [PDF] Data Fusion Algorithms for Collaborative Robotic Exploration
    May 15, 2002 · In this article, we will study the problem of efficient data fusion in an ad hoc network of mobile sensors (“robots”) using belief ...
  14. [14]
    A Survey on Deep Learning for Multimodal Data Fusion
    May 1, 2020 · This review presents a survey on deep learning for multimodal data fusion to provide readers, regardless of their original community, with the fundamentals.
  15. [15]
    Multi-sensor integrated navigation/positioning systems using data ...
    This article describes a thorough investigation into multi-sensor data fusion, which over the last ten years has been used for integrated positioning/navigation ...
  16. [16]
    Deep Learning Sensor Fusion for Autonomous Vehicle Perception ...
    This article provides a comprehensive review of the state-of-the-art methods utilized to improve the performance of AV systems in short-range or local vehicle ...Missing: Uber | Show results with:Uber
  17. [17]
    Uber Jumps Into the Self-Driving Wars With a Ford Fusion
    "The Uber ATC car comes outfitted with a variety of sensors including radars, laser scanners, and high resolution cameras to map details of the ...
  18. [18]
    ISO 23150:2023 - Road vehicles — Data communication between ...
    In stockThe document specifies the logical interface between in-vehicle environmental perception sensors (for example, radar, lidar, camera, ultrasonic) and the fusion ...
  19. [19]
    [PDF] Revisions to the JDL Data Fusion Model - DTIC
    The JDL levels have frequently been interpreted as a canonical guide for partitioning functionality within a system: do level 1 fusion first, then levels 2,3 ...Missing: origins | Show results with:origins
  20. [20]
    [PDF] JDL MODEL - International Society of Information Fusion
    The JDL model was developed to define the concepts and struc- ture of the data fusion problem: that of estimating entity states of interest within a problem ...
  21. [21]
    [PDF] JDL Level 5 Fusion Model “User Refinement” Issues and ...
    ABSTRACT. The 1999 Joint Director of Labs (JDL) revised model incorporates five levels for fusion methodologies including level 0 for.
  22. [22]
    A Review of data fusion models and architectures - ResearchGate
    Aug 10, 2025 · This paper reviews the potential benefits that can be obtained by the implementation of data fusion in a multi-sensor environment.Missing: DFM | Show results with:DFM
  23. [23]
    [PDF] A REVIEW OF DATA FUSION MODELS AND ARCHITECTURES
    A classical JDL model of data fusion including positional fusion. This will attempt to determine the location and kinematic information of an entity. Following ...
  24. [24]
    [PDF] A Review on System Architectures for Sensor Fusion Applications
    While being more exact in analyzing the fusion process than other models, the major limitation of the waterfall model is the omission of any feedback data flow.
  25. [25]
    Sensor Fusion - an overview | ScienceDirect Topics
    Durrant-Whyte (1988) distinguishes the following three types of sensor configuration: ... Competitive, complementary, and cooperative fusion. Adapted from ...
  26. [26]
    [PDF] Data Fusion as an Enterprise Service - OSTI.GOV
    ENTERPRISE DATA FUSION SERVICES. Ephemeris was originally ... Service-oriented architectures and software-as-a-service models adopted in the pursuit of en-.
  27. [27]
    [PDF] DATA FUSION IN CLOUD COMPUTING:BIG DATA APPROACH
    DATA FUSION IN CLOUD COMPUTING:BIG DATA APPROACH. Piotr Szuster. Jose M ... In industry, those services are defined as Infrastructure as a Service. (IaaS) ...
  28. [28]
    Centralised and Decentralised Sensor Fusion-Based Emergency ...
    The centralised fusion-driven EBA yields comparatively less accurate results, but with the benefits of a higher frame rate and lesser computational cost.
  29. [29]
    [PDF] Research issues in image registration far remote sensing
    Image registration is an important element in data pro- cessing for remote sensing with many applications and a wide range of solutions.
  30. [30]
    [PDF] A Total Variation Based Algorithm for Pixel Level Image Fusion
    Mar 13, 2008 · An intuitive approach for pixel level fusion is to average the input images. Averaging reduces sensor noise but it also reduces the contrast of ...
  31. [31]
    Principal Component Analysis (PCA) for Data Fusion ... - SpringerLink
    Kalman .lter- based methods have been developed for fusing data from various sensors for mobile robots. However, the Kalman .lter methods are computationally ...
  32. [32]
    [PDF] A Study of Weighted Average Method for Multi-sensor Data Fusion
    Jan 20, 2022 · The weighted average method is the most easily understood and most used method in the parameter classification fusion algorithm. The weighted ...
  33. [33]
    Real-Time Digital Video Stabilization Based on IMU Data Fusion ...
    Mar 18, 2025 · A real-time digital video stabilization (DVS) method based on inertial measurement unit (IMU) data fusion and periodic jitters, which can be operated in real ...Missing: multi- camera
  34. [34]
    Sensor and Sensor Fusion Technology in Autonomous Vehicles
    This paper evaluates the capabilities and the technical performance of sensors which are commonly employed in autonomous vehicles.
  35. [35]
    A Review of Multi-Sensor Fusion in Autonomous Driving - MDPI
    To discuss the advantages and limitations of camera–LiDAR fusion, BEV transformation, cross-modal Transformer layers, and temporal fusion methods. To highlight ...
  36. [36]
    The three types of data fusion are compared side by side
    While observation- level fusion means that raw data are combined directly, feature-level fusion involves a preliminary extraction of rep- resentative features ...
  37. [37]
    [PDF] Decision Level Fusion: An Event Driven Approach - OSTI.GOV
    Feature level fusion, on the other hand, first gleans features from raw data (e.g., transformed data) from diverse sensors, to subsequently coherently merge ...
  38. [38]
    [PDF] Feature Selection via Mutual Information: New Theoretical Insights
    Jul 17, 2019 · Abstract—Mutual information has been successfully adopted in filter feature-selection methods to assess both the relevancy.
  39. [39]
    [PDF] Combination of Evidence in Dempster- Shafer Theory
    This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for ...
  40. [40]
    Sensor data fusion with support vector machine techniques
    This paper presents an approach to multisensor data fusion based on the use of Support Vector Machines (SVM). The approach is investigated using simulated ...
  41. [41]
    Fusing Color and Texture Features for Background Model
    We present a novel approach that uses fuzzy integral to fuse the texture and color features for background subtraction. The method could handle various small ...
  42. [42]
    Fusing color and texture features for background model
    We present a novel approach that uses fuzzy integral to fuse the texture and color features for background subtraction. The method could handle various small ...
  43. [43]
    A Review on Data Fusion of Multidimensional Medical and ...
    Nov 2, 2022 · Feature-level fusion reduces data preprocessing, including the alignment of the image data. The methods used for traditional data fusion are ...
  44. [44]
    DATA: Domain-And-Time Alignment for High-Quality Feature Fusion ...
    Jul 24, 2025 · The acquisition of high-quality features faces domain gaps from hardware diversity and deployment conditions, alongside temporal misalignment ...
  45. [45]
    Decision Fusion - an overview | ScienceDirect Topics
    Decision fusion is defined as the method of combining decisions made by multiple classifiers to arrive at a final, unified decision.
  46. [46]
    Does Classifier Fusion Improve the Overall Performance? Numerical ...
    Approaches without any training of parameters are for example the Majority Voting or the Borda Count method to be used directly after classification.
  47. [47]
    Use of the Borda count for landmine discriminator fusion
    The Borda Count was proposed as a method of ranking candidates by combining the rankings assigned by multiple voters. It has been studied extensively in the ...Missing: decision data
  48. [48]
    (PDF) Fuzzy logic decision fusion in a fingerprints based multimodal ...
    Aug 6, 2025 · Experimental results explain an improvement of about 10 and 12% using the fuzzy logic decision fusion over the fusion by majority voting and ...
  49. [49]
    Using decision fusion methods to improve outbreak detection in ...
    Mar 5, 2019 · Voting methods​​ In the case of majority voting (MV) scheme fusion, the method gives equal weight to the decisions and carries out the prediction ...
  50. [50]
    An Integrated Fusion Engine for Early Threat Detection ...
    An integrated fusion engine was developed for the management of a plurality of sensors to detect threats without disrupting the flow of commuters.
  51. [51]
    Sensor Fusion: The Next Generation of Perimeter Security - Senstar
    Sensor fusion is the process of combining simultaneous inputs from different types of sensors to extract meaningful signals with a much higher fidelity.
  52. [52]
    Potential advantages and limitations of using information fusion in ...
    Jul 29, 2021 · Information fusion, ie, the combination of expert systems, has a huge potential to improve the accuracy of pattern recognition systems.
  53. [53]
    Bayesian Approach for Data Fusion in Sensor Networks - IEEE Xplore
    We formulate the target tracking based on received signal strength in the sensor networks using Bayesian network representation.Missing: papers | Show results with:papers
  54. [54]
    Application of Data Sensor Fusion Using Extended Kalman Filter ...
    Jul 4, 2023 · An architecture for LiDAR and Radar sensor data fusion through the extended Kalman filter model was implemented based on the CTRV model and the ...
  55. [55]
    Deep Multimodal Data Fusion | ACM Computing Surveys
    Apr 24, 2024 · This phenomenon is called information redundancy. Furthermore ... correlation between various modalities. However, by focusing solely ...
  56. [56]
    Multifidelity Data Fusion via Gradient-Enhanced Gaussian Process ...
    Aug 3, 2020 · We propose a data fusion method based on multi-fidelity Gaussian process regression (GPR) framework. This method combines available data of the quantity of ...
  57. [57]
    A review of deep learning-based information fusion techniques for ...
    This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks.
  58. [58]
    Multi-sensor data collection and fusion using autoencoders in ...
    Jun 22, 2021 · Autoencoders are unsupervised NNs that are trained to replicate the input data. Applications of autoencoders include dimensionality reduction ...
  59. [59]
    Efficient Estimation under Data Fusion - PMC - NIH
    Efficient estimation under data fusion involves fusing data from multiple sources to make inferences about a parameter, reducing the semiparametric efficiency ...
  60. [60]
    A novel sequence-based transformer model architecture for ... - Nature
    Aug 20, 2025 · This study pioneers the application of transformer-based algorithms for multi-omics data integration in disease prediction, paving the way for ...
  61. [61]
    Combined Landsat and L-Band SAR Data Improves Land Cover ...
    Combined Landsat and L-band SAR data produced the best overall classification accuracies (92.96% to 93.83%), outperforming individual sensor data (91.20% to ...
  62. [62]
    [PDF] UTILIZING SAR AND MULTISPECTRAL INTEGRATED DATA FOR ...
    The results show that combining SAR and multispectral images, leads to more reliable information and provides a complete scene for the emergency response.
  63. [63]
    Pixel level fusion techniques for SAR and optical images: A review
    This paper discusses the necessity of fusing synthetic aperture radar (SAR) and optical imagery. A survey is presented for various pixel level approaches.
  64. [64]
    A critical review on multi-sensor and multi-platform remote sensing ...
    Feature-level fusion involves integrating multi-sensor data or features by combining spectral, textural, spatial, and structural features through stacking ...<|control11|><|separator|>
  65. [65]
    Toward near real-time monitoring of forest disturbance by fusion of ...
    In this paper, we propose a method to fuse MODIS and Landsat data in a way that allows for near real-time monitoring of forest disturbance.
  66. [66]
    Data Fusion for Urban Applications - Remote Sensing - MDPI
    This Special Issue is devoted to strategies and methods for fusing multi-modal data in the context of urban remote sensing. As a general guideline, ...
  67. [67]
    Deep Learning-Based Fusion of Optical, Radar, and LiDAR Data for ...
    Synergistic harmonization, involving the integration of multi-source remote sensing data, offers a promising solution to overcome these challenges. By combining ...
  68. [68]
    A review of multi-sensor fusion 3D object detection for autonomous ...
    Sep 13, 2024 · In this paper, we provide a review of 3D object detection methods for multi-sensor fusion. First, we introduce common camera and LiDAR sensors ...<|separator|>
  69. [69]
    Extended Kalman Filter-Based Vehicle Tracking Using Uniform ...
    We develop an extended Kalman filter-based vehicle tracking algorithm, specifically designed for uniform planar array layouts and vehicle platoon scenarios.
  70. [70]
    Advances in Multi-Source Navigation Data Fusion Processing ...
    Currently, the primary approaches employed in this field include the switching method, the average weighted fusion method, and the adaptive weighted fusion ...
  71. [71]
    Sensor Fusion-Based Vehicle Detection and Tracking Using a ... - NIH
    May 19, 2023 · A Kalman filter is used in the motion estimation model to track target vehicles detected by the camera and radar sensors. The Kalman filter ...
  72. [72]
    5G and connected vehicles: communication and cooperation
    Mar 25, 2019 · Data from these cameras is then sent to the V2X server, where a data fusion module compares it to data compiled by the connected vehicles in ...
  73. [73]
    Multi-Source Urban Traffic Flow Forecasting with Drone and Loop ...
    Jan 7, 2025 · Therefore, this paper investigates the problem of multi-source traffic speed prediction, simultaneously using drone and loop detector data. A ...Missing: smart | Show results with:smart
  74. [74]
    Real time object detection using LiDAR and camera fusion ... - Nature
    May 17, 2023 · In this paper, a LiDAR-camera-based fusion algorithm is proposed to improve the above-mentioned trade-off problems by constructing a Siamese network for object ...
  75. [75]
    Deep‐learning‐based vehicle trajectory prediction: A review - Yin
    Feb 9, 2025 · This work reviews the DL-based methods that have shown promising results, organising them in terms of usage of the input data.
  76. [76]
    A Review of Deep Learning-based Multi-modal Medical Image Fusion
    Jul 4, 2025 · For example, in the field of cancer diagnosis, the combined fused images of PET and MRI can lead to early detection of tumors and, consequently, ...
  77. [77]
    Synergy in Neuroimaging: PET-CT and MRI Fusion for Enhanced ...
    Nov 24, 2024 · The fusion of PET and MRI was found to be useful in the attempt at the characterization of brain tumors. PET-CT alone can be confusing because ...
  78. [78]
    Bringing it all together: Wearable data fusion | npj Digital Medicine
    Aug 17, 2023 · It's the process of combining data from different sources to create a more complete picture of what's going on.
  79. [79]
    Imaging‐genomic spatial‐modality attentive fusion for studying ...
    Nov 19, 2024 · Attentive fusion of neuroimaging and genomics data classify schizophrenia (SZ) with high precision. The attention scores provide the most ...
  80. [80]
    Multi-Modal Fusion and Longitudinal Analysis for Alzheimer's ... - NIH
    Mar 13, 2025 · This study introduces FusionNet, a groundbreaking framework designed to enhance AD classification through the integration of multi-modal and longitudinal ...
  81. [81]
    Visual transformer and deep CNN prediction of high-risk COVID-19 ...
    Nov 17, 2023 · In this work, we use data fusion of lung CT scan images and clinical data from a total of 380 Iranian Covid-19-positive patients to develop deep ...
  82. [82]
    Network-based multi-omics integrative analysis methods in drug ...
    Mar 28, 2025 · This review aims to analyze network-based approaches for multi-omics integration and evaluate their applications in drug discovery.
  83. [83]
    Multi-Sensor Wearable Device With Transformer-Powered Two ...
    Apr 4, 2025 · Multi-Sensor Wearable Device With Transformer-Powered Two-Stream Fusion Model for Real-Time Leg Workout Monitoring. IEEE J Biomed Health Inform.
  84. [84]
    Tracking a 3D maneuvering target with passive sensors - IEEE Xplore
    The fusion of IR and EM (electromagnetic) sensors make the system less susceptible to target counter-measures and to destruction of one sensor by a preemptive ...
  85. [85]
    Multi-target tracking algorithm of boost-phase ballistic missile defense
    Monte Carlo simulation results show that the proposed algorithm can track the boost phase of multiple ballistic missiles and it has a good tracking ...
  86. [86]
    Anomaly detection method for cyber physical power system based ...
    A cyber-physical bilateral data-driven composite model is proposed in this paper to achieve efficient and accurate anomaly detection of CPPS.
  87. [87]
    AI-driven cybersecurity framework for anomaly detection in power ...
    Oct 10, 2025 · Sahu et al. introduced cyber-physical data fusion but lacked advanced sequential modeling. Other models focused on classical ML or deep ...
  88. [88]
  89. [89]
    [PDF] Information Fusion for Situational Awareness - DTIC
    While the JDL provides a functional model for the data fusion process, it does not model it from ...
  90. [90]
    Application of the JDL Data Fusion Process Model for Cyber Security
    Therefore, to have awareness of a “situation”, the output of a Level 2 fusion process must provide information about the defensive posture of the network under ...
  91. [91]
    Effective defence intelligence requires effective data fusion and AI ...
    May 15, 2025 · Effective defence intelligence requires effective data fusion and AI enablement · Transforming high-noise information into assured, entity-based ...
  92. [92]
    Ukraine military's data fusion system facing user backlash
    Jul 21, 2025 · Delta, Kyiv's flagship data fusion system that coordinates drones, troops and intelligence in real time, is facing a backlash from its users ...
  93. [93]
  94. [94]
    Fusion of Hand Biometrics for Border Control Involving Fingerprint ...
    Feb 11, 2025 · ABSTRACT In this paper, we proposed an advanced multimodal fusion technique for fingerprint and finger vein recognition algorithms ...
  95. [95]
    A comprehensive overview of biometric fusion - ScienceDirect.com
    This paper presents an overview of biometric fusion with specific focus on three questions: what to fuse, when to fuse, and how to fuse.
  96. [96]
    AI and cloud computing focus of DoD C4ISR spending
    Frost & Sullivan's recent analysis, Assessment of the US DoD C4ISR Market, Forecast to 2025, highlights the U.S. Department of Defense (DoD) command, control, ...
  97. [97]
    [PDF] Appraising the State of Play of C4ISR Infrastructure within NATO ...
    ... 2025, NATO MSS enhances strategic-level C4ISR through AI-driven data fusion and analytics, processing multi-domain data to support strategic command and ...
  98. [98]
    Why U.S. Defense Strategy Centers on C4ISR Systems
    In 2025, the U.S. defense sector is doubling down on C4ISR ... This joint initiative delivers an AI-powered workflow linking drone detection, sensor fusion ...
  99. [99]
    Data Fusion - an overview | ScienceDirect Topics
    Data fusion refers to the process of combining data, information, and knowledge in order to enhance data quality, reduce uncertainty, extract important ...
  100. [100]
    A Review of Multi-Source Data Fusion and Analysis Algorithms in ...
    Data preprocessing is the basis of multi-source data fusion, which aims to eliminate inconsistency, redundancy, and noise in the data and ensure the reliability ...
  101. [101]
    Data Transformation Strategies to Remove Heterogeneity - arXiv
    Jul 16, 2025 · In this paper, we conduct a comprehensive survey of data transformation strategies to mitigate heterogeneity issues arising from varying ...
  102. [102]
    Research on Multi-Source Data Fusion Methods in Information ...
    Data cleaning is a foundational step in the data fusion process, aimed at improving data quality. Common data cleaning operations include removing duplicate ...
  103. [103]
    [1503.00310] Data Fusion: Resolving Conflicts from Multiple Sources
    Mar 1, 2015 · We present a case study on real-world data showing that the described algorithm can significantly improve accuracy of truth discovery and is ...
  104. [104]
    Analyzing the Impact of Time Synchronization in Sensor Fusion - arXiv
    Jan 20, 2024 · This paper presents the Syncline model, a simple visual model of how time synchronization affects the accuracy of sensor fusion for different mobile robot ...
  105. [105]
    Impact Analysis of Time Synchronization Error in Airborne Target ...
    Apr 23, 2024 · This paper investigates the influence of time synchronization on sensor fusion and target tracking.
  106. [106]
    Critical analysis of Big Data challenges and analytical methods
    This paper presents a state-of-the-art review that presents a holistic view of the BD challenges and BDA methods theorized/proposed/employed by organizations.
  107. [107]
    Advanced Stochastic Model for MEMS IMU in Navigation - IEEE Xplore
    Jul 14, 2025 · This study focuses on improving positioning accuracy by precisely characterizing and modeling the stochastic error components in inertial ...
  108. [108]
    [PDF] Data Engineering Ethics: Societal Implications of Large-Scale Data ...
    May 12, 2025 · Integration processes can inadvertently amplify existing biases through several mechanisms, creating significant ethical challenges for data ...
  109. [109]
    Distributed intelligence on the Edge-to-Cloud Continuum
    This review aims at providing a comprehensive vision of the main state-of-the-art libraries and frameworks for machine learning and data analytics available ...
  110. [110]
    Q-MobiGraphNet: Quantum-Inspired Multimodal IoT and UAV Data ...
    The concept of multimodal data fusion—combining heterogeneous data from multiple sources—has gained traction across various fields such as smart city management ...
  111. [111]
    Generative Federated Learning With Small and Large Models in ...
    May 22, 2025 · This article proposes a Dependency-correlated Data Fusion Scheme (DcDFS) to maximize the privacy of the health data-sharing process.
  112. [112]
    Data–model Fusion Methods and Applications toward Smart ...
    Jan 28, 2025 · In the context of smart manufacturing and digital engineering, deep learning methods can facilitate the analysis of large-scale industrial data ...
  113. [113]
    Orchestrating explainable artificial intelligence for multimodal and ...
    Jul 22, 2024 · Radiological data has been combined with other data types for predictive AI systems in various clinical disciplines like oncology or neurology.
  114. [114]
    Integrating IoT and 6G: Applications of Edge Intelligence ...
    The integration of artificial intelligence and machine learning technologies endows edge devices with the capabilities of self-learning and decision-making.
  115. [115]
    Data fusion for urban air quality modeling with citizen science data
    Dec 1, 2024 · This study aims to advance urban air quality modelling by integrating a dispersion model output with large-scale citizen science data
  116. [116]
    [PDF] Global Challenges in the Standardization of Ethics for Trustworthy AI
    Abstract. In this paper, we examine the challenges of developing international standards for Trustworthy AI that aim both to be global applicable and to ...
  117. [117]
    Progress and recommendations in data ethics governance - Nature
    Aug 20, 2025 · Therefore, to address the numerous challenges in data ethics governance, this study collects several data ethics frameworks with the objective ...