
Medical image computing

Medical image computing is an interdisciplinary field that develops and applies computational methods to acquire, process, analyze, and visualize medical images, enabling robust, automated, and quantitative extraction of clinically relevant information to support diagnosis, therapy planning, follow-up, and biomedical research. The domain integrates principles from computer science, mathematics, engineering, and medicine, operating primarily on multidimensional data such as 2D images or 3D volumes from modalities including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasound.

At its core, medical image computing involves several fundamental tasks that transform raw data into actionable insights. These include image enhancement to improve quality by reducing noise or artifacts, segmentation to delineate anatomical structures or pathologies, registration to align images from different modalities or time points, and feature extraction for quantitative measurements like volume or shape. Advanced techniques, such as model-based approaches incorporating prior anatomical knowledge or deep learning algorithms like convolutional neural networks (CNNs), address the inherent challenges of data variability, including differences in imaging physics, patient anatomy, and pathological variations. Advancements as of 2025 emphasize deep learning for tasks like automated segmentation and the synthesis of synthetic images via generative adversarial networks (GANs) and broader generative AI models, along with AI integration in multi-modal fusion, enhancing efficiency and accuracy in handling large-scale datasets.

The applications of medical image computing span diagnostics, interventional procedures, and research, profoundly impacting healthcare outcomes. In diagnostics, it facilitates early detection of diseases such as tumors or lesions through multi-modal fusion, combining structural (e.g., MRI) and functional (e.g., PET) data for comprehensive assessment. For treatment planning, techniques like image-guided surgery and 3D visualizations enable precise navigation and minimally invasive interventions.
In research, it supports longitudinal studies and population-level analyses, though challenges like generalizability—due to limited standardized data, scarce expert annotations, and variability in experimental setups—remain critical hurdles for clinical translation. Ongoing trends highlight the integration of large-scale computing infrastructure to manage escalating data volumes, from kilobytes in traditional radiographs to terabytes in whole-body scans, promising more personalized and efficient medical practice.

Fundamentals

Definition and Scope

Medical image computing refers to the application of computational algorithms and models to acquire, process, analyze, and interpret digital medical images derived from modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound. The field leverages techniques from image processing, computer vision, and machine learning to extract meaningful information from visual data, enabling automated or semi-automated assistance in medical decision-making.

The scope of medical image computing is broad, encompassing stages from initial image acquisition and enhancement to advanced tasks like segmentation, registration, quantitative feature extraction, and seamless integration into clinical workflows. It is inherently interdisciplinary, drawing on expertise from computer science for algorithm development, biomedical engineering for hardware-software interfaces, and medicine for domain-specific validation and application. This collaborative nature ensures that computational methods align with clinical needs, such as improving image quality or fusing multi-modal data for comprehensive assessment.

The importance of medical image computing lies in its transformative role across healthcare, facilitating precise diagnostics, treatment planning, surgical guidance, and biomedical research. For instance, it supports tumor detection by delineating malignant structures in scans, reducing diagnostic errors and enabling earlier interventions, while also advancing personalized medicine through patient-specific image-derived models for tailored therapies. In surgical contexts, it processes intraoperative imaging data to provide navigational overlays, enhancing procedural accuracy and outcomes. Techniques like segmentation and registration underpin these applications by aligning and partitioning image elements for targeted analysis.

At its foundation, medical image computing relies on key concepts of digital image representation, where two-dimensional images are composed of pixels—discrete units encoding intensity values at spatial coordinates—and three-dimensional volumes use voxels to extend this representation volumetrically.
Spatial resolution, defined by the size and density of these units, critically influences the ability to discern fine anatomical details, directly impacting diagnostic reliability and the efficacy of downstream computations.
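As a concrete illustration of voxel geometry, the mapping from discrete voxel indices to physical scanner coordinates is conventionally expressed with a 4x4 affine matrix (the convention the NIfTI format uses). The sketch below uses a hypothetical affine for a volume with 0.5 x 0.5 x 2.0 mm voxels; the matrix values and function name are illustrative, not from any real scan:

```python
import numpy as np

# Hypothetical 4x4 affine: 0.5 x 0.5 x 2.0 mm voxels (anisotropic: finer
# in-plane than through-plane), with the volume origin at (-90, -126, -72) mm.
affine = np.array([
    [0.5, 0.0, 0.0,  -90.0],
    [0.0, 0.5, 0.0, -126.0],
    [0.0, 0.0, 2.0,  -72.0],
    [0.0, 0.0, 0.0,    1.0],
])

def voxel_to_world(ijk, affine):
    """Map integer voxel indices (i, j, k) to physical coordinates in mm."""
    homogeneous = np.append(np.asarray(ijk, dtype=float), 1.0)
    return (affine @ homogeneous)[:3]

print(voxel_to_world((10, 20, 5), affine))  # physical location (-85, -116, -62) mm
```

The through-plane voxel dimension (2.0 mm here) being larger than the in-plane one is exactly the anisotropy issue discussed later under data preprocessing.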

Historical Development

The field of medical image computing emerged in the early 1970s alongside the advent of computed tomography (CT), which marked the transition from analog to digital imaging in medicine. The first clinical CT scanner was developed by Godfrey Hounsfield and installed at Atkinson Morley Hospital in London in 1971, enabling the reconstruction of cross-sectional images through computer processing of X-ray projections. This innovation introduced digital image processing to clinical practice, with early applications focusing on basic enhancement and reconstruction algorithms to handle the computational demands of tomographic data. By the mid-1970s, techniques such as texture analysis for quantitative feature extraction in CT images were proposed, exemplified by Robert M. Haralick's 1973 work on textural features for image classification.

The 1980s saw further foundational progress with the clinical adoption of magnetic resonance imaging (MRI) and the development of initial algorithms for image analysis. The first whole-body MRI scan was achieved in 1977 by Raymond Damadian's team, expanding the scope of medical imaging to soft tissues without ionizing radiation. Concurrently, early segmentation methods emerged, such as the 1986 algorithm by Wells et al. for nuclear magnetic resonance (NMR) images, which laid groundwork for delineating anatomical structures. Pioneering contributions from figures like Dennis Gabor, whose 1940s work on signal analysis underlies the Gabor filters later used for texture analysis and filtering of medical images, provided essential mathematical tools for these advancements.

In the 1990s, medical image computing matured with the proliferation of registration techniques and probabilistic atlases, driven by the need to align multi-modal data from CT, MRI, and emerging modalities like positron emission tomography (PET). Registration methods gained prominence in the early 1990s amid the rapid growth of neuroimaging, enabling spatial correspondence across images for applications like surgical planning. The first International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI) was held in 1998, fostering collaboration and standardizing research in the field.
The 2000s integrated statistical shape models (SSMs), with Timothy Cootes and Christopher Taylor's active appearance models (AAMs) from the mid-1990s evolving into 3D variants for robust organ segmentation, capturing population-based variability in anatomical shapes. Software frameworks like the Insight Toolkit (ITK), initiated in 1999 by the U.S. National Library of Medicine, provided open-source tools for segmentation and registration, accelerating adoption. The 2010s witnessed an explosion in deep learning applications, propelled by the 2012 AlexNet architecture, which demonstrated convolutional neural networks' (CNNs) efficacy in image recognition and inspired adaptations for medical tasks. This shift was amplified by hardware advances like graphics processing units (GPUs), enabling training on large datasets, and initiatives such as the UK Biobank imaging study, which began imaging 100,000 participants in 2014 to support population-scale analysis. Seminal works like the 2015 U-Net architecture for biomedical image segmentation further entrenched deep learning, achieving high accuracy in delineating complex structures while addressing data scarcity through efficient architectures. These developments, building on decades of computational foundations, continue to drive precision in diagnostics and interventions.

Data Acquisition and Representation

Imaging Modalities

Medical image computing relies on data acquired from various imaging modalities, each employing distinct physical principles to generate representations of anatomical and functional information within the human body. These modalities produce datasets ranging from two-dimensional (2D) projections to three-dimensional (3D) or four-dimensional (4D, incorporating time) volumes, which serve as the foundation for subsequent computational analysis. Key considerations include the use of ionizing versus non-ionizing radiation, as well as inherent data characteristics such as spatial and temporal resolution, noise profiles, and common artifacts that influence computing workflows.

X-ray imaging is one of the earliest and most fundamental modalities, utilizing high-energy electromagnetic waves generated by accelerating electrons onto a target in an X-ray tube, producing a continuous spectrum via bremsstrahlung and discrete peaks from characteristic radiation. These photons interact with tissues primarily through photoelectric absorption and Compton scattering; denser structures like bone attenuate more X-rays, appearing brighter on the resulting 2D projection images captured on a detector. This modality offers high spatial resolution for bony structures (typically 0.1–0.5 mm) but limited soft-tissue contrast due to overlapping projections of 3D anatomy. Data characteristics include grayscale images with Poisson-distributed noise from photon counting statistics, and artifacts such as geometric distortion from patient positioning. X-ray imaging uses ionizing radiation, raising concerns about cumulative exposure from repeated scans.

Computed tomography (CT) extends X-ray principles by acquiring multiple projections from rotating X-ray sources around the patient, enabling reconstruction of cross-sectional slices. The physical basis involves measuring X-ray attenuation along lines through the body, formalized by the Radon transform, which integrates the attenuation coefficient along projection paths to form a sinogram subsequently inverted to yield volumetric images.
CT provides isotropic spatial resolution of 0.5–1 mm and excels in both bone and soft-tissue visualization, though it employs ionizing radiation with doses varying by protocol (e.g., 2–10 mSv for a chest scan). Resulting data are 3D volumes in Hounsfield units, characterized by Poisson noise dominant at low doses, manifesting as granular streaks that degrade low-contrast detection. Common artifacts include beam hardening from polychromatic X-rays and partial volume effects in thin structures.

Magnetic resonance imaging (MRI) operates on non-ionizing principles, exploiting the nuclear spin properties of protons in water and fat molecules. In a strong static magnetic field (typically 1.5–3 T), protons align and precess at the Larmor frequency; a radiofrequency (RF) pulse perturbs this alignment, and upon relaxation, protons emit detectable signals as they return to equilibrium via T1 (spin-lattice) and T2 (spin-spin) processes, with T1 times longer in fluids (e.g., 2000–3000 ms) than in fat (200–500 ms). Gradient fields spatially encode these signals for reconstruction into images. MRI delivers superior soft-tissue contrast and spatial resolution (0.5–2 mm) without ionizing radiation, supporting multiplanar and functional (e.g., fMRI) imaging in 3D or 4D formats. Magnitude MRI data exhibit Rician noise, and motion artifacts like ghosting from patient or physiological movement (e.g., respiration) cause blurring or replicas along the phase-encoding direction.

Positron emission tomography (PET) focuses on functional and metabolic imaging using positron-emitting radiotracers (e.g., 18F-FDG) injected into the patient. A radionuclide decays by emitting a positron, which annihilates with an electron ~1–2 mm away, producing two oppositely directed 511 keV gamma rays detected in coincidence by a ring of scintillators, defining lines of response for tomographic reconstruction. This yields quantitative 3D maps of tracer uptake, with spatial resolution of 4–6 mm limited by positron range and photon non-collinearity. PET data are low-resolution volumes with high noise from random and scatter events, often requiring correction; the modality supports 4D dynamic studies of processes like blood flow.
Artifacts include attenuation mismatches in obese patients.

Ultrasound imaging employs non-ionizing high-frequency sound waves (1–20 MHz) generated by piezoelectric transducers, which propagate through tissues at ~1540 m/s and reflect at interfaces due to acoustic impedance mismatches (Z = density × speed of sound). Strong reflectors such as bone appear echogenic (bright), while fluids are anechoic (dark); echoes are amplified and time-gain compensated to form real-time 2D or 3D images. It offers excellent temporal resolution (>30 frames/s) for dynamic visualization, but spatial resolution varies (0.1–1 mm axially, poorer laterally), with limited penetration depth (10–30 cm) in air- or bone-filled regions. Data characteristics include speckle noise from coherent interference and artifacts like shadowing behind dense structures or reverberation from repetitive echoes. Operator dependence affects reproducibility.

Hybrid modalities integrate complementary principles for enhanced diagnostic value, such as PET-MRI, which simultaneously acquires metabolic PET data with high-contrast anatomical MRI in a single session, reducing motion misalignment and radiation dose compared to PET-CT. This produces aligned 4D multimodal volumes well suited to joint anatomical-functional analysis, with PET resolution augmented by MRI's soft-tissue detail. These modalities' outputs often require initial preprocessing for noise reduction, such as filtering Poisson noise in CT and PET data, to prepare them for computing tasks.
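As a small worked example of how CT intensities are handled computationally, the sketch below converts linear attenuation coefficients to the Hounsfield units mentioned above and applies a display window. The water attenuation value (~0.19 cm^-1) is an assumed typical figure for diagnostic energies, and both function names are ours:

```python
import numpy as np

def attenuation_to_hu(mu, mu_water=0.19):
    """Convert linear attenuation coefficients (1/cm) to Hounsfield units.

    HU = 1000 * (mu - mu_water) / mu_water, so water maps to 0 and air to
    about -1000. mu_water ~ 0.19/cm is an assumed representative value.
    """
    return 1000.0 * (mu - mu_water) / mu_water

def window(hu, center, width):
    """Map HU values into [0, 1] display intensities for a given window."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

hu = attenuation_to_hu(np.array([0.0, 0.19, 0.38]))   # air, water, dense tissue
print(hu)                                             # ~[-1000, 0, 1000]
# A soft-tissue window (center 40 HU, width 400 HU) saturates air and bone:
print(window(hu, center=40.0, width=400.0))           # ~[0.0, 0.4, 1.0]
```

Windowing of this kind is a display-side operation; quantitative analysis works on the HU values directly.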

Data Formats and Preprocessing

Medical image data requires standardized formats to facilitate storage, exchange, and retrieval across diverse systems and applications. The Digital Imaging and Communications in Medicine (DICOM) standard serves as the primary format for most clinical modalities, defining protocols for encoding image data, metadata (including patient demographics, acquisition parameters, and study details), and network communications to enable seamless exchange between devices and institutions. In neuroimaging, the NIfTI format has become a de facto standard, extending the earlier ANALYZE format by incorporating explicit affine transformations for orientation and supporting multidimensional arrays up to 7D, which simplifies handling of functional and structural data. For large, heterogeneous datasets—such as those from multi-omics studies or high-throughput imaging—HDF5 provides a flexible, hierarchical structure that accommodates complex objects like arrays, groups, and attributes, optimizing storage and access for computational pipelines in medical research.

Preprocessing transforms raw images to mitigate acquisition artifacts and variations, ensuring suitability for downstream analysis. Intensity normalization adjusts pixel values to a common scale, with histogram equalization being a foundational method that spreads out the intensity distribution to enhance contrast, particularly useful in low-contrast regions of radiographic or MRI images. Noise reduction employs filters like Gaussian smoothing, which convolves the image with a Gaussian kernel to attenuate random fluctuations while maintaining overall structural integrity, commonly applied to reduce thermal or electronic noise in CT and MRI scans. Bias field correction addresses slowly varying intensity inhomogeneities in MRI caused by coil sensitivities; the N4ITK algorithm refines the earlier N3 method by using a B-spline smoothing model to estimate and remove the multiplicative bias field, achieving superior uniformity in brain tissue segmentation tasks.

Handling medical image data involves inherent challenges that impact computational accuracy.
Anisotropic voxels, resulting from slice-selective acquisition in modalities like MRI, introduce directional resolution disparities (e.g., higher in-plane than through-plane resolution), leading to elongated structures in 3D models and errors in quantitative metrics such as those derived from diffusion tensor imaging. Multi-scale resolutions emerge from protocol variations across scanners or sessions, complicating alignment and feature extraction by requiring interpolation that may amplify noise or blur detail during resampling. Metadata extraction poses difficulties due to format-specific inconsistencies, such as optional DICOM tags or proprietary extensions, which hinder automated retrieval of critical details like voxel spacing or contrast agent use without risking data loss or privacy breaches.

Quality assurance pipelines systematically detect and correct artifacts to uphold data integrity before analysis. These workflows often integrate automated tools for artifact identification, such as motion-induced distortions or susceptibility artifacts in MRI; for instance, models like 3D-QCNet employ 3D DenseNet architectures to classify diffusion MRI volumes and localize anomalies, achieving high sensitivity (over 90%) and enabling scalable rejection or correction of affected regions.
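The global histogram equalization step mentioned above can be sketched with NumPy alone. This is a minimal illustration that maps intensities through the normalized cumulative distribution function; clinical pipelines typically prefer adaptive variants such as CLAHE, and the low-contrast test image here is synthetic:

```python
import numpy as np

def equalize_histogram(img, n_bins=256):
    """Global histogram equalization: remap each intensity through the
    normalized cumulative distribution function (CDF) of the image."""
    hist, bin_edges = np.histogram(img.ravel(), bins=n_bins)
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                                   # normalize CDF to [0, 1]
    centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0 # bin center intensities
    # Interpolate every pixel's intensity onto the CDF curve
    return np.interp(img.ravel(), centers, cdf).reshape(img.shape)

rng = np.random.default_rng(0)
low_contrast = rng.normal(100.0, 5.0, size=(64, 64))  # narrow intensity range
eq = equalize_histogram(low_contrast)
# The narrow input range is stretched across nearly the full [0, 1] interval
print(eq.min() >= 0.0 and eq.max() <= 1.0)            # True
```

Because the mapping is monotone, the relative ordering of tissue intensities is preserved while contrast between nearby intensity levels is expanded.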

Mathematical Foundations

Image Formation and Reconstruction

In medical image computing, image formation refers to the mathematical modeling of how raw sensor data is generated from the underlying tissue properties, while reconstruction involves inverting these models to recover the image. For computed tomography (CT), image formation is based on the projection geometry, where X-rays pass through the body and are attenuated according to the Beer-Lambert law, which relates the measured intensity to the integral of the object's density along lines of projection. In parallel-beam geometry, projections are acquired from multiple angles assuming non-diverging rays, forming the basis for analytical reconstruction. Fan-beam geometry, commonly used in modern scanners, extends this by accounting for the diverging fan of rays from a point source, which requires rebinning to parallel projections or direct fan-beam formulas to handle the geometry.

In magnetic resonance imaging (MRI), image formation occurs in k-space, the Fourier domain, where the spatial frequency components of the image are encoded through gradient fields modulating the radiofrequency signals from hydrogen protons. The raw MRI data represents samples of the continuous Fourier transform of the spin density distribution, and the image is obtained by applying the inverse Fourier transform. This Fourier basis allows for flexible sampling trajectories, such as Cartesian or radial paths through k-space.

Reconstruction algorithms invert these forward models to estimate the image from measured projections or k-space data. In CT, filtered back-projection (FBP) is a widely adopted analytical method that applies a ramp filter to the projections before back-projecting them onto the image grid. The core formula for parallel-beam FBP is f(x,y) = \int_0^\pi \int_{-\infty}^\infty p(\theta, s) \, h(x \cos \theta + y \sin \theta - s) \, ds \, d\theta, where f(x,y) is the reconstructed image density, p(\theta, s) is the projection data at angle \theta and detector offset s, and h denotes the ramp filter kernel, which compensates for the blurring inherent in simple back-projection. This approach, originally formulated using convolution instead of Fourier transforms for computational efficiency, enables rapid reconstruction but can amplify noise without apodization.
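A minimal numerical sketch of filtered back-projection follows. It assumes an idealized point source at the volume center, whose parallel-beam projections are simply a spike in every view; the function names and toy geometry are ours and this is not a clinical implementation:

```python
import numpy as np

def ramp_filter(sinogram):
    """Filter each projection row with the ramp |f| in the frequency domain."""
    freqs = np.fft.fftfreq(sinogram.shape[1])
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs),
                               axis=1))

def back_project(sinogram, thetas, size):
    """Smear each filtered projection back across the image grid and sum."""
    recon = np.zeros((size, size))
    c = (size - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(size) - c, np.arange(size) - c)
    bins = np.arange(size, dtype=float)
    for p, th in zip(sinogram, thetas):
        s = xs * np.cos(th) + ys * np.sin(th) + c   # detector coordinate per pixel
        recon += np.interp(s.ravel(), bins, p).reshape(size, size)
    return recon * np.pi / len(thetas)

# Projections of a point source at the center are a spike in every view.
size = 65
thetas = np.linspace(0.0, np.pi, 90, endpoint=False)
sino = np.zeros((len(thetas), size))
sino[:, size // 2] = 1.0
recon = back_project(ramp_filter(sino), thetas, size)
print(np.unravel_index(recon.argmax(), recon.shape))  # (32, 32): point recovered
```

Removing the `ramp_filter` call reproduces the blurring the text describes: simple back-projection spreads the point into a broad 1/r halo instead of a sharp peak.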
For positron emission tomography (PET), where projections represent line integrals of radionuclide emissions modeled as Poisson processes, iterative methods like expectation-maximization (EM) are preferred because they incorporate statistical noise models and system matrices. The EM algorithm iteratively updates the image estimate to maximize the likelihood, alternating between expectation (computing expected counts given the current estimate) and maximization (adjusting the estimate to fit the observed data), improving convergence over direct methods in low-count scenarios.

Compressed sensing has revolutionized reconstruction in MRI by exploiting image sparsity in transform domains to enable sampling below traditional limits, reducing scan times. The core formulation minimizes the \ell_1-norm of the sparse coefficients subject to data consistency: \min \| \Psi x \|_1 \quad \text{s.t.} \quad A x = b, where x is the image, \Psi is the sparsifying transform (e.g., a wavelet transform), A is the undersampled encoding matrix, and b is the vector of measurements. This nonlinear recovery, solved via convex optimization, allows acceleration factors of 3–5 in clinical protocols while suppressing artifacts.

Resolution in reconstructed images is fundamentally limited by sampling theory, particularly the Nyquist-Shannon theorem, which requires sampling at least twice the highest spatial frequency to avoid aliasing. In practice, this dictates the minimum number of projection angles in CT or the k-space sampling density in MRI; undersampling below this rate introduces wrap-around artifacts, while denser sampling enhances resolution at the cost of acquisition time. Preprocessing steps, such as denoising, may follow reconstruction to refine the image representation.
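The multiplicative EM update for emission tomography (the MLEM algorithm) can be sketched on a toy system. The 6x4 system matrix below is random and hypothetical, and noise-free data are used so that convergence toward the true activity is easy to observe; a real scanner would supply A from its geometry and y as Poisson counts:

```python
import numpy as np

# Toy MLEM: a hypothetical 6-detector, 4-pixel system (not a real geometry).
rng = np.random.default_rng(1)
A = 0.5 + rng.random((6, 4))        # strictly positive detection weights
x_true = np.array([1.0, 4.0, 2.0, 3.0])
y = A @ x_true                      # noise-free "measured" counts

x = np.ones(4)                      # nonnegative uniform initial estimate
sensitivity = A.sum(axis=0)         # A^T 1, per-pixel sensitivity
for _ in range(5000):
    expected = A @ x                              # E-step: forward projection
    x *= (A.T @ (y / expected)) / sensitivity     # M-step: multiplicative update
print(np.round(x, 2))               # approaches x_true = [1, 4, 2, 3]
```

The update is multiplicative, so a nonnegative starting image stays nonnegative throughout, one reason MLEM suits Poisson count data where negative intensities are meaningless.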

Signal Processing and Filtering

Signal processing and filtering play a crucial role in medical image computing by enhancing image quality, reducing noise, and extracting meaningful features from acquired data such as MRI, CT, and ultrasound images. These techniques operate primarily on the pixel intensities or frequency components of images to mitigate artifacts introduced during acquisition, including Gaussian noise, speckle, or impulse noise, thereby improving diagnostic accuracy and enabling downstream analyses like segmentation.

In the spatial domain, basic filtering methods such as mean and median filters are widely used for denoising medical images. The mean filter, also known as the average filter, smooths an image by replacing each pixel value with the average of its neighbors within a defined window, effectively reducing random noise but potentially blurring edges in CT or MRI scans. The median filter, on the other hand, replaces each pixel with the median value of its neighborhood, making it particularly effective for removing impulse noise like salt-and-pepper artifacts, while preserving edges better than the mean filter.

Frequency domain filtering leverages the Fourier transform to analyze and modify the spectral content of medical images, allowing for targeted noise suppression or enhancement. The two-dimensional Fourier transform of an image f(x,y) is given by F(u,v) = \iint f(x,y) e^{-j2\pi(ux+vy)} \, dx \, dy, which decomposes the image into its frequency components; low-pass filters attenuate high frequencies to smooth images and reduce noise in modalities like MRI, while high-pass filters emphasize high frequencies to sharpen edges and highlight fine structures. Advanced methods include wavelet transforms for multi-resolution analysis, which decompose medical images into subbands capturing details at varying scales, facilitating denoising and feature extraction in applications such as segmentation of regions of interest.
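A frequency-domain low-pass filter of the kind described above can be sketched as follows; the Gaussian transfer function and the synthetic noisy test image are illustrative choices, not a prescribed clinical filter:

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Low-pass filter in the frequency domain: multiply the 2D FFT of the
    image by a Gaussian that attenuates high spatial frequencies, then invert.
    sigma is the equivalent spatial-domain smoothing scale in pixels."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    H = np.exp(-2.0 * (np.pi ** 2) * (sigma ** 2) * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

rng = np.random.default_rng(2)
clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0   # synthetic "structure"
noisy = clean + rng.normal(0.0, 0.3, clean.shape)
smooth = gaussian_lowpass(noisy, sigma=2.0)
# Smoothing reduces the pixelwise error against the clean reference
print(np.abs(smooth - clean).mean() < np.abs(noisy - clean).mean())  # True
```

The trade-off the text mentions is visible at the square's border: noise in flat regions drops sharply, while the edge itself is slightly blurred.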
For edge detection, the Canny algorithm is a seminal approach applied to medical images, involving Gaussian smoothing followed by computation of the gradient magnitude |\nabla I| = \sqrt{G_x^2 + G_y^2}, where G_x and G_y are the gradients in the x and y directions, to identify strong edges while suppressing noise in brain and other anatomical scans. Deconvolution techniques address blur in microscopy images, with the Richardson-Lucy algorithm being a widely adopted iterative method for restoring degraded signals under the Poisson noise models prevalent in fluorescence microscopy. The update rule is x^{k+1} = x^k \cdot \left( \hat{b} * \frac{y}{b * x^k} \right), where x^k is the estimate at iteration k, b is the point spread function, \hat{b} is its mirrored version, y is the observed image, and * denotes convolution; this approach enhances contrast and resolves fine structures in 3D confocal images of biological tissues. Multiscale processing employs Gaussian pyramids to create hierarchical representations of medical images, enabling efficient feature extraction across resolutions by successively applying Gaussian smoothing and downsampling, which is useful for coarse-to-fine tasks like registration.
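The Richardson-Lucy update above can be sketched in 1D with FFT-based circular convolution. The two-spike signal and Gaussian point spread function are synthetic, and a flipped copy of the PSF implements the adjoint (correlation) step denoted \hat{b}:

```python
import numpy as np

def richardson_lucy(y, psf, n_iter=200):
    """Richardson-Lucy deconvolution (1D, circular convolution via FFT).
    The adjoint step correlates with the PSF, i.e. convolves with its mirror."""
    conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
    psf_mirror = np.roll(psf[::-1], 1)        # mirrored PSF for the adjoint
    x = np.full_like(y, y.mean())             # flat, positive initial estimate
    for _ in range(n_iter):
        ratio = y / np.maximum(conv(x, psf), 1e-12)   # data / model prediction
        x = x * conv(ratio, psf_mirror)               # multiplicative update
    return x

# Blur two spikes with a narrow Gaussian PSF, then restore them.
n = 64
signal = np.zeros(n); signal[20] = 1.0; signal[40] = 0.5
k = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
psf = np.roll(k / k.sum(), -(n // 2))         # unit-sum PSF centered at index 0
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf)))
restored = richardson_lucy(blurred, psf)
print(int(np.argmax(restored)))               # strongest restored spike, near 20
```

Like MLEM, the multiplicative form keeps the estimate nonnegative, which matches the Poisson photon-count model of fluorescence data.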

Core Processing Techniques

Segmentation

Segmentation in medical image computing involves partitioning images into meaningful regions corresponding to anatomical structures, such as organs, tumors, or pathological tissues, to facilitate diagnosis, quantification, and treatment planning. These delineations isolate regions of interest (ROIs) from surrounding structures, enabling tasks like volume measurement and feature extraction. Classical methods, which rely on hand-crafted image features like intensity and gradients rather than data-driven learning, form the foundation of segmentation techniques and remain relevant for their interpretability and efficiency in specific scenarios.

Thresholding is a foundational classical method that classifies pixels into foreground and background based on intensity thresholds, producing binary segmentations suitable for images with distinct intensity distributions. Otsu's method automates threshold selection by exhaustively searching for the value that maximizes the between-class variance, formulated as \sigma_B^2 = w_1 w_2 (\mu_1 - \mu_2)^2, where w_1, w_2 are the proportions of pixels in each class and \mu_1, \mu_2 are their respective means. This approach assumes a bimodal histogram and has been widely applied in medical imaging for segmenting high-contrast structures, such as bones in CT scans or white matter in MRI, achieving rapid results but requiring multimodal extensions for complex tissues.

Region growing extends thresholding by initiating segmentation from user-specified seed points and iteratively incorporating adjacent pixels that meet a homogeneity criterion, often intensity similarity within a tolerance range. This semi-automatic technique excels in segmenting connected, homogeneous regions like liver tumors in abdominal CT, where seeds can be placed interactively, though it demands careful seed selection to avoid leakage into adjacent structures.
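Otsu's exhaustive search can be sketched directly from the between-class variance formula above; the bimodal test data here are synthetic draws standing in for a dark background and a bright structure:

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Exhaustive search for the threshold maximizing the between-class
    variance sigma_B^2 = w1 * w2 * (mu1 - mu2)^2 over the histogram."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    p = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, n_bins):
        w1, w2 = p[:i].sum(), p[i:].sum()        # class proportions
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (p[:i] * centers[:i]).sum() / w1   # class means
        mu2 = (p[i:] * centers[i:]).sum() / w2
        var = w1 * w2 * (mu1 - mu2) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

# Bimodal toy "scan": background around intensity 50, structure around 200
rng = np.random.default_rng(3)
img = np.concatenate([rng.normal(50, 10, 5000), rng.normal(200, 10, 5000)])
t = otsu_threshold(img)
print(50 < t < 200)   # True: the threshold falls between the two modes
```

Applying `img > t` then yields the binary foreground mask described in the text.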
Active contours, commonly known as snakes, model object boundaries as deformable curves that evolve to minimize a total energy functional E = \int (E_{\text{int}} + E_{\text{ext}}) \, ds, where the internal energy E_{\text{int}} imposes smoothness and continuity constraints, and the external energy E_{\text{ext}} is derived from image gradients to attract the contour toward edges. Introduced for feature extraction, snakes have been adapted for medical applications, such as delineating cardiac boundaries in echocardiography or vessel walls in angiography, providing sub-pixel accuracy when initialized near the target. Graph-based methods, exemplified by graph cuts, represent the image as a weighted graph with pixels as nodes and edges encoding regional and boundary costs; binary segmentation is then solved as a minimum cut that separates source (object) and sink (background) terminals, yielding globally optimal solutions for energy minimization. This interactive framework supports user scribbles to guide segmentation and has proven effective for multi-dimensional medical volumes, such as prostate delineation in MRI, balancing boundary fidelity and regional consistency.

Performance of segmentation methods is assessed using overlap and boundary-based metrics to quantify agreement with reference annotations. The Dice Similarity Coefficient (DSC) measures volumetric overlap as DSC = \frac{2 |A \cap B|}{|A| + |B|}, where A and B are the segmented and reference sets, respectively; values above 0.8 often indicate clinically viable results for structures like the liver. The Hausdorff distance complements DSC by capturing boundary errors as the maximum minimum distance between points on the two surfaces, d_H(A, B) = \max\left(\sup_{a \in A} \inf_{b \in B} d(a,b), \sup_{b \in B} \inf_{a \in A} d(a,b)\right), with lower values (e.g., under 5 mm) signifying precise edge alignment, though it is sensitive to outliers like small segmentation artifacts.
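Both metrics are straightforward to compute for binary masks. The sketch below uses two overlapping synthetic squares; the brute-force pairwise Hausdorff computation is for illustration only (production tools use distance transforms for efficiency):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two point sets (brute force)."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping square "segmentations" on a 32x32 grid, offset by 2 pixels
a = np.zeros((32, 32), bool); a[8:20, 8:20] = True
b = np.zeros((32, 32), bool); b[10:22, 10:22] = True
print(round(dice(a, b), 3))                        # 0.694
print(hausdorff(np.argwhere(a), np.argwhere(b)))   # 2.828... (= 2*sqrt(2))
```

Note the complementary behavior the text describes: a shift of two pixels barely moves the Dice score for a large region, while the Hausdorff distance reports the corner-to-corner boundary error directly.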
Key challenges in classical segmentation include maintaining topological correctness, such as preserving connectivity (e.g., no artificial holes in solid organs like the liver), which thresholding and region growing often violate through disconnected components or over-merging. Partial volume effects, caused by the finite resolution of imaging voxels blending signals from adjacent tissues, further complicate delineation by creating ambiguous boundaries for gradient-based methods like snakes, leading to smoothed or erroneous contours in low-contrast regions such as soft tissues in MRI. These issues underscore the need for robust preprocessing and hybrid approaches to enhance reliability across modalities.

Registration

Medical image registration is a fundamental process in medical image computing that involves aligning two or more images of the same or different subjects, acquired at different times or using different imaging modalities, to a common spatial coordinate system. This alignment enables the integration of complementary information, such as combining anatomical details from CT or MRI with functional data from PET, facilitating accurate diagnosis, treatment planning, and longitudinal studies. The process typically involves estimating a spatial transformation that maximizes a similarity metric between the images while ensuring the transformation is physically plausible, such as preserving tissue topology.

Registration methods are categorized by the type of transformation applied, ranging from simple rigid alignments to complex deformable models. Rigid registration accounts only for translations and rotations, using six degrees of freedom in 3D, and is suitable for aligning images where anatomical structures maintain their shape and size, such as intra-subject scans with minimal deformation. Affine transformations extend this by including scaling and shearing, with up to 12 degrees of freedom, allowing for global distortions like those caused by different scanner resolutions. Non-rigid or deformable registration handles local deformations, essential for scenarios involving motion or growth; a seminal example is the Demons algorithm, which models the displacement field u as a diffusion process governed by the equation \frac{\partial u}{\partial t} = \Delta u + f, where \Delta is the Laplacian operator and f represents forces derived from image intensity differences, enabling smooth, topology-preserving warps.

Similarity measures quantify how well the images align after transformation, guiding the optimization. For monomodal registration, where images are from the same modality, normalized cross-correlation is widely used, as it is robust to linear intensity variations and computes the correlation between corresponding intensities to maximize overlap.
In multimodal cases, mutual information serves as a robust metric, capturing statistical dependencies without assuming linear intensity relationships; it is defined as MI(X,Y) = H(X) + H(Y) - H(X,Y), where H(X) and H(Y) are the marginal entropies of images X and Y, and H(X,Y) is their joint entropy, allowing alignment of images like MRI and CT despite differing contrast mechanisms.

Optimization techniques iteratively refine the transformation parameters to maximize the chosen similarity measure. Gradient descent methods, including steepest descent and conjugate gradient variants, are commonly employed due to their efficiency in navigating high-dimensional parameter spaces, particularly for intensity-based metrics where derivatives can be computed analytically. For non-convex optimization landscapes, such as those in non-rigid registration, evolutionary algorithms like genetic algorithms provide global search capabilities, evolving a population of candidate transformations through selection, crossover, and mutation to avoid local minima. These approaches often incorporate multi-resolution strategies, starting at coarse scales to accelerate convergence.

Key applications of registration include motion correction, where it compensates for patient or respiratory movements in serial scans, improving image quality in modalities like MRI and PET. Another critical use is atlas mapping, aligning patient images to standardized anatomical templates for automated segmentation and labeling, as seen in studies where registration to a reference atlas enables volumetric measurements across populations. These applications underscore registration's role in enhancing clinical workflows and research reproducibility.
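The mutual information measure can be estimated from a joint intensity histogram, as sketched below. The exponential intensity mapping simulates a modality change (same anatomy, different contrast), and the circular shift simulates misalignment; all data are synthetic:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Estimate MI(X,Y) = H(X) + H(Y) - H(X,Y) from a joint histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]                       # ignore empty histogram bins
        return -(p * np.log2(p)).sum()

    return entropy(p_x) + entropy(p_y) - entropy(p_xy.ravel())

rng = np.random.default_rng(4)
fixed = rng.random((64, 64))
# A nonlinear intensity mapping (as between modalities) preserves dependence:
moving_aligned = np.exp(-3.0 * fixed)
moving_shifted = np.roll(moving_aligned, 5, axis=1)   # simulated misalignment
print(mutual_information(fixed, moving_aligned) >
      mutual_information(fixed, moving_shifted))      # True: MI peaks at alignment
```

This is why MI works where cross-correlation fails: the aligned pair scores high despite the nonlinear, inverted intensity relationship, and a registration optimizer simply searches for the transformation that maximizes it.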

Visualization

Visualization in medical image computing involves techniques for rendering and interacting with multidimensional image data to facilitate clinical interpretation and decision-making. These methods transform raw volumetric datasets, such as those from CT or MRI scans, into intuitive visual representations that highlight anatomical structures, pathologies, and functional aspects without invasive procedures. Effective visualization enhances diagnostic accuracy by allowing clinicians to explore data in multiple views and dimensions, often integrating user interactions for dynamic exploration.

A fundamental approach is 2D and 3D rendering, which includes volume rendering and surface rendering. Volume rendering directly visualizes the entire 3D dataset by simulating light propagation through the volume, preserving internal details like tissue densities. A seminal technique is ray casting, where rays are projected from the viewpoint through the volume, accumulating color and opacity along each ray to generate the final image; opacity is composited using front-to-back accumulation, where the accumulated color C and opacity \alpha are updated at each sample as C \leftarrow C + (1 - \alpha) c_s \alpha_s and \alpha \leftarrow \alpha + (1 - \alpha) \alpha_s, with c_s and \alpha_s being the sampled color and opacity, respectively, until the ray becomes opaque or exits the volume. This method, introduced in early work on volume rendering, enables photorealistic depictions of soft tissues and contrasts in medical scans. In contrast, surface rendering extracts and displays isosurfaces—boundaries where scalar values meet a chosen isovalue—reducing computational load for opaque structures like bones or organs. The widely adopted marching cubes algorithm processes the volume cell by cell, interpolating vertices on edges where the isosurface crosses and triangulating the resulting polygon within each cube to form a mesh for rendering; this approach generates high-resolution surfaces from volumetric data, forming the basis for many clinical tools.
Interaction methods enable clinicians to navigate and manipulate these renderings for detailed inspection. Slice navigation allows sequential browsing through orthogonal 2D planes (axial, sagittal, coronal) of the volume, providing a foundational interactive view for identifying regions of interest. Multi-planar reconstruction (MPR) extends this by generating arbitrary or curved planes from the data, reformatting slices along user-defined orientations to better align with anatomical axes or lesions, which improves visualization of complex structures like vessels or tumors. Virtual endoscopy simulates an endoscope's perspective by rendering internal surfaces along a virtual path within hollow organs, such as the colon or airways, using perspective volume or surface rendering on segmented surfaces to mimic optical endoscopy without physical insertion; this technique aids in detecting polyps or stenoses preoperatively.

Advanced techniques leverage hardware and immersive technologies for enhanced utility. GPU-accelerated rendering exploits parallel processing on graphics hardware to perform ray casting or texture-based slicing in real-time, achieving interactive frame rates (e.g., 30+ fps) for large datasets exceeding 512^3 voxels, which is essential for intraoperative use. Integration of virtual reality (VR) and augmented reality (AR) overlays 3D reconstructions onto the surgical field or immersive environments, supporting preoperative planning by allowing manipulation of patient-specific models to rehearse procedures and assess risks. For instance, VR headsets enable stereoscopic viewing of tumor resections relative to critical structures. Atlases may serve as reference overlays in these visualizations to contextualize patient data against normative anatomy.
Key challenges in medical image visualization include handling occlusions, where foreground structures obscure relevant deeper anatomy, addressed through techniques like transfer-function editing to modulate opacity, and ensuring performance amid increasing data volumes from high-resolution modalities, often mitigated by adaptive sampling or hierarchical data structures. These issues demand ongoing advancements to balance fidelity and usability in clinical workflows.

Atlases and Anatomical Modeling

Single-Subject Atlases

Single-subject atlases in medical image computing are reference templates derived from the anatomical data of a single individual, typically constructed through expert manual segmentation of high-resolution imaging scans to delineate anatomical structures and regions. These atlases provide a fixed frame of reference for mapping and analysis, often starting with a postmortem specimen or a high-resolution in vivo scan that is meticulously labeled based on histological or radiological criteria. For instance, the Talairach atlas was developed from coronal sections of a single 60-year-old woman's postmortem brain, sliced at 1 mm intervals with every 10th section stained for detailed parcellation of subcortical and cortical areas. Similarly, the MNI Colin 27 template was created by averaging 27 T1-weighted MRI scans from one healthy young male subject (CJH), yielding a high-resolution (1 mm isotropic) volume that serves as a prior for anatomical labeling. This manual or semi-automated labeling process ensures precise boundaries but relies heavily on the expertise of neuroanatomists to define regions like subcortical nuclei or cortical gyri.

These atlases are primarily applied in neuroimaging to establish standardized coordinate systems for reporting and inter-subject alignment, facilitating the localization of abnormalities or activations across studies. In functional MRI (fMRI) and lesion analysis, they enable precise notation of stereotactic targets in neurosurgery or activation foci in cognitive tasks, allowing comparisons without population-specific adjustments. The MNI Colin 27 space, for example, supports nonlinear normalization of individual scans to a common framework, aiding in automated segmentation tools like those in SPM or FSL software for volumetric analysis. Such applications are crucial for early diagnostic pipelines, where single-subject templates provide a quick, deterministic reference for aligning images from modalities like MRI or PET, with registration steps used to warp subject data to the atlas space.
Despite their utility, single-subject atlases exhibit significant limitations due to their reliance on one individual's anatomy, which introduces bias and fails to account for inter-subject variability in brain shape, size, and sulcal patterns. The Talairach atlas, derived from an elderly female postmortem brain, poorly represents living brains or younger demographics, leading to misalignment errors of up to several millimeters in spatial normalization. Likewise, the Colin 27 template, while sharper than averaged alternatives, inherits idiosyncrasies from its single donor, such as atypical gyral folding, which can distort group-level inferences in diverse cohorts like pediatric or pathological cases. These constraints often necessitate supplementary probabilistic adjustments, but the inherent lack of variability representation limits their accuracy in population-level studies.

Multi-Subject Atlases

Multi-subject atlases in medical image computing represent population-level models that integrate data from multiple individuals to account for inter-subject anatomical variability, typically using probabilistic frameworks to encode uncertainty and statistical distributions of structures. Unlike single-subject exemplars, these atlases generate unbiased templates through iterative alignment and averaging techniques, enabling robust representation of normal anatomical variation across cohorts. Construction often employs large deformation diffeomorphic metric mapping (LDDMM), which computes geodesic flows on diffeomorphism groups to achieve bias-free averaging by simultaneously estimating transformations that minimize deformation energy while aligning images to an evolving mean template.

Probabilistic labeling in multi-subject atlases incorporates maximum a posteriori (MAP) estimation to assign labels that maximize the joint probability of observed image intensities and prior anatomical models, often derived from Bayesian inference on training datasets. This approach yields voxel-wise probability maps for tissue classes or regions, capturing variability in shape, size, and orientation. Common types include DARTEL (Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra), which creates unbiased templates via high-dimensional diffeomorphic warps on Lie algebra representations of velocity fields, facilitating group-wise normalization without privileging any single subject. Another prevalent type is multi-atlas fusion, where labels from multiple pre-registered atlases are propagated to a target image via deformable registration, followed by consensus voting using methods like STAPLE (Simultaneous Truth and Performance Level Estimation) to weight contributions based on estimated expert reliability and achieve a fused probabilistic segmentation.
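Before reliability-weighted fusion such as STAPLE, the simplest consensus step is a per-voxel majority vote over the propagated atlas labels. A minimal sketch, assuming the atlases are already registered to the target and `majority_vote_fusion` is a hypothetical helper:

```python
import numpy as np

def majority_vote_fusion(label_maps):
    """Fuse labels from multiple registered atlases by per-voxel
    majority vote -- a baseline for the consensus step that STAPLE
    refines with per-atlas reliability weights."""
    stacked = np.stack(label_maps)          # (n_atlases, *image_shape)
    n_labels = int(stacked.max()) + 1
    # Count votes for each label at every voxel, then take the argmax.
    votes = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 2x2 binary label maps; the fused map follows the majority.
a1 = np.array([[0, 1], [1, 1]])
a2 = np.array([[0, 1], [0, 1]])
a3 = np.array([[1, 1], [0, 0]])
fused = majority_vote_fusion([a1, a2, a3])
```

STAPLE replaces the uniform vote with iteratively estimated sensitivity and specificity per atlas, so that reliable atlases dominate the consensus.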
These atlases find key applications in disease-specific modeling, such as brain atlases that delineate atrophy patterns in structures like the hippocampus and entorhinal cortex across patient cohorts, aiding early diagnosis and progression tracking. Recent AI-assisted atlases (as of 2025) further enhance detail in MRI visualization by using deep learning for probabilistic modeling. By representing population statistics, multi-subject atlases capture normal variation in healthy populations and pathological deviations, thereby improving segmentation accuracy in automated pipelines compared to single-atlas methods. This enhanced precision supports downstream tasks like groupwise analysis without introducing bias from individual exemplars.

Statistical and Analytical Methods

Groupwise and Population Analysis

Groupwise and population analysis in medical image computing encompasses statistical frameworks for detecting and quantifying variations in anatomical or functional patterns across cohorts of subjects, typically using registered images in a common reference space such as a multi-subject atlas. These methods enable the identification of group-level differences, such as tissue volume reductions in neurodegenerative diseases, by applying statistical techniques to high-dimensional image data after preprocessing steps like segmentation and spatial normalization. Unlike single-subject analyses, groupwise approaches account for inter-subject variability and control for multiple comparisons across thousands of voxels or regions, providing robust evidence for population-level effects.

A foundational technique is voxel-based morphometry (VBM), which assesses local differences in gray matter volume or concentration by segmenting brain tissues from MRI scans and performing voxel-wise statistics on the normalized images. To preserve absolute volume information during spatial normalization, VBM employs modulation, where the segmented images are multiplied by the Jacobian determinant of the deformation field, compensating for contraction or expansion effects and enabling the detection of true tissue changes rather than artifacts of alignment. This modulation step enhances sensitivity to volumetric alterations, such as cortical thinning in neurodegeneration, and was detailed in the seminal methodological paper by Ashburner and Friston in 2000. VBM has been widely applied in over 7,000 studies since its inception, underscoring its impact on structural neuroimaging research. Complementing VBM, tensor-based morphometry (TBM) derives 3D maps of regional tissue expansion or contraction directly from the full deformation tensors obtained during non-linear registration, offering greater sensitivity to subtle, smooth changes in brain structure compared to scalar measures alone.
By analyzing the determinant of the Jacobian matrix at each voxel, TBM quantifies local volume differences without requiring explicit segmentation, making it particularly effective for detecting progressive atrophy in conditions like Alzheimer's disease, where it has revealed widespread gray matter loss in large cohorts. The approach gained prominence through Hua et al.'s 2008 cross-sectional study on 676 subjects from the Alzheimer's Disease Neuroimaging Initiative, demonstrating TBM's utility as a biomarker for early detection; longitudinal extensions have shown effect sizes up to 2-3% annual volume reduction in affected regions.

For group comparisons, voxel-wise tests such as independent t-tests for two-group contrasts or ANOVA for multi-group designs are applied to the processed images, assuming approximate normality after Gaussian smoothing, which enhances sensitivity and spatial correlation. These mass-univariate models treat each voxel independently while incorporating covariates like age or sex to isolate disease-related effects. To handle the inherent multiple testing problem and validate significance non-parametrically, permutation testing randomizes group labels thousands of times to generate empirical null distributions, controlling family-wise error rates via cluster-level thresholding or topological methods. This framework, introduced by Nichols and Holmes in 2002, ensures reliable inference in datasets with non-Gaussian noise. The Statistical Parametric Mapping (SPM) software implements these procedures for mass-univariate inference, supporting flexible general linear models and visualization of results as statistical parametric maps.

In population-level studies, normative modeling builds probabilistic models of imaging metrics from large healthy cohorts to establish benchmarks of normal variation, enabling the quantification of individual deviations via standardized z-scores calculated as (observed - normative mean) / normative standard deviation.
This approach detects subtle abnormalities by flagging z-scores exceeding thresholds (e.g., |z| > 2), as seen in applications to cortical thickness where deviations highlight atypical aging trajectories. The framework, developed for computational psychiatry by Marquand et al. in 2019, has been extended to diverse modalities, emphasizing hierarchical Bayesian models to capture age- and sex-dependent norms from cohorts exceeding 1,000 subjects.
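The z-score deviation check above reduces to a few lines. This sketch uses a synthetic cortical-thickness-like cohort with illustrative values; real normative models condition the mean and standard deviation on age, sex, and site.

```python
import numpy as np

def normative_z(observed, norm_values, threshold=2.0):
    """z = (observed - normative mean) / normative std; flag |z| > threshold.

    Simplified: a real normative model predicts mu and sd per subject
    from covariates rather than pooling one cohort."""
    mu = norm_values.mean()
    sd = norm_values.std(ddof=1)
    z = (observed - mu) / sd
    return z, abs(z) > threshold

# Toy healthy cohort of mean-thickness values (mm); numbers illustrative.
rng = np.random.default_rng(42)
cohort = rng.normal(2.5, 0.15, size=500)

z_typical, flag_typical = normative_z(2.55, cohort)    # near the norm
z_atypical, flag_atypical = normative_z(1.90, cohort)  # well below it
```

Only the clearly atypical observation is flagged, which is the intended behavior: individual deviations are quantified against the cohort rather than against a hard population cutoff.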

Shape and Deformation Analysis

Shape and deformation analysis in medical image computing involves the quantitative characterization of anatomical structures' geometry and transformations derived from imaging data, enabling the detection of morphological variations associated with development or disease. This subfield focuses on representing shapes parametrically and measuring deformations to quantify subtle changes in anatomy, such as curvatures, volumes, and displacements, often using surfaces extracted from modalities like MRI or CT. Deformations are typically obtained from non-rigid registration processes that align images while preserving topological properties.

Key methods for shape representation include active shape models (ASMs), which use principal component analysis (PCA) on sets of landmark points to capture shape variability. In ASMs, a shape instance x is modeled as the mean shape \bar{x} plus a linear combination of principal modes: x = \bar{x} + P b, where P represents the eigenvectors of shape variations (eigenshapes) and b is a vector of model parameters constrained to ensure plausible shapes. This approach, introduced by Cootes et al. in 1995, allows for compact modeling of flexible objects like organs in 2D or 3D images by statistically learning from training examples. Another prominent technique employs spherical harmonics (SPHARM) to parameterize closed surfaces of genus zero, expanding the surface coordinates in a Fourier-like basis over the unit sphere to provide a hierarchical, multi-scale description of shape. Brechbühler et al. demonstrated that SPHARM enables efficient representation and comparison of complex 3D structures, such as brain subregions, by truncating higher-order harmonics for smoothing while retaining low-frequency global features.

Deformation metrics quantify the magnitude and direction of transformations, with log-Euclidean metrics applied to diffeomorphisms offering a framework for averaging and interpolating smooth, invertible mappings while avoiding singularities in the group structure. Arsigny et al.
showed that this metric facilitates unbiased statistics on deformation fields from registration, improving accuracy in computational anatomy tasks like template estimation. Strain tensor analysis further decomposes deformations into principal components, measuring local stretching, shearing, and compression via the symmetric part of the displacement gradient tensor, which is particularly useful for assessing tissue mechanics in dynamic imaging. Abd-Elmoniem et al. applied this to quantify myocardial strain from cine MRI sequences, revealing heterogeneous deformation patterns in cardiac tissue with sub-millimeter precision.

Applications of these methods include detecting anatomical asymmetries in neurodevelopment, where shape analysis identifies deviations from bilateral symmetry in structures like the hippocampus, potentially signaling early disruptions in maturation. For instance, large-scale studies using deformation-based morphometry have quantified hemispheric asymmetries in pediatric cohorts, associating increased rightward hippocampal bending with neurodevelopmental trajectories. In pathology, such analyses reveal deformation-induced changes, such as inward contractions in Alzheimer's disease or semantic variant primary progressive aphasia, aiding diagnosis by highlighting localized shape alterations beyond volumetric measures.

Validation of shape models relies on establishing point-to-point correspondence across samples, often optimized using the minimum description length (MDL) criterion to balance model complexity and fidelity to the data. Davies et al. proposed an MDL framework that automatically determines landmark placements by minimizing the encoded length of shape variations, ensuring robust, generalizable models for anatomical structures such as cardiac boundaries with reduced overfitting. This approach has been widely adopted to evaluate correspondence quality in statistical shape modeling pipelines.
Recent advances as of 2025 include the integration of deep learning with traditional shape analysis, such as neural networks for automated landmark detection in ASMs, and federated learning frameworks for multi-site deformation analysis, enabling privacy-preserving population studies without data centralization.
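The ASM point-distribution model x = x̄ + P b can be sketched as a PCA over pre-aligned landmark sets. The `fit_shape_model` and `reconstruct` helpers below are illustrative; a full ASM would also align shapes (Procrustes) and constrain b to ±3 standard deviations per mode.

```python
import numpy as np

def fit_shape_model(shapes):
    """PCA point-distribution model: x ~= x_bar + P b.

    `shapes` is (n_samples, n_coords) of flattened, pre-aligned landmarks."""
    x_bar = shapes.mean(axis=0)
    X = shapes - x_bar
    # Eigen-decomposition of the landmark covariance matrix.
    cov = X.T @ X / (len(shapes) - 1)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]           # sort modes by variance
    return x_bar, evecs[:, order], evals[order]

def reconstruct(x_bar, P, b):
    """Generate a shape instance from the leading model parameters b."""
    return x_bar + P[:, :len(b)] @ b

# Toy training set: a unit square whose only variation is overall size,
# so the model should recover a single dominant mode.
rng = np.random.default_rng(1)
square = np.array([0, 0, 1, 0, 1, 1, 0, 1], dtype=float)
shapes = np.stack([square * (1.0 + s) for s in rng.normal(0, 0.1, 50)])
x_bar, P, evals = fit_shape_model(shapes)
mean_recon = reconstruct(x_bar, P, np.array([0.0]))  # b = 0 gives the mean
```

Constraining each component of b keeps generated instances within the space of plausible shapes, which is what makes ASMs useful as priors during segmentation.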

Longitudinal and Temporal Analysis

Longitudinal and temporal analysis in medical image computing focuses on methods to quantify dynamic changes in anatomical and pathological structures across serial imaging acquisitions, enabling the study of disease evolution at individual and population levels. These approaches integrate spatial alignment with temporal modeling to capture subtle progressions, such as tissue atrophy or expansion, which are often imperceptible in single-time-point images. By leveraging multi-time-point data from modalities like MRI and CT, this analysis supports clinical decision-making, including early detection of progression and evaluation of therapeutic interventions.

A core technique is 4D registration, which extends rigid or deformable registration frameworks to the spatiotemporal domain for aligning sequences and tracking motion or growth-induced deformations. This method simultaneously warps all time points to a common reference, minimizing accumulation of registration errors across scans and facilitating voxel-wise change analysis. For instance, implicit template-based registration constructs an unbiased average from the sequence itself, avoiding bias toward any single time point as the template, and has demonstrated improved accuracy in longitudinal MRI compared to pairwise methods. Unbiased longitudinal atlasing further advances this by generating subject-specific 4D templates that evolve diffeomorphically over time, preserving topology while averaging trajectories across visits. These atlases employ log-Euclidean metrics on diffeomorphism groups to ensure smooth, invertible mappings and reduce bias from irregular sampling. A robust pipeline uses linear registration followed by diffeomorphic averaging to create 4D atlases, enabling consistent quantification of developmental or degenerative changes in pediatric and adult studies.
Trajectory modeling in this domain commonly applies linear mixed-effects models to estimate rates of change in metrics like regional volumes or cortical thickness, incorporating fixed effects for time and random effects for inter-subject variability. These models robustly handle repeated measures and have revealed annual hippocampal volume loss rates of 1-2% in aging cohorts, escalating to 3-5% in prodromal Alzheimer's disease. Bayesian extensions enhance precision by incorporating priors on trajectories, allowing detection of nonlinear patterns in whole-brain voxel-based morphometry data. Event-based analysis complements this by inferring discrete stages of disease progression from the sequence of biomarker abnormalities observed in patients, estimating event timings without assuming a fixed trajectory. Pioneered in Alzheimer's research, this nonparametric approach orders events like amyloid accumulation followed by atrophy, using cross-sectional and longitudinal MRI data to stage individuals with high concordance to clinical diagnoses. It has been applied to delineate progression timelines, showing cortical thinning as an early event in familial Alzheimer's, typically occurring 10-15 years before symptom onset.

In oncology, longitudinal analysis monitors tumor growth by registering serial CT or MRI scans to compute volume trajectories, aiding in the assessment of treatment efficacy; for example, diffeomorphic mappings have quantified growth rates in tumor models, revealing deceleration post-chemotherapy with sub-millimeter precision. For neurodegeneration in Alzheimer's disease, these methods track hippocampal atrophy and ventricular expansion over multi-year MRI follow-ups, correlating 1-3% annual whole-brain atrophy with cognitive decline in mild cognitive impairment cohorts. High-impact longitudinal studies, such as those using ADNI data, demonstrate that such analyses predict conversion to Alzheimer's with 80-90% accuracy when combined with baseline features.
Key challenges include managing missing data from patient attrition, which affects up to 20-30% of longitudinal neuroimaging cohorts and can bias trajectory estimates toward faster progressors if not addressed via multiple imputation or pattern-mixture models. Irregular sampling intervals, often spanning months to years due to clinical constraints, further complicate alignment and rate estimation; recent frameworks like neural ordinary differential equations interpolate trajectories to handle sparsity, improving prediction accuracy by 10-15% in irregularly sampled brain MRI sequences. Groupwise analysis of such longitudinal data extends these techniques to cohort-level inference, integrating temporal models with population atlases for unbiased change mapping. As of 2025, emerging innovations include spatiotemporal graph neural networks for modeling dynamic connectivity in longitudinal fMRI data and privacy-preserving federated analytics for multi-center temporal studies, enhancing scalability and generalizability.
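As a simplified stand-in for the linear mixed-effects trajectory models above, a two-stage estimate fits a per-subject slope (the random-slope analog) and averages slopes for the fixed effect. All names and cohort numbers below are synthetic and illustrative; real analyses would fit a proper mixed model (e.g., with `statsmodels` or `lme4`).

```python
import numpy as np

def subject_slopes(times, volumes):
    """Per-subject OLS slope of volume vs. time -- a two-stage
    approximation to the random-slope term of a mixed-effects model."""
    return np.array([np.polyfit(t, v, 1)[0] for t, v in zip(times, volumes)])

# Synthetic cohort: 20 subjects, 4 annual visits, volumes in arbitrary
# units declining ~80 units/year with subject-level variability.
rng = np.random.default_rng(7)
n_subj, n_visits = 20, 4
times = [np.arange(n_visits, dtype=float) for _ in range(n_subj)]
volumes = []
for t in times:
    base = rng.normal(4000, 200)     # random intercept (baseline volume)
    rate = rng.normal(-80, 10)       # random slope (units/year)
    volumes.append(base + rate * t + rng.normal(0, 5, n_visits))

slopes = subject_slopes(times, volumes)
group_rate = slopes.mean()           # fixed-effect (group mean) estimate
```

A full mixed-effects fit pools information across subjects and handles unbalanced visit counts, which is exactly where the two-stage shortcut breaks down for sparse or irregular follow-up.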

Machine Learning Applications

Supervised and Unsupervised Learning

In medical image computing, supervised and unsupervised learning paradigms from traditional machine learning have been widely applied to tasks such as classification, segmentation, and detection, relying on handcrafted features extracted from images to enable algorithmic decision-making. These methods predate deep learning approaches and emphasize explicit feature engineering, where domain knowledge guides the selection of descriptors like intensity histograms or spatial patterns to represent anatomical structures or pathologies. Supervised techniques use labeled data to train models that map features to predefined outputs, while unsupervised methods discover inherent patterns without labels, both proving effective in resource-constrained settings common to clinical environments.

Supervised learning in medical imaging often employs support vector machines (SVMs) for classification tasks, such as distinguishing malignant from benign lesions in mammograms or CT scans. SVMs operate by finding an optimal hyperplane that separates classes in feature space, defined by the equation w \cdot x + b = 0, where w is the weight vector normal to the hyperplane, x is the input feature vector, and b is the bias term; this maximizes the margin between support vectors of different classes to enhance generalization. Early applications demonstrated SVMs achieving accuracies up to 94% in skin lesion classification from dermoscopic images, outperforming simpler linear classifiers due to their robustness to high-dimensional data. Random forests, an ensemble of decision trees, have been particularly useful for segmentation and detection in multi-class problems in medical imaging; each tree votes on classifications, reducing overfitting through bagging and random subset selection.
Unsupervised learning facilitates exploratory analysis in medical images, with k-means clustering commonly used for tissue typing and segmentation, partitioning voxels into k groups by minimizing the within-cluster sum of squared distances: \arg\min \sum_{i=1}^k \sum_{x \in C_i} \|x - \mu_i\|^2, where C_i denotes the i-th cluster, x are data points (e.g., pixel intensities), and \mu_i is the cluster centroid. This approach has aided in preliminary tumor localization without annotations. Principal component analysis (PCA) complements this by reducing dimensionality, projecting high-dimensional image features onto principal components that capture maximum variance, thus simplifying datasets for further analysis like noise removal in images. Applications in MRI have shown PCA facilitating efficient visualization of anatomical variations.

Feature engineering is central to these paradigms, involving handcrafted descriptors tailored to medical contexts; for instance, histograms of oriented gradients (HOG) capture edge directions for structure detection in radiographs, while the gray-level co-occurrence matrix (GLCM) quantifies texture properties like contrast and homogeneity in ultrasound or histopathology images. HOG divides images into cells and computes gradient orientations to form representations robust to illumination changes. GLCM, derived from pairwise intensity statistics at specified distances and angles, extracts second-order texture features that correlate with tissue heterogeneity.

Performance evaluation in these applications typically uses k-fold cross-validation to assess generalizability, dividing datasets into k subsets for iterative training and testing, ensuring unbiased estimates in limited-sample medical cohorts. Receiver operating characteristic (ROC) curves plot true positive rates against false positive rates across thresholds, with the area under the curve (AUC) quantifying discriminative power; AUC values exceeding 0.90 have validated SVM classifiers for lesion detection in MR images. These metrics highlight the reliability of classical methods, though they have largely been superseded by deep learning for end-to-end learning in complex tasks.
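The k-means objective above can be minimized with Lloyd's algorithm; for scalar intensities this is only a few lines. The `kmeans_1d` helper is an illustrative sketch for tissue-vs-background separation, not a production clusterer.

```python
import numpy as np

def kmeans_1d(intensities, k=2, iters=50, seed=0):
    """Lloyd's algorithm on scalar intensities, minimizing the
    within-cluster sum of squared distances to the centroids mu_i."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(intensities, size=k, replace=False).astype(float)
    for _ in range(iters):
        # Assignment step: each intensity joins its nearest centroid.
        labels = np.abs(intensities[:, None] - mu[None, :]).argmin(axis=1)
        # Update step: centroids move to their cluster means.
        for j in range(k):
            if np.any(labels == j):
                mu[j] = intensities[labels == j].mean()
    return mu, labels

# Two well-separated synthetic intensity populations,
# e.g., background (~50) vs. tissue (~150) in arbitrary units.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(50, 5, 200), rng.normal(150, 5, 200)])
mu, labels = kmeans_1d(x, k=2)
```

For real MRI tissue typing, k is usually 3 or more (gray matter, white matter, CSF) and spatial regularization or fuzzy memberships replace the hard assignments shown here.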

Deep Learning Architectures

Deep learning architectures have transformed medical image computing by enabling automatic feature extraction and end-to-end learning from raw pixel data, surpassing traditional hand-crafted methods in tasks like segmentation and synthesis since the mid-2010s. Convolutional neural networks (CNNs) form the backbone of many applications, particularly for segmentation, where they capture hierarchical spatial features through convolutional layers followed by pooling and upsampling operations. A seminal architecture in this domain is the U-Net, introduced in 2015, which employs an encoder-decoder structure with skip connections to preserve fine-grained details during segmentation of biomedical images. The encoder progressively downsamples the input to learn contextual features, while the decoder upsamples to recover spatial resolution, and skip connections concatenate encoder features to the decoder, mitigating information loss and enabling precise boundary delineation in low-data regimes typical of medical imaging. This design has become foundational, achieving state-of-the-art performance on datasets like the ISBI cell tracking challenge, where it outperformed sliding-window CNNs by leveraging data augmentation for robustness.

Generative adversarial networks (GANs) extend deep learning to image synthesis and translation, crucial for addressing data scarcity and modality mismatches in medical computing. CycleGAN, proposed in 2017, facilitates unpaired image-to-image translation by enforcing cycle consistency, where mappings between domains A and B are learned such that translating an image from A to B and back to A reconstructs the original. In medical applications, this enables synthesis of images across modalities, such as converting MRI to CT scans, improving model generalization without paired training data and demonstrating superior fidelity in preserving anatomical structures compared to pix2pix.
Vision transformers (ViTs) have emerged as a powerful alternative to CNNs, leveraging self-attention mechanisms to model long-range dependencies in images treated as sequences of patches. The original ViT architecture, from 2020, divides images into fixed-size patches, embeds them linearly, and processes them through transformer encoders with positional encodings to capture global context without inductive biases like locality. In medical imaging, adaptations like Swin-UNETR integrate hierarchical Swin transformers into U-Net-like frameworks for segmentation, achieving higher Dice scores on MRI datasets by modeling multi-scale features and outperforming pure CNNs in capturing volumetric relationships. For whole-slide images, ViT-based models excel in attention-based analysis of gigapixel pathology slides, enabling tasks like tumor classification with improved interpretability through attention maps highlighting relevant tissue regions. As of 2025, foundation models such as Hibou, pretrained on millions of pathology slides, further advance ViT applications by providing robust representations for downstream tasks like cancer subtyping.

Training these architectures in medical contexts requires specialized techniques to handle limited annotated data and inherent challenges like class imbalance. Data augmentation via elastic deformations simulates anatomical variations by applying random non-rigid transformations, such as displacement grids, to expand the effective dataset size and enhance model invariance, as implemented in frameworks like nnU-Net. Transfer learning from large natural image datasets like ImageNet initializes encoders with pre-trained weights, accelerating convergence and boosting performance on medical tasks by 5-10% in segmentation accuracy, though fine-tuning is essential to adapt to domain-specific features. To address class imbalance, where foreground structures like tumors occupy few voxels, focal loss modulates the cross-entropy loss by down-weighting easy examples, focusing gradients on hard misclassified pixels and improving metrics like mean IoU in dense detection scenarios.
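The focal-loss modulation just described has the standard form FL(p_t) = -α_t (1 - p_t)^γ log(p_t). A minimal numpy sketch (framework implementations in PyTorch or TensorFlow are equivalent but operate on tensors with autograd):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: FL = -alpha_t * (1 - p_t)^gamma * log(p_t).

    The (1 - p_t)^gamma factor down-weights easy examples so gradients
    concentrate on hard, misclassified pixels -- useful when foreground
    voxels (e.g., tumor) are rare."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)           # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# A confident correct prediction contributes far less loss than a
# confident error on the same positive class.
easy = focal_loss(np.array([0.95]), np.array([1]))[0]
hard = focal_loss(np.array([0.10]), np.array([1]))[0]
```

With γ = 0 the expression reduces to weighted cross-entropy, so γ directly controls how aggressively easy pixels are suppressed.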
Post-2020 advances emphasize privacy-preserving and generative capabilities. Federated learning enables collaborative training across institutions without sharing raw data, aggregating model updates to build robust segmenters while complying with regulations like HIPAA, as surveyed in recent works showing comparable accuracy to centralized training on multi-site MRI datasets. Diffusion models, particularly denoising diffusion probabilistic models (DDPMs), generate high-fidelity medical images by iteratively denoising samples through a learned reverse diffusion process, outperforming GANs in sample quality for synthesis tasks like brain MRI generation, with applications in data augmentation yielding up to 15% gains in downstream segmentation.

Modality-Specific Computing

Magnetic Resonance Imaging

Magnetic resonance imaging (MRI) plays a central role in medical image computing due to its non-ionizing nature and ability to provide high-contrast images of soft tissues, enabling detailed analysis of brain anatomy and function. Computational methods for MRI focus on preprocessing, segmentation, and quantitative analysis tailored to variants like structural, diffusion, and functional MRI, addressing challenges such as noise, artifacts, and variability across scans. These techniques leverage algorithms for intensity normalization, registration, and modeling to extract clinically relevant features from raw data.

In structural MRI, T1-weighted and T2-weighted images are processed through pipelines that include skull stripping, intensity inhomogeneity correction, and tissue segmentation to isolate brain structures. T1-weighted scans, which highlight gray-white matter contrasts, undergo automated segmentation to delineate cortical and subcortical regions, often followed by spatial normalization to standard spaces like MNI for group comparisons. T2-weighted images, sensitive to fluid and pathology, require similar preprocessing but emphasize lesion detection through multi-contrast fusion. A prominent example is the FreeSurfer pipeline, which reconstructs cortical surfaces from T1-weighted data via topological correction, segmentation, and pial surface estimation, achieving sub-millimeter accuracy in thickness measurements.

Diffusion MRI enables mapping of white matter tracts by modeling water diffusion patterns. In diffusion tensor imaging (DTI), the diffusion tensor D is fitted to signal data using eigenvalue decomposition D = U \Lambda U^T, where U contains eigenvectors and \Lambda the eigenvalues, quantifying metrics like fractional anisotropy (FA) for fiber integrity. Tractography reconstructs pathways through deterministic methods, which follow principal diffusion directions for streamlined tracking, or probabilistic approaches that sample orientation uncertainty to model crossing fibers, improving robustness in complex regions.
For higher fidelity, high-angular resolution diffusion imaging (HARDI) acquires data at multiple gradient orientations to resolve intra-voxel fiber orientations beyond tensor limitations, supporting advanced models like constrained spherical deconvolution. Functional MRI (fMRI) analyzes blood-oxygen-level-dependent signals to infer neural activity, with preprocessing critical for artifact removal. Motion correction aligns volumes using rigid-body transformations to mitigate head movement, while slice timing correction interpolates signals to a common acquisition time, reducing temporal misalignment in event-related designs. Activation mapping employs the general linear model (GLM), formulated as Y = X\beta + \epsilon, where Y is the observed time series, X the design matrix formed by convolving stimuli with the hemodynamic response function, \beta the parameter estimates, and \epsilon the error term, enabling statistical inference on task-evoked responses. Recent deep learning models have further improved preprocessing and activation detection accuracy.

MRI-specific challenges include field inhomogeneity, arising from magnetic field variations that cause intensity biases, and limited spatial resolution. Inhomogeneity correction uses algorithms like N4, which iteratively estimates a smooth bias field via B-spline fitting on log-transformed intensities, restoring uniform signal distribution essential for accurate segmentation. Super-resolution techniques enhance resolution by reconstructing high-resolution images from low-resolution inputs, often via multi-frame registration and deconvolution or deep learning models that learn mapping functions, improving diagnostic detail in undersampled scans.
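The fractional anisotropy mentioned for DTI follows directly from the tensor eigenvalues λ1, λ2, λ3. A small sketch with illustrative eigenvalue sets (units of 10^-3 mm^2/s are typical, but the ratio is what matters):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three diffusion-tensor eigenvalues:

    FA = sqrt(1/2) * sqrt((l1-l2)^2 + (l2-l3)^2 + (l1-l3)^2)
                   / sqrt(l1^2 + l2^2 + l3^2)

    FA = 0 for isotropic diffusion; FA -> 1 for a single dominant
    direction (coherent fiber bundle)."""
    l1, l2, l3 = evals
    num = np.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l1 - l3) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return np.sqrt(0.5) * num / den

# Isotropic (CSF-like) diffusion vs. a strongly anisotropic tensor.
fa_iso = fractional_anisotropy((1.0, 1.0, 1.0))
fa_wm = fractional_anisotropy((1.7, 0.2, 0.2))
```

In a DTI pipeline the eigenvalues come from `np.linalg.eigh` applied to the fitted 3×3 tensor at each voxel, and deterministic tractography then steps along the eigenvector of the largest eigenvalue.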

Computed Tomography and Other Modalities

Computed tomography (CT) imaging in medical image computing emphasizes techniques to mitigate radiation dose while preserving diagnostic quality. Iterative reconstruction algorithms represent a cornerstone for dose reduction, iteratively refining image estimates by incorporating statistical models of the imaging process and noise characteristics, enabling up to 50-80% reductions in dose without significant loss in image quality or diagnostic accuracy. These methods outperform traditional filtered back-projection by suppressing noise more effectively in low-dose scans, as demonstrated in abdominal applications where adaptive statistical iterative reconstruction maintained lesion detectability at reduced tube currents. Calcium scoring algorithms, vital for cardiovascular risk stratification, quantify coronary calcification by thresholding Hounsfield units (typically >130 HU) in non-contrast scans and aggregating Agatston scores based on area and density. Automated variants enhance reproducibility, achieving high agreement with manual scoring (intraclass correlation >0.95) even on non-gated chest CT scans, facilitating opportunistic screening in routine imaging. Recent advancements as of 2025 include DL-based denoising for further dose optimization. Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) computing focuses on correcting for photon attenuation and modeling tracer kinetics to enable quantitative uptake analysis. Attenuation correction in PET/SPECT compensates for tissue absorption using transmission scans or hybrid modalities like CT, transforming linear attenuation coefficients into correction factors via segmentation of attenuation maps, which improves quantification accuracy by 20-30% in myocardial perfusion studies. For SPECT, morphology-guided methods integrate anatomical priors from co-registered CT to refine attenuation maps, reducing artifacts in cardiac imaging.
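The Agatston aggregation described above (area times a density weight per lesion) can be sketched as follows. This is a simplified per-slice version: clinical implementations work on ECG-gated 3 mm slices with connected-component lesion detection, and the toy arrays here are illustrative.

```python
import numpy as np

def density_weight(max_hu):
    """Standard Agatston density weight from a lesion's peak attenuation."""
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    if max_hu >= 130: return 1
    return 0

def agatston_slice_score(hu_slice, pixel_area_mm2, labels):
    """Simplified per-slice Agatston score: for each labeled lesion, the
    area (mm^2) of voxels >= 130 HU times the lesion's density weight."""
    score = 0.0
    for lab in np.unique(labels):
        if lab == 0:                       # 0 = background
            continue
        mask = (labels == lab) & (hu_slice >= 130)
        if mask.any():
            area = mask.sum() * pixel_area_mm2
            score += area * density_weight(hu_slice[mask].max())
    return score

# Two toy lesions on a 4x4 slice with 1 mm^2 pixels
hu = np.zeros((4, 4)); labels = np.zeros((4, 4), dtype=int)
hu[0, :2] = 250; labels[0, :2] = 1      # lesion 1: 2 voxels at 250 HU
hu[2, 2] = 450;  labels[2, 2] = 2       # lesion 2: 1 voxel at 450 HU
print(agatston_slice_score(hu, 1.0, labels))   # 2*1*2 + 1*1*4 = 8.0
```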
Kinetic modeling employs compartmental models to derive physiological parameters from dynamic PET data; the Patlak graphical method, a linear two-compartment irreversible model, plots normalized tissue uptake against the normalized integral of plasma activity to estimate the influx rate K_i, particularly for glucose analogs like FDG in oncology, where it simplifies irreversible trapping assumptions and yields robust uptake metrics without full nonlinear fitting. Ultrasound imaging processing addresses inherent speckle noise and demands real-time computation for clinical utility, especially in cardiac applications. Speckle reduction techniques, such as anisotropic diffusion filtering or wavelet-based thresholding, suppress multiplicative noise while preserving edges, improving signal-to-noise ratios by 2-5 dB in B-mode images without blurring anatomical boundaries. For echocardiography, real-time segmentation algorithms delineate left ventricular boundaries using deformable models or convolutional neural networks, enabling automated ejection fraction calculation with Dice similarity coefficients exceeding 0.90, supporting intra-procedural guidance in 3D transthoracic scans. Recent deep learning models have achieved Dice scores exceeding 0.92 as of 2025. Emerging modalities like photoacoustic imaging and optical coherence tomography (OCT) leverage hybrid physics for high-resolution functional and structural analysis. Photoacoustic processing involves reconstructing images from acoustic signals generated by laser-induced thermoelastic expansion, with post-processing techniques like delay-and-sum beamforming or minimum variance methods enhancing lateral resolution to sub-millimeter scales and suppressing clutter in vascular imaging. OCT layer segmentation algorithms automatically delineate retinal boundaries using graph-based shortest-path searches or deep convolutional networks, quantifying thicknesses of intra-retinal layers with mean absolute errors below 2 μm, crucial for diagnosing and monitoring retinal diseases.
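The Patlak method above is attractive precisely because it turns kinetic estimation into a straight-line fit: plotting C_t/C_p against the integral of C_p divided by C_p gives K_i as the slope. A minimal sketch, with a toy plasma curve rather than measured FDG data:

```python
import numpy as np

# Simulated dynamic PET: plasma input Cp(t) and tissue curve Ct(t)
t = np.linspace(0.1, 60, 120)                 # minutes
Cp = 10 * np.exp(-0.1 * t) + 1.0              # toy plasma curve
Ki_true, V0_true = 0.05, 0.3

# Cumulative trapezoidal integral of Cp
int_Cp = np.concatenate(
    [[0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))])
Ct = Ki_true * int_Cp + V0_true * Cp          # irreversible-trapping model

# Patlak axes: x = integral(Cp)/Cp, y = Ct/Cp; slope estimates K_i
x, y = int_Cp / Cp, Ct / Cp
Ki_est, V0_est = np.polyfit(x[40:], y[40:], 1)  # fit the late linear portion
print(round(Ki_est, 3), round(V0_est, 2))       # 0.05 0.3
```

In practice only the late frames, after the plot becomes linear, enter the fit, which is why the example restricts the regression to the tail of the curve.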
Multimodal fusion extends CT and ultrasound capabilities for interventional procedures, such as biopsy guidance, by rigidly or non-rigidly registering volumetric CT data to real-time ultrasound via fiducial landmarks or intensity-based metrics, improving target visualization and needle accuracy to within 2-3 mm. In liver biopsies, CT-ultrasound fusion can improve diagnostic yield for focal lesions, combining CT's anatomical detail with ultrasound's portability, while electromagnetic tracking ensures robust co-registration during respiration.
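Landmark-based rigid registration of the kind used for CT-ultrasound fusion reduces to a least-squares rotation-plus-translation problem, classically solved with the Kabsch algorithm. The fiducial coordinates below are synthetic; real systems use tracked physical markers.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch): R, t minimizing
    ||R @ src_i + t - dst_i||^2 over paired fiducial landmarks."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 30-degree rotation about z plus a translation
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
pts = np.random.default_rng(1).random((5, 3))          # "CT" fiducials
moved = pts @ R_true.T + np.array([1.0, -2.0, 0.5])    # "ultrasound" fiducials
R, t = rigid_register(pts, moved)
print(np.allclose(R, R_true))                           # True
```

With noisy fiducials the same closed-form solution minimizes fiducial registration error, and the residuals give an estimate of expected target accuracy.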

Physiological and Functional Modeling

Biomechanical Simulations

Biomechanical simulations in medical image computing involve deriving patient-specific models from imaging data to predict tissue deformation and stress under mechanical loads. These simulations typically employ finite element analysis (FEA), a numerical method that discretizes complex geometries into meshes for solving the partial differential equations governing material behavior. Segmentation of medical images, such as MRI or CT scans, provides the foundational anatomical structures, from which tetrahedral or hexahedral meshes are generated to represent tissues like bone, muscle, or soft organs. This process enables the simulation of biomechanical responses, such as tissue deformation in response to surgical interventions or external forces, by incorporating material properties derived directly from image intensities or advanced quantification techniques. At the core of FEA in biomechanics are the equations of equilibrium and constitutive relations for materials. The balance of linear momentum in the absence of inertial effects is expressed as \nabla \cdot \sigma + b = 0, where \sigma is the Cauchy stress tensor and b represents body forces. For linear isotropic materials, Hooke's law relates stress to strain via \sigma = C \epsilon, with C as the stiffness tensor and \epsilon the infinitesimal strain tensor derived from displacement gradients. These formulations allow FEA models to compute deformations by solving the weak form of the equations over the meshed domain, often using dedicated FEA software or custom implementations integrated with image processing pipelines. Validation of such models frequently involves comparing simulated displacements or strains against measurements obtained from techniques like tagged MRI or experimental strain measurement, achieving reasonable agreement for applications such as the tibiofemoral joint. Applications of image-derived FEA span pre-surgical planning and orthopedic interventions.
In neurosurgery, patient-specific brain models predict intraoperative brain shift (deformations due to gravity, CSF drainage, or tumor resection) by simulating tissue interactions with the skull and dura, aiding neuronavigation accuracy. For orthopedics, FEA assesses fracture fixation stability or implant performance; for instance, CT-derived models of the femur evaluate stress distributions under physiological loads, informing prosthetic design and reducing revision rates. Personalization enhances these simulations through imaging-based estimation of heterogeneous material properties, such as vessel wall stiffness from intravascular ultrasound (IVUS), where iterative FEA updates calibrate Young's modulus (typically 0.5-2 MPa for arterial tissue) against cine IVUS deformation data, improving plaque rupture risk predictions.
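The constitutive step of the pipeline above, computing stress from strain via Hooke's law, can be sketched directly. The material constants below are placeholders in the soft-tissue range, not validated patient-specific properties.

```python
import numpy as np

def isotropic_stress(strain, E=3.0e3, nu=0.45):
    """Cauchy stress from infinitesimal strain for a linear isotropic
    material, via the Lame form of Hooke's law:
        sigma = lambda * tr(eps) * I + 2 * mu * eps
    E (here in kPa) and nu are illustrative soft-tissue values."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
    mu = E / (2 * (1 + nu))                    # shear modulus
    return lam * np.trace(strain) * np.eye(3) + 2 * mu * strain

# Uniaxial 1% strain along z, as might arise at one integration point
eps = np.diag([0.0, 0.0, 0.01])
sigma = isotropic_stress(eps)
```

In a full FEA solve this relation is applied at every element integration point while assembling the global stiffness matrix from the weak form of the equilibrium equation.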

Functional and Dynamic Modeling

Functional and dynamic modeling in medical image computing involves developing computational frameworks to simulate and quantify time-varying physiological processes, such as blood flow and tissue perfusion, derived from dynamic imaging data. These models integrate image-derived geometries and temporal sequences to predict functional behaviors, enabling non-invasive assessment of organ performance and disease states. By solving partial differential equations or using pharmacokinetic approaches, they provide quantitative parameters like flow rates and permeability that inform clinical decisions in oncology, cardiology, and neurology. Perfusion modeling focuses on estimating microvascular blood flow and capillary permeability using dynamic contrast-enhanced (DCE) imaging techniques, particularly in magnetic resonance imaging (MRI). Compartmental models, such as the Tofts model, describe the kinetics of contrast agents by dividing tissue into vascular and extravascular extracellular spaces. In the extended Tofts model, the rate of contrast transfer across permeable capillaries is governed by the volume transfer constant K^{trans}, which quantifies the endothelial permeability-surface area product, while the extravascular extracellular volume fraction v_e represents the distribution volume outside blood vessels, and v_p is the plasma volume fraction. The model equation for the tissue concentration C_t(t) is given by: C_t(t) = K^{trans} \int_0^t C_p(\tau) e^{-k_{ep} (t - \tau)} d\tau + v_p C_p(t) where C_p(t) is the plasma concentration, and k_{ep} = K^{trans}/v_e is the rate constant for back-flux from tissue to plasma. This framework has become standard for quantifying tumor vascularity and treatment response in DCE-MRI. To derive perfusion metrics like cerebral blood flow (CBF), arterial input function (AIF) deconvolution is essential, isolating the tissue impulse response from the measured signal. The AIF represents the contrast agent concentration in feeding arteries over time, obtained by placing regions of interest on major vessels in dynamic images.
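The extended Tofts equation is a convolution of the plasma curve with an exponential kernel, which can be evaluated numerically. The parameter values and the gamma-variate-shaped AIF below are illustrative, not fitted to real DCE-MRI data.

```python
import numpy as np

def extended_tofts(t, Cp, Ktrans=0.2, ve=0.3, vp=0.05):
    """Tissue concentration from the extended Tofts model:
        Ct(t) = Ktrans * int_0^t Cp(tau) exp(-kep (t - tau)) dtau + vp Cp(t)
    with kep = Ktrans / ve. Convolution evaluated by a simple
    rectangle rule on a uniform time grid (minutes)."""
    kep = Ktrans / ve
    dt = t[1] - t[0]
    Ct = np.empty_like(t)
    for i, ti in enumerate(t):
        kern = np.exp(-kep * (ti - t[: i + 1]))
        Ct[i] = Ktrans * np.sum(Cp[: i + 1] * kern) * dt
    return Ct + vp * Cp

t = np.linspace(0, 5, 300)              # minutes
Cp = 5 * t * np.exp(-2 * t)             # toy AIF with a gamma-variate shape
Ct = extended_tofts(t, Cp)              # tissue curve peaks after the AIF
```

Fitting K^{trans}, v_e, and v_p by nonlinear least squares against a measured C_t(t) is the usual inverse step in DCE-MRI analysis.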
Deconvolution techniques, such as singular value decomposition (SVD), solve the convolution integral C_t(t) = C_a(t) \otimes R(t), where C_a(t) is the arterial concentration and R(t) is the residue function, yielding CBF as the initial height of R(t). This model-independent approach, validated against positron emission tomography, corrects for delay and dispersion effects, improving accuracy in low-signal regions like ischemic tissue. Block-circulant SVD variants further stabilize the ill-posed inverse problem by handling oscillatory artifacts. Cardiac modeling leverages 4D flow MRI to capture three-dimensional velocity fields throughout the cardiac cycle, providing comprehensive hemodynamic data for ventricular and valvular function. This phase-contrast technique encodes velocity in all spatial directions over time, enabling visualization of helical flow patterns and quantification of parameters like peak velocity and wall shear stress. Derived velocity fields serve as boundary conditions for computational fluid dynamics (CFD) simulations, solving the incompressible Navier-Stokes equations to model intra-cardiac blood flow: \rho \left( \frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v} \right) = -\nabla p + \mu \nabla^2 \mathbf{v} with \mathbf{v} as velocity, p as pressure, \rho as density, and \mu as viscosity. These simulations, patient-specific and image-informed, predict pressure gradients and energy losses in congenital defects, outperforming static assessments by accounting for unsteady flow dynamics. Respiratory dynamics are modeled using 4D computed tomography (4D-CT), which sorts projection data into respiratory phases to reconstruct motion-correlated image volumes. This enables deformation field estimation via diffeomorphic registration, capturing lung and tumor trajectories for radiotherapy planning. The resulting spatiotemporal models parameterize sliding organ interfaces and hysteresis, reducing artifacts in dose delivery by predicting excursion amplitudes up to several centimeters in the thorax.
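The SVD deconvolution step can be sketched by discretizing the convolution as a lower-triangular Toeplitz system and inverting it with a truncated pseudo-inverse. The arterial and residue curves below are toy, noiseless signals; with real, noisy data a much larger truncation threshold is used, which is the regularization the text describes.

```python
import numpy as np

def svd_deconvolve(Ca, Ct, dt, tol=0.1):
    """Estimate the scaled residue function CBF*R(t) from
    Ct = dt * conv(Ca, CBF*R) by truncated-SVD inversion, discarding
    singular values below tol * s_max to stabilize the inverse."""
    n = len(Ca)
    A = dt * np.array([[Ca[i - j] if i >= j else 0.0
                        for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.where(s > tol * s[0], 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ Ct))

dt = 1.0
t = np.arange(0, 30, dt)                  # seconds
Ca = np.exp(-t / 3)                       # toy arterial input function
R = np.exp(-t / 8)                        # true CBF*R(t); R(0) encodes CBF
Ct = dt * np.convolve(Ca, R)[: len(t)]    # simulated tissue curve
R_est = svd_deconvolve(Ca, Ct, dt, tol=1e-12)  # noiseless: near-exact
# CBF is read off as the initial height of the recovered residue function
```

Block-circulant variants replace the Toeplitz matrix with a padded circulant one so the estimate becomes insensitive to tracer arrival delay.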
Such models integrate external surrogates like spirometry for robust phase binning, ensuring sub-millimeter accuracy in motion compensation. Integration of functional models with electrophysiological simulations enhances predictive power in cardiac applications, coupling perfusion-derived flow fields with action potential propagation models. Multiphysics frameworks embed Darcy-based myocardial perfusion within cardiac electrophysiology equations, simulating ischemia-induced arrhythmias by linking oxygen delivery to ionic currents. This approach, validated in perfused heart preparations, reveals how heterogeneous perfusion alters conduction velocity, guiding personalized therapies for arrhythmias. Recent advancements as of 2025 incorporate physics-informed neural networks to refine these models, enhancing personalization and accuracy in digital twin frameworks for physiological simulations.

Software and Tools

Open-Source Frameworks

Open-source frameworks form the backbone of medical image computing by providing accessible, modular tools for researchers and developers to build and customize pipelines for image analysis, segmentation, and visualization. These frameworks are typically distributed under permissive licenses, enabling widespread adoption in academic and clinical research settings without licensing costs. They support a range of tasks from basic filtering to advanced segmentation and registration, often integrating with programming languages like C++, Python, and Java to facilitate prototyping and reproducibility. The Insight Toolkit (ITK) is a prominent open-source library designed specifically for multidimensional scientific image processing, with core capabilities in segmentation and registration. Developed as a cross-platform system, ITK offers an extensive suite of algorithms for tasks such as deformable registration and active contour-based segmentation, making it a foundational tool for applications like tumor delineation in MRI scans. Its modular architecture allows integration with other libraries, and it is maintained by the Insight Software Consortium, ensuring ongoing updates and community contributions via GitHub. Complementing ITK, the Visualization Toolkit (VTK) focuses on 3D graphics, modeling, and scientific visualization, widely used in medical imaging for rendering volumetric data from modalities like MRI and CT. VTK provides state-of-the-art tools for volume rendering, surface extraction, and interactive exploration of anatomical structures, supporting pipelines that combine image processing with high-fidelity display. It is implemented in C++ with bindings for Python and Java, and has been instrumental in applications such as surgical planning visualizations. In neuroimaging, the FMRIB Software Library (FSL) serves as a comprehensive suite for analyzing functional, structural, and diffusion MRI data, including tools for motion correction, spatial normalization, and statistical analysis in fMRI studies.
FSL's command-line and graphical interfaces enable workflows for tractography and connectivity analysis, with particular strengths in handling large-scale population studies. It is developed and supported by the University of Oxford's FMRIB Centre, with documentation and binaries available for multiple operating systems. Similarly, Statistical Parametric Mapping (SPM) is an integrated software package for the analysis of brain imaging data sequences, emphasizing statistical modeling for fMRI, PET, and EEG. SPM facilitates hypothesis testing through general linear models and voxel-based morphometry, allowing researchers to detect activation patterns across cohorts or time series. Hosted by University College London's Wellcome Centre for Human Neuroimaging, it runs within MATLAB and includes toolboxes for advanced multivariate analyses. The Analysis of Functional NeuroImages (AFNI) suite provides a robust environment for processing and visualizing fMRI data, featuring tools for preprocessing, time-series analysis, and group-level statistics. AFNI supports interactive analysis and visualization of activation maps overlaid on anatomical scans, with extensions for surface-based and structural analyses. Developed at the National Institutes of Health, it includes C, Python, and R programs, along with shell scripts for automated pipelines. For Python-based workflows, scikit-image offers a versatile collection of algorithms for general image processing, adaptable to medical tasks such as edge detection, thresholding, and morphological operations on biomedical datasets. Built on NumPy and SciPy, it provides efficient, research-oriented utilities for filtering noise in microscopy images or segmenting regions in histopathology slides. Its open-source nature and integration with the broader SciPy ecosystem make it ideal for scripting custom medical image pipelines. MONAI (Medical Open Network for AI) is an open-source PyTorch-based framework optimized for deep learning applications in medical imaging, supporting tasks like segmentation, classification, and domain adaptation across modalities such as CT and MRI.
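As an example of the thresholding utilities such toolkits provide, scikit-image exposes Otsu's method as skimage.filters.threshold_otsu; the plain-NumPy sketch below implements the same idea, choosing the threshold that maximizes between-class variance of the intensity histogram. The synthetic bimodal "image" is illustrative.

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Otsu's method: pick the histogram threshold maximizing
    between-class variance (what skimage.filters.threshold_otsu
    provides out of the box)."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                          # class-0 pixel counts
    w1 = np.cumsum(hist[::-1])[::-1]              # class-1 pixel counts
    m0 = np.cumsum(hist * centers) / np.maximum(w0, 1)
    m1 = np.cumsum((hist * centers)[::-1])[::-1] / np.maximum(w1, 1)
    var_between = w0[:-1] * w1[1:] * (m0[:-1] - m1[1:]) ** 2
    return centers[np.argmax(var_between)]

# Bimodal toy "image": dark background plus a bright square region
rng = np.random.default_rng(0)
img = rng.normal(50, 5, (64, 64))
img[20:40, 20:40] = rng.normal(150, 5, (20, 20))
th = otsu_threshold(img)
mask = img > th                                   # foreground segmentation
```

For real pipelines the library call is preferable: it handles bin placement and degenerate histograms, and composes with scikit-image's morphological operations for cleanup.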
It provides pre-built components, model zoos, and tools for reproducible AI workflows, with active community development and integrations for clinical deployment as of 2025. 3D Slicer stands out as an integrated open-source platform that combines visualization, processing, segmentation, and registration in a user-friendly graphical interface, supporting extensible workflows for clinical research. It handles multi-modal data like DICOM files and enables interactive 3D modeling for applications in radiotherapy planning and surgical simulation. Backed by a global community, 3D Slicer incorporates extensions from ITK and VTK, fostering collaborative development through its module ecosystem. The open-source ecosystem thrives through community-driven platforms like GitHub, where repositories for these frameworks host code, issues, and contributions from thousands of users worldwide. Benchmarking and validation are advanced via challenges organized by the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference, which promote standardized evaluations of tools on diverse datasets, enhancing reliability in medical applications.

Commercial and Integrated Platforms

Commercial and integrated platforms in medical image computing encompass proprietary software suites and hardware-integrated systems designed for clinical deployment, offering robust tools for image analysis, visualization, and workflow management in healthcare settings. These platforms often combine advanced imaging hardware from vendors like GE HealthCare and Philips with specialized software, enabling seamless processing of multi-modality data such as CT and MRI scans. Unlike open-source alternatives, they prioritize regulatory validation and integration with hospital systems, facilitating adoption in routine diagnostics and treatment planning. Major vendors provide modality-integrated workstations that support comprehensive image computing tasks. GE HealthCare's Advantage Workstation (AW) serves as a multi-modality platform for reviewing, processing, and analyzing DICOM images from CT, MRI, and other sources, incorporating AI-supported features to enhance diagnostic confidence and streamline workflows across departments. Similarly, Philips' IntelliSpace Portal and Advanced Visualization Workspace offer integrated solutions for 3D rendering, segmentation, and AI-driven insights, designed to optimize radiology reporting and support cross-departmental collaboration. For surgical planning, Materialise's Mimics software processes CT and MRI data to generate 3D models and virtual simulations, aiding in preoperative assessment and guide fabrication for complex procedures like cranio-maxillofacial surgery. These platforms incorporate FDA-cleared AI modules to augment clinical decision-making. Aidoc's radiology AI solutions, cleared by the FDA for applications such as triage of acute conditions in CT scans (e.g., intracranial hemorrhage and pulmonary embolism), integrate directly into existing workflows to prioritize urgent cases and reduce turnaround times.
Cloud-based options like Google Cloud's Medical Imaging Suite enable scalable storage, analysis, and AI model deployment for medical images, supporting interoperability with standards like DICOM and FHIR while ensuring data security for multi-site operations. Key advantages of these commercial platforms include intuitive user interfaces that minimize training requirements and accelerate task completion, as seen in the AW's template-based processing tools. They also ensure regulatory compliance through FDA clearances, which validate safety and efficacy for clinical use, thereby reducing liability risks for healthcare providers. Seamless integration with Picture Archiving and Communication Systems (PACS) is a core strength, allowing vendor-agnostic access to archived images and reports, which enhances efficiency in enterprise environments. In radiotherapy planning, Varian's Eclipse system exemplifies integrated platform utility, functioning as an FDA-cleared treatment planning tool that simulates radiation delivery using CT-derived dose calculations and optimization algorithms to tailor plans for individual patients. As of 2017, Eclipse was deployed in over 3,400 cancer centers worldwide, facilitating precise and adaptive planning and improving outcomes in intensity-modulated radiotherapy by integrating imaging data with optimization tools. Such case studies highlight how these platforms bridge image computing with therapeutic applications, supporting evidence-based care in high-stakes clinical scenarios.

Challenges and Future Directions

Computational and Ethical Challenges

Medical image computing faces significant computational challenges due to the massive scale of data generated in healthcare, particularly from imaging modalities. Biomedical archives have reached exabyte-scale volumes, with estimates indicating around 150 exabytes of healthcare data as early as 2014, driven by high-throughput imaging and continuous data streams from devices like patient monitors. This volume necessitates scalable architectures, such as distributed cloud systems and specialized databases, to manage storage, retrieval, and analysis without prohibitive costs or delays. In radiogenomics, integrating diverse data types, such as imaging with genetic sequences, exacerbates scalability issues, requiring advanced techniques like distributed computing for processing terabytes from single studies. Real-time processing on edge devices presents additional hurdles, as medical imaging demands low-latency analysis for applications like point-of-care ultrasound or MRI diagnostics. Edge computing enables near-instantaneous handling of high-resolution images by processing data closer to the source, but it struggles with interoperability across proprietary systems, which disrupts seamless data exchange. Device constraints, including thermal management and cybersecurity, further complicate deployment, as edge systems must balance computational power with HIPAA-compliant security while minimizing latency for clinical decision-making. These challenges limit the adoption of edge-based AI for on-the-spot image interpretation, potentially delaying interventions in time-sensitive scenarios. Data privacy remains a core issue, with regulations like HIPAA and GDPR imposing strict requirements on protected health information (PHI) in imaging AI. Under HIPAA, de-identification must remove 18 specific identifiers, but medical images, such as facial features in CT or MRI scans, are not explicitly listed, leading to vulnerabilities where AI can re-identify patients via advanced recognition techniques.
GDPR mandates explicit consent for sensitive data use and complete anonymization for research without permission, yet current methods like skull-stripping may reduce dataset utility for AI training, hindering model generalizability. Emerging regulations like the EU AI Act classify many medical imaging AI systems as high-risk, requiring conformity assessments to ensure transparency and bias mitigation. Compliance failures risk legal penalties and erode patient trust, particularly as AI proliferates in imaging workflows. Bias in datasets amplifies inequities, often stemming from demographic underrepresentation that leads to unfair AI outcomes. In chest X-ray analysis, models trained on imbalanced data exhibit underdiagnosis bias, with higher false-positive rates for "no finding" in underrepresented groups such as Black, Hispanic, female, or Medicaid-insured patients across large datasets like MIMIC-CXR and CheXpert. Intersectional effects compound this, as seen in elevated underdiagnosis for Black females, potentially exacerbating health disparities if unaddressed. Such biases arise from historical dataset compositions that overrepresent certain demographics, underscoring the need for diverse, representative training data to ensure equitable AI performance in medical imaging. Ethical concerns intensify with the opacity of black-box AI models, where explainability is essential for clinical trust and regulatory adherence. Black-box deep learning systems in image analysis obscure decision rationales, posing medicolegal risks and impeding adoption, as clinicians cannot verify outputs against medical knowledge. Techniques like LIME, SHAP, and Grad-CAM aim to highlight influential image regions, but limitations in robustness and evaluation metrics persist, requiring human-centered designs to align explanations with clinical needs. Regulations such as GDPR's "right to explanation" further mandate transparency to safeguard patients in high-stakes diagnostics like cancer detection.
Liability in AI-assisted clinical decisions adds ethical complexity, particularly for diagnostic imaging where erroneous outputs can influence treatment. Physicians remain accountable for verifying AI recommendations, facing malpractice claims if deviations from the standard of care occur, even with good-faith reliance on AI tools for radiograph interpretation. Health systems may incur liability for poor AI vetting or training, while developers risk products liability for design defects, though legal precedents for clinical software remain underdeveloped. This framework emphasizes the need for clear guidelines to apportion responsibility without stifling innovation in medical image computing. Interoperability challenges extend beyond the DICOM standard, as AI models require standardized formats for scalable integration into workflows. Diverse AI output formats, including proprietary files and non-interactive DICOM secondary captures, create maintenance burdens and limit machine-readable data exchange, complicating enterprise-wide deployment. Inconsistent standards lead to network overload from multiple models, with frameworks like Integrating the Healthcare Enterprise (IHE) AI Workflow profiles needed to ensure interoperability and automated result handling. Without advancements in data models encompassing imaging-derived features, AI adoption in radiology remains fragmented, hindering collaborative and efficient clinical use. In medical image computing, self-supervised learning has emerged as a pivotal approach to leverage vast amounts of unlabeled data, addressing the scarcity of annotated medical images. By pretraining models on pretext tasks such as image inpainting or contrastive prediction, self-supervised methods enable robust feature extraction for downstream tasks like segmentation and classification, achieving performance comparable to supervised learning while reducing annotation costs by up to 90% in some benchmarks. A 2024 review highlights its application in MRI and CT analysis, where models like SimCLR variants have improved generalization across diverse datasets.
Multimodal foundation models represent a significant advancement, integrating imaging with text and clinical records to enhance diagnostic accuracy. Google's Med-PaLM Multimodal, for instance, processes chest X-rays alongside textual reports to generate interpretable diagnoses, outperforming single-modality models in tasks like report generation with reported accuracy gains of 5-10%. These models, built on large-scale pretraining, facilitate few-shot learning for rare conditions, as evidenced in a 2024 meta-analysis of over 50 studies showing their efficacy in clinical workflows. Hardware innovations are pushing the boundaries of computational efficiency in medical image processing. Quantum computing accelerates optimization problems in image reconstruction, such as solving inverse problems in tomography faster than classical methods; quantum algorithms have shown potential for significant speedups in simulated reconstruction tasks. Neuromorphic chips, mimicking neural architectures, enable low-power inference at the edge; a 2023 overview notes their potential for medical imaging tasks, with accuracies up to 99% in some applications like disease diagnosis. Digital twins, virtual replicas of patient anatomy derived from multimodal imaging, are transforming personalized simulations. By integrating real-time MRI and CT data with biomechanical models, they predict treatment outcomes, such as tumor response to radiation, with precision errors below 5% in clinical pilots. Federated learning complements this by enabling collaborative training across hospitals without data sharing, preserving privacy while improving model robustness; a 2024 survey reports its success in distributed MRI segmentation, achieving 92% Dice scores across institutions. Explainable AI techniques, particularly SHAP (SHapley Additive exPlanations), are gaining traction to demystify black-box models in imaging. SHAP attributes feature importance in convolutional networks, highlighting salient regions in mammograms for cancer detection and improving trust, as shown in a 2025 study where it aligned explanations with expert annotations in 85% of cases.
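The Dice similarity coefficient used throughout this article to report segmentation agreement has a one-line definition, 2|A ∩ B| / (|A| + |B|); a minimal implementation with toy masks:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)

# Two 4x4 squares offset by one pixel: overlap is 3x3 = 9 pixels
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True   # 16 pixels
ref  = np.zeros((8, 8), bool); ref[3:7, 3:7] = True    # 16 pixels
print(round(dice(pred, ref), 3))   # 2*9 / (16+16) = 0.562
```

Because Dice weights the overlap against both mask sizes, it penalizes over- and under-segmentation symmetrically, which is why it is the default reported metric in segmentation challenges.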
Sustainable computing trends address the environmental footprint of AI model training, with green practices such as model compression and pruning reducing energy use by 50-70% for large-scale image analysis without accuracy loss; initiatives in cloud computing emphasize carbon-aware scheduling to align computations with renewable energy sources.

References

  1. [1]
    The Role of Medical Image Computing and Machine Learning in ...
    Medical image computing aims at developing computational strategies for robust, automated, quantitative analysis of relevant information from medical imaging ...
  2. [2]
    [PDF] Reproducibility in medical image computing - HAL
    Jan 19, 2025 · Medical image computing (MIC) is the field devoted to computational methods for the analysis of medical imaging data. As such, it comprises ...
  3. [3]
    Medical Image Processing - SPIE
    Medical image processing means the provision of digital image processing for medicine. Medical image processing covers five major areas.
  4. [4]
    [PDF] A Comprehensive Review of Medical Image Analysis Technology ...
    At the application level, it focuses on four core tasks of medical image analysis, i.e., classification, boundary detection, registration, and text ...
  5. [5]
    Medical image analysis using deep learning algorithms - PMC
    Medical image processing is an area of research that encompasses the creation and application of algorithms and methods to analyze and decipher medical images ( ...
  6. [6]
    Diffusion Models for Medical Image Computing: A Survey
    Subsequently, it discusses the application of diffusion models in five medical image computing tasks: image generation, modality conversion, image segmentation, ...
  7. [7]
    Viewpoints on Medical Image Processing: From Science to Application
    Medical imaging and medical image computing is seen as field of rapid development with clear trends to integrated applications in diagnostics, treatment ...
  8. [8]
    A Comprehensive Review of Medical Image Analysis Methods - MDPI
    This work overviews fundamental concepts, state-of-the-art models, and publicly available datasets in the field of medical imaging.
  9. [9]
    Medical Image Computing for Translational Biomedical Research
    Feb 20, 2013 · Medical Image Computing (MIC) is an emerging interdisciplinary field at the intersection of computer science, electrical engineering ...Missing: review | Show results with:review
  10. [10]
    Modern Image-Guided Surgery: A Narrative Review of Medical ... - NIH
    Dec 16, 2023 · Image-guided surgery (IGS) is a form of computer-assisted navigation surgery that focuses on processing image data and converting it into ...
  11. [11]
    How Artificial Intelligence Is Shaping Medical Imaging Technology
    Medical image analysis for disease detection and diagnosis is a rapidly evolving field that holds immense potential for improving healthcare outcomes. By ...
  12. [12]
    Image Processing - Medical Imaging Systems - NCBI - NIH
    In image processing, an image is usually regarded as a function f that maps image coordinates x, y to intensity values. This simplifies the introduction of ...
  13. [13]
    How CT happened: the early development of medical computed ...
    On Friday, October 1, 1971, a new procedure was performed to image a live patient's brain. After a (lengthy) computer processing reconstruction delay, a ...
  14. [14]
    The history of digitalization in medical technology
    Sep 1, 2022 · The first digital technique in radiography was computed tomography (CT), which caused great excitement in the medical community in the early ...
  15. [15]
    The History of the MRI: The Development of Medical Resonance ...
    May 20, 2024 · In 1977, Dr. Raymond Damadian and his team completed the first full-body MRI scan of a human, marking a significant milestone in medical imaging ...<|control11|><|separator|>
  16. [16]
    On Gabor's contribution to image enhancement - ScienceDirect.com
    Dennis Gabor is mainly known for the invention of optical holography and the introduction of the so-called Gabor functions in communications.
  17. [17]
    MICCAI 98
    Jul 27, 1998 · MICCAI 98. First International Conference on Medical Image Computing and Computer-Assisted Intervention
  18. [18]
    Statistical shape models for 3D medical image segmentation: a review
    While 2D models have been in use since the early 1990 s, wide-spread utilization of three-dimensional models appeared only in recent years, primarily made ...
  19. [19]
    About | ITK - Insight Toolkit
    History. In 1999 the US National Library of Medicine of the National Institutes of Health awarded six three-year contracts to develop an open-source ...
  20. [20]
    The UK Biobank imaging enhancement of 100000 participants
    May 26, 2020 · UKB became available for researchers to access in 2012, with imaging data for the first 5,000 participants available in mid-2015 and for ~40,000 ...
  21. [21]
    [PDF] introduction of medical imaging modalities - arXiv
    Jun 1, 2023 · This chapter aims to provide an overview of the most commonly used medical imaging modalities, including X-ray, CT, MRI, ultrasound, and nuclear ...
  22. [22]
    X-ray Imaging - Medical Imaging Systems - NCBI Bookshelf - NIH
    In this chapter, the physical principles of X-rays are introduced. We start with a general definition of X-rays compared to other well-known rays, e.g., ...
  23. [23]
    8. Computed Tomography — 10 Lectures on Inverse Problems and ...
    The transform above describes all possible X-ray measurements of u ( x ) and is called the Radon transform after the Austrian mathematician Johann Radon (1887- ...
  24. [24]
    Magnetic Resonance Imaging: Principles and Techniques - NIH
    This article covers a brief synopsis of basic principles in MRI, followed by an overview of current applications in medical practice.
  25. [25]
    [PDF] PET Imaging Physics and Instrumentation - AAPM
    The principle of coincidence detection provides an "electronic collimation" of the counts. One could think of the detector at one end of a line of response as ...
  26. [26]
    Ultrasound Physics and Instrumentation - StatPearls - NCBI Bookshelf
    Mar 27, 2023 · A sound source produces longitudinal wave oscillations, allowing the propagation of energy and critical waveforms for a clinical ultrasound. The ...
  27. [27]
    PET/MRI: a novel hybrid imaging technique. Major clinical ...
    PET/MRI offers significant advantages, including excellent contrast and resolution and reduced ionizing radiation, as compared to well-established PET/CT.
  28. [28]
    About DICOM: Overview
    Digital Imaging and Communications in Medicine (DICOM) is the international standard for medical images and related information.
  29. [29]
    NIfTI-1 Data Format — Neuroimaging Informatics Technology Initiative
    Oct 25, 2007 · Introduction. NIfTI-1 is adapted from the widely used ANALYZE™ 7.5 file format. The hope is that older non-NIfTI-aware software that uses ...
  30. [30]
    Medicine - The HDF Group - ensuring long-term access and ...
    Using HDF5, many organizations and communities achieve their I/O performance, storage, quality and reliability requirements, without sacrificing their ability ...
  31. [31]
    Deep Learning Based Noise Reduction for Brain MR Imaging
    A Gaussian smoothing filter is a popular technique; however, this kind of local averaging will remove not only noise but also structural details such as ...
  32. [32]
    N4ITK: Improved N3 Bias Correction - PMC - NIH
    Abstract. A variant of the popular nonparametric nonuniform intensity normalization (N3) algorithm is proposed for bias field correction.
  33. [33]
    (PDF) Analysis of tractography biases introduced by anisotropic voxels
    Similar to the challenge faced by fMRI for processing and analysis (103, 104), dMRI is facing reproducibility and replication issues.
  34. [34]
    Medical image super-resolution for smart healthcare applications
    Noise amplification, another inherent issue, poses significant challenges. As super-resolution seeks to refine image details, it may inadvertently intensify the ...
  35. [35]
    Successes and challenges in extracting information from DICOM ...
    Sep 12, 2023 · The aim of this work is to examine the current challenges in extracting image metadata and to discuss the potential benefits of using this rich information.
  36. [36]
    3D-QCNet – A pipeline for automated artifact detection in diffusion ...
    This makes quality control (QC) a crucial first step prior to any analysis of dMRI data. Several QC methods for artifact detection exist; however, they suffer ...
  37. [37]
    [PDF] Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten, 69, 1917
    On the determination of functions from their integral values along certain manifolds. By Johann Radon. If one integrates a suitable ...
  38. [38]
    Application of Convolutions instead of Fourier Transforms - PNAS
    mathematical process of reconstruction of a three-dimensional object from its transmission shadowgraphs; it uses convolutions with functions defined in ...
  39. [39]
    [PDF] Peter Mansfield - Nobel Lecture
    These papers emphasized the Fourier transform approach used, even though the images of the camphor stacks were one-dimensional. It was clear that we had made ...
  40. [40]
    [PDF] Maximum Likelihood Reconstruction for Emission Tomography
    Shepp is grateful to B. Efron for bringing the EM algorithm to his attention in the context of a discussion on ET. Y. Vardi is grateful to ...
  41. [41]
    [PDF] Sparse MRI: The application of compressed sensing for rapid MR ...
    The sparsity which is implicit in MR images is exploited to significantly undersample k-space. Some MR images such as angiograms are already sparse in the ...
  42. [42]
    [PDF] Communication in the Presence of Noise* - MIT Fab Lab
    The type of mapping can be suggested by Fig. 3, where a line is mapped into a ...
  43. [43]
    Biomedical Image Denoising Based on Hybrid Optimization ... - NIH
    There are two basic approaches to image denoising that include spatial filtering and transform domain filtering methods. Most of the spatial filters are median ...
  44. [44]
    Fourier Transform Filtering - Evident Scientific
    In the tutorial, low-pass and high-pass filters are included to remove high- and low-spatial-frequency information, respectively, from the Fourier transform of ...
  45. [45]
    Multiresolution Analysis Using Wavelet, Ridgelet, and Curvelet ... - NIH
    The main aim of this research is to facilitate the process of highlighting ROI in medical images, which may be encapsulated within other objects or surrounded ...
  46. [46]
    Application of regularized Richardson–Lucy algorithm for ... - NIH
    It can considerably improve image contrast and reduce noise in microscope images. Several deconvolution algorithms have been proposed for three-dimensional (3D) ...
  47. [47]
    Multiscale registration of medical images based on edge preserving ...
    Aug 21, 2012 · In order to solve these problems, multi-scale frameworks can be used to accelerate registration and improve robustness. Traditional Gaussian ...
  48. [48]
  49. [49]
  50. [50]
    A survey of medical image registration - ScienceDirect.com
    The purpose of this paper is to present a survey of recent (published in 1993 or later) publications concerning medical image registration techniques.
  51. [51]
    Image matching as a diffusion process: an analogy with Maxwell's ...
    Medical Image Analysis · Volume 2, Issue 3, September 1998, Pages 243-260 ... Thirion's Demons algorithm. The new method compares favorably with both ...
  52. [52]
    Multimodality image registration by maximization of mutual information
    This method uses mutual information (MI) to measure statistical dependence between image intensities, maximizing MI for geometric alignment. It is validated ...
  53. [53]
    [PDF] Alignment by Maximization of Mutual Information - DSpace@MIT
    Wells III, W. M. and Viola, P. A. (1995). Multi-modal volume registration by maximization of mutual information. In preparation. Widrow, B. and Ho, M ...
  54. [54]
    A Review on Medical Image Registration as an Optimization Problem
    Based on the differences in search methods, the commonly used methods in the optimization of continuous variables include (1) gradient descent method (GD), (2) ...
  55. [55]
    Evolutionary Image Registration: A Review - MDPI
    The aim of this paper is to investigate and summarize a series of state-of-the-art works reporting evolutionary-based registration methods.
  56. [56]
    A Review of Medical Image Registration for Different Modalities - PMC
    Aug 2, 2024 · This paper provides a comprehensive review of registration techniques for medical images, with an in-depth focus on 2D-2D image registration methods.
  57. [57]
    Volume Visualization: A Technical Overview with a Focus on ...
    Volumetric medical image rendering is a method of extracting meaningful information from a three-dimensional (3D) dataset, allowing disease processes and ...
  58. [58]
    Volume rendering | Seminal graphics: pioneering efforts that shaped ...
    A technique for rendering images of volumes containing mixtures of materials is presented. The shading model allows both the interior of a material and the ...
  59. [59]
    Marching cubes: A high resolution 3D surface construction algorithm
    We present a new algorithm, called marching cubes, that creates triangle models of constant density surfaces from 3D medical data.
  60. [60]
    multi-planar reconstruction–applications in radiation oncology ...
    A study has been initiated to investigate the application of multi-planar reconstruction (MPR) techniques of computerized tomography (CT) to radiation ...
  61. [61]
    Virtual endoscopy: Application of 3d visualization to medical diagnosis
    Virtual endoscopy is a diagnostic technique in which a three-dimensional imaging technology (CT, MRI, or ultrasound) is used to create a computer ...
  62. [62]
    Medical image processing on the GPU – Past, present and future
    This review presents the past and present work on GPU accelerated medical image processing, and is meant to serve as an overview and introduction to existing ...
  63. [63]
    Surgical planning in virtual reality: a systematic review - PMC
    It includes research articles reporting on preoperative surgical planning using patient-specific medical images in virtual reality using head-mounted displays.
  64. [64]
    Computing Large Deformation Metric Mappings via Geodesic Flows ...
    Beg, M.F., Miller, M.I., Trouvé, A. et al. Computing Large Deformation Metric Mappings via Geodesic Flows of Diffeomorphisms. Int J Comput Vision 61, 139 ...
  65. [65]
    Unbiased diffeomorphic atlas construction for computational anatomy
    Unbiased diffeomorphic atlas construction for computational anatomy. S. Joshi, Brad Davis, Matthieu Jomier, ...
  66. [66]
    [PDF] Probabilistic Brain Atlas Encoding Using Bayesian Inference
    This paper addresses the problem of creating probabilistic brain atlases from manually labeled training data. We propose a general mesh-based atlas ...
  67. [67]
    A fast diffeomorphic image registration algorithm - ScienceDirect.com
    This paper describes DARTEL, which is an algorithm for diffeomorphic image registration. It is implemented for both 2D and 3D image registration.
  68. [68]
    Multi‐atlas based representations for Alzheimer's disease diagnosis
    In this article, we propose to measure brain morphometry via multiple atlases, in order to generate a rich representation of anatomical structures that will be ...
  69. [69]
    Voxel-based morphometry--the methods - PubMed
    This paper describes the steps involved in VBM, with particular emphasis on segmenting gray matter from MR images with nonuniformity artifact. We provide ...
  70. [70]
    [PDF] Statistical Parametric Mapping: The Analysis of Functional Brain ...
    The book also serves as a companion to the software packages that have been developed for brain imaging data analysis. Audience. Scientists actively involved in ...
  71. [71]
    [PDF] A Log-Euclidean Framework for Statistics on Diffeomorphisms - Inria
    Abstract. In this article, we focus on the computation of statistics of invertible geometrical deformations (i.e., diffeomorphisms), based on the ...
  72. [72]
    [PDF] Active Shape Models--Their Training and Application
    Bozma and Duncan [11] describe how such a technique can be used to model organs in medical images. A given shape is represented by a list of values for the ...
  73. [73]
  74. [74]
    Direct Three-Dimensional Myocardial Strain Tensor Quantification ...
    This article presents a novel method for calculating cardiac 3-D strain using a stack of two or more images acquired in only one orientation.
  75. [75]
    Large‐scale analysis of structural brain asymmetries during ...
    Jul 24, 2024 · Characterization of dynamic patterns of human fetal to neonatal brain asymmetry with deformation‐based morphometry. Frontiers in ...
  76. [76]
    Deformation-based shape analysis of the hippocampus in the ...
    Jun 3, 2020 · Deformation-based shape analysis showed a common pattern of morphological deformation in svPPA and AD compared with controls. More specifically, ...
  77. [77]
    Registration of Longitudinal Brain Image Sequences with Implicit ...
    Jul 23, 2011 · ... 4D registration method (i.e., 4D-HAMMER (Shen and Davatzikos, 2004)), and a groupwise-only registration method (i.e., our method without ...
  78. [78]
    Unbiased Longitudinal Brain Atlas Creation Using Robust Linear ...
    Abstract. We present a new method to create a diffeomorphic longitudinal (4D) atlas composed of a set of 3D atlases each representing an average model ...
  79. [79]
    Within-subject template estimation for unbiased longitudinal image ...
    In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface ...
  80. [80]
  81. [81]
    Accessible analysis of longitudinal data with linear mixed effects ...
    May 6, 2022 · Here, we describe the linear mixed effects (LME) model and how to use it for longitudinal studies. We re-analyze a dataset published by Blanton ...
  82. [82]
    Longitudinal bioluminescence imaging to monitor breast tumor ...
    Oct 13, 2022 · Here we report the development of the chick embryo chorioallantoic membrane (CAM) model to study tumor growth and angiogenesis using breast cancer cell lines.
  83. [83]
    Longitudinal Imaging Studies of Tumor Microenvironment in Mice ...
    Recent studies suggested a possibility that rapamycin renormalizes aberrant tumor vasculature and improves tumor oxygenation.
  84. [84]
    Longitudinal MRI and Cognitive Change in Healthy Elderly - PMC
    Results suggest that in normal aging, cognitive functioning declines as cortical gray matter and hippocampus decrease, and WMSH increases.
  85. [85]
    Missing data in longitudinal neuroimaging studies - PMC - NIH
    Missing outcome data are an all-too-common feature of any longitudinal study, a feature that, if handled improperly, can reduce statistical power and lead to ...
  86. [86]
    Longitudinal individual predictions from irregular repeated ... - Nature
    Jan 18, 2023 · This paper focuses on the development, fitting and evaluation of a prediction model with irregular intensive longitudinal data.
  87. [87]
    An Overview on the Advancements of Support Vector Machine ...
    This review is an extensive survey on the current state-of-the-art of SVMs developed and applied in the medical field over the years.
  88. [88]
    [PDF] Texture features in medical image analysis: a survey - arXiv
    Aug 4, 2022 · The GLCM is also a statistical and probabilistic operator. Therefore, extracting statistical features from it can represent the texture of the ...
  89. [89]
    [PDF] Principal Component Analysis in Medical Image Processing: A Study
    This paper is a review on the applications of the Principal Component Analysis (PCA) that has been done in the area of medical image processing. It ...
  90. [90]
    Characterization of digital medical images utilizing support vector ...
    Mar 10, 2004 · In this paper we discuss an efficient methodology for the image analysis and characterization of digital images containing skin lesions ...
  91. [91]
    X-ray Image Classification Using Random Forests with Local ... - NIH
    This paper presents a fast and efficient method for classifying X-ray images using random forests with proposed local wavelet-based local binary pattern (LBP)
  92. [92]
    Brain tumor image segmentation using K-means and fuzzy C-means ...
    This chapter provides a comprehensive survey on K-means clustering and fuzzy C-means clustering methods for detecting the location of tumor from brain MRI ...
  93. [93]
    An Intelligent handcrafted feature selection using Archimedes ...
    Mar 2, 2022 · The characteristic vector HOG is formed by concatenating the characteristic vectors of all the blocks for a given image. ...
  94. [94]
    A Guide to Cross-Validation for Artificial Intelligence in Medical ... - NIH
    This article introduces the principles of CV and provides a practical guide on the use of CV for AI algorithm development in medical imaging.
  95. [95]
    The use of the area under the ROC curve in the evaluation of ...
    In this paper we investigate the use of the area under the receiver operating characteristic (ROC) curve (AUC) as a performance measure for machine learning ...
  96. [96]
    U-Net: Convolutional Networks for Biomedical Image Segmentation
    May 18, 2015 · In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more ...
  97. [97]
    Unpaired Image-to-Image Translation using Cycle-Consistent ... - arXiv
    Mar 30, 2017 · We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples.
  98. [98]
    Swin Transformers for Semantic Segmentation of Brain Tumors in ...
    Jan 4, 2022 · We propose a novel segmentation model termed Swin UNEt TRansformers (Swin UNETR). Specifically, the task of 3D brain tumor semantic segmentation is ...
  99. [99]
    Hibou: A Family of Foundational Vision Transformers for Pathology
    Jun 7, 2024 · This paper introduces the Hibou family of foundational vision transformers for pathology, leveraging the DINOv2 framework to pretrain two model variants.
  100. [100]
    nnU-Net: a self-configuring method for deep learning-based ... - Nature
    Dec 7, 2020 · We developed nnU-Net, a deep learning-based segmentation method that automatically configures itself, including preprocessing, network architecture, training ...
  101. [101]
    Med3D: Transfer Learning for 3D Medical Image Analysis - arXiv
    Apr 1, 2019 · Models pre-trained from massive dataset such as ImageNet become a powerful weapon for speeding up training convergence and improving accuracy.
  102. [102]
    [1708.02002] Focal Loss for Dense Object Detection - arXiv
    Aug 7, 2017 · Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during ...
  103. [103]
    Federated Learning for Medical Image Analysis: A Survey - arXiv
    Jun 9, 2023 · In this paper, we conduct a comprehensive survey of the recent development of federated learning methods in medical image analysis.
  104. [104]
    Denoising Diffusion Probabilistic Models for 3D Medical Image ...
    Nov 7, 2022 · This paper shows diffusion models can synthesize high-quality 3D medical images (MRI, CT), and improve breast segmentation models with ...
  105. [105]
    FreeSurfer - PMC - NIH
    FreeSurfer is a suite of tools for the analysis of neuroimaging data that provides an array of algorithms to quantify the functional, connectional and ...
  106. [106]
    MR diffusion tensor spectroscopy and imaging - PubMed
    This paper describes a new NMR imaging modality--MR diffusion tensor imaging. It consists of estimating an effective diffusion tensor, Deff, within a voxel.
  107. [107]
    Diffusion MRI fiber tractography of the brain - Jeurissen - 2019
    Sep 25, 2017 · This paper provides an overview of the key concepts of tractography, the technical considerations at play, and the different types of tractography algorithm.
  108. [108]
    High Angular Resolution Diffusion Imaging (HARDI) - Descoteaux
    Jun 15, 2015 · This article covers the young history of high angular resolution diffusion imaging (HARDI), from basic diffusion principles and diffusion tensor imaging (DTI) ...
  109. [109]
    OPTIMAL SLICE TIMING CORRECTION AND ITS INTERACTION ...
    Slice timing correction (STC) is a critical preprocessing step that corrects for this temporal misalignment. Interpolation-based STC is implemented in all major ...
  110. [110]
    A review of methods for correction of intensity inhomogeneity in MRI
    In this paper, numerous methods that have been developed to reduce or eliminate intensity inhomogeneities in MRI are reviewed.
  111. [111]
    Super‐resolution in magnetic resonance imaging: A review
    Nov 19, 2012 · For the last 15 years, super-resolution (SR) algorithms have successfully been applied to magnetic resonance imaging (MRI) data to increase ...
  112. [112]
    Image-based biomechanical models of the musculoskeletal system
    Aug 13, 2020 · Finite element analysis allows predicting quantities not measurable in vivo or in vitro. Medical imaging plays a critical role in state-of-the- ...
  113. [113]
    Considerations for Reporting Finite Element Analysis Studies in ...
    The goal of this document is to identify resources and considerate reporting parameters for FEA studies in biomechanics.
  114. [114]
    Model for in-vivo estimation of stiffness of tibiofemoral joint using MR ...
    Jul 19, 2021 · The current study's purpose was to develop a methodology to estimate the subject-specific stiffness of the tibiofemoral joint using finite- ...
  115. [115]
    Intraoperative brain shift prediction using a 3D inhomogeneous ...
    The aims of this study were to develop a three-dimensional patient-specific finite element (FE) brain model with detailed anatomical structures and appropriate ...
  116. [116]
    Applications of finite element simulation in orthopedic and trauma ...
    The finite element method (FEM) was originally developed for solving structural analysis problems relating to mechanics, civil and aeronautical engineering. The ...
  117. [117]
    Quantify patient-specific coronary material property and its impact on ...
    An intravascular ultrasound (IVUS)-based modeling approach is proposed to quantify in vivo vessel material properties for more accurate stress/strain ...
  118. [118]
    Deconvolution-Based CT and MR Brain Perfusion Measurement
    Deconvolution-based analysis of CT and MR brain perfusion data is widely used in clinical practice and it is still a topic of ongoing research activities.
  119. [119]
    4D flow MRI - PubMed
    4D flow MRI. J Magn Reson Imaging. 2012 Nov;36(5):1015-36. doi: 10.1002/jmri.23632. Authors: Michael Markl, Alex Frydrychowicz, Sebastian Kozerke, Mike ...
  120. [120]
    A comprehensive mathematical model for cardiac perfusion - Nature
    Aug 30, 2023 · The aim of this paper is to introduce a new mathematical model that simulates myocardial blood perfusion that accounts for multiscale and multiphysics features.
  121. [121]
    Insight Toolkit: ITK
    ITK is an open-source, cross-platform library that provides developers with an extensive suite of software tools for image analysis.
  122. [122]
    VTK - The Visualization Toolkit
    The Visualization Toolkit (VTK) is open source software for manipulating and displaying scientific data. It comes with state-of-the-art tools for 3D rendering.
  123. [123]
    InsightSoftwareConsortium/ITK: Insight Toolkit (ITK) - GitHub
    The Insight Toolkit (ITK) is an open-source, cross-platform toolkit for N-dimensional scientific image processing, segmentation, and registration.
  124. [124]
    Introduction - ITK's documentation - Insight Toolkit
    ITK is an open-source, cross-platform toolkit for scientific image processing, segmentation, and registration in two, three, or more dimensions.
  125. [125]
    About - VTK
    VTK is an open-source software for 3D graphics, modeling, image processing, and scientific visualization, written in C++ with other language bindings.
  126. [126]
    FSL - FMRIB Software Library
    FSL is a comprehensive library of analysis tools for FMRI, MRI and diffusion brain imaging data. It runs on macOS (Intel and Apple Silicon), Linux, and ...
  127. [127]
    FSL - PubMed
    Aug 15, 2012 · FSL (the FMRIB Software Library) is a comprehensive library of analysis tools for functional, structural and diffusion MRI brain imaging data.
  128. [128]
    SPM - Statistical Parametric Mapping - FIL | UCL
    The SPM software package has been designed for the analysis of brain imaging data sequences. The sequences can be a series of images from different cohorts, or ...
  129. [129]
    SPM (Statistical Parametric Mapping)
    The software consists of a suite of tools for analysing brain imaging data, which may be images from different cohorts or time series from the same subject.
  130. [130]
    afni.nimh.nih.gov
    AFNI (Analysis of Functional NeuroImages) is a leading software suite of C, Python, R programs and shell scripts primarily developed for the analysis and ...
  131. [131]
    AFNI: Software for Analysis and Visualization of Functional Magnetic ...
    The software can color overlay neural activation maps onto higher resolution anatomical scans. Slices in each cardinal plane can be viewed simultaneously.
  132. [132]
    scikit-image: Image processing in Python — scikit-image
    scikit-image is a collection of algorithms for image processing. It is available free of charge and free of restriction.
  133. [133]
    scikit-image: image processing in Python - PMC - PubMed Central
    scikit-image is an image processing library that implements algorithms and utilities for use in research, education and industry applications.
  134. [134]
  135. [135]
    3D Slicer as an Image Computing Platform for the Quantitative ...
    3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation.
  136. [136]
    miccai-grand-challenge · GitHub Topics
    This is the source code of the 1st place solution for the segmentation task in the MICCAI 2020 TN-SCUI challenge. ...
  137. [137]
    MICCAI 2019-2023 Open Source Papers - GitHub
    MICCAI 2023 Open-Source Papers: "A flexible framework for simulating and evaluating biases in deep learning-based medical image analysis," Emma A.M. Stanley (code available).
  138. [138]
  139. [139]
    Enterprise Imaging - Philips
    A unified, scalable solution that consolidates imaging data across departments, improves accessibility and supports data-driven clinical decision-making.
  140. [140]
    Advanced Visualization and AI - Philips
    Advanced Visualization Workspace 15 is designed to support your image diagnostic confidence, while still reducing your time to report through optimized ...
  141. [141]
    Materialise Mimics Core | 3D Medical Image Segmentation Software
    Mimics Core is advanced 3D medical image segmentation software that efficiently takes you from image to 3D model and offers virtual procedure planning ...
  142. [142]
    Radiology AI Imaging | Aidoc – Faster, Smarter Care
    Aidoc's advanced AI medical imaging helps radiologists streamline workflows, prioritize findings, activate care teams and facilitate patient follow-up.
  143. [143]
    [PDF] Aidoc Medical, Ltd. - accessdata.fda.gov
    Nov 8, 2023 · All devices are artificial intelligence, deep-learning algorithms incorporating software packages for use with compliant scanners, PACS, and ...
  144. [144]
    Medical Imaging Suite | Google Cloud
    Medical Imaging Suite helps organizations transform imaging diagnostics by making imaging data accessible, interoperable and useful.
  145. [145]
    [PDF] Varian Medical Systems, Inc. May 26, 2023 Peter Coronado Sr ...
    May 26, 2023 · Eclipse TPS is a computer-based software device used by trained medical professionals to design and simulate radiation therapy treatments.
  146. [146]
    Radiology image management PACS - Philips
    Our radiology PACS/integrated PACS solution is designed intuitively to optimize care pathways from orchestration to diagnosis to collaboration.
  147. [147]
    Eclipse | Varian
    Eclipse treatment planning system v18.1 delivers on innovation to reshape treatment planning workflows and techniques.
  148. [148]
    Eclipse™ Treatment Planning Software from Varian Medical ...
    Feb 23, 2015 · Varian's Eclipse software, which is in use at some 3,400 cancer treatment centers around the world, optimizes a radiotherapy treatment plan ...
  149. [149]
    Self-supervised learning framework application for medical image ...
    Oct 27, 2024 · This review begins with an overview of prevalent types and advancements in self-supervised learning, followed by an exhaustive and systematic ...
  150. [150]
    Self-supervised learning for medical image classification - Nature
    Apr 26, 2023 · In this review, we provide consistent descriptions of different self-supervised learning strategies and compose a systematic review of papers ...
  151. [151]
    Multimodal Foundation Models for Medical Imaging - medRxiv
    Oct 23, 2024 · Some notable work includes Med-PaLM Multimodal, which emerged as a model capable of encoding and interpreting a wide array of biomedical ...
  152. [152]
    Multimodal Large Language Models in Medical Imaging
    This review summarizes the current capabilities and limitations of MLLMs in medicine—particularly in radiology—and outlines key directions for future research.
  153. [153]
    Advances in Quantum Algorithms for Medical Tomography
    Oct 7, 2025 · These findings highlight the potential of quantum algorithms to advance tomographic imaging by enabling efficient and accurate reconstructions ...
  154. [154]
    Neuromorphic applications in medicine - IOPscience
    Aug 22, 2023 · This paper provides an overview of recent neuromorphic advancements in medicine, including medical imaging and cancer diagnosis, processing of ...
  155. [155]
    Current progress of digital twin construction using medical imaging
    Aug 21, 2025 · With advancements in medical imaging, digital twins allow for the creation of highly detailed and precise representations of a patient's anatomy ...
  156. [156]
    Federated learning for medical image analysis: A survey - PMC
    In this paper, we conduct a comprehensive survey of the recent development of federated learning methods in medical image analysis.
  157. [157]
    Explainable artificial intelligence for medical imaging systems using ...
    Jul 31, 2025 · By integrating the SHAP explainable AI technique, the model provides both local and global levels of interpretability, aiding healthcare ...
  158. [158]
    Environmental Sustainability and AI in Radiology: A Double-Edged ...
    Feb 27, 2024 · However, AI also has the potential to improve environmental sustainability in medical imaging. For example, use of AI can shorten MRI scan times ...
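Several entries above touch on the same underlying idea: an image is treated as a function mapping coordinates to intensities (entry [12]), and low-pass Fourier filtering removes high-spatial-frequency content such as noise (entry [44]). The sketch below is a minimal, illustrative NumPy example of that concept only; the `lowpass_filter` helper and the synthetic test image are our own assumptions, not code from any cited work.

```python
import numpy as np

def lowpass_filter(image: np.ndarray, cutoff: float) -> np.ndarray:
    """Zero out spatial frequencies above `cutoff` (cycles/pixel) and invert."""
    # Forward 2D FFT; shift so the zero-frequency (DC) term sits at the center.
    F = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[-(h // 2):(h + 1) // 2, -(w // 2):(w + 1) // 2]
    # Normalized radial frequency of each Fourier coefficient.
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)
    F[radius > cutoff] = 0.0  # discard high-frequency content (noise, fine detail)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Synthetic "scan": a smooth intensity gradient corrupted by Gaussian noise.
rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
noisy = clean + 0.3 * rng.standard_normal((64, 64))
smoothed = lowpass_filter(noisy, cutoff=0.1)

# Low-pass filtering should bring the image closer to the noise-free original.
print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("filtered MSE:", np.mean((smoothed - clean) ** 2))
```

As the snippet from entry [31] notes, such global or local smoothing trades noise suppression against loss of structural detail, which is why learning-based denoisers are an active research area.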