
Difference of Gaussians

The Difference of Gaussians (DoG) is a fundamental technique in computer vision and image processing, defined as the subtraction of two Gaussian-smoothed versions of an input image, each obtained by convolution with a Gaussian kernel of standard deviation \sigma_1 or \sigma_2 (where \sigma_1 < \sigma_2). This operation yields a band-pass filtered response that enhances edges, blobs, and other local features at scales corresponding to the difference in blur levels, effectively approximating a second-order derivative operator while suppressing both low-frequency illumination variation and high-frequency noise.

Originally inspired by models of retinal ganglion cell receptive fields in biological vision, where center-surround antagonism is modeled as the difference between excitatory and inhibitory Gaussian profiles, the DoG was formalized in neuroscience by Rodieck in 1965 to describe the spatial sensitivity of cat retinal ganglion cells. In computer vision, it gained prominence through the 1980 theory of edge detection by Marr and Hildreth, who proposed the DoG as a computationally efficient approximation to the Laplacian of Gaussian (LoG) for locating zero-crossings that delineate intensity discontinuities in natural images. The LoG formulation, \nabla^2 G(x,y,\sigma) = \frac{1}{\pi \sigma^4} \left( \frac{r^2}{2\sigma^2} - 1 \right) e^{-r^2 / (2\sigma^2)}, where r^2 = x^2 + y^2, detects edges across scales, but the DoG G(x,y,\sigma_1) - G(x,y,\sigma_2) achieves similar results at lower computational cost by avoiding explicit Laplacian computation.

The DoG's versatility extends to multi-scale feature detection, notably in David Lowe's Scale-Invariant Feature Transform (SIFT) algorithm, where repeated application across an octave of scales (with a constant multiplicative factor k between adjacent scales) identifies stable keypoints as local extrema in the DoG pyramid D(x,y,\sigma) = L(x,y,k\sigma) - L(x,y,\sigma), enabling robust matching invariant to scale, rotation, and illumination changes. Beyond edge and blob detection, the DoG has influenced texture analysis, image stylization, and extended variants such as XDoG for artistic rendering, underscoring its enduring role in bridging biological inspiration with practical computational efficiency.

Mathematical Foundations

Definition and Formulation

The Difference of Gaussians (DoG) is a linear filter commonly applied in image processing and computer vision, formed by subtracting two Gaussian kernels with differing variances to create a band-pass response in the spatial domain. In its general n-dimensional formulation, the isotropic Gaussian kernel with variance t > 0 is defined as \Phi_t(\mathbf{x}) = \frac{1}{(2\pi t)^{n/2}} \exp\left( -\frac{\|\mathbf{x}\|^2}{2t} \right), where \mathbf{x} \in \mathbb{R}^n and \|\cdot\| denotes the Euclidean norm. The DoG kernel is then given by K_{t_1, t_2}(\mathbf{x}) = \Phi_{t_1}(\mathbf{x}) - \Phi_{t_2}(\mathbf{x}), with parameters chosen such that 0 < t_1 < t_2 so that the inner Gaussian has a narrower spread than the outer one. When applied to an input image or signal I: \mathbb{R}^n \to \mathbb{R}, the DoG filter computes the output via convolution: (I * K_{t_1, t_2})(\mathbf{x}) = (I * \Phi_{t_1})(\mathbf{x}) - (I * \Phi_{t_2})(\mathbf{x}), where * denotes the convolution operator. This formulation leverages the linearity of convolution, making the overall operation linear in I and efficient, since the blurred versions I * \Phi_{t_1} and I * \Phi_{t_2} can be precomputed in a scale-space pyramid. In practice, for 2D grayscale images, the parameters are often specified using standard deviations \sigma_1 = \sqrt{t_1} and \sigma_2 = \sqrt{t_2}, with the Gaussian taking the form G(\mathbf{x}, \sigma) = \frac{1}{2\pi \sigma^2} \exp\left( -\frac{\|\mathbf{x}\|^2}{2\sigma^2} \right).[1]

The separability of the Gaussian kernel further enhances computational efficiency: in Cartesian coordinates, the n-dimensional Gaussian is the product of n independent one-dimensional Gaussians along each axis, \Phi_t(\mathbf{x}) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi t}} \exp\left( -\frac{x_i^2}{2t} \right). Consequently, convolution with a 2D or higher-dimensional Gaussian can be decomposed into successive 1D convolutions along each dimension, reducing the computational cost per pixel from O(K^2) to O(K), where K is the effective kernel size proportional to the standard deviation. This benefit carries over to the DoG, since each of its two Gaussian components is separable. The resulting DoG acts as an efficient spatial band-pass filter, attenuating low-frequency components (e.g., uniform regions) captured by the broader Gaussian while suppressing high-frequency noise via the smoothing of both, thereby emphasizing mid-frequency structures such as edges.
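As an illustration of this two-blur formulation, the following sketch assumes NumPy and SciPy are available and uses scipy.ndimage.gaussian_filter, which internally performs separable 1D Gaussian convolutions; the function name difference_of_gaussians and the synthetic test image are illustrative choices rather than part of any standard API.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma1, sigma2):
    """Band-pass filter an image by subtracting two Gaussian blurs.

    With sigma1 < sigma2, the narrower blur keeps finer detail, the wider
    blur estimates the local background, and their difference emphasizes
    structures between the two scales.
    """
    if not 0 < sigma1 < sigma2:
        raise ValueError("expected 0 < sigma1 < sigma2")
    img = image.astype(np.float64)
    # gaussian_filter applies separable 1D Gaussian convolutions per axis,
    # so each blur costs O(K) per pixel rather than O(K^2).
    narrow = gaussian_filter(img, sigma1)
    wide = gaussian_filter(img, sigma2)
    return narrow - wide

# Synthetic example: a bright blob of roughly 2-pixel scale.
image = np.zeros((64, 64))
image[32, 32] = 1.0
image = gaussian_filter(image, 2.0)

response = difference_of_gaussians(image, sigma1=2.0, sigma2=3.2)
print(response.max())  # strongest response near the blob center
```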

Properties of the DoG Kernel

The Difference of Gaussians (DoG) kernel functions as a band-pass filter in the frequency domain, exhibiting a zero direct current (DC) response that suppresses low-frequency components while attenuating high frequencies beyond the passband. This behavior arises because the Fourier transform of the DoG is the difference of two Gaussian functions, resulting in a response that passes a band of mid-range spatial frequencies. The peak sensitivity occurs at spatial frequencies inversely proportional to the geometric mean of the standard deviations of the constituent Gaussians, highlighting features at scales around \sqrt{\sigma_1 \sigma_2}. In the spatial domain, the DoG kernel's two-dimensional profile forms a Mexican hat shape when viewed in cross-section, featuring a central positive lobe that excites responses to local intensity changes and flanking negative lobes that inhibit surrounding regions. This center-surround structure produces zero-crossings at radial distances where the positive and negative contributions balance, delineating potential edge locations without requiring explicit derivative computation. The overall shape promotes selective enhancement of blob-like or edge-like structures while suppressing uniform or slowly varying regions. To ensure consistent responses across scales, the DoG kernel is frequently normalized by the factor 1/(t_2 - t_1), which compensates for the increasing magnitude of Gaussian blurring at larger scales and approximates scale-invariant behavior akin to normalized second-order derivatives. Common choices for the standard deviation ratio \sigma_2 / \sigma_1 include approximately 1.6, which provides a balanced approximation suitable for feature detection, and 4 or 5, which widen the kernel for better noise suppression at the cost of reduced contrast in fine details.
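A short numerical check of these properties is sketched below, assuming NumPy; the explicit 1D kernel construction and the renormalization after truncation are implementation conveniences for the sketch, not part of the continuous definition.

```python
import numpy as np

def gaussian_1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    g = np.exp(-x**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)
    return g / g.sum()  # renormalize after truncation

sigma1, sigma2 = 1.0, 1.6
radius = int(np.ceil(4 * sigma2))  # truncate at ~4 standard deviations
dog = gaussian_1d(sigma1, radius) - gaussian_1d(sigma2, radius)

# Zero DC response: the kernel sums to (approximately) zero, so uniform
# regions of the input produce no output.
print(f"kernel sum = {dog.sum():.2e}")

# Band-pass behavior: the magnitude spectrum peaks at a mid-range spatial
# frequency and falls off toward both DC and the Nyquist frequency.
spectrum = np.abs(np.fft.rfft(dog, n=512))
freqs = np.fft.rfftfreq(512)
print(f"peak frequency ~ {freqs[spectrum.argmax()]:.3f} cycles/pixel")
```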

Relation to Laplacian of Gaussian

Approximation Mechanism

The Gaussian kernel \Phi_t(\mathbf{x}) in scale-space theory satisfies the heat equation \partial_t \Phi_t(\mathbf{x}) = \frac{1}{2} \Delta \Phi_t(\mathbf{x}), where \Delta denotes the Laplacian operator and t > 0 parameterizes the scale of smoothing. This diffusion equation ensures that repeated Gaussian smoothing generates a continuous family of increasingly blurred representations, preserving the well-posedness of the scale-space paradigm. To approximate the Laplacian \Delta \Phi_t, a finite-difference quotient can be applied to the scale derivative in the heat equation: \Delta \Phi_t = 2\,\partial_t \Phi_t \approx \frac{2}{\delta t} (\Phi_{t + \delta t} - \Phi_t) for small \delta t > 0. Rearranging yields \Phi_t - \Phi_{t + \delta t} \approx -\frac{\delta t}{2} \Delta \Phi_t, showing that the difference-of-Gaussians kernel K_{t, t + \delta t} = \Phi_t - \Phi_{t + \delta t} approximates \frac{1}{2} \Delta \Phi_t up to a scaling factor proportional to \delta t and a sign flip; the sign flip accounts for the DoG's positive central lobe mirroring the negated LoG. Applying this to an input image I, the Laplacian-of-Gaussian response satisfies \Delta (I * \Phi_t) = I * (\Delta \Phi_t) \approx -\frac{2}{\delta t}\, I * K_{t_1, t_2}, where t_1 = t and t_2 = t + \delta t, so the DoG-filtered image reproduces the LoG-filtered image up to scale and sign. This establishes the Difference of Gaussians as a computationally efficient surrogate for the Laplacian of Gaussian in multi-scale analysis. Rigorous treatments of these approximation properties within scale-space theory, including error bounds and consistency as the scale step tends to zero, are provided by Lindeberg (1994, 2015).
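The finite-difference relation can be checked numerically; the following 1D sketch assumes NumPy and uses the closed-form second derivative of the Gaussian, with t and \delta t chosen purely for illustration.

```python
import numpy as np

# 1D check that the DoG kernel approximates a scaled, sign-flipped LoG.
x = np.linspace(-10, 10, 2001)

def gaussian(x, t):
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def laplacian_of_gaussian(x, t):
    # Second derivative of the 1D Gaussian with variance t.
    return (x**2 / t - 1) / t * gaussian(x, t)

t, dt = 4.0, 0.4
dog = gaussian(x, t) - gaussian(x, t + dt)
log_scaled = -(dt / 2) * laplacian_of_gaussian(x, t)

rel_err = np.abs(dog - log_scaled).max() / np.abs(log_scaled).max()
print(f"max relative error ~ {rel_err:.3f}")  # shrinks as dt/t -> 0
```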

Accuracy and Parameter Selection

The approximation of the Laplacian of Gaussian (LoG) by the difference of Gaussians (DoG) involves a trade-off between accuracy and computational efficiency: the approximation error decreases with a smaller relative scale step \delta t / t, but smaller steps require more closely spaced Gaussian kernels, increasing the number of filtering operations in multi-scale analyses. This finite-difference approximation, rooted in the discretization of the diffusion equation, has an error that scales as O((\delta t / t)^2), so ratios close to one give the best fidelity to the LoG; practical implementations nevertheless favor a standard deviation ratio \sigma_2 / \sigma_1 \approx 1.6 (corresponding to a variance ratio t_2 / t_1 \approx 2.56) as a compromise that keeps the overall error small while maintaining a reasonable filter bandwidth and computational cost. Parameter selection for the two standard deviations is guided by the desired balance between approximation quality and robustness: a ratio \sigma_2 / \sigma_1 \approx 1.6 provides a close match to the LoG, achieving a peak sensitivity of about 33% and a half-sensitivity bandwidth of 1.8 octaves, as recommended for edge detection in early visual processing. Larger ratios, such as \sigma_2 : \sigma_1 = 4:1, enhance robustness by emphasizing broader differences, but at the cost of increased loss of fine detail and greater deviation from the LoG shape. The DoG offers computational advantages over direct LoG computation by avoiding second-order derivatives, which can introduce numerical instability, and by leveraging the separability of Gaussian filters into 1D convolutions along rows and columns, which is more efficient than convolving with the non-separable LoG kernel. In discrete implementations, kernel sizes are typically rounded to odd integers, such as 5×5 for the smaller Gaussian and 9×9 for the larger one when the standard deviation ratio is 1.6, with truncation at 3–4 standard deviations to capture over 99% of the Gaussian energy while minimizing boundary effects.
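The truncation rule can be expressed as a small helper; in the sketch below the function name and the default truncation at three standard deviations are illustrative conventions rather than fixed standards, and the exact kernel sizes obtained depend on the chosen \sigma_1.

```python
import numpy as np

def dog_kernel_sizes(sigma1, ratio=1.6, truncate=3.0):
    """Pick odd kernel widths for the two Gaussians of a DoG filter.

    Each kernel is truncated at `truncate` standard deviations, retaining
    the overwhelming majority of the Gaussian's mass, and the width is
    rounded up to the next odd integer so the kernel has a center pixel.
    """
    sigma2 = ratio * sigma1
    sizes = []
    for s in (sigma1, sigma2):
        radius = int(np.ceil(truncate * s))
        sizes.append(2 * radius + 1)
    return tuple(sizes), sigma2

# Example: sigma1 = 1.0 with the conventional 1.6 ratio.
(size1, size2), sigma2 = dog_kernel_sizes(1.0)
print(size1, size2, sigma2)  # 7 11 1.6
```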

Biological Inspiration

Retinal Ganglion Cells

Retinal ganglion cells (RGCs) are the output neurons of the retina, projecting to the brain via the optic nerve, and their receptive fields form the foundational biological inspiration for the difference of Gaussians (DoG) model in visual processing. These cells exhibit a concentric organization with two primary types: ON-center/OFF-surround, where light onset in the center excites the cell while light in the surrounding annulus inhibits it, and the inverse OFF-center/ON-surround configuration, where light offset in the center excites and onset in the surround inhibits. This antagonistic structure enhances contrast detection by responding strongly to local changes while suppressing uniform illumination, thereby promoting edge and blob sensitivity in early visual signaling.

This center-surround organization was first described by Kuffler (1953) through electrophysiological recordings in cat RGCs, where responses to small stimuli revealed excitatory or inhibitory centers surrounded by oppositely tuned annuli. Enroth-Cugell and Robson (1966) quantified this by presenting spots of varying sizes and positions, demonstrating that RGCs reach maximal firing rates when stimuli fill the receptive field center (typically 0.5–2 degrees) and show reduced or reversed responses for larger spots engaging the surround, confirming the concentric antagonism. Building on this, Hubel and Wiesel (1961) observed analogous receptive field properties in lateral geniculate nucleus cells, linking retinal outputs to cortical processing, though their work emphasized binocular integration. These findings established the empirical basis for modeling RGC sensitivity as a spatial difference of Gaussians.

Mathematically, the spatial response profile of RGCs is well-approximated by a difference-of-Gaussians function, in which a broad surround Gaussian (standard deviation σ_s) is subtracted from a narrow center Gaussian (σ_c), with the weight of the surround often scaled so that the total integral is zero for uniform fields. A typical parameter ratio of σ_s / σ_c ≈ 5:1 captures the observed sensitivity falloff, aligning the model's contrast sensitivity curve with psychophysical measurements of human spatial contrast sensitivity at intermediate frequencies (around 2–5 cycles per degree). This formulation, introduced by Rodieck (1965), quantitatively reproduces the excitatory-inhibitory antagonism without requiring complex nonlinearities for basic linear responses.

RGCs further diversify into subtypes with distinct DoG scales: X-cells (analogous to the parvocellular pathway) feature finer centers and surrounds (σ_c ≈ 0.2–0.5 degrees), supporting high-acuity form perception, whereas Y-cells (magnocellular pathway) employ coarser surrounds (σ_c ≈ 0.5–1 degree, σ_s up to 5–10 times larger), enabling robust detection of low-contrast, high-speed motion across wider fields. This scale difference arises from convergent inputs (Y-cells pool from more bipolar cells), facilitating transient responses critical for dynamic scene analysis, as evidenced by their preferential activation in motion-sensitive tasks.
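The Rodieck-style model described above can be written as a short function; the sketch below assumes NumPy, and the parameter values are illustrative stand-ins rather than fitted physiological measurements.

```python
import numpy as np

def dog_receptive_field(r, sigma_c, sigma_s, w_s=1.0):
    """Center-surround sensitivity profile at radial distance r (degrees).

    A narrow excitatory center Gaussian minus a broader inhibitory surround
    Gaussian; with w_s = 1 the two lobes carry equal total weight, so a
    spatially uniform stimulus produces no net response.
    """
    center = np.exp(-r**2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r**2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - w_s * surround

# Illustrative X-cell-like parameters with sigma_s / sigma_c = 5.
r = np.linspace(0, 2.0, 200)
profile = dog_receptive_field(r, sigma_c=0.3, sigma_s=1.5)

# Peak excitation at the center, weak inhibition in the surround.
print(profile[0], profile.min())
```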

Center-Surround Receptive Fields

The center-surround organization of receptive fields in retinal ganglion cells features an antagonistic structure, with a central excitatory (or inhibitory) region opposed by an inhibitory (or excitatory) surround. This arrangement is computationally modeled as the difference between a narrow central Gaussian and a broader surrounding Gaussian, providing a direct mapping to the DoG framework. Early theoretical developments formalized this DoG abstraction for receptive fields, beginning with Ratliff's analysis of lateral inhibition and the neural networks underlying contrast phenomena in the retina. Subsequent work by Koch et al. integrated dendritic morphology with functional modeling to explain how such structures generate spatially tuned responses. Functionally, the center-surround configuration enhances local contrast through subtractive processing, in which the surround suppresses uniform background signals to amplify differences at the center of the receptive field. This leads to robust detection of edges and blobs, as the DoG response peaks at intensity transitions and exhibits zero-crossings that delineate boundaries. Additionally, the balanced opposition between center and surround confers robustness against global illumination variations, maintaining sensitivity to relative contrasts independent of absolute light levels. In computational models, the surround size of these receptive fields increases with eccentricity from the fovea, reflecting sparser peripheral sampling and larger receptive field areas; accordingly, DoG parameters must vary with eccentricity to capture this gradient, with broader surrounds in peripheral representations.

Applications

Edge and Blob Detection

The Difference of Gaussians (DoG) filter is widely used for edge detection by identifying zero-crossings in its response, which correspond to intensity boundaries in the image. These zero-crossings occur where the DoG response changes sign, typically between the positive central lobe and the negative surrounding lobes, marking locations of sharp intensity transitions. This approach stems from the DoG's approximation of the Laplacian of Gaussian (LoG) operator, whose band-pass properties highlight edges while smoothing noise. For blob detection, DoG responses are computed across multiple scales by varying the parameter t in the Gaussian kernels, forming a scale-space representation often implemented via multi-scale pyramids. Local extrema in this DoG scale-space indicate blob-like structures that are invariant to scale, as these extrema capture regions of consistent intensity variation across resolutions. This method enables detection of circular or elliptical features without prior knowledge of their size. The DoG handles noise effectively by suppressing uniform low-frequency components through the subtraction of blurred versions, rejecting slowly varying intensities while attenuating high-frequency noise. For instance, when Gaussian noise is added to an image, the DoG reduces its impact by emphasizing mid-frequency edges and blobs, preserving structural detail over random fluctuations. In practice, detections are refined using non-maximum suppression along the scale dimension, which eliminates redundant responses by retaining only the strongest peaks in neighborhoods across position and scale, as in variants of the Marr-Hildreth edge detector adapted for multi-scale analysis.
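A minimal sketch of zero-crossing edge marking is shown below, assuming NumPy and SciPy are available; the function name, the synthetic step image, and the contrast threshold are illustrative choices used only to gate out sign changes caused by noise.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_zero_crossing_edges(image, sigma1=1.0, sigma2=1.6, threshold=1e-3):
    """Mark pixels where the DoG response changes sign between horizontal
    or vertical neighbors, keeping only crossings with sufficient local
    contrast."""
    img = image.astype(np.float64)
    dog = gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

    edges = np.zeros_like(dog, dtype=bool)
    # Sign change between a pixel and its right / lower neighbor.
    h = (dog[:, :-1] * dog[:, 1:] < 0) & (np.abs(dog[:, :-1] - dog[:, 1:]) > threshold)
    v = (dog[:-1, :] * dog[1:, :] < 0) & (np.abs(dog[:-1, :] - dog[1:, :]) > threshold)
    edges[:, :-1] |= h
    edges[:-1, :] |= v
    return edges

# Synthetic step edge: a bright square on a dark background.
test = np.zeros((32, 32))
test[8:24, 8:24] = 1.0
print(dog_zero_crossing_edges(test).sum(), "edge pixels")
```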

Scale-Invariant Feature Transform

The Scale-Invariant Feature Transform (SIFT) algorithm uses the Difference of Gaussians (DoG) as a core component for detecting keypoints that are invariant to scale and robust to changes in imaging conditions. In SIFT, the DoG is applied across octave pyramids, where each octave represents a doubling of scale followed by down-sampling of the image by a factor of 2, allowing efficient coverage of a wide range of scales. Keypoints are identified as local extrema in the DoG representation, corresponding to stable blob-like features that persist across scales. This multi-scale approach ensures that features detected in one octave align with those in adjacent octaves after resampling, enabling scale-invariant matching.

The detection process constructs a Gaussian pyramid with s = 3 intervals per octave, using a constant scale factor k = 2^{1/3} \approx 1.26 between adjacent levels. This requires computing 6 Gaussian images per octave, which yields 5 DoG images per octave by subtracting adjacent blurred versions. Extrema are then located by comparing each sample in a DoG image to its 26 neighbors across the current scale and the two adjacent scales; samples that qualify as maxima or minima are refined to sub-pixel accuracy through a quadratic (Taylor series) fit around the discrete location. The ratio of standard deviations \sigma_2 / \sigma_1 = k \approx 1.26 between adjacent levels provides a close approximation to the scale-normalized Laplacian of Gaussian (LoG), enhancing keypoint stability while maintaining computational efficiency. Additionally, a contrast threshold, typically set to 0.03 for pixel values normalized to [0, 1], rejects low-response extrema to eliminate unstable or edge-like points.

The DoG's blob-like responses in scale space confer scale invariance on SIFT keypoints, as the characteristic scale of each extremum is determined by the Gaussian kernel size at detection, allowing features to be compared regardless of image resizing. Rotation invariance is achieved by assigning a dominant orientation to each keypoint based on local gradient histograms computed over the surrounding region. These properties make SIFT highly robust for image matching, as demonstrated in benchmarks where it correctly matched features under significant noise (up to 10% added image noise), illumination changes, and affine distortions (up to roughly 50 degrees of viewpoint change), outperforming earlier methods such as Harris corners in repeatability across transformations. The use of the DoG in SIFT was introduced by Lowe, first presented in 1999 and described definitively in the 2004 paper, marking a seminal advance in feature detection for computer vision tasks.
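Modern OpenCV exposes this DoG-based detector directly; the minimal usage sketch below assumes opencv-python version 4.4 or later and an illustrative image path ("scene.png") that would need to be replaced. The nOctaveLayers value mirrors the s = 3 intervals per octave described above, and contrastThreshold plays the role of the contrast threshold (OpenCV's default is 0.04).

```python
import cv2

# Assumes opencv-python >= 4.4, where SIFT lives in the main cv2 module,
# and an illustrative grayscale test image at this path.
image = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
assert image is not None, "replace 'scene.png' with a real image path"

# nOctaveLayers matches the 3 intervals per octave described above;
# contrastThreshold discards low-contrast DoG extrema.
sift = cv2.SIFT_create(nOctaveLayers=3, contrastThreshold=0.04)
keypoints, descriptors = sift.detectAndCompute(image, None)

print(len(keypoints), "keypoints")
for kp in keypoints[:3]:
    # kp.size reflects the characteristic scale of the underlying DoG extremum.
    print(kp.pt, kp.size, kp.angle)
```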

Extensions and Modern Developments

In Computer Vision and Machine Learning

In modern computer vision, the Difference of Gaussians (DoG) has been integrated as a preprocessing step in hybrid models that combine traditional filters with convolutional neural networks (CNNs) to enhance detection and segmentation tasks. For instance, multi-scale DoG preprocessing applied before a dual-stream CNN-Transformer network improves skin lesion segmentation by emphasizing multi-resolution boundaries while reducing noise, achieving higher segmentation scores than baseline CNNs alone. This approach leverages the DoG's ability to approximate Laplacian of Gaussian responses efficiently, providing robust low-level features that complement the hierarchical feature learning of deep networks. Attention mechanisms in deep learning have also drawn inspiration from the DoG's center-surround structure to model contextual modulation in transformer-based architectures: recent extensions to Vision Transformers (ViTs) incorporate center-surround antagonism via Gaussian-biased attention, enabling better handling of spatial hierarchies in image recognition and improving robustness to scale variations in models as of 2023. For real-time applications, efficient DoG approximations in mobile augmented reality (AR) systems, particularly within updated Scale-Invariant Feature Transform (SIFT) implementations, enable fast keypoint detection on resource-constrained devices; OpenCV's integration of the no-longer-patented SIFT after 2020 has enabled DoG-based tracking at 20+ frames per second on smartphones for object recognition. These integrations demonstrate the DoG's enduring role in bridging classical filtering and deep learning paradigms for scalable vision systems.

In Neuroscience and Biomedical Imaging

In computational neuroscience, the Difference of Gaussians (DoG) model serves as a foundational tool for simulating the receptive fields of simple cells in the primary visual cortex (V1), capturing center-surround organization to replicate responses to oriented stimuli and edges. This approach extends the biological inspiration from retinal ganglion cells by incorporating inhibitory surrounds that enhance contrast sensitivity, allowing computational models to predict V1 neuronal firing patterns under varying visual conditions. Seminal work has demonstrated that DoG-based simulations can mimic the spatial tuning of V1 simple cells, with parameters tuned to match empirical data from electrophysiological recordings.

Extensions of the DoG model have advanced functional magnetic resonance imaging (fMRI) analysis of receptive fields, enabling the mapping of population receptive fields (pRFs) in human visual cortex with greater precision by accounting for suppressive surrounds. In these models, the DoG function replaces the traditional Gaussian pRF to better fit BOLD signals, revealing surround suppression effects that correlate with eccentricity and attentional modulation. For instance, DoG implementations have quantified how inhibitory surrounds influence pRF sizes, providing insights into cortical organization beyond retinotopic mapping.

In biomedical imaging, DoG filters enhance retinal optical coherence tomography (OCT) scans for automated layer segmentation, suppressing noise while highlighting boundaries between retinal sublayers such as the inner nuclear and outer plexiform layers. By applying multi-scale DoG filtering to preprocess OCT volumes, algorithms achieve sub-pixel accuracy in delineating layer boundaries even in the presence of pathology, with segmentation errors reduced by up to 20% in clinical datasets.

References

  1. [1]
    [PDF] Distinctive Image Features from Scale-Invariant Keypoints
    Jan 5, 2004 · It is implemented efficiently by using a difference-of-Gaussian function to identify potential interest points that are invariant to scale and ...
  2. [2]
    [PDF] Theory of edge detection
    A theory of edge detection is presented. The analysis proceeds in two parts. (1)Intensity changes, which occur in a natural image over a wide.
  3. [3]
    [PDF] XDoG: An eXtended difference-of-Gaussians compendium including ...
    The difference-of-Gaussians (DoG) operator has been shown to yield aesthetically pleasing edge lines without post- processing, particularly when synthesizing ...
  4. [4]
    Analysis of multidimensional difference-of-Gaussians filters in terms ...
    The difference-of-Gaussians (DOG) filter is a widely used model for the receptive field of neurons in the retina and lateral geniculate nucleus and is a ...
  5. [5]
    [PDF] Gaussian filters
    Both, the BOX filter and the Gaussian filter are separable: First convolve each row with a 1D filter. Then convolve each column with a 1D filter.
  6. [6]
    Difference of Gaussians Edge Enhancement Algorithm
    Difference of gaussians is a grayscale image enhancement algorithm that involves the subtraction of one blurred version of an original grayscale image from ...
  7. [7]
    [PDF] Scale-space theory: A basic tool for analysing structures at di erent ...
    The output from the scale-space representation can be used for a variety of early visual tasks; operations like feature detection, feature classification and.
  8. [8]
    [PDF] Outline of the relationship between the difference-of-Gaussian and ...
Sep 19, 2006 · The difference-of-Gaussian (DoG) kernel is widely used as an approximation to the scale-normalized Laplacian-of-Gaussian (LoG) kernel (e.g., ...
  9. [9]
    Difference of Gaussians Edge Enhancement Algorithm - Interactive ...
    May 17, 2016 · Thus, the difference of gaussians is equivalent to a band-pass filter that discards all but a handful of spatial frequencies that are ...
  10. [10]
    DISCHARGE PATTERNS AND FUNCTIONAL ORGANIZATION OF ...
    Modulation of narrow-field amacrine cells on light-evoked spike responses and receptive fields of retinal ganglion cells. 1 Apr 2023 | Vision Research, Vol ...
  11. [11]
    [PDF] 517-552 J. Physiol. Christina Enroth-Cugell and J. G. Robson the cat ...
    retinal ganglion cells would be closely correlated with the characteristics of human spatial vision, especially if measurements of the same kind were considered ...
  12. [12]
    Retinal ganglion cells: a functional interpretation of dendritic ...
    The dendritic architecture of the different types of retinal ganglion cells reflects characteristically different electrical properties.
  13. [13]
    Receptive fields and functional architecture in the retina - PMC - NIH
    Jun 15, 2009 · This takes to the cortical level the principles underlying the ganglion cell's difference-of-Gaussians filter, for which the centre improves SNR ...
  14. [14]
    Reconciling Color Vision Models With Midget Ganglion Cell ...
    Aug 16, 2019 · As Figure 2C demonstrates, center-surround receptive fields are ideal edge detectors for encoding spatial contrast. In contrast to early ...
  15. [15]
    Mapping of Retinal and Geniculate Neurons onto Striate Cortex of ...
    The increase with eccentricity of the receptive-field-center area of broad-band ganglion cells (Fig. 5, curve RF) appears to be similar to, but not as steep as, ...
  16. [16]
    [PDF] Lecture 13: Edge Detection
    Feb 12, 2000 · We can find edges by looking for zero-crossings in the Laplacian of the image. ... This is called the Difference-of-Gaussians or DoG operator.
  17. [17]
    [PDF] Feature Detection with Automatic Scale Selection - DiVA portal
    Early work addressing this problem was presented in (Lindeberg 1991, 1993a) for blob-like image structures. The basic idea was to study the behaviour of image.
  18. [18]
    [PDF] Blobs (and scale selection)
    maxima = dark blobs on light background minima = light blobs on dark ... Locating scale-space extrema. Generating Gaussian and. DOG pyramid. References ...
  19. [19]
    [PDF] Edge Detection 1 Low Level Vision - Cornell: Computer Science
    A number of biological vision systems appear to compute difference of Gaussians in their low-level visual processing. ... Zero crossings connected at adjacent ...
  20. [20]
    [PDF] CS 534: Computer Vision Edges - Rutgers Computer Science
    • Laplacian for edge detection/ Laplacian of Gaussian ... • This is called Difference of Gaussians filter DoG ... • Mark the point with zero crossings:.
  21. [21]
    [PDF] Scale-space image processing - Stanford University
    Difference of Gaussians t = σ2 = 1 t = σ2 = 1, k = 1.1. 1. 2. ∇. 2 f t x ... ▫ Non-maximum suppression in 3x3x3 [x,y,t] neighborhood. ▫ Interpolation of ...
  22. [22]
    Multi-scale Gaussian Difference Preprocessing and Dual Stream ...
    Mar 31, 2023 · In this paper, we propose a Multi-Scale Gaussian Difference Preprocessing and Dual Stream CNN-Transformer Hybrid Network for Skin Lesion ...
  23. [23]
    [PDF] Understanding Gaussian Attention Bias of Vision Transformers ...
    We observed that using Gaussian attention bias improved the performance of ViTs on several datasets, tasks, and models.
  24. [24]
    Introduction to SIFT (Scale-Invariant Feature Transform) - OpenCV
... SIFT algorithm uses Difference of Gaussians which is an approximation of LoG. Difference of Gaussian is obtained as the difference of Gaussian blurring of ...
  25. [25]
    SIFT Interest Point Detector Using Python - OpenCV - GeeksforGeeks
SIFT's patent has expired in March 2020. In versions > 4.4, the detector init command has changed to cv2.SIFT_create(). pip install opencv- ...
  26. [26]
    Difference-of-Gaussian generative adversarial network for ...
    May 1, 2023 · In this paper, we present a difference of Gaussian generative adversarial network (DoG-GAN) model for segmenting BACs in mammograms.
  27. [27]
    Bayesian population receptive field modelling - PubMed Central - NIH
    Such receptive fields may be modelled using a Difference of Gaussians (DoG) function (Rodieck, 1965), which can also capture the neuronal response at the ...
  28. [28]
    Population receptive fields of human primary visual cortex organised ...
    Nov 17, 2021 · These findings for the Difference of Gaussians model in fMRI studies when modelling the neural responses in the primary visual cortex are in ...
  29. [29]
    Modeling center–surround configurations in population receptive ...
    The DoG model allows the pRF analysis to capture fMRI signals below baseline and surround suppression. Comparing the fits of the models, an increased variance ...
  30. [30]
    A.I. Pipeline for Accurate Retinal Layer Segmentation Using OCT 3D ...
    Features were enhanced by using the difference of Gaussians (DoG) technique;. Local maxima and minima values were used to remove low-contrast points to ...
  31. [31]
    A novel method based on a multiscale convolution neural network ...
    Oct 28, 2025 · Specifically, we implemented the Gaussian pyramid construction and Difference of Gaussians computation components from the SIFT framework to ...
  32. [32]
    HistomicsTK: A Python toolkit for pathology image analysis algorithms
    Aug 22, 2025 · - Cell detection: Watershed, Difference of Gaussians (DoG) - Pixel classification. Feature Extraction, - Intensity and histogram based ...