References
- [1] [PDF] Shifts in selective visual attention: towards the underlying neural ... "Since the saliency map is still a part of the early visual system, it most likely encodes the conspicuity of objects in terms of simple properties such as color ..."
- [2] A model of saliency-based visual attention for rapid scene analysis. "A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented."
- [3] A saliency-based search mechanism for overt and covert shifts of ... "Most models of visual search, whether involving overt eye movements or covert shifts of attention, are based on the concept of a saliency map."
- [4] Visualising Image Classification Models and Saliency Maps (arXiv, Dec 20, 2013). "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets)."
- [5]
- [6] A feature-integration theory of attention (ScienceDirect). "The feature-integration theory of attention suggests that attention must be directed serially to each stimulus in a display whenever conjunctions of more than ..."
- [7] Saliency Based on Information Maximization (NIPS). Authors: Neil Bruce, John Tsotsos. "A model of bottom-up overt attention is proposed based on the principle of maximizing information sampled from ..."
- [8] [PDF] A Model of Saliency-Based Visual Attention for Rapid Scene Analysis. "All feature maps feed, in a purely bottom-up manner, into a master 'saliency map,' which topographically codes for local conspicuity over the entire visual ..."
- [9] [PDF] Graph-Based Visual Saliency (NIPS). "A new bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), is proposed. It consists of two steps: first forming activation maps on certain ..."
- [10] [PDF] Quantitative Analysis of Human-Model Agreement in Visual Saliency ... "This paper compares 35 saliency models using different datasets and evaluation scores, finding that some models consistently perform better."
- [11] [PDF] Shallow and Deep Convolutional Networks for Saliency Prediction. "In our case we have adopted a completely data-driven approach, using a large amount of annotated data for saliency prediction. Figure 1 provides an example ..."
- [12] Models of Bottom-up Attention and Saliency. "The most obvious and famous example of the bottom-up saliency of a stimulus is the pop-out effect (Treisman, 1986; Treisman & Gelade, 1980; see Figure 14)."
- [13] Five Factors that Guide Attention in Visual Search (PMC, NIH). "There are two fundamental rules of bottom-up salience. Salience of a target increases with difference from the distractors (target-distractor, TD ...)"
- [14] Yarbus, eye movements, and vision (PMC, NIH). "The impact of Yarbus's research on eye movements was enormous following the translation of his book Eye Movements and Vision into English in 1967."
- [15] Defending Yarbus: Eye movements reveal observers' task (JOV). "Buswell (1935) and Yarbus (1967), who were the first to investigate the relationship between eye-movement patterns and high-level cognitive factors. Yarbus ..."
- [16] Saliency map (Scholarpedia, Aug 28, 2007). "The original definition of the saliency map by Koch and Ullman (1985) is in terms of neural processes and transformations, rather than in terms of ..."
- [17] Interaction between bottom-up saliency and top-down control (Jan 16, 2012). "We found evidence for a hierarchy of saliency maps in human early visual cortex (V1 to hV4) and identified where bottom-up saliency interacts with top-down ..."
- [18] Objects predict fixations better than early saliency (Journal of Vision). "Weighted with recall frequency, these objects predict fixations in individual images better than early saliency, irrespective of task."
- [19] Information-theoretic model comparison unifies saliency metrics (PMC). "Here we bring saliency evaluation into the domain of information by framing fixation prediction models probabilistically and calculating information gain."
- [20] How Well Can Saliency Models Predict Fixation Selection in Scenes ... "Here, we adopt this approach to evaluate how well a given saliency map model predicts where human observers fixate in naturalistic images, above and beyond what ..."
- [21]
- [22] Magnocellular Bias in Exogenous Attention to Biologically Salient ... (Oct 1, 2017). "Results support a magnocellular bias in exogenous attention toward distractors of any nature during initial processing, a bias that remains in later stages."
- [23] Superior colliculus encodes visual saliency before the ... (PNAS, Aug 14, 2017). "Our results show that neurons in the superficial visual layers of the superior colliculus (SCs) encoded saliency earlier and more robustly than V1 neurons."
- [24] [PDF] Frequency-tuned Salient Region Detection. "In this paper, we introduce a method for salient region detection that outputs full resolution saliency maps with well-defined boundaries of salient objects."
- [25] [PDF] Saliency, attention, and visual search: An information theoretic ... (Mar 13, 2009). "This paper proposes a saliency computation model to maximize information sampled, where saliency is the output of combining features into a ..."
- [26] [PDF] Visual Saliency Based on Multiscale Deep Features. "Visual saliency attempts to determine the amount of attention steered towards various regions in an image by the human visual and cognitive systems [6]."
- [27] Grad-CAM: Visual Explanations from Deep Networks via Gradient ... (Oct 7, 2016). "We propose a technique for producing visual explanations for decisions from a large class of CNN-based models, making them more transparent."
- [28] U2-Net: going deeper with nested U-structure for salient object detection. "In this paper, we design a simple yet powerful deep network architecture, U2-Net, for salient object detection (SOD). The architecture of our U2-Net is a ..."
- [29] Salient Object Detection in the Deep Learning Era: An In-Depth Survey (Apr 19, 2019). "To facilitate the in-depth understanding of deep SODs, in this paper we provide a comprehensive survey covering various aspects ranging from ..."
- [30] Saliency Map for Human Gaze Prediction in Still Images (Aug 9, 2025). "'Perceptual Image Difference Metrics – Saliency Maps & Eye Tracking'. His research interests include image processing and Digital Signal ..."
- [31] Predicting human gaze beyond pixels (Journal of Vision). "We propose a new saliency architecture that incorporates information at three layers: pixel-level image attributes, object-level attributes, and semantic-level ..."
- [32] UEyes: Understanding Visual Saliency across User Interface Types (Apr 19, 2023). "Given a UI as input, a saliency model can predict saliency maps or scanpaths, simulating how users perceive that UI. These models assist UI ..."
- [33] How Well Can Saliency Models Predict Fixation Selection in Scenes ... "Regarding model evaluation, the best solution to the issue of center bias is to design suitable evaluation metrics (Borji et al., 2013a), an approach we adopt ..."
- [34] (PDF) Saliency and Human Fixations: State-of-the-Art and Study of ... (ResearchGate). "Kullback-Leibler divergence is a measure used to evaluate the similarity between the probability distribution of the computational saliency map and the ..."
- [35] [PDF] Quantitative Analysis of Human-Model Agreement in Visual Saliency ... "In addition to the above scores, Kullback-Leibler (KL) (the divergence between the saliency distributions at human fixations and at randomly shuffled fixations) ..."
- [36] SalFoM: Dynamic Saliency Prediction with Video Foundation Models (Apr 3, 2024). "Deep learning-based video saliency prediction, as explored in [26], has recently become a prominent method for modeling human gaze in dynamic ..."
- [37] [PDF] Gaze Prediction in Dynamic 360° Immersive Videos. "Gaze prediction in 360 videos determines where a user will look, using history scan path and VR content, and is based on a deep learning model."
- [38] [PDF] Modelling Spatio-Temporal Saliency to Predict Gaze Direction ... "A spatio-temporal saliency model that predicts eye movement during video free viewing, inspired by the biology of the first steps of the human visual system ..."
- [39]
- [40] PyGaze: An open-source, cross-platform toolbox for minimal-effort ... (Nov 21, 2013). "The PyGaze toolbox is an open-source software package for Python, a high-level programming language. It is designed for creating eyetracking experiments ..."
- [41] Axiomatic Attribution for Deep Networks (arXiv:1703.01365, Mar 4, 2017). By Mukund Sundararajan and 2 other authors; introduces Integrated Gradients.
- [42] SmoothGrad: removing noise by adding noise (arXiv:1706.03825, Jun 12, 2017). "This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps."
- [43]
- [44] Survey on Explainable AI: From Approaches, Limitations and ... (Aug 10, 2023). "This article aims to present a comprehensive overview of recent research on XAI approaches from three well-defined taxonomies."
- [45] Saliency Maps as an Explainable AI Method in Medical Imaging. "Saliency maps are used to highlight important regions in an image and have been found a user-friendly explanation method for deep learning-based imaging tasks."
- [46] [PDF] Saliency Driven Perceptual Image Compression (CVF Open Access). "This paper proposes a new end-to-end trainable model for lossy image compression, which includes several novel components. The method incorporates 1) an ..."
- [47] (PDF) Saliency Based Image Cropping (ResearchGate, Aug 7, 2025). "Image cropping is a technique that is used to select the most relevant areas of an image, discarding the useless ones."
- [48] A Framework for Video Summarization using Visual Attention ... "This paper proposes a Histogram based Weighted Fusion (HWF) algorithm that uses spatial and temporal saliency maps to act as guidance in creating the summary of ..."
- [49] Real-time adjustment of contrast saliency for improved information ... (Aug 7, 2025). "In this work, we present a technique based on image saliency analysis to improve the conspicuity of the foreground augmentation to the ..."
- [50] Automatic video scene text detection based on saliency edge map. "The saliency map is conducive to detecting the text with cluttered backgrounds, whereas the edge map is suitable for detecting the scene text with low resolution ..."
- [51] Evaluation (MIT/Tuebingen Saliency Benchmark). "More precisely, each metric is evaluated with the saliency map which the model itself predicts to have highest metric performance. This will result in models ..."
- [52] Information-theoretic model comparison unifies saliency metrics (Dec 10, 2015). "A major problem known to the field is that existing model comparison metrics give inconsistent results, causing confusion. We argue that the ..."
- [53] The DIEM Project: Dynamic Images and Eye Movements. "The DIEM project is an investigation of how people look and see. DIEM has so far collected data from over 250 participants watching 85 different videos."
- [54] [PDF] SALICON: Saliency in Context (CVF Open Access). "Saliency in Context (SALICON) is an ongoing effort that aims at understanding and predicting visual attention. This paper presents a new method to collect ..."