References
- [1] Sequence Learning and NLP with Neural Networks. Sequence learning refers to a variety of related tasks that neural nets can be trained to perform. What all these tasks have in common is that the input to the ...
- [2] Deep Learning in a Nutshell: Sequence Learning - NVIDIA Developer. Mar 7, 2016. In this post, we'll look at sequence learning with a focus on natural language processing. Part 4 of the series covers reinforcement learning.
- [3] Sequential Learning - an overview | ScienceDirect Topics. Sequential learning refers to the process of training and evaluating models that can learn and make decisions in a sequential manner, such as model-based ...
- [4] Machine Learning for Sequential Data: A Review - ACM Digital Library. This paper formalizes the principal learning tasks and describes the methods that have been developed within the machine learning research community for ...
- [5]
- [6] Large Sequence Models for Sequential Decision-Making: A Survey. Jun 24, 2023. This survey presents a comprehensive overview of recent works aimed at solving sequential decision-making tasks with sequence models such as the Transformer.
- [7] A Survey and Formal Analyses on Sequence Learning ... - IEEE Xplore. This paper presents a literature survey and analysis on a variety of neural networks towards sequence learning. The conceptual models, methodologies, ...
- [8] Sequence Learning.
- [9] Temporal-Sequential Learning With a Brain-Inspired Spiking Neural ... Jul 1, 2020. Sequence learning is a fundamental cognitive function of the brain. However, the ways in which sequential information is represented and ...
- [10] The significance of brain oscillations in motor sequence learning. Complex movements such as riding a bike or playing a musical instrument are composed of sequences of single mostly simple movements. Therefore, our capacity ...
- [11] Data-driven stock forecasting models based on neural networks. This paper comprehensively reviews the literature on data-driven neural networks in the field of stock forecasting from 2015 to 2023.
- [12] [PDF] Markovian Models for Sequential Data, Yoshua Bengio. Hidden Markov Models (HMMs) are statistical models of sequential data that have been used successfully in many applications in artificial intelligence, pattern ...
- [13] Motor sequence learning - Scholarpedia. May 24, 2018. Motor sequence learning broadly refers to the process by which a sequence of movements comes to be performed faster and more accurately than before.
- [14] Encapsulation of Implicit and Explicit Memory in Sequence Learning. Mar 1, 1998. In our study, amnesic patients were given extensive SRT training. Their implicit and explicit test performance was compared to the performance ...
- [15] [PDF] Learning Structure from the Ground up—Hierarchical ... Chunking as a mechanism is a basis for humans to identify patterns as objects, assigning labels to them to facilitate memory compression [8, 9], sequence ...
- [16] Psychology as the Behaviorist Views it. John B. Watson (1913). It has been shown that improvement in habit comes unconsciously. The first we know of it is when it is achieved -- when it becomes an object. I believe that ...
- [17] [PDF] The Problem of Serial Order in Behavior - Language Log. Jan 6, 2017. Lashley's analysis lies in the fact that it exhibits the significant factors involved in the expression of ideas as well as in other instances ...
- [18] The problem of serial order in behavior: Lashley's legacy. In 1951, Karl Lashley, a neurophysiologist at Harvard University, published a paper that has become a classic: "The Problem of Serial Order in Behavior."
- [19] Hierarchical processing in music, language, and action: Lashley ... Apr 2, 2014. Karl Lashley suggested that complex action sequences, from simple motor acts to language and music, are a fundamental but neglected aspect of neural function.
- [20] Hierarchical processing in music, language, and action: Lashley ... Our study reveals how musical training refines the hierarchical neural processing of music and provides a neuro-computational account of this remarkable ...
- [21] Finding Structure in Time - Elman - 1990 - Cognitive Science. Encoding sequential structure in simple recurrent networks (CMU Tech. Rep. ...); A learning algorithm for continually running fully recurrent neural networks (Tech. ...
- [22] Sequence learning - ScienceDirect.com. Sequence learning has provided a natural domain for investigating the computations and neural structures involved in skill acquisition. As we have already ...
- [23] Implicit learning of artificial grammars - ScienceDirect.com. An artificial grammar was used to generate the stimuli. Experiment I showed that Ss learned to become increasingly sensitive to the grammatical structure of the ...
- [24] Attentional requirements of learning: Evidence from performance ... Peter Bullemer. https://doi.org/10.1016 ... Journal of Experimental Psychology: Learning, Memory, and Cognition, 10 ...
- [25] A Neostriatal Habit Learning System in Humans - Science. In contrast, patients with Parkinson's disease failed to learn the probabilistic classification task, despite having intact memory for the training episode.
- [26] Sequence learning in the human brain: A functional ... Feb 15, 2020. The study provides solid evidence that, at least as tested with the visuo-motor SRT task, sequence learning in humans relies on the basal ganglia.
- [27] Association and Abstraction in Sequential Learning: "What is ..." Aug 6, 2025. The first part of the article addresses the major questions and challenges that underlie the debate on implicit and explicit learning. In ...
- [28] [PDF] Machine Learning for Sequential Data: A Review. This paper formalizes the principal learning tasks and describes the methods that have been developed within the machine learning research community for ...
- [29] Machine Learning: Algorithms, Real-World Applications and ... - NIH. This study's key contribution is explaining the principles of different machine learning techniques and their applicability in various real-world application ...
- [30] Supervised Machine Learning - DataCamp. Aug 22, 2022. Supervised machine learning learns patterns and relationships between input and output data. It is defined by its use of labeled data. A labeled ...
- [31] [PDF] Reinforcement Learning: An Introduction - Stanford University. The reinforcement learning agent and its environment interact over a sequence of discrete time steps. The specification of their interface defines a ...
- [32] [PDF] Deep Reinforcement Learning for Sequence-to-Sequence Models. We intend for this paper to provide a broad overview on the strength and complexity of combining seq2seq training with RL training and to guide researchers in ...
- [33] [PDF] A Tutorial on Hidden Markov Models and Selected Applications in ... This tutorial is intended to provide an overview of the basic theory of HMMs (as originated by Baum and his colleagues), provide practical details on methods of ...
- [34] The Viterbi algorithm | IEEE Journals & Magazine. This paper gives a tutorial exposition of the algorithm and of how it is implemented and analyzed. Applications to date are reviewed. Increasing use of the ...
- [35] [PDF] The Infinite Hidden Markov Model - MLG Cambridge. We have shown how a two-level Hierarchical Dirichlet Process can be used to define a nonparametric Bayesian HMM. The HDP implicitly integrates out the ...
- [36] Hidden Markov Models and their Applications in Biological ... - NIH. We show how these HMMs can be used to solve various sequence analysis problems, such as pairwise and multiple sequence alignments, gene annotation, ...
- [37] [PDF] Long Short-Term Memory. For instance, in his postdoctoral thesis (1993), Schmidhuber uses hierarchical recurrent nets to rapidly solve certain grammar learning tasks involving minimal ...
- [38] [PDF] Long Short-Term Memory - Semantic Scholar. Sepp Hochreiter, J. Schmidhuber. Published in Neural Computation, 1 November 1997.
- [39] [1706.03762] Attention Is All You Need - arXiv. Jun 12, 2017. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.
- [40] [PDF] Backpropagation Through Time: What It Does and How to Do It. This paper first reviews basic backpropagation, a simple method which is now being widely used in areas like pattern recognition and fault diagnosis, ...
- [41] BERT: Pre-training of Deep Bidirectional Transformers for Language ... Oct 11, 2018. BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.
- [42] [PDF] Short term temperature forecasting using LSTMS, and CNN. May 31, 2021. Long Short-Term Memory (LSTM) is a widely used deep learning architecture for time series forecasting. In this paper, we aim to predict one ...
- [43] [PDF] Predicting Stock Prices Using Hybrid LSTM and ARIMA Model - IAENG. In this paper, we use the closing price of the sample as the prediction target, and the input and output of the training model are all one-dimensional matrices.
- [44] [PDF] Improving Language Understanding by Generative Pre-Training. Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and ...
- [45] [1809.04281] Music Transformer - arXiv. Sep 12, 2018. The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that ...
- [46] Highly accurate protein structure prediction with AlphaFold - Nature. Jul 15, 2021. The AlphaFold network directly predicts the 3D coordinates of all heavy atoms for a given protein using the primary amino acid sequence and ...
- [47] Q-learning | Machine Learning. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to ...
- [48] A Markovian Decision Process - Semantic Scholar. R. Bellman. Published 18 April 1957, Indiana University Mathematics Journal.
- [49] [PDF] A Markovian Decision Process - DTIC. Richard Bellman. §1. Introduction. The purpose of this paper is to discuss the asymptotic behavior of the sequence {f_N(i)} ...
- [50] A Review of Deep Reinforcement Learning Algorithms for Mobile ... This review paper discusses path-planning methods that use neural networks, including deep reinforcement learning, and its different types.
- [51] Mastering the game of Go with deep neural networks and tree search. Jan 27, 2016. Silver, D., Huang, A., Maddison, C. et al. Nature 529, 484–489 (2016).
- [52] Informing sequential clinical decision-making through reinforcement ... This paper highlights the role that reinforcement learning can play in the optimization of treatment policies for informing clinical decision making. We have ...
- [53] Rethinking exploration–exploitation trade-off in reinforcement ... The exploration–exploitation dilemma is one of the fundamental challenges in deep reinforcement learning (RL). Agents must strike a trade-off between making ...
- [54] [PDF] A framework for temporal abstraction in reinforcement learning. R.S. Sutton et al. We can analyze options in terms of the SMDP and then use their MDP interpretation to change them and produce a new SMDP.
- [55] Replay of Learned Neural Firing Sequences during Rest in Human ... The offline "replay" of neural firing patterns underlying waking experience, previously observed in non-human animals, is thought to be a mechanism for memory ...
- [56] Predictive sequence learning in the hippocampal formation. Aug 7, 2024. We developed a predictive autoencoder model of the hippocampus including the trisynaptic and monosynaptic circuits from the entorhinal cortex (EC).
- [57] Learning hierarchical sequence representations across human ... Feb 19, 2021. In contrast, "chunking models" posit that learners represent statistically coherent units of information from the input in memory such that ...
- [58] Intact predictive motor sequence learning in autism spectrum disorder. Oct 19, 2021. We conclude that individuals with autism do not show atypicalities in response to surprising events in the context of motor sequence learning.
- [59] Brain signatures of a multiscale process of sequence learning ... - eLife. Feb 4, 2019. Using Bayesian inference (again), the estimated statistics are turned into a prediction about the next stimulus. In that framework, surprise ...
- [60] [2303.08774] GPT-4 Technical Report - arXiv. Mar 15, 2023. We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs.
- [61] [2209.15352] AudioGen: Textually Guided Audio Generation - arXiv. AudioGen is an auto-regressive model that generates audio samples conditioned on text inputs, using a discrete audio representation.
- [62] Mamba: Linear-Time Sequence Modeling with Selective State Spaces. Dec 1, 2023. Mamba is a neural network using selective SSMs, with fast inference and linear scaling, achieving state-of-the-art performance in language, ...
- [63] [2410.06070] Enforcing Interpretability in Time Series Transformers. Oct 8, 2024. We develop a framework based on Concept Bottleneck Models to enforce interpretability of time series Transformers.
- [64] Bias and Fairness in Large Language Models: A Survey. We present a comprehensive survey of bias evaluation and mitigation techniques for LLMs. We first consolidate, formalize, and expand notions of social bias and ...
- [65] Large language models show amplified cognitive biases in moral ... Our experiments demonstrate that the decisions and advice of LLMs are systematically biased against doing anything, and this bias is stronger than in humans.
- [66] Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting. In this work, we propose a multimodal prompt learning scheme that works to balance the supervised and zero-shot performance under a single unified training.
- [67] Sequence processing with quantum-inspired tensor networks - Nature. Feb 28, 2025. We introduce efficient tensor network models for sequence processing motivated by correspondence to probabilistic graphical models, interpretability and ...
- [68] Protein Design by Integrating Machine Learning and Quantum ... Nov 15, 2024. Strikingly, our quantum-inspired reformulation outperforms conventional sequence optimization even when adopted on classical machines. The ...