Fact-checked by Grok 2 weeks ago
References
-
[1]
Artificial neural networks: a tutorial | IEEE Journals & MagazineThe article discusses the motivations behind the development of ANNs and describes the basic biological neuron and the artificial computational model.
-
[2]
[1404.7828] Deep Learning in Neural Networks: An Overview - arXivApr 30, 2014 · Deep learning uses deep neural networks, distinguished by the depth of their credit assignment paths, and includes supervised, unsupervised, ...
-
[3]
Neuroanatomy, Neurons - StatPearls - NCBI BookshelfTwo connected neurons. Neurons have a soma that contains a nucleus, an axon, and a dendritic tree. A single synapse (red circle) is formed at the point where ...Missing: sheath | Show results with:sheath
-
[4]
Nerve Tissue - SEER Training Modules - National Cancer InstituteEach neuron has three basic parts: cell body (soma), one or more dendrites, and a single axon. Cell Body. In many ways, the cell body is similar to other types ...
-
[5]
Organization of Cell Types (Section 1, Chapter 8) Neuroscience ...Each neuron has only one axon and it is usually straighter and smoother than the dendritic profiles. Axons also contain bundles of microtubules and ...
-
[6]
Ion Channels and the Electrical Properties of Membranes - NCBI - NIHA simple but very important formula, the Nernst equation, expresses the equilibrium condition quantitatively and, as explained in Panel 11-2, makes it possible ...
-
[7]
Chapter 2. Ionic Mechanisms of Action PotentialsSome initial depolarization (e.g., a synaptic potential) will begin to open the Na+ channels. The increase in the Na+ influx leads to a further depolarization. ...
-
[8]
Physiology, Resting Potential - StatPearls - NCBI Bookshelf - NIHThe resting membrane potential is the result of the movement of several different ion species through various ion channels and transporters (uniporters, ...
-
[9]
Physiology, Neurotransmitters - StatPearls - NCBI BookshelfNeurotransmitters are chemicals that allow neurons to communicate, enabling brain functions through chemical synaptic transmission.
-
[10]
Chemical Synapse - an overview | ScienceDirect TopicsChemical synapses are common and are restricted to the nervous system. Electrical synapses are relatively rare and are found in neuronal and nonneuronal cells.
-
[11]
Synaptic Transmission - Basic Neurochemistry - NCBI Bookshelf - NIHSynaptic transmission is chemical communication between nerve cells involving steps like neurotransmitter synthesis, storage, release, and receptor binding.
-
[12]
Physiology, Synapse - StatPearls - NCBI BookshelfMar 27, 2023 · Receptor activation: The neurotransmitter binds to post-synaptic receptors and produces a response in the post-synaptic neuron.<|separator|>
-
[13]
Synaptic Plasticity: The Role of Learning and Unlearning in ...These changes in neuronal connections are the primary mechanism for learning and memory and are known as “synaptic plasticity.” The idea of synaptic plasticity ...
-
[14]
A logical calculus of the ideas immanent in nervous activityBecause of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic.
-
[15]
The perceptron: A probabilistic model for information storage and ...Rosenblatt, F. (1958). The perceptron: A theory of statistical separability in cognitive systems. Buffalo: Cornell Aeronautical Laboratory, Inc. Rep. No. VG- ...
-
[16]
Universal structural patterns in sparse recurrent neural networksSep 8, 2023 · Sparse neural networks can achieve performance comparable to fully connected networks but need less energy and memory, showing great promise ...<|control11|><|separator|>
-
[17]
[PDF] Graph Structure of Neural NetworksAbstract. Neural networks are often represented as graphs of connections between neurons. However, de- spite their wide use, there is currently little un-.
-
[18]
Approximation by superpositions of a sigmoidal functionFeb 17, 1989 · Approximation by superpositions of a sigmoidal function ... Article PDF. Download to read the full article text. Similar content being viewed by ...
-
[19]
[2101.09957] Activation Functions in Artificial Neural Networks - arXivJan 25, 2021 · This paper provides an analytic yet up-to-date overview of popular activation functions and their properties, which makes it a timely resource for anyone who ...
-
[20]
[PDF] Understanding the difficulty of training deep feedforward neural ...Our objective here is to understand better why standard gradient descent from random initialization is doing so poorly with deep neural networks, to better ...
-
[21]
[PDF] Rectified Linear Units Improve Restricted Boltzmann MachinesRestricted Boltzmann machines (RBMs) have been used as generative models of many different types of data including labeled or unlabeled images. (Hinton et al., ...
-
[22]
The Perceptron: A Probabilistic Model for Information Storage and ...No information is available for this page. · Learn why
-
[23]
Learning representations by back-propagating errors - NatureOct 9, 1986 · We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in ...
-
[24]
Deep Learning Book - OptimizationChapter 8. Optimization for Training Deep. Models. Deep learning algorithms involve optimization in many contexts. For example,. performing inference in models ...
-
[25]
[1412.6980] Adam: A Method for Stochastic Optimization - arXivDec 22, 2014 · We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order ...
-
[26]
Approximation capabilities of multilayer feedforward networks1991, Pages 251-257. Neural Networks ... Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks.
-
[27]
[PDF] LONG SHORT-TERM MEMORY 1 INTRODUCTIONHochreiter, S. and Schmidhuber, J. (1997). LSTM can solve hard long time lag problems. In. Advances in Neural Information Processing Systems 9. MIT ...
-
[28]
[PDF] Backpropagation Applied to Handwritten Zip Code RecognitionPrevious work performed on recognizing simple digit images (LeCun. 1989) showed that good generalization on complex tasks can be obtained by designing a network ...
-
[29]
Neural networks and physical systems with emergent collective ...Apr 15, 1982 · Neural networks and physical systems with emergent collective computational abilities. J J Hopfield ... ArticleApril 15, 1982. Sequence-specific ...
-
[30]
[PDF] Minsky-and-Papert-Perceptrons.pdf - The semantics of electronicsThis book is about perceptrons-the simplest learning machines. However, our deeper purpose is to gain more general insights into the interconnected subjects of ...Missing: separability | Show results with:separability
-
[31]
Neural networks and physical systems with emergent collectiveThe broken lines show approximations used in modeling. Biophysics: Hopfield. Page 3. Proc. NatL Acad. Sci. USA 79 (1982) ... Biophysics: Hopfield.Missing: original | Show results with:original
-
[32]
[PDF] Neural Networks and Physical Systems with Emergent Collective ...Jul 5, 2004 · Contributed by John J Hopfield, January 15, 1982. ABSTRACT Computational properties of use to biological or- ganisms or to the construction ...
-
[33]
[PDF] A Fast Learning Algorithm for Deep Belief Nets - Computer ScienceWe show how to use “complementary priors” to eliminate the explaining- away effects that make inference difficult in densely connected belief nets.
-
[34]
[1706.03762] Attention Is All You Need - arXivJun 12, 2017 · We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely.
-
[35]
Models - OpenAI APIGPT-5. The best model for coding and agentic tasks across domains ; GPT-5 mini. A faster, cost-efficient version of GPT-5 for well-defined tasks ; GPT-5 nano.GPT-4o · Gpt-4.1 · Gpt-4 · GPT-4o mini
-
[36]
High-Resolution Image Synthesis with Latent Diffusion Models - arXivDec 20, 2021 · Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks.
-
[37]
[PDF] Reducing the Dimensionality of Data with Neural NetworksMay 25, 2006 · We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much.
-
[38]
[PDF] Self-Training: A Survey arXiv:2202.12040v6 [cs.LG] 14 Feb 2025Feb 14, 2025 · This paper presents self- training methods for binary and multi-class classification, along with vari- ants and related approaches such as ...
-
[39]
[1406.2661] Generative Adversarial Networks - arXivJun 10, 2014 · Access Paper: View a PDF of the paper titled Generative Adversarial Networks, by Ian J. Goodfellow and 7 other authors. View PDF · TeX Source.
-
[40]
[1312.6114] Auto-Encoding Variational Bayes - arXivDec 20, 2013 · Authors:Diederik P Kingma, Max Welling. View a PDF of the paper titled Auto-Encoding Variational Bayes, by Diederik P Kingma and 1 other authors.
-
[41]
[2006.11239] Denoising Diffusion Probabilistic Models - arXivJun 19, 2020 · We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from ...
-
[42]
AI & Robotics - TeslaA full build of Autopilot neural networks involves 48 networks that take 70,000 GPU hours to train . Together, they output 1,000 distinct tensors (predictions) ...
-
[43]
AlphaFold - Google DeepMindAlphaFold has revealed millions of intricate 3D protein structures, and is helping scientists understand how all of life's molecules interact.<|control11|><|separator|>
-
[44]
[2102.12092] Zero-Shot Text-to-Image Generation - arXivFeb 24, 2021 · This paper describes a simple approach for zero-shot text-to-image generation using a transformer that autoregressively models text and image ...
- [45]
-
[46]
[PDF] Dropout: A Simple Way to Prevent Neural Networks from OverfittingIn this paper, we described dropout as a method where we retain units with probability p at training time and scale down the weights by multiplying them by ...
-
[47]
Improving neural networks by preventing co-adaptation of feature ...Jul 3, 2012 · Random "dropout" gives big improvements on many benchmark tasks and sets new records for speech and object recognition. Subjects: Neural and ...
-
[48]
[2001.08361] Scaling Laws for Neural Language Models - arXivJan 23, 2020 · We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the ...
-
[49]
Visualising Image Classification Models and Saliency Maps - arXivDec 20, 2013 · This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets).
-
[50]
"Why Should I Trust You?": Explaining the Predictions of Any ClassifierFeb 16, 2016 · In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner.
-
[51]
[1412.6572] Explaining and Harnessing Adversarial Examples - arXivDec 20, 2014 · Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial ...
-
[52]
A Survey on Bias and Fairness in Machine Learning - arXivAug 23, 2019 · In this survey we investigated different real-world applications that have shown biases in various ways, and we listed different sources of biases that can ...Missing: neural networks
-
[53]
Ethics and discrimination in artificial intelligence-enabled ... - NatureSep 13, 2023 · In 2014, Amazon developed an ML-based hiring tool, but it exhibited gender bias. The system did not classify candidates neutrally for gender ( ...
-
[54]
Social and juristic challenges of artificial intelligence - NatureJun 25, 2019 · It also has a significant impact on many aspects of society and industry, ranging from scientific discovery, healthcare and medical diagnostics ...
-
[55]
Energy and Policy Considerations for Deep Learning in NLP - arXivJun 5, 2019 · In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of ...