References
- [1] An Introduction to Autoencoders. arXiv:2201.03898 [cs.LG], Jan 11, 2022.
- [2] Autoencoder neural networks enable low dimensional structure … Dec 1, 2023.
- [3] Autoencoders. arXiv:2003.05991, Mar 12, 2020.
- [4] Autoencoders and their applications in machine learning: a survey. Feb 3, 2024.
- [5] Hinton, G. E. & Salakhutdinov, R. R. Reducing the Dimensionality of Data with Neural Networks. Science 313(5786), 504–507, Jul 28, 2006.
- [6] Autoencoders, Unsupervised Learning, and Deep Architectures.
- [7] Autoencoders. Deep Learning (textbook chapter).
- [8] Rumelhart, D., Hinton, G. & Williams, R. Learning representations by back-propagating errors. Nature 323, 533–536 (1986).
- [9] Bengio, Y., Courville, A. & Vincent, P. Representation Learning: A Review and New Perspectives. arXiv, Jun 24, 2012.
- [10] On interpretability and proper latent decomposition of autoencoders. Nov 15, 2022.
- [11] Quantization-Based Regularization for Autoencoders. arXiv:1905.11062, May 27, 2019.
- [12] Kingma, D. P. & Welling, M. Auto-Encoding Variational Bayes. arXiv:1312.6114, Dec 20, 2013.
- [13] beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. Feb 6, 2017.
- [14] Sparse autoencoder (lecture notes on the sparse autoencoder learning algorithm). Jan 11, 2011.
- [15] Coates, A., Ng, A. & Lee, H. An Analysis of Single-Layer Networks in Unsupervised Feature Learning. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics.
- [16] Vincent, P., Larochelle, H., Bengio, Y. & Manzagol, P.-A. Extracting and Composing Robust Features with Denoising Autoencoders. Technical Report 1316 (2008).
- [17] A Connection Between Score Matching and Denoising Autoencoders.
- [18] Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion.
- [19] Contractive Auto-Encoders: Explicit Invariance During Feature Extraction.
- [20]
- [21] Stacked Convolutional Auto-Encoders for Hierarchical Feature Extraction.
- [22] Reducing the Dimensionality of Data with Neural Networks. May 25, 2006.
- [23] Provable approximation properties for deep neural networks.
- [24] The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training.
- [25] The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training.
- [26] Autoencoders, Minimum Description Length and Helmholtz Free Energy.
- [27] Concrete Autoencoders for Differentiable Feature Selection and Reconstruction. Jan 27, 2019.
- [28] Neural Discrete Representation Learning (VQ-VAE). arXiv:1711.00937, Nov 2, 2017.
- [29] Autoencoders with Normalizing Flows for Medical Images Anomaly Detection. Feb 1, 2023.
- [30] The Variational Fair Autoencoder. arXiv:1511.00830, Nov 3, 2015.
- [31] Rumelhart, D. E., Hinton, G. E. & Williams, R. J. Learning representations by back-propagating errors. Nature 323, Oct 9, 1986.
- [32] A Learning Algorithm for Boltzmann Machines.
- [33] A Fast Learning Algorithm for Deep Belief Nets.
- [34] Kingma, D. P. & Welling, M. Auto-Encoding Variational Bayes. arXiv (revised version), Dec 10, 2022.
- [35] arXiv:1512.09300v2 [cs.LG], Feb 10, 2016 (jointly training a VAE and a generative adversarial network, using the GAN discriminator).
- [36] Masked Autoencoders Are Scalable Vision Learners (MAE). arXiv:2111.06377v3 [cs.CV], Dec 19, 2021.
- [37] Dimensionality Reduction: A Comparative Review. Oct 26, 2009.
- [38] Anomaly Detection Using Autoencoders with Nonlinear Dimensionality Reduction.
- [39] Application of ResNet and Autoencoder models for anomaly …
- [40] Variational Autoencoder based Anomaly Detection using Reconstruction Probability. Dec 27, 2015.
- [41] A comprehensive study of auto-encoders for anomaly detection.
- [42] Credit Card Fraud Detection Using Autoencoder Neural Network.
- [43] Real-Time Predictive Maintenance using Autoencoder … arXiv:2110.01447, Oct 1, 2021.
- [44] ECG-NET: A deep LSTM autoencoder for detecting anomalous ECG.
- [45] Improving Large-Scale k-Nearest Neighbor Text Categorization with … Feb 3, 2024.
- [46] Compression of molecular fingerprints with autoencoder networks. Jun 7, 2023.
- [47] Improving Chemical Autoencoder Latent Space and Molecular De …
- [48] Neural machine translation: A review of methods, resources, and tools.
- [49] Dual Residual Denoising Autoencoder with Channel Attention … Jan 16, 2023.
- [50] Autoencoders for Wireless Communications. MATLAB & Simulink example.
- [51] Enhanced Federated Anomaly Detection Through Autoencoders … Oct 11, 2024.