Autoencoder

An autoencoder is an algorithm, typically implemented as an artificial neural network, designed to learn a compressed, informative representation of input data by reconstructing the original input with minimal error. The core architecture consists of two main components: an encoder that maps high-dimensional input to a lower-dimensional latent representation, and a decoder that reconstructs the input from this representation, often enforced by a bottleneck structure where the latent dimension is smaller than the input dimension. Introduced in 1986 by David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams as a method for learning internal representations through backpropagation, autoencoders have become foundational in unsupervised learning for tasks requiring efficient encoding. Autoencoders excel in dimensionality reduction by capturing essential features while discarding noise or redundancies, making them useful for preprocessing high-dimensional datasets such as images or sensor data. Beyond basic reconstruction, variants address specific challenges: denoising autoencoders are trained on corrupted inputs to recover clean outputs, enhancing robustness to noise; sparse autoencoders impose sparsity constraints on the latent activations to promote selective feature learning; and variational autoencoders (VAEs) introduce probabilistic modeling of the latent space, enabling generative capabilities by sampling from a prior distribution such as a Gaussian. These adaptations have extended autoencoders' utility in applications including anomaly detection, where deviations in reconstruction error flag outliers, and as components in deeper architectures for tasks like image generation and representation learning. The significance of autoencoders lies in their ability to perform nonlinear dimensionality reduction without supervision, outperforming linear methods like principal component analysis (PCA) in capturing complex data manifolds. Recent advancements, such as convolutional autoencoders for spatial data and graph autoencoders for structured inputs, continue to broaden their impact across fields such as computer vision and bioinformatics.

Fundamentals

Definition and Purpose

An autoencoder is a neural network architecture designed to learn a compressed representation of input data by mapping it to a lower-dimensional latent space and then reconstructing the original input from that representation. The core objective is to minimize the reconstruction error between the input and the output, thereby capturing the essential features of the data in a more efficient form. This process enables the network to perform nonlinear dimensionality reduction, generalizing beyond linear methods like principal component analysis (PCA). The primary purposes of autoencoders include feature extraction for representation learning and dimensionality reduction. By training on unlabeled datasets, autoencoders support tasks such as data compression and feature learning without requiring supervisory signals, making them valuable in unsupervised learning paradigms. Unlike supervised neural networks, which rely on paired input-output labels to learn specific mappings, autoencoders treat the input as both the source and the target, using self-supervised reconstruction to discover intrinsic data structures.

Architecture Components

The architecture of an autoencoder comprises three primary components: the encoder, the bottleneck (latent layer), and the decoder, which together enable the compression and reconstruction of input data. The encoder functions as a mapping that transforms the input data \mathbf{x} into a compressed latent representation \mathbf{z}, typically through a series of nonlinear transformations that progressively reduce dimensionality. In the seminal deep autoencoder design, the encoder consists of multiple fully connected layers with nonlinear activation functions, such as a four-layer structure mapping a 784-dimensional MNIST image input to a 30-dimensional code. This compression enforces learning of essential features while discarding noise or redundancies. The bottleneck, often implemented as a layer with fewer neurons than the input (e.g., 30 units for 784-dimensional inputs), serves as the core mechanism for dimensionality reduction, capturing the most salient information in a compact form. By constraining the representation to a lower dimensionality, the bottleneck promotes efficient encoding that preserves data structure for subsequent reconstruction. This design is central to the autoencoder's ability to learn hierarchical features, as demonstrated in early applications to high-dimensional datasets like images. The decoder mirrors or extends the encoder to reconstruct the output \mathbf{x}' from the latent representation \mathbf{z}, aiming for \mathbf{x}' to closely approximate the original input. In symmetric architectures, the decoder has an identical layered structure to the encoder but in reverse, which facilitates balanced learning and high reconstruction fidelity, as seen in the original deep autoencoder, where symmetric multilayer networks achieved lower reconstruction errors on datasets like the Olivetti faces than PCA. Asymmetric architectures, where the decoder employs different layer counts or types, can enhance fidelity in complex tasks by allowing specialized reconstruction paths, though they may require careful tuning to avoid instability. Common layer types in autoencoders vary by data modality to suit spatial or temporal structures. Fully connected layers, using dense connections, are standard for tabular or vectorized data, enabling simple nonlinear mappings. For image data, convolutional layers replace dense ones in the encoder and decoder to exploit local patterns, as in stacked convolutional autoencoders that process pixel grids hierarchically for feature extraction. Recurrent layers, such as LSTMs, are incorporated for sequential data like time series or text, allowing the encoder to capture temporal dependencies in variable-length inputs before decoding to reconstructed sequences. These adaptations maintain the core encoder-latent-decoder flow while optimizing for domain-specific efficiency.
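
To make the encoder-bottleneck-decoder structure concrete, the following is a minimal PyTorch sketch of a fully connected autoencoder, assuming flattened 784-dimensional inputs (such as MNIST images scaled to [0, 1]) and a 30-dimensional code; the intermediate layer sizes are illustrative choices rather than a prescribed design.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Fully connected autoencoder with a 30-dimensional bottleneck (layer sizes are illustrative)."""
    def __init__(self, input_dim=784, latent_dim=30):
        super().__init__()
        # Encoder: progressively reduce dimensionality down to the bottleneck.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # Decoder: mirror the encoder to reconstruct the input.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),  # inputs assumed scaled to [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)           # compressed latent code
        return self.decoder(z), z     # reconstruction and latent code

model = Autoencoder()
x = torch.rand(32, 784)               # dummy batch of flattened images
x_hat, z = model(x)
print(x_hat.shape, z.shape)           # torch.Size([32, 784]) torch.Size([32, 30])
```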

Mathematical Principles

Encoder-Decoder Formulation

An autoencoder is mathematically formulated as the composition of an encoder and a decoder, designed to map an input to a lower-dimensional latent representation and then reconstruct the original input from that representation. The encoder function f_\theta, parameterized by weights \theta, transforms the input x \in \mathbb{R}^d into a latent code z \in \mathbb{R}^k where typically k < d, expressed as z = f_\theta(x). This mapping compresses the input by projecting it into a lower-dimensional space, capturing essential features while discarding less relevant information. The decoder function g_\phi, parameterized by weights \phi, then reconstructs an approximation of the input from the latent code, given by x' = g_\phi(z), where x' \in \mathbb{R}^d and the objective is x' \approx x. The full autoencoder model is thus the composite function A_{\theta,\phi}(x) = g_\phi(f_\theta(x)), trained such that the overall mapping approximates the identity function for inputs drawn from the data distribution. In standard autoencoders, both the encoder and decoder are deterministic functions, typically implemented as multilayer neural networks with nonlinear activation functions such as sigmoid for bounded outputs or ReLU for unbounded intermediate layers to introduce nonlinearity and enable complex mappings. The forward pass proceeds sequentially: the input x is fed through the encoder layers to produce z, which serves as the bottleneck restricting the dimensionality and enforcing information compression, before being passed through the decoder layers to yield the reconstruction x'. This bottleneck structure ensures that the latent representation z must efficiently encode the input's salient structure to allow accurate reconstruction. While basic autoencoders employ deterministic mappings, extensions such as variational autoencoders introduce stochasticity in the encoder to model probabilistic latent distributions, though the core deterministic formulation remains foundational.
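
As a minimal illustration of the composite mapping A_{\theta,\phi}(x) = g_\phi(f_\theta(x)), the NumPy sketch below runs a forward pass through a single-hidden-layer deterministic autoencoder; the dimensions, sigmoid activations, and random (untrained) parameters are assumptions made only for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3                        # input and latent dimensionalities (k < d)

# Randomly initialized parameters theta (encoder) and phi (decoder); untrained, for illustration only.
W_enc, b_enc = rng.normal(size=(k, d)) * 0.1, np.zeros(k)
W_dec, b_dec = rng.normal(size=(d, k)) * 0.1, np.zeros(d)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def f_theta(x):                    # encoder: R^d -> R^k
    return sigmoid(W_enc @ x + b_enc)

def g_phi(z):                      # decoder: R^k -> R^d
    return sigmoid(W_dec @ z + b_dec)

x = rng.random(d)
z = f_theta(x)                     # latent code at the bottleneck
x_prime = g_phi(z)                 # reconstruction; training would drive x_prime toward x
print(z.shape, x_prime.shape)      # (3,) (8,)
```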

Loss Functions and Optimization

The primary objective in training an autoencoder is to minimize the reconstruction error between the input data \mathbf{x} and its reconstructed output \hat{\mathbf{x}}, which measures how well the model captures and reproduces the input. For continuous-valued data, the most common reconstruction loss is the mean squared error (MSE), defined as L(\mathbf{x}, \hat{\mathbf{x}}) = \|\mathbf{x} - \hat{\mathbf{x}}\|^2_2, where \|\cdot\|_2 denotes the Euclidean norm; this choice is motivated by its simplicity and effectiveness in penalizing large deviations in real-valued reconstructions. For binary or categorical data, such as normalized images treated as probabilities, binary cross-entropy (BCE) is preferred, given by L(\mathbf{x}, \hat{\mathbf{x}}) = -\sum_i [x_i \log(\hat{x}_i) + (1 - x_i) \log(1 - \hat{x}_i)], as it aligns with the probabilistic interpretation of outputs from sigmoid activations and better handles bounded data. The overall training objective is to minimize the expected reconstruction loss over the data distribution, formulated as \min_{\theta, \phi} \mathbb{E}_{\mathbf{x} \sim p_{\text{data}}(\mathbf{x})} [L(\mathbf{x}, g_\phi(f_\theta(\mathbf{x})))], where f_\theta is the encoder parameterized by \theta, g_\phi is the decoder parameterized by \phi, and the expectation is approximated empirically via the dataset average during training. To promote generalization and prevent overfitting, regularization terms are often added to the loss, such as L1 or L2 penalties on the network weights (e.g., \lambda \|\mathbf{W}\|_1 or \lambda \|\mathbf{W}\|_2^2), which encourage simpler models by shrinking weights toward zero. Sparsity penalties, which constrain the hidden representations to be sparse (e.g., via a KL divergence between average activations and a low target activity), can also be included to induce useful feature selectivity, though full details are covered in the sparse autoencoder variant. Optimization proceeds by computing gradients of the total loss with respect to the parameters \theta and \phi using backpropagation, which efficiently propagates errors from the output layer backward through the encoder-decoder network to update weights via gradient descent. Stochastic gradient descent (SGD) and its variants, such as Adam—which adapts learning rates per parameter using momentum and RMSProp-like scaling—are widely used to iteratively minimize the loss, with Adam often preferred for its faster convergence in high-dimensional settings. In deep autoencoders, optimization faces challenges like vanishing gradients, where signals diminish through many layers, hindering effective learning of lower-level features. A key solution is layer-wise pretraining, where individual layers or shallow autoencoders are trained greedily before fine-tuning the full stack with backpropagation, as demonstrated to yield superior low-dimensional representations on datasets like MNIST.
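
The sketch below illustrates one way such an objective might be minimized in PyTorch, combining an MSE reconstruction loss, an L2 weight penalty via the optimizer's weight_decay argument, and Adam updates; the architecture, hyperparameters, and random placeholder data are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal training-loop sketch; architecture and hyperparameters are illustrative assumptions.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 30))
decoder = nn.Sequential(nn.Linear(30, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-5)   # weight_decay acts as an L2 penalty

data = torch.rand(256, 784)                                        # placeholder data scaled to [0, 1]
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

for epoch in range(5):
    for x in loader:
        x_hat = decoder(encoder(x))
        loss = F.mse_loss(x_hat, x)        # swap in F.binary_cross_entropy for binary/probabilistic targets
        optimizer.zero_grad()
        loss.backward()                    # gradients flow from the output back through decoder and encoder
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```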

Interpretations of Learned Representations

The latent space z in an autoencoder provides a compressed, distributed representation of the input data, encoding the most essential features into a lower-dimensional form while filtering out noise and irrelevant details. This distributed encoding allows multiple aspects of the data to be represented across the dimensions of z, facilitating the capture of intricate, non-local patterns that preserve semantic structure. Interpretability of these representations depends on whether the latent space forms linear or nonlinear manifolds. In linear autoencoders, z approximates a linear subspace akin to principal components, offering straightforward interpretability through orthogonal projections. Nonlinear autoencoders, however, learn curved manifolds that enable hierarchical feature learning, where early layers extract basic elements like edges and later layers build abstract concepts such as object parts, enhancing the model's ability to disentangle complex data hierarchies. Key properties of the learned representations include robustness to minor input perturbations, promoting invariance, and strong generalization to unseen data. This generalization stems from the manifold hypothesis, which assumes high-dimensional observations lie near a low-dimensional manifold; autoencoders effectively learn coordinates on this manifold, allowing smooth interpolation between data points and extrapolation beyond the training distribution. Compared to principal component analysis (PCA), which serves as a linear special case by maximizing variance along orthogonal directions, autoencoders extend this to nonlinear mappings that yield superior reconstruction fidelity and capture manifold curvatures inaccessible to linear methods. Without regularization, however, these representations risk overfitting, where the model memorizes training specifics rather than generalizable features, or collapsing to trivial identity mappings that fail to compress effectively.
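
As a small numerical illustration of the linear special case, the sketch below computes the optimal rank-k linear reconstruction with an SVD, which is the best a linear autoencoder trained with squared error can achieve; the synthetic data and chosen k are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20))   # synthetic correlated data
X = X - X.mean(axis=0)                                       # center the data, as PCA assumes

k = 5
U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = (X @ Vt[:k].T) @ Vt[:k]                              # project onto the top-k principal directions

# Squared reconstruction error of the best rank-k *linear* map; a linear autoencoder trained with MSE
# converges to this same subspace, while a nonlinear autoencoder can do better on curved manifolds.
print("PCA / linear-autoencoder optimum:", np.mean((X - X_pca) ** 2))
```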

Variations

Variational Autoencoder

The variational autoencoder (VAE) extends the standard autoencoder by incorporating probabilistic modeling in the latent space, enabling both efficient representation learning and generative capabilities. In a VAE, the encoder network parameterizes a distribution q_\phi(z \mid x) over latent variables z given input x, typically a multivariate Gaussian with learnable mean \mu and diagonal covariance \sigma^2, approximating the true posterior p(z \mid x). The decoder then models the conditional likelihood p_\theta(x \mid z), often assuming a Gaussian or Bernoulli distribution depending on the data type, while a prior distribution p(z), usually a standard normal \mathcal{N}(0, I), is imposed on the latents to regularize the approximate posterior. This setup frames the VAE as a latent variable model within a Bayesian framework, allowing for stochastic sampling in the latent space rather than deterministic mappings. Training a VAE involves maximizing the evidence lower bound (ELBO) on the marginal log-likelihood \log p_\theta(x), which decomposes into a reconstruction term and a regularization term: \mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)} \left[ \log p_\theta(x \mid z) \right] - D_{\text{KL}} \left( q_\phi(z \mid x) \Vert p(z) \right). The first term encourages faithful reconstruction of x from sampled z, akin to the mean squared error in deterministic autoencoders, while the Kullback-Leibler (KL) divergence term pushes q_\phi(z \mid x) toward the prior p(z), promoting structured and compact latent representations. Direct sampling from q_\phi(z \mid x) during backpropagation is intractable due to its stochastic nature, so the reparameterization trick addresses this by transforming a fixed noise source: z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon, where \epsilon \sim \mathcal{N}(0, I), rendering the sampling differentiable with respect to \phi. This allows end-to-end optimization via stochastic gradient descent. For generation, the VAE leverages its probabilistic structure by first sampling z from the prior p(z) = \mathcal{N}(0, I), then decoding to obtain x \sim p_\theta(x \mid z), producing novel data points that interpolate smoothly in the latent space. This has proven effective for tasks like image synthesis, where VAEs generate realistic handwritten digits from the MNIST dataset by sampling and decoding, outperforming purely deterministic methods in capturing data variability. A notable extension is the β-VAE, which scales the KL divergence term by a hyperparameter \beta > 1 in the ELBO—\mathcal{L}_{\beta} = \mathbb{E}_{q_\phi(z \mid x)} \left[ \log p_\theta(x \mid z) \right] - \beta D_{\text{KL}} \left( q_\phi(z \mid x) \Vert p(z) \right)—to enhance disentanglement of latent factors, such as separating pose from identity in facial images on datasets like CelebA, thereby improving interpretability without sacrificing reconstruction quality.
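
A minimal PyTorch sketch of these ideas follows, showing the reparameterization trick and the negative ELBO with an optional β factor; the layer sizes, BCE reconstruction term, and 20-dimensional latent space are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal Gaussian-latent VAE; layer sizes are illustrative."""
    def __init__(self, d=784, k=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, k), nn.Linear(256, k)
        self.dec = nn.Sequential(nn.Linear(k, 256), nn.ReLU(), nn.Linear(256, d), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)                   # reparameterization trick:
        z = mu + torch.exp(0.5 * logvar) * eps       # z = mu + sigma * eps, differentiable w.r.t. phi
        return self.dec(z), mu, logvar

def negative_elbo(x, x_hat, mu, logvar, beta=1.0):
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")        # reconstruction term (up to constants)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())     # KL(q(z|x) || N(0, I)) in closed form
    return recon + beta * kl                                         # beta > 1 gives the beta-VAE objective

vae = VAE()
x = torch.rand(16, 784)
x_hat, mu, logvar = vae(x)
loss = negative_elbo(x, x_hat, mu, logvar)
```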

Sparse Autoencoder

A sparse autoencoder is a variant of the autoencoder designed to learn sparse representations in the hidden layer by imposing a constraint that encourages most hidden units to remain inactive for any given input. This sparsity promotes the discovery of more selective and efficient features, preventing the network from redundantly encoding information across all hidden units. The sparsity constraint is typically enforced through a regularization term based on the Kullback-Leibler (KL) divergence between a target activation probability \rho (often set to a small value like 0.05) and the empirical average activation \hat{\rho}_j of the j-th hidden unit across the training dataset. The KL divergence for each hidden unit is formulated as \mathrm{KL}(\rho \parallel \hat{\rho}_j) = \rho \log \frac{\rho}{\hat{\rho}_j} + (1 - \rho) \log \frac{1 - \rho}{1 - \hat{\rho}_j}, which penalizes deviations from the desired low average activity, thereby driving \hat{\rho}_j toward \rho. The overall objective function incorporates this penalty into the standard reconstruction loss: J_{\text{sparse}}(W, b) = J(W, b) + \beta \sum_{j=1}^{s} \mathrm{KL}(\rho \parallel \hat{\rho}_j), where J(W, b) is the squared reconstruction error, \beta > 0 is a hyperparameter balancing reconstruction against sparsity, and s denotes the number of hidden units. This combined loss incentivizes units to activate selectively, with only a small fraction firing for typical inputs, leading to compact and non-redundant encodings. A primary advantage of sparse autoencoders lies in their support for overcomplete representations, where the hidden-layer dimensionality k exceeds the input dimensionality d (k > d), without learning the trivial identity mappings that plague standard overcomplete autoencoders. The sparsity mechanism ensures that the learned features form a parsimonious basis, enhancing interpretability and utility for tasks like feature extraction in high-dimensional data. This approach has proven effective for uncovering hierarchical structures in unlabeled datasets, as the sparse codes highlight only the most relevant patterns. Training proceeds via gradient descent with backpropagation, where the sparsity penalty is differentiated and integrated into the error signals. Specifically, the backpropagated error for the hidden layer includes an additive term \beta \left( -\frac{\rho}{\hat{\rho}_i} + \frac{1 - \rho}{1 - \hat{\rho}_i} \right) modulated by the activation function's derivative, allowing efficient joint optimization of reconstruction and sparsity objectives without requiring separate stages. In applications to natural images, sparse autoencoders trained on small patches—such as 10×10 grayscale pixels extracted from face datasets—automatically discover localized edge detectors. These features resemble Gabor filters, capturing oriented edges at diverse positions and orientations within the patch, thereby demonstrating the model's ability to learn biologically plausible visual primitives from raw, unlabeled data.
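
The following PyTorch sketch shows one way the KL sparsity penalty might be computed from batch-averaged sigmoid activations and added to a reconstruction loss; the overcomplete layer sizes, target ρ = 0.05, and weight β = 3.0 are illustrative assumptions.

```python
import torch
import torch.nn as nn

def kl_sparsity_penalty(hidden_activations, rho=0.05, eps=1e-8):
    """KL(rho || rho_hat_j) summed over hidden units; activations assumed in (0, 1), e.g. sigmoid outputs."""
    rho_hat = hidden_activations.mean(dim=0).clamp(eps, 1 - eps)   # average activation of each unit over the batch
    rho_t = torch.tensor(rho)
    return torch.sum(rho_t * torch.log(rho_t / rho_hat) +
                     (1 - rho_t) * torch.log((1 - rho_t) / (1 - rho_hat)))

# Usage sketch: overcomplete hidden layer (k = 1000 > d = 784); the penalty is added with weight beta.
encoder = nn.Sequential(nn.Linear(784, 1000), nn.Sigmoid())
decoder = nn.Linear(1000, 784)
x = torch.rand(64, 784)
h = encoder(x)
loss = torch.mean((decoder(h) - x) ** 2) + 3.0 * kl_sparsity_penalty(h, rho=0.05)
```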

Denoising Autoencoder

A denoising autoencoder is a variant of the autoencoder architecture designed to learn robust representations by reconstructing clean input from artificially corrupted versions, thereby enhancing the model's ability to ignore irrelevant noise and capture essential features. Unlike standard autoencoders, which may simply memorize inputs, leading to trivial solutions, denoising autoencoders are trained on noisy inputs to promote robustness and invariance to perturbations. This approach was introduced to extract features that remain useful even when parts of the input are missing or altered, making it particularly effective for real-world data prone to corruption. The training setup involves first applying a corruption function \tilde{x} = C(x) to the original input x, where C introduces noise, followed by minimizing the reconstruction loss between the clean x and the output of the decoder applied to the encoded noisy input, formulated as L(x, g_{\phi}(f_{\theta}(\tilde{x}))). Common noise types include additive Gaussian noise with variance \sigma^2, which perturbs each input dimension independently; masking noise, akin to dropout, that sets a fraction p of inputs to zero or a constant; and salt-and-pepper noise, which randomly flips pixels to extreme values. These corruptions prevent the network from learning identity mappings and instead force it to infer the underlying structure, using a standard reconstruction loss such as mean squared error. The purpose is to learn features invariant to such corruptions, ensuring the latent representations are robust and less sensitive to small input variations, which aids in downstream tasks like classification. Theoretically, training a denoising autoencoder can be interpreted as learning the data manifold's structure, where the reconstruction task implicitly estimates the score function of the data distribution—the gradient of the log-density—which aligns with score matching objectives under Gaussian corruption assumptions. This connection demonstrates that denoising autoencoders approximate non-parametric density estimation techniques, providing a principled way to regularize representations without explicit probabilistic modeling. Empirically, denoising autoencoders have shown improved generalization; for instance, on the MNIST dataset with masking noise, they achieve lower reconstruction errors than standard autoencoders and, when stacked, serve as effective pretraining for deep networks, reducing classification errors to around 1.2% in fine-tuned models. Similar benefits extend to more complex image datasets, where they enhance feature robustness against real-world corruptions, laying the groundwork for later self-supervised methods.
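
A minimal PyTorch sketch of the corruption-then-reconstruct setup is shown below; the corruption functions, noise levels, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def corrupt(x, noise="masking", level=0.3):
    """Illustrative corruption functions C(x): Gaussian, masking (dropout-like), or salt-and-pepper."""
    if noise == "gaussian":
        return x + level * torch.randn_like(x)
    if noise == "masking":
        return x * (torch.rand_like(x) > level).float()          # zero out a fraction `level` of inputs
    if noise == "salt_pepper":
        flip = torch.rand_like(x) < level
        return torch.where(flip, torch.randint_like(x, 0, 2).float(), x)  # flip to 0 or 1
    raise ValueError(noise)

# Training pair: the model sees the corrupted input but is penalized against the *clean* target.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
decoder = nn.Sequential(nn.Linear(128, 784), nn.Sigmoid())
x = torch.rand(32, 784)
x_tilde = corrupt(x, "masking", 0.3)
loss = F.mse_loss(decoder(encoder(x_tilde)), x)    # L(x, g(f(x_tilde)))
```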

Contractive Autoencoder

The contractive autoencoder (CAE) is a variant of the autoencoder that incorporates a regularization term to promote robustness in the learned representations by penalizing the sensitivity of the encoder to small perturbations in the input. This approach encourages the encoder to produce features that are locally invariant, thereby capturing the intrinsic geometry of the data manifold more effectively. Unlike standard autoencoders, which focus solely on reconstruction fidelity, the CAE explicitly enforces contraction of the latent space around training samples, leading to smoother mappings that preserve local geometry. The core innovation lies in the contractive penalty, defined as the squared Frobenius norm of the Jacobian of the encoder f_\theta(\mathbf{x}) with respect to the input \mathbf{x}: \|\mathbf{J}_{f_\theta}(\mathbf{x})\|_F^2 = \sum_{i=1}^d \sum_{j=1}^h \left( \frac{\partial f_{\theta,j}(\mathbf{x})}{\partial x_i} \right)^2, where d is the input dimensionality, h is the latent dimensionality, and the Jacobian \mathbf{J}_{f_\theta}(\mathbf{x}) measures the local linearity of the transformation. The total loss combines the standard reconstruction error with this penalty, weighted by a hyperparameter \lambda: \mathcal{L}(\theta, \mathcal{D}) = \sum_{\mathbf{x} \in \mathcal{D}} \| \mathbf{x} - g_\phi(f_\theta(\mathbf{x})) \|^2 + \lambda \sum_{\mathbf{x} \in \mathcal{D}} \|\mathbf{J}_{f_\theta}(\mathbf{x})\|_F^2, where g_\phi is the decoder. This formulation incentivizes small latent-space changes near the data manifold, ensuring that nearby inputs are mapped to nearby latent points while distant points remain separated. Computing the Jacobian is typically achieved through automatic differentiation during backpropagation, which efficiently calculates the required partial derivatives; alternatively, finite differences can approximate it for validation, though this is computationally intensive for high dimensions. The resulting representations exhibit enhanced robustness to input deformations, such as translations or noise, without explicitly corrupting the inputs during training—differing from denoising autoencoders, which achieve similar goals through stochastic perturbation. In semi-supervised learning scenarios, these robust features preserve discriminative structure, enabling better generalization when only limited labels are available. From a manifold learning perspective, the CAE aligns the latent space as a tangent approximation to the data manifold, with the contractive penalty enforcing locally isometric mappings around samples, thus avoiding distortions that plague standard autoencoders. Experiments on datasets like MNIST demonstrate this advantage: when using k-means clustering on the learned features, CAEs achieve error rates around 1.5-2% lower than those of vanilla autoencoders, highlighting improved separability and clustering performance. Compared to variational autoencoders, which minimize probabilistic divergences for generative purposes, CAEs emphasize deterministic contraction for feature robustness.
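
For a single sigmoid encoder layer, the Jacobian's squared Frobenius norm has the simple closed form \sum_j [h_j(1-h_j)]^2 \sum_i W_{ji}^2, which the PyTorch sketch below uses; the layer sizes and λ = 0.1 are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Contractive-penalty sketch for a single sigmoid encoder layer; sizes (100 latent, 784 input) are illustrative.
W = nn.Parameter(torch.randn(100, 784) * 0.01)   # encoder weights
b = nn.Parameter(torch.zeros(100))

def encode(x):
    return torch.sigmoid(x @ W.T + b)

def contractive_penalty(x):
    h = encode(x)                                 # (batch, 100)
    dh = (h * (1 - h)) ** 2                       # squared sigmoid derivative per hidden unit
    w_sq = (W ** 2).sum(dim=1)                    # sum_i W_ji^2 for each hidden unit j
    return torch.sum(dh * w_sq) / x.shape[0]      # batch-averaged ||J_f(x)||_F^2

x = torch.rand(32, 784)
decoder = nn.Linear(100, 784)
recon = torch.sigmoid(decoder(encode(x)))
loss = torch.mean((recon - x) ** 2) + 0.1 * contractive_penalty(x)   # lambda = 0.1 is illustrative
```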

Deep and Advanced Autoencoders

Advantages of Depth

Deep autoencoders enable hierarchical feature learning, where initial layers capture low-level patterns such as edges and textures in input data, while subsequent layers progressively abstract higher-level concepts like object parts or entire objects. This layered structure allows the model to build increasingly complex representations, mimicking the hierarchical processing observed in biological vision systems and improving the quality of learned features for tasks involving high-dimensional data. Compared to shallow autoencoders or linear methods like principal component analysis (PCA), deeper architectures facilitate improved data compression through nonlinear disentanglement in the bottleneck layer, capturing intricate dependencies that linear projections cannot. For instance, deep autoencoders can encode manifold-structured data into lower-dimensional spaces that preserve essential nonlinear relationships, leading to more efficient representations beyond the orthogonal constraints of PCA. Empirical evidence demonstrates these benefits on high-dimensional datasets such as the MNIST handwritten digits, where a deep autoencoder with hidden layers of 1,000, 500, and 250 neurons and a 30-dimensional bottleneck achieves a lower reconstruction error (average squared error of 3.00) than PCA (13.87). Similar improvements are observed on other high-dimensional datasets, where deep models reduce errors by enabling better generalization to unseen data. Theoretically, depth provides advantages in universal approximation capabilities for manifold learning, as deep networks can compose simpler functions to approximate complex, low-dimensional manifolds embedded in high-dimensional spaces with fewer parameters than shallow networks, which may require far greater width. This efficiency arises from the compositional nature of deep architectures, which exploit hierarchical structures in data more effectively than flat models. While depth increases the total parameter count—potentially leading to overfitting—these challenges are mitigated through techniques like weight tying between encoder and decoder layers, which halves the parameters in symmetric autoencoders, and additional regularization such as sparsity constraints to promote efficient learning.
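
The sketch below illustrates weight tying in PyTorch for a single-layer autoencoder, where the decoder reuses the transposed encoder weight matrix; the dimensions and Xavier initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedAutoencoder(nn.Module):
    """Single-layer autoencoder whose decoder reuses the transposed encoder weights (weight tying)."""
    def __init__(self, d=784, k=30):
        super().__init__()
        self.W = nn.Parameter(torch.empty(k, d))
        nn.init.xavier_uniform_(self.W)                  # Xavier/Glorot initialization
        self.b_enc = nn.Parameter(torch.zeros(k))
        self.b_dec = nn.Parameter(torch.zeros(d))

    def forward(self, x):
        z = torch.relu(F.linear(x, self.W, self.b_enc))              # encoder: x W^T + b_enc
        x_hat = torch.sigmoid(F.linear(z, self.W.t(), self.b_dec))   # decoder reuses W, halving parameters
        return x_hat, z

model = TiedAutoencoder()
x = torch.rand(8, 784)
x_hat, z = model(x)
```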

Training Deep Autoencoders

Training deep autoencoders presents significant optimization challenges due to the non-convex nature of the loss landscape and the tendency to converge to poor local minima when weights are initialized randomly, as demonstrated in early experiments showing that deeper networks without proper initialization underperform shallower ones. To address this, a key strategy is layer-wise pretraining, which involves greedily training each layer of a stacked autoencoder in an unsupervised manner before fine-tuning the entire network. In this approach, the first layer is trained as a single autoencoder to reconstruct the input data, and its encoded representations then serve as inputs for training the next layer, progressively building deeper hierarchies; once all layers are pretrained, the weights are untied for the decoder and the full network is fine-tuned using backpropagation to minimize reconstruction error. This method, popularized in seminal work on dimensionality reduction, enables effective learning of low-dimensional codes in high-dimensional data like images, where a four-layer autoencoder reduced MNIST digits to 30 dimensions with lower reconstruction error than principal component analysis. Alternative pretraining strategies, such as using denoising or sparse autoencoders, further improve initialization by encouraging robust and efficient representations that avoid poor local minima. Denoising pretraining corrupts the input (e.g., by adding noise) and trains each layer to reconstruct the clean version, stacking these layers to form a deep network that learns invariant features; this approach has been shown to yield better generalization in deep architectures compared to standard autoencoders. Similarly, sparse pretraining imposes a penalty, such as a Kullback-Leibler divergence on hidden unit activations, to promote sparsity in the latent representations, facilitating the discovery of more selective and interpretable features during layer-wise training. To ensure stable training of deep autoencoders, several optimization tweaks are commonly applied, including the use of smaller learning rates to prevent overshooting during gradient descent and the incorporation of batch normalization to normalize layer inputs, reducing internal covariate shift and allowing higher learning rates without divergence. Residual connections, where layers learn residual functions added to the input, can also be integrated into autoencoder architectures to mitigate vanishing-gradient problems in very deep networks, enabling training of stacks with dozens of layers. A primary challenge in training deep autoencoders is vanishing or exploding gradients during backpropagation, which hinder effective weight updates in deeper layers and lead to stalled learning. These issues are often alleviated by employing rectified linear unit (ReLU) activations, which introduce non-saturating non-linearities to maintain gradient flow, outperforming sigmoid activations in deep networks. Additionally, proper weight initialization schemes like Xavier (Glorot) initialization scale initial weights based on the number of input and output units to keep variances consistent across layers, reducing the risk of gradients vanishing early in training. Evaluation of trained deep autoencoders typically focuses on reconstruction error, measured as the mean squared error between input and output, to quantify fidelity of the learned representations, with lower errors indicating better compression without loss of essential structure. 
Complementary qualitative assessment involves visualizing the latent space using techniques like t-SNE, which projects high-dimensional encodings into two dimensions to reveal clustering and manifold structure in the data.
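
A simplified PyTorch sketch of greedy layer-wise pretraining follows; the layer sizes, epoch counts, and random placeholder data are illustrative assumptions, and in practice the stack would be fine-tuned end to end with a mirrored decoder afterward.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pretrain_layer(inputs, in_dim, out_dim, epochs=10, lr=1e-3):
    """Greedily train one shallow autoencoder layer and return its encoder plus the encoded data."""
    enc = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
    dec = nn.Linear(out_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.mse_loss(dec(enc(inputs)), inputs)
        loss.backward()
        opt.step()
    return enc, enc(inputs).detach()            # detach so the next layer trains on fixed codes

# Stack layers greedily (sizes are illustrative), then fine-tune the whole stack end to end.
data = torch.rand(512, 784)
sizes = [784, 256, 64, 30]
encoders, h = [], data
for in_dim, out_dim in zip(sizes[:-1], sizes[1:]):
    enc, h = pretrain_layer(h, in_dim, out_dim)
    encoders.append(enc)
deep_encoder = nn.Sequential(*encoders)         # fine-tune with a mirrored decoder and backpropagation
```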

Other Specialized Variants

The minimum description length autoencoder (MDL-AE) extends traditional autoencoders by incorporating information-theoretic principles to optimize data compression. It approximates the minimum description length objective through a bits-back argument, where the total description length balances the cost of encoding the model parameters and the reconstruction error, effectively minimizing the expected bits required to describe the data. This approach, introduced by Hinton and Zemel, enables the autoencoder to learn more efficient representations by accounting for both compression of the data and the overhead of transmitting model details. The concrete autoencoder employs the concrete distribution, a continuous relaxation of discrete random variables via the Gumbel-softmax trick, to handle categorical latent representations in a differentiable manner. This allows backpropagation through discrete choices, addressing the non-differentiability issue in standard autoencoders with discrete latents and enabling end-to-end training for tasks like feature selection or sparse coding. As proposed in foundational work on categorical reparameterization, this variant facilitates learning of interpretable, discrete codes while maintaining gradient flow. Applications include unsupervised hyperspectral band selection, where it outperforms traditional methods by selecting informative features with lower reconstruction loss. Extensions include the vector quantized variational autoencoder (VQ-VAE), which introduces a discrete codebook to quantize continuous latent vectors, producing symbolic representations suitable for generative modeling. By replacing the continuous posterior with nearest-neighbor assignment in a learned embedding space, VQ-VAE mitigates posterior collapse in VAEs and supports hierarchical structures for scalable generation, as demonstrated in high-fidelity image synthesis. Flow-based autoencoders integrate normalizing flows into the latent space to model invertible, bijective transformations, ensuring exact likelihood computation and reversible mappings for precise density estimation. This addresses the limited expressiveness of Gaussian assumptions, with applications in anomaly detection showing superior performance over standard autoencoders on medical images. Equity-focused variants, such as the variational fair autoencoder (VFAE), constrain the latent representation to be invariant to sensitive attributes like demographics, promoting fair representations that reduce bias in downstream tasks. By retaining information relevant to the prediction target while minimizing dependence on protected variables via kernel mean discrepancies, VFAE achieves demographic invariance without sacrificing predictive accuracy, outperforming unconstrained VAEs in fairness metrics on datasets like credit scoring. Transformer-based autoencoders adapt self-attention mechanisms for sequential data, capturing long-range dependencies in time series or text more effectively than convolutional or recurrent architectures. Recent implementations, like masked autoencoders, pretrain on masked sequences to learn robust embeddings, with applications in signal processing for communications. These specialized variants collectively overcome key limitations of standard autoencoders: MDL-AE and flow-based models enhance efficiency and invertibility for better density modeling; concrete autoencoders and VQ-VAE tackle discrete latents' non-differentiability through relaxation and quantization; fair variants like VFAE address ethical constraints; and transformer integrations boost scalability for high-dimensional sequences. 
Compared to foundational types, they prioritize niche requirements like interpretability or bias mitigation, often at the cost of added computational overhead, but yield higher impact in domains such as fairness-aware machine learning and sequential data processing.
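
As an example of how one of these variants departs from the standard formulation, the PyTorch sketch below shows a VQ-VAE-style quantization step with nearest-neighbor codebook lookup and a straight-through gradient estimator; the codebook size and latent dimensionality are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the VQ-VAE quantization step: each continuous latent vector is replaced by its nearest
# codebook entry, with a straight-through estimator so gradients still reach the encoder.
codebook = nn.Embedding(num_embeddings=512, embedding_dim=64)   # learned discrete codebook (sizes illustrative)

def quantize(z_e):
    # z_e: (batch, 64) continuous encoder outputs
    distances = torch.cdist(z_e, codebook.weight)               # pairwise distances to all codebook vectors
    indices = distances.argmin(dim=1)                           # nearest-neighbor assignment (discrete code)
    z_q = codebook(indices)
    z_q = z_e + (z_q - z_e).detach()                            # straight-through: copy decoder grads to encoder
    return z_q, indices

z_e = torch.randn(16, 64)
z_q, codes = quantize(z_e)
```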

Historical Development

Early Concepts

The roots of autoencoders trace back to the 1980s, emerging from advancements in neural network research aimed at unsupervised learning and representation discovery. A pivotal development was the introduction of backpropagation by Rumelhart, Hinton, and Williams in 1986, which provided an efficient algorithm for training multilayer networks by propagating errors backward through the layers. This technique enabled the optimization of autoencoder architectures, where the network learns to map inputs to outputs that approximate the original data, fostering internal representations useful for tasks like dimensionality reduction. Concurrently, Yann LeCun's early work in the late 1980s on convolutional neural networks laid groundwork for specialized autoencoder variants, incorporating convolutional layers to handle spatial data such as images while leveraging backpropagation for training. Preceding formal autoencoder concepts were related models emphasizing probabilistic and self-organizing learning. Boltzmann machines, proposed by Ackley, Hinton, and Sejnowski in 1985, offered a framework for learning probability distributions over data, allowing networks to reconstruct inputs through energy-based minimization, which influenced later generative aspects of autoencoders. Similarly, adaptive resonance theory (ART), developed by Carpenter and Grossberg in the mid-1980s, introduced self-organizing mechanisms for stable category learning in response to sequential inputs, addressing the stability-plasticity dilemma in unsupervised settings and inspiring robust feature extraction in autoencoders. Initial motivations for autoencoders centered on enhancing fault tolerance in computing systems and achieving efficient data compression within neural architectures. In single-layer networks, autoassociative mappings were explored to correct errors in noisy inputs, drawing from associative memory principles to maintain functionality under component failures. For dimensionality reduction, these models sought to learn compact encodings that preserved essential information, analogous to principal component analysis but adaptable via gradient-based learning. A seminal formalization came in 1988 with Bourlard and Kamp's analysis of autoassociative multilayer perceptrons, demonstrating that such networks could perform dimensionality reduction equivalent to singular value decomposition under linear output constraints, thus establishing a theoretical foundation for their use in representation learning. By the 1990s, however, a key limitation hindered progress: the difficulty of training deep networks due to vanishing gradients during backpropagation, which caused weight updates to diminish in deeper layers and led to poor convergence. This challenge confined autoencoders predominantly to shallow architectures, emphasizing single-hidden-layer designs for practical applications until subsequent breakthroughs revived deeper variants.

Key Milestones and Evolution

The revival of interest in deep architectures for autoencoders began in 2006 with Geoffrey Hinton and colleagues' introduction of deep belief networks (DBNs), which employed stacked restricted Boltzmann machines (RBMs) as a foundational approach to learning hierarchical representations, serving as a key precursor to modern deep autoencoders by enabling unsupervised pretraining of multilayer networks. This work addressed the challenges of training deep networks through layer-wise greedy learning, laying the groundwork for subsequent autoencoder developments in the deep learning era. Significant advancements followed, including the 2008 proposal of denoising autoencoders by Pascal Vincent et al., which enhanced robustness by training models to reconstruct clean inputs from corrupted versions, improving feature extraction for downstream tasks. Building on this, Diederik Kingma and Max Welling's 2013 variational autoencoder (VAE) framework integrated probabilistic latent variables with autoencoding, enabling generative capabilities and stable training via variational inference, which marked a shift toward probabilistic modeling in autoencoders. From 2014 onward, autoencoders saw integrations with generative adversarial networks (GANs), exemplified by Anders Boesen Lindbo Larsen et al.'s 2016 VAE-GAN hybrid, which combined VAEs' structured latent spaces with GANs' adversarial training to produce sharper, more realistic generations while mitigating mode collapse. Concurrently, autoencoders gained prominence in self-supervised learning for pretraining, as seen in frameworks leveraging reconstruction objectives to learn transferable representations without labels. Recent developments from 2020 to 2025 have further expanded autoencoders' scope, incorporating them into diffusion models for efficient latent-space generation, as in Robin Rombach et al.'s 2022 latent diffusion models that use VAEs to compress images into low-dimensional spaces for high-fidelity generation. In vision transformers, Kaiming He et al.'s 2021 masked autoencoder (MAE) demonstrated scalable self-supervised pretraining by reconstructing masked image patches, achieving state-of-the-art performance on downstream vision tasks. Autoencoders have also scaled to billion-parameter models, such as the VideoMAE framework by Limin Wang et al. in 2023, which trains massive masked autoencoders for video understanding with dual masking strategies. In 2024, sparse autoencoders gained attention for mechanistic interpretability of large language models, with works like Anthropic's scaling of monosemantic features and OpenAI's evaluations enabling the extraction of human-interpretable concepts from model activations to aid interpretability research. These evolutions reflect a broader shift from pure reconstruction tasks to variants that fuse modalities like text and images, as in multimodal masked autoencoders, and federated adaptations that enable privacy-preserving training across distributed devices.

Applications

Dimensionality Reduction

Autoencoders serve as a powerful tool for dimensionality reduction by learning a compressed representation of high-dimensional data through an unsupervised training process that minimizes the reconstruction error between the input and the output. The network consists of an encoder that maps the input \mathbf{x} \in \mathbb{R}^d to a lower-dimensional latent representation \mathbf{z} \in \mathbb{R}^k where k < d, followed by a decoder that reconstructs the input as \hat{\mathbf{x}}. Training typically employs backpropagation to optimize a loss function such as the mean squared error, \mathcal{L} = \|\mathbf{x} - \hat{\mathbf{x}}\|^2, enabling the encoder to project data into \mathbf{z} for tasks like visualization or clustering while preserving essential structure. This process is particularly effective in deep architectures, where multiple layers allow for hierarchical feature learning, outperforming shallower models in capturing complex patterns. Unlike linear methods such as principal component analysis (PCA), autoencoders can model nonlinear relationships, enabling them to unfold curved manifolds in the data. For instance, on the Swiss roll dataset—a 3D point cloud embedded on a 2D helical surface—PCA projects points along straight lines, distorting the intrinsic geometry, whereas an autoencoder can learn a nonlinear mapping to recover the underlying 2D structure without folding artifacts. This nonlinear capability arises from the neural network's layered transformations, which approximate complex functions that linear projections cannot. In undercomplete setups, where the latent dimension k is strictly less than the input dimension d, the bottleneck enforces compression, compelling the model to prioritize salient features and discard noise. Overcomplete configurations (k > d) risk learning trivial identity mappings, but regularization techniques—such as sparsity penalties on the latent activations—prevent representational collapse and promote meaningful reductions. The quality of dimensionality reduction achieved by autoencoders is evaluated using metrics that assess both reconstruction fidelity and embedding preservation. Explained variance, analogous to the variance ratio reported for PCA, quantifies the proportion of input variability captured in the latent representation, computed as 1 - \frac{\text{reconstruction error}}{\text{total variance}}, providing a measure of information retention. Trustworthiness evaluates how well local neighborhoods in the high-dimensional space are preserved in the low-dimensional embedding, penalizing false neighbors introduced by the projection; scores range from 0 to 1, with higher values indicating better neighborhood fidelity. A representative application is reducing the 784-dimensional MNIST images to a two-dimensional latent space, yielding scatter plots where classes form distinct, nonlinear clusters akin to t-SNE visualizations, with reconstruction errors significantly lower than those from PCA.
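
The sketch below shows how such metrics might be computed with scikit-learn; PCA stands in for a trained encoder so the snippet runs on its own, and the synthetic data, embedding dimension, and neighborhood size are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))                   # synthetic high-dimensional data

# PCA stands in for a trained encoder/decoder; replace Z with encoder(X) from an autoencoder.
pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)                              # 2-D embedding
X_hat = pca.inverse_transform(Z)                  # reconstruction from the embedding

explained = 1 - np.mean((X - X_hat) ** 2) / np.var(X)   # explained-variance-style retention score
neighborhood = trustworthiness(X, Z, n_neighbors=10)    # 0..1, higher = neighborhoods better preserved
print(explained, neighborhood)
```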

Anomaly Detection

Autoencoders are widely used for anomaly detection by training on normal data to learn a compact representation, then identifying outliers based on their reconstruction errors. The core method involves feeding input data x through the autoencoder A to obtain the reconstructed output \hat{x} = A(x), and computing the reconstruction loss L(x, \hat{x}), typically mean squared error. Data points with L(x, \hat{x}) exceeding a predefined threshold are classified as anomalies; a standard threshold is set at the mean reconstruction error plus three standard deviations (3σ) from the training set's error distribution. This approach leverages the autoencoder's tendency to reconstruct normal patterns accurately while struggling with novel or aberrant inputs. Specialized variants adapt autoencoders for improved separation in one-class settings. Variational autoencoders (VAEs) extend the framework by modeling data through a probabilistic latent distribution, using reconstruction probability as the anomaly score to capture uncertainty and enhance discrimination over deterministic errors. Sparse autoencoders incorporate L1 regularization on hidden activations to promote sparsity, emphasizing key features and reducing noise sensitivity in high-dimensional inputs. These tweaks make one-class autoencoders particularly effective when only normal samples are available for training. Key advantages include the unsupervised learning paradigm, which requires no labeled anomalies, and robustness to high-dimensional data without assuming specific anomaly distributions or shapes. Unlike linear methods, autoencoders capture nonlinear manifolds, enabling detection of subtle deviations in complex datasets. Performance is assessed using metrics like area under the ROC curve (AUC-ROC) and precision-recall curves, which handle the class imbalance common in anomaly tasks. On the KDD Cup 99 dataset for network intrusion detection, VAEs trained on normal traffic yield AUC-ROC scores of 0.777 for remote-to-local attacks and up to 0.970 for probe attacks. Real-world deployments include fraud detection, where autoencoders flag atypical transactions via reconstruction discrepancies in anonymized feature spaces. In predictive maintenance, they monitor equipment sensors to forecast failures by detecting early anomalous vibrations or temperatures in industrial systems.
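
A minimal NumPy sketch of the 3σ thresholding rule follows; the synthetic error values stand in for per-sample reconstruction errors produced by a trained autoencoder.

```python
import numpy as np

# Threshold reconstruction errors at mean + 3 standard deviations of the errors on normal training data.
train_errors = np.random.default_rng(0).gamma(shape=2.0, scale=0.01, size=5000)  # errors on normal data
threshold = train_errors.mean() + 3 * train_errors.std()

test_errors = np.array([0.01, 0.03, 0.25])       # reconstruction errors of new samples
is_anomaly = test_errors > threshold
print(threshold, is_anomaly)                     # the large-error sample (0.25) is flagged as anomalous
```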

Image and Signal Processing

Autoencoders have been extensively applied to image denoising, where convolutional architectures learn to reconstruct clean images from noisy inputs. A prominent example is the Denoising Convolutional Neural Network (DnCNN), which uses residual learning to suppress Gaussian and speckle noise by estimating the noise residual rather than the clean signal directly, achieving superior performance over traditional methods like BM3D on datasets such as BSD500. This approach leverages the autoencoder's encoder-decoder structure to capture hierarchical features, enabling effective noise removal while preserving image details. In image super-resolution, autoencoders facilitate the upscaling of low-resolution images by learning mappings to higher-resolution outputs, often outperforming classical techniques in terms of perceptual quality and peak signal-to-noise ratio (PSNR). Variational autoencoders (VAEs) have been particularly effective, as demonstrated in models that generate photo-realistic super-resolved images by modeling probabilistic distributions in the latent space, with reported improvements of up to 1-2 dB in PSNR on benchmarks like Set5 and Set14 over earlier approaches. These methods train on paired low- and high-resolution data, allowing the decoder to synthesize fine details from compressed latent representations. Learned image compression employs autoencoder-based codecs that optimize rate-distortion trade-offs, serving as alternatives to standards like JPEG by jointly learning quantization and transformation in an end-to-end manner. The autoencoder framework with a scale hyperprior, for instance, achieves compression rates competitive with BPG while maintaining better visual fidelity, as evidenced by BD-rate savings of approximately 15-25% over JPEG2000 on the Kodak dataset. This involves a bottleneck layer that enforces quantization, enabling scalable bit-rate control for practical deployment. For sequential signals, recurrent autoencoders extend these principles, as in audio denoising, where long short-term memory (LSTM) units in the architecture handle temporal dependencies to reconstruct clean spectrograms from noisy ones. In electrocardiogram (ECG) analysis, recurrent variants detect anomalies by reconstructing normal signals and flagging high reconstruction errors, achieving F1-scores above 0.90 on datasets like ECG5000. Additional applications include inpainting of missing pixels via context-aware autoencoders that fill gaps based on surrounding structures, as in context encoder models trained adversarially for semantic coherence, and style transfer through latent-space manipulation, where swapping autoencoders disentangle content and style codes to apply artistic transformations without retraining.
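
The following PyTorch sketch outlines a small convolutional denoising autoencoder for single-channel 28×28 images; the channel counts, strides, and Gaussian noise level are illustrative assumptions rather than any of the published architectures mentioned above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal convolutional denoising-autoencoder sketch for 1-channel 28x28 images.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),    # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),   # 14x14 -> 7x7
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),    # 7x7 -> 14x14
    nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1), nn.Sigmoid(),  # 14x14 -> 28x28
)

x = torch.rand(8, 1, 28, 28)                       # clean images in [0, 1]
noisy = (x + 0.2 * torch.randn_like(x)).clamp(0, 1)
loss = F.mse_loss(decoder(encoder(noisy)), x)      # reconstruct the clean image from the noisy input
```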

Other Domains

Autoencoders have been applied in information retrieval to enhance latent semantic indexing, enabling more effective query-document matching by learning compact representations of textual data that capture underlying semantic relationships. In this context, autoencoders compress high-dimensional term-document matrices into lower-dimensional latent spaces, outperforming traditional methods like latent semantic analysis in handling sparsity and noise for large-scale retrieval tasks. For instance, sparse autoencoders have been used to disentangle dense embeddings from retrieval models, improving interpretability and efficiency for applications such as search engine optimization (SEO), where query relevance is predicted based on latent features. In drug discovery, autoencoders facilitate molecular fingerprint compression, allowing efficient screening of vast chemical libraries such as the ChEMBL dataset to identify potential drug candidates. By encoding binary fingerprints—such as ECFP descriptors—into reduced latent vectors, these models preserve key pharmacophoric features while reducing storage and computational demands, enabling faster similarity searches and generative sampling of novel molecules. Variational autoencoders, in particular, have demonstrated utility in mapping molecular structures to latent spaces for de novo design, with compression ratios up to 90% maintaining high performance in downstream tasks like bioactivity prediction. Sequence autoencoders serve as foundational encoders in early encoder-decoder architectures for machine translation, particularly in recurrent neural machine translation models that handled variable-length inputs before the advent of transformer-based systems. These autoencoders learn to map source sequences into fixed-length latent representations, which are then decoded to generate target sentences, addressing challenges in alignment and context preservation. A notable application involves variational autoencoders integrated into bilingual sentence-pair modeling, enhancing translation quality by modeling probabilistic latent distributions that capture syntactic and semantic nuances across languages. In communication systems, autoencoders enable channel denoising for modulated signals, learning to reconstruct clean transmissions from noisy inputs affected by fading or interference in environments like wireless networks. By training end-to-end, these models jointly optimize encoding and decoding to minimize bit error rates, often outperforming traditional linear equalizers in low signal-to-noise-ratio conditions. Additionally, autoencoders have been employed for error correction codes, where they discover nonlinear codes that approach theoretical limits, as demonstrated in simulations over noisy channels. In interpretability research, sparse autoencoders have been used to decompose activations in large language models into interpretable features, aiding in understanding model behavior. Beyond these, autoencoders support popularity prediction through user behavior embeddings, compressing sequential interaction histories—such as clicks or views—into low-dimensional vectors that forecast content virality on social media platforms. This approach captures temporal patterns in user actions, enabling engagement predictions with improved accuracy over baseline models. Furthermore, federated autoencoders enhance privacy in distributed settings by training latent representations across devices without sharing raw data, as seen in vertical federated learning frameworks that partition autoencoder components to safeguard sensitive information while aggregating global updates.
