
Texture synthesis

Texture synthesis is a computational technique in computer graphics and image processing used to generate large, realistic images or patterns from a small input sample, ensuring the output appears statistically similar to the input by replicating its structural and visual properties without visible repetition or artifacts. The process typically involves analyzing the sample's local neighborhoods—such as patterns or patches—and iteratively synthesizing new content that matches these characteristics, often producing seamless, tileable results of arbitrary size. This method draws on statistical texture modeling to mimic natural pattern formation, distinguishing it from procedural texture generation by relying on example-based guidance rather than predefined rules.

Developed primarily in the field of computer graphics since the 1980s, texture synthesis addresses the challenge of creating diverse, realistic surface details for virtual environments, evolving from early statistical and syntactic models to more efficient example-based algorithms in the 1990s and 2000s, and further advanced by deep learning techniques since the 2010s. Pioneering work in the mid-1980s surveyed approaches such as growth models and stochastic simulations, highlighting their potential for enhancing visual realism despite initial computational limitations. The technique gained prominence in the late 1990s with Markov random field-based methods, which simplified synthesis by growing textures pixel-by-pixel from a seed, followed by breakthroughs in the early 2000s that enabled faster, higher-quality outputs through optimized neighborhood matching.

Key approaches to texture synthesis include pixel-based methods, which build images sequentially by selecting pixels with compatible neighborhoods; patch-based techniques, which copy and blend larger texture fragments for efficiency; parametric models, which extract statistical features like color histograms or filter responses to guide generation; and deep learning-based methods using convolutional neural networks and generative adversarial networks, which have gained prominence since the 2010s. These methods, exemplified by algorithms such as Efros-Leung (1999) for pixel-based growth and Wei-Levoy (2000) for accelerated neighborhood matching, balance quality and speed while handling both stochastic and near-regular textures. More advanced variants incorporate optimization frameworks to minimize inconsistencies or adapt synthesis to surfaces such as 3D models.

In applications, texture synthesis is widely used for rendering lifelike materials in animations and games—such as terrain, fabrics, or skin—and extends to tasks like filling holes in images (inpainting), super-resolution enhancement, and generating dynamic video textures. It also supports geometry-aware synthesis for texturing complex 3D objects and has influenced tools in image-editing software for efficient, non-repetitive visuals.

Fundamentals

Definition and Overview

Texture synthesis is the process of generating large, seamless digital images that mimic the visual appearance of a given small sample texture by analyzing and replicating its statistical properties. This technique enables the creation of arbitrarily sized textures that are tileable without visible seams or artifacts, making it a fundamental tool in computer graphics for enhancing realism in rendered scenes. Early approaches, such as pyramid-based methods, focus on matching histogram statistics across multiple scales to preserve the perceptual qualities of the input sample. More recent example-based methods emphasize capturing the local structure to produce outputs that appear to be generated by the same underlying process as the exemplar.

At its core, texture synthesis relies on non-parametric sampling and statistical matching of local patterns, treating textures as stationary processes whose neighborhood statistics remain consistent across positions. This involves modeling textures using frameworks like Markov random fields (MRFs), where the goal is to ensure that each pixel in the output has a spatial neighborhood similar to at least one in the input, thereby producing visually plausible but novel variations. Such principles allow for the replication of repetitive patterns and randomness without requiring explicit parametric models of the texture's generation process.

The basic workflow begins with analyzing the input sample to extract relevant features, such as local neighborhoods or frequency distributions, followed by iterative output generation through sampling or optimization techniques that enforce spatial continuity. This process grows the synthesized texture outward from an initial seed, prioritizing pixels at boundaries to avoid discontinuities. The result is a larger texture that maintains the input's perceptual qualities while extending beyond the original sample's dimensions.

Unlike general image generation, which often aims to produce semantically meaningful scenes with global structure, texture synthesis specifically targets repetitive, pattern-based content lacking inherent semantics, focusing instead on local stationarity and perceptual similarity. This distinction ensures that synthesized outputs exhibit uniformity in statistical properties across the image, making them suitable for applications like surface detailing rather than composing complex visual narratives.

Textures in Computer Graphics

In computer graphics, textures are digital images or patterns applied to surfaces to simulate material properties and enhance visual realism. They are broadly classified into three types: stochastic textures, which exhibit random variations such as noise or clouds; structured textures, featuring regular, repeating patterns like bricks or tiles; and hybrid textures, which combine elements of both, such as a patterned fabric with subtle random imperfections.

Textures play a foundational role in rendering by being mapped onto three-dimensional surfaces, allowing complex details to be added efficiently without increasing geometric complexity. This mapping process projects the 2D texture coordinates onto 3D models, enabling realistic depictions of surfaces like wood grain or stone. To handle variations in viewing distance and resolution, mipmapping precomputes scaled versions of the texture in a pyramid structure, selecting appropriate levels during rendering to maintain quality across scales. Additionally, filtering techniques, such as bilinear or trilinear interpolation, are applied to smooth transitions between texels and prevent artifacts like aliasing or moiré patterns.

Key properties of textures in computer graphics include periodicity, which refers to the repeating nature of patterns in structured types; isotropy, where the texture appears statistically uniform in all directions, as in many stochastic examples; and anisotropy, where directional variations occur, such as in brushed metal or hair. Scale invariance is another important attribute, ensuring the texture maintains perceptual consistency when resized or viewed from different distances, which is crucial for effective mipmapping. These properties influence how textures interact with lighting and filtering in rendering pipelines.

Mathematically, textures are represented as functions f(x, y) that assign color or intensity values to points on a surface, often parameterized over UV coordinates. Autocorrelation analysis of these functions quantifies the regularity and repetition within the texture, aiding the evaluation of coherence and supporting perceptual analysis in applications.
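
The filtering step can be made concrete with a small sketch. The following Python function, a minimal illustration rather than production code, performs bilinear interpolation on a grayscale texture stored as a NumPy array, using wrap-around addressing so the texture tiles; the function name and array layout are assumptions for this example.

```python
import numpy as np

def sample_bilinear(texture: np.ndarray, u: float, v: float) -> float:
    """Sample a grayscale texture at continuous texel coordinates (u, v)."""
    h, w = texture.shape
    x0 = int(np.floor(u)) % w
    y0 = int(np.floor(v)) % h
    x1, y1 = (x0 + 1) % w, (y0 + 1) % h          # wrap addressing for tiling
    fx, fy = u - np.floor(u), v - np.floor(v)    # fractional offsets in [0, 1)
    top = (1 - fx) * texture[y0, x0] + fx * texture[y0, x1]
    bot = (1 - fx) * texture[y1, x0] + fx * texture[y1, x1]
    return (1 - fy) * top + fy * bot             # blend the two interpolated rows
```

Trilinear filtering extends the same idea by additionally interpolating between two adjacent mipmap levels.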

Contrast with Procedural Textures

Procedural textures are generated algorithmically using mathematical functions to simulate natural phenomena, such as Perlin noise for creating cloud-like or marble patterns, allowing for parametric control and infinite scalability without requiring input samples. These textures are rule-based and data-independent, enabling compact storage and rapid evaluation at any resolution, which makes them suitable for real-time applications in computer graphics. In contrast, texture synthesis employs non-parametric methods that learn statistical properties from an input sample image to generate larger textures, capturing complex, irregular patterns that are difficult to define through explicit rules. While procedural approaches rely on predefined functions for consistency and editability, synthesis is data-driven, prioritizing fidelity to real-world exemplars over manual parameterization.

A primary advantage of texture synthesis lies in its ability to replicate intricate, non-repetitive details like the organic variations found in natural materials, which procedural methods often simplify or fail to render realistically due to their reliance on generic functions that introduce unnatural symmetries. This makes synthesis particularly effective for photorealistic natural textures that evade straightforward mathematical description. However, texture synthesis can suffer from visible seams at boundaries or high computational demands during generation, whereas procedural textures offer faster performance and seamless tiling for simpler, structured cases without preprocessing. For instance, a fractal-based procedural texture might efficiently generate uniform noise patterns, but synthesizing grass from a sample better preserves the irregularity and directional growth seen in real foliage.
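
To make the contrast concrete, the sketch below generates a rule-based procedural texture using fractal value noise, a simpler cousin of Perlin noise: it needs no input sample, is controlled entirely by parameters (octaves, seed), and can be evaluated at any resolution. All names and parameter choices are illustrative.

```python
import numpy as np

def value_noise(size: int, octaves: int = 4, seed: int = 0) -> np.ndarray:
    """Fractal value noise: bilinearly interpolated random lattices, summed over octaves."""
    rng = np.random.default_rng(seed)
    out = np.zeros((size, size))
    amplitude = 1.0
    for o in range(octaves):
        cells = 2 ** (o + 2)                        # lattice resolution per octave
        lattice = rng.random((cells + 1, cells + 1))
        ys, xs = np.mgrid[0:size, 0:size] * (cells / size)
        x0, y0 = xs.astype(int), ys.astype(int)
        fx, fy = xs - x0, ys - y0
        # bilinear interpolation of the lattice values
        top = (1 - fx) * lattice[y0, x0] + fx * lattice[y0, x0 + 1]
        bot = (1 - fx) * lattice[y0 + 1, x0] + fx * lattice[y0 + 1, x0 + 1]
        out += amplitude * ((1 - fy) * top + fy * bot)
        amplitude *= 0.5                            # higher octaves fade out
    return out / out.max()
```

Because the texture is a pure function of its parameters, regenerating it at a larger size costs nothing but compute, exactly the scalability property that sample-based synthesis trades away for fidelity to a real exemplar.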

Goals and Challenges

The primary goals of texture synthesis in computer graphics are to produce new textures from a small input exemplar that are visually indistinguishable from the original, enabling the creation of larger, seamless, and scalable images without visible repetitions or boundaries. This process aims to match the statistical properties of the input sample, such as color distributions and spatial relationships, while preserving both local details—like individual patterns or "textons"—and global structures, such as overall coherence and orientation. In contrast to procedural textures, which rely on predefined parametric models, example-based synthesis uses non-parametric estimation from the sample to generate infinite variations suitable for tiling or real-time rendering.

Key challenges in achieving these goals include avoiding artifacts such as seams, blurring, or unnatural repetitions that arise during extension of the input sample, particularly in structured textures with distinct orientations or periodic elements. Handling variations in scale, orientation, and illumination across the input exemplar is difficult, as natural textures often exhibit non-stationary properties that require robust neighborhood matching to prevent "slipping" into incorrect visual subspaces. Additionally, ensuring computational efficiency is critical for real-time applications, as synthesis algorithms must balance quality with speed, often demanding large input samples to capture sufficient statistics without excessive processing time.

Evaluation of synthesized textures typically focuses on visual and perceptual similarity to the input, assessed through qualitative inspection or quantitative metrics like neighborhood matching, where local contexts are compared using measures such as sum-of-squared differences (SSD) to ensure local similarity. Metrics emphasizing seamlessness, such as boundary continuity checks, help quantify the absence of visible artifacts, though subjective human assessments remain essential due to the perceptual nature of quality.

A fundamental prerequisite for effective texture synthesis methods is the analysis of the input sample to extract relevant features, such as local histograms or Markov random field (MRF) models, which inform the generation process and ensure statistical fidelity. This initial step allows algorithms to approximate the underlying generative process of the texture, setting the foundation for scalable output without prior assumptions about its parametric form.
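
One simple way to quantify seamlessness is a wrap-around boundary-continuity check, sketched below for a grayscale texture intended to tile: it compares the squared jumps across the tiling seams against typical jumps between interior neighbors, so a ratio near 1 suggests the seams are statistically no more visible than the texture's interior. This is an illustrative metric under those assumptions, not a standard benchmark.

```python
import numpy as np

def seam_visibility(texture: np.ndarray) -> float:
    """Ratio of squared jumps across the wrap seams to typical interior jumps."""
    t = texture.astype(float)
    # squared discontinuities where the tiled copies would meet
    seam = np.mean((t[:, 0] - t[:, -1]) ** 2) + np.mean((t[0, :] - t[-1, :]) ** 2)
    # baseline: squared differences between adjacent interior pixels
    interior = np.mean(np.diff(t, axis=1) ** 2) + np.mean(np.diff(t, axis=0) ** 2)
    return seam / (interior + 1e-12)   # ~1.0 means seams blend in with the interior
```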

Methods

Tiling and Mosaic Techniques

Tiling techniques represent one of the earliest and simplest approaches to texture synthesis, involving the periodic repetition of a small input sample to generate larger textures. In basic tiling, the output texture is created by directly replicating the input sample across a regular grid, where the value at each position (x, y) in the output is determined by the corresponding modular position in the input: \text{output}(x, y) = \text{input}(x \mod w, y \mod h), with w and h denoting the width and height of the input sample, respectively. This method preserves the local structure of the sample but often results in visible seams at tile boundaries due to abrupt discontinuities. To mitigate these artifacts, edge blending techniques such as Poisson blending are applied, which solve the equation \Delta f = \text{div}(\nabla g) within the overlapping regions to smoothly interpolate gradients and ensure seamless continuity across edges.

Mosaic techniques extend tiling by fragmenting the input sample into smaller patches and rearranging them randomly to reduce periodicity while approximating the original texture's appearance. A prominent example is the chaos mosaic method, which begins with an initial tiling of the input sample and then applies a chaotic transformation, such as the cat map iteration, to redistribute blocks within the tiles, followed by cross-edge filtering over a few pixel layers to minimize boundary discontinuities. This random arrangement promotes a uniform distribution of local features, enabling the synthesis of large textures, such as a 512×512 image, in under 0.03 seconds on the hardware of the time.

More advanced algorithms within this category, such as those based on Wang tiles, address the limitations of periodicity by creating aperiodic tilings from a small set of precomputed tiles derived from the input sample. The input is divided into diamond-shaped portions aligned with edge colors, which are then combined into a finite set of Wang tiles (e.g., 8 to 18 tiles) ensuring matching colors on adjoining edges; these tiles are stochastically placed to cover the plane without repetition, using random selection among compatible tiles at each step. This reassembling process allows for efficient, non-periodic texture generation directly on demand, with minimal storage requirements.

The primary strengths of tiling and mosaic techniques lie in their low computational demands and memory efficiency, making them suitable for real-time applications and procedural texturing of vast areas, such as virtual 100,000×100,000 textures. However, basic tiling suffers from obvious periodic repetition, while methods like mosaics can introduce randomness that disrupts structured patterns, potentially leading to blurred or mismatched features in complex inputs. Wang tile approaches mitigate some periodicity but may still exhibit quilting artifacts if the tile set is too small or if features span tile corners inadequately.
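
The modular tiling formula above translates directly into a few lines of NumPy; the sketch below assumes a grayscale sample and is purely illustrative.

```python
import numpy as np

def tile_texture(sample: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """output(x, y) = input(x mod w, y mod h), vectorized over the whole output."""
    h, w = sample.shape
    ys = np.arange(out_h) % h            # y mod h
    xs = np.arange(out_w) % w            # x mod w
    return sample[np.ix_(ys, xs)]        # outer-product indexing builds the grid
```

Calling tile_texture(sample, 512, 512) on a 64×64 sample repeats it eight times in each direction, which is exactly why basic tiling exhibits the visible periodicity discussed above.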

Stochastic and Structured Methods

Stochastic texture synthesis approaches model textures as realizations of Markov random fields (MRFs), where the probability of a pixel's color depends on its local neighborhood, capturing spatial dependencies in a probabilistic manner. In these methods, the synthesis process builds the output image sequentially, pixel by pixel, starting from a seed region copied from the input sample. For each new pixel, the algorithm examines its neighborhood in the growing output and searches the input sample for matching neighborhoods to select candidate colors, ensuring local consistency while introducing variability through random selection.

A key formulation in stochastic synthesis is the conditional probability P(c \mid n), where c is the color of the pixel and n is its neighborhood context; this probability is estimated non-parametrically by counting occurrences of matching neighborhoods in the input sample. The seminal Efros-Leung algorithm exemplifies this by growing the texture outward from an initial patch, using an exhaustive search to find all pixels in the sample whose neighborhoods match the current output context, then selecting a candidate color randomly from those matches—effectively a k-nearest neighbors approach where k equals the number of exact matches. This non-parametric sampling avoids explicit parameter estimation, making it adaptable to diverse textures without assuming a specific distribution.

Structured methods, in contrast, employ rule-based systems to enforce specific patterns, particularly for textures with repetitive or hierarchical elements, blending rule-driven generation with sample guidance. These approaches use formal grammars or L-systems to define production rules that replicate structural motifs from the input, such as tilings or branching patterns, while allowing controlled variation. For instance, tree grammars can infer hierarchical rules from the sample to synthesize textured patterns by recursively applying productions to generate tree-like structures. L-systems, extended for texture synthesis as TSL-systems (which incorporate parametric, stochastic, and context-sensitive rules), provide a procedural yet sample-informed framework for generating structured textures like fabrics or natural aggregates. In these systems, an initial axiom derived from the sample evolves through rewriting rules, where probabilities guide rule choices to match the input's irregularity, producing outputs that preserve global organization without pixel-level sampling. Such methods are single-purpose, tailored to texture classes like quilt-like tilings, where rules enforce adjacency constraints akin to grammar productions for periodic arrangements.

These techniques find applications in synthesizing natural, irregular textures such as stone or foliage, where stochastic methods excel at replicating fine-grained randomness and structured approaches handle underlying motifs like repeating patterns. However, sequential growth in pixel-based algorithms poses challenges, rendering them computationally slow for large outputs—often requiring hours for modest resolutions due to repeated neighborhood searches—though they remain foundational for unconstrained synthesis tasks.
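
The non-parametric estimate of P(c \mid n) can be sketched as follows: given the flattened neighborhoods of every sample pixel and their center colors (hypothetical precomputed inputs), all windows within a tolerance of the best SSD match form the candidate set, and the new color is drawn uniformly from it. This mirrors the Efros-Leung selection step in spirit; the names and tolerance value are assumptions.

```python
import numpy as np

def sample_conditional(query: np.ndarray, neighborhoods: np.ndarray,
                       centers: np.ndarray, eps: float = 0.1, rng=None) -> float:
    """Draw a center color from the empirical P(c | n) over near-best matches."""
    if rng is None:
        rng = np.random.default_rng()
    d = np.sum((neighborhoods - query) ** 2, axis=1)       # SSD to every sample window
    candidates = np.flatnonzero(d <= (1.0 + eps) * d.min())  # near-best matches
    return centers[rng.choice(candidates)]                 # random pick adds variation
```

The random choice among near-best matches is what injects the controlled variability that distinguishes stochastic synthesis from deterministic copying.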

Pixel-Based Methods

Pixel-based methods in texture synthesis generate output images by sequentially determining the color or intensity value for each pixel, guided by matching local neighborhoods from an input sample texture. The foundational algorithm, developed by Efros and Leung, initializes the output with a single seed pixel randomly chosen from the sample and expands outward in a raster or spiral order. For each new output pixel, the method scans the input sample to find all neighborhoods that closely match the synthesized neighborhood surrounding the target pixel position, then selects one match—often the closest or randomly among the best—and copies its central pixel value to the output. This non-parametric sampling captures complex, non-stationary statistics without assuming a parametric model.

The similarity between an output neighborhood n_o and a candidate sample neighborhood n_s is typically measured using the sum of squared differences over the neighborhood window N: d(n_o, n_s) = \sum_{(i,j) \in N} \left( I_o(i,j) - I_s(i,j) \right)^2. The best match minimizes this distance, ensuring local coherence. Neighborhood sizes are usually small (e.g., 5x5 or 9x9 pixels) to balance detail capture and computational cost, though larger windows improve quality at the expense of speed. While effective for many textures, the greedy nature of this sequential assignment can propagate errors, leading to seams or artifacts in regions with sparse matches.

To mitigate this, subsequent algorithms employ global optimization by formulating the synthesis as a Markov random field (MRF), where each output pixel is a random variable with labels drawn from the sample, and potentials enforce neighborhood similarity. Inference is approximated using loopy belief propagation or graph cuts to jointly optimize pixel assignments, reducing local inconsistencies. For instance, strong MRF models ensure Markovian properties across subneighborhoods for more robust texture modeling. These optimizations refine the entire output iteratively rather than pixel-by-pixel, improving coherence.

Variants accelerate synthesis while preserving quality, such as jump maps, which preprocess the sample to store, for each pixel, a set of links to similar pixels, allowing rapid propagation of values during output generation. Introduced by Zelinka and Garland, jump maps enable near-real-time performance by avoiding exhaustive searches, with preprocessing costs offset by repeated use. Other enhancements include tree-structured vector quantization for fast approximate matching and k-coherence methods that build candidate sets during analysis for constant-time lookups.

These methods excel at reproducing intricate, aperiodic patterns and non-stationary textures that defy simple procedural rules, as they directly sample from the input's empirical distribution. However, they are computationally intensive without accelerations—original implementations can take minutes per image due to exhaustive search costs—and remain prone to error accumulation in low-redundancy samples, potentially yielding blurry or erroneous regions.
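
The core search can be illustrated with a masked SSD comparison, reflecting the fact that the window around a not-yet-synthesized pixel is only partially filled: only already-synthesized positions (mask == True) contribute to the distance. This brute-force sketch is illustrative and deliberately unoptimized; names are assumptions.

```python
import numpy as np

def best_match(window: np.ndarray, mask: np.ndarray, sample: np.ndarray):
    """Find the sample pixel whose (masked) neighborhood best matches `window`."""
    s = sample.astype(float)
    k = window.shape[0]                  # odd window size, e.g. 9
    r = k // 2
    h, w = s.shape
    best_val, best_d = None, np.inf
    for y in range(r, h - r):            # every full window in the sample
        for x in range(r, w - r):
            patch = s[y - r:y + r + 1, x - r:x + r + 1]
            d = np.sum(((patch - window) ** 2)[mask])   # masked SSD
            if d < best_d:
                best_val, best_d = s[y, x], d
    return best_val, best_d
```

Accelerations such as jump maps or tree-structured vector quantization exist precisely to replace this exhaustive double loop with precomputed or approximate lookups.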

Patch-Based Methods

Patch-based methods in texture synthesis involve dividing an input sample texture into overlapping patches, which are then selected, placed, and blended to construct the output texture, preserving both local detail and global structure more efficiently than pixel-by-pixel approaches. This technique addresses limitations in earlier pixel-based synthesis, such as slow computation and potential boundary artifacts, by growing the output in larger chunks while ensuring seamless overlaps through specialized cutting algorithms.

A foundational algorithm is Image Quilting, introduced by Efros and Freeman, which synthesizes textures by randomly selecting square patches from the input sample and stitching them in a raster-scan order, with overlaps typically set to about one-sixth of the patch size to maintain continuity. To achieve overlap consistency, the method employs minimum error boundary cuts, where the best seam is found using dynamic programming to minimize visual discontinuities along the overlap region; the boundary cost C(b) for a potential cut b is computed as the sum of squared differences between overlapping pixels: C(b) = \sum_{(i,j) \in b} \| p_{ov1}(i,j) - p_{ov2}(i,j) \|^2, with the optimal path selected to reduce seam visibility. This greedy placement can be extended for texture transfer by incorporating correspondence maps, such as image intensity, to guide patch selection and apply the source texture to a target image while preserving geometric details.

Building on this, Kwatra et al. advanced patch matching with graph cuts for optimal stitching, treating seam finding as a minimum-cost cut problem in a graph constructed over the overlap region, where edge weights represent dissimilarities (e.g., color or gradient differences). This allows for more flexible, non-straight seams compared to dynamic programming, incorporating constraints from adjacent boundaries to avoid repetitive artifacts; patch candidates are chosen via approximate nearest-neighbor matching from the sample, enabling both greedy placement and iterative refinement of seams. The approach supports extensions to video by enforcing temporal consistency across frames.

These methods offer significant benefits over pixel-based techniques, achieving synthesis speeds of seconds to minutes on standard hardware—up to orders of magnitude faster—while better preserving structural elements and minimizing visible seams through overlap optimization. They have been widely applied to texture enlargement, where patches are scaled and tiled to expand images without repetition, and to image inpainting, filling missing regions by propagating surrounding patches with low-cost boundaries.
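
The minimum error boundary cut admits a compact dynamic-programming sketch. Assuming two grayscale overlap regions of equal shape, the code below accumulates the cheapest top-to-bottom path through the squared-difference surface and backtracks to recover the seam; pixels left of the seam would then come from the first patch and pixels right of it from the second. Variable names are illustrative.

```python
import numpy as np

def min_error_cut(ov1: np.ndarray, ov2: np.ndarray) -> np.ndarray:
    """Cheapest vertical seam through the overlap of two patches (Image Quilting style)."""
    e = (ov1.astype(float) - ov2.astype(float)) ** 2   # per-pixel overlap error
    h, w = e.shape
    cost = e.copy()
    for y in range(1, h):                              # accumulate cheapest path costs
        for x in range(w):
            lo, hi = max(x - 1, 0), min(x + 2, w)      # allow diagonal moves
            cost[y, x] += cost[y - 1, lo:hi].min()
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))                # backtrack from the bottom row
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(x - 1, 0), min(x + 2, w)
        seam[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return seam                                        # seam[y] = cut column in row y
```

Graph-cut stitching generalizes this one-dimensional path to arbitrary seams in a 2D (or, for video, 3D) overlap region.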

Deep Learning Methods

Deep learning methods have revolutionized texture synthesis by leveraging deep neural networks to capture and generate complex statistical patterns from exemplar images, enabling scalable and high-fidelity outputs that surpass traditional algorithmic approaches. Early neural techniques focused on convolutional neural networks (CNNs) for feature extraction, where pre-trained networks like VGG are used to define a perceptual loss that optimizes synthesized textures to match the correlation statistics of input features, ensuring realistic appearance without explicit patch matching. This approach, introduced in seminal work on texture synthesis using CNNs, treats textures as stationary processes in feature space, allowing iterative optimization to produce seamless extensions of input samples. Adaptations of neural style transfer further extended this to texture transfer, preserving content structure while imposing stylistic textures via multi-scale feature correlations.

Generative adversarial networks (GANs) advanced texture synthesis by employing adversarial training to align the distribution of generated samples with real textures, addressing limitations in optimization-based methods like mode collapse and lack of diversity. A notable example is TiPGAN, which incorporates intrinsic priors such as relative total variation (RTV) to enforce tileability during synthesis, particularly for cloth digitization applications, achieving superior seamlessness and quality metrics compared to prior GANs. This approach uses a generator-discriminator pair where the discriminator penalizes boundary discontinuities, enabling high-resolution, periodic textures suitable for seamless tiling and rendering.

Diffusion models represent a more recent paradigm, offering stable training and high-quality generation through iterative denoising processes conditioned on input exemplars or geometries. In texture synthesis, geometry-aware techniques, such as those in Hunyuan3D 2.0, enable zero-shot generation of consistent, high-resolution textures on meshes by integrating surface normals and UV mappings into the denoising process, producing photorealistic results with improved fidelity over GAN-based alternatives. These models excel in capturing fine-grained details and global consistency, often evaluated using Fréchet Inception Distance (FID) scores that highlight their superior realism, with reported FID values below 5 on standard texture benchmarks.

Text-guided synthesis has emerged as a transformative capability, allowing control via text prompts to generate targeted textures on 3D assets. TexDreamer employs a multi-view framework to produce photorealistic, robust textures from text descriptions, optimizing for zero-shot applicability on human meshes and achieving consistent multi-view coherence without paired training data. Similarly, ProcTex facilitates part-consistent text-to-texture synthesis for procedural models, ensuring texture continuity across geometric variations, which is particularly valuable for interactive design workflows. These methods leverage pre-trained language models to condition diffusion samplers, yielding diverse, semantically aligned outputs such as "rusty metal" textures. Additional innovations include orientation-aware CNNs that enhance synthesis for anisotropic textures by augmenting training data with rotated samples, mitigating directional biases and improving generalization to oriented patterns like fur or waves.
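
The CNN-based perceptual loss reduces, in the Gatys et al. formulation, to matching Gram matrices of feature maps. A minimal PyTorch sketch is shown below; it assumes feats_x and feats_ref are lists of (1, C, H, W) feature tensors extracted from several layers of a pre-trained network such as VGG, and omits the feature extraction and optimization loop.

```python
import torch

def gram(f: torch.Tensor) -> torch.Tensor:
    """Normalized channel-by-channel correlations of a (1, C, H, W) feature map."""
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return (f @ f.t()) / (c * h * w)

def texture_loss(feats_x, feats_ref) -> torch.Tensor:
    # Match Gram statistics layer by layer; minimizing this over the pixels of
    # the synthesized image reproduces the exemplar's texture statistics.
    return sum(torch.sum((gram(a) - gram(b)) ** 2)
               for a, b in zip(feats_x, feats_ref))
```

Because the Gram matrix discards spatial layout and keeps only feature co-occurrence statistics, minimizing this loss yields a new, non-repeating image with the same "texture" as the exemplar.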
In evaluation contexts, multi-modal fusion networks have been developed to detect AI-generated textures by combining texture statistics with frequency-domain features, revealing synthesis artifacts and aiding authenticity verification with accuracies exceeding 95% on benchmark datasets. Despite these advances, challenges persist: training often requires large-scale annotated datasets, which limits applicability in niche domains, and generalization to 3D surfaces remains difficult where viewpoint inconsistencies arise; metrics like FID remain essential for quantifying realism but often overlook perceptual seamlessness.

Implementations and Applications

Software Tools and Libraries

Several open-source libraries provide implementations of texture synthesis algorithms, enabling researchers and developers to experiment with pixel-based, patch-based, and deep learning approaches. One prominent example is the texture-synthesis library developed by Embark Studios, a Rust crate that implements multiresolution stochastic texture synthesis for generating images from example textures using non-parametric methods. This library supports both pixel- and patch-based techniques, offering a lightweight API for CLI and programmatic use, with optimizations for performance in applications like game development. For deep learning variants, PyTorch implementations are widely available, such as the deep-textures repository, which reimplements convolutional neural network-based texture synthesis from the seminal Gatys et al. work, allowing optimization for style transfer and texture generation. More recent advancements include TiPGAN, a PyTorch-based model for high-quality tileable texture synthesis using GANs with intrinsic priors, particularly suited for cloth digitization, as detailed in its 2025 CAD journal publication. Diffusion model toolkits have also emerged in 2024-2025, exemplified by SceneTex, which leverages depth-to-image diffusion priors for style-consistent indoor scene textures, and Diffusion Texture Painting from NVIDIA, enabling interactive surface texturing on 3D meshes via 2D diffusion models.

Commercial software integrates texture synthesis capabilities to streamline workflows in professional graphics pipelines. Adobe Substance 3D Designer features procedural nodes and generators that facilitate texture creation and manipulation, allowing users to build complex materials through a node-based interface. In game engines, plugins extend real-time synthesis: Unreal Engine includes a face texture synthesizer module for generating MetaHuman facial textures via synthesis algorithms integrated with its runtime features. Unity supports synthesis through asset store plugins and custom scripts, though real-time plugins often build on open-source libraries like the Embark texture-synthesis crate for on-the-fly texture expansion during rendering.

Key features across these tools emphasize efficiency and scalability, particularly GPU acceleration in deep learning implementations via CUDA-enabled frameworks such as PyTorch, which speeds up training and inference for large-scale synthesis tasks. Accessibility is enhanced by educational resources, such as implementations and tutorials for the Image Quilting algorithm from Efros and Freeman's 2001 SIGGRAPH paper, including MATLAB benchmark code on the MathWorks File Exchange for testing patch-based synthesis performance.

Applications

Texture synthesis plays a crucial role in content production by enabling efficient texturing of surfaces in games and animation, where it generates seamless, large-scale textures from small exemplars to reduce manual artist effort. In procedural world generation for games, such as open-world environments, synthesis algorithms create varied terrains and surfaces by extending input samples, enhancing visual diversity without repetitive asset creation. For instance, generative adversarial networks have been applied to procedurally synthesize original textures for games, achieving high-quality results that integrate seamlessly into dynamic scenes. In film production, texture movie synthesis generates animated textures for motion pictures and computer-generated imagery, allowing realistic surface variations over time.
Image inpainting, a key application of texture synthesis, supports editing in graphics pipelines by filling missing regions with contextually matching textures derived from surrounding areas, preserving visual coherence in editing workflows. Tools like TextureShop combine texture synthesis with shape-from-shading to edit object textures directly in photographs, enabling artists to replace or extend surface details interactively. In 3D modeling and virtual reality (VR), geometry-aware texture synthesis ensures consistent texturing across meshes by incorporating spatial structure into the generation process, producing high-fidelity UV maps that align with object geometry. For example, TextureDreamer employs geometry-aware diffusion models to transfer photorealistic textures from sparse input images (3-5 views) to arbitrary shapes, facilitating efficient asset creation for VR environments. Similarly, TwinTex automates texturing for piece-wise planar models, generating photorealistic results suitable for immersive applications. Dynamic omnidirectional texture synthesis further enhances photorealism in VR by generating 360-degree textures in real time from RGB-D captures, supporting interactive virtual walkthroughs.

Beyond graphics, texture synthesis aids medical imaging through anomaly detection by synthesizing normal tissue patterns to identify deviations in scans, such as in MRI brain images where generative models create shape- and texture-consistent samples for training detectors. In AI-generated content detection, multi-modal networks identify synthetic images by analyzing inconsistencies in texture features across modalities, providing robust forensics for 2025-era deepfakes. Recent advancements include text-guided synthesis, which enables editing of textures via prompts using diffusion models, as demonstrated in TexGen's multi-view sampling framework for coherent 3D assets. For augmented reality (AR) and mixed reality, these methods contribute to robust scene augmentation by synthesizing view-consistent textures that blend seamlessly with real-world views, improving immersion in mixed-reality applications.

Historical Development

Key Milestones

Early work on texture synthesis in computer graphics dates back to the 1980s, with initial techniques focusing on statistical and syntactic models to generate patterns. A 1985 survey outlined approaches such as growth models and stochastic simulations, laying the groundwork despite computational constraints. The field advanced in the early 1990s with foundational techniques focused on statistical analysis and modeling to replicate visual patterns from sample images. Initial approaches, such as simple tiling methods, involved repeating small texture tiles to cover larger surfaces, though they often produced visible seams and lacked natural variation. A significant advancement came in 1995 with the pyramid-based texture analysis and synthesis algorithm by David J. Heeger and James R. Bergen, which introduced a parametric model using multi-resolution pyramids to capture and resynthesize texture content, enabling more realistic aperiodic textures by iteratively adjusting histograms at different scales. This work marked a shift toward automatic, signal-processing-inspired methods that modeled textures as stochastic processes, influencing subsequent research on structured and stochastic textures.

The late 1990s and early 2000s saw the rise of non-parametric, exemplar-based techniques that sampled directly from input textures to generate novel ones, addressing limitations in parametric models for complex, non-stationary patterns. In 1999, Alexei A. Efros and Thomas K. Leung proposed texture synthesis by non-parametric sampling, a pixel-based method that grows an output image pixel-by-pixel by finding the best-matching neighborhood from the sample, leveraging spatial-locality assumptions to preserve local statistics and produce coherent results for a wide range of textures. Building on this, Efros and William T. Freeman introduced image quilting in 2001, a patch-based approach that overlaps and blends rectangular patches from the input sample using minimum-cut optimization, reducing boundary artifacts and enabling efficient synthesis of larger textures with improved scalability. These methods established the core paradigm of exemplar-driven synthesis, widely adopted in computer graphics for their simplicity and effectiveness.

The 2010s brought deep learning into texture synthesis, adapting convolutional neural networks (CNNs) to capture hierarchical features and enable style transfer. A pivotal development was Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge's 2015 neural algorithm of artistic style, which optimizes images to match the feature-correlation statistics of a style image while preserving content, effectively synthesizing textures through feature correlations in pre-trained CNNs like VGG. This optimization-based technique demonstrated high perceptual quality for texture generation and inspired feed-forward variants, bridging traditional synthesis with neural representations.

In the 2020s, generative adversarial networks (GANs) and diffusion models have dominated, offering controllable, high-fidelity synthesis, including extensions to 3D and text-guided applications. Starting around 2018, GAN-based methods like TextureGAN by Wenqi Xian et al. integrated texture patches into conditional GANs for object-aware synthesis, allowing user-specified textures on sketches and achieving diverse, realistic outputs through adversarial training. More recently, generative models have advanced text-guided and tileable texture generation; for instance, TiPGAN in 2025 by Honghong He et al. employs a GAN framework with intrinsic priors for seamless, tileable 2D textures, improving boundary consistency for applications like cloth digitization.
Similarly, TexDreamer, introduced in 2024 by Yufei Liu et al., uses multi-view diffusion for zero-shot, photorealistic human texture synthesis from text prompts, leveraging geometry-aware generation to produce robust, high-resolution results on arbitrary meshes. These innovations address gaps in prior methods by enabling scalable, controllable synthesis in emerging domains like 3D content creation and digital fabrication.

Influential Publications

One of the foundational works in texture synthesis is the 1999 paper by Efros and Leung, which introduced a non-parametric sampling approach for pixel-based texture generation. This method synthesizes textures by growing an output image from a seed region, sampling neighboring pixels from the input based on local similarity, enabling the preservation of structural details in a wide range of textures. The paper's innovation in avoiding parametric models marked a shift toward exemplar-based techniques and has been extended in numerous subsequent algorithms for image inpainting and completion.

Building on pixel-based methods, Kwatra et al.'s 2003 paper advanced patch-based synthesis through graph-cut optimization, allowing seamless tiling of larger patches from an input exemplar. This approach minimizes boundary discontinuities by formulating overlaps as minimum-cut problems in a graph, producing visually coherent results for both images and videos, and influencing later works in video generation and seamless cloning. Its impact is evident in applications requiring scalable, non-repetitive texturing, with extensions to video and dynamic scenes.

The advent of deep learning brought new paradigms, exemplified by Gatys et al.'s 2016 CVPR paper on image style transfer, which repurposed pre-trained convolutional neural networks (CNNs) to separate and recombine content and style representations. By optimizing images to match Gram statistics of features from VGG networks, it enabled high-fidelity synthesis without explicit training on textures, inspiring feed-forward variants and broader neural artistic rendering techniques. This work's separation of style (texture) from content has over 20,000 citations and catalyzed the neural era of texture modeling.

Li and Wand's 2016 ECCV paper introduced Markovian Generative Adversarial Networks (MGANs) for precomputed real-time texture synthesis, training GANs on small input patches to generate arbitrarily large textures. This method achieved efficient, high-quality outputs by leveraging Markov assumptions in the adversarial framework, bridging early GANs with traditional MRF-based synthesis and enabling GPU-accelerated applications in real-time rendering. It demonstrated superior perceptual quality over prior non-neural methods and paved the way for GAN-based texture extensions in rendering.

In recent developments, He et al.'s 2025 paper on TiPGAN proposed a framework incorporating intrinsic priors for high-quality tileable texture synthesis, particularly suited for cloth digitization. By integrating patch-swapping and tiling modules with a novel Relative Total Variation (RTV) metric, it generates seamless, periodic textures that outperform baselines in boundary continuity and visual realism, addressing limitations in prior GANs for periodic applications like virtual fabrics. This work highlights the evolution toward domain-specific priors in neural synthesis.

Xu et al.'s 2025 paper on ProcTex advanced text-guided texture synthesis for procedural models, enabling consistent, interactive generation across shape families using depth-aware diffusion. The system integrates part-level UV texturing with shape matching, supporting edits in design workflows and extending 2D diffusion models to procedural assets without retraining. Its focus on procedural consistency represents a key step in bridging text prompts with editable asset pipelines.

Li et al.'s 2025 conference paper on TexDreamer introduced a multi-view diffusion framework for text-driven, photorealistic texture synthesis on meshes, generating robust UV maps from prompts while handling complex geometries.
By distilling pre-trained diffusion knowledge into a multi-view generation framework, it achieves high-fidelity results with reduced artifacts compared to single-view methods, and supports extensions to human avatars via zero-shot adaptation. This contributes to scalable content creation in games and virtual reality.

The rise of diffusion models has further transformed texture synthesis, enabling probabilistic generation of diverse, high-resolution textures from noise via iterative denoising. These models extend earlier neural approaches by enabling unconditional and conditional synthesis (e.g., text-guided), with applications in infinite texture tiling and 3D extensions, though challenges in computational efficiency persist.
