
Image editing

Image editing is the process of altering digital or traditional images to enhance visual quality, correct imperfections, remove unwanted elements, or achieve creative and artistic outcomes, typically using specialized software that applies mathematical operations to pixel data. It encompasses a wide range of techniques, including cropping to frame subjects, adjusting brightness and contrast for better exposure, color correction to balance tones, retouching to eliminate blemishes, and compositing to combine multiple images into a single scene. The practice traces its roots to the mid-19th century, shortly after Joseph Nicéphore Niépce produced the earliest surviving photograph around 1826, when manual manipulations such as double exposures, scratched negatives, and physical composites began to alter photographic reality for artistic or propagandistic purposes. Notable early examples include the circa-1860 composite portrait of Abraham Lincoln, which grafted his head onto another politician's body, and the 1930s airbrushing of Joseph Stalin's political rivals from official Soviet photographs.

Digital image editing gained momentum in the 1970s with the development of the first experimental digital camera by Kodak engineer Steve Sasson in 1975, which captured images electronically rather than on film. The field's transformation accelerated in the late 20th century with the advent of consumer digital tools: Adobe Photoshop, created by brothers Thomas and John Knoll, was first released in February 1990 as standalone software for Macintosh computers, bringing professional-level retouching and compositing to the desktop; later versions added layers (version 3.0, 1994), masks, and non-destructive adjustment features that democratized advanced manipulation. During the 1990s, milestones such as the 1991 launch of the Logitech Fotoman, one of the first commercially available digital cameras, and the adoption of JPEG compression enabled widespread digital workflows, shifting image manipulation from darkroom techniques to computer-based processing.
Today, image editing extends to mobile apps and AI-driven features, such as generative fill and object removal, supporting applications in photography, graphic design, journalism, advertising, and forensic analysis while raising ethical concerns about authenticity and misinformation.

Fundamentals

Digital image basics

Digital images are fundamentally categorized into raster and vector formats, each defined by distinct structural principles that influence their creation and manipulation. Raster images, also known as bitmap images, consist of a grid of individual pixels, where each pixel represents a discrete sample of color or intensity from the original scene, making them ideal for capturing and editing complex visual details like photographs. In contrast, vector images are constructed using mathematical equations to define paths, shapes, and curves, allowing for infinite scalability without loss of quality, though they are less suited for pixel-level editing tasks common in image manipulation workflows. Raster images form the primary focus of digital image editing due to their pixel-based nature, which enables precise alterations at the elemental level. The pixel serves as the basic unit of a raster image, functioning as the smallest addressable element that holds color and intensity information, typically arranged in a two-dimensional array to form the complete image. Resolution, measured in pixels per inch (PPI) for digital displays or dots per inch (DPI) for printing, quantifies the density of these pixels and directly affects the perceived sharpness and detail of an image; higher PPI or DPI values yield finer detail but increase file size and processing demands. Bit depth refers to the number of bits used to represent the color or grayscale value of each pixel, determining the range of tonal variations possible; for instance, 8 bits per channel allows 256 levels per color component, while 16 bits enables 65,536, enhancing editing flexibility by preserving subtle gradients and reducing banding artifacts during adjustments.
Image dimensions, expressed as width by height in pixels (e.g., 1920 × 1080), dictate the total pixel count and thus the intrinsic detail capacity of the raster image, profoundly shaping editing workflows by influencing scalability, computational load, and output suitability. Larger dimensions support more intricate edits and higher-quality exports but demand greater storage and processing resources, potentially slowing operations like filtering or compositing, whereas smaller dimensions streamline workflows at the cost of reduced detail upon enlargement. The origins of digital image editing trace back to the 1960s, with pioneering work in computer graphics laying the groundwork for digital manipulation. Ivan Sutherland's Sketchpad system, developed in 1963 as part of his PhD thesis at MIT, introduced interactive graphical interfaces using a light pen to draw and edit vector-based diagrams on a display, marking an early milestone in human-computer visual interaction that influenced subsequent raster image technologies.
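The quantities above (dimensions, pixel count, bit depth) can be illustrated with a short Python sketch; the specific 1920 × 1080 figures are just the example from the text:

```python
# A raster image is a 2-D grid of pixels; here each pixel is an (R, G, B) triple.
width, height = 1920, 1080
black = (0, 0, 0)
image = [[black] * width for _ in range(height)]

# Bit depth sets the number of tonal levels available per channel.
levels_8bit = 2 ** 8      # 256 levels per channel
levels_16bit = 2 ** 16    # 65,536 levels per channel

# Dimensions fix the total pixel count, which drives memory and processing cost.
total_pixels = width * height              # 2,073,600 pixels for 1920 x 1080
uncompressed_bytes = total_pixels * 3      # ~6 MB at 8 bits per RGB channel
```

This is why larger dimensions slow filtering and compositing: every per-pixel operation scales with the total pixel count.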

Color models and representations

In digital image editing, color models define how colors are numerically represented and manipulated within an image's pixels, enabling precise control over visual elements. These models vary in their approach to encoding color, with some suited to display technologies and others to printing processes. Understanding these representations is essential for tasks like color correction, compositing, and ensuring consistency across devices. The RGB color model is an additive color system used primarily for digital displays and on-screen editing. It combines red, green, and blue light to produce a wide range of colors, where each pixel's color is determined by the intensity values of its three channels, typically ranging from 0 to 255 in 8-bit images. This model is fundamental in software like Adobe Photoshop because it aligns with how computer monitors emit light, allowing editors to directly manipulate hues through channel adjustments. In contrast, the CMYK model operates on subtractive color principles, ideal for printing applications where inks absorb light from a white substrate. It uses cyan, magenta, yellow, and black (key) components to simulate colors by subtracting wavelengths from reflected light, making it the standard for professional print workflows to achieve accurate reproduction on paper or other media. Editors convert RGB images to CMYK during prepress to preview print outcomes, as the gamuts differ significantly. The HSV color model provides a perceptual representation that aligns more closely with human vision, organizing colors in a cylindrical coordinate system of hue (color type), saturation (intensity), and value (brightness). Developed by Alvy Ray Smith in 1978, it facilitates intuitive editing operations, such as adjusting saturation without altering brightness, which is particularly useful for selective color enhancements in images. 
Color space conversions are critical in image editing to adapt representations between models, often involving mathematical transformations to preserve perceptual accuracy. For instance, converting an RGB image to grayscale computes luminance as a weighted sum that approximates human sensitivity to green over red and blue:

\text{Gray} = 0.299R + 0.587G + 0.114B

This formula, derived from the ITU-R BT.601 standard for video encoding, ensures the resulting monochrome image retains a natural tonal balance. Bit depth determines the precision of color representation per channel, directly impacting the dynamic range and editing flexibility. In 8-bit images, each RGB channel supports 256 discrete levels (2^8), yielding about 16.7 million possible colors but risking banding in smooth gradients during heavy adjustments. 16-bit images expand this to 65,536 levels per channel (2^16), providing over 281 trillion possible colors and greater latitude for non-destructive edits like exposure recovery, as the expanded range minimizes quantization artifacts. Historically, the Adobe RGB (1998) color space emerged as an advancement over sRGB to address gamut limitations in professional photography and printing. Specified by Adobe Systems in 1998, it offers a notably wider gamut than sRGB, particularly in greens and cyans, enabling editors to capture and preserve subtle tones from high-end cameras without clipping during workflows.
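The BT.601 weighting can be applied directly per pixel; a minimal Python sketch:

```python
def to_grayscale(r, g, b):
    """ITU-R BT.601 luma: green dominates because the eye is most sensitive to it."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# The weights sum to 1, so white maps to (approximately) full brightness,
# while pure red, green, and blue land at very different gray levels.
gray_white = to_grayscale(255, 255, 255)   # 255.0
gray_red = to_grayscale(255, 0, 0)         # 76.245
gray_green = to_grayscale(0, 255, 0)       # 149.685
```

Because the three weights sum to 1, neutral grays (R = G = B) pass through unchanged, which is exactly the property a natural-looking monochrome conversion needs.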

File formats and storage

Image file formats play a crucial role in image editing by determining how data is stored, compressed, and preserved for manipulation. These formats vary in their support for quality retention, transparency, and metadata, influencing editing workflows and final output compatibility. Editors must select formats that balance file size, fidelity, and functionality, such as lossless options for iterative changes versus lossy for distribution. Common formats include JPEG, which employs lossy compression to reduce file sizes significantly, making it ideal for web images where moderate quality loss is acceptable. In contrast, PNG uses lossless compression, preserving all original data while supporting alpha transparency for seamless compositing in editing software. TIFF offers high-quality storage with support for editable layers and multiple color depths, suitable for professional pre-press and archival purposes. RAW files capture unprocessed sensor data directly from cameras, providing maximum flexibility for post-processing adjustments like exposure and white balance. AVIF, introduced in 2019 by the Alliance for Open Media, uses the AV1 video codec for both lossy and lossless compression, achieving high efficiency with support for transparency and high dynamic range (HDR), making it suitable for modern web and mobile applications as of 2025. Compression in image formats falls into two main types: lossless, which allows exact reconstruction of the original image without data loss, as in PNG and uncompressed TIFF; and lossy, which discards redundant information to achieve smaller files but introduces artifacts, such as blocking in JPEG where visible 8x8 pixel grid patterns appear in uniform areas due to discrete cosine transform processing. These artifacts degrade image quality upon repeated saves, emphasizing the need for lossless formats during editing to avoid cumulative degradation. Many formats embed metadata standards to store additional information. 
EXIF, developed for digital photography, records camera-specific details like model, aperture, shutter speed, and GPS coordinates, aiding editors in replicating shooting conditions. IPTC provides editorial metadata, including captions, keywords, and copyright notices, facilitating asset management in professional workflows. The evolution of image formats has addressed efficiency and modern needs. WebP, introduced by Google in 2010, combines lossy and lossless compression with transparency support, achieving up to 34% smaller files than JPEG or PNG for web applications. HEIF, standardized by MPEG in 2017, enables high-efficiency storage of images and sequences using HEVC compression, supporting features like multiple images per file and becoming default on devices like iPhones for reduced storage without quality compromise.
Format | Compression Type | Key Features | Typical Use
JPEG | Lossy | Small files, no transparency | Web photos
PNG | Lossless | Transparency, exact fidelity | Graphics, logos
TIFF | Lossless (or lossy variants) | Layers, high bit-depth | Printing, archiving
RAW | Uncompressed | Sensor data, non-destructive edits | Professional photography
WebP | Lossy/Lossless | Efficient web compression, transparency | Online media
HEIF | Lossy (HEVC-based) | Multi-image support, small size | Mobile devices
AVIF | Lossy/Lossless (AV1-based) | High compression efficiency, transparency, HDR support | Web and mobile images
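The practical difference between the two compression families can be sketched with Python's standard library: zlib implements the DEFLATE algorithm that PNG uses, while a simple quantizer stands in for the information-discarding step of a lossy codec (a toy model for illustration, not an actual JPEG encoder):

```python
import zlib

# Lossless: the compressed stream reconstructs the original bytes exactly.
data = bytes(range(256)) * 64
packed = zlib.compress(data)
restored = zlib.decompress(packed)
assert restored == data

# Lossy (toy model): quantizing channel values discards detail irrecoverably.
def lossy_quantize(values, step=16):
    # min() keeps results inside the 8-bit range after rounding up
    return [min(255, round(v / step) * step) for v in values]

original = [3, 47, 101, 200, 240]
saved = lossy_quantize(original)   # [0, 48, 96, 192, 240]
assert saved != original           # fine detail is gone after one "save"
```

This is the reason the text recommends lossless formats during editing: a lossy save cannot be undone, so repeated save cycles accumulate loss.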

Tools and Techniques

Image editing software overview

Image editing software encompasses a range of applications designed to manipulate digital images, primarily categorized into raster and vector editors based on their handling of image data. Raster editors work with pixel-based images, allowing detailed modifications to photographs and complex visuals, while vector editors focus on scalable graphics defined by mathematical paths, ideal for logos and illustrations that require resizing without quality loss. This distinction emerged in the late 1980s as computing power advanced, enabling specialized tools for different creative needs. Pioneering raster software includes Adobe Photoshop, first released on February 19, 1990, which revolutionized photo retouching and compositing with tools for layer management and color adjustments. For vector graphics, Adobe Illustrator debuted on March 19, 1987, providing precision drawing capabilities that became essential for print and web design. Open-source projects like GIMP, initiated in 1996 as a free raster editor, offered accessible alternatives to proprietary tools, supporting community-driven development for tasks such as painting and filtering. These categories have evolved to include hybrid features, but their core focuses remain distinct. Key advancements in functionality include non-destructive editing, popularized by Adobe Lightroom upon its release on February 19, 2007, which applies adjustments without altering original files through parametric edits stored separately. The shift toward accessible platforms accelerated with mobile and web-based tools; Pixlr, launched in 2008, provides browser-based raster editing with effects and overlays for quick enhancements. Similarly, Canva, released in 2013, integrates simple image editing into a drag-and-drop design ecosystem, emphasizing templates and collaboration for non-professionals.
Cloud integration further transformed workflows, exemplified by Adobe Creative Cloud's launch on May 11, 2012, enabling seamless syncing of assets across devices and subscriptions for updated software. Recent accessibility trends incorporate AI assistance, such as Adobe Sensei, unveiled on November 3, 2016, which automates tasks like object selection and content-aware fills to democratize advanced editing. More recent AI integrations, such as Adobe Firefly launched in 2023, have introduced generative AI capabilities for creating and editing image content based on text prompts. These developments have broadened image editing from specialized desktop applications to inclusive, cross-platform ecosystems.

Basic tools and interfaces

The user experience in image editing has evolved significantly since the 1970s, when digital image processing primarily relied on command-line tools in research and space applications, such as those developed for medical imaging and remote Earth sensing. These early systems required users to input textual commands to manipulate pixel data, lacking visual feedback and making iterative editing cumbersome. The transition to graphical user interfaces (GUIs) began in the mid-1970s with innovations at Xerox PARC, including the 1975 Gypsy editor, one of the first programs to offer bitmap-based WYSIWYG (what-you-see-is-what-you-get) editing with mouse-driven interactions. This paved the way for more intuitive designs, culminating in MacPaint's release in 1984 alongside the Apple Macintosh, which established enduring GUI standards for bitmap graphics editing through its icon-based tools and direct manipulation on screen. MacPaint's influence extended to consumer software, demonstrating how pixel-level control could be accessible via simple mouse gestures rather than code. Modern image editing software employs standardized GUI components to facilitate efficient workflows. The canvas, or document window, acts as the primary workspace displaying the active image file, often supporting tabbed or floating views for multiple documents. Toolbars, typically positioned along the screen's edges, house selectable icons for core functions, while options bars dynamically display settings for the active tool, such as size or opacity. Panels provide contextual controls; for instance, the layers panel organizes stacked image elements for non-destructive editing, allowing users to toggle visibility, reorder, or blend layers without altering the original pixels. Undo and redo histories, usually accessible via menus or keyboard shortcuts, maintain a chronological record of actions, enabling step-by-step reversal or reapplication of changes to support experimentation.
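A chronological undo/redo history of this kind is classically implemented with two stacks; a minimal Python sketch (the state snapshots are plain strings purely for illustration, where a real editor would store pixel data or edit commands):

```python
class History:
    """Chronological edit history supporting step-by-step undo and redo."""

    def __init__(self, initial_state):
        self.state = initial_state
        self.undo_stack = []   # states we can step back to
        self.redo_stack = []   # states we can step forward to again

    def apply(self, new_state):
        self.undo_stack.append(self.state)
        self.state = new_state
        self.redo_stack.clear()   # a new edit invalidates the redo branch

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(self.state)
            self.state = self.undo_stack.pop()

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(self.state)
            self.state = self.redo_stack.pop()

h = History("blank canvas")
h.apply("brush stroke")
h.apply("erase corner")
h.undo()   # back to "brush stroke"
h.redo()   # forward to "erase corner" again
```

Clearing the redo stack on a new edit mirrors the behavior of most editors: once you undo and then make a different change, the abandoned branch of history is discarded.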
Essential tools form the foundation of hands-on editing and are universally present across major applications. The brush tool simulates traditional painting by applying colors or patterns to the canvas, often with customizable hardness, flow, and pressure sensitivity for tablet users to vary stroke width and opacity based on pen force. The eraser tool removes pixels or reveals underlying layers, mimicking physical erasure with similar adjustable properties. The move tool repositions selected elements or entire layers, while the zoom tool scales the view for precise work, typically supporting keyboard modifiers for fit-to-screen or actual-size displays. These tools often include modes like pressure sensitivity, which enhances natural drawing by responding to input device dynamics, a feature refined in professional software since the 1990s. Basic workflows in image editing begin with opening files from supported formats, followed by iterative application of tools on the canvas, and conclude with saving versions to preserve non-destructive edits. Saving supports multiple formats and versioning to track changes, preventing data loss during sessions. For efficiency with large volumes, batch processing introduces automation, allowing users to apply predefined actions—such as resizing or color adjustments—to multiple files sequentially without manual intervention per image. This capability, integral to professional pipelines, streamlines repetitive tasks while maintaining consistency across outputs.
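The batch-processing pattern reduces to one predefined action mapped over many files; a sketch in pure Python (the file names and the 1024-pixel cap are arbitrary, and a real pipeline would open and re-save each image with a library such as Pillow):

```python
def fit_within(size, max_side):
    """Scale (width, height) to fit inside a max_side box, keeping aspect ratio."""
    w, h = size
    scale = min(1.0, max_side / max(w, h))
    return (round(w * scale), round(h * scale))

def batch_resize_plan(entries, max_side=1024):
    """The batch pattern: one predefined action, applied per file, no manual steps."""
    return {name: fit_within(size, max_side) for name, size in entries.items()}

plan = batch_resize_plan({"a.jpg": (4000, 3000), "b.png": (800, 600)})
# a.jpg is scaled down to (1024, 768); b.png already fits and stays (800, 600)
```

Keeping the action as a pure function of the input is what makes batch runs consistent across outputs: every file is transformed by exactly the same rule.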

Selection and masking methods

Selection tools in image editing software enable users to isolate specific regions of an image for targeted modifications, forming the foundation for precise edits without affecting the entire composition. Common tools include the marquee, which creates geometric selections such as rectangular or elliptical shapes by defining straight boundaries around areas. The lasso tool allows freehand drawing of irregular selections, while its polygonal variant uses straight-line segments for more controlled outlines; the magnetic lasso variant enhances accuracy by snapping to edges detected via algorithms that identify contrast boundaries in the image. These edge detection methods typically rely on gradient-based techniques to locate transitions between pixels, improving selection adherence to object contours. The magic wand tool selects contiguous pixels based on color similarity to a clicked point, employing a flood-fill algorithm that propagates from the seed pixel to neighboring ones within a specified tolerance. Mathematically, this thresholding process includes pixels where the absolute color difference from the reference, often measured in RGB space as \sqrt{(R_1 - R_2)^2 + (G_1 - G_2)^2 + (B_1 - B_2)^2}, falls below a user-defined tolerance value, enabling rapid isolation of uniform areas like skies or solid objects. Anti-aliased and contiguous options further refine the selection by smoothing jagged edges and limiting spread to adjacent pixels, respectively. Masking techniques build on selections to achieve non-destructive isolation, preserving the original image data for reversible edits. Layer masks apply grayscale values to control layer visibility, where white reveals content, black conceals it, and intermediate tones create partial transparency, allowing iterative adjustments without pixel alteration. 
Clipping masks constrain the visibility of a layer to the non-transparent shape of the layer below, facilitating composite effects like texture overlays limited to specific forms. Alpha channels store selection data as dedicated grayscale channels within the image file, serving as reusable masks that define transparency for export formats like PNG and enabling complex, multi-layered isolations. Refinement methods enhance selection accuracy and integration, particularly for complex boundaries. Feathering softens selection edges by expanding or contracting the boundary with a gradient fade, typically adjustable in pixel radius, to blend edited areas seamlessly and avoid harsh transitions. AI-driven quick selection tools, such as Adobe Photoshop's Object Selection introduced in 2019, leverage machine learning models to detect and outline subjects automatically from rough bounding boxes or brushes, incorporating edge refinement for subjects like people or objects with minimal manual input. These advancements, powered by Adobe Sensei AI, analyze image semantics to propagate selections intelligently, reducing time for intricate isolations compared to manual tools.
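The magic wand's flood-fill with an RGB tolerance, described above, can be sketched directly in Python (a simplified contiguous selection on a tiny image; real implementations add anti-aliasing and optimized traversal):

```python
from collections import deque
from math import sqrt

def magic_wand(image, seed, tolerance):
    """Flood-fill selection: grow from the seed to contiguous pixels whose
    Euclidean RGB distance from the seed color is within the tolerance."""
    h, w = len(image), len(image[0])
    sx, sy = seed
    sr, sg, sb = image[sy][sx]
    selected = set()
    frontier = deque([seed])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) in selected or not (0 <= x < w and 0 <= y < h):
            continue
        r, g, b = image[y][x]
        if sqrt((r - sr) ** 2 + (g - sg) ** 2 + (b - sb) ** 2) > tolerance:
            continue
        selected.add((x, y))
        frontier.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
    return selected

# A 2x2 image: three contiguous blue "sky" pixels and one red pixel.
sky, red = (40, 80, 200), (220, 30, 30)
img = [[sky, sky],
       [sky, red]]
selection = magic_wand(img, seed=(0, 0), tolerance=32)
# selects the three sky pixels; the red pixel exceeds the tolerance
```

Raising the tolerance widens the accepted color range, which is exactly how the tool's tolerance slider trades selection precision against coverage.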

Content Modification

Cropping and resizing

Cropping is a fundamental technique in image editing that involves selecting and retaining a specific portion of an image while discarding the rest, primarily to improve composition, remove unwanted elements, or adjust the aspect ratio. This process enhances visual focus by emphasizing key subjects and eliminating distractions, often guided by compositional principles such as the rule of thirds, which divides the image into a 3x3 grid and positions subjects along the lines or intersections for balanced appeal. Preserving the original aspect ratio during cropping ensures the image maintains its intended proportions, preventing distortion when preparing for specific outputs like prints or social media formats. Non-destructive cropping allows editors to apply changes without permanently altering the original image data, enabling adjustments or resets at any time through features like adjustable crop overlays in software such as Adobe Photoshop. This method supports iterative composition refinement by retaining cropped pixels outside the visible area for potential later use. Canvas extension complements cropping by increasing the image boundaries to add space around the existing content, aiding composition by providing room for repositioning elements or integrating additional details without scaling the core image. Trimming, conversely, refines edges by removing excess canvas after extension, ensuring a tight fit to the composed frame. The practice of cropping originated in darkroom photography, where photographers physically masked negatives or prints to isolate sections, influencing digital standards established in the 1990s with the advent of software like Adobe Photoshop, which digitized these workflows for precise, layer-based control. Resizing alters the overall dimensions of an image, either enlarging or reducing it, which necessitates interpolation to estimate pixel values at new positions and minimize quality degradation such as blurring or aliasing. 
Nearest-neighbor interpolation, the simplest method, assigns to each output pixel the value of the closest input pixel, resulting in fast computation but potential jagged edges, particularly during enlargement. Bilinear interpolation improves smoothness by averaging the four nearest input pixels weighted by their fractional distances, using the formula:

f(x, y) = (1 - a)(1 - b) f(0,0) + a(1 - b) f(1,0) + (1 - a) b f(0,1) + a b f(1,1)

where a and b are the fractional offsets in the x and y directions, respectively. Bicubic interpolation further refines this by considering a 4x4 neighborhood of 16 pixels and applying cubic polynomials for sharper results, though it demands more processing power and may introduce minor ringing artifacts. These methods relate to image resolution, where resizing impacts pixel density, but careful selection preserves perceptual quality across scales.
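The bilinear formula translates line-for-line into code; a minimal sketch for a single channel on the unit square between four neighboring pixels:

```python
def bilinear(f00, f10, f01, f11, a, b):
    """Weighted average of the four nearest pixels; a and b are the
    fractional offsets of the sample point in x and y (0..1)."""
    return ((1 - a) * (1 - b) * f00 + a * (1 - b) * f10
            + (1 - a) * b * f01 + a * b * f11)

# Sampling the exact center of four pixels averages them equally:
center = bilinear(0, 100, 100, 200, 0.5, 0.5)   # 100.0
# At a corner (a = b = 0) the nearest pixel is returned unchanged:
corner = bilinear(0, 100, 100, 200, 0.0, 0.0)   # 0.0
```

Because the four weights always sum to 1, the result never overshoots the range of the four inputs, which is why bilinear resizing looks smooth but slightly soft compared with bicubic.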

Object removal and cloning

Object removal and cloning are essential techniques in image editing for erasing unwanted elements from an image while preserving visual coherence, often by duplicating and blending pixels from donor regions to fill the targeted area. These methods rely on manual or automated sampling of source pixels to replace the removed content, ensuring seamless integration with the surrounding texture and structure. Unlike simple cropping, which alters the overall frame, these tools focus on localized content manipulation within the image canvas. The clone stamp tool, a foundational manual cloning method, allows users to sample pixels from a source area (donor) and paint them directly onto a target region to cover unwanted objects. Introduced in early versions of Adobe Photoshop around 1990, it copies exact pixel values without alteration, making it ideal for duplicating patterns or removing distractions like wires or blemishes in uniform areas. To use it, the editor sets a sample point using Alt-click (on Windows) or Option-click (on Mac), then brushes over the target, with options like opacity and flow controlling the application strength. This direct copying can sometimes result in visible repetition if the source is overused, but it provides precise control for texture matching in repetitive scenes such as skies or foliage. The healing brush tool extends cloning by sampling from a source area but blending the copied pixels with the target's lighting, color, and texture for more natural results. Debuting in Photoshop 7.0 in 2002, it uses Adobe's texture synthesis to match not just pixels but also tonal variations, reducing artifacts in complex areas like skin or fabric. Similar to the clone stamp, it requires manual source selection, but the blending occurs automatically during application, making it superior for repairs where exact duplication would appear unnatural. 
For instance, it effectively removes scars from portraits by borrowing nearby skin texture while adapting to local shadows. Spot healing, an automated variant, simplifies the process for small blemishes by sampling pixels from the immediate surrounding area without manual source selection. Introduced in Photoshop CS2 in 2005, the spot healing brush analyzes a radius around the target (typically 20-50 pixels) to blend content seamlessly, leveraging basic inpainting to fill spots like dust or acne. It excels in homogeneous regions but may struggle with edges or patterns, where manual healing is preferred. The tool's sample all layers option allows non-destructive edits on layered files. Content-aware fill represents a significant advancement in automated object removal, introduced by Adobe in Photoshop CS5 in 2010, using advanced inpainting to synthesize fills based on surrounding context rather than simple sampling. After selecting and deleting an object (e.g., via Lasso tool), the Edit > Fill command with Content-Aware mode generates plausible content by analyzing global image statistics and textures, often removing people or logos from backgrounds with minimal seams. This feature, powered by patch-based algorithms, outperforms manual cloning for large areas by propagating structures like lines or gradients intelligently. For example, it can extend a grassy field to replace a removed signpost, drawing from distant similar patches. At the algorithmic core of these tools, particularly healing and content-aware methods, lie patch-based synthesis techniques that fill missing regions by copying and blending overlapping patches from known image areas. Seminal work by Efros and Leung in 1999 introduced non-parametric texture synthesis, where pixels or small patches are grown iteratively by finding the best-matching neighborhood from the input sample, preserving local statistics without parametric models. 
This approach laid the groundwork for exemplar-based inpainting, as refined by Criminisi et al. in 2004, which prioritizes structural elements like edges during patch selection using a confidence-based priority function: for a patch centered at p on the fill front, priority P(p) = C(p) \cdot D(p), where C(p) is the confidence term (the fraction of already-known, reliable pixels in the patch) and D(p) is the data term measuring the strength of isophotes (lines of constant intensity) striking the fill front at p, ensuring linear structures propagate first. To minimize visible seams in synthesized regions, graph cuts optimize patch boundaries by finding low-energy cuts in an overlap graph. Kwatra et al. in 2003 developed this for texture synthesis, modeling the overlap as a graph whose nodes are pixels and whose edges are weighted by differences in intensity or gradient; the minimum cut (computed via max-flow) selects the optimal seam, reducing discontinuities. In inpainting, such seam optimization can be combined with patch synthesis to blend multi-pixel overlaps and refine boundaries for artifact-free results. These algorithms enable tools like content-aware fill to handle irregular shapes efficiently, with computational cost scaling with patch size (typically 9x9 to 21x21 pixels) and image resolution.
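The matching step these patch-based methods share can be sketched as a sum-of-squared-differences (SSD) search for the best donor patch (a simplification for illustration; Efros-Leung compares only the known pixels of a partially filled neighborhood, and production tools search far larger candidate sets):

```python
def ssd(patch_a, patch_b):
    """Sum of squared differences between two equal-sized grayscale patches."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(patch_a, patch_b)
               for a, b in zip(row_a, row_b))

def best_match(target, candidates):
    """Pick the donor patch most similar to the target region."""
    return min(candidates, key=lambda c: ssd(target, c))

target = [[100, 102],
          [ 98, 101]]
candidates = [
    [[10, 12], [8, 11]],       # wrong texture: huge SSD
    [[99, 103], [97, 100]],    # close match: SSD = 4
]
donor = best_match(target, candidates)   # picks the close match
```

The chosen donor patch is then copied (and blended at its seam) into the missing region; iterating this greedy copy-and-match step is the essence of exemplar-based filling.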

Layer-based compositing

Layer-based compositing is a fundamental technique in digital image editing that allows users to stack multiple image elements on separate layers, enabling non-destructive manipulation and precise control over composition. Introduced in Adobe Photoshop 3.0 in 1994, this feature revolutionized workflows by permitting editors to overlay, blend, and adjust components without altering underlying data, facilitating complex assemblies in professional environments such as graphic design and photography. Layers come in several types, each serving distinct purposes in compositing. Pixel layers hold raster image data, supporting direct painting and editing with tools or filters to build or modify visual content. Adjustment layers apply tonal and color corrections non-destructively atop other layers, preserving the original pixels below. Shape layers store vector-based graphics, ensuring crisp scalability for logos or illustrations integrated into raster compositions. Smart objects embed linked or embedded content, such as images or vectors, allowing repeated scaling and transformations without quality loss, which is essential for maintaining resolution in iterative editing.

Blending modes determine how layers interact, altering the appearance of stacked elements through mathematical operations on pixel values normalized between 0 and 1. The Normal mode simply overlays the top layer's color onto the base, replacing pixels directly without computation. Multiply mode darkens the image by multiplying the base and blend colors, yielding black for black inputs and leaving the base unchanged for white:

\text{Result} = \text{Base} \times \text{Blend}

Screen mode lightens the composition by inverting both colors, multiplying them, and inverting the product, producing white for white inputs and leaving the base unchanged for black:

\text{Result} = 1 - (1 - \text{Base}) \times (1 - \text{Blend})

These modes enable effects like simulating light interactions or creating depth in composites.
Opacity settings on layers control transparency from 0 (fully transparent) to 1 (fully opaque), modulating the blend's influence via the equation:

\text{Result} = \text{Opacity} \times \text{Blend Result} + (1 - \text{Opacity}) \times \text{Base}

This allows subtle integration of elements. Layers can be organized into groups for hierarchical management, collapsing related components to streamline navigation in complex projects. Masking within layers, often using grayscale thumbnails, hides or reveals portions non-destructively, similar to selection-based masking techniques but applied per layer for targeted compositing. In professional workflows, layer-based compositing supports iterative refinement, version control, and collaboration by isolating edits, reducing file sizes through smart objects, and enabling rapid previews of stacked designs, which is core to industries like advertising and film post-production.
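The Multiply, Screen, and opacity equations compose directly; a minimal per-channel sketch on values normalized to 0..1:

```python
def multiply(base, blend):
    """Darkens: black stays black, white leaves the base unchanged."""
    return base * blend

def screen(base, blend):
    """Lightens: white stays white, black leaves the base unchanged."""
    return 1 - (1 - base) * (1 - blend)

def composite(base, blend, mode, opacity):
    """Apply a blend mode, then fade its result toward the base by layer opacity."""
    result = mode(base, blend)
    return opacity * result + (1 - opacity) * base

# Multiply darkens a mid-gray; screen lightens it; opacity interpolates.
dark = composite(0.5, 0.5, multiply, 1.0)    # 0.25
light = composite(0.5, 0.5, screen, 1.0)     # 0.75
faded = composite(0.5, 0.5, multiply, 0.5)   # 0.375, halfway to the base
```

Real editors run this arithmetic per channel over every pixel of the layer stack, but the per-value math is exactly this simple.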

Appearance Enhancement

Color correction and balance

Color correction and balance in image editing involves techniques to ensure colors are represented as intended, removing unwanted casts and achieving neutrality across the image. This process is essential for maintaining color fidelity, particularly when images are captured under varying lighting conditions or displayed on different devices.

White balance is a foundational method that compensates for the color temperature of the light source, making neutral tones appear truly white or gray. White balance adjustments can be performed automatically by software algorithms that analyze the image to detect and neutralize dominant color casts, often based on scene statistics or predefined lighting presets. Manual correction typically employs an eyedropper tool to sample a neutral gray area in the image, which sets the balance point for the entire photo. Additionally, sliders for temperature (measured in kelvins, shifting from cool blue to warm orange) and tint (adjusting green-magenta shifts) allow fine-tuned control over overall color neutrality. These methods are widely implemented in tools like Adobe Camera Raw, where the white balance tool directly influences the balance of the RGB channels.

Histogram-based adjustments, such as levels and curves, provide precise control over color distribution by mapping input pixel values to output values. Levels adjustments target shadows, midtones, and highlights by setting black and white points on the histogram, clipping or expanding tonal ranges to redistribute values more evenly. Curves offer greater flexibility, representing the tonal mapping as an initially diagonal line on a graph to which users add control points for custom adjustments; the curve is interpolated between the points using spline methods to ensure smooth transitions, defined mathematically as

\text{Output} = f(\text{Input})

where f is a spline-interpolated function.
This approach allows targeted color corrections across the tonal spectrum, such as balancing subtle hue shifts in landscapes. Selective color adjustments enable editors to isolate and modify specific hue ranges, such as enhancing skin tones by fine-tuning reds and yellows while preserving other colors. This technique works by defining color sliders for primary ranges (e.g., reds, blues) and secondary composites (e.g., skin), adjusting their saturation, lightness, and balance relative to CMYK or RGB components. It is particularly useful for correcting localized color issues, like yellowish casts on portraits, without global impacts. To achieve device-independent color balance, standards like ICC profiles are employed, which embed metadata describing how colors should be rendered across input, display, and output devices. The International Color Consortium released the first ICC specification in 1994, establishing a standardized file format for color transformations that ensures consistent fidelity from capture to reproduction. These profiles reference device-specific color spaces, often built on models like CIE Lab for perceptual uniformity.
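The gray-point white balance and levels operations described above are simple per-pixel arithmetic. A minimal pure-Python sketch (helper names and the sampled values are illustrative; real tools work on whole channels, not scalars):

```python
def gray_point_balance(pixel, sampled_gray):
    # Eyedropper-style correction: scale each channel so the sampled
    # patch (which should be neutral) averages out to equal R, G, B
    target = sum(sampled_gray) / 3
    gains = [target / c for c in sampled_gray]
    return tuple(min(round(p * g), 255) for p, g in zip(pixel, gains))

def levels(value, black, white):
    # Levels: clip to the chosen black/white points, then stretch to 0-255
    v = min(max(value, black), white)
    return round(255 * (v - black) / (white - black))

# A warm cast: the sampled "gray" patch reads too red, too little blue
print(gray_point_balance((200, 180, 160), (200, 180, 160)))  # (180, 180, 180)
print(levels(128, 64, 192))  # 128: midpoint of the new range stays mid-gray
print(levels(30, 64, 192))   # 0: values below the black point are clipped
```

Applying the same gains to every pixel neutralizes the cast globally, which is why sampling a genuinely neutral surface matters.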

Contrast, brightness, and gamma adjustments

Contrast, brightness, and gamma adjustments are fundamental techniques in image editing that modify the tonal distribution of an image to enhance visibility, correct exposure errors, and achieve a desired aesthetic mood without altering hue or saturation. These adjustments primarily target luminance values across shadows, midtones, and highlights, balancing overall lightness or darkness while preserving perceptual uniformity. In digital workflows they are often implemented via sliders or curves in software interfaces, enabling non-destructive edits that can be fine-tuned to avoid loss of detail.

Brightness and contrast adjustments typically employ linear transformations that shift tonal values uniformly across the image. The brightness slider adds or subtracts a constant from each pixel's luminance, lightening or darkening the entire image, while the contrast slider stretches or compresses the range around the mid-gray point by multiplying deviations from 128 (on an 8-bit scale), increasing separation between light and dark areas. Excessive use risks clipping, where highlight values exceed the maximum (255 in 8-bit) or shadows fall below zero; clipped areas become uniformly white or black and lose detail. To mitigate this, editors often preview adjustments against the histogram, which visualizes the tonal distribution and warns of impending clipping.

Gamma correction provides a nonlinear adjustment to refine tonal reproduction, particularly for matching image data to display characteristics or perceptual response. It applies a power-law transformation,

\text{Output} = \text{Input}^{\frac{1}{\gamma}}

where \gamma is typically 2.2 in standard sRGB workflows; raising values to the power 1/\gamma encodes linear light for gamma-corrected display, while the inverse exponent \gamma decodes gamma-encoded images back to linear light, ensuring accurate luminance rendering.
This nonlinear mapping preserves midtone detail better than linear shifts, as it allocates more of the available bit depth to darker tones, where the human eye is more sensitive to differences. In practice, gamma adjustments in editing software allow fine control over the overall tone curve, improving visibility in underexposed shadows or taming harsh highlights without uniform linear changes.

In RAW image editing, exposure compensation simulates in-camera adjustment by applying a linear multiplier to the raw sensor data, scaling recorded photon counts to recover or enhance overall lightness before demosaicing. Tools like Adobe Camera Raw's exposure slider adjust values in stops (+1.0 EV doubles brightness), leveraging the higher dynamic range of RAW files (often 12-14 bits) to minimize noise compared to editing JPEGs. This method is particularly useful for correcting underexposure, as it preserves latent highlight detail that would otherwise be lost in processed formats.

The use of gamma in digital image tools traces its origins to the 1980s, when CRT monitors' inherent nonlinear response (approximately a power function with exponent around 2.5) necessitated correction to achieve linear light output for accurate reproduction. Standardization around gamma 2.2 in the 1990s, influenced by early digital video and graphics standards, carried over to modern software, ensuring compatibility across displays and workflows. This historical adaptation continues to shape tools like Photoshop's gamma controls, bridging analog display limitations with digital precision.
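The linear brightness/contrast shifts and the nonlinear gamma remap described above can be combined in one per-pixel sketch (pure Python on 8-bit values; the single-function interface is illustrative, not any specific application's):

```python
def adjust(value, brightness=0, contrast=1.0, gamma=1.0):
    # Contrast scales deviations from mid-gray (128); brightness adds an offset
    v = 128 + contrast * (value - 128) + brightness
    # Clip to the 8-bit range: values pushed past 0 or 255 lose detail
    v = min(max(v, 0), 255)
    # Gamma is a nonlinear remap of the normalized value
    return round(255 * (v / 255) ** (1.0 / gamma))

print(adjust(100, brightness=20))   # 120: constant offset
print(adjust(100, contrast=1.5))    # 86: midtones spread away from 128
print(adjust(250, brightness=20))   # 255: clipped to uniform white
print(adjust(64, gamma=2.2) > 64)   # True: gamma > 1 lifts dark tones
```

Note how the clipping step makes the brightness/contrast stage lossy, while the gamma stage is a monotonic remap that preserves ordering of tones.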

Sharpening, softening, and noise reduction

Sharpening techniques enhance the perceived detail in images by increasing contrast along edges and textures, a process essential for compensating for limitations in capture devices. The unsharp mask method, originally developed in the 1930s for improving X-ray image reproduction and later adapted for analog photography to enhance fine detail in high-contrast reproductions like maps, was adapted for digital image processing in software such as Adobe Photoshop starting in the 1990s. This technique creates a mask by blurring the original image with a low-pass filter, typically Gaussian, then subtracts it from the original to isolate high-frequency edge details, which are added back with a controllable amount to amplify transitions without altering overall brightness. Mathematically, the output sharpened image g(x,y) is given by
g(x,y) = f(x,y) + k \left[ f(x,y) - f_b(x,y) \right],
where f(x,y) is the input image, f_b(x,y) is the blurred version, and k (often between 0.5 and 2) controls the sharpening strength; this convolution-based approach highlights edges by emphasizing differences in pixel intensities. A related edge-detection method employs the Laplacian filter, a second-order derivative operator that computes the divergence of the image gradient to detect rapid intensity changes, defined as
\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}.
The Laplacian kernel, such as the 3x3 matrix with -4 at the center, 1 at the four edge-adjacent positions, and 0 at the corners, is convolved with the image; combining the filtered output with the original (added or subtracted according to the kernel's sign convention) boosts edge responses and produces the sharpened result. The rise of charge-coupled device (CCD) sensors, which came to dominate digital cameras through the 1990s and produced softer edges than film due to finite resolution and anti-aliasing filters, made digital sharpening indispensable for restoring perceptual acuity in post-processing workflows.
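The unsharp-mask formula g = f + k(f - f_b) is easiest to see in one dimension. In this sketch a 3-tap box blur stands in for the Gaussian low-pass filter (a simplification; the structure of the result is the same):

```python
def box_blur(signal):
    # 3-tap box average as a stand-in for a Gaussian low-pass filter
    out = []
    for i in range(len(signal)):
        lo, hi = max(i - 1, 0), min(i + 1, len(signal) - 1)
        window = signal[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, k=1.0):
    # g = f + k * (f - f_b): add the high-frequency detail back, scaled by k
    blurred = box_blur(signal)
    return [f + k * (f - fb) for f, fb in zip(signal, blurred)]

edge = [10, 10, 10, 200, 200, 200]
sharpened = unsharp_mask(edge, k=1.0)
# Flat regions are untouched; the step edge undershoots on the dark side and
# overshoots on the bright side, the local contrast boost that reads as sharpness
print(sharpened)
```

In practice the result is clipped back to the valid range, and k (the "amount" slider) controls how pronounced the overshoot halos become.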
Softening, conversely, applies blur effects to diffuse detail, either for artistic simulation or to correct over-sharpening. Gaussian blur uses a rotationally symmetric kernel based on the Gaussian function to spread pixel values smoothly, preserving isotropy and minimizing artifacts in corrective applications like reducing moiré patterns. Motion blur simulates linear movement by averaging pixels along a directional vector, useful for artistic effects or deblurring compensation, while radial blur creates circular diffusion from a center point to mimic spinning or zooming, often employed in creative compositing.

Noise reduction addresses imperfections like sensor grain or compression artifacts by suppressing random variations while retaining structural content. Median filtering, made practical for images by the fast algorithm of Huang et al. in 1979, replaces each pixel with the median value of its neighborhood, excelling at removing impulsive "salt-and-pepper" noise from digital sensors without blurring edges as severely as linear filters. For the more complex Gaussian or Poisson noise common in CCD captures, wavelet denoising decomposes the image into wavelet coefficients, applies soft-thresholding to shrink noise-dominated small coefficients toward zero, and reconstructs the signal; this method, pioneered by Donoho in 1995, achieves near-optimal risk bounds while preserving textures and edges in noisy images.
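Median filtering's resistance to salt-and-pepper noise is easy to demonstrate in one dimension (a minimal sketch, not an optimized implementation):

```python
import statistics

def median_filter(signal, radius=1):
    # Replace each sample with the median of its neighborhood; unlike a
    # mean filter, an isolated spike can never win the median vote
    out = []
    for i in range(len(signal)):
        lo, hi = max(i - radius, 0), min(i + radius, len(signal) - 1)
        out.append(statistics.median(signal[lo:hi + 1]))
    return out

noisy = [10, 10, 255, 10, 10, 0, 10, 10]  # salt (255) and pepper (0) spikes
print(median_filter(noisy))  # every spike collapses back to 10
```

A mean filter over the same window would instead smear each spike across its neighbors, which is why the median is preferred for impulsive noise.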

Advanced Editing

Perspective and distortion correction

Perspective and distortion correction addresses geometric aberrations in images caused by lens imperfections or camera positioning, restoring accurate spatial relationships for applications like photography and document processing. These corrections are essential in image editing software to mitigate effects such as barrel distortion, where straight lines bow outward, or pincushion distortion, where they curve inward, both arising from radial lens properties. Lens correction typically employs polynomial models to remap distorted pixels to their ideal positions. The Brown-Conrady radial distortion model, a foundational approach, approximates this using an even-order polynomial:
r_d = r (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
where r_d is the distorted radial distance from the image center, r is the undistorted distance, and k_1, k_2, k_3 are coefficients fitted via calibration; negative k_1 values correct barrel distortion, while positive ones address pincushion. This model, introduced by Duane C. Brown in 1966, enables precise compensation by inverting the transformation during editing.
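The radial model above maps undistorted radii to distorted ones; correction needs the inverse, which has no closed form and is typically solved iteratively. A minimal sketch using fixed-point iteration (the coefficient value is illustrative):

```python
def distort(r, k1, k2=0.0, k3=0.0):
    # Brown-Conrady forward model: undistorted radius -> distorted radius
    r2 = r * r
    return r * (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3)

def undistort(r_d, k1, k2=0.0, k3=0.0, iters=20):
    # Invert the polynomial by fixed-point iteration: start from the
    # distorted radius and repeatedly divide out the distortion factor
    r = r_d
    for _ in range(iters):
        r2 = r * r
        r = r_d / (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3)
    return r

r_d = distort(1.0, k1=-0.1)      # negative k1 = barrel: point pulled inward
print(r_d)                        # 0.9
print(undistort(r_d, k1=-0.1))   # ~1.0: the original radius is recovered
```

Editing software applies this inverse per pixel (remapping against the optical center) with interpolation to fill the corrected grid.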
Perspective warp techniques correct viewpoint-induced distortions, such as converging lines in architectural shots, by aligning vanishing points and applying mesh-based transformations. Vanishing point detection identifies the convergence of parallel lines, often using line clustering or Hough transforms, to estimate the image's projective geometry; a subsequent homography or mesh warp then rectifies the view to a frontal plane. For instance, mesh transformations divide the image into a grid and adjust control points to conform to detected perspective planes, ensuring smooth deformation without artifacts.

Keystone correction specifically targets trapezoidal distortions in scanned documents or projected images, where off-axis capture causes top-bottom asymmetry. Algorithms detect document boundaries or vanishing points to compute a projective (homography) transformation, pre-warping the image to yield a rectangular output; camera-assisted methods, for example, use region-growing on projected patterns to infer the screen-to-camera mapping and apply the inverse distortion. This is particularly useful for mobile scanning, improving text readability for OCR. Software tools like Adobe Camera Raw, introduced in 2003, incorporate profile-based auto-correction by matching lens metadata to pre-calibrated distortion profiles, automatically applying polynomial adjustments for common camera-lens combinations. These features streamline workflows, combining manual sliders for fine-tuning with automated detection for efficiency.
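Perspective rectification ultimately applies a 3x3 projective transform (homography) with a perspective divide to each point. A minimal sketch (the keystone matrix below is illustrative, not a computed calibration):

```python
def apply_homography(H, x, y):
    # Multiply the homogeneous point (x, y, 1) by H, then divide by w;
    # the divide is what makes this projective rather than merely affine
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Identity leaves points alone
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, 3, 4))  # (3.0, 4.0)

# A keystone-style matrix: rows with larger y get w < 1 and are scaled
# outward, widening the narrow end of a trapezoid back to a rectangle
K = [[1, 0, 0], [0, 1, 0], [0, -0.001, 1]]
print(apply_homography(K, 100, 0))    # bottom row unchanged
print(apply_homography(K, 100, 500))  # x scaled from 100 to ~200
```

Real tools fit H from four or more point correspondences (e.g. detected document corners mapped to a rectangle) and then resample the image through its inverse.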

Special effects and filters

Special effects and filters in image editing encompass a range of creative transformations that stylize images, drawing from traditional analog techniques while leveraging digital precision. These effects originated in analog darkroom processes, such as solarization, discovered in 1862 by Armand Sabattier through accidental re-exposure of film during development and later artistically refined by Man Ray in the 1920s to produce tone reversals. The transition to digital began with early software like Adobe Photoshop 1.0 in 1990, which included basic filters emulating these effects, evolving into sophisticated tools by the late 1990s with the introduction of layer effects in Photoshop 5.0 (1998). This shift allowed non-destructive application and precise control, transforming darkroom experimentation into accessible digital workflows.

Many special effects rely on convolution matrices, small kernels that process pixel neighborhoods to generate stylized outputs. The emboss filter, for instance, simulates a raised, three-dimensional surface by emphasizing edges in a specific direction, using a kernel such as

\begin{bmatrix} -2 & -1 & 0 \\ -1 & 1 & 1 \\ 0 & 1 & 2 \end{bmatrix}

This matrix computes directional intensity differences, producing highlights and shadows for a metallic or engraved appearance; it has been a standard in image processing since early digital tools. Similarly, the solarize filter digitally emulates the analog Sabattier effect by inverting pixel values above a threshold (typically 50% brightness), creating surreal tone reversals where dark areas lighten and vice versa, as implemented in Photoshop's Stylize menu since version 1.0. The oil paint emulation filter applies edge-preserving smoothing via adaptive convolution or bilateral filters to mimic brush strokes, reducing detail in flat areas while retaining contours; a seminal approach combines color palette quantization with spatial filtering for realistic painterly results.
Warping and liquify effects enable organic distortions for artistic manipulation, often using displacement maps—grayscale images where brightness values dictate pixel shifts. Introduced in Photoshop as the Displace filter in early versions, displacement maps offset pixels based on luminance (e.g., midtones for no shift, whites for outward push), allowing textures to conform to surfaces like fabric wrinkles. The liquify tool, debuted in Photoshop 6.0 (2000), extends this interactively with brushes for pushing, puckering, or bloating pixels, facilitating fluid reshaping without rigid geometry; it supports forward warping for natural deformations in creative composites. Gradient maps and posterization further convert images into artistic renditions by remapping tonal values. A gradient map adjustment layer, available in Photoshop since version 4.0 (1996), replaces grayscale intensities with colors from a user-defined gradient—shadows to one end, highlights to the other—enabling dramatic stylistic shifts like duotones or surreal palettes. Posterization reduces color depth to a specified number of levels (e.g., 4-8 tones), creating bold, flat areas reminiscent of screen printing or pop art; this effect, rooted in analog halftone limitations, digitally quantizes continuous gradients for high-impact visuals. These techniques, often combined with blending modes for layered integration, prioritize conceptual stylization over photorealism.
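Solarize and posterize, described above, are simple per-pixel remaps. A minimal sketch on 8-bit values (the threshold and level count are the conventional defaults):

```python
def solarize(value, threshold=128):
    # Digital Sabattier effect: invert tones at or above the threshold
    return 255 - value if value >= threshold else value

def posterize(value, levels=4):
    # Quantize the 0-255 range to a fixed number of evenly spaced tones
    step = 255 / (levels - 1)
    return round(round(value / step) * step)

print([solarize(v) for v in (0, 100, 200, 255)])  # [0, 100, 55, 0]
print([posterize(v) for v in (0, 60, 130, 255)])  # [0, 85, 170, 255]
```

Solarize leaves shadows intact and folds highlights back down, producing the characteristic tone reversal; posterize snaps every value to its nearest quantization level, flattening smooth gradients into bold bands.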

Automated and AI-driven enhancements

Automated and AI-driven enhancements in image editing leverage machine learning techniques, particularly deep neural networks, to perform complex adjustments and modifications with minimal user intervention. These methods automate tasks that traditionally required manual expertise, such as improving image quality or generating new content, by learning patterns from large datasets. Generative adversarial networks (GANs) and diffusion models have been pivotal in advancing these capabilities, enabling realistic outputs that preserve semantic consistency while enhancing visual appeal.

Super-resolution techniques exemplify AI-driven upscaling, where low-resolution images are enhanced to higher resolutions without introducing artifacts like blurring or aliasing. A prominent approach is the Enhanced Super-Resolution Generative Adversarial Network (ESRGAN), which builds on the original SRGAN by introducing a Residual-in-Residual Dense Block (RRDB) architecture without batch normalization, a relativistic discriminator for better realism assessment, and perceptual losses based on pre-activation features. This yields more natural textures and details, outperforming prior methods on visual quality metrics and securing first place in the 2018 PIRM Super-Resolution Challenge. ESRGAN's GAN framework pits a generator against a discriminator to produce photorealistic high-resolution images, making it widely adopted for applications like restoring old photographs or enlarging digital media.

One-click auto-enhancement tools use neural networks to automatically adjust exposure, color balance, and overall tone, analyzing image statistics to apply corrections that mimic professional editing. For instance, methods employing convolutional neural networks (CNNs) estimate optimal adjustments by learning from paired datasets of under- and over-exposed images, correcting color shifts through a decomposition into illumination and reflectance components.
These networks, often trained end-to-end, enable rapid enhancements that improve dynamic range and vibrancy without user-specified parameters, as demonstrated in frameworks that handle both low-light and high-exposure scenarios with minimal computational overhead. Such techniques are integrated into software like Adobe Lightroom, providing instant previews of balanced results.

AI-driven inpainting and style transfer extend automation to content generation and artistic reconfiguration. Inpainting fills masked areas with contextually appropriate content using generative models, while style transfer applies the aesthetic of a reference image, such as brush strokes or color palettes, to the target without altering its structure. Adobe's 2023 integration of Content Credentials in tools like Photoshop's Generative Fill exemplifies this, employing diffusion-based models to perform seamless inpainting and style adaptations while embedding metadata that discloses AI involvement for transparency. These features allow users to remove objects or restyle scenes via text prompts, ensuring edits blend naturally with surrounding elements.

Recent advances in diffusion models have transformed generative editing, enabling precise manipulations guided by textual or visual inputs. Stable Diffusion, introduced in 2022, operates in latent space to iteratively denoise images, facilitating tasks like localized editing or global restyling with high fidelity. Unlike earlier GANs, diffusion models excel at handling complex prompts for inpainting and outpainting, producing diverse yet coherent results and avoiding failure modes such as mode collapse. In October 2024, Stability AI released Stable Diffusion 3.5, featuring enhanced image quality, better typography rendering, and variants such as Large and Medium for diverse generative-editing applications.
Such models also underpin creative workflows in tools like Adobe Firefly, where users can generate or modify elements while maintaining traceability through attached Content Credentials.

Applications and Considerations

Multi-image editing and merging

Multi-image editing involves techniques that integrate and process content from multiple source images to create composite outputs with enhanced detail, dynamic range, or field of view. These methods are essential for applications such as landscape photography and product visualization, where a single image may lack sufficient coverage or tonal fidelity.

Panorama stitching aligns and merges overlapping images to form a wider scene, typically using feature matching algorithms to identify corresponding points across images. A seminal approach is the Scale-Invariant Feature Transform (SIFT), introduced by David Lowe in 1999, which detects and describes local features invariant to scale, rotation, and illumination changes, enabling robust matching even under varying viewpoints. After alignment, seam blending minimizes visible boundaries in overlap regions; one influential technique is gradient-domain blending, which optimizes seams by solving a Poisson equation to preserve image gradients while ensuring seamless transitions. Commercial tools like Adobe's Photomerge, available since Photoshop CS (2003), automate this process by combining feature detection, geometric warping, and blending for user-friendly panorama creation.

High Dynamic Range (HDR) merging fuses multiple images captured at different exposures, known as exposure bracketing, into a single image covering a broader range of luminance than any one exposure can record. The process begins by registering the bracketed exposures to correct minor misalignments from camera movement, followed by weighted fusion of pixel values based on exposure-quality metrics such as saturation and well-exposedness.
The resulting HDR image often requires tone mapping to fit low-dynamic-range displays. The global Reinhard operator (2002) first scales world luminance L_w by a key value a relative to the scene's log-average luminance \bar{L}_w,

L = \frac{a}{\bar{L}_w} L_w

and then compresses the result:

L_d = \frac{L}{1 + L}

where L_d is the display luminance; the mapping is nearly linear for dark pixels and asymptotically compresses bright pixels toward 1, effectively reducing the dynamic range while preserving overall contrast.

Batch editing applies uniform modifications across multiple images to ensure consistency, commonly using recorded actions or scripts in software like Adobe Photoshop. Actions capture a sequence of edits, such as resizing, color adjustments, or watermarking, into a reusable macro, which can then be executed via the Batch command on a folder of files, processing hundreds of images efficiently without manual repetition. This automation, supported by scripting languages like JavaScript, extends to complex workflows, including conditional edits based on image metadata.
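The Reinhard tone-mapping step described above is a per-luminance computation. A minimal sketch (here the average luminance is passed in as a parameter; the original operator uses the log-average of the scene):

```python
def reinhard(l_world, key=0.18, l_avg=0.5):
    # Scale so the scene average lands near the chosen key value, then
    # compress: small L stays nearly linear, large L saturates below 1
    l = (key / l_avg) * l_world
    return l / (1 + l)

print(reinhard(0.0))      # 0.0: black maps to black
print(reinhard(0.5))      # the average-luminance pixel lands near the key
print(reinhard(1000.0))   # bright values compress toward (never reach) 1
```

The key value plays the role of an exposure control: raising it brightens the mapped image overall while the L/(1+L) curve keeps highlights from clipping.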

Output for print and web

Preparing edited images for output involves tailoring the file specifications to the target medium, ensuring optimal quality, file size, and compatibility. For print, this typically requires high-resolution settings and color management suited to physical reproduction, while web output emphasizes compression and adaptability to digital displays. These preparations bridge the gap between digital editing and final delivery, influencing everything from visual fidelity to loading performance.

In print preparation, images are standardized to a resolution of 300 dots per inch (DPI) to achieve sharp reproduction on presses, particularly for materials like magazines printed at 150-175 lines per inch (lpi) screen ruling. Color profiles are converted to CMYK mode, which aligns with the cyan, magenta, yellow, and black inks used in offset printing, preventing color shifts from the RGB gamut common in digital editing. Additionally, bleed areas, typically extending 0.125 to 0.25 inches beyond the trim edge, are added to account for cutting tolerances, accompanied by trim marks that guide precise guillotining during production.

For web optimization, compression techniques reduce file sizes without excessive quality loss; for instance, JPEG files are often saved at quality levels of 70-85% to balance sharpness and download speed. Responsive sizing employs HTML attributes like srcset to serve appropriately scaled versions based on the device viewport, minimizing bandwidth use on mobile networks. Modern formats such as AVIF, released in 2019 by the Alliance for Open Media, provide superior compression, up to 50% smaller than JPEG for similar quality, while supporting transparency and high dynamic range, making them well suited to web delivery. Slicing divides a composite image into modular pieces, such as headers or buttons, which are exported separately and reassembled via HTML tables or CSS for interactive web layouts, allowing targeted optimization per element.
Embedding metadata, including EXIF tags for camera details or IPTC fields for captions and copyrights, preserves contextual information in the file header, aiding asset management and searchability without altering the visual content.

The evolution of image output traces from halftone printing in the late 19th century, which simulated tones via patterned dots on metal plates for letterpress reproduction, to the 1990s advent of digital Raster Image Processor (RIP) software. These RIP systems, integrated with imagesetters like the Linotronic 330, converted vector and raster data directly into printable halftones, enabling computer-to-plate workflows and revolutionizing prepress efficiency.

Ethical and legal considerations

Image editing raises significant ethical concerns, particularly in fields like journalism where authenticity is paramount. The National Press Photographers Association (NPPA) adopted its Digital Manipulation Code of Ethics in 1991, emphasizing that altering the content of a photograph undermines its credibility as a journalistic document. Under this code, digital alterations should be limited to correcting mistakes, improving clarity, or enhancing visual presentation, and any change to content must be disclosed to the viewer. This principle aligns with broader journalistic standards requiring transparency to prevent deception.

In copyright law, editing images often creates derivative works, which are subject to the original creator's rights. Under U.S. law, the Digital Millennium Copyright Act (DMCA) of 1998 prohibits circumventing technological protection measures on digital works, including images, which can limit unauthorized editing. However, the fair use doctrine allows certain transformative uses, such as criticism or parody, where the edited image adds new expression or meaning without supplanting the market for the original. For instance, significantly modifying an image for artistic purposes may qualify as transformative, provided the use satisfies the four-factor fair use test.
The rise of AI-driven image editing has amplified concerns over deepfakes and misinformation, where synthetic media can fabricate realistic alterations indistinguishable from reality. The European Union's AI Act, which entered into force in 2024, imposes transparency obligations on deepfakes, requiring that artificially generated or manipulated content such as images be clearly labeled to inform users and mitigate societal harms like disinformation. This regulation mandates disclosure of synthetic elements in edited media. To address provenance, standards like those of the Coalition for Content Provenance and Authenticity (C2PA), whose first specification was released in 2022, promote watermarking and metadata embedding to verify content origins and track edits. These content credentials provide a verifiable edit history, helping distinguish AI-generated from authentic images and fostering ethical practices by enabling detection of manipulations. Such measures are crucial for maintaining trust in digital media amid advancing automated enhancements.

References

  1. [1]
    Digital Image Processing: It's All About the Numbers
    Jan 26, 2023 · When we use image editing software to alter images, they run calculations across these numbers to make edits. If we want to use a more speedy ...
  2. [2]
    [PDF] Using Image Editors To Change - ERIC
    Image-editing software permits the correction of. phltographic faults by providing simple means for cropping, adjusting. brightness, contrast, and color ...
  3. [3]
  4. [4]
    [PDF] Photo Tampering Throughout History - College of Computing
    Here, I have collected some examples of tampering throughout history. To help contend with the implications of this tampering, we have developed a series of.Missing: authoritative | Show results with:authoritative
  5. [5]
    History of digital photo manipulation | National Science and Media ...
    Jun 16, 2021 · The 21st century has seen a boom in digital photo manipulation, from Photoshop to Instagram. But the phenomenon goes back to the early 1990s.Missing: authoritative | Show results with:authoritative
  6. [6]
    Celebrating 35 Years of Creativity, Community, and Innovation with ...
    Feb 19, 2025 · Photoshop was introduced by Thomas and John Knoll to provide powerful digital image editing tools to edit and enhance photographs.
  7. [7]
    [PDF] 2021-S-0036 Standard Guide for Image Authenication
    3.1 Definitions: 60. 3.1.1 alter, v –to change image features through image editing techniques. 61. 3.1.2 composite, v –to duplicate or combine elements from ...
  8. [8]
    Raster vs. vector: What are the differences? - Adobe
    Raster files are generally larger than vector files. They can contain millions of pixels and incredibly high levels of detail.
  9. [9]
    Raster vs. Vector Images - All About Images - Research Guides
    Sep 8, 2025 · Raster images use pixels, are resolution-dependent, and lose quality when scaled. Vector images use points and equations, are infinitely ...
  10. [10]
    Pixels Per Inch & Pixel Density | What is PPI Resolution? - Adobe
    Pixels per inch (PPI) refers to the number of pixels contained within each inch of a digital image. It also refers to the set number of pixels a screen can ...
  11. [11]
    When We Say We Want “Resolution”: DPI and PPI Explained
    Aug 22, 2023 · DPI, short for dots per inch, tells us how many dots of ink can be displayed or printed within one inch of space.
  12. [12]
    Photoshop image size and resolution - Adobe Help Center
    Feb 25, 2025 · File size is proportional to the pixel dimensions of the image. Images with more pixels may produce more detail at a given printed size, but ...
  13. [13]
    Sketchpad, a man-machine graphical communication system
    Sketchpad, a man-machine graphical communication system. Author(s). Sutherland, Ivan Edward,1938-. Thumbnail. Download15036306-MIT.pdf (40.66Mb). Terms of use.Missing: original | Show results with:original
  14. [14]
  15. [15]
    Subtractive CMYK Color Mixing | Color Models - X-Rite
    Sep 28, 2020 · To render color on paper, printers use reflected light and subtractive color inks. By laying Cyan, Magenta, and Yellow pigments upon a white, ...
  16. [16]
    [PDF] Color Gamut Transform Pairs - Alvy Ray Smith
    The first of the two new models, the hexcone model, is intended to capture the common notions of hue, saturation, and value (HSV) as three dimensions for de-.
  17. [17]
    Bit depth and preferences - Adobe Help Center
    May 24, 2023 · Bit depth specifies how much color information is available for each pixel in an image. More bits of information per pixel result in more available colors.
  18. [18]
    [PDF] Adobe RGB (1998) Color Image Encoding
    May 5, 2005 · This document specifies an output-referred RGB color encoding named Adobe RGB (1998) to be used for digital exchange of Adobe RGB (1998)-encoded ...
  19. [19]
    A guide to image file formats and image file types | Adobe Acrobat
    The most common image file formats include JPEG, GIF, PNG, TIFF, BMP, and PDF. Read on to learn more about which files will best meet your imagery needs.
  20. [20]
    Image file type and format guide - Media - MDN Web Docs - Mozilla
    Most TIFF files are uncompressed, but lossless PackBits and LZW compression are supported, as is lossy JPEG compression.
  21. [21]
    Image formats: PNG - web.dev
    Feb 1, 2023 · PNG uses lossless compression, supports alpha transparency, and is best for simple artwork with semi-transparency, but has large file sizes. It ...
  22. [22]
    What are TIFF files and how do you open them? - Adobe
    What are TIFFs used for? High-quality photographs, high-resolution scans, and container files; TIFF files can hold layers created with an image editing program ...
  23. [23]
    Image File Formats: RAW, JPEG & more - Canon Georgia
    A RAW file is what the name suggests: raw, unprocessed data. It contains the image data exactly as captured on your camera sensor. Any white balance, Picture ...
  24. [24]
    JPEG Image Deblocking Using Deep Learning - MathWorks
    This example shows how to reduce JPEG compression artifacts in an image using a denoising convolutional neural network (DnCNN).
  25. [25]
    Standards: Exif - Photometadata.org
    Some common data fields include the camera make and model, its serial number, the date and time of image capture, the shutter speed, the aperture, the lens used ...
  26. [26]
    Photo Metadata - IPTC
    IPTC Photo Metadata sets the industry standard for administrative, descriptive, and copyright information about images.
  27. [27]
    WebP, a new image format for the Web - Google for Developers Blog
    Sep 30, 2010 · To improve on the compression that JPEG provides, we used an image compressor based on the VP8 codec that Google open-sourced in May 2010. We ...
  28. [28]
    HEIF - High Efficiency Image File Format
    High Efficiency Image File Format (HEIF) is a standard for storage and sharing of images and image sequences, developed by MPEG.
  29. [29]
  30. [30]
    Adobe Explains It All: Photoshop
    Feb 25, 2015 · Photoshop was developed in 1988 by the Knoll brothers, and version 1.0 was released by Adobe to the public on February 19, 1990. Thomas Knoll ...
  31. [31]
    Adobe Explains It All: Illustrator
    Mar 23, 2015 · The first version available to the general public was released on March 19, 1987. Who is it for? Illustrator is used by artists and graphic ...
  32. [32]
    How It All Started… - GIMP
    Jul 29, 1995 · GIMP 0.54, the (in)famous Motif release, was announced in February 1996 by Peter Mattis ("The GIMP v0.54 -- General Image Manipulation Program").
  33. [33]
    10 Years of Lightroom
    Feb 19, 2017 · On February 19, 2007, Lightroom 1.0 was officially released. The release version had some major changes from the earlier betas.
  34. [34]
    Editing in the Develop module in Lightroom Classic
    Apr 27, 2021 · Nondestructive editing means you can explore and create different versions of your photo without degrading your original image data.
  35. [35]
    Autodesk Acquires Online Photo Editing Service Pixlr - TechCrunch
    Jul 19, 2011 · Pixlr was started in Sweden in August 2008 and offers a suite of cloud-based image tools and utilities such as photo editing tool Pixlr Editor, ...
  36. [36]
    About Canva
    Launched in 2013, Canva is an online design and publishing tool with a mission to empower everyone in the world to design anything and publish anywhere.
  37. [37]
    Adobe launches Creative Cloud subscription service - CNET
    May 11, 2012 · The subscription service to CS6 software such as Photoshop, mobile apps, and online services is now live -- as is a potentially more stable business for Adobe.
  38. [38]
    Adobe Launches Sensei, AI for Digital Experiences - eWeek
    Nov 3, 2016 · At its Adobe MAX 2016 conference, Adobe announced a new artificial intelligence framework and services known as Sensei across its palette of ...
  39. [39]
    [PDF] Adobe Photoshop 5.5 Reviewer's Guide
    Image selection You can make selections in either program using different marquee selection tools, the lasso and polygon lasso tools, and the magic wand tool.
  40. [40]
    Free Online Image Editor | Magic Wand Selection Tool - Gifgit
    The magic wand uses a flooding algorithm that spreads out from the clicked pixel and compares the color of adjoining pixels to the color of the clicked pixel.
  41. [41]
    Image Thresholding Techniques in Computer Vision - GeeksforGeeks
    Jul 23, 2025 · Image thresholding is a technique in computer vision that converts a grayscale image into a binary image by setting each pixel to either black or white based on a threshold value.
  42. [42]
    Select an object with the Magic Wand tool - Adobe Help Center
    May 24, 2023 · In the tool options bar, specify a selection option: New Selection, Add to Selection, Subtract from Selection, or Intersect with Selection.
  43. [43]
    Clipping Path Vs Masking : Know the Key Difference
    Jul 25, 2025 · A layer mask is a non-destructive editing tool to conceal or show portions of a layer. ... You can scale, resize, and edit images ...
  44. [44]
    Alpha Channel Masking: Perfecting Complex Image Edits
    Jul 5, 2025 · Masking allows you to make non-destructive edits, such as changing a background, adjusting exposure, or isolating parts of an image without ...
  45. [45]
    How to feather edges in Adobe Photoshop.
    Feathering is a way to soften the hard edges of an object in your image. By gradually fading between the colors of the pixels on the edge and the pixels ...
  46. [46]
    Adobe Has Introduced An Object Selection Tool That's Powered By ...
    Oct 31, 2019 · The selection tools in Adobe Photoshop are getting an upgrade with the introduction of a machine learning, AI-powered Object Selection tool.
  47. [47]
    Adobe previews precise Photoshop object selection powered by ...
    Oct 29, 2019 · A new Object Selection Tool hopes to speed up editing workflows through Sensei technology, Adobe's artificial intelligence platform.
  48. [48]
    How to use (& break) the rule of thirds in photography - Adobe
    The rule of thirds places your subject in the left or right third of an image, using a grid with four crosshairs to balance the subject with negative space.
  49. [49]
  50. [50]
  51. [51]
    Adjust crop, rotation, and canvas size - Adobe Help Center
    Apr 18, 2024 · The Canvas Size command lets you increase or decrease an image's canvas size. Increasing the canvas size adds space around an existing image.
  52. [52]
  53. [53]
    Retouch images with the Clone Stamp tool - Adobe Help Center
    May 24, 2023 · The Clone Stamp tool copies pixels from one part of an image to another.
  54. [54]
    How to Use the Clone Stamp Tool in Adobe Photoshop | Photzy
    What is the clone tool or stamp? How is it used? What are its best uses? Are there adjunct tools that work well with the clone tool?
  55. [55]
    How to use the Healing Brush tool in Photoshop - Adobe Help Center
    May 24, 2023 · Learn how to repair imperfections by painting with pixels from another part of your image using the Healing Brush tool.
  56. [56]
    Remove marks with the Spot Healing Brush tool - Adobe Help Center
    Apr 30, 2024 · The Spot Healing Brush tool quickly repairs image imperfections. ... For smaller areas: Simply select the area you want to fix. For larger areas: ...
  57. [57]
    Photoshop CS5 New Features - Content Aware Fill Tutorial
    Make objects magically disappear from a photo as if they were never there with the new Content-Aware Fill feature in Photoshop CS5.
  58. [58]
    [PDF] Texture Synthesis by Non-parametric Sampling - UC Berkeley EECS
    In this paper we have chosen a statistical non-parametric model based on the assumption of spatial locality. The result is a very simple texture synthesis ...
  59. [59]
    [PDF] Region Filling and Object Removal by Exemplar-Based Image ...
    IEEE Transactions on Image Processing, Vol. 13, No. 9, Sep 2004. Region Filling and Object Removal by Exemplar-Based Image Inpainting. A. Criminisi et al.
  60. [60]
    [PDF] Graphcut Textures: Image and Video Synthesis Using Graph Cuts
    Unlike dynamic programming, our graph cut technique for seam optimization is applicable in any dimension. We specifically explore it in 2D and 3D to perform ...
  61. [61]
    Adobe Photoshop 3.0 in 1994 - Web Design Museum
    In November 1994, Windows, IRIX, and Solaris versions were launched. Adobe Photoshop 3.0 included new features such as layers and tabbed palettes. The Mac ...
  62. [62]
    Manage layers and groups
  63. [63]
  64. [64]
  65. [65]
    Photoshop | Make Software, Change the World!
    Photoshop introduced an unprecedented tool for shaping emotions, opinions, and actions, from art and advertising to propaganda, from science to selfies.
  66. [66]
    Make color and tonal adjustments in Adobe Camera Raw
    Oct 27, 2025 · To adjust the white balance quickly, select the White Balance tool and select an area in the image that you want to be a neutral gray.
  67. [67]
    Surprising looks with white balance - Adobe
    Adjusting white balance in editing. Tweaking the color balance of your photos in post can be as simple as adjusting the Temperature slider in Adobe Lightroom ...
  68. [68]
    Using the Curves adjustment in Photoshop - Adobe Help Center
    Jul 18, 2024 · In the Curves adjustment, you adjust points throughout an image's tonal range. Initially, the image's tonality is represented as a straight diagonal line on a graph.
  69. [69]
    darktable 3.6 user manual - curves
    Interpolation is the process by which a continuous curve is derived from a few nodes. As this process is never perfect, several methods are offered that can ...
  70. [70]
    Make selective color adjustments - Adobe Help Center
    May 24, 2023 · Selective color correction is a technique used by high-end scanners and separation programs to change the amount of process colors in each of the primary color ...
  71. [71]
    Selective Color Correction in Lightroom and Photoshop
    Feb 11, 2018 · Thanks to Lightroom 4's selective white balance correction, fixing colors in a certain area is a very easy and straightforward process.
  72. [72]
    ICC Specifications - INTERNATIONAL COLOR CONSORTIUM
    The ICC specification defines the file format for profiles that connect between colour encodings. The first version was published in 1994, and the most recent ...
  73. [73]
    Color Science History and the ICC Profile Specifications
    Before the ICC released its V2 color profile specifications in 1994, color management was handled "in-house" by all the big establishments who made prints ...
  74. [74]
    Apply a Brightness/Contrast adjustment in Photoshop
    May 24, 2023 · In the Properties panel, drag the sliders to adjust the brightness and contrast. Dragging to the left decreases the level, and dragging to the right increases it.
  75. [75]
    Understanding Gamma Correction - Cambridge in Colour
    Technical Note: Gamma is defined by Vout = Vin^gamma, where Vout is the output luminance value and Vin is the input/actual luminance value. This formula causes ...
  76. [76]
  77. [77]
    Does the Exposure Slider in Adobe Camera Raw Have Same Effect ...
    Nov 6, 2014 · My assumption is that the Exposure slider in ACR does NOT have the same effect as using in-camera Exposure Compensation. Am I correct?
  78. [78]
    [PDF] The rehabilitation of gamma - Charles Poynton
    The main purpose of gamma correction is to compensate for the nonlinearity of the CRT. The main purpose of gamma correction in video, desktop graphics, prepress ...
  79. [79]
    Sharp Images and Unsharp Masks - OpenEdition Journals
    Feb 26, 2025 · This article explores the genealogy of unsharp masking, a method for enhancing images popularized by Adobe Photoshop.
  80. [80]
    Digital Image Processing - Unsharp Mask Filtering - Interactive Tutorial
    Feb 11, 2016 · Unsharp mask filtering improves detail by removing low-frequency information, using a blurred image (unsharp mask) created by a Gaussian filter.
  81. [81]
    Optimized Laplacian image sharpening algorithm based on graphic ...
    In this paper, we propose a parallel implementation of Laplacian sharpening based on Compute Unified Device Architecture (CUDA), which is a computing platform ...
  82. [82]
    The 60-Year History of Digital Image Sensors As Told By ... - PetaPixel
    Feb 3, 2025 · By 1990, nearly all digital cameras used CCDs, including models made by Sony, Matsushita (Panasonic), Toshiba, Sharp, and NEC in Japan and ...
  83. [83]
    Photoshop Elements Blur filters - Adobe Help Center
    Oct 1, 2024 · The Gaussian Blur filter quickly blurs a selection by an adjustable amount. Gaussian refers to the bell-shaped curve that Photoshop Elements ...
  84. [84]
    Effects > Blurs Menu - Paint.NET
    Radial Blur. This effect is similar to Motion Blur, except that the apparent movement is along a circular path instead of a linear path.
  85. [85]
    A fast two-dimensional median filtering algorithm - IEEE Xplore
    Abstract: We present a fast algorithm for two-dimensional median filtering. It is based on storing and updating the gray level histogram of the picture ...
  86. [86]
    De-noising by soft-thresholding | IEEE Journals & Magazine
    Abstract: Donoho and Johnstone (1994) proposed a method for reconstructing an unknown function f on [0,1] from noisy data d_i = f(t_i) + ...
  87. [87]
    An Exact Formula for Calculating Inverse Radial Lens Distortions
    Jun 1, 2016 · This article presents a new approach to calculating the inverse of radial distortions. The method presented here provides a model of reverse radial distortion.
  88. [88]
    [PDF] Fast and Robust Perspective Rectification of Document Images on a ...
    To rectify the perspective distortion, the vertical vanishing point is detected and sent to infinity. The idea of the vertical vanishing point detection is ...
  89. [89]
    Use Vanishing Point in Photoshop - Adobe Help Center
    Oct 14, 2024 · Vanishing Point simplifies perspective-correct editing in images that contain perspective planes—for example, the sides of a building, walls ...
  90. [90]
    [PDF] Automatic Keystone Correction for Camera-assisted Presentation ...
    This paper presents a fully-automatic method for keystone correction. The two key concepts are: (1) a digital camera is used to observe the projected image; (2) ...
  91. [91]
    Correct lens distortions in Camera Raw - Adobe Help Center
    Nov 3, 2023 · Learn how to make lens corrections in Adobe Camera Raw. Correct image perspective and lens flaws automatically and manually.
  92. [92]
    Solarisation | Encyclopedia MDPI
    Oct 17, 2022 · The effect was usually caused by accidentally exposing an exposed plate or film to light during developing. The artist Man Ray perfected the ...
  93. [93]
    [PDF] Making the Transition from Film to Digital - Adobe
    The terms sharpening and unsharp masking are unfortunate historical holdovers because the process of digital sharpening has nothing to do with sharpness, but ...
  94. [94]
    Image Kernels explained visually - Setosa.IO
    An image kernel is a small matrix used to apply effects like the ones you might find in Photoshop or Gimp, such as blurring, sharpening, outlining or embossing.
  95. [95]
    <feConvolveMatrix> - SVG - MDN Web Docs
    Oct 27, 2025 · The <feConvolveMatrix> SVG filter primitive applies a matrix convolution filter effect. A convolution combines pixels in the input image with neighboring ...
  96. [96]
    What Is Photoshop Solarize? - Computer Hope
    Jul 31, 2022 · Photoshop Solarize or Solarize converts an image's colors to a blend of positive and negative images. Solarize works on 8 Bit, 16 Bit, RGB (red, green, and blue) ...
  97. [97]
    Image stylization by oil paint filtering using color palettes
    Abstract. This paper presents an approach for transforming images into an oil paint look. To this end, a color quantization scheme is proposed that performs ...
  98. [98]
    Use the Oil Paint filter in Photoshop - Adobe Help Center
    Apr 18, 2024 · The Oil Paint filter lets you transform a photo into an image with the visual appearance of a classic oil painting.
  99. [99]
    How to create a displacement map in Adobe Photoshop
    Learn how to make a displacement map and add graphics, text, or logos that match the texture and contours of your image.
  100. [100]
    The history of Photoshop | Creative Bloq
    Feb 18, 2025 · It was 35 years ago today, on 19 February, 1990, that Adobe shipped Photoshop 1.0, an application that made professional-level image processing ...
  101. [101]
    Apply special color effects to images - Adobe Help Center
    May 24, 2023 · Apply a gradient map to an image · Click the Gradient Map icon in the Adjustments panel. · Choose Layer > New Adjustment Layer > Gradient Map.
  102. [102]
    Diffusion Model-Based Image Editing: A Survey
    arXiv:2402.17525.
  103. [103]
    Enhanced Super-Resolution Generative Adversarial Networks - arXiv
    Sep 1, 2018 · The proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR ...
  104. [104]
    [1412.7725] Automatic Photo Adjustment Using Deep Neural Networks
    Dec 24, 2014 · In this paper, we explain how to formulate the automatic photo adjustment problem in a way suitable for this approach.<|separator|>
  105. [105]
    Image Style Transfer AI: Stylize Images - Adobe Firefly
    AI image style transfer lets you upload an image as a reference, then generate new assets that align with that style. Whether you're creating art, illustrations ...
  106. [106]
    Content Credentials overview - Adobe Help Center
    Sep 2, 2025 · Learn how creators can use Content Credentials to gain proper recognition and promote transparency in the content creation process.
  107. [107]
    Object recognition from local scale-invariant features - IEEE Xplore
    Object recognition from local scale-invariant features. Abstract: An object recognition system has been developed that uses a new class of local image features.
  108. [108]
    [PDF] Object Recognition from Local Scale-Invariant Features 1. Introduction
    The SIFT features share a number of properties in common with the responses of neurons in inferior temporal (IT) cortex in primate vision.
  109. [109]
    [PDF] Seamless Image Stitching in the Gradient Domain - CS.HUJI
    Optimal seam methods produce a seam artifact in case of photometric inconsistencies between the images (first row). Feathering and pyramid blending produce ...
  110. [110]
    [PDF] Photographic tone reproduction for digital images
    Jan 14, 2002 · The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator. In ...
  111. [111]
    Record an action - Photoshop - Adobe Help Center
    Oct 30, 2025 · Learn how to record actions in Photoshop to automate repetitive tasks and save time by capturing a sequence of steps.
  112. [112]
    Image resolution - what is it & what to do with it
    Magazines are typically printed using a 150 or 175 lpi screen ruling. This means images need to be 300 dpi. My bird picture is 3000 pixels wide, which means ...
  113. [113]
    Color-managing documents when printing - Adobe Help Center
    May 24, 2023 · Note: Gray, RGB, LAB, and CMYK color profiles are grouped by category in Advanced view. They are combined on the Profile menu in Basic view.
  114. [114]
    Page bleed | what is it, how much is needed and how to fix it
    Bleed refers to objects that extend beyond the edge of the printed page. This page tells you why bleed is needed, how much is needed and how to fix issues ...
  115. [115]
    Specify printer's marks, bleeds, or slug areas in Adobe InDesign
    Dec 12, 2024 · Select File > Print. · Select Marks and Bleed on the left side of the Print dialog box. · Select either All Printer's Marks or individual marks.
  116. [116]
    Choose the correct level of compression | Articles - web.dev
    Aug 30, 2018 · As a hands-on example, when using a lossy format such as JPEG, the compressor will typically expose a customizable "quality" setting ...
  117. [117]
  118. [118]
    Slice web pages in Adobe Photoshop
    May 24, 2023 · Slices divide an image into smaller images that are reassembled on a web page using an HTML table or CSS layers.
  119. [119]
    Understanding EXIF and metadata - Canon Georgia
    Everything you need to know about EXIF data – how to view the shooting information in your photos, edit it, remove it or add metadata using Canon apps.
  120. [120]
    Setting the Right Tone | The History of Halftone Printing
    The halftone process that evolved slowly through the mid to late 1900s involved exposing images onto a metal photo plate through a coarse screen, resulting ...
  121. [121]
    The Proof Is in the Printing - Glenn Fleishman
    Jul 11, 2023 · A Linotronic 330 imagesetter at right with its hardware RIP at left could produce high-resolution paper and film output in the 1990s. When ...
  122. [122]
    National Press Photographers Association Code of Ethics
    Digital Manipulation Code of Ethics. NPPA Statement of Principle. adopted 1991 by the NPPA Board of Directors. As journalists we believe the guiding principle ...
  123. [123]
    [PDF] The Digital Millennium Copyright Act of 1998
    The DMCA, signed in 1998, implements WIPO treaties, addresses online copyright issues, and has five titles, including one on online liability.
  124. [124]
    U.S. Copyright Office Fair Use Index
    Additionally, “transformative” uses are more likely to be considered fair. Transformative uses are those that add something new, with a further purpose or different character.
  125. [125]
    EU AI Act: first regulation on artificial intelligence | Topics
    Feb 19, 2025 · Content that is either generated or modified with the help of AI - images, audio or video files (for example deepfakes) - need to be clearly ...
  126. [126]
    C2PA | Providing Origins of Media Content
    The Coalition for Content Provenance and Authenticity (C2PA) provides an open technical standard called Content Credentials to establish the origin and edits of digital content, ensuring transparency as the digital ecosystem evolves.
  127. [127]
    How it works - Content Authenticity Initiative
    Our work is fully compliant with the technical specifications released in 2022 by the Coalition for Content Provenance and Authenticity (C2PA) or C2PA Content ...Missing: ethics | Show results with:ethics