TouchDesigner
TouchDesigner is a node-based visual programming language and development platform designed for creating real-time interactive multimedia content, including 2D and 3D visuals, audio-reactive installations, and data-driven experiences.[1] Developed by the Canadian software company Derivative, it enables artists, designers, and developers to build custom applications through a modular operator network system, supporting high-performance rendering, Python scripting, and integration with hardware like sensors and displays.[2] TouchDesigner evolved from PRISMS, a 3D animation package developed in the 1980s at Omnibus Computer Graphics, and traces its roots to advances in real-time graphics pioneered by co-founder Greg Hermanovic, who acquired the PRISMS rights and co-founded SideFX before establishing Derivative in 2000 alongside Rob Bairos and Jarrett Smith.[3]

The platform's core strength lies in its pull-based cooking model, in which operators process data only when downstream nodes request it, optimizing performance for live events, projection mapping, and immersive installations.[4] Key features include ultra-high-resolution video playback with support for efficient codecs like HAP, a library of over 500 built-in operators for geometry, shading, compositing, and simulation, and recent additions such as the Point Operator (POP) family, introduced in the 2025 official build for advanced 3D particle systems and simulations.[2]

TouchDesigner is available in free Non-Commercial and paid Commercial editions, with official stable releases ensuring reliability for production use, and it has been employed in high-profile projects such as live visuals for Rush's Vapor Trails tour in 2002 and contemporary installations by studios like Sila Sveta.[5][2] Its open ecosystem fosters a global community through workshops, forums, and events, making it a staple tool in media arts education and professional creative coding.[2]

Overview
Description and Purpose
TouchDesigner is a node-based visual programming environment developed by Derivative, designed for constructing networks of operators that generate interactive visuals, audio, and data-driven experiences in real time.[6] This paradigm allows users to connect modular components visually, enabling the creation of complex, procedural systems without relying on traditional text-based coding.[1] The software's operator network facilitates scalable projects, from simple prototypes to large-scale productions involving thousands of nodes.[6]

Primarily utilized by artists, designers, and developers, TouchDesigner supports real-time applications such as generative art, VJing performances, and interactive installations.[2] These uses leverage its capabilities for live media manipulation, where creators can produce dynamic content for events, exhibitions, and multimedia shows.[7] For instance, it enables the development of responsive environments that integrate sensors, audio inputs, and visual outputs seamlessly.[8]

A key emphasis of TouchDesigner lies in its real-time performance, which provides immediate visual feedback and supports rapid iteration within creative workflows.[6] This "always-alive" system eliminates the need for compilation cycles, allowing changes to propagate instantly across the network, powered by graphics hardware acceleration.[6] Such features make it ideal for live authoring and performance contexts, bridging the gap between design and execution.[1]

Derivative's core philosophy centers on empowering non-traditional programmers through visual scripting, offering flexibility and accessibility for interactive multimedia creation without deep coding expertise.[6] By prioritizing intuitive tools over conventional programming barriers, the platform democratizes advanced real-time development for creative professionals.[1]

Platforms and System Requirements
TouchDesigner primarily supports Microsoft Windows 10 and Windows 11 as its core operating systems, providing full feature access and optimal performance for real-time applications.[9] macOS 13 (Ventura) and later versions are also supported, including native Apple Silicon (arm64) compatibility on hardware such as Mac Pro, iMac, MacBook Pro, and MacBook Air from 2020 onward, though certain advanced features like some GPU-accelerated operations may have limitations compared to Windows.[9] There is no native Linux support, but users have reported running TouchDesigner via compatibility layers like Wine or virtual machines, albeit with potential stability issues and reduced performance.[9] Hardware prerequisites emphasize a capable GPU for real-time rendering and processing, with Vulkan 1.1 support required across all platforms. NVIDIA GeForce 1000-series or better (including Quadro/RTX Pascal and newer, with drivers 530.00+ recommended) and AMD Radeon 5000-series or better are recommended, while Intel integrated graphics from the 500 series or newer (excluding 5000 and 6000 series) offer limited support with lower performance; Intel Arc discrete GPUs are supported if they meet Vulkan 1.1 requirements. A minimum of 4GB GPU VRAM is required, though 8GB or more is advised for complex projects, including advanced features like the Point Operator (POP) family in 2025 builds. For real-time tasks, multi-core CPUs are beneficial, and 16GB+ system RAM is recommended to handle data-intensive workflows without bottlenecks, though official documentation does not specify minimums beyond GPU requirements.[9][10] Licensing options include a free Non-Commercial edition for personal, educational, and non-paying use, limited to 1280x1280 resolution output and excluding commercial exports like H.264/H.265 video; this version allows up to 10 keys per account and is available for current builds. 
Commercial licenses start at $600 USD for standard professional use, enabling full resolution, advanced exports, and integration with tools like TouchEngine, while the PRO tier at $2200 USD adds specialized features such as frame-lock, tracking support, and priority assistance for high-end installations. Educational licenses at $300 USD provide full features for non-commercial learning environments but prohibit paid projects.[11]

Installation involves downloading from the official Derivative website, where users select between stable Official builds for production reliability and Experimental builds for testing upcoming features, with the latter carrying higher bug risks. The process uses web installers for Windows (approximately 668MB) and drag-and-drop packages for macOS (around 728MB), followed by license key activation via the Key Manager dialog or optional hardware dongles/cloud options. As of November 2025, the latest Official build is 2025.31740, released on November 16, 2025, which includes ongoing support for NVIDIA's 50-series GPUs introduced in the October 2025 release.[12][13][10]

History
Origins and Founding
TouchDesigner originated from the efforts of Derivative, a software company founded in 2000 by Greg Hermanovic, Rob Bairos, and Jarrett Smith in Toronto, Canada.[3] The founders, who had previously worked at Side Effects Software, drew upon their experience in 3D graphics development to create tools tailored for emerging needs in interactive media. Hermanovic, a co-founder of Side Effects, leveraged the Houdini 4.1 codebase as the starting point for TouchDesigner, adapting it from offline animation workflows to support real-time applications.[3] The software's roots trace back to PRISMS, a 3D animation and particle effects tool initially developed from 1984 to 1987 at Omnibus Computer Graphics and continued at Side Effects Software starting in 1987, where Hermanovic contributed to its creation.[3][6] PRISMS emphasized procedural generation and visual effects, influencing TouchDesigner's node-based architecture for compositing and interactivity. This heritage addressed limitations in existing tools by focusing on real-time 3D graphics, bridging the gap between film production software and live performance environments in the early 2000s.[5]

Early development was motivated by the rising demand for accessible platforms among VJs and interactive media artists, particularly in electronic music and performance art scenes. Derivative aimed to provide innovative tools for live visuals, capitalizing on advancements in CPU and graphics hardware to enable on-the-fly interactivity and multimedia integration.[5] Initial closed alpha versions (001–006) were tested internally from 2000 to 2001, followed by the public launch of TouchDesigner 007 in late 2001, which introduced OpenGL-based rendering for efficient real-time 3D visualization and effects.[5] This beta phase emphasized hardware-accelerated graphics, laying the groundwork for tools like TouchMixer, designed specifically for VJ performances with MIDI control and video mixing capabilities.[5]

Major Releases and Evolution
TouchDesigner's development began with its first generation of releases, spanning versions 007 through 017 from 2002 to 2007, which established the core node-based system for real-time 2D and 3D interactive animation and compositing.[3] These early builds, derived from Houdini source code, introduced foundational tools like MIDI support, basic file formats (.toe, .tos, .top), and offscreen rendering, laying the groundwork for procedural workflows.[5] In 2008, Derivative released the beta version of TouchDesigner 077, marking a complete rewrite that incorporated a fully procedural OpenGL compositing pipeline, GPU-accelerated rendering, and a redesigned user interface to support more complex real-time media projects.[3] This shift enabled better performance for live visuals and interactivity, addressing limitations in the initial node system. The stable version 088 followed in 2013, after a beta program that began in 2012; it introduced Python scripting as the primary extension language, replacing older T-Script methods and enabling advanced customization and automation.[14][5] Subsequent updates continued to refine the platform's capabilities.
The 2020 release (2020.20000 series) emphasized Point Cloud workflows, Web Server and Client integration, and support for new devices like ZED cameras.[15] The 2022 release (2022.24200 series) integrated Vulkan as the graphics API, replacing OpenGL to deliver superior GPU performance, cross-platform compatibility (including Metal on macOS via MoltenVK), and support for modern hardware like the NVIDIA RTX series, which enhanced ray tracing and real-time rendering efficiency.[16][17] In 2023 (2023.10000 series, released December 2023), builds focused on broadcast and production enhancements, including timecode synchronization, Apple ProRes codec support, new sensor integrations like OAK-D cameras and LiDAR, and Python upgrades to version 3.11.[18]

The latest major release, build 2025.31740 on November 16, 2025, introduced the Point Operators (POPs) family, the first new operator type in over a decade, for GPU-accelerated 3D point cloud and geometry processing, alongside DMX workflows via dedicated POPs and CHOPs for lighting control.[10] It also added Python enhancements like the TDI Library for VS Code integration, a Thread Manager, and an updated core Python to version 3.11.10; native support for NVIDIA 50-series GPUs (Blackwell architecture); and 10-bit HDR capabilities in spatial audio and video pipelines, including color space management for ACEScg and sRGB Linear.[10]

Throughout its evolution, TouchDesigner has been shaped by community feedback via forums and beta programs, adaptations to hardware advancements such as RTX integration for AI-accelerated tasks, and expansion into VR/AR applications through improved 3D and sensor data handling.[5][3] These drivers have ensured the software remains a versatile tool for real-time interactive media, from live performances to installations.[10]

User Interface and Workflow
Network Editor
The Network Editor serves as the primary visual interface in TouchDesigner for constructing and navigating operator networks, functioning as a default pane that opens upon launching the software.[19] It represents operators as nodes within a hierarchical structure, where these nodes are connected by animated wires that illustrate data flow between them, enabling users to build modular pipelines for real-time processing.[19] Viewers embedded in nodes allow for real-time previews of outputs, activated via flags in the lower-right corner of each node to enter interactive modes without disrupting the overall workflow.[19] This setup promotes a node-based paradigm where data propagates visually from input to output, facilitating intuitive design of complex systems. Navigation within the Network Editor is facilitated by standard mouse and keyboard tools, including left-mouse-button dragging for panning empty areas or selecting and moving nodes, middle-mouse-button scrolling or dragging for zooming, and keyboard shortcuts such as "o" for an overview zoom, "f" for fitting the view to selected elements, and Enter or "i" for diving into nested components.[19] Networks can be organized into containers for modularity, with the Base COMP acting as the root of all projects—accessible via the path "/"—which inherently supports embedding sub-networks to maintain hierarchical organization without additional parameters for panels or 3D objects.[20][21] Grid snapping and node resizing further aid in layout precision, adjustable through preferences or right-click menus, while toggleable list modes (via Shift+T) provide alternative views for denser networks.[19] An integrated undo/redo system supports iterative editing in design mode, with Ctrl+Z for undo and Ctrl+Y for redo, ensuring reversible changes across network modifications.[22] Performance monitoring is accessible through the editor via the Performance Monitor dialog, which tracks frame cook times and operator processing 
demands, often displayed alongside frame rate indicators in the interface to highlight real-time efficiency.[23] Once networks are wired, parameter adjustments for individual nodes integrate seamlessly with the editor, allowing configuration without leaving the visual workspace.[19]

Parameter and Component Management
In TouchDesigner, parameter panels serve as interactive dialogs that allow users to configure and adjust the properties of individual operators. These panels, accessible by pressing the 'P' key or right-clicking an operator and selecting "Parameters...", display a range of control types including integers, floating-point numbers, toggles, menus, text strings, paths, and Python objects, enabling precise control over operator behavior.[24] Users can set parameters to constant values for static configurations, or switch to expression mode to incorporate dynamic Python-based calculations that reference other operators or data sources, with evaluation occurring via methods like op('node').par.param.eval().[24] Additionally, callbacks can be defined on pulse buttons within these panels to execute scripts or actions upon activation, facilitating event-driven workflows.[24]
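The constant-versus-expression distinction can be illustrated with a small standalone sketch. The Parameter class below is a hypothetical stand-in written in plain Python, not TouchDesigner's actual td module:

```python
# Hypothetical stand-in for a TouchDesigner-style parameter: it holds either
# a constant value or a Python expression evaluated against a namespace,
# mirroring the constant/expression modes described above.
class Parameter:
    def __init__(self, value=0.0):
        self.mode = "constant"   # "constant" or "expression"
        self.value = value
        self.expression = None
        self.namespace = {}

    def expr(self, expression, namespace=None):
        """Switch the parameter to expression mode."""
        self.mode = "expression"
        self.expression = expression
        self.namespace = namespace if namespace is not None else {}

    def eval(self):
        """Return the current value, analogous to par.eval()."""
        if self.mode == "constant":
            return self.value
        return eval(self.expression, {"__builtins__": {}}, self.namespace)

# A constant parameter simply returns its stored value.
scale = Parameter(2.0)
print(scale.eval())            # 2.0

# In expression mode the value is computed on every eval, so it tracks
# whatever the referenced data currently holds.
source = {"amplitude": 0.5}
scale.expr("amplitude * 4", source)
print(scale.eval())            # 2.0
source["amplitude"] = 1.0
print(scale.eval())            # 4.0
```

Because expressions are re-evaluated each time, a parameter in expression mode stays live against its data sources, which is the behavior that makes expression mode suitable for driving animation from other operators.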
Component management in TouchDesigner emphasizes modularity through the use of .tox files, which encapsulate entire networks of operators as reusable components. A .tox file stores a single top-level component along with its sub-components, allowing users to export selected networks via right-click menu options like "Save Component" for easy import into other projects.[25] This format promotes modular design by enabling the creation of self-contained libraries of components, such as custom user interfaces or processing chains, that can be loaded dynamically to maintain project portability and reduce redundancy.[25] For instance, a complex audio processing setup saved as a .tox can be referenced across multiple .toe project files without duplicating code.
Tools for inspection and history tracking support efficient debugging and dependency management within components. The Examine DAT operator provides a comprehensive inspection interface, where users specify an operator path and source (e.g., storage, expressions, or extensions) to output detailed tables of Python locals, globals, and custom attributes, with options for depth limiting and filtering to trace issues.[26] Complementing this, the undo and redo system records design-time changes such as parameter edits and network modifications, accessible via Ctrl+Z/Cmd+Z for undo and Ctrl+Y/Cmd+Y for redo, allowing users to revert or replay actions while preserving a history stack within the session.[22] The dependency system further aids in tracking by automatically propagating changes through expressions and references, ensuring that alterations in one component update dependent elements without manual intervention.
Efficient management of parameters and components is enhanced by keyboard shortcuts and contextual menu systems. Application-level shortcuts, such as 'P' to toggle the parameter panel or Ctrl+F/Cmd+F to find operators, streamline navigation and editing tasks.[27] Right-click menus on operators offer quick access to actions like saving .tox files, viewing dependencies, or inspecting info via the Info DAT, while customizable shortcuts in the Preferences dialog (Alt+P) allow tailoring for workflow efficiency.[27] These tools integrate seamlessly with network connectivity, where selected operators can be wired via drag-and-drop while maintaining parameter oversight.[28]
Core Features
Rendering and Compositing
TouchDesigner's rendering engine leverages Vulkan as its primary graphics API since the 2022 release, providing a modern foundation for real-time 2D and 3D graphics generation, with OpenGL completely removed to enable lower driver overhead and cross-platform consistency. On macOS, Vulkan operates through MoltenVK, a layer that translates calls to Apple's Metal framework, ensuring compatibility while supporting advanced GLSL shaders up to version 4.60. The built-in renderer utilizes GLSL code for pixel, vertex, geometry, and compute shaders to process polygons, textures, and channel data into rendered images, incorporating effects such as anti-aliasing, screen-space ambient occlusion (SSAO), depth shadows, fog, and particle sprite rendering. This pipeline relies on off-screen buffers in GPU memory via the Render TOP and Render Pass TOP operators, where the latter optimizes multi-pass workflows by reusing buffers to avoid transfers to slower system memory. Compositing in TouchDesigner facilitates the layering of visual elements through texture manipulation, enabling complex outputs for applications like video mapping and projections. Workflows typically involve stacking textures with transformations—such as scaling, rotation, and perspective corrections—applied via operators like the Corner Pin TOP, while masks defined by 2D polygons or bezier curves isolate regions for blending. Tools like Kantan Mapper allow users to create mask shapes filled with TOP-generated textures, supporting seamless integration of images, videos, or procedural graphics onto irregular surfaces. For projections, techniques such as quad reprojection ensure pixel-accurate mapping from arbitrary viewpoints, accommodating distortions from projector angles or LED panel geometries. These processes, handled primarily through TOP operators for texture operations, support non-real-time compositing for pre-rendered content or real-time adjustments during live performances. 
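The corner-pin idea can be sketched geometrically. The function below maps normalized source coordinates onto an arbitrary destination quad using bilinear interpolation; this is a simplification for illustration (a true perspective corner pin uses a homography), and it does not reflect the Corner Pin TOP's actual interface:

```python
# Bilinear corner-pin sketch: map normalized (u, v) coordinates of a source
# rectangle onto an arbitrary destination quad.
def corner_pin(u, v, bl, br, tl, tr):
    """bl/br/tl/tr are (x, y) corners of the destination quad."""
    # Interpolate along the bottom and top edges, then between them.
    bottom = (bl[0] + (br[0] - bl[0]) * u, bl[1] + (br[1] - bl[1]) * u)
    top    = (tl[0] + (tr[0] - tl[0]) * u, tl[1] + (tr[1] - tl[1]) * u)
    return (bottom[0] + (top[0] - bottom[0]) * v,
            bottom[1] + (top[1] - bottom[1]) * v)

# The center of the source maps to the center of a skewed quad.
quad = dict(bl=(0, 0), br=(4, 0), tl=(1, 3), tr=(5, 3))
print(corner_pin(0.5, 0.5, **quad))   # (2.5, 1.5)
```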
The engine supports multi-display outputs tailored for immersive environments, including LED walls, multiple projectors, and VR headsets, with built-in resolution scaling to match hardware configurations. Outputs can span across several monitors or external devices using the Window COMP, where extended desktop modes on Windows or separate Spaces on macOS enable full-screen rendering without mirroring. For LED walls and projectors, resolution adjustments via parameters like pixel mapping ensure content aligns with panel grids or projection surfaces, often requiring graphics cards supporting at least four simultaneous outputs or hardware splitters for larger arrays. VR integration allows rendering to headsets like Oculus or HTC Vive, with stereo output and head-tracking synchronization for 3D scenes, facilitating pre-visualization of installations.

Performance optimizations emphasize GPU acceleration and efficient resource management to sustain high frame rates, often exceeding 60 FPS in demanding scenarios. Vulkan's reduced overhead compared to legacy APIs minimizes CPU-GPU synchronization costs, while compute shaders enable parallel processing on all platforms, including macOS. Caching mechanisms, such as buffer reuse in Render Pass TOPs and selective cooking in networks, prevent redundant computations, allowing for smoother high-frame-rate rendering by keeping data in fast GPU memory. Additional techniques include reducing render resolution, enabling early depth testing to cull hidden pixels, and limiting light sources or particle counts to balance visual fidelity with throughput, as monitored via the built-in Performance Monitor.

Data Processing and Interactivity
TouchDesigner facilitates data processing and interactivity by integrating various input sources and protocols, enabling real-time responsiveness in visual and performative applications. This includes support for standard protocols like MIDI and OSC, as well as sensor devices such as Kinect and Leap Motion, which feed data into the system's node-based workflow to drive dynamic behaviors. Network protocols further extend this capability, allowing synchronization with external hardware like lighting systems. These elements collectively support the creation of interactive installations, live performances, and responsive environments where user inputs directly influence content generation.[29][30][31][32][33] For input/output support, TouchDesigner provides dedicated operators to handle MIDI events, including notes, controllers, program changes, system exclusive messages, and timing data from connected devices. The OSC protocol is similarly accommodated, receiving messages from third-party applications over UDP for flexible communication in networked setups. Sensor integration includes the Kinect for capturing positional and skeletal tracking data from up to six individuals, and the Leap Motion controller for hand, finger, tool, and gesture tracking with position, rotation, and velocity outputs. Network protocols such as Art-Net, sACN, DMX, and KiNET enable bidirectional data exchange with lighting and control systems, where incoming DMX channels (valued 0-255) can trigger procedural responses, and outgoing data supports real-time fixture control. These inputs are typically processed through CHOP operators for channel-based data manipulation.[34][35][31][32][36][37] Audio and video I/O in TouchDesigner emphasize real-time capture and processing, with synchronization across streams to maintain temporal alignment in interactive projects. 
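As an illustration of what travels over the wire in the OSC setups described above, the following standalone Python builds a minimal OSC 1.0 message by hand. TouchDesigner users would normally rely on the OSC In/Out operators rather than packing bytes themselves:

```python
import struct

# Build a minimal OSC message: a null-terminated, 4-byte-aligned address
# pattern, a type-tag string, and big-endian arguments, per the OSC 1.0 spec.
def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated, then padded to a multiple of 4 bytes.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    return (osc_pad(address.encode()) +
            osc_pad(b",f") +               # type tag: one float32 argument
            struct.pack(">f", value))      # big-endian float32

msg = osc_message("/fader1", 0.5)
print(msg)   # b'/fader1\x00,f\x00\x00?\x00\x00\x00'
```

The resulting bytes could then be handed to socket.sendto toward a receiver's UDP port, which is the transport OSC typically uses.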
Audio input is captured via the Audio Device In CHOP, which reads from attached microphones or line inputs using DirectSound, CoreAudio, or ASIO for low-latency processing, while output is handled by the Audio Device Out CHOP to route processed signals to speakers or external devices. Video capture occurs through the Video Device In TOP, supporting webcam and professional camera feeds in formats like SDI or HDMI for immediate integration into data flows, with export options via Movie File Out TOP ensuring synced audio-video rendering. These I/O pathways allow for synchronized real-time manipulation, such as analyzing live audio spectra to modulate video parameters or vice versa.[38] Animation and simulation features in TouchDesigner enable procedural motion driven by data streams, using tools like timelines, low-frequency oscillators (LFOs), and physics simulations to generate organic, responsive movements. The Timeline CHOP provides keyframe-based animation for channel data, allowing users to sequence events and interpolate values over time for smooth transitions in interactive scenarios. LFOs, implemented via the LFO CHOP, produce oscillating waveforms—such as sine, square, or noise-based patterns—to animate parameters procedurally, often modulating inputs from sensors for evolving patterns. Physics integration, through operators like the Dynamics CHOP, simulates forces, collisions, and constraints on data channels, facilitating realistic procedural animations like particle flows or rigid body interactions triggered by real-time events. These tools process incoming data to create non-linear, adaptive motions without manual keyframing.[39] Event handling in TouchDesigner relies on triggers and callbacks to implement responsive behaviors, detecting changes in data streams to initiate actions. 
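The kind of channel data an LFO produces can be sketched in plain Python. The function below is illustrative only and does not reflect the LFO CHOP's parameter names:

```python
import math

# Generate one channel of LFO samples: a sine wave of a given frequency,
# sampled at a fixed rate, as a plain list of floats.
def lfo_sine(frequency, sample_rate, n_samples, amplitude=1.0):
    return [amplitude * math.sin(2 * math.pi * frequency * i / sample_rate)
            for i in range(n_samples)]

# One full cycle at 1 Hz sampled at 60 samples/sec (a typical animation rate).
samples = lfo_sine(frequency=1.0, sample_rate=60, n_samples=60)
print(round(samples[15], 6))   # 1.0  (the quarter-cycle peak)
```

Feeding such a channel into a parameter, as the export mechanisms described later allow, is how oscillating procedural motion is typically wired up.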
The Event CHOP processes off-to-on transitions in input channels, commonly from MIDI or sensor data, generating discrete samples for timing-based responses, such as spawning events in a particle-like system. Triggers can be set on parameters to pulse on value changes, while callbacks monitor state shifts across operators, enabling chained reactions like updating visuals upon gesture detection. This system ensures low-latency handling of interactive inputs, supporting complex, event-driven workflows in live settings.[40][41]

Scripting and Extensibility
TouchDesigner provides extensive scripting capabilities through its Python API, which grants full programmatic access to operators, parameters, and callbacks, enabling automation of complex workflows. The API, built on Python 3.11, allows developers to interact with the node network using functions like op() to reference operators and .par to manipulate parameters, such as evaluating or setting values with .eval(). Callbacks, such as onCook() or onValueChange(), facilitate event-driven scripting for real-time responses, as seen in the Execute DAT for frame-based automation. This integration supports custom logic without altering the visual node-based paradigm.[42]
DAT operators enable text-based scripting for executing Python code, evaluating expressions, and integrating web data directly within networks. The Script DAT runs Python scripts on each cook cycle, processing input tables to generate or modify outputs, while the Text DAT stores multiline scripts or shaders for reuse. The Evaluate DAT computes expressions across DAT cells, supporting dynamic calculations like mathematical formulas or conditional logic. For web integration, the Web DAT fetches and parses content from URLs, allowing scripts to incorporate external data such as JSON APIs or HTML into TouchDesigner projects. These DATs collectively form a robust system for embedding code-driven behaviors.[43][44][45]
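Expression evaluation over table cells, in the spirit of the Evaluate DAT described above, can be sketched as follows. This is a standalone simplification: the real operator works on connected DAT inputs and TouchDesigner expressions, and the evaluate_table function here is hypothetical:

```python
# Evaluate arithmetic expressions cell by cell over a small table,
# substituting names from a caller-supplied namespace.
def evaluate_table(table, namespace):
    safe_globals = {"__builtins__": {}}   # keep eval restricted to the namespace
    return [[eval(cell, safe_globals, dict(namespace)) for cell in row]
            for row in table]

table = [["width * 2", "height / 2"],
         ["width + height", "10"]]
print(evaluate_table(table, {"width": 8, "height": 6}))
# [[16, 3.0], [14, 10]]
```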
Extensibility in TouchDesigner is achieved through custom operator development, primarily via its C++ API, which allows third-party creators to build plugins that function identically to native operators. Developers can implement custom TOPs, CHOPs, or SOPs using classes like CPlusPlusTOP, with support for Python callbacks to expose scripting interfaces. A growing plugin ecosystem includes community-contributed extensions, such as GPU-accelerated MediaPipe for machine learning tasks, distributed via repositories like GitHub. While Python enables internal customization within components, C++ remains the standard for core operator extensions, fostering an open development environment.[46][47][48]
In 2025 updates, TouchDesigner enhanced its Python tools to better support AI and machine learning integration, including an upgraded Python 3.11.10 environment with NumPy 2.1.2 and a new Python Environment Manager for seamless third-party library installation. These improvements facilitate loading models like Stable Diffusion or Depth Anything V2 directly in scripts, enabling real-time AI inference without external dependencies. The Thread Manager further optimizes multi-threaded Python execution for performance-critical ML tasks.[49][50][51]
Operators
COMP (Components)
COMP operators in TouchDesigner serve as modular containers that organize and encapsulate networks of other operators, enabling hierarchical structuring of projects for better manageability and reusability.[52] They form the foundational building blocks for embedding sub-networks within larger compositions, allowing developers to create self-contained modules that can be nested or referenced across projects.[53] Unlike specialized operators focused on data types like textures or channels, COMPs emphasize structural organization, facilitating the division of complex systems into logical units.[54] In the operator hierarchy, Base COMPs provide the simplest form, lacking both panel and 3D object parameters, making them suitable for non-visual logic or data transformation tasks, such as converting RGB to HSV channels without any associated user interface.[20] Container COMPs extend this by grouping multiple panel components, including buttons, sliders, fields, and other containers, to construct interactive interfaces or modular sub-networks.[55] Panel COMPs, a sub-family of COMPs, specialize in creating custom 2D control panels and user interfaces, inheriting from the base COMP class while adding capabilities for interactive elements like state representation through panel values.[56][57] These types support embedding by allowing operators to be nested inside COMPs, promoting a tree-like structure visible in the network editor.[58] Among COMP variants, Widget COMPs act as a superset of Container COMPs, designed for rapid UI development with built-in support for elements like knobs, sliders, and buttons, enabling quick assembly of control panels through predefined layouts and behaviors.[59][60] Layout capabilities within COMPs, particularly in Container and Panel types, involve parameters for positioning and sizing sub-elements via X, Y, width, height, fixed aspect ratios, and alignment modes, which aid in designing responsive interfaces without manual 
repositioning.[61][55] Connectivity in COMPs is handled through input and output ports, represented as connector terminals on the operator nodes, which allow data, such as channels or references, to flow between nested networks and external operators. For instance, input ports on a Container COMP can receive signals to drive internal parameters, while output ports propagate processed data outward, supporting seamless integration in larger workflows.[62] This port-based wiring ensures modular data passing without exposing internal details.[28]

For modularity, COMPs leverage cloning and replication to scale projects efficiently. Cloning creates instances that mirror a master COMP's structure and parameters, with edits to the master automatically propagating to clones, ensuring consistency in replicated elements.[63] The Replicator COMP enhances this by generating multiple copies based on a table's rows or a specified count, functioning like a for-loop to produce scalable arrays of components, such as duplicated UI panels or sub-networks driven by dynamic data.[64] These features are essential for building extensible systems, like interactive installations, where repeated elements must adapt to varying inputs while maintaining centralized control.[65]

TOP (Textures)
TOP (Textures) operators in TouchDesigner form a core family dedicated to 2D image and video processing, enabling real-time manipulation of textures within the GPU-accelerated rendering pipeline. These operators handle the ingestion, transformation, and output of visual data, supporting a wide array of creative and technical workflows in interactive media. By leveraging graphics hardware, TOPs facilitate efficient operations on pixel-based content, distinct from other operator families that manage different data types.[66] The primary functions of TOP operators encompass loading external media, applying filters and effects, and compositing multiple layers. Loading capabilities include importing video files or live streams via the Movie File In TOP, which supports formats like MP4 and AVI for seamless playback. Filtering and effects cover transformations such as blurring with the Blur TOP for softening edges, warping via the Displace TOP to distort geometry based on input maps, and adding procedural noise through the Noise TOP for organic variations in color and intensity. Compositing tools, like the Composite TOP, allow blending inputs using modes such as add, multiply, or alpha-based overlay, enabling complex scene assembly. These operations are performed entirely on the GPU, ensuring low-latency processing essential for real-time applications.[66][67] TOPs emphasize flexible resolution and format handling to accommodate diverse project needs. They natively support RGBA color spaces, including floating-point precision for high dynamic range (HDR) imaging, which preserves detail in bright and dark areas during manipulations. Operations are resolution-independent, automatically scaling inputs and outputs to match project requirements without loss of quality, though constrained by the GPU's maximum texture size—typically up to 16384x16384 pixels on modern hardware. 
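The blend modes mentioned above can be illustrated per pixel. The sketch below implements simplified over, add, and multiply blends on RGBA values in the 0.0 to 1.0 range; the actual Composite TOP performs this on the GPU across whole textures, and this function is purely illustrative:

```python
# Per-pixel compositing in the spirit of common blend modes: "over"
# (premultiplied-style alpha blend), "add", and "multiply" on RGBA tuples
# with components in the 0.0-1.0 range.
def composite(src, dst, mode="over"):
    sr, sg, sb, sa = src
    dr, dg, db, da = dst
    if mode == "over":
        # Source over destination; alpha accumulates toward opaque.
        a = sa + da * (1 - sa)
        blend = lambda s, d: s * sa + d * da * (1 - sa)
        return (blend(sr, dr), blend(sg, dg), blend(sb, db), a)
    if mode == "add":
        return (min(sr + dr, 1.0), min(sg + dg, 1.0), min(sb + db, 1.0),
                min(sa + da, 1.0))
    if mode == "multiply":
        return (sr * dr, sg * dg, sb * db, sa * da)
    raise ValueError(mode)

# A half-transparent red layered over opaque blue.
print(composite((1, 0, 0, 0.5), (0, 0, 1, 1.0), "over"))
# (0.5, 0.0, 0.5, 1.0)
```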
All computations occur on the GPU via shaders, optimizing for parallel processing and minimizing CPU involvement, which enhances performance in bandwidth-intensive scenarios.[66]

Practical examples illustrate TOPs' utility in optimization and input management. The Cache TOP stores computed or dynamic image sequences in GPU memory, reducing redundant calculations and improving frame rates in looping or static effect chains. For video input, the Movie File In TOP not only loads footage but also handles cueing, speed adjustments, and alpha channel extraction, integrating smoothly into broader networks. These operators integrate with the rendering pipeline to supply processed textures to components like Geometry COMPs for display.[68][67]

The Direct Display Out TOP utilizes the Vulkan API to route output directly to GPU-attached displays connected via DisplayPort or HDMI, bypassing the operating system's display compositor. This enables ultra-low-latency, high-resolution delivery across multi-monitor configurations, ideal for professional AV setups with minimal overhead.[69]

CHOP (Channels)
CHOPs, or Channel Operators, form a core operator family in TouchDesigner for manipulating time-based data through channels, which are sequences of floating-point numbers known as samples. These channels represent multi-sampled data streams that capture values evolving over time, such as animation paths or audio waveforms, allowing for precise temporal processing in real-time applications. Each CHOP output includes one or more named channels, and the data is passed between operators in a network to build complex signal flows.[70]

A key aspect of channels is their sampling rate, defined as the number of samples per second, which governs the resolution of time-dependent data. For example, animation channels often operate at 60 samples per second to synchronize with standard frame rates, while audio channels may use rates up to 48,000 samples per second for high-fidelity capture. Channel lengths can be expressed in samples, seconds, or frames (with a default of 60 FPS), enabling flexible handling of both short bursts and extended sequences in projects.[70]

CHOP operations facilitate analysis, transformation, and generation of channel data using specialized operators like the Math CHOP, which performs arithmetic functions such as addition, multiplication, or scaling on input channels. The Filter CHOP applies smoothing or delay effects to refine signals, supporting tasks like noise reduction or lag introduction for more natural motion. Input-focused CHOPs, including the Audio Device In CHOP for capturing real-time audio from hardware devices and the MIDI In CHOP for processing incoming MIDI messages, provide essential entry points for external data streams.[70]

Export mechanisms allow CHOP channels to influence other parts of a TouchDesigner project by linking them to parameters or operators, such as overriding a component's position values to drive visual animations. This is achieved through the Export CHOP or export flags, where channels can be referenced via expressions like op('math1')[0] to enable dynamic, real-time control of visuals and interactions.[70]
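The relationship between samples, seconds, and frames described above reduces to two divisions. This standalone Python sketch (an analogy, not TouchDesigner API code) converts a channel length between the three units:

```python
# Convert a CHOP channel's length between samples, seconds, and frames,
# given its sample rate and the project frame rate (default 60 FPS).
def length_in_units(num_samples, sample_rate, fps=60):
    seconds = num_samples / sample_rate
    frames = seconds * fps
    return {"samples": num_samples, "seconds": seconds, "frames": frames}

# an audio channel: one second of 48 kHz audio spans 60 frames at 60 FPS
audio = length_in_units(48000, 48000)

# an animation channel sampled at 60 samples/second: samples == frames
anim = length_in_units(60, 60)
```

This is why a 60-samples-per-second animation channel lines up one-to-one with frames at the default 60 FPS, while an audio channel packs 800 samples into each frame at 48 kHz.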
SOP (Surfaces)
Surface Operators (SOPs) in TouchDesigner provide a node-based system for generating, modifying, and combining 3D geometry, enabling procedural construction of surfaces essential for real-time 3D scenes.[71] These operators form the backbone of 3D modeling within the platform, allowing users to build complex structures from basic elements without traditional manual sculpting. SOPs operate in a network editor, where each node processes input geometry to produce output, supporting non-destructive workflows that adapt dynamically to parameter changes or external data inputs.[71]

Geometry primitives in SOPs serve as foundational building blocks for 3D scenes, including points, lines, polygons, and meshes. Operators such as the Circle SOP create 2D outlines that can be extruded into 3D forms, while the Sphere SOP generates parametric spheres and the Grid SOP produces planar meshes for surfaces like floors or terrains.[72][73][74] These primitives can be combined hierarchically; for instance, lines from the Line SOP can connect points to form wireframes, which are then converted to polygonal meshes using the Convert SOP for solid rendering. This approach emphasizes modularity, where primitives are instantiated and refined procedurally to construct scenes ranging from simple shapes to intricate environments.[71]

SOP tools draw inspiration from procedural modeling techniques, offering operations for transformation, deformation, and boolean manipulations to shape geometry dynamically.
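The kind of parametric mesh a Grid SOP emits can be sketched outside TouchDesigner as a point list plus faces that index into it. The following standalone Python example (illustrative only, not the SOP API) builds a planar grid in that representation:

```python
# Build a planar grid of points plus quad faces, the same point/primitive
# structure SOP geometry uses internally. Assumes rows >= 2 and cols >= 2.
def make_grid(rows, cols, size=1.0):
    points = [(c * size / (cols - 1), r * size / (rows - 1), 0.0)
              for r in range(rows) for c in range(cols)]
    quads = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            # four point indices per quad, wound counter-clockwise
            quads.append((i, i + 1, i + cols + 1, i + cols))
    return points, quads

points, quads = make_grid(3, 3)   # 3x3 points -> 2x2 quad faces
```

Because the mesh is a pure function of its parameters, changing `rows`, `cols`, or `size` regenerates the whole surface, which is the essence of the non-destructive, procedural workflow SOPs provide.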
The Transform SOP applies translations, rotations, and scales in object space, enabling precise positioning of entire networks or individual elements.[75] Deformation tools like the Noise SOP introduce organic distortions by applying procedural noise to vertex positions, while the Twist SOP bends geometry along an axis for effects such as helical structures.[76] Boolean operations via the Boolean SOP perform unions, intersections, or differences on closed polygonal sets, facilitating clean cuts and combinations for complex forms like architectural details.[77] These tools support real-time adjustments, making them ideal for interactive applications where geometry evolves based on user input or simulations.

SOP geometry integrates seamlessly with lighting and camera components to establish complete 3D scenes within a Geometry COMP. The Geometry COMP houses SOP networks, which are rendered using a Render TOP that incorporates Light COMPs for illumination—such as directional or point lights affecting surface shading—and Camera COMPs for viewport perspective, with parameters like Look At enabling dynamic targeting.[78][79][80] Display and Render Flags control visibility in viewers, ensuring SOP outputs respond to scene-wide transforms and masks for selective lighting.
Materials from the MAT family can be applied to these surfaces for texturing, though detailed shading is handled separately.[78]

While powerful for procedural tasks, SOPs have limitations: they are not a comprehensive digital content creation (DCC) tool like Blender or Maya, prioritizing real-time performance over advanced simulation or high-fidelity modeling.[71] They run on the CPU, which can constrain complex computations compared to GPU-accelerated alternatives, and the platform recommends importing polished models via the File In SOP from external DCC software for production-grade assets.[81] This focus on real-time proceduralism suits interactive media but may require supplementation for photorealistic or static content workflows.[82]

MAT (Materials)
MAT operators in TouchDesigner provide shading and texturing capabilities for 3D surfaces generated by SOP operators, enabling realistic rendering through shaders applied to geometry within components like the Geometry COMP. These materials define how light interacts with surfaces, supporting a range of shader models to achieve effects from basic lighting to physically accurate simulations. MATs are essential for integrating lighting, textures, and custom effects in interactive 3D scenes, outputting to render pipelines for final display or further processing.[83]

TouchDesigner offers several built-in shader types within MAT operators, including the Phong MAT, which implements the classic Phong shading model for per-pixel calculations of ambient, diffuse, and specular reflections, requiring normal attributes for specular highlights and texture coordinates for map application. The PBR MAT employs a physically based rendering approach, compatible with environment lighting via the Environment Light COMP and procedural textures from Substance Designer files imported through the Substance TOP, facilitating realistic material representations with metallic, roughness, and other surface properties. For advanced customization, the GLSL MAT allows users to write or import vertex, pixel, and geometry shaders using GLSL code via DAT operators, supporting selectable GLSL versions, preprocessor directives, and inheritance of uniforms from other GLSL MATs to extend or modify shading behaviors.[84][85][86]

Material mapping in TouchDesigner MAT operators enhances surface realism through UV-based texturing, where coordinates on SOP geometry drive the application of maps such as color/diffuse, specular, emission, metallic, roughness, and ambient occlusion textures. Normal mapping, requiring tangent space attributes on the geometry, simulates detailed surface perturbations via bump or normal maps without altering the underlying mesh topology, adding depth and detail to shaded surfaces.
Environment textures, typically spherical or cubic HDR maps, contribute to reflective and refractive effects, particularly in PBR MATs when paired with environment lighting components, allowing for dynamic scene reflections based on surrounding illumination.[84][85][86]

Render states in MAT operators control how geometry interacts with the rendering pipeline, including blending for transparency effects, where the Phong and PBR MATs use configurable source and destination blend factors to combine pixel colors, such as source alpha with one minus source alpha for semi-transparent overlays. Depth testing ensures correct occlusion by comparing fragment depths against the depth buffer, with options to enable or disable testing via comparison functions like less-than or equal-to, and toggles for writing depth values to support layered rendering in complex scenes. These states are adjustable per MAT to achieve effects like additive blending for glows or z-sorting for transparent objects.[84][85]

MAT operators feed TouchDesigner's render engine through the Material parameter of Object Components, such as the Geometry COMP, which then renders the shaded SOP geometry via the Render TOP for output to displays, videos, or further compositing. Additionally, Phong and PBR MATs include an Output Shader parameter that generates GLSL code into a GLSL MAT and Text DAT, allowing shader adaptation or reuse in custom rendering workflows outside standard TouchDesigner pipelines.[83][84][85]

DAT (Data)
DAT operators in TouchDesigner are designed to manage discrete, non-temporal data such as text strings, tables, and scripts, distinguishing them from time-based channel data handled by CHOP operators.[43] These operators enable the storage, manipulation, and execution of structured and unstructured text data within visual programming networks, facilitating tasks like data import, scripting, and network communication.[43]

The Table DAT serves as the primary tool for handling tabular data akin to spreadsheets, allowing users to create, edit, and populate rows and columns with text strings.[87] It supports manual editing through an interactive viewer or procedural generation via fill types such as "Set Size and Contents" or "Fill by Row," where expressions like me.subRow can dynamically compute cell values.[87] For importing data, the Table DAT accommodates formats including CSV and TSV files, which can be loaded directly from local paths, drag-and-drop operations, or web URLs starting with http://.[87] JSON data can be imported indirectly by first fetching it via the Web DAT and then parsing it into a table structure.[45] Outputs from the Table DAT can be exported to operator parameters, enabling data-driven control of other elements in a TouchDesigner project, such as linking table values to component positions or effects.[87]
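The CSV import described above amounts to parsing delimited text into rows of string cells with a header row. This standalone Python sketch (using the standard library, not the Table DAT API) mirrors that structure, including header-based cell lookup:

```python
import csv
import io

# Parse CSV text into rows of cells, first row treated as headers,
# mirroring how a Table DAT holds imported spreadsheet data as strings.
csv_text = "name,x,y\ncircle1,0,0\ncircle2,100,50\n"
rows = list(csv.reader(io.StringIO(csv_text)))
headers, data = rows[0], rows[1:]

# look up a cell by header name (all cells arrive as strings)
x_col = headers.index("x")
x_of_circle2 = data[1][x_col]
```

Note that every cell is a string after import; values driving numeric parameters would be converted downstream, just as Table DAT cells are text until evaluated.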
Complementing tabular handling, the Text DAT is used for storing and editing multi-line ASCII text, including scripts, notes, XML, or shader code like GLSL.[88] It allows loading from external files in .txt or .dat formats, with options for real-time synchronization to disk and automatic reloading on project startup.[88] While the Text DAT itself does not execute code, its contents can drive scripting workflows by feeding into execute DATs that trigger Python interpretations based on events.[88]
For direct script execution, the Script DAT provides a Python-based environment where code runs each time the operator cooks, modifying an output table derived from optional inputs.[44] It features callback methods like cook() for per-frame processing, setupParameters() for initialization, and onPulse() for custom triggers, allowing dynamic data transformations such as appending rows or altering cell contents programmatically (e.g., op('table1').appendRow('new data')).[44] This operator integrates seamlessly with broader scripting extensibility in TouchDesigner by outputting results to parameters or other operators.[44]
Web integration expands DAT capabilities for UI and networking through specialized operators like the Web DAT and WebSocket DAT.[45] The Web DAT fetches HTML or JSON content from URLs using GET or POST methods, automatically handling gzip decompression and formatting XML with indentation; for JSON, custom headers such as Content-Type: application/json ensure proper parsing into text that can be further processed by Table or Text DATs.[45] Meanwhile, the WebSocket DAT establishes bidirectional communication with web servers, receiving messages as a FIFO table of rows and supporting secure TLS connections without manual header configuration, ideal for real-time data exchange in interactive applications.[89]
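The Web DAT to Table DAT handoff for JSON boils down to parsing the fetched text and flattening it into rows. This standalone sketch (payload hard-coded in place of an HTTP fetch; field names are illustrative) shows that step with the standard library:

```python
import json

# JSON text as the Web DAT would deliver it after a GET request
payload = '{"sensors": [{"id": "temp1", "value": 21.5},' \
          ' {"id": "temp2", "value": 19.0}]}'
data = json.loads(payload)

# flatten the parsed structure into header + data rows for a table
rows = [["id", "value"]]
for sensor in data["sensors"]:
    rows.append([sensor["id"], sensor["value"]])
```

From here the rows could populate a table and drive parameters elsewhere in a network, which is the typical end point of this fetch-parse-tabulate chain.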
The SocketIO DAT leverages the Socket.IO C++ Client API (version 3.1.0) to connect to compatible v3 or v4 servers.[90] It listens for server-emitted events specified in a single-column input table, invoking the onReceiveEvent Python callback per message, and allows emitting events via the emit method, with support for additional headers to facilitate seamless integration in networked UI scenarios.[90] This operator builds on WebSocket foundations by adding reconnection logic with configurable delays, enhancing reliability for live data streams.[90]
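Reconnection with a configurable delay is a generic retry pattern; the following standalone Python sketch illustrates the idea, not the SocketIO DAT's actual implementation (the `sleep` hook is injectable here so the example runs instantly):

```python
# Retry a connect function up to `attempts` times, waiting `delay`
# seconds between attempts, re-raising the last error on failure.
def connect_with_retry(connect, attempts=3, delay=1.0, sleep=lambda s: None):
    last_error = None
    for attempt in range(attempts):
        try:
            return connect()
        except ConnectionError as exc:
            last_error = exc
            if attempt < attempts - 1:
                sleep(delay)   # configurable back-off between attempts
    raise last_error

# a fake endpoint that refuses twice, then accepts on the third attempt
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("refused")
    return "connected"

result = connect_with_retry(flaky, attempts=5, delay=0.5)
```

Keeping the retry loop outside the connection logic, as here, is what lets a live installation survive transient server drops without user intervention.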