
GNU parallel

GNU Parallel is a command-line tool for Unix-like operating systems that executes jobs in parallel across one or more computers, where a job consists of a single command or small script applied to multiple inputs. Developed by Ole Tange and integrated into the GNU Project, it automates the parallelization of tasks to utilize idle CPU cores or cluster nodes efficiently, often outperforming manual scripting for repetitive computations. The tool reads inputs from files, command lines, or standard input, substituting those inputs into a command template to run jobs concurrently while managing output ordering and resource limits. GNU Parallel supports remote execution via SSH, job queuing with external systems like Slurm, and features like context replacement for complex workflows, making it valuable in high-performance computing and bioinformatics environments for tasks such as batch data processing and simulations. First released in 2009, it has evolved through community contributions and remains actively maintained, with documentation emphasizing its design for simplicity and extensibility over specialized alternatives. No significant controversies surround its development, though its power invites careful use to avoid overwhelming systems with unchecked parallelism.

Overview

Purpose and Core Functionality

GNU Parallel is a command-line utility designed to execute shell commands or short scripts concurrently across multiple processes on a single local machine or distributed over remote hosts via SSH, thereby enabling efficient parallelization of computational workloads from the shell without requiring custom parallel programming frameworks. It transforms sequential command invocations into parallel ones by automatically managing job distribution to available CPU cores or nodes, optimizing resource utilization for tasks where individual jobs operate independently. At its core, GNU Parallel accepts input lines—typically from standard input (stdin), command-line arguments, or files—and substitutes these as placeholders in a specified command template, launching multiple instances of the command in parallel while handling job queuing, output serialization to avoid interleaving, and resource limits such as the number of concurrent jobs (defaulting to the number of available cores). This mechanism replaces manual loops or background processes (e.g., via & in Bash) with declarative parallelism, reducing scripting overhead and minimizing idle CPU time during execution. The tool excels in scenarios involving embarrassingly parallel operations, such as batch file processing (e.g., applying transformations to thousands of data files), independent simulations, or repetitive pipelines where job outputs do not depend on each other, allowing full exploitation of multicore processors or cluster resources to achieve speedup roughly proportional to the number of job slots. By default, it limits concurrency to the detected core count to prevent system overload, though users can adjust this via options like -j for finer control.
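
As a minimal sketch of this substitution model (the .log files are placeholders), the following contrasts a sequential shell loop with its declarative GNU Parallel equivalent:

    # Sequential: compresses one file at a time
    for f in *.log; do gzip "$f"; done

    # Parallel: same work, one gzip job per CPU core by default
    parallel gzip ::: *.log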

Author and Licensing

Ole Tange, a bioinformatician based in Copenhagen, developed GNU Parallel in response to deficiencies in existing command-line tools for parallel job execution, with initial copyrights dating to 2008. Tange's background in bioinformatics informed the tool's design for handling compute-intensive tasks, such as processing large datasets on multicore systems or clusters. GNU Parallel is distributed under the GNU General Public License version 3 or later, a license that mandates source code availability, allows modification and redistribution, and ensures compatibility with the GNU Project's ethos while prohibiting proprietary derivatives. As the primary maintainer, Tange sustains an individual-led development model under GNU auspices, releasing updates roughly monthly since April 2010 to incorporate enhancements and fixes promptly, differing from the irregular cadences of alternatives such as traditional xargs implementations.

History

Origins as Independent Tools

GNU Parallel traces its origins to two independent command-line tools developed in the early 2000s to overcome limitations in standard Unix utilities for argument handling and job execution. The xxargs tool, drawing from prior efforts to fix quoting and whitespace issues in xargs—described by developer Cameron Simpson in 2001 as a "busted piece of crap" due to its failure to properly delimit arguments containing spaces or special characters—incorporated basic xargs-like features such as -0 for null-delimited input, -n for maximum arguments per line, -x for line length checks, and {} for string replacement. Concurrently, the parallel tool emerged as a simple script authored by Ole Tange on January 6, 2002, functioning as a wrapper that generated temporary Makefiles to enable parallel job execution via make -j, addressing the absence of native parallelism in shells for processing lists of commands. This prototype allowed users to pipe command lines into parallel <number>, distributing jobs across available cores but suffering from issues like ungrouped output and errors in complex scenarios. These tools arose from practical frustrations with xargs' inability to reliably parse and quote arguments in pipelines involving filenames with spaces or metacharacters, compounded by the lack of straightforward mechanisms for concurrent execution, which necessitated error-prone hacks such as manual background processes or improvised scripting. Ole Tange, a bioinformatician, developed them amid needs for efficient parallelization in computational workflows, where sequential tools bottlenecked repetitive tasks on multi-core systems. By 2005, frequent pairing of xxargs for input preprocessing with parallel for execution prompted their merger into a single "parallel" binary under Tange's maintenance, abandoning separate xxargs development while preserving its quoting enhancements. Early iterations circulated informally among users via personal distribution and later nongnu.org hosting starting in 2007, prior to formalized packaging.

Development and GNU Project Integration

In 2010, the tool originally known as "parallel" was adopted as an official GNU package, with its name changed to GNU Parallel to reflect its integration into the GNU ecosystem and to distinguish it from prior implementations such as Tollef Fog Heen's version. This adoption followed evaluations ensuring compatibility with GNU principles, including licensing under the GNU General Public License version 3 or later, which mandates source code availability and permits modification and redistribution. The transition addressed distribution challenges of earlier multi-file modules by consolidating GNU Parallel into a single, self-contained Perl script, enhancing portability across systems without requiring additional dependencies beyond a standard Perl installation. Ole Tange, the primary developer working in bioinformatics, drove this evolution to meet demands in bioinformatics and large-scale data processing, where sequential command execution proved inefficient for repetitive tasks. Community feedback, gathered through mailing lists and bug reports on the GNU Savannah platform, has since influenced iterative improvements, such as refined remote execution capabilities introduced shortly after official adoption. Tange's 2011 publication in USENIX's ;login: magazine further disseminated the tool's design rationale, emphasizing robust input handling and quoting mechanisms tailored for real-world scripting needs in scientific workflows. By 2011, GNU Parallel's GNU status facilitated broader adoption via official repositories and package managers, with ongoing releases—such as version 20101113—incorporating enhancements like SQL database integration for job logging, reflecting sustained development aligned with user-reported requirements. This integration underscored the GNU Project's commitment to command-line utilities that prioritize efficiency and extensibility, while maintaining backward compatibility to minimize disruption for existing scripts.

Key Milestones and Release Cadence

GNU Parallel achieved its initial stable release as an official GNU package in 2010, marking its formal integration into the GNU Project after prior development as independent tools. This version established core parallel execution capabilities on local systems, with subsequent early updates focusing on robustness and usability enhancements. By 2011, support for remote execution via SSH was added, allowing users to distribute jobs across multiple networked machines without requiring specialized software, provided SSH access was configured. This feature addressed a critical need for scalable parallelism in distributed environments, as evidenced by its inclusion in documentation and tutorials from that period onward. Significant functional milestones included the refinement of the --pipe option for handling streaming inputs, which splits stdin into blocks and pipes them concurrently to commands, enabling efficient processing of large, continuous data flows such as logs or program outputs. Additionally, integration with SQL databases for job queuing and logging emerged as a later advancement, permitting storage of job variables, outputs, and status in database tables via DBURL syntax, which facilitates advanced tracking and resumption in long-running workflows. The project adheres to a rigorous monthly release cadence, typically on or around the 22nd of each month, prioritizing bug fixes, performance optimizations, and minor feature additions while preserving backward compatibility to minimize user disruption. For example, version 20241122, released on November 22, 2024, incorporated targeted corrections and refinements based on reported issues. This predictable schedule, maintained for over a decade, reflects responsiveness to community-submitted patches and usage patterns, ensuring sustained relevance in evolving computing environments.

Design Principles

Architecture and Implementation

GNU Parallel is implemented as a single Perl script, designed to minimize external dependencies and facilitate portability across systems with Perl installed, avoiding the need for compilation or multiple files. This monolithic structure encapsulates an object-oriented design where classes are defined within the same file, diverging from conventional Perl practices to simplify distribution and installation. The script parses command-line arguments and input streams—typically from standard input or files—into discrete jobs by substituting predefined replacement strings, such as {} for full arguments or {.} for arguments without extension, ensuring each job receives properly quoted parameters to prevent shell misinterpretation and injection via crafted arguments. The core worker model employs a master process that maintains a job queue derived from the parsed input and spawns worker subprocesses up to a configurable number of slots, defaulting to the detected number of CPU cores for local execution or scaled by host specifications for remote setups. Local workers are forked and execute commands via exec, while remote workers are invoked over SSH with a wrapper that establishes bidirectional communication for job dispatch and completion signaling. Load balancing occurs dynamically as the master assigns the next queued job to a freed slot upon worker completion, maintaining full utilization without predefined scheduling algorithms, which supports reliable parallelism for independent tasks by isolating executions in separate processes and avoiding shared mutable state. By default, jobs complete and output in arbitrary order as they finish, prioritizing throughput over sequencing to minimize wait times in unbalanced workloads, since the master imposes no ordering beyond slot limits. For deterministic ordering, the --keep-order option buffers each job's output in temporary files—consuming up to four file descriptors per delayed job—releasing them only after all preceding inputs have completed, thus preserving input order without introducing significant latency unless file descriptor exhaustion occurs. This approach ensures efficiency in simple, embarrassingly parallel scenarios by decoupling execution from output delivery, mitigating potential races in output handling while assuming user-provided commands are stateless.
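
The ordering behavior described above can be observed directly; in this sketch the sleep durations stand in for jobs of unequal length:

    # Default: output appears in completion order (1, 2, 3)
    parallel 'sleep {}; echo {}' ::: 3 1 2

    # --keep-order buffers output so it follows input order (3, 1, 2)
    parallel --keep-order 'sleep {}; echo {}' ::: 3 1 2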

Input Processing and Quoting Mechanisms

GNU Parallel ingests input data from multiple sources, including standard input (stdin, typically via a pipe), files designated with the -a or --arg-file option, and literal argument lists supplied after the ::: separator on the command line. This flexibility allows seamless integration into shell pipelines or scripts where data originates from varied formats, such as command outputs or pre-existing lists. By default, GNU Parallel treats each line of input as a single argument for replacement tokens like {}, splitting on newlines while preserving embedded whitespace within lines. For finer control over tokenization, the record delimiter can be changed with --delimiter (-d), while --colsep splits each record into columns, enabling positional replacements such as {1}, {2}, and so on. This mechanism accommodates inputs requiring field-based decomposition, such as CSV-like data, without relying on external preprocessing tools; the column separator is itself interpreted as a Perl-compatible regular expression, supporting complex splitting patterns while maintaining efficiency in parallel contexts. To ensure argument safety, particularly with filenames or data containing shell metacharacters (e.g., spaces, quotes, or asterisks), GNU Parallel quotes substituted arguments automatically so that a value such as file name.jpg is passed intact as 'file name.jpg' rather than split or expanded, thereby preserving integrity during command invocation; the --quote (-q) option additionally quotes special characters in the command itself. For arbitrary filenames or null-terminated streams, the --null (-0) option interprets input delimited by NUL bytes (ASCII 0) instead of newlines, facilitating safe parallelization of outputs from tools like find -print0, where newlines or whitespace in paths would otherwise cause splitting errors. These features collectively enhance robustness in data ingestion, minimizing parsing failures in diverse environments, as demonstrated in examples where find . -print0 | parallel -0 process_file {} handles arbitrary filenames without additional escaping.
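
Two brief sketches of these ingestion modes; process_file and the sample fields are hypothetical:

    # Null-delimited input: safe for paths containing spaces or newlines
    find . -type f -print0 | parallel -0 process_file {}

    # Column splitting: --colsep exposes fields as {1}, {2}, ...
    parallel --colsep ',' echo 'name={1} value={2}' ::: a,1 b,2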

Features

Local and Remote Parallel Execution

GNU Parallel enables local parallel execution by launching multiple independent jobs concurrently on the host machine, leveraging available CPU cores for improved throughput in compute-bound workflows. The -j option governs concurrency, defaulting to one job per detected core to optimize utilization while avoiding excessive context switching or overload; for instance, -j+1 permits one additional job beyond the core count for I/O-bound tasks, whereas explicit limits like -j 4 cap execution slots to prevent saturation on systems with eight or more cores. This slot-based model ensures predictable scaling, as empirical benchmarks on multi-core hardware demonstrate near-linear speedup for tasks up to the core limit, beyond which diminishing returns occur due to overhead. For remote execution, GNU Parallel distributes jobs across networked hosts via SSH (with the remote shell command configurable), treating remote systems as extended compute slots. Hosts are defined via --sshlogin or a file listing targets (e.g., 8/user@host to assign eight slots on a host), enabling seamless job dispatching with load balancing toward idle slots; required files can be staged automatically with --transfer, and environment variables exported with --env to maintain consistency across nodes. This facilitates cluster-scale parallelism without custom orchestration, as jobs inherit local stdin/stdout redirection unless overridden. Heterogeneous environments are supported by assigning variable slots per host (e.g., --sshlogin 8/server1,4/server2), proportionally allocating workload to match differing core counts or capacities, thus sustaining efficiency in mixed hardware setups like ad-hoc clusters. The --eta flag enhances monitoring by reading all jobs up front to estimate the remaining time in seconds, accounting for observed job durations and the remaining queue, which aids in resource planning for distributed runs spanning dozens of nodes. Output from concurrent jobs is grouped per job by default (--group) to keep it readable; for workloads with correlated records, --group-by can group input by a column value when combined with --pipe, though true inter-job dependencies require external scripting.
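
A sketch of a mixed local/remote invocation, under the assumptions that passwordless SSH is configured and that the hypothetical ./process writes its result to {}.out:

    # 8 slots on server1, 4 on server2, plus the local machine (:)
    parallel --sshlogin 8/server1,4/server2,: \
        --transfer --return {}.out --cleanup \
        ./process {} ::: data/*.bin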

Advanced Options for Job Control and Output Handling

GNU Parallel provides several options for precise output management during parallel execution. The --results option directs stdout, stderr, and associated metadata for each job into a structured directory hierarchy, typically named by job sequence or input arguments, facilitating post-run analysis and debugging in large-scale workflows. For visual progress feedback, --bar renders a progress bar showing the fraction of completed jobs, updated in place without interrupting output streams, which aids in tracking long-running batches on terminals. To mitigate buffering-induced delays in interleaved output from concurrent jobs, --line-buffer flushes complete lines immediately while permitting mixing across jobs, preserving line-level readability while retaining parallelism. Job reliability and resource allocation are enhanced through control mechanisms like --retries, which automatically reattempts jobs failing with non-zero exit codes up to a specified count, improving resilience in unreliable environments such as remote clusters. The --timeout flag enforces per-job duration limits, terminating processes that exceed them, essential for preventing runaway jobs in production pipelines. Priority adjustments via --nice apply a niceness increment to jobs, both locally and on remote hosts, enabling quality-of-service differentiation by yielding CPU to higher-priority tasks without altering core parallelism. For hybrid environments, GNU Parallel integrates with GNU Make's jobserver protocol by parsing --jobserver-auth from the MAKEFLAGS environment variable, allowing it to acquire and release slots dynamically, thus capping total concurrent jobs across Make recursions and Parallel invocations based on Make's configured limits. This coordination supports runtime feedback loops where slot availability reflects system load, preventing overload in build systems combining scripted parallelism with Make's recipe-level parallelism. Additionally, --sqlandworker enables logging job metadata, variables, and outputs directly to SQL databases (e.g., SQLite, PostgreSQL) or CSV/TSV files via DBURL schemes, supporting queryable archives for workflow auditing and selective re-execution.
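
A combined sketch of these reliability options; flaky_task and the input files are placeholders:

    # Retry failed jobs up to 3 times, kill any job after 60 seconds,
    # and record per-job output and status for later auditing
    parallel --retries 3 --timeout 60 \
        --results outdir/ --joblog jobs.log \
        flaky_task {} ::: inputs/*.dat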

Usage

Basic Command Syntax

The core syntax of GNU Parallel invokes the tool as parallel [options] [command [initial arguments]] ::: [job arguments...], where the ::: delimiter separates the command template from the list of arguments to parallelize, each argument substituting into the template as one job. Alternatively, arguments can be supplied via standard input, as in parallel [options] command < inputfile, which reads lines from the file and treats each as a job argument. If no command template is given, each input line (or each argument after :::) is itself executed as a command, though the explicit template form is clearer for command construction. Replacement strings in the command template facilitate argument substitution and manipulation: {} denotes the full job argument and is automatically appended if absent from the command, so parallel echo ::: foo bar expands to echo foo and echo bar run in parallel. For path-based arguments, additional strings enable templating: {.} yields the argument minus its extension (removing text from the last . onward, giving the stem), {/} the basename, {//} the directory path, and {/.} the basename without extension, supporting operations like processing files in varying locations without manual path parsing. Parallelism level is set via the -j or --jobs option: -jN limits execution to exactly N simultaneous jobs regardless of system load, while -j0 removes the artificial cap and runs as many jobs as resources permit—typically bounded by available cores for CPU-intensive workloads or set higher for I/O-bound ones to maximize throughput without explicit tuning. This option integrates with other flags but forms the foundation for controlling concurrency in basic invocations.
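
The effect of these replacement strings can be previewed with --dry-run, which prints the composed commands without executing them; the path below is illustrative:

    # For the input /tmp/dir/file.txt the strings expand as:
    #   {}=/tmp/dir/file.txt   {.}=/tmp/dir/file   {/}=file.txt
    #   {//}=/tmp/dir          {/.}=file
    parallel --dry-run echo {} {.} {/} {//} {/.} ::: /tmp/dir/file.txt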

Practical Examples for Common Tasks

GNU Parallel facilitates efficient parallel execution for routine Unix tasks, such as batch file processing, by distributing jobs across available CPU cores or remote hosts via SSH. For instance, compressing multiple text files locally can be achieved with the command parallel gzip {} ::: *.txt, where {} is replaced by each filename from the glob expansion, allowing concurrent gzip invocations limited by the system's core count or a specified -j parameter. This approach yields near-linear speedups for compression tasks on multicore systems; tests on datasets with thousands of files, such as 2580 text files, demonstrate completion times reduced roughly in proportion to the number of parallel jobs, assuming sufficient disk bandwidth. For distributed processing across multiple machines, GNU Parallel supports remote execution by specifying a list of SSH-accessible hosts. A practical example is parallel --sshloginfile hosts.txt --trc {}.out myprog {} ::: inputfiles, which transfers each input file to a remote node, executes myprog on it, and returns the {}.out result before cleaning up, enabling workload distribution in cluster environments like PBS or Slurm. This is particularly effective for compute-intensive jobs, such as protein docking simulations, where local resources are insufficient; in one reported setup using UCSF Dock 6.7, it launched numerous jobs across nodes, achieving throughput scaling with available remote CPUs while relying on SSH key-based authentication to minimize overhead. Streaming parallelism handles large inputs without full materialization in memory, using --pipe to partition stdin into blocks for concurrent processing. An example for block-based compression is cat largefile | parallel --pipe --block 1M -k gzip > largefile.gz, where -k preserves block order and --block 1M chunks data into 1-megabyte units, each piped to a gzip instance; because concatenated gzip streams are themselves valid gzip data, the result decompresses correctly. Efficiency gains are evident in high-throughput scenarios, such as processing terabyte-scale logs, where parallel block handling saturates I/O pipelines and exploits multiple cores, though Perl's byte-level overhead in --pipe limits gains for very small blocks compared to native parallel compressors such as pigz.
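
The streaming example written out as a runnable pipeline; largefile is a placeholder, and the final line verifies the round trip:

    # Split stdin into 1 MB blocks, gzip each block concurrently,
    # and emit them in input order (-k); concatenated gzip streams
    # form a valid .gz file
    cat largefile | parallel --pipe --block 1M -k gzip > largefile.gz
    zcat largefile.gz | cmp - largefile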

Comparisons with Alternatives

Versus Traditional xargs

GNU xargs enables parallel execution via the -P option, which specifies a fixed maximum number of concurrent processes, but it lacks automatic adaptation to the system's CPU core count, necessitating manual adjustment for efficient multi-core utilization. GNU Parallel, by default, allocates job slots equal to the number of available CPU cores, facilitating immediate and optimal parallelism on modern hardware without user intervention. xargs processes input by splitting on whitespace or newlines, which frequently results in argument fragmentation when dealing with filenames or strings containing spaces, quotes, or special characters like backslashes, unless null-delimited input (-0) is enforced—a convention not supported by all upstream tools, many of which emit only newline-separated output. GNU Parallel mitigates these quoting failures by parsing input line-by-line as discrete arguments and applying safe expansion rules, ensuring integrity even for inputs with embedded special characters, thus avoiding erroneous command invocations. In terms of performance, xargs demonstrates lower per-job overhead (around 0.3 ms), rendering it quicker for straightforward, local serial-to-parallel conversions on trivial inputs. GNU Parallel introduces modestly higher overhead (approximately 3 ms per job) due to its enhanced feature set, yet excels in scaling for remote and distributed workloads through native SSH integration, automatic file transfer, and load distribution—features xargs entirely omits. Both accommodate null-terminated inputs (--null in Parallel mirroring -0 in xargs), but Parallel augments this with versatile delimiter customization and refined input parsing for greater robustness across varied data streams. Parallel execution with xargs risks output interleaving, where lines from concurrent processes merge unpredictably, hindering downstream analysis; GNU Parallel counters this via explicit grouping and sequencing controls (--group, --keep-order) to maintain output coherence.
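
A side-by-side sketch of the same batch task in both tools; the .csv files are placeholders:

    # xargs: concurrency is a fixed explicit count, not the core count
    printf '%s\0' *.csv | xargs -0 -n1 -P4 gzip

    # GNU Parallel: defaults to one job per core and groups output per job
    parallel gzip ::: *.csv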

Versus Other Parallelization Tools

GNU Parallel provides superior flexibility over GNU Make for non-build parallelization tasks, integrating natively into shell pipelines without the need for Makefile syntax or dependency graphs. GNU Make, optimized for software compilation with rule-based execution, incurs additional overhead from parsing makefiles, which can exceed Parallel's per-job startup time of 2-5 ms, particularly disadvantageous for short, independent commands lacking interdependencies. This shell-centric approach enables rapid ad-hoc execution of arbitrary commands, avoiding Make's boilerplate for scenarios like batch data processing or script batching where build semantics are irrelevant. In contrast to cluster schedulers such as SLURM, GNU Parallel supports lightweight, dependency-free parallelization for ad-hoc local or remote runs, bypassing the submission queues and delays inherent to enterprise HPC environments. SLURM excels in managed, large-scale queuing with fair-share policies but introduces scheduling overhead unsuitable for immediate, opportunistic tasks; GNU Parallel, by dispatching processes directly, achieves lower latency for such uses while coupling effectively within SLURM jobs for intra-node parallelism. It forgoes advanced features like job prioritization, limiting scalability in contended clusters without scheduler integration. GNU Parallel demonstrates an empirical advantage in bioinformatics and HPC workflows for managing variable job lengths, dynamically reallocating slots to minimize idle cores, unlike the static partitioning of schedulers. Benchmarks on exascale systems such as OLCF's Frontier show it scaling linearly for over 1 million heterogeneous tasks with overhead under 1%—processing 1.152 million jobs in 561 seconds—outperforming SLURM's srun in dispatch efficiency for simulations and data pipelines. Compared to specialized tools like pmap, which exhibit blocking on uneven completions and greater rigidity (e.g., 1.2 ms overhead but no remote support), Parallel's adaptive queuing better suits fluctuating runtimes in simulation or modeling workloads, though it requires manual tuning for extreme short-job volumes exceeding 200 jobs per second.
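
A minimal sketch of the intra-node coupling with Slurm mentioned above; the batch directives, simulate binary, and parameter files are assumptions:

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=32
    # Slurm allocates the cores; GNU Parallel keeps them busy
    parallel -j "$SLURM_CPUS_PER_TASK" ./simulate {} ::: params/*.conf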

Reception and Impact

Adoption in Computing Workflows

GNU Parallel has seen significant adoption in high-performance computing (HPC) environments for high-throughput workflows, where it facilitates parallel execution of shell commands on single nodes without the overhead of full job schedulers. At NERSC, it is recommended for tasks requiring rapid process dispatching, as demonstrated in evaluations on supercomputers like Perlmutter, which showed its efficiency in embarrassingly parallel workloads across thousands of cores. UC Berkeley's Research IT documentation similarly endorses its use for automating parallelization of multiple serial or parallelizable programs on cluster nodes, integrating seamlessly with Slurm for resource management. In bioinformatics pipelines, GNU Parallel is routinely applied to accelerate compute-intensive operations, such as distributing sequence alignments across input files or processing large genomic datasets in parallel, thereby reducing runtimes from days to hours on multi-core systems. This adoption stems from its ability to handle variable inputs and outputs without custom scripting, as noted in lab-specific guides from research computing facilities. Beyond research, the tool enhances everyday workflows by parallelizing routine tasks like batch image resizing or web scraping, distributing jobs across local cores or remote SSH hosts to exploit idle capacity. Its 2011 USENIX publication by developer Ole Tange, which has accumulated over 1,450 scholarly citations, underscores its empirical impact, and the software's citation notice requests attribution in derived academic outputs to support reproducible workflows. Community feedback reinforces its productivity gains, with users on Reddit describing it as a "godlike" utility for unlocking concurrency in shell scripts without delving into language-specific libraries such as Python's multiprocessing, enabling faster iteration in data processing pipelines. Similar sentiments appear on Hacker News, where practitioners highlight its role in transforming sequential command chains into scalable operations for everyday computing tasks.

Criticisms and Limitations

GNU Parallel introduces a per-job overhead of approximately 2 to 10 milliseconds, primarily due to process spawning and management, which can result in CPU underutilization or net slowdowns for extremely short-running tasks lasting under 5 ms, where serial tools like xargs may perform better without such latency. This overhead makes it less suitable for I/O-bound workloads dominated by quick operations unless explicitly tuned with options like -u for unbuffered output, though tuning does not eliminate the base cost. The tool's policy of displaying a persistent citation notice—silenced only by running parallel --citation once and typing "will cite" to acknowledge the publication credit—has drawn complaints for interrupting workflows, particularly in scripted or automated environments, with some labeling it as nagware that prints lengthy messages until satisfied. This enforcement, while aimed at ensuring academic recognition for the author, can disrupt execution in non-interactive contexts without prior configuration. GNU Parallel assumes jobs are independent and lacks native support for dependency graphs or task ordering beyond basic sequencing, rendering it inadequate for workflows involving inter-job prerequisites, such as build systems, where tools like make -j compute and respect such relations upfront. Users must resort to external scripting or wrappers for non-embarrassingly parallel scenarios, increasing complexity. Remote execution demands pre-established passwordless SSH access via key-based authentication to target hosts, as interactive password prompts block parallel operation; this setup friction contrasts with managed cluster environments offering seamless credential handling, potentially delaying deployment in heterogeneous or unsecured networks. Failure to configure SSH keys results in stalled jobs, with no built-in fallback for alternative authentication methods.
