
Rclone

Rclone is a free and open-source command-line program designed to manage, synchronize, and transfer files to and from cloud storage providers, often described as "rsync for cloud storage". It provides a feature-rich alternative to the web-based interfaces offered by cloud vendors, supporting operations like copying, syncing, and mounting over 70 different storage backends, including Amazon S3, Google Drive, Dropbox, Microsoft OneDrive, Backblaze B2, and various file transfer protocols. Developed primarily by Nick Craig-Wood, Rclone originated with its first code commit on November 18, 2012, and has since evolved into a mature tool written in the Go programming language. The project is licensed under the MIT License, making it freely available for personal and commercial use across platforms including Linux, Windows, macOS, and FreeBSD. Among its notable features are server-side transfers to avoid downloading and re-uploading data, client-side encryption and compression via virtual backends, preservation of file timestamps and checksum verification, and the ability to serve files over protocols such as SFTP, HTTP, WebDAV, FTP, and DLNA. These capabilities make Rclone particularly valuable for tasks like data backups, migrations between providers, and automating file management in high-latency environments.

Overview

Description

Rclone is an open-source, multi-threaded command-line program designed for syncing, copying, and managing files across cloud storage and other high-latency systems. It serves as a versatile tool for handling data transfers in environments where network delays are common, such as remote cloud providers. Often referred to as "rsync for cloud storage," Rclone facilitates primary use cases like migrating files between local systems and cloud services or directly between different cloud platforms. It supports over 70 storage backends, including popular options such as Amazon S3, Google Drive, and Dropbox, enabling seamless interoperability without reliance on provider-specific interfaces. Rclone's architecture is optimized for high-latency networks, incorporating features like restartable transfers over limited or intermittent connections to ensure reliability. It includes checksum verification using MD5 or SHA-1 hashes to maintain file integrity during operations and provides detailed progress reporting to monitor transfer status in real time.

Key Features

Rclone employs multi-threading to facilitate parallel file transfers, enabling efficient handling of large datasets by allowing users to adjust the number of concurrent operations via the --transfers flag, which defaults to 4 but can be increased to 250 or higher based on hardware capabilities and backend limits. This parallelism extends to file checking with the --checkers flag (default 8) and multi-threaded streaming for files exceeding 256 MB on supported backends like local filesystems and Amazon S3. For data integrity, Rclone uses checksum verification to compare file hashes and sizes, supporting algorithms such as MD5, SHA-1, SHA-256, and backend-specific variants like Dropbox's DBHASH or OneDrive's QuickXorHash, thereby detecting corruption without full content retransmission. This feature activates with the --checksum option and skips incompatible files, ensuring reliable transfers across diverse cloud providers. Rclone allows mounting of remotes as local filesystems, leveraging FUSE on Linux and macOS systems or WinFsp on Windows to provide transparent access that integrates with native tools and applications as if the data were stored locally. The tool offers robust filtering mechanisms, including pattern-based includes and excludes for selective operations, alongside preservation of metadata such as modification times, permissions, and content (MIME) types where the backend permits, facilitating precise control over file management workflows. Where backend APIs support it, Rclone executes server-side operations, such as direct copying between remotes without local intermediary downloads, which conserves bandwidth and accelerates transfers between compatible providers like Amazon S3 buckets. Per-backend optional features enhance adaptability, including case insensitivity for case-insensitive storage systems such as Dropbox and OneDrive, duplicate file handling through modes like renaming or skipping to resolve conflicts, and fast listing capabilities for rapid traversal of large directories on object stores such as Backblaze B2 and Amazon S3.
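As an illustration of these tuning flags, the commands below sketch a copy run that raises parallelism and forces checksum comparison; the local path /data and the remote name "backup" are placeholders for an already configured setup:

    # copy with 16 parallel transfers and 16 checkers, comparing checksums instead of modification times
    rclone copy /data backup:archive --transfers 16 --checkers 16 --checksum --progress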

Licensing and Development

Rclone is released under the MIT License, which permits free use, modification, and distribution of the software. The copyright has been held by Nick Craig-Wood since the project's inception in 2012. The project is primarily developed by Nick Craig-Wood and is hosted on GitHub, where community members contribute through pull requests following the outlined contributing guidelines. Rclone is written in the Go programming language, enabling cross-platform compatibility across operating systems including Windows, Linux, macOS, and FreeBSD without requiring platform-specific modifications. As of November 2025, Rclone remains under active maintenance with regular releases from its maintainers. The latest stable version, v1.71.2, was released on October 20, 2025, with recent releases featuring enhancements such as the stabilization of the bisync command for bidirectional synchronization, the addition of new S3-compatible backends, and performance tuning options like the --max-connections flag. The project supports ongoing development through a dedicated community forum at forum.rclone.org for discussions and issue reporting, comprehensive documentation hosted on rclone.org, and official container images available on Docker Hub for containerized deployments.

History

Origins and Early Development

Rclone was initiated in 2012 by Nick Craig-Wood as a hobby project to explore the Go programming language and develop a tool for syncing files to cloud storage, beginning with support for OpenStack Swift and quickly extending to services such as Amazon S3 and Google Drive. The project's origins were influenced by the rsync utility's difficulties with the high-latency connections typical of cloud APIs, prompting Craig-Wood to create a more suitable alternative for efficient file transfers over such networks. The first commit occurred on November 18, 2012, marking the start of development on GitHub, where the repository has remained open-source since inception. Early releases, starting with the first public version v0.96 on April 24, 2014, concentrated on core copy and synchronization operations primarily for cloud object storage, with initial backends also including Google Cloud Storage and Amazon S3. These versions emphasized straightforward file management for personal use cases, such as backing up local data to remote providers, which drove initial adoption among hobbyists and individuals seeking reliable offsite storage solutions. The release of v1.00 on July 3, 2014, introduced enhanced configuration for multiple remotes, solidifying its utility beyond single-backend syncing and attracting broader interest within developer communities. The project's expansion accelerated in 2015, incorporating additional backends like Dropbox and Microsoft OneDrive through early community contributions, which broadened its appeal for multi-cloud workflows. A primary motivation throughout this phase was to outperform legacy tools like rsync in latency-prone environments, leading to the implementation of basic multi-threading in v1.20 (released on September 15, 2015) to parallelize transfers and mitigate delays inherent in cloud interactions. This foundational enhancement laid the groundwork for Rclone's growth as a versatile command-line utility.

Major Milestones and Releases

Rclone's development from 2016 onward has been marked by iterative enhancements to its core capabilities, with major releases introducing pivotal features that expanded its utility for cloud storage management. Version 1.33, released on August 4, 2016, introduced the mount functionality for FUSE-based filesystems and crypt remotes using NaCl SecretBox encryption with optional filename encryption to secure data before upload; the release also broadened backend support to additional providers. Version 1.40, released on March 19, 2018, further expanded backend support and improved overall performance and configuration options. Version 1.50, launched on October 26, 2019, advanced the mount functionality by improving compatibility with FUSE on Linux, FreeBSD, and macOS, and enhancing support for WinFsp on Windows, which permitted users to mount cloud storage as local virtual drives for direct access. These updates addressed previous limitations in file handling and case sensitivity, making Rclone more versatile for cross-platform environments. Version 1.53, dated September 2, 2020, refined error handling for large-scale transfers to reduce interruptions during high-volume operations and reworked the VFS layer for better performance. Version 1.59, released on July 9, 2022, implemented a comprehensive metadata framework supporting backends such as the local filesystem and S3. Version 1.60, released on October 21, 2022, expanded on this with improved server-side copy operations for efficient transfers without local downloads and integration with restic via the serve restic command for streamlined backup workflows. Version 1.58, released on March 18, 2022, introduced the experimental bisync command for bidirectional syncing, which was refined in subsequent releases and promoted to stable in v1.71 on August 22, 2025. In 2023, v1.65, released on November 26, 2023, added new backends such as Azure Files and ImageKit, along with the serve s3 command. Version 1.66, on March 10, 2024, introduced directory metadata syncing to preserve directory modification times across operations. As of November 2025, versions 1.68 and later (e.g., v1.68 on September 8, 2024, adding Proton Drive; v1.70 on June 17, 2025, with the convmv command and multi-thread transfer enhancements; v1.71 on August 22, 2025, stabilizing bisync and adding new S3 providers) continued to optimize performance for demanding scenarios, including HPC-focused optimizations such as parallel chunking to handle massive datasets more efficiently. The --fast-list flag, available since early versions, enables accelerated listings on compatible backends.

Installation and Configuration

Installation Methods

Rclone can be installed on various operating systems using several methods, all of which provide a single, self-contained executable with no external dependencies. The official binaries are available for download from the project's website and support multiple architectures, including native builds for ARM64 and x86_64 on Linux, macOS, Windows, and other platforms. The simplest installation approach is downloading the pre-compiled binary directly from rclone.org/downloads. Users select the appropriate zip archive for their platform and architecture, such as rclone-v1.71.2-linux-amd64.zip for x86_64 Linux or rclone-v1.71.2-windows-arm64.zip for ARM64 Windows, extract it, and place the resulting rclone (or rclone.exe on Windows) executable in a directory included in their system's PATH, such as /usr/local/bin on Unix-like systems. This method ensures the latest version without needing compilation or package management tools. For Linux and macOS users, an automated script installation is available via a single command: curl https://rclone.org/install.sh | sudo bash. This downloads, verifies, and installs the stable binary to /usr/bin/rclone, handling architecture detection automatically for both ARM64 and x86_64 systems. A beta version can be installed by appending -s beta to the command. Rclone is also distributed through popular package managers for easier integration with system updates. On Debian and Ubuntu, it can be installed using sudo apt install rclone, though this may provide an older version from the distribution repositories compared to the official releases. On macOS, Homebrew users run brew install rclone, while Windows users can use Chocolatey with choco install rclone or Winget via winget install Rclone.Rclone. For containerized environments, an official Docker image is provided at rclone/rclone on Docker Hub. Users can pull the latest image with docker pull rclone/rclone:latest and run commands such as docker run rclone/rclone:latest version to verify, mounting volumes for configuration and data as needed (e.g., -v ~/.config/rclone:/config/rclone). The image supports linux/amd64 and is suitable for ARM64 hosts via multi-architecture builds. To confirm a successful installation, execute rclone version in a terminal, which displays the installed version and build details, confirming compatibility with the targeted architecture. Once installed, initial configuration of remotes is performed using rclone config, as detailed in subsequent sections.
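The sequence below sketches a typical Linux installation and verification using the methods just described; it assumes curl, sudo, and Docker are available and that default paths apply:

    # automated script install of the stable release
    curl https://rclone.org/install.sh | sudo bash
    # alternatively, pull the official container image and run it with a mounted config directory
    docker pull rclone/rclone:latest
    docker run --rm -v ~/.config/rclone:/config/rclone rclone/rclone:latest version
    # confirm the binary is on PATH and report version and build details
    rclone version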

Configuring Remotes

Rclone's remote configuration is managed via the rclone config command, which initiates an interactive session for creating and editing connections to backends. Users begin by selecting 'n' to create a new remote, then choose the storage provider from a numbered list displayed in the terminal, such as entering the number corresponding to Google Drive. The wizard prompts for essential parameters like the remote name and provider-specific options, ensuring a guided setup process. Authentication varies by backend to accommodate different security models. For OAuth-based services like Google Drive or Dropbox, the tool automatically launches a web browser pointed at the authorization endpoint, where users grant permissions and receive a verification code to paste back into the terminal; this generates access and refresh tokens stored securely in the configuration. Amazon S3-compatible storages require input of an access key ID and secret access key, often sourced from the provider's console, with optional session tokens for temporary credentials. For legacy protocols such as FTP or SFTP, configuration involves providing a host, username, and obscured password, or, for SFTP, specifying a private key file path instead of a password to leverage SSH key authentication. All remote settings are saved to a configuration file at ~/.config/rclone/rclone.conf on Linux and macOS, or %APPDATA%/rclone/rclone.conf on Windows, in a simple INI-like format readable by text editors. To enhance security, users can enable password protection during the config session, which encrypts sensitive fields like tokens and keys using a user-supplied passphrase, prompting for it on subsequent rclone invocations unless set via the RCLONE_CONFIG_PASS environment variable. Manual refinements are possible with rclone config edit, which opens the configuration file in the default system editor for direct tweaks to options like timeouts or endpoints. For deployment across multiple machines, the entire rclone.conf file can be copied, preserving all remotes without re-authentication where tokens are portable. During setup, common issues include provider-imposed rate limits, addressable by adding flags like --tpslimit 10 to throttle operations and avoid temporary bans. OAuth tokens refresh automatically upon expiry if the configuration file remains writable, though manual re-authorization may be needed for revoked access. To confirm a remote's validity post-configuration, execute rclone ls remote: to enumerate files, revealing any connectivity or permission errors early.
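For illustration, the same result can be reached non-interactively with rclone config create and then inspected; the remote name "mys3" and the credential values are placeholders, not working keys:

    # create an S3 remote without the interactive wizard (placeholder credentials)
    rclone config create mys3 s3 provider AWS access_key_id AKIAEXAMPLE secret_access_key EXAMPLEKEY region us-east-1
    # print the resulting INI-style entry from rclone.conf
    rclone config show mys3
    # verify connectivity by listing top-level buckets
    rclone lsd mys3: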

Core Functionality

Supported Backends

Rclone supports over 70 backends for cloud storage services and transfer protocols as of 2025, allowing users to manage files across diverse systems through a unified interface. These backends are categorized by type, with variations in capabilities such as authentication methods, transfer efficiency, and metadata handling that influence their suitability for different use cases. Object storage backends, including Amazon S3, Google Cloud Storage, and Backblaze B2, provide robust support for server-side copying and multipart uploads, enabling efficient handling of large-scale data transfers without downloading files to the local machine. Consumer cloud backends such as Google Drive, Microsoft OneDrive, Dropbox, and Box rely on OAuth-based authentication and incorporate quota management features to monitor and respect storage limits imposed by the providers. Protocol-based backends encompass FTP, SFTP, WebDAV, and HTTP, which facilitate connections to traditional file servers and network shares, though they typically lack advanced cloud-specific optimizations like server-side operations. Specialized backends include Mega, which implements client-side encryption for secure file storage, and options like Jottacloud and pCloud that offer zero-knowledge encryption to ensure user privacy without provider access to data. Recent additions to Rclone's backends as of 2025 include Proton Drive, emphasizing privacy-focused storage, and iCloud Drive for integration with Apple's ecosystem. Feature support varies significantly across backends, affecting operations like integrity checks, timestamp accuracy, and listing efficiency. The following table compares key features for representative backends:
Backend         Hash                  Modtime Preservation   Case Insensitivity   Duplicate Handling   Fast List   MIME Types
Amazon S3       MD5                   R/W                    No                   No                   Yes         R/W
Google Drive    MD5, SHA-1, SHA-256   DR/W                   No                   Yes                  Yes         R/W
Dropbox         DBHASH                R                      Yes                  No                   No          -
FTP             -                     R/W                    No                   No                   No          -
In this table, "R/W" indicates read and write support, "DR/W" indicates read and write support on files and directories, "R" indicates read-only support, and "-" signifies lack of support. Configuration of these backends is handled via the rclone config command, as detailed in the dedicated section on remotes.
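To see how these differences surface in practice, rclone can report the hash types it understands and compute checksums where a backend provides them; the remote name "gdrive" below is illustrative:

    # list the hash algorithms rclone supports
    rclone hashsum
    # compute MD5 checksums for objects under a path, where the backend provides them
    rclone hashsum MD5 gdrive:reports
    # report quota and usage where the backend exposes it
    rclone about gdrive: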

Basic Commands and Syntax

Rclone employs a subcommand-based interface where the basic syntax follows the structure rclone subcommand [options] <parameters>, with subcommands specifying the operation, options as flags for customization, and parameters denoting source and destination paths. Remote paths are formatted as remote:path/to/dir, where remote refers to a configured backend such as Google Drive or Amazon S3, and the colon separates the remote name from the path. This syntax applies universally across supported backends, enabling seamless interaction with remote storage from local systems. The core command for one-way file copying is rclone copy source: dest:, which transfers files from the source to the destination without deleting extras in the destination. For mirroring directories, including deletions to match the source exactly, rclone sync source: dest: is used, making it suitable for backups where the destination should replicate the source state. To list files in a remote, rclone ls remote: outputs object names and sizes in bytes, while rclone lsd remote: displays only directories at a specified depth (default 1). Additionally, rclone about remote: provides storage quota information, including total, used, and free space where supported by the backend. Common flags enhance these commands for monitoring and safety. The --progress (or -P) flag displays real-time transfer statistics, such as speed and ETA, during operations. For testing without actual changes, --dry-run simulates the command's actions and logs what would occur. Performance can be tuned with --transfers N, setting the number of parallel file transfers (default 4), as in this example: rclone sync /local/path gdrive:/backup --transfers 4 --progress, which synchronizes a local directory to a remote using four parallel transfers while showing progress. For robustness, --ignore-errors allows operations to continue despite I/O or server errors, and --max-errors N (default 0, meaning unlimited) halts after N errors to prevent runaway failures. These flags can be combined, such as rclone copy /home/user/files s3:bucket --dry-run --ignore-errors, to preview a copy to an S3 bucket while ignoring potential errors.
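A common workflow, sketched below with placeholder paths and a hypothetical remote named "gdrive", is to preview a sync before committing to it:

    # preview which files would be copied, updated, or deleted
    rclone sync /home/user/photos gdrive:photos --dry-run
    # run the real sync with progress output and four parallel transfers
    rclone sync /home/user/photos gdrive:photos --progress --transfers 4
    # inspect the result and the remote's quota
    rclone lsd gdrive:photos
    rclone about gdrive: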

Advanced Usage

Encryption with Crypt Remotes

Rclone's crypt remotes provide client-side encryption for data stored on untrusted backends, wrapping an existing remote to encrypt both file contents and filenames before upload. To set up a crypt remote, users run rclone config to create a new remote of type crypt, specifying the underlying remote (e.g., remote = myremote:path) and providing a password for key derivation. This configuration ensures that all operations on the crypt remote, such as copying or syncing files, are transparently encrypted locally before upload and decrypted upon download. File contents are encrypted in 64 KiB chunks using NaCl SecretBox, which employs the XSalsa20 cipher for confidentiality and Poly1305 for message authentication, with a 32-byte content key derived from the configured password and a fresh nonce generated for each file. Filenames and directory paths are handled via the --crypt-filename-encryption option: the "standard" mode encrypts names with AES-256 in EME mode with PKCS#7 padding and a modified base32 encoding (supporting names up to approximately 143 characters), "obfuscate" applies a lightweight rotation-based permutation, and "off" leaves names unencrypted. Directory names are encrypted when filename encryption is enabled, while the "off" mode adds a .bin suffix to files without encrypting names. The key material is generated using the scrypt key derivation function (with parameters N=16384, r=8, p=1) from a user-supplied password and an optional salt, producing 80 bytes of output to support both content and filename keys. To rotate the password, existing data must be re-encrypted using the new password and re-uploaded to the backend. Multiple crypt remotes with different passphrases can be used for new data on the same underlying remote, but existing files require the original passphrase for access. A cryptographic nonce is automatically applied during encryption to ensure that identical inputs produce varying ciphertexts, enhancing resistance to pattern analysis. Crypt remotes introduce limitations inherent to client-side processing: all encryption and decryption occur locally, which can increase latency for large transfers due to computational overhead on the client machine. Server-side features like search or listing are unavailable on encrypted data, as the backend sees only obfuscated or encrypted names, requiring full client-side traversal for operations. For efficiency between crypt remotes sharing the same underlying storage, the --crypt-server-side-across-configs option allows server-side copies without decrypting and re-encrypting data. An example configuration might define a crypt remote named secret over an S3 bucket: type = crypt, remote = s3:bucket/encrypted, with a passphrase set during rclone config. Users could then sync files via rclone sync /local/docs secret:backup, where contents and names are encrypted before storage on S3.
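A minimal sketch of such a setup follows, with the remote names, bucket, and paths purely illustrative; the obscured password line would normally be written by rclone config rather than by hand:

    # excerpt from rclone.conf: an S3 remote wrapped by a crypt remote
    [s3]
    type = s3
    provider = AWS

    [secret]
    type = crypt
    remote = s3:bucket/encrypted
    filename_encryption = standard
    password = *** obscured ***

    # sync files; contents and names are encrypted client-side before reaching S3
    rclone sync /local/docs secret:backup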

Mounting and Synchronization Options

Rclone provides advanced mounting capabilities that allow users to access remote storage as a local file system, facilitating seamless integration with applications that expect direct file system access. The rclone mount command mounts a remote path to a local directory, such as rclone mount remote:path /mountpoint, enabling operations like reading, writing, and streaming files as if they were on local disk. For full read/write functionality, the --vfs-cache-mode writes option is essential, as it enables caching of files to local disk, buffering writes and retrying failed uploads for up to one minute to ensure compatibility with diverse applications. Synchronization in Rclone extends beyond one-way copying with specialized variants for complex scenarios. The rclone bisync command performs bidirectional synchronization between two paths, propagating changes such as new files, updates, and deletions in both directions while comparing file listings from previous runs to minimize data transfer. It includes safety features like --max-delete to limit deletions to 50% of files by default and conflict-resolution options, such as renaming conflicting files with a .conflict suffix. For versioning during sync operations, the --backup-dir flag moves deleted or overwritten files to a specified directory, preserving historical versions without permanent loss. As of v1.71 (2025), bisync includes enhancements like separate backup directories (--backup-dir1 and --backup-dir2) and rename tracking (--track-renames) for improved efficiency. Several flags enhance the precision and efficiency of synchronization processes. The --fast-list option leverages recursive listing where supported by the backend, reducing API calls at the cost of higher memory usage for large directories. For integrity verification, --checksum compares files using both size and hash values, falling back to size-only comparison if checksums are unavailable, which is particularly useful for detecting subtle changes in remote storage. Conversely, --size-only skips files based solely on size differences, ignoring modification times or checksums for faster scans when content integrity is not a primary concern. Performance optimizations further refine mounting and synchronization. The --multi-thread-streams flag configures multiple streams (defaulting to four) for chunked downloads and uploads, accelerating transfers of large files across high-latency connections. Additionally, --order-by allows sorting transfers by criteria like size or modification time, such as --order-by size,descending, to prioritize efficient sequencing and reduce overall operation time. Mounting operates cross-platform with distinct implementations: FUSE on Unix-like systems (Linux, macOS, FreeBSD) for kernel-level integration, and WinFsp on Windows, which supports both fixed disk and network drive modes. However, mounts impose limitations, including the inability to support hard links, as the virtual file system does not replicate underlying structures like native inodes. Backend-specific support for these features varies, with some remotes requiring additional flags for optimal performance.
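The commands below sketch a writable mount and a bidirectional sync using the flags discussed above; the remote name "gdrive" and the local directories are placeholders:

    # mount the remote with write caching (FUSE on Linux/macOS, WinFsp on Windows)
    rclone mount gdrive:projects /mnt/projects --vfs-cache-mode writes
    # first bisync run requires --resync to establish the baseline listings
    rclone bisync /home/user/projects gdrive:projects --resync
    # subsequent runs propagate changes both ways, capped at 50% deletions
    rclone bisync /home/user/projects gdrive:projects --max-delete 50 --verbose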

Applications in Research and Computing

Use in High-Performance Computing Environments

In high-performance computing (HPC) environments, Rclone is commonly employed for syncing large datasets to and from cloud storage services such as Google Drive and Amazon S3, facilitating archiving and data migration on clusters at institutions including Yale University, the University of Southern California (USC), and the University of Florida. These operations enable researchers to transfer research outputs from shared cluster storage to remote backends without relying on graphical user interfaces, which is particularly advantageous in batch processing workflows. For instance, at Yale's Center for Research Computing, Rclone supports synchronization between cluster storage and services like Box or AWS S3, streamlining data egress for long-term preservation. Optimizations in HPC deployments leverage Rclone's parallel transfer capabilities, such as the --transfers flag set to 32 or higher to maximize I/O throughput across multi-node setups, and --s3-upload-concurrency for efficient handling of multipart uploads. Integration with job schedulers like Slurm allows Rclone commands to run as batch jobs, enabling automated, resource-aware transfers on compute nodes; for example, users submit Slurm scripts that execute rclone sync commands to move terabyte-scale datasets during off-peak hours. At the University of Utah's Center for High Performance Computing, these flags are recommended in documentation for high-bandwidth cloud syncing from Lustre-based filesystems. Case studies highlight practical implementations: Iowa State University's HPC guide details configuring Rclone for Google Drive synchronization, allowing users to mirror project directories from cluster scratch space to cloud archives via simple rclone sync commands. Similarly, New York University's Greene HPC cluster provides instructions for accessing cloud storage services without web browsers, using Rclone to bypass institutional restrictions and enable scriptable transfers directly from login nodes. These approaches support shared storage environments like Lustre, where Rclone operates on parallel filesystems to avoid bottlenecks in data pipelines. Rclone's benefits in HPC include its capacity for petabyte-scale migrations, as demonstrated by the University of California, Irvine, which uses a custom Rclone-based system to back up extensive research datasets to AWS S3, achieving reliable large-volume transfers across distributed storage. By circumventing GUI-imposed limits on file sizes or concurrent operations, it enhances efficiency in resource-constrained settings. As of 2025, Rclone is widely adopted in research computing, with pre-installed modules available on numerous clusters, such as via module load rclone at NYU and the University of Maryland, facilitating immediate access for users without manual installation.
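The script below is a hedged sketch of the Slurm pattern described above; the job settings, module name, scratch path, and the remote "box" are site-specific placeholders:

    #!/bin/bash
    #SBATCH --job-name=rclone-sync
    #SBATCH --time=04:00:00
    #SBATCH --cpus-per-task=8

    # load the site-provided rclone module, if one exists
    module load rclone

    # push results from cluster scratch to cloud storage with parallel transfers
    rclone sync /scratch/$USER/results box:project/results --transfers 32 --checkers 16 --progress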

Academic Evaluations and Studies

A 2022 forensic analysis published in Forensic Science International: Digital Investigation examined Rclone's artifacts in digital investigations, focusing on recovery of configuration files, password-cracking techniques for encrypted configurations, and reconstruction of file transfers to cloud storage. The study highlighted that Rclone's config files, often stored in plaintext or weakly obfuscated formats, can reveal remote credentials and transfer histories, with password recovery feasible through cracking tools, enabling investigators to reconstruct exfiltration paths. It also assessed prospects for broader cloud forensic applications, noting Rclone's role in facilitating investigations of synchronized data across providers like Google Drive and Amazon S3. Performance evaluations of Rclone have emphasized its efficiency in bandwidth-constrained environments. A 2025 study presented at the International Conference on Data Science and Its Applications (ICoDSA) explored Rclone's use with free-tier cloud services for optimizing cross-cloud data transfers, demonstrating reduced costs and improved throughput by leveraging multiple providers' limits simultaneously. In comparisons with other tools, Rclone exhibited superior transfer rates in high-latency scenarios, such as large dataset movements, outperforming alternatives like the AWS CLI and Cyberduck in throughput for both upload and download operations. These findings underscore Rclone's parallel-transfer capabilities, particularly for small-file synchronization, where it achieves higher speeds than single-threaded tools in networked environments. Security audits have positioned Rclone within established threat frameworks while acknowledging its open-source benefits. The MITRE ATT&CK knowledge base designates Rclone as software S1040, classifying it as a tool for exfiltration over web services due to its ability to sync data to cloud storage without detection. However, its fully auditable codebase enhances transparency, allowing security researchers to verify implementations and mitigate risks like credential exposure in configs. In educational contexts, a tool named rClone Red, distinct from the file synchronization Rclone but sharing similar nomenclature, has been evaluated for teaching bacterial gene expression. A 2018 paper in Synthetic Biology described rClone Red as a low-cost kit for undergraduate labs, enabling bacterial gene expression experiments through mutational analysis of ribosome binding sites with high success rates in student-led research. This application highlights the potential of similarly named open tools in education, though it operates in synthetic biology rather than computing. As of 2025, academic literature reveals gaps in Rclone's scholarly evaluation, with limited peer-reviewed benchmarks comparing it directly to proprietary tools like AWS Transfer Family or Google Cloud's Storage Transfer Service.

Misuse and Security Concerns

Association with Cybercrime

Rclone, a legitimate command-line tool for synchronizing files with cloud storage providers, has been frequently exploited by cybercriminals for data exfiltration in ransomware attacks. Threat actors leverage Rclone to upload stolen data to remote services such as MEGA, Google Drive, and others, enabling double extortion schemes where victims are threatened with both encryption and data leaks, even if decryption is provided. This misuse bypasses traditional encryption barriers by staging exfiltrated files on anonymous cloud accounts before publication on leak sites. In the MITRE ATT&CK framework, Rclone is classified as software S1040, a command-line tool often integrated into living-off-the-land (LotL) techniques that utilize legitimate utilities to avoid detection during data theft. Adversaries combine Rclone with protocols like FTP, HTTP, or SFTP to transfer files to cloud platforms including MEGA, Google Drive, and Dropbox, minimizing the need for custom malware. This approach aligns with broader tactics under Exfiltration Over Web Service: Exfiltration to Cloud Storage (T1567.002), allowing seamless integration into multi-stage attacks. Notable examples include its use by Russian military cyber actors affiliated with GRU Unit 29155, as detailed in a 2024 CISA advisory, where Rclone facilitated the theft of sensitive data from targets in the United States and globally. In another case, the BlackLock ransomware group, active in 2025, employed Rclone to transfer exfiltrated victim data between MEGA accounts as part of their double extortion operations, affecting multiple sectors. These incidents highlight Rclone's role in state-sponsored and financially motivated operations. Detecting Rclone's malicious use poses significant challenges due to its status as a benign, open-source utility that typically evades antivirus detection. Indicators include anomalous processes such as rclone.exe executing unusual file transfers, remnants of configuration files (e.g., rclone.conf) in user directories, or artifacts like connections to cloud storage endpoints during off-hours. Forensic examination reveals system-level traces in event logs, memory dumps, and network traffic patterns, but the tool's ubiquity in legitimate environments complicates attribution without contextual behavioral analysis. The tool's developer has made no modifications to Rclone in direct response to its criminal exploitation, maintaining its open-source nature for general use. However, the Rclone community has issued warnings on official forums regarding its abuse in ransomware campaigns, advising users to monitor for unauthorized configurations and emphasizing secure credential handling to mitigate risks. Specific campaigns involving Rclone misuse are further detailed in dedicated sections on notable incidents.

The Rclone Wars Incidents

The term "Rclone Wars" was coined in a blog post by the managed detection and response firm Red Canary, referring to a series of non-encrypting campaigns where threat actors employed Rclone alongside the service to steal sensitive data from victims. These incidents, observed during incident response engagements, highlighted a shift toward "pure" tactics, where attackers avoided traditional encryption to minimize detection risks while still leveraging stolen data for financial gain. In these campaigns, adversaries typically gained initial access through vulnerabilities or stolen credentials, then used Rclone—often renamed to evade basic controls—to exfiltrate large volumes of data to anonymous accounts, which offer free storage up to 20 GB without requiring user authentication. Following exfiltration, attackers would contact victims via email or dedicated leak sites, demanding payments to prevent the public release of the compromised information, thereby applying pressure without deploying that could trigger widespread alerts. This approach was particularly effective against enterprises with robust backup strategies, as it bypassed recovery mechanisms focused on decryption. The campaigns impacted various enterprises, including those in critical sectors, by exposing proprietary and operational details, which could lead to or regulatory scrutiny. Defenders responded by implementing for anomalous network activity, such as high outbound traffic to cloud provider IP ranges associated with or unusual Rclone command-line arguments like --config for remote . Similar tactics have persisted into 2025 in operations, with Rclone continuing to be a favored tool for in double-extortion schemes. Additionally, the U.S. (CISA) issued alerts on Russian state-sponsored actors, including , employing Rclone for espionage-related exfiltration from targets since at least 2020. To mitigate such threats, security teams are advised to deploy detection rules targeting Rclone artifacts, such as rclone.exe processes or associated files in directories, alongside behavioral for unexpected transfers to remote services.

Comparisons

Rclone versus

Rclone and rsync are both command-line tools for file synchronization and transfer, but they differ fundamentally in their design and primary applications. rsync is optimized for local and network transfers, particularly emphasizing delta-transfer algorithms that compute and transmit only the differences between files, making it efficient for incremental updates over local networks or SSH connections. In contrast, Rclone is tailored for cloud storage interactions, supporting over 70 providers such as Google Drive and Amazon S3, where it performs full-file transfers without native delta support; however, it leverages parallelism for handling multiple small files simultaneously. In terms of performance, Rclone demonstrates significant advantages in cloud synchronization scenarios due to its multi-threading capabilities. Benchmarks from 2025 show Rclone achieving up to 4x faster transfer speeds than rsync for large datasets over high-bandwidth networks, such as saturating a 10 Gbps link at approximately 1 GB/s while syncing 60 GiB of files, compared to rsync's single-threaded limit of around 350 MB/s. rsync, however, outperforms Rclone for large files with minor modifications, where its delta algorithm reduces bandwidth usage by transmitting only changed portions; for instance, benchmarks of full 50 GB server-to-server transfers over SSH show rsync completing the task in 6 minutes 50 seconds, compared to Rclone's 7 minutes 5 seconds. Feature-wise, rsync excels in incremental backup operations with options like --partial, which allows interrupted transfers to resume by retaining partial files, and --inplace, which updates files directly without temporary copies. Rclone, while lacking these delta-specific features, provides robust cloud authentication mechanisms, such as OAuth tokens and API keys, and the ability to mount remotes as a local filesystem for seamless integration. Additionally, Rclone's rclone serve command enables hybrid setups by exposing cloud remotes over protocols such as SFTP, so traditional tooling can reach cloud storage indirectly. Common use cases highlight these distinctions: rsync is preferred for server maintenance tasks, such as synchronizing /home directories between machines over a local network, due to its efficiency in bandwidth-constrained environments. Rclone is ideal for cloud migrations, like transferring datasets to S3 buckets, where its parallel processing and provider-specific optimizations shine. Limitations further underscore the trade-offs. Rclone's reliance on full-file hashing and verification leads to higher CPU utilization, up to several times that of rsync in parallel transfers, making it less suitable for resource-constrained systems. rsync, conversely, struggles with cloud latency, as it lacks native support for cloud APIs and performs poorly in high-latency environments without additional tunneling.
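To make the workflow difference concrete, the pair of commands below shows roughly equivalent one-way syncs; the hostname, paths, and remote name "s3" are placeholders:

    # rsync: delta-based sync to another server over SSH
    rsync -avz --partial /data/ user@backup-host:/data/
    # rclone: parallel full-file sync from local disk to an S3 bucket
    rclone sync /data s3:my-bucket/data --transfers 16 --progress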

Alternatives to Rclone

Several tools serve as alternatives to Rclone for cloud file management, each tailored to specific use cases such as backups, single-provider interactions, or graphical interfaces, though they often lack Rclone's broad multi-cloud compatibility and command-line versatility. Backup-oriented solutions like Duplicati and Restic emphasize deduplication and built-in encryption for efficient, secure archiving to various backends, making them suitable for incremental backups rather than real-time or ad-hoc synchronization tasks where flexibility across providers is needed. Vendor-specific command-line interfaces, such as the AWS CLI for Amazon S3 operations or gsutil for Google Cloud Storage, provide robust tools for managing files within their respective ecosystems but are confined to a single provider, without the cross-platform syncing capabilities of broader tools. Graphical user interface (GUI) options like GoodSync and MultCloud cater to users preferring visual workflows for file replication and cloud-to-cloud transfers, supporting multiple services with automated scheduling; however, they tend to be slower for large-scale operations and less amenable to scripting in automated environments. For protocol-focused transfers, lftp excels at mirroring and synchronizing over FTP and SFTP with features like segmented downloads, but it falls short on native integrations with modern cloud APIs. As of 2025, emerging self-hosted systems like SeaweedFS offer distributed object storage with S3-compatible APIs for scalable file handling in private clouds, yet Rclone continues to lead among open-source CLI tools for its extensive backend support and ease of integration in diverse workflows.
