
Solid compression

Solid compression is a data compression technique employed in file archiving software, wherein multiple files are concatenated into a single continuous stream and compressed as one cohesive unit, rather than being processed individually, to enhance overall efficiency by exploiting redundancies across files. This method, also known as solid archiving or solid mode, is most closely associated with formats like RAR and 7z, where the input is treated as a unified block to provide the compressor with broader context for identifying patterns and similarities, an approach particularly effective for groups of similar or small files. In practice, tools such as 7-Zip, Bandizip, and WinRAR implement solid compression by merging files before applying algorithms like LZMA or PPMd, resulting in smaller archive sizes compared to non-solid approaches like standard ZIP. Key advantages include improved compression ratios—often 5-15% better for redundant data—due to reduced per-file overhead and enhanced redundancy detection, as well as lower storage needs for collections of homogeneous files, such as logs or documents. However, it introduces trade-offs: extraction of individual files requires decompressing all preceding data in the solid block, slowing random access and updates; moreover, corruption in one part of the stream can propagate errors to subsequent files, reducing resilience. Solid compression is natively supported in formats including 7z and RAR, and arises implicitly in tar-based variants like TAR.GZ or TAR.BZ2, with modern archivers allowing configurable block sizes to balance performance and ratio, making it a staple for backups and distribution scenarios that prioritize size over speed.

Core Concepts

Definition

Solid compression is a technique used in file archiving wherein multiple input files are first concatenated into a single continuous stream, or block, which is then treated as a unified unit during the application of a compression algorithm. This approach contrasts with traditional per-file compression methods by allowing the algorithm to analyze and encode the entire stream holistically, rather than processing each file in isolation. A key distinction of solid compression lies in its ability to exploit redundancies that span across file boundaries, which individual file compression cannot capture due to its segmented nature. In non-solid modes, each file is compressed separately, often resulting in repeated encoding of similar patterns if they appear in multiple files; solid compression mitigates this by enabling cross-file dictionary building and pattern matching within the compressor. For example, when archiving a collection of similar text files such as system logs, solid compression can yield a higher overall compression ratio by identifying and efficiently encoding recurring phrases or structures that extend beyond single-file limits. The output of this process is referred to as a solid archive, encapsulating the concatenated and compressed data in a format that maintains the benefits of inter-file optimization.
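The effect of sharing context across file boundaries can be illustrated with a minimal sketch using Python's standard lzma module; the synthetic log lines below are illustrative stand-ins for similar files, not data from any cited benchmark.

```python
import lzma

# Three synthetic "files" with heavily overlapping content, standing in
# for similar log or source files (illustrative data only).
files = [
    b"2024-01-01 INFO service started\n" * 200,
    b"2024-01-01 INFO service started\n" * 180 + b"2024-01-01 WARN retrying\n" * 20,
    b"2024-01-01 INFO service started\n" * 150 + b"2024-01-01 ERROR timeout\n" * 50,
]

# Non-solid: compress each file in isolation; redundancy shared between
# files is re-encoded in every member, and each stream carries its own overhead.
per_file_total = sum(len(lzma.compress(f)) for f in files)

# Solid: concatenate first, then compress once; the encoder's context
# spans all files, so cross-file repetitions are encoded only once.
solid_total = len(lzma.compress(b"".join(files)))

print(f"per-file total: {per_file_total} bytes, solid: {solid_total} bytes")
```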

Relation to Archiving and Compression

Archiving refers to the process of bundling multiple files and directories into a single container file while preserving metadata such as filenames, permissions, timestamps, and directory structures, without altering the original content size. This consolidation facilitates easier storage, transfer, and management of related data, as seen in formats like tar, which create an uncompressed archive by concatenating files in a structured manner. In contrast, standalone compression applies algorithms, such as LZ77 for dictionary-based encoding or Huffman coding for entropy reduction, to a single file or stream to eliminate redundancy and reduce size, operating independently of file boundaries or metadata preservation. Tools like gzip exemplify this by compressing individual files or streams without inherent archiving capabilities, focusing solely on data efficiency rather than organization. Solid compression integrates these concepts by first performing an implicit form of archiving through concatenation of multiple files into a unified data block, which is then treated as a single continuous stream for compression, enabling the algorithm to detect and exploit redundancies across file boundaries, such as repeated strings in adjacent files. This approach, native to formats like 7z and RAR, contrasts with non-solid methods—where files are compressed individually, either before bundling or within the container as in ZIP—which limit optimization to intra-file patterns and cannot leverage inter-file similarities. The separation of archiving and compression in early tools evolved for modularity, allowing specialized handling of file organization versus size reduction, but solid compression combines them to enhance overall efficiency, particularly for collections of similar or small files.
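A rough sketch of the two orderings, using Python's standard tarfile and zipfile modules: the .tar.gz path compresses one bundled stream (solid behavior), while each ZIP member is deflated independently. The in-memory file names and contents are assumptions made for illustration.

```python
import io
import tarfile
import zipfile

# Hypothetical in-memory files; names and contents are illustrative.
members = {f"note_{i}.txt": b"quarterly report boilerplate text\n" * 100 for i in range(5)}

# Archive-then-compress (solid): tar bundles the raw files, gzip then
# compresses the whole bundle as one stream (the classic .tar.gz pipeline).
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w:gz") as tf:
    for name, data in members.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))

# Compress-then-archive (non-solid): each ZIP member is deflated on its
# own, so patterns shared between files are never seen together.
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for name, data in members.items():
        zf.writestr(name, data)

print(f"solid .tar.gz: {len(tar_buf.getvalue())} bytes")
print(f"non-solid .zip: {len(zip_buf.getvalue())} bytes")
```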

Operational Mechanism

File Preparation and Concatenation

In solid compression, the file preparation phase begins with organizing the input files to optimize the subsequent concatenation and compression process. When enabled, files may be sorted by type or extension, grouping similar content together to facilitate the exploitation of redundancies across files during analysis. This sorting option, available in tools like 7-Zip via the -mqs=on switch, arranges files by their file extension before concatenation, though it is disabled by default in modern versions, which use a different default ordering. Following preparation, the uncompressed contents of the files are concatenated into a single linear byte stream. This involves appending the raw data of each file sequentially without alteration, creating a continuous block that treats the entire set as one entity for compression. To maintain file integrity and enable extraction, metadata such as file names, original sizes, and timestamps is captured separately in archive headers, often including offsets that indicate the start and end positions of each file's data within the decompressed stream; this approach preserves essential archiving metadata without embedding it directly into the concatenated stream. Diverse file types, including binary and text files, are handled uniformly in the concatenated block, as the process disregards individual file structures and treats all data as a homogeneous byte sequence. This uniformity allows the compressor to detect patterns spanning multiple files, such as structures repeated across files of similar type. In practice, the resulting block's size is limited to avoid excessive memory usage during compression and extraction; for instance, 7-Zip imposes default solid block limits based on compression level, such as 2 GB for the normal level, configurable via the -ms switch to cap the block at a specified byte size. As an illustrative example, consider three small image files, each approximately 300 KB, totaling 900 KB uncompressed. During preparation, if sorting by extension is enabled, these files would be grouped together; their contents would then be appended sequentially into a 900 KB stream, with metadata headers recording their names, sizes, and offsets for later reconstruction.
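A minimal sketch of this preparation step in Python, assuming local file paths are supplied by the caller; the sort key loosely mirrors extension-based grouping such as 7-Zip's qs option, and the metadata layout is purely illustrative rather than any real archive header format.

```python
import os

def build_solid_stream(paths):
    """Sketch of the preparation step: sort by extension, concatenate raw
    bytes, and record per-file offsets as separate metadata (not embedded
    in the stream itself)."""
    # Group similar content by sorting on extension, then name, roughly
    # mirroring extension-based sorting options in archivers.
    ordered = sorted(paths, key=lambda p: (os.path.splitext(p)[1], p))
    stream = bytearray()
    metadata = []
    for path in ordered:
        with open(path, "rb") as fh:
            data = fh.read()
        metadata.append({
            "name": path,
            "offset": len(stream),   # start position in the unpacked stream
            "size": len(data),
        })
        stream.extend(data)          # raw bytes appended, no per-file framing
    return bytes(stream), metadata   # stream goes to the compressor; metadata to headers
```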

Compression Application

In solid compression, algorithms are selected based on their ability to leverage large context windows that extend across multiple files within the concatenated block. Dictionary-based methods, such as LZMA or LZMA2 in the 7z format, are commonly used due to their support for dictionary sizes up to 4 GB, enabling the identification of repeated patterns over extended data ranges. The 7z format also supports PPMd, a prediction by partial matching algorithm well suited to solid archives, which builds probabilistic models from the continuous stream to enhance encoding efficiency. The process involves scanning the entire block as a unified stream, where the algorithm applies techniques like sliding windows or adaptive context modeling to detect and encode redundancies that span original file boundaries. This approach allows for more effective elimination of duplicate sequences, as the compressor maintains a broader historical context than would be possible with isolated file compression. For instance, in LZMA, the LZ77-based matching searches for phrases within the dictionary that may originate from preceding files, followed by range encoding to further reduce the output size. In PPMd, context modeling predicts byte probabilities based on the evolving stream, adapting to patterns across the block for superior handling of structured or repetitive content. The resulting solid archive is output as a single compressed file, preserving the bundled structure through embedded metadata such as file headers and uncompressed sizes. In 7z, offsets stored in header sections like PackInfo and FolderInfo indicate positions in the unpacked stream, where files within a solid block are sequential; extraction of an individual file thus requires decompressing the entire block up to that file's position. A critical parameter in this process is the solid block size, which determines the maximum extent of the continuous stream compressed as one unit; in 7z, the default is limited by min(dictionary size × 128, 4 GiB), but it is configurable via options like -ms=64m to create 64 MB blocks, balancing improved ratios against practical considerations such as memory usage and partial extraction. As an example, applying LZMA to a concatenated stream of similar files, such as text documents or source code, can yield 20-30% better ratios compared to per-file compression, particularly for redundant datasets where cross-file patterns are prevalent.
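The sketch below, using Python's standard lzma module, shows the general shape of the idea: the concatenated stream is cut into independent blocks and each is compressed with an LZMA2 filter whose dictionary covers the block. The 64 MB figures and the helper name are illustrative assumptions, not 7-Zip's actual implementation.

```python
import lzma

def compress_solid_blocks(stream: bytes, block_size: int = 64 * 1024 * 1024):
    """Sketch: split the concatenated stream into independent solid blocks
    and compress each with an LZMA2 dictionary sized to cover the block,
    loosely analogous to capping the solid block size (e.g. -ms=64m)."""
    filters = [{
        "id": lzma.FILTER_LZMA2,
        "preset": 6,                                   # baseline settings
        "dict_size": min(block_size, 64 * 1024 * 1024),  # dictionary spans the block
    }]
    blocks = []
    for start in range(0, len(stream), block_size):
        chunk = stream[start:start + block_size]
        # Each block is a self-contained compressed stream: extracting a file
        # only requires decompressing its own block up to the file's offset.
        blocks.append(lzma.compress(chunk, format=lzma.FORMAT_XZ, filters=filters))
    return blocks
```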

Advantages

Enhanced Compression Ratios

Solid compression achieves enhanced compression ratios by treating multiple files as a single concatenated stream, enabling the compression algorithm to identify and exploit repeated sequences and redundancies that span across individual file boundaries. This approach provides a larger contextual window for the compressor, allowing it to model the data's statistics more accurately and reduce redundancy more effectively than compressing files in isolation, where such cross-file patterns remain undetected. Quantitative improvements are particularly notable in scenarios involving numerous small files with shared content. For instance, in a dataset comprising 490 files totaling 12.5 MB, primarily text and structured data, 7-Zip's PPMd in solid mode achieved a compressed size of 26.29% of the original, leveraging inter-file similarities for substantial size reduction. In contrast, for the same dataset using LZMA2, solid mode yielded a compressed size of 30.29%, a marginal improvement over non-solid mode at 30.44%, highlighting that gains vary by algorithm and data characteristics. Benchmarks also demonstrate broader advantages; 7-Zip in solid 7z mode typically produces archives 30-70% smaller than equivalent non-solid ZIP archives, as seen in tests compressing 551 MB of mixed text-like content to 38 MB with 7z versus 149 MB with non-solid ZIP. Several factors influence these ratio enhancements. File similarity plays a key role, with greater benefits observed for datasets like log files or source code repositories where common phrases or structures recur across files. The number of files also matters, as solid compression excels with many small files by minimizing per-file overhead and maximizing context sharing, whereas unique large files yield little additional gain. Algorithm selection further impacts outcomes, with methods like PPMd or dictionary-based LZMA outperforming others in exploiting repetitive patterns in solid blocks. Improvements plateau for already compressed data, such as images or audio, or highly random content like encrypted files, where cross-file redundancies are minimal and additional context offers little reduction. Similarly, for datasets lacking similarity, such as collections of diverse unique large files, solid mode provides negligible gains over non-solid approaches.
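The plateau for incompressible content can be demonstrated with a short sketch: random bytes, standing in for encrypted or already-compressed files, gain essentially nothing from solid grouping. This uses only Python's standard library, and the sizes chosen are arbitrary.

```python
import lzma
import os

# Sketch: solid grouping yields little when the inputs share no structure.
# Random bytes stand in for encrypted or already-compressed files.
random_files = [os.urandom(256 * 1024) for _ in range(4)]

per_file_total = sum(len(lzma.compress(f)) for f in random_files)
solid_total = len(lzma.compress(b"".join(random_files)))

# Both totals stay close to the 1 MiB input: there is no cross-file
# redundancy for the larger context to exploit.
print(f"per-file: {per_file_total}, solid: {solid_total}, "
      f"input: {sum(map(len, random_files))}")
```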

Optimization for Data Patterns

Solid compression excels in scenarios involving highly redundant or patterned data, where redundancies span across multiple files, allowing the algorithm to build a more comprehensive model for encoding. For instance, source code repositories benefit significantly from this approach, as similar code structures, variable names, and syntax patterns enable improvements of around 30-40% in compression ratio (e.g., reducing compressed size from 29.9% to 18.2% of original for concatenated files versus individual compression), outperforming individual file compression by leveraging context-based grouping. Similarly, log files, which often contain repetitive entries, timestamps, and structured formats, achieve enhanced ratios through solid mode, as the extended context identifies common motifs that would be isolated in non-solid archiving. Document collections with shared headers, footers, or boilerplate, such as batches of reports or configurations, also see improved efficiency, minimizing the storage of duplicated elements. A key strength lies in optimizing for small files, particularly those under 1 MB, where traditional per-file compression incurs high overhead from metadata like headers and checksums. By treating numerous tiny files—such as 1,000 configuration snippets—as a single block, solid compression eliminates this redundancy and lets block-oriented techniques like the Burrows-Wheeler Transform operate over the aggregate, yielding superior ratios for such collections. This is especially valuable in environments with fragmented data, reducing the effective storage size without sacrificing lossless integrity. Cross-file synergies further amplify benefits in formats whose compressors share a dictionary across the block, such as 7z, where solid compression exploits similarities in file sets. For example, collections of similar images or video frames benefit from pattern reuse across files, leading to tighter packing than independent compression. Conversely, solid compression offers limited value for diverse or incompressible data, such as encrypted files or pre-compressed videos, where the pseudo-random nature or inherent efficiency results in gains below 5%. Encrypted content resists further reduction due to its lack of patterns, while video containers like AVI or MP4, already optimized with lossy algorithms, show negligible improvement even in solid mode. In practice, software distribution packs exemplify an ideal use case, as installers often include repeated binaries, libraries, and assets that solid compression consolidates effectively, achieving better ratios for bandwidth-limited downloads and distribution. This approach is particularly impactful for large-scale deployments, where the unified block enhances overall archive efficiency without altering the underlying data.
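A small sketch of the per-member overhead effect, again with Python's standard library: a thousand tiny, similar snippets packed as individual ZIP members versus one solid LZMA stream. The file names, contents, and counts are invented for illustration.

```python
import io
import lzma
import zipfile

# Sketch: per-member overhead dominates when archiving many tiny files.
# 1,000 short, similar "configuration snippets" serve as illustrative data.
snippets = [f"option_{i} = enabled\ntimeout = 30\n".encode() for i in range(1000)]

# Non-solid ZIP: every member carries its own local header and directory entry.
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    for i, snippet in enumerate(snippets):
        zf.writestr(f"conf_{i}.ini", snippet)

# Solid: one stream, one set of headers, shared context across all snippets.
solid_size = len(lzma.compress(b"".join(snippets)))

print(f"zip (per-file): {len(zip_buf.getvalue())} bytes, solid stream: {solid_size} bytes")
```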

Disadvantages

Performance Overhead

Solid compression introduces significant performance overhead primarily during extraction and random access due to its concatenated block structure. To extract a single file from within a solid block, the decompressor must process and decompress all preceding data in the block sequentially, leading to a linear time complexity of O(n) for the nth file. This contrasts with non-solid archives, where individual files can be decompressed independently without affecting others. The initial compression phase also incurs additional costs from file concatenation and holistic block analysis, increasing compression time by roughly 20-50% for large archives compared to non-solid modes. For instance, in a benchmark compressing a dataset using 7-Zip, solid mode required approximately 46% more time for packing (18 minutes 18 seconds versus 12 minutes 28 seconds for non-solid) while achieving better ratios. Decompression of the full archive shows minimal difference in some cases, but partial extractions amplify the penalty. Memory demands escalate with larger solid blocks, as the compressor and decompressor must hold the block's context, often requiring substantial RAM. In 7-Zip, LZMA settings with dictionary sizes of 512 MB or more can consume well over 256 MB during processing of multi-gigabyte solid archives, particularly at high compression levels. Random access is further limited, as there is no support for direct seeking within the compressed stream; tools typically extract the relevant portion by unpacking the full block to a temporary location before isolating the target file. To mitigate these overheads, users can configure smaller solid block sizes—such as limiting blocks to 1 GB rather than treating the entire archive as one unit—which reduces extraction times and memory peaks while only marginally decreasing compression efficiency. This approach balances accessibility with the benefits of solid mode, as supported in formats like 7z and RAR.
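The random-access cost can be seen in a minimal sketch using Python's lzma module: to return one file's bytes, the decompressor still has to reproduce every byte of the solid block that precedes it. The function name and the offset/size bookkeeping are illustrative assumptions, not any archiver's actual API.

```python
import lzma

def extract_from_solid_block(compressed: bytes, offset: int, size: int) -> bytes:
    """Sketch: pulling one file out of a solid block means decompressing
    everything that precedes it, which is what makes random access O(n)."""
    decomp = lzma.LZMADecompressor()
    target = offset + size
    # First call may return fewer than `target` bytes; keep draining the
    # decompressor's internal buffer until enough of the stream is available.
    unpacked = decomp.decompress(compressed, max_length=target)
    while len(unpacked) < target and not decomp.eof:
        unpacked += decomp.decompress(b"", max_length=target - len(unpacked))
    return unpacked[offset:target]
```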

Error Propagation Risks

In solid compression, where multiple files are concatenated into a continuous stream before compression, a single bit error or damaged region in one segment can propagate during extraction, rendering multiple subsequent files unrecoverable due to the sequential nature of the decoding process. This occurs because the compressed stream relies on the integrity of preceding data to correctly decode later portions, amplifying the impact of even minor damage. The scope of such errors depends on the archive configuration: in a fully solid archive, corruption early in the stream may invalidate the entire archive, making all files inaccessible without external intervention. Conversely, archives divided into smaller solid blocks confine the damage to the affected segment, allowing recovery of unaffected blocks, though locating the damage adds complexity to recovery efforts. Basic solid mode lacks inherent redundancy mechanisms, such as error-correcting codes, leaving archives vulnerable to irreversible loss from transmission errors, storage degradation, or hardware failures. Tools such as WinRAR and 7-Zip provide repair or recovery functions that attempt to skip corrupted sections and parse the remaining data, but success rates vary widely: partial recovery may be possible depending on the corruption's location and extent, with no guarantee for solid streams, where metadata like file names may also be lost. Unlike non-solid archives, where each file is compressed independently and errors typically isolate damage to a single item, solid compression heightens risks for critical data by linking file integrity across the entire block. To mitigate these issues in high-stakes scenarios, solid archives should be paired with external backups or parity-based error-correcting systems, such as PAR2 files, which enable reconstruction of damaged portions without relying on the archiver's built-in tools.
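A short sketch of the failure mode, using Python's lzma module: flipping a single byte inside a compressed solid stream typically makes decoding fail partway through, so every file whose data lies at or beyond the damage becomes unreadable. The sample data and the corrupted byte position are arbitrary, and exactly where decoding stops depends on where the damage lands.

```python
import lzma

# Ten synthetic "files" concatenated into one solid stream (illustrative data).
stream = b"".join(bytes(f"file-{i} payload\n" * 50, "ascii") for i in range(10))
packed = bytearray(lzma.compress(stream))

packed[40] ^= 0xFF  # flip the bits of one byte inside the compressed data

try:
    lzma.decompress(bytes(packed))
except lzma.LZMAError as exc:
    # Decoding aborts at or after the damaged point; files stored beyond it
    # cannot be reached because they depend on the corrupted earlier context.
    print(f"decompression failed: {exc}")
else:
    print("stream decoded despite corruption (integrity check did not trigger)")
```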

Historical Development

Origins in Unix Tools

The origins of solid compression trace back to the Unix ecosystem, where the practice emerged organically through the combination of archiving and compression tools. The tar utility, introduced in Seventh Edition Unix in 1979, was designed to concatenate multiple files into a single contiguous stream for efficient storage on magnetic tapes, effectively creating a unified block of data without inherent compression. This modular approach aligned with the Unix philosophy of building small, single-purpose tools that could be chained together via pipelines, allowing tar to feed its output directly into a compressor for holistic processing of the entire archive as one data block. Early compression tools like compress, which implemented the Lempel-Ziv-Welch (LZW) algorithm and appeared in Unix systems around 1985, were commonly piped after tar to reduce the size of these concatenated streams, resulting in archives such as .tar.Z that treated the entire content as a single compressible unit—unwittingly establishing solid compression behavior. Later, gzip, released in October 1992 as a patent-free alternative to compress, further popularized this pipeline with commands like tar cf - directory | gzip > archive.tar.gz, producing solid .tar.gz files where compression exploited redundancies across all files in the archive. This incidental solid archiving arose from the stream-oriented nature of Unix tools, where tar's block formation enabled compressors to analyze and encode the full dataset contextually, often yielding better ratios for related files such as source code. By the early 1990s, solid compression via pipelines gained widespread adoption for distributing software over newsgroups and FTP servers, particularly for Unix and emerging Linux source distributions, as it leveraged inter-file similarities in tarballs to minimize transfer sizes on slow connections. However, these early implementations lacked explicit solid options, relying instead on the pipeline's inherent design, which imposed limitations such as poor random access—requiring full decompression to extract individual files—and vulnerability to corruption propagating across the entire archive. A key milestone in formalizing this handling came with standardization efforts, including adoption of the POSIX ustar format around 1989–1992, which improved archive portability and added features such as longer path name support, making solid pipelines more robust for cross-system use without altering the core streaming behavior.

Adoption in Proprietary Formats

Solid compression gained prominence in proprietary formats starting in the early 1990s, as developers sought to overcome the limitations of earlier archive standards like ZIP, which treated files independently and often yielded suboptimal compression ratios for groups of similar files. The RAR format, developed by Russian software engineer Eugene Roshal in 1993, marked one of the earliest adoptions of solid compression in a proprietary archiver. Designed initially for DOS and later Windows environments, RAR offered a solid mode from its inception, leveraging a modified Lempel-Ziv (LZ77) algorithm to concatenate files into a single stream for enhanced redundancy exploitation and better overall compression. This approach was particularly effective for distributing software and media over the dial-up connections prevalent at the time. Building on this foundation, the 7z format was introduced in 2001 by Igor Pavlov's 7-Zip project (first released in 1999). Unlike RAR's proprietary nature, 7z was open in specification while prioritizing high compression; solid mode became the default for 7z archives starting with version 2.30 in 2001, employing the LZMA (Lempel-Ziv-Markov chain Algorithm) method to achieve superior ratios compared to ZIP or early RAR. To mitigate the performance drawbacks of solid archiving on single-threaded systems, version 4.45 in 2007 introduced configurable solid block sizing, allowing users to balance compression gains against extraction speed for larger archives. The rise of broadband internet and large-scale file distribution in the late 1990s and 2000s further drove adoption, as solid compression addressed ZIP's non-solid constraints by enabling tighter packing of redundant data in shared collections like software distributions or media sets, reducing bandwidth and storage needs without sacrificing accessibility. In the mid-2000s, tools like PeaZip (initial release in 2006) extended solid compression support to open-source alternatives, integrating it prominently for formats such as 7z to offer competitive ratios in cross-platform environments. A significant advancement came with 7-Zip's introduction of LZMA2 in 2009, which enhanced multi-core processing for solid archives, improving parallel compression speeds while maintaining or exceeding prior ratio benefits.

Implementations and Usage

Supported Software Tools

Several open-source software tools provide robust support for solid compression, particularly in formats like 7z and RAR. 7-Zip, a free file archiver, supports solid compression when creating 7z archives, with solid mode enabled by default and configurable via the -ms switch, enabling higher compression ratios for groups of similar files by treating them as a continuous data stream. PeaZip, a graphical user interface tool compatible with Linux and Windows, supports solid compression options for both 7z and RAR formats, allowing users to enable it during archive creation for improved efficiency on redundant data. Additionally, the official 7-Zip command-line interface, available natively for Unix-like systems in recent releases, offers solid compression through method switches that group files into solid blocks for enhanced ratios; p7zip serves as a legacy port with similar functionality. Proprietary tools also incorporate solid compression features tailored to specific platforms and formats. WinRAR, primarily designed for Windows, includes a "Create solid archive" option during RAR file creation, which compresses multiple files as a single stream to achieve better ratios, especially for similar content. Bandizip, another Windows-focused archiver, supports solid compression in 7z archives with configurable solid block sizes, permitting adjustments to balance compression gains against accessibility. Command-line utilities offer more limited or indirect support for solid compression. Info-ZIP, a portable tool for handling ZIP archives, compresses each member individually and so provides solid-like behavior only indirectly, for instance when files are first bundled into a tar container before compression, simulating a continuous stream. Similarly, GNU tar combined with compressors like gzip or bzip2 enables indirect solid compression by archiving files into a single stream prior to applying the compressor, effectively treating the entire archive as one compressible unit. A key compatibility consideration is that while most tools can read and extract solid archives seamlessly, creating them may require explicit settings; for instance, in 7-Zip, the -ms switch controls solid mode during archiving.

Configuration and Best Practices

To enable solid compression in 7-Zip, use the -ms command-line switch when creating an archive, such as 7z a -ms=on archive.7z files/ for full solid mode or -ms=off to disable it. In graphical interfaces like WinRAR, select the "Create solid archive" checkbox in the archiving dialog, or use the -s command-line switch for the RAR format. PeaZip supports solid mode via the "Advanced" tab in the archiving window, where it can be enabled for formats like 7Z and RAR, with options to group blocks by file extension for better pattern matching. Optimizing solid block sizes balances compression ratio, extraction speed, and resilience; larger sizes maximize ratios for static datasets but increase extraction time for individual files, while smaller blocks suit frequent access scenarios. Defaults vary by compression level: 16 MB for fastest, 128 MB for fast, 2 GB for normal, and 4 GB for maximum/ultra in the 7z format. Best practices include using solid compression for static archives of similar files, such as backups of text documents or images of the same type, where redundancies yield 10-30% better ratios compared to non-solid modes. For large archives exceeding 4 GB, combine solid mode with multi-volume splitting (e.g., WinRAR's "Split to volumes, size" option set to 1 GB) to manage file sizes while retaining solid block benefits. Always test ratios on sample data first, as gains depend on file homogeneity—sort files by extension before archiving to enhance cross-file patterns. To mitigate errors, pair solid archives with checksum verification, such as hash functions applied during creation and extraction, or enable recovery records in WinRAR (e.g., 5-10% overhead via the archiving dialog) to repair minor corruption without full recompression. Avoid solid mode for live data or random-access needs, such as databases, due to sequential decompression requirements that amplify failure risks; opt for non-solid archives instead. An example workflow in PeaZip involves selecting files of the same type (e.g., multiple JPEGs), enabling solid mode in the Advanced tab with blocks grouped by extension, setting a block size limit, and creating the archive to exploit shared patterns for optimal ratios.
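As a scripted illustration of these settings, the sketch below invokes the 7-Zip command line from Python. The -t7z, -m0=lzma2, and -ms switches follow 7-Zip's documented syntax, but the archive and directory names, the 1g block size, and the assumption that the binary is on the PATH as 7z (it may be 7zz or 7za on some systems) are illustrative choices.

```python
import subprocess

# Solid archive with a capped solid block size: good ratios for a static
# backup set while limiting how much must be decompressed per extraction.
subprocess.run(
    ["7z", "a", "-t7z", "-m0=lzma2", "-ms=1g", "backup.7z", "documents/"],
    check=True,
)

# Non-solid variant for archives that need frequent single-file extraction.
subprocess.run(
    ["7z", "a", "-t7z", "-ms=off", "working_set.7z", "documents/"],
    check=True,
)
```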
