
Datomic

Datomic is a database system designed for data-of-record applications, where data is stored as a collection of immutable facts known as datoms, enabling complete audit trails and time-based queries without deletions or updates to existing records. Developed since 2010 by Rich Hickey, the creator of the Clojure programming language, together with the teams at Relevance and Metadata Partners, which merged to form Cognitect (acquired by Nubank in 2020), Datomic emphasizes functional programming principles, drawing inspiration from persistent data structures to ensure ACID transaction guarantees and total ordering of transactions. Written primarily in Clojure and running on the Java Virtual Machine (JVM), it supports a flexible schema that allows entities to have any attributes without relying on null values, facilitating relational modeling with hierarchical navigation.

At its core, Datomic's architecture separates storage from querying: datoms are entity-attribute-value triples augmented with transaction metadata, stored in a chronological log that serves as an indelible history, while multiple read-only indexes (EAVT, AVET, AEVT, and VAET) enable efficient querying across diverse patterns. Queries are expressed in Datalog, a declarative language that supports joins, rules, and historical as-of queries, allowing applications to reconstruct any past state of the database. Datomic offers editions for different deployment needs, including a local embedded version for development, Datomic Pro for distributed setups with pluggable storage backends like DynamoDB or SQL databases, and Datomic Cloud, which is optimized for AWS with automated scaling and serverless deployment via Ions for application logic. Released under the Apache 2.0 license, Datomic is free for production use and has been adopted by hundreds of organizations for read-heavy workloads requiring auditability and scalability, such as financial and healthcare systems, leveraging read scaling through peer nodes that cache indexes locally. Its immutable design eliminates many traditional database challenges like locking contention and replication lag, instead appending new facts to build a comprehensive, queryable timeline of data evolution.

Overview

Introduction

Datomic is a distributed, immutable database system that implements Datalog as its query language, enabling flexible and powerful data retrieval. Designed primarily as a general-purpose database for data-of-record applications, it excels in scenarios requiring inherent auditability, historical access, and point-in-time queries, such as in finance, healthcare, and compliance-heavy industries. A key differentiator of Datomic is its treatment of data as immutable facts, referred to as datoms, rather than the mutable records found in conventional relational databases; this approach ensures that all changes are additions, preserving the full history of data without overwrites or deletions. Datomic is distributed with binaries licensed under the Apache 2.0 License, allowing broad adoption for both commercial and non-commercial use. Originally developed by Rich Hickey and his team—initially under the Relevance banner before forming Cognitect—and first released in 2012, the project was advanced by Cognitect until its acquisition by Nubank in 2020, where it continues to be maintained and integrated into large-scale production systems.

History

Datomic was developed by Rich Hickey, the creator of the Clojure programming language, in collaboration with the team at Relevance, a software consultancy firm. The project began around 2010, with the initial announcement occurring on March 29, 2012, introducing Datomic as a next-generation database emphasizing immutability and distributed architecture. The first public release, version 0.8.3335, followed on July 24, 2012, offering both free and professional editions with early support for Clojure integration to facilitate adoption within the functional programming community.

In September 2013, Relevance merged with Metadata Partners to form Cognitect, which took over stewardship of Datomic and continued its development. Early releases, such as version 0.8.4020 in June 2013, enhanced transaction data handling and solidified the peer library for embedding database operations directly into application code, promoting seamless Clojure-based workflows. A stable release in the 0.9 series arrived in January 2014 with version 0.9.5130, marking Datomic Pro as suitable for production environments through features like schema alterations and improved reliability. Datomic Cloud was introduced on December 17, 2017, providing a managed AWS deployment option to simplify scaling and operations for cloud-native applications.

In July 2020, Nubank, a digital banking firm, acquired Cognitect, ensuring continued investment in Datomic's maintenance and ecosystem growth without disrupting customer access. The project transitioned to the 1.0 series in November 2020 with version 1.0.6222. This acquisition paved the way for further accessibility improvements, including the release of Datomic binaries under the Apache 2.0 license on April 27, 2023, allowing free use via Maven Central for non-enterprise scenarios. As of 2025, Datomic remains in the 1.0 series with no major version increments since its introduction in 2020, focusing instead on incremental enhancements for cloud scalability, such as optimizations in version 1.0.7469 released on October 23, 2025, alongside improved documentation and integration tools to support evolving deployment needs.

Design Principles

Core Data Model

Datomic's core data model revolves around immutable facts known as datoms, which form the foundational building blocks of the database. Each datom is a 4-tuple consisting of an entity ID (E), an attribute (A), a value (V), and a transaction (Tx), representing a single asserted fact about the world. This structure captures relationships and properties in a universal relation, where datoms are never modified once added, ensuring a consistent historical record.

Entities in Datomic emerge as dynamic collections of datoms that share the same entity ID, providing a lazy, associative view of related attributes and values at a specific point in time. Accessed via the entity API, such as (d/entity db id), an entity functions like a map in which attributes serve as keys and values as their associated data, including references to other entities for relational navigation. This design allows entities to evolve organically without rigid predefined structures, supporting flexible data representation across diverse domains.

Attributes define the properties of entities and are themselves entities in the database, specified with a value type (e.g., :db.type/string, :db.type/ref) and a cardinality (:db.cardinality/one for single values or :db.cardinality/many for sets). Uniqueness constraints can be applied via :db/unique, either as :db.unique/identity for upserting based on domain keys such as email addresses, or :db.unique/value to enforce that a given value is asserted for at most one entity. Entity IDs, which are database-unique 64-bit longs assigned by the transactor, ensure stable identification, while temporary IDs facilitate client-side creation before resolution.

Schema evolution in Datomic is inherently flexible, as attributes can be added, modified, or extended transactionally without impacting existing datoms or requiring migrations. For instance, new attributes can be introduced to entities over time, preserving historical integrity while accommodating changing requirements. Unlike traditional relational models with fixed tables and columns, Datomic's entity-centric approach supports joins through referential attributes but eschews tabular storage, enabling a more fluid, schema-optional structure that emphasizes relationships via datoms. This immutability of datoms underpins the model's time-aware nature; the mechanics of adding facts transactionally are covered in the next section.
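As an illustration, the following sketch (using the Clojure Peer API; the :person/* attributes are hypothetical) shows how attributes are defined as ordinary transaction data rather than DDL:

(require '[datomic.api :as d])

;; Schema is plain data: each map defines one attribute as an entity.
(def schema
  [{:db/ident       :person/email
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}   ; upsert on this domain key
   {:db/ident       :person/friend
    :db/valueType   :db.type/ref           ; reference to another entity
    :db/cardinality :db.cardinality/many}])

(def uri "datomic:mem://example")
(d/create-database uri)
(def conn (d/connect uri))
@(d/transact conn schema)                  ; install the attributes

Because attributes are themselves entities, later transactions can introduce new attributes without touching any existing datoms.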

Immutability and Transactions

Datomic enforces immutability by treating databases as ledgers of facts, where no in-place updates or deletions occur. Instead, changes are represented by adding new datoms—facts consisting of an entity ID, attribute, value, and transaction identifier—while preserving the entire history of prior states. This design ensures that every database value is an immutable set of all datoms ever added, enabling reliable auditing and temporal queries without mutable-state conflicts.

Transactions in Datomic are declarative and submitted via the d/transact API, which atomically accrues a set of datoms to the database. Each transaction specifies additions or retractions using forms like [:db/add entity-id attribute value] or [:db/retract entity-id attribute old-value], processed as a cohesive unit without intermediate visibility to other operations. The transactor, a centralized component, serializes and applies these transactions in the order received, ensuring that the resulting database reflects the complete set of changes or none at all.

Datomic provides full ACID guarantees for transactions. Atomicity is achieved through a single transactor that performs writes in one atomic operation to durable storage, preventing partial commits. Consistency is maintained via schema validation, which enforces rules like unique attributes and entity predicates before accepting changes, alongside a global transaction time basis for ordered state transitions. Isolation ensures serializability by delivering point-in-time database views to peers, where reads are monotonic and writes form a total order across the system. Durability relies on the underlying storage backends, such as DynamoDB or SQL databases, which confirm writes before completion.

Transaction functions extend Datomic's capabilities by allowing custom code to be invoked during transaction processing for atomic operations, such as validation or derived-fact generation. These pure functions take the database state before the transaction and arguments, returning additional transaction data or aborting via d/cancel if rules are violated. Deployed either as database functions (installed via transactions) or classpath functions (on the transactor), they integrate seamlessly with the immutable model while maintaining ACID properties.

Error handling in Datomic ensures transactional integrity by failing the entire operation if any constraint is breached, such as uniqueness violations or transaction-function aborts, with no partial effects persisted. The transactor processes transactions serially from a queue, returning detailed reports on success or failure, including exceptions for issues like timeouts or invalid data. Peers can monitor outcomes asynchronously to confirm the database state post-submission.
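A minimal sketch of assertion and retraction with the Peer API, assuming the schema and connection from the previous example; the lookup ref [:person/email ...] resolves an entity by its unique key:

;; Assert facts for a new entity (the string "ada" is a tempid that
;; resolves to a real entity ID when the transaction commits).
@(d/transact conn
   [[:db/add "ada" :person/email "ada@example.com"]])

;; Retraction appends a retraction datom; the prior fact stays in history.
@(d/transact conn
   [[:db/retract [:person/email "ada@example.com"]
     :person/email "ada@example.com"]])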

Querying and Data Access

Datalog Language

Datomic employs Datalog as its primary query language: a declarative, logic-based system designed for retrieving and manipulating data from its immutable database of facts known as datoms. Datalog in this context allows users to express queries as logical patterns and rules, focusing on what data to retrieve rather than how to access it, which facilitates complex relational queries without procedural code. At its core, Datomic's Datalog operates on relations represented by triples of entity ID, attribute, and value, enabling joins and pattern matching akin to Prolog but optimized for database operations.

Queries are structured using keywords like :find to specify output variables, :where for pattern clauses, and :in for input parameters, with reusable rules optionally passed as an input conventionally bound to %. Variables, denoted by ? prefixes (e.g., ?e for an entity), bind to values during execution, while clauses such as [?e :artist/name ?name] match entities with specific attributes. Rules support recursion, for instance by defining transitive closures over relationships like parent-child hierarchies through repeated application of base patterns.

Query execution in Datomic returns unordered sets of bindings for the :find variables, typically as tuples or collections, executed against a database value that represents a point-in-time view of the data. Inputs can include constants, entity IDs, or even subqueries via :in, allowing parameterized and composable queries, while outputs can leverage the Pull API for hierarchical entity attribute selection beyond flat bindings.

Compared to SQL, Datomic's Datalog offers advantages in handling complex joins and recursion natively, without subqueries or recursive common table expressions, and it provides schema flexibility by querying schema-optional data without requiring fixed table structures. Its use of data structures for queries avoids the injection vulnerabilities inherent in string-based SQL and enables engine-level optimizations that evolve independently of query logic.

For a simple entity lookup, a query might find all entities named "The Beatles" as follows:
[:find ?e
 :where [?e :artist/name "The Beatles"]]
This returns a set containing the entity ID, such as #{[17592186045470]}. A multi-hop query could retrieve names and durations for tracks by that artist, joining across entities:
[:find ?name ?duration
 :where [?e :artist/name "The Beatles"]
        [?track :track/artists ?e]
        [?track :track/name ?name]
        [?track :track/duration ?duration]]
Executing this yields bindings like [["Here Comes the Sun" 186000]], demonstrating traversal of artist-to-track references.
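The transitive-closure rules mentioned above can be sketched as follows (a hedged example: the :person/parent and :person/name attributes, db, and person-id are illustrative); the ancestor rule calls itself to walk parent links of any depth:

(def rules
  '[[(ancestor ?a ?d)
     [?d :person/parent ?a]]      ; base case: ?a is a direct parent of ?d
    [(ancestor ?a ?d)
     [?d :person/parent ?x]
     (ancestor ?a ?x)]])          ; recursive case: grandparents and beyond

;; Rules are passed as the % input alongside the database value.
(d/q '[:find ?name
       :in $ % ?person
       :where (ancestor ?a ?person)
              [?a :person/name ?name]]
     db rules person-id)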

Time Travel Features

Datomic's time travel features allow users to query the database at any historical point, providing immutable snapshots and change logs without requiring separate auditing mechanisms. This capability stems from the system's design, where every transaction appends new facts to an indelible log, enabling retrospective analysis of data evolution.

As-of queries retrieve a consistent view of the database as it existed at a specific point in time, excluding all subsequent changes. For instance, invoking db.asOf(t) returns a database value containing only facts asserted up to time t, which can be specified as a transaction ID, a basis-t, or an instant (date). This facilitates auditing past states, such as verifying decisions based on outdated inventory levels, where a query might show a count of 7 for an item before a correction updated it to 100.

History views, obtained via db.history(), expose the complete set of all datoms ever asserted or retracted, including both additions (added = true) and retractions (added = false), across the database's lifetime. Queries against this view can reveal the full sequence of changes for an entity, such as tracking updates to an item's count from 100 to 1000 over multiple transactions, complete with operation flags and timestamps. This unfiltered perspective supports detailed forensic analysis without data loss.

Since and until filters enable targeted examination of changes within temporal bounds by combining functions like db.since(t1).asOf(t2), which yields datoms added after t1 but present as of t2. The since filter isolates facts transacted after a given point, useful for detecting deltas, while chaining with as-of refines the window for precise analysis. These operations require careful handling of database values, since identifiers asserted before the since point may not be resolvable in the filtered view.

Each transaction in Datomic is associated with a :db/txInstant attribute, a timestamp recording when the transaction occurred, serving as the anchor for all temporal queries. This built-in metadata ensures that time points are precise and queryable, allowing users to reference exact moments like #inst "2014-01-01" for reproducible historical views.

These features underpin key use cases such as regulatory compliance auditing, where full histories satisfy retention requirements; debugging application logic by replaying data states; and versioning without auxiliary logs, as the persistent fact log inherently supports rollback simulations and trend analysis.
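A brief sketch of these filters with the Peer API (a hedged example: conn is an existing connection, and :item/count, item-id, t1, and t2 are illustrative):

(def db (d/db conn))

;; As-of: the database value as it existed at a time point, which may be
;; a transaction ID, a basis-t, or an instant.
(d/q '[:find ?count :in $ ?item
       :where [?item :item/count ?count]]
     (d/as-of db #inst "2014-01-01") item-id)

;; History: every assertion and retraction, with the added flag bound.
(d/q '[:find ?count ?tx ?added
       :in $ ?item
       :where [?item :item/count ?count ?tx ?added]]
     (d/history db) item-id)

;; Since composed with as-of: datoms added after t1 and present as of t2.
(d/as-of (d/since db t1) t2)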

Architecture

System Components

Datomic's system architecture revolves around a distributed setup comprising peers, a transactor, and a storage service, designed to separate read and write paths for reliability and scalability. Peers act as application-side clients that handle both querying and transaction submission, connecting directly to the storage service to discover the transactor's endpoint via periodic heartbeats. The transactor serves as the central authority for all write operations, while the storage service provides durable persistence across various backends. This separation ensures that read workloads can scale independently without impacting writes, though the single active transactor imposes a natural limit on write throughput.

Peers are read-oriented JVM processes embedded in applications, caching database index segments locally to enable fast, consistent queries even during transactor outages. They maintain a local view of the database value, allowing calls to retrieve the most recent consistent value available in memory, and use functions like d/sync to align with storage for point-in-time accuracy. For writes, peers submit transactions to the transactor over a secure channel but do not process them locally, ensuring all modifications are sequenced centrally. Multiple peers can connect to the same database concurrently, distributing read load across application instances without requiring additional coordination.

The transactor is a single, active process responsible for processing all transactions across one or more databases, guaranteeing ACID properties through sequencing and validation. It receives transaction requests from peers, orders them linearly, and applies changes to the database while writing heartbeats to the storage layer to advertise its endpoint for peer reconnection. In production, a standby transactor monitors the active one for failure, enabling failover without data loss, though frequent failovers signal underlying issues. The transactor's design emphasizes fail-fast behavior, isolating it on dedicated hardware to minimize interference from peer or storage loads.

The storage layer is a pluggable abstraction for persistent data, supporting backends such as AWS DynamoDB for scalable persistence, Cassandra for distributed clusters with a minimum of three nodes and a replication factor of three, and relational databases accessed via JDBC. These backends store the database's datoms and indexes durably, with the pluggable abstraction allowing switching via connection strings. The transactor interacts with storage to persist transaction outcomes and index segments, while peers read from it for cache population and queries.

Index building occurs on the transactor, which accumulates recent datom changes in memory until a threshold (default 32MB) triggers background indexing into immutable segments of up to approximately 50KB each. These segments, covering all index types like EAVT and AEVT, are then pushed to the storage layer for persistence, ensuring each datom is replicated at least three times for redundancy. If memory usage approaches the maximum (default 512MB), the transactor applies back pressure to throttle incoming transactions until indexing completes, preventing overload. Parallelism in indexing, configurable up to eight threads on multi-CPU systems with scalable storage, accelerates this process for high-write scenarios.

For scalability, Datomic leverages multiple peers to horizontally scale read queries and local caching, allowing applications to handle increased load by adding instances without affecting the transactor. Writes, however, are constrained by the single transactor's throughput, which can be tuned via CPU allocation, write concurrency (e.g., four threads for roughly 800KB/second throughput on DynamoDB), and storage provisioning.

The architecture's separation of concerns—running peers, transactor, and storage on separate machines—ensures that load spikes in one area minimally impact others, supporting reliable operation in distributed environments. High availability is further enhanced by the standby transactor and storage-level replication, though overall write scaling requires careful planning to avoid indexing bottlenecks.
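For example, a peer embeds the database in-process and can coordinate with storage when strict recency is required (a sketch, assuming a locally running dev transactor and the URI shown):

(require '[datomic.api :as d])

(def conn (d/connect "datomic:dev://localhost:4334/hello"))

;; d/db returns the most recent database value the peer already holds in
;; memory; no network round trip is required.
(def db (d/db conn))

;; d/sync returns a future whose value is a database guaranteed to include
;; all transactions completed before this call, coordinated via storage.
(def synced-db @(d/sync conn))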

Indexing and Storage

Datomic employs four built-in covering indexes to organize datoms for efficient data access patterns. These indexes maintain ordered sets of datoms, enabling optimized lookups without requiring additional configuration for most queries. The primary index, EAVT (Entity-Attribute-Value-Transaction), sorts datoms by entity ID (E) ascending, followed by attribute (A), value (V) ascending, and transaction ID (T) descending; it facilitates entity lookups, akin to accessing rows in a relational table, grouping all facts associated with a specific entity for master-detail operations. The AEVT index orders by attribute (A) ascending, entity (E) ascending, value (V) ascending, and transaction (T) descending, supporting attribute scans that retrieve all values of a given attribute across entities, similar to column-wise access in SQL. For lookups involving specific attribute-value pairs, the AVET index sorts by attribute (A) ascending, value (V) ascending, entity (E) ascending, and transaction (T) descending, allowing efficient filtering and range scans on values; this index requires explicit schema configuration (:db/index true) in Datomic Pro but is always enabled in Datomic Cloud. The VAET index, a reverse index maintained for reference attributes (:db.type/ref), sorts by value (V) ascending, attribute (A), entity (E), and transaction (T) descending, enabling value-based access and relationship traversal, such as finding all entities that point to a particular referenced entity.

Index segments form the foundational units of these indexes, consisting of immutable, sorted files that capture snapshots of datoms over transaction intervals. Each index is a shallow tree with a wide branching factor (approximately 1000) and leaf nodes holding a few thousand datoms, ensuring compact storage and fast traversal. The transactor periodically rebuilds these segments through background indexing jobs, merging recent datoms from an in-memory index into durable tiers; this adaptive process scales sublinearly with data volume, minimizing rewritten segments and maintaining query performance.

Datomic supports multiple storage backends for persisting datoms and indexes, prioritizing durability and scalability. In Datomic Cloud, AWS DynamoDB serves as the default backend, providing ACID-compliant persistence with automatic replication across multiple availability zones. For on-premises or hybrid deployments, a SQL database such as PostgreSQL is a common choice, utilizing a dedicated table (datomic_kvs) for key-value storage of datoms and segments. Development environments typically use in-memory storage or the dev protocol, which stores data in local disk files via an embedded JDBC server, suitable for non-production testing but lacking enterprise-grade persistence. Other options like Cassandra are available for high-availability needs, requiring at least three nodes with a replication factor of three.

To mitigate read latency in distributed reads, peers employ layered caching mechanisms. An LRU object cache holds frequently accessed index and log segments as objects directly in memory, requiring no explicit configuration. For larger-scale caching, Valcache provides a memcached-compatible cache using local SSDs on supported instances (e.g., AWS i3), with fallback to shared EFS for broader coverage; this layered approach ensures segments are readily available without repeated fetches from the primary backend. Local memcached options further allow peers to maintain their own replicas of frequently used segments, reducing network overhead in multi-node setups.

Data durability in Datomic is ensured through backend-specific replication guarantees, applying to both datoms and derived indexes. DynamoDB, for instance, replicates data across three facilities by default, offering 99.999999999% durability, while Cassandra configurations can achieve similar guarantees via clustering. Indexes inherit this durability as immutable artifacts stored alongside datoms, with background indexing jobs ensuring consistency without risking data loss during rebuilds. This stratified persistence model—combining transactional storage, durable caches, and archival layers—provides robust recovery from failures across the system.
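Peers can also read these indexes directly through the raw index API; the sketch below (Peer API; db, entity-id, and :person/email are illustrative) seeks each index in its sort order:

;; EAVT: every datom about one entity, as in a row-oriented lookup.
(seq (d/datoms db :eavt entity-id))

;; AVET: all entities holding a given attribute-value pair; in Datomic Pro
;; the attribute must be indexed (:db/index true) or unique.
(seq (d/datoms db :avet :person/email "ada@example.com"))

;; VAET: reverse traversal of reference attributes—who points at this entity?
(seq (d/datoms db :vaet entity-id))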

Deployment and Integrations

Deployment Options

Datomic offers three primary deployment options tailored to different scales and environments: Datomic Local for development and testing, Datomic Pro for distributed on-premises or hybrid setups, and Datomic Cloud for fully managed AWS deployments. Each option leverages the same core data model and query language while varying in infrastructure management and scalability features.

Datomic Local provides an embedded, single-process database suitable for local development, continuous integration, and small applications without external dependencies. It stores data in local files or in memory, requiring no network connectivity or separate server processes, and supports the full Datomic API for transactions and queries. Ideal for testing, it allows rapid iteration by adding the Datomic Local library to the application classpath and configuring storage via a .datomic/local.edn file to specify directories for databases.

Datomic Pro enables distributed deployments with high availability, supporting on-premises, private cloud, or hybrid environments through pluggable storage backends like SQL databases, DynamoDB, or Cassandra. It requires managing an active transactor for writes—optionally with a standby for failover—and multiple peers for read scaling, all running as separate JVM processes for production reliability. Configuration uses properties files, such as dev-transactor-template.properties for initial setup, and the system supports manual scaling without automated cloud orchestration.

Datomic Cloud delivers a serverless, fully managed service exclusively on AWS, automating infrastructure with services like DynamoDB for transaction logs, S3 for indexes, and EC2 Auto Scaling Groups for compute. It eliminates transactor management, providing elastic scaling, built-in backups via S3 retention, and seamless integration with AWS features like API Gateway. Deployment occurs through CloudFormation templates, focusing on VPC setup, IAM roles, and encryption for security.

All Datomic editions require a JVM, with Pro and Local needing Java 11 or later (LTS versions recommended) and Cloud using Java 17 on compute nodes; Clojure is essential for advanced peer or Ion integrations. Configuration often involves EDN files for Cloud and Ions, or properties files for Pro transactors, ensuring reproducible setups. Migration paths facilitate scaling: from Local to Pro or Cloud by exporting databases and reconnecting via compatible URIs, preserving the immutable history; moving from Pro to Cloud involves porting applications to client-only access and leveraging Ions for AWS-native features, though peer-dependent code may need refactoring. Contact Cognitect support for complex transitions.
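Edition differences surface mainly in connection configuration. A hedged sketch with the Client API (the system name, region, and endpoint below are placeholders, not real values):

(require '[datomic.client.api :as d])

;; Datomic Local: embedded and file-backed; no server process required.
(def local-client
  (d/client {:server-type :datomic-local
             :system      "dev"}))

;; Datomic Cloud: connects to a managed AWS system through its endpoint.
(def cloud-client
  (d/client {:server-type :cloud
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "https://entry.my-system.us-east-1.datomic.net:8182"}))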

Client Interfaces

Datomic provides two primary programmatic interfaces for applications to interact with the database: the Peer API and the Client API. The Peer API is a full-featured library designed for embedding directly within application processes, offering direct access to queries and transactions on the JVM. In contrast, the Client API serves as a lightweight interface for remote connections, particularly suited for cloud deployments and short-lived services, routing requests through a peer server or cloud infrastructure.

The Peer API, available as the datomic.api namespace in Clojure and the datomic.Peer class in Java, enables direct database connections for embedded use cases. Applications connect using a database URI that specifies the protocol and storage backend, such as datomic:dev://localhost:4334/hello for local development or datomic:sql://host:port?jdbc:postgresql://.../mydb for SQL-based storage. Key functions include connect(uri) to establish a thread-safe connection, q(query, db) for executing queries, pull(selector, eid) for retrieving entity data, and transact(conn, tx-data) for submitting transactions, which blocks until completion or can use transact-async for non-blocking operation. Connections are automatically cached for reuse, providing implicit pooling without manual management.

The Client API, exposed through the datomic.client.api namespace, offers a synchronous interface wrapping an asynchronous core, ideal for remote access in distributed environments like Datomic Cloud. It begins with client(config-map), where :server-type is set to :cloud (specifying :region, :endpoint, etc.) or :peer-server for on-premises setups, followed by connect(db-name) to obtain a connection. This API supports operations equivalent to the Peer API, including q(query) for queries, pull(selector, eid) for data retrieval, and transact(tx-data) for transactions, with results returned directly or via channels in async mode. Designed for smaller footprints, it communicates over HTTP to a gateway, enabling scalability in service-oriented deployments.

Integrations leverage these APIs natively: Clojure applications use datomic.api for seamless access, while Java interop employs the Peer class directly. For non-JVM languages, the Client API facilitates bindings via its HTTP-based protocol, though the legacy REST API (accessible at endpoints like https://localhost:8001) provides an alternative EDN-formatted HTTP interface for programmatic calls, albeit not recommended for new development.

Connection management relies on URI schemes to abstract backends, with Peer API URIs like datomic:ddb://us-east-1/my-table/my-db for DynamoDB or datomic:mem://my-db for in-memory testing, and the Client API using configuration maps for cloud or dev-local modes. Best practices include utilizing async transactions (transact-async in Peer, channel-based in Client) to achieve high throughput without blocking, retrying on transient errors like :busy with exponential backoff, and relying on built-in caching for efficient connection reuse in both APIs.
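The two interfaces mirror each other closely, as the round-trip sketch below shows (a hedged example: the database names and the :greeting ident are illustrative):

;; Peer API: embedded, URI-based; transact takes a vector of tx-data.
(require '[datomic.api :as peer])
(peer/create-database "datomic:mem://hello")
(def peer-conn (peer/connect "datomic:mem://hello"))
@(peer/transact peer-conn [{:db/ident :greeting}])
(peer/q '[:find ?e :where [?e :db/ident :greeting]] (peer/db peer-conn))

;; Client API: remote, configuration-map based; transact takes an arg map.
(require '[datomic.client.api :as client])
(def c (client/client {:server-type :datomic-local :system "dev"}))
(client/create-database c {:db-name "hello"})
(def client-conn (client/connect c {:db-name "hello"}))
(client/transact client-conn {:tx-data [{:db/ident :greeting}]})
(client/q '[:find ?e :where [?e :db/ident :greeting]] (client/db client-conn))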

    The REST service can be accessed from a browser or programmatically from a client program. When accessed from the browser, the service is self-describing and ...<|control11|><|separator|>