
Mainline DHT

Mainline DHT is a Kademlia-based distributed hash table (DHT) integrated into the BitTorrent peer-to-peer file sharing system, enabling clients to discover and connect with peers sharing files without relying on centralized servers. It functions as a decentralized tracker by allowing participating nodes to store and retrieve peer contact information associated with torrent infohashes, using UDP-based messaging for efficient routing and queries. Commonly known as the Mainline DHT, it emerged as part of BitTorrent's evolution to enhance availability and robustness in large-scale networks. The protocol employs 160-bit node identifiers and an XOR metric to measure distances in the keyspace, with each node maintaining a routing table of up to eight active contacts per k-bucket for logarithmic-time lookups. Core operations include ping for liveness checks, find_node for locating nearby nodes, get_peers for retrieving peer lists tied to a specific infohash, and announce_peer for registering a node's participation in a swarm, secured by short-lived tokens to mitigate abuse. This supports trackerless torrents, where peers bootstrap into the network via well-known nodes or contacts embedded in torrent files, iteratively refining searches to contact the closest nodes to a target key. Specified in BitTorrent Enhancement Proposal 5 (BEP 5) and authored by Andrew Loewenstern and Arvid Norberg, the Mainline DHT has been adopted by major clients such as the original BitTorrent software, uTorrent, and qBittorrent, forming a resilient network often comprising millions of concurrent nodes. While its decentralized structure promotes scalability and fault tolerance, security analyses have identified vulnerabilities, including susceptibility to Sybil attacks, in which adversaries insert malicious nodes to eclipse legitimate traffic, and efficient spam propagation due to the lack of robust authentication mechanisms. These properties have made Mainline DHT a foundational element of modern P2P systems, influencing designs in other distributed applications despite ongoing challenges in adversarial environments.

Overview

Description

Mainline DHT is a Kademlia-inspired distributed hash table (DHT) employed in the BitTorrent protocol for trackerless torrents, facilitating decentralized peer location through 160-bit node identifiers and torrent info hashes. This system allows BitTorrent clients to discover and connect with other peers sharing specific content without dependence on centralized trackers. The primary function of Mainline DHT is to store and retrieve contact details for peers, including IP addresses and ports, associated with torrent info hashes in a decentralized manner. Peers participate symmetrically as both clients querying the network and servers hosting data, forming a sloppy hash table in which information is replicated across nearby nodes based on XOR distance metrics. Communication occurs over UDP using the Kademlia Remote Procedure Call (KRPC) protocol for efficient, low-overhead exchanges. This design provides key advantages in scalability, enabling the system to handle large-scale swarms with millions of participants by distributing load across the network, and in fault tolerance, ensuring continued operation even if individual nodes or central components fail.

History

The concept of using a distributed hash table (DHT) for trackerless torrents in BitTorrent originated with the release of Azureus version 2.3.0.0 (later rebranded as Vuze) on May 2, 2005, which introduced an initial DHT implementation to enable peer discovery without centralized trackers. This innovation addressed vulnerabilities in tracker-dependent systems by decentralizing peer coordination, drawing inspiration from earlier DHT research such as Kademlia. Shortly thereafter, BitTorrent, Inc., under the leadership of founder Bram Cohen, developed an alternative and incompatible DHT variant known as Mainline DHT, first integrated into the Mainline client with version 4.2.0 in November 2005. This implementation was designed as a robust extension to the core protocol outlined in BEP 3 (initially proposed in 2001 but refined over time), with the specifics of Mainline DHT formalized in BEP 5, which detailed mechanisms such as announce tokens and a key-value store for peer announcements. The protocol evolved through subsequent BEPs, emphasizing sloppy hashing for resilience and efficiency in large-scale peer networks. Adoption of Mainline DHT accelerated after 2006, driven by increasing legal pressure on centralized trackers and their frequent shutdowns, which highlighted the need for decentralized alternatives. For instance, the 2009 decommissioning of The Pirate Bay's tracker, once the world's largest, pushed users toward DHT-reliant clients, as it rendered traditional tracker-based swarms unreliable. Popular clients like uTorrent, which added Mainline DHT support starting with version 1.2.1 in late 2005 and expanded it in subsequent releases, further propelled widespread use by 2007, enabling trackerless operation for millions of users. In the ensuing years, the protocol saw minor refinements for modern network challenges, including BEP 32 (proposed October 2009 and updated through 2016) to extend DHT functionality over IPv6, improving global scalability.
Further tweaks to BEP 5, such as NAT-friendly announcements via implied ports (added March 2013) and efficiency optimizations (the document was last modified in January 2020), ensured ongoing compatibility and performance in evolving internet environments.

Protocol Fundamentals

Node Identification

In Mainline DHT, each participating node is assigned a unique 160-bit identifier known as the node ID, which is randomly generated from the same identifier space as SHA-1 infohashes, with sufficient entropy to ensure uniqueness and uniform distribution across the keyspace. This random selection allows nodes to position themselves arbitrarily in the DHT's keyspace, facilitating decentralized routing without a centralized authority. While some extensions, such as BEP 42, derive node IDs from IP addresses for enhanced security, the core protocol relies on random generation to prioritize simplicity. Node contact information is exchanged in a compact format consisting of the 20-byte node ID followed by a 6-byte representation of the node's IPv4 address and port, enabling efficient storage and transmission of up to 200 such contacts in responses (totaling 5200 bytes). IPv6 is supported through an 18-byte address extension, but the IPv4 format remains the default for compactness in most operations. The structure ensures that clients can quickly parse and utilize peer details during lookups without excessive overhead. Unlike some DHT variants that employ proof-of-work for ID validation, Mainline DHT does not enforce computational challenges; instead, node legitimacy is verified through reachability tests via ping queries, where a node that has responded within the last 15 minutes is classified as "good" and retained in routing tables. This lightweight approach relies on the network's scale to mitigate Sybil attacks, as unresponsive or malicious nodes are periodically evicted based on empirical responsiveness. Pings serve as the primary mechanism to confirm a node's ongoing participation and IP-port binding. Closeness between nodes, or between a node and an infohash, is measured using the XOR metric, where the distance is computed as the bitwise XOR of the two 160-bit values interpreted as an unsigned integer; smaller distances indicate proximity in the keyspace.
The routing table is organized into k-buckets, each storing up to k = 8 nodes whose IDs share a common prefix length with the local node ID, a parameter set to 8 in the specification for logarithmic-time lookups. Special bootstrap nodes, such as router.bittorrent.com on port 6881, serve as initial entry points for new nodes joining the network. These nodes are contacted via known IP:port addresses, and their node IDs are obtained through initial ping queries to ensure reliable discovery without depending on DNS resolution alone. They seed the routing table during the bootstrap process, providing a stable foundation for subsequent XOR-based lookups.
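The XOR metric lends itself to a direct implementation. The following Python sketch is illustrative rather than taken from any client; it treats the two 20-byte IDs as big-endian unsigned integers, as the protocol describes:

```python
def xor_distance(id_a: bytes, id_b: bytes) -> int:
    """XOR metric from Kademlia: interpret the bitwise XOR of two
    160-bit identifiers as an unsigned integer; smaller means closer."""
    assert len(id_a) == 20 and len(id_b) == 20
    return int.from_bytes(id_a, "big") ^ int.from_bytes(id_b, "big")

# Two IDs that agree on their first byte are closer to each other
# than either is to an ID differing in the first byte.
a = bytes([0xAB] + [0x00] * 19)
b = bytes([0xAB] + [0xFF] * 19)
c = bytes([0x12] + [0x00] * 19)
assert xor_distance(a, b) < xor_distance(a, c)
assert xor_distance(a, a) == 0
```

Because the metric is just XOR, distance to an infohash is computed exactly the same way as distance between two node IDs.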

Routing Table

The routing table in Mainline DHT is organized into up to 160 k-buckets, one for each possible shared-prefix length in the 160-bit node ID space, based on the length of the common prefix between the local node ID and other node IDs. This structure ensures that nodes in a given bucket are roughly equidistant from the local node and facilitates logarithmic-path routing for efficient node discovery. Each bucket stores up to k = 8 nodes as specified, though some implementations adjust this value for greater fault tolerance against churn. The sloppy variant of the protocol permits temporary overflow during lookups by querying more than k nodes in parallel (with concurrency factor α = 3) to increase the chances of finding closer nodes without strict adherence to bucket limits. Maintenance of the routing table involves regular verification of liveness through periodic pings, with nodes classified as good if they have responded to or initiated a query within the last 15 minutes. Questionable nodes, inactive for 15 minutes, are pinged for confirmation; failures lead to their removal and replacement by suitable candidates from adjacent buckets. Buckets unchanged for 15 minutes are refreshed via a find_node query using a random target ID within the bucket's range to discover new active nodes. Stored values, such as peer lists associated with info hashes, require periodic republishing to sustain availability amid node churn, with failures handled by rerouting queries to closer known nodes. The lookup process leverages the routing table through an iterative, XOR-based search to identify the k closest nodes to a target ID, beginning with the most relevant buckets and progressively refining the set of known closest candidates. Queries are sent in parallel to up to α = 3 nodes from the current closest set, with responses updating the closest list and prompting further iterations until no closer nodes are discovered or the process converges.
This approach enables scalable lookups without exhaustive traversal, relying on the table's prefix-based organization for O(log n) efficiency. Bucket splitting occurs when a full bucket whose range contains the local node ID must accept a new node: the bucket is divided into two sub-buckets, one for nodes sharing an additional prefix bit with the local node and one for the remaining nodes. This dynamic adaptation refines the local view of the ID space over time, ensuring balanced coverage and short routing paths as the table populates. In terms of storage, the nodes closest to an infohash store the peer lists announced for it, with practical limits imposed by memory and UDP packet sizes for responses (typically up to around 200 compact peer addresses per response).
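The bucket a contact belongs to follows directly from the shared-prefix rule above. This Python sketch uses one common indexing convention (bucket i holds contacts sharing exactly i leading bits with the local ID); the function name and convention are illustrative assumptions, not mandated by BEP 5:

```python
def bucket_index(local_id: bytes, other_id: bytes) -> int:
    """Index of the k-bucket for other_id: the number of leading bits
    it shares with local_id (0..159). A node does not store itself."""
    d = int.from_bytes(local_id, "big") ^ int.from_bytes(other_id, "big")
    assert d != 0, "a node does not store its own ID"
    return 160 - d.bit_length()  # shared-prefix length

local = bytes(20)                    # ID 000...0
far   = bytes([0x80] + [0x00] * 19)  # differs in the very first bit
near  = bytes([0x00] * 19 + [0x01])  # differs only in the last bit
assert bucket_index(local, far) == 0     # no shared prefix bits
assert bucket_index(local, near) == 159  # 159 shared prefix bits
```

Under this convention, roughly half of the keyspace lands in bucket 0, which is why the buckets near the local ID are split dynamically as the table fills.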

KRPC

The KRPC (Kademlia Remote Procedure Call) protocol serves as the foundational communication mechanism in Mainline DHT, enabling remote procedure calls between nodes over UDP. Messages are compact binary structures serialized using Bencoding, a simple encoding format that supports integers, byte strings, lists, and dictionaries. Each message forms a single Bencoded dictionary with two mandatory keys, "t" for the transaction ID (a short binary string, typically 2 bytes long to allow up to 65,536 concurrent transactions) and "y" for the message type (a single-character string indicating "q" for query, "r" for response, or "e" for error), plus type-specific keys for the content. The transaction ID is generated by the querying node and echoed unchanged in the corresponding response or error to facilitate matching. Queries initiate all communications, consisting of the method name in the "q" key (an ASCII string) and arguments in the "a" key (a Bencoded dictionary). For instance, a basic ping query is encoded as:
d1:ad2:id20:abcdefghij0123456789e1:q4:ping1:t2:aa1:y1:qe
Responses mirror the structure but use "y":"r" and include return values in the "r" key as a Bencoded dictionary. Errors use "y":"e", with the "e" key holding a list containing an error code (e.g., 201 for a generic error) followed by a human-readable string. All messages are sent as UDP datagrams, ensuring low overhead but requiring careful handling of unreliability. Bencoding keeps serialization efficient and easy to inspect; for example, integers are prefixed with "i" and suffixed with "e" (e.g., i3e for 3), strings are prefixed with their length (e.g., 4:spam), and lists and dictionaries nest recursively. Transaction handling relies on the "t" ID to correlate responses with queries, as UDP lacks built-in session management. Implementing nodes generate random or sequential IDs for each outgoing query and maintain a table of pending transactions. Upon receiving a response or error with a matching ID, the entry is cleared; unmatched messages are discarded. Timeouts of around 15 seconds per query prevent indefinite waits, after which the query is considered failed and the node may be probed again or marked unresponsive. This short timeout balances responsiveness with network variability, though nodes are deemed unreliable only after repeated failures over longer periods (e.g., 15 minutes of inactivity). Versioning in KRPC is minimal: the core protocol defines only the "q"/"r"/"e" message types. Extensions introduce an optional "v" key in the top-level dictionary for client identification (a 4-character string per BEP 20, e.g., "UT12" for uTorrent). Other extensions, such as BEP 44 for storing arbitrary data, use a "v" key within argument dictionaries to hold the stored value (a byte string whose SHA-1 hash serves as the key for immutable items), while retaining the standard message types. This allows backward compatibility while enabling features like arbitrary data storage.
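The ping message shown above can be reproduced with a few lines of bencoding. This Python sketch implements just enough of the format for KRPC messages; it omits decoding and assumes dictionary keys are byte strings, which bencoding requires to be emitted in sorted order:

```python
def bencode(obj) -> bytes:
    """Minimal bencoder covering the four KRPC value types."""
    if isinstance(obj, int):
        return b"i%de" % obj
    if isinstance(obj, bytes):
        return b"%d:%s" % (len(obj), obj)
    if isinstance(obj, list):
        return b"l" + b"".join(bencode(x) for x in obj) + b"e"
    if isinstance(obj, dict):  # keys must be byte strings, sorted
        return b"d" + b"".join(
            bencode(k) + bencode(v) for k, v in sorted(obj.items())
        ) + b"e"
    raise TypeError(f"cannot bencode {type(obj)}")

ping = {
    b"t": b"aa",                             # transaction ID
    b"y": b"q",                              # this message is a query
    b"q": b"ping",                           # method name
    b"a": {b"id": b"abcdefghij0123456789"},  # placeholder 20-byte node ID
}
assert bencode(ping) == (
    b"d1:ad2:id20:abcdefghij0123456789e1:q4:ping1:t2:aa1:y1:qe"
)
```

The sorted-key rule is what makes the encoding canonical: the "a" dictionary precedes "q", "t", and "y" in the wire format regardless of insertion order.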

Core Operations

Bootstrap Process

The bootstrap process enables a new node joining the Mainline DHT to initialize its routing table by discovering nearby nodes in the keyspace. It begins with the node generating a random 160-bit node ID using a secure random number generator and contacting a set of well-known bootstrap nodes, which serve as entry points to the network. Common examples include router.utorrent.com:6881, router.bittorrent.com:6881, and dht.transmissionbt.com:6881, maintained by popular client developers to ensure reliable network access. The node initiates discovery by issuing find_node queries via the KRPC protocol to these bootstrap nodes, specifying its own node ID as the target to retrieve contact information for the closest nodes known to them. Each response contains compact node information (node ID, IP address, and port) for up to k = 8 nodes, sorted by XOR distance to the target. The querying node then selects the closest responders, typically up to 3, and sends parallel find_node queries to them, iterating this process to uncover progressively closer nodes. This parallelism accelerates convergence, with the search halting when no closer nodes are returned or when the k closest known nodes have all been queried. Discovered nodes that respond are inserted into the appropriate k-buckets of the new node's routing table based on their XOR distance to the node's ID, respecting the per-bucket limit of k = 8 entries. To ensure reliability, the node verifies each inserted contact with a ping query; responsive nodes are classified as "good" and retained, while non-responsive ones are discarded or marked for replacement. This verification step helps maintain a table of active, low-latency contacts for subsequent operations. IPv6 support, formalized in BEP 32 (proposed in 2009), treats IPv4 and IPv6 as separate DHT overlays with distinct routing tables and bootstrap nodes to avoid interoperability issues.
Dual-stack nodes use the same node ID across both networks but issue find_node queries specifying the desired address family (via the "want" parameter as "n4" for IPv4 or "n6" for IPv6), allowing efficient bootstrapping into each overlay independently.
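The iterative convergence described above can be sketched against a toy in-memory network. The helper names, the integer node IDs, and the simplified stopping rule (stop once the K closest known nodes have all been queried) are assumptions of this example, not part of BEP 5:

```python
K, ALPHA = 8, 3  # bucket size and lookup concurrency used by Mainline DHT

def iterative_find_node(target, bootstrap, find_node):
    """Iterative lookup sketch: repeatedly query the closest not-yet-queried
    candidates and merge their replies. `find_node(node, target)` stands in
    for a network round trip returning contacts a remote node knows near
    `target`."""
    shortlist, queried = set(bootstrap), set()
    while True:
        by_dist = sorted(shortlist, key=lambda n: n ^ target)
        pending = [n for n in by_dist[:K] if n not in queried]
        if not pending:            # converged: the K closest were all asked
            return by_dist[:K]
        for node in pending[:ALPHA]:   # up to ALPHA "parallel" queries
            queried.add(node)
            shortlist.update(find_node(node, target))

# Toy in-memory "network": node IDs are small ints, and every node happens
# to know the K globally closest nodes to any target (a simplifying
# assumption that keeps the example self-contained and deterministic).
nodes = sorted({(i * 2654435761) % (2 ** 16) for i in range(200)})

def fake_find_node(_node, target):
    return sorted(nodes, key=lambda n: n ^ target)[:K]

target = 12345
found = iterative_find_node(target, nodes[:3], fake_find_node)
assert found == sorted(nodes, key=lambda n: n ^ target)[:K]
```

Real clients refine this loop with timeouts, per-query retries, and candidate sets larger than K, but the convergence argument is the same: each round either discovers a strictly closer node or exhausts the current closest set.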

Queries

The Mainline DHT employs three primary query methods to facilitate node discovery and peer retrieval: ping, find_node, and get_peers. These queries are encoded using the KRPC protocol, which structures messages as bencoded dictionaries transmitted over UDP, including a transaction ID ("t"), message type ("y" set to "q" for queries), method name ("q"), and arguments ("a"). The ping query serves as the simplest mechanism to verify reachability. It includes a single argument, id, which is the querying node's 20-byte node ID. Upon receipt, the queried node responds with its own 20-byte node ID in the id field, confirming availability without returning additional contacts. This query is essential for basic connectivity checks during network interactions. The find_node query enables node discovery by identifying the nodes closest to a specified target ID. Its arguments consist of id (the querying node's 20-byte node ID) and target (a 20-byte ID representing the sought location in the keyspace). The queried node responds with its own id and a nodes field containing compact node information, concatenated 26-byte entries consisting of the 20-byte node ID, 4-byte IPv4 address, and 2-byte port (all in network byte order), for the k = 8 closest good nodes from its routing table, measured by XOR distance to the target. This supports iterative DHT traversal, in which a querying node issues parallel requests to up to α = 3 of its closest known nodes, incorporating responses to refine its set of closest candidates until no closer nodes are discovered, at which point it terminates. During traversal, the node may maintain a temporary set of up to 20 closest candidates to guide further queries, though each response is limited to k = 8 entries. The get_peers query retrieves contact information for peers associated with a specific torrent. It requires the arguments id (the querying node's 20-byte node ID) and info_hash (the 20-byte SHA-1 hash of the torrent's info dictionary). If the queried node stores peers for the info_hash, it responds with its id, an opaque write token for future announcements, and a values field listing compact peer entries (6 bytes each: IPv4 address and port).
Otherwise, it provides the token along with a nodes field containing compact node information for the k = 8 closest nodes to the info_hash from its routing table, using the same 26-byte format. A node may return nodes even when it has peers, for load balancing or other reasons. This dual response mechanism balances direct peer retrieval with fallback routing, adhering to the same α = 3 traversal strategy as find_node to locate the closest nodes.
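The 26-byte compact node format described above is straightforward to parse. This Python sketch (the function name is illustrative) splits a "nodes" value into its (node ID, IP, port) triples:

```python
import ipaddress
import struct

def parse_compact_nodes(blob: bytes):
    """Split a find_node/get_peers 'nodes' value into (node_id, ip, port)
    triples: each entry is 20 bytes of node ID, 4 bytes of IPv4 address,
    and a 2-byte big-endian (network byte order) port, 26 bytes total."""
    assert len(blob) % 26 == 0, "nodes field must be a multiple of 26 bytes"
    out = []
    for off in range(0, len(blob), 26):
        entry = blob[off:off + 26]
        node_id = entry[:20]
        ip = str(ipaddress.IPv4Address(entry[20:24]))
        (port,) = struct.unpack("!H", entry[24:26])
        out.append((node_id, ip, port))
    return out

# One synthetic entry: a placeholder node ID, the documentation address
# 192.0.2.1, and port 6881.
entry = b"A" * 20 + bytes([192, 0, 2, 1]) + struct.pack("!H", 6881)
assert parse_compact_nodes(entry) == [(b"A" * 20, "192.0.2.1", 6881)]
```

The 6-byte compact peer entries in a "values" list parse the same way, minus the leading 20-byte node ID.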

Announcements and Tokens

In the Mainline DHT protocol, peers announce their presence for a specific torrent by issuing an announce_peer query to the nodes responsible for the torrent's 20-byte info_hash. This query contains the peer's node ID, the info_hash, the peer's listening port (typically 6881), an optional implied_port flag indicating that the sender's UDP source port should be used instead, and, crucially, a token obtained from a prior get_peers query to the same node. The receiving node verifies that the token corresponds to the sender's IP address before storing the peer's compact contact information, consisting of a 4-byte IPv4 address and 2-byte port (6 bytes per peer), under the info_hash. This storage enables subsequent get_peers queries to discover active peers in the swarm. The token serves as an anti-abuse mechanism, ensuring that only peers that have demonstrated interest in the torrent (via a prior get_peers query) can announce themselves. Tokens are opaque binary values generated by the responding node during a get_peers response, typically as a hash of the querying node's IP address concatenated with a per-node secret that refreshes every 5 minutes. Their length is implementation-dependent and not fixed by the protocol; common values range from 4 to 8 bytes, though lengths of 20 bytes (a full SHA-1 output) and other sizes have been observed. Tokens remain valid for a limited time to bound storage liability; the reference BitTorrent implementation accepts tokens up to 10 minutes old, while broader practice may extend validity to 30-60 minutes depending on the client. Each token is tied to a specific IP address and responding node, preventing reuse by other peers or from other addresses. Peer storage on a node is temporary and resource-constrained to mitigate denial-of-service risks from excessive announcements. The specification does not mandate exact limits or durations, but implementations expire entries after a period of inactivity.
If the queried node has peers for the info_hash, a get_peers response includes them in a compact "values" list; it always includes a token and may also provide a "nodes" field with compact information (26-byte entries) for closer nodes. This design balances discovery efficiency with network load. To sustain visibility in the swarm, announcing peers must refresh their entries before expiration, typically by reissuing announce_peer queries every 30 minutes using fresh tokens from recent get_peers interactions. This interval aligns with standard announce periods and ensures peers remain discoverable without overwhelming the DHT. Failure to re-announce leads to eviction from peer storage, reducing the peer's chances of being returned in future queries.
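A token scheme along these lines can be sketched as follows. The secret length, rotation period, and the choice to accept the current and previous secret are illustrative assumptions, since BEP 5 leaves these details to the implementation:

```python
import hashlib
import os
import time

class TokenIssuer:
    """Sketch of the rotating-secret token scheme: token = SHA-1 of the
    querying IP concatenated with a per-node secret. Accepting tokens made
    with the current or the previous secret bounds validity to roughly one
    to two rotation periods."""
    ROTATE_SECONDS = 300  # assumed 5-minute rotation

    def __init__(self):
        self.secret = os.urandom(16)
        self.prev_secret = self.secret
        self.rotated_at = time.monotonic()

    def _maybe_rotate(self):
        if time.monotonic() - self.rotated_at >= self.ROTATE_SECONDS:
            self.prev_secret, self.secret = self.secret, os.urandom(16)
            self.rotated_at = time.monotonic()

    def issue(self, ip: str) -> bytes:
        """Token handed out in a get_peers response."""
        self._maybe_rotate()
        return hashlib.sha1(ip.encode() + self.secret).digest()

    def verify(self, ip: str, token: bytes) -> bool:
        """Check an announce_peer token against the sender's IP."""
        self._maybe_rotate()
        return token in (
            hashlib.sha1(ip.encode() + self.secret).digest(),
            hashlib.sha1(ip.encode() + self.prev_secret).digest(),
        )

issuer = TokenIssuer()
tok = issuer.issue("192.0.2.1")
assert issuer.verify("192.0.2.1", tok)         # same IP: accepted
assert not issuer.verify("198.51.100.7", tok)  # different IP: rejected
```

Binding the token to the requester's IP is what stops a third party from replaying someone else's token in an announce_peer from a different address.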

Implementations and Extensions

Official BitTorrent Integration

Mainline DHT was integrated into the official BitTorrent client starting with version 4.2.0 in November 2005, providing support for trackerless torrents by enabling decentralized peer discovery without reliance on central trackers. This integration allowed the client to function as both a torrent participant and a lightweight tracker within the DHT network, storing peer contact information for infohashes. The DHT in the official client uses port 6881 by default, with an optional configuration setting to specify a different port where needed for firewall or network constraints. Bootstrap nodes, such as router.bittorrent.com and router.utorrent.com on port 6881, are contacted at startup to populate the routing table, which maintains up to 8 active nodes per k-bucket and refreshes unused buckets every 15 minutes through random lookups. If DHT peer discovery fails, the client falls back to any trackers specified in the torrent file to ensure connectivity. Key features include IPv6 support, specified in 2009 through extensions to the PORT message and node addressing, allowing dual-stack operation over both IPv4 and IPv6 networks. For private torrents, the 'private' key set to 1 in the info dictionary instructs the client to disable DHT participation, preventing peer sharing outside the designated tracker. The implementation employs a threaded engine for handling DHT operations, limiting concurrent active searches to 8 to balance performance and resource usage while avoiding network overload. Following BitTorrent Inc.'s acquisition of uTorrent in December 2006, elements of the uTorrent implementation, such as efficient peer exchange, were merged into the mainline client, enhancing DHT compatibility and overall efficiency. Subsequent updates incorporated BEP 9, enabling magnet links to leverage the DHT for metadata retrieval from peers, allowing users to initiate downloads using only an infohash without a full .torrent file.

Third-Party Implementations

Libtorrent, developed by Rasterbar Software, is a widely used open-source C++ library implementing the BitTorrent protocol, including full compliance with BEP 5 for Mainline DHT functionality. It provides efficient routing table management, lookups, and peer announcements, emphasizing scalability for both embedded devices and desktops. The library powers several popular clients, such as qBittorrent and Deluge, enabling them to participate in the Mainline DHT network for trackerless torrents with minimal overhead. Transmission, a lightweight cross-platform client, integrated Mainline DHT support starting with version 1.50, prioritizing low resource usage and simplicity in peer discovery. Its implementation focuses on the essential DHT operations while avoiding unnecessary extensions to maintain performance on resource-constrained systems. Vuze, previously known as Azureus, originally featured its own DHT system, introduced in 2005, which differs from Mainline in structure and operations. For interoperability, Vuze supports partial Mainline DHT compatibility through a dedicated plugin that allows access to the Mainline network for peer discovery, though it does not replace the native system. Development of Vuze ceased around 2019, with BiglyBT serving as its active open-source successor, including full Mainline DHT support. Standalone libraries have also emerged for developers seeking modular Mainline DHT integration without full client dependencies. The btdht library for Python offers a flexible API for DHT operations and supports extensions for storing custom key-value pairs beyond standard torrent peers. Similarly, the dht library in Go, a BEP 5-compliant implementation, facilitates Kademlia-based node management and is commonly used in custom torrent clients for lightweight DHT participation.
Compliance variations exist among implementations, particularly around the write tokens required for peer announcements under BEP 5 to prevent unauthorized inserts. Older clients and partial implementations, such as Vuze without the Mainline plugin, often lack full token support, resulting in restricted access to certain network segments and reduced interoperability.

Security Considerations

Known Vulnerabilities

Mainline DHT is susceptible to eclipse attacks, in which an adversary inserts numerous fake nodes into the routing tables of target nodes to isolate them from the rest of the network. This vulnerability exploits the protocol's k-bucket mechanism, which maintains only the k = 8 closest nodes per bucket, allowing a modest number of Sybils, typically around 8, to dominate a target's view of the DHT and block legitimate queries for specific infohashes. In practical assessments, such attacks enable intermittent full pollution of torrent peer lists, preventing users from discovering real peers. Sybil attacks further undermine the network by enabling attackers to flood the DHT with fake node IDs, thereby gaining disproportionate control over routing paths without any proof-of-work requirement for ID generation. The absence of verification for node IDs allows adversaries to choose IDs strategically close to popular infohashes, intercepting queries and responses to monitor traffic or manipulate data. Real-world measurements from 2010-2011 revealed thousands of active Sybils, including clusters controlling up to 290,000 fake nodes, demonstrating the low cost (under $100 monthly for hosting at the time) and ease of launching large-scale attacks against specific targets or of spying on user activity; later studies indicate these issues persist. The announcement mechanism in Mainline DHT suffers from weak token validation in some implementations, permitting replay of tokens obtained via get_peers queries to spam announce_peer requests without sufficient checks, which can amplify distributed denial-of-service (DDoS) attacks. Tokens, intended to limit unauthorized peer announcements for a torrent, lack robust uniqueness or expiration enforcement in such implementations, allowing attackers to reuse them across multiple sessions or IPs to flood the network with bogus announcements.
This leads to responses containing up to 200 peer entries, creating an amplification factor of approximately 12x for IPv4 traffic and enabling reflective DDoS campaigns against arbitrary victims via spoofed queries. Index poisoning represents another critical flaw: malicious nodes store and propagate fake peer information for targeted torrents, disrupting swarm formation and download efficiency. Attackers exploit the lack of validation in storage operations to insert invalid peers, which legitimate nodes then retrieve and attempt to connect to, wasting resources and slowing content distribution. Studies from 2009-2010 observed this attack's prevalence, with around 8 Sybils achieving over 90% pollution of peer lists for popular torrents, effectively rendering swarms unusable for extended periods. Finally, privacy leaks arise from the protocol's design: node activity is trivially correlated with IP addresses during lookups and announcements, and KRPC messages carry no encryption. This allows adversaries performing spy attacks, via Sybils responding to find_node or get_peers, to record user IP addresses, ports, and download behavior, exposing large numbers of unique users from a single monitoring campaign. Without message authentication or encryption, such correlations enable large-scale surveillance of BitTorrent participation.
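The quoted roughly 12x figure can be sanity-checked with back-of-the-envelope arithmetic. All byte counts below are illustrative assumptions, since exact message sizes vary by client and bencoding overhead:

```python
# Rough amplification estimate for a spoofed get_peers query.
query_bytes = 100            # assumed size of a get_peers query payload
peers, peer_entry = 200, 6   # up to ~200 compact IPv4 peer entries
overhead = 40                # assumed bencoding/framing overhead in the reply

response_bytes = peers * peer_entry + overhead   # 1240 bytes
amplification = response_bytes / query_bytes
assert 12 <= amplification < 13  # consistent with the ~12x figure
```

The attack works because UDP lets the query's source address be spoofed: the small query is sent "from" the victim, and the much larger response is delivered to it.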

Mitigation Strategies

To mitigate spam and denial-of-service attacks in Mainline DHT, clients implement rate limiting on queries and announcements per IP address, typically restricting the number of stored peers from a single IP to prevent an attacker from launching many logical nodes from one machine. Additionally, the protocol uses tokens in get_peers responses, generated as a hash of the requester's IP address and a rotating secret that changes every few minutes, with tokens accepted for up to about 10 minutes, to verify the announcer and curb unauthorized announcements. To resist Sybil attacks, node IDs are generated randomly from a secure source of entropy, making it harder for attackers to predict or coordinate ID assignments for targeted infiltration. BEP 42, a proposed security extension, binds the high bits of a node ID to a CRC32c checksum of the node's external IP address, limiting attackers' ability to choose arbitrary IDs close to target keys without controlling many IP addresses; adoption, however, remains limited to select clients such as those based on libtorrent. Proof-of-work mechanisms have been proposed to require computational effort for node joins or announcements, further deterring Sybil proliferation, though none have been standardized in Mainline DHT BEPs. At the network level, clients perform reachability checks via ping queries before adding discovered nodes to their routing tables, discarding unresponsive or faulty entries to maintain table integrity and avoid propagating malicious contacts. Modern implementations separate IPv4 and IPv6 handling through BEP 32 extensions, using a "nodes2" key in responses to encode variable-length addresses (6 bytes for IPv4, 18 for IPv6), preventing compatibility issues and isolating the two address families.
After 2015, clients increasingly combined DHT discovery with message stream encryption (MSE) for the resulting peer connections, obscuring traffic patterns and improving resistance to passive monitoring. Security analyses through 2023 confirm that while these mitigations have reduced some risks, vulnerabilities such as Sybil attacks and DDoS amplification persist in the Mainline DHT network, which still comprises millions of nodes.
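The IP-binding idea of BEP 42 can be sketched as follows. The masks and bit layout follow the BEP's IPv4 scheme as commonly described, but readers should consult the BEP itself (which includes test vectors) before relying on the details; the pure-Python CRC-32C is included only to keep the sketch self-contained:

```python
import os

def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

def bep42_node_id(ip: bytes, r: int) -> bytes:
    """Sketch of BEP 42 ID generation for IPv4: mask the address, fold in
    a 3-bit value r, and take the checksum's top 21 bits as the ID prefix;
    the final byte stores r so peers can re-verify the binding."""
    masked = bytes(b & m for b, m in zip(ip, b"\x03\x0f\x3f\xff"))
    masked = bytes([masked[0] | (r << 5)]) + masked[1:]
    crc = crc32c(masked)
    nid = bytearray(os.urandom(20))      # remaining bytes stay random
    nid[0] = (crc >> 24) & 0xFF
    nid[1] = (crc >> 16) & 0xFF
    nid[2] = ((crc >> 8) & 0xF8) | (nid[2] & 0x07)
    nid[19] = r
    return bytes(nid)

ip = bytes([124, 31, 75, 21])
a, b = bep42_node_id(ip, 1), bep42_node_id(ip, 1)
assert a[:2] == b[:2] and a[2] & 0xF8 == b[2] & 0xF8  # prefix bound to IP
assert a[19] == 1                                     # r is recoverable
```

Because only the top 21 bits are constrained, a node still has freedom in the rest of its ID, but an attacker can no longer place IDs arbitrarily close to a chosen infohash without controlling correspondingly many IP addresses.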

  19. [19]
  20. [20]
    Bittorrent: Token size in get_peers DHT response - Stack Overflow
    May 31, 2020 · I've read a BEP 5 specification and get expectation that token value in DHT message always has 20 bytes length. Because: The BitTorrent ...Getting peers quickly by announcing to the DHT frequently?Can you make basic mainline dht queries like find_node and ping ...More results from stackoverflow.com
  21. [21]
    Do I remove peers after a certain amount of time?(mainline dht)
    Feb 26, 2023 · I'm asking about BitTorrent peers. Yes, they should only be stored for a finite amount of time. The spec seems to be silent on that.Mainline DHT bootstrap process - Stack OverflowEnumerating Mainline DHT - Stack OverflowMore results from stackoverflow.comMissing: interval | Show results with:interval
  22. [22]
    [PDF] Traffic Localization for DHT-Based BitTorrent Networks - Hal-Inria
    Sep 29, 2017 · The announcing peer is responsible to re-announce the tuple <IP:port,I> over time. In the remainder of this Section, we focus on the ...
  23. [23]
    BitTorrent Inc. buys µTorrent - Ars Technica
    BitTorrent Inc. announced the acquisition of µTorrent AB, the developer of the popular Windows BitTorrent client of the same name.
  24. [24]
    Mainline DHT extensions - libtorrent
    Each string represents one contact and is encoded as 20 bytes node-id and then a variable length encoded IP address (6 bytes in IPv4 case and 18 bytes in ...Missing: k bucket
  25. [25]
    libtorrent - Browse /libtorrent/libtorrent-0.14 at SourceForge.net
    Torrent client, tracker and maker all-in-one. Now support DHT and FAST protocol extensions. Does not support magnet files (there are sites to convert magnets to ...Missing: history 2007
  26. [26]
    libtorrent manual
    Saves the storage state, piece_picker state as well as all local peers in a fast-resume file. has an adjustable read and write disk cache for improved disk ...
  27. [27]
    Transmission
    A fast, easy and free Bittorrent client for macOS, Windows and Linux. Download v4.0.6 stable Release Notes
  28. [28]
    Mainline DHT [Vuze Automation Plugin]
    Rating 4.4 (20) · FreeThis is an implementation of the alternative DHT developed by the Mainline client. For help with IPv6 see the Wiki. Requires Java version 6.0 (also known as 1. ...Missing: compatibility | Show results with:compatibility
  29. [29]
    btdht - PyPI
    The aim of btdht is to provide a powerful implementation of the Bittorrent mainline DHT easily extended to build application over the DHT.
  30. [30]
    nictuku/dht: Kademlia/Mainline DHT node in Go. - GitHub
    This is a golang Kademlia/Bittorrent DHT library that implements BEP 5. It's typically used by a torrent client such as Taipei-Torrent.
  31. [31]
    [PDF] BitTorrent's Mainline DHT Security Assessment - Hal-Inria
    Mar 16, 2011 · We have shown that with few nodes, one can highly pollute or even eclipse a given content on the Mainline DHT.
  32. [32]
    [PDF] P2P File-Sharing in Hell: Exploiting BitTorrent Vulnerabilities to ...
    Abstract. In this paper, we demonstrate that the BitTorrent proto- col family is vulnerable to distributed reflective denial- of-service (DRDoS) attacks.
  33. [33]
    bep_0042.rst_post - BitTorrent.org
    The purpose of this extension is to make it harder to launch a few specific attacks against the BitTorrent DHT and also to make it harder to snoop the network.Missing: Mainline public
  34. [34]
    DHT security - libtorrent blog
    Dec 23, 2012 · One of the vulnerabilities of typical DHTs, in particular the bittorrent DHT, is the fact that participants can choose their own node ID.Missing: extensions timeline
  35. [35]
    Putting Sybils on a Diet: Securing Distributed Hash Tables using ...
    May 5, 2025 · In this work, we investigate using Proof of Space (PoSp) to limit the number of Sybils DHTs. While PoW proves that a node wastes computation, PoSp proves that ...
  36. [36]
    settings_pack - libtorrent
    makes the first buckets in the DHT routing table fit 128, 64, 32 and 16 nodes respectively, as opposed to the standard size of 8. All other buckets have size 8 ...Missing: mainline k