Today on the GuardianDB blog we talk about one of the deepest transformations in the history of the project: the ground-up refactor behind GuardianDB 0.14.0 “Ironclad”. This is not just about adding features or fixing bugs. We changed paradigms. We rewrote our foundations. We aligned the architecture with a vision of networking that actually works, especially in distributed, P2P, and mobile scenarios.
At the heart of this change is the adoption of Iroh as the native networking layer, leaving behind dependencies such as IPFS and libp2p. This decision was driven by deep technical reasons, which I will explain below.
Meet Iroh. A different idea of networking.
A large part of the community knows IPFS and libp2p as “the standard path” for P2P. The problem is that, over time, these projects became generic infrastructures: highly configurable, but also heavy and complex.
Iroh starts from a different premise:
- direct connections whenever possible
- simple cryptographic identity
- a single, modern transport
- synchronization as a central problem, not an accessory
In practice, many concepts that already existed in libp2p/IPFS appear in Iroh in a more explicit, lighter-weight form.
Iroh Endpoint: the new “node”
In Iroh, everything revolves around the Endpoint. An Endpoint is, at the same time: identity, network node, cryptographic entry point, and the basic unit of connection.
Each Endpoint is identified by an EndpointID, which is nothing more than a 32-byte ed25519 public key, directly replacing:
- the libp2p PeerId
- the implicit concept of an “IPFS node”
In Iroh:
- there is no format conversion
- there is no multihash
- there is no identity ambiguity
Connecting to a peer is connecting to its public key. Simple, direct, and cryptographically sound.
libp2p PeerId → Iroh EndpointID
IPFS Node → Iroh Endpoint
This simplicity has a huge impact on distributed systems.
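To make this concrete, here is a minimal sketch of standing up an Endpoint with the iroh Rust crate. The ALPN string is a placeholder of ours, the exact builder methods vary between iroh releases, and the crate currently exposes the identifier through node_id(); treat this as an illustration rather than GuardianDB's actual bootstrap code.

```rust
use iroh::Endpoint;

// Placeholder application-level protocol identifier (ALPN) for this sketch.
const GUARDIAN_ALPN: &[u8] = b"guardiandb/0";

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Binding an Endpoint generates an ed25519 keypair and opens the
    // QUIC socket: identity and transport live in the same object.
    let endpoint = Endpoint::builder()
        .alpns(vec![GUARDIAN_ALPN.to_vec()])
        .bind()
        .await?;

    // The node's identity is simply the 32-byte ed25519 public key.
    println!("node id: {}", endpoint.node_id());

    Ok(())
}
```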
QUIC transport
While libp2p supports multiple transports (TCP, WebSockets, QUIC, etc.), Iroh assumes QUIC as its foundation.
This brings important practical consequences:
- mandatory encryption (built-in TLS 1.3)
- native stream multiplexing
- lower latency on mobile networks
- no head-of-line blocking across streams
As a result, GuardianDB now operates on top of a modern, consistent, and predictable transport, without having to negotiate dozens of intermediate layers.
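Here is what native multiplexing looks like in practice: each request gets its own bidirectional stream on an existing connection, so a stalled transfer never blocks an unrelated one. The sketch below uses the quinn-style stream API that iroh connections expose; method signatures differ slightly between versions.

```rust
use iroh::endpoint::Connection;

// Send a small request on its own QUIC stream and await the reply.
// Other streams on the same connection are unaffected if this one stalls.
async fn request(conn: &Connection, payload: &[u8]) -> anyhow::Result<Vec<u8>> {
    let (mut send, mut recv) = conn.open_bi().await?;
    send.write_all(payload).await?;
    send.finish()?; // we are done writing; the peer sees EOF on its read side
    let reply = recv.read_to_end(64 * 1024).await?; // bounded read
    Ok(reply)
}
```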
NAT traversal and Magicsock
One of the most underestimated aspects of Iroh is its connectivity system, based on an adaptation of Magicsock, originally developed by Tailscale.
At a high level, Magicsock works like this:
- Attempts direct peer-to-peer connections via UDP/QUIC
- Automatically uses NAT hole-punching techniques
- Monitors active paths and dynamically switches if the network changes
- Uses relays only as a fallback, not as the default
In the Iroh ecosystem:
- the relay is not the center of the network
- there is no constant dependency on infrastructure
- direct connections are aggressively prioritized
This is fundamental for local-first, mobile, and distributed applications.
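Concretely, dialing a peer is a single call with its public key (assuming some discovery mechanism or address hint is configured); magicsock decides whether packets flow directly or through a relay, and re-evaluates that choice as the network changes. A hedged sketch, reusing the placeholder ALPN from earlier:

```rust
use iroh::{Endpoint, NodeId};

const GUARDIAN_ALPN: &[u8] = b"guardiandb/0";

// Dial a peer by its public key. Hole punching, path selection and relay
// fallback all happen inside the endpoint; the caller only sees a QUIC
// connection once one of the candidate paths succeeds.
async fn dial(endpoint: &Endpoint, peer: NodeId) -> anyhow::Result<()> {
    let _conn = endpoint.connect(peer, GUARDIAN_ALPN).await?;
    // From here, open streams as usual; the path underneath the connection
    // may migrate (direct <-> relay) without breaking it.
    Ok(())
}
```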
Gossip: Epidemic Broadcast Trees
Many developers are familiar with libp2p’s GossipSub. What few realize is that iroh-gossip solves the same problem, but with a more lightweight model.
iroh-gossip is based on Epidemic Broadcast Trees, inspired by works such as PlumTree and HyParView:
- messages spread in an epidemic fashion
- broadcast trees reduce redundancy
- node failures do not break propagation
- less metadata and less complex scoring
In practice:
libp2p GossipSub ≈ iroh-gossip
but with less complexity and less manual tuning
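For intuition, here is a toy version of the Plumtree strategy, not iroh-gossip's actual code: each node keeps a set of "eager" peers that receive full messages and a set of "lazy" peers that only receive message IDs, and redundant tree edges are demoted when duplicates arrive.

```rust
use std::collections::HashSet;

type PeerId = [u8; 32];
type MsgId = [u8; 32];

// Toy Plumtree-style broadcaster: full payloads travel down the tree
// (eager peers), only announcements travel across the remaining mesh
// (lazy peers). Missing messages can be pulled back via the lazy links.
struct Broadcaster {
    eager: HashSet<PeerId>,
    lazy: HashSet<PeerId>,
    seen: HashSet<MsgId>,
}

impl Broadcaster {
    fn broadcast(&mut self, id: MsgId, payload: &[u8], from: Option<PeerId>) {
        if !self.seen.insert(id) {
            // Duplicate: the sender is a redundant tree edge, demote it.
            if let Some(p) = from {
                self.eager.remove(&p);
                self.lazy.insert(p);
            }
            return;
        }
        for p in &self.eager {
            if Some(*p) != from {
                send_full(p, id, payload); // full message
            }
        }
        for p in &self.lazy {
            if Some(*p) != from {
                send_announce(p, id); // id only ("I have")
            }
        }
    }
}

// Network stubs for the sketch.
fn send_full(_to: &PeerId, _id: MsgId, _payload: &[u8]) {}
fn send_announce(_to: &PeerId, _id: MsgId) {}
```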
Iroh Docs
One of the most important, and least publicized, parts of Iroh lives in iroh-docs. Two fundamental concepts stand out here:
Range-Based Set Reconciliation
Instead of exchanging entire lists of operations or states, iroh-docs uses range-based reconciliation.
This means that two peers compare version ranges, quickly identify gaps, and transfer only what is missing.
The cost of a sync round therefore scales roughly with the size of the difference between the two replicas, not with the total size of the history. For distributed databases, this is critical.
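A toy sketch of the idea, not iroh-docs' actual wire protocol (both replicas are local here purely for illustration; in a real sync the fingerprints travel over the wire): each side fingerprints ranges of its keyspace, equal fingerprints mean the whole range can be skipped, and mismatched ranges are split and compared recursively until only the differing entries remain.

```rust
use std::collections::BTreeMap;

// Fingerprint a key range by hashing every (key, value-hash) pair in it.
// Equal fingerprints => identical ranges, nothing to transfer.
fn fingerprint(store: &BTreeMap<u64, [u8; 32]>, lo: u64, hi: u64) -> blake3::Hash {
    let mut hasher = blake3::Hasher::new();
    for (k, v) in store.range(lo..hi) {
        hasher.update(&k.to_be_bytes());
        hasher.update(v);
    }
    hasher.finalize()
}

// Recursively narrow down to the sub-ranges where two replicas differ.
fn diff_ranges(
    a: &BTreeMap<u64, [u8; 32]>,
    b: &BTreeMap<u64, [u8; 32]>,
    lo: u64,
    hi: u64,
    out: &mut Vec<(u64, u64)>,
) {
    if fingerprint(a, lo, hi) == fingerprint(b, lo, hi) {
        return; // identical range: zero bytes of history exchanged
    }
    if hi - lo <= 16 {
        out.push((lo, hi)); // small enough: exchange the entries directly
        return;
    }
    let mid = lo + (hi - lo) / 2;
    diff_ranges(a, b, lo, mid, out);
    diff_ranges(a, b, mid, hi, out);
}
```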
Last-Write-Wins (LWW)
The default conflict resolution model in iroh-docs is Last-Write-Wins, based on timestamps and writer identity. This guarantees deterministic convergence, no need for interactive conflict resolution, and a predictable final state.
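The merge rule itself is tiny, which is part of the appeal. A sketch of the comparison, assuming each entry carries a timestamp plus the writer's key as a deterministic tiebreaker (field names here are illustrative, not iroh-docs types):

```rust
#[derive(Clone, PartialEq, Eq)]
struct Entry {
    timestamp_us: u64, // writer-assigned wall-clock time, in microseconds
    writer: [u8; 32],  // writer's public key, used as a deterministic tiebreaker
    value: Vec<u8>,
}

// Last-Write-Wins: the entry with the higher (timestamp, writer) pair wins.
// Every replica applies the same rule, so all converge on the same value.
fn merge(a: Entry, b: Entry) -> Entry {
    if (a.timestamp_us, a.writer) >= (b.timestamp_us, b.writer) {
        a
    } else {
        b
    }
}
```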
Iroh Blobs and deterministic hashes
iroh-blobs fills the role that many developers associate with IPFS, but without the conceptual weight of CIDs.
In Iroh:
- data is identified by BLAKE3 hashes
- verification is straightforward
- there is no dependency on a global DHT
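Content addressing then reduces to hashing the bytes and comparing hashes. A minimal example using the blake3 crate directly; iroh-blobs builds storage and verified transfer on top of the same hash:

```rust
fn main() {
    let data = b"hello guardiandb";

    // The identifier of a blob is just the BLAKE3 hash of its bytes.
    let hash = blake3::hash(data);
    println!("blob id: {}", hash.to_hex());

    // Verifying received content means recomputing the hash and comparing.
    assert_eq!(blake3::hash(data), hash);
}
```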
Direct comparison: libp2p / IPFS vs Iroh
| Concept | libp2p / IPFS | Iroh |
|---|---|---|
| Identity | PeerId (multihash) | EndpointID (ed25519) |
| Node | IPFS node | Iroh Endpoint |
| Transport | Multiple (TCP, WS, QUIC…) | QUIC required |
| Encryption | Optional per transport | Always enabled |
| PubSub | GossipSub | iroh-gossip (Epidemic Broadcast Tree) |
| Synchronization | Manual / application-dependent | Range-based set reconciliation |
| Conflict resolution | Varies | Last-Write-Wins |
| Content addressing | CID | BLAKE3 hash |
| NAT traversal | Configurable | Automatic (magicsock) |
| Relay usage | Frequent | Fallback only |
What changed in GuardianDB 0.14.0 “Ironclad” with Iroh:
- IPFS and libp2p completely removed
- CIDs and PeerIds eliminated
- Operation based on Endpoints and BLAKE3 hashes
- Radically simplified replication
- Identity, networking, and data aligned under the same cryptographic model
Ironclad is not just a new version. It is an architectural realignment with the way modern distributed networks actually work.
This ground-up refactor is not the end of the road.
It is the first version built on a foundation that finally makes sense.