When we started developing GuardianDB, the logical choice for the networking layer seemed obvious: IPFS. It’s the industry standard for decentralization, has a huge ecosystem, and solves the problem of content addressing.
However, as we dove deeper into engineering a high-performance peer-to-peer database in Rust, we realized that trying to fit a dynamic database into the architecture of IPFS introduces an overhead we are no longer willing to pay.
Today, we’re announcing a significant architectural shift: GuardianDB is migrating its networking and synchronization infrastructure to Iroh.
Here’s why and what gets better because of it.
The Problem with ipfs-log
Until now, we’ve relied on an ipfs-log–style approach, inspired by OrbitDB, to manage data mutability. The problem is that ipfs-log works by creating linked lists of hashes (Merkle-DAGs) on top of an immutable file store. To know the current state of the database, a node had to download the history, traverse the DAG, and reconstruct the state. That is excellent for auditing but terrible for latency and real-time performance.

On top of that, maintaining a traditional P2P stack based on libp2p/IPFS meant managing complex infrastructure: bootstrap servers, STUN servers for NAT traversal, and TURN servers for relaying. That is a lot of operational overhead just to make “Node A” talk to “Node B”.
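To make the cost concrete, here is a minimal sketch of state reconstruction from a hash-linked log. The types and names are hypothetical (this is not the actual ipfs-log API); the point is the shape of the work: every cold read walks the parent chain all the way back to genesis before it can answer anything.

```rust
use std::collections::HashMap;

/// A hypothetical log entry: a key/value write plus the hash of the
/// entry it links back to. (Illustrative, not the real ipfs-log format.)
struct Entry {
    payload: (String, i64),
    parent: Option<String>, // hash of the previous entry; None at genesis
}

/// Reconstruct the current state by walking the chain from head back to
/// genesis, then replaying writes oldest-first. Every cold read pays for
/// a full O(n) traversal of the history.
fn reconstruct_state(store: &HashMap<String, Entry>, head: &str) -> HashMap<String, i64> {
    let mut chain = Vec::new();
    let mut cursor = Some(head.to_string());
    while let Some(hash) = cursor {
        cursor = store[&hash].parent.clone();
        chain.push(hash);
    }
    let mut state = HashMap::new();
    for hash in chain.iter().rev() {
        let (key, value) = store[hash].payload.clone();
        state.insert(key, value); // later writes win
    }
    state
}

fn main() {
    let mut store: HashMap<String, Entry> = HashMap::new();
    store.insert("h1".into(), Entry { payload: ("x".into(), 1), parent: None });
    store.insert("h2".into(), Entry { payload: ("y".into(), 2), parent: Some("h1".into()) });
    store.insert("h3".into(), Entry { payload: ("x".into(), 3), parent: Some("h2".into()) });

    let state = reconstruct_state(&store, "h3");
    println!("x = {}, y = {}", state["x"], state["y"]); // x = 3, y = 2
}
```

Even with only three entries the pattern is visible: the traversal grows linearly with history, regardless of how small the live state actually is.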
Enter Iroh: Direct Connections and the Willow Protocol
Iroh is not just a reimplementation of IPFS; it has pivoted into something we desperately needed: a distributed-systems library that “simply works”.
Migrating to Iroh brings three immediate advantages:
1. Goodbye ipfs-log. Hello Willow.
Instead of trying to hack mutability on top of static files, Iroh uses the Willow Protocol. Willow doesn’t focus on discovering files; it focuses on synchronizing state. This allows GuardianDB to perform efficient range queries. A node can ask: “What data do you have in this time range that I don’t?” and transfer only the delta. This eliminates the need to download entire historical logs just to get the current state.
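That question-and-delta exchange can be sketched as follows. This is a conceptual model, not Willow’s actual wire protocol or iroh’s API: Willow keys entries by subspace, path, and timestamp, which we collapse here to a single timestamp for brevity.

```rust
use std::collections::BTreeMap;
use std::ops::Range;

/// A node's store, keyed by timestamp (a simplification of Willow's
/// three-dimensional subspace/path/timestamp keys).
type Store = BTreeMap<u64, String>;

/// "What do you have in this time range that I don't?" — the responder
/// scans only the requested range and returns only the entries whose
/// timestamps the asker is missing.
fn delta(responder: &Store, asker_timestamps: &[u64], range: Range<u64>) -> Vec<(u64, String)> {
    responder
        .range(range)
        .filter(|(ts, _)| !asker_timestamps.contains(ts))
        .map(|(ts, v)| (*ts, v.clone()))
        .collect()
}

fn main() {
    let mut alice: Store = BTreeMap::new();
    alice.insert(10, "a".into());
    alice.insert(20, "b".into());
    alice.insert(30, "c".into());

    let mut bob: Store = BTreeMap::new();
    bob.insert(10, "a".into());

    // Bob asks Alice for the 0..100 range; only the missing delta crosses
    // the wire, never the full historical log.
    let asker: Vec<u64> = bob.keys().copied().collect();
    let missing = delta(&alice, &asker, 0..100);
    for (ts, v) in &missing {
        bob.insert(*ts, v.clone());
    }
    println!("bob synced {} entries", missing.len()); // bob synced 2 entries
}
```

The design point: sync cost is proportional to the difference between the two stores and the width of the queried range, not to the length of the history.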
2. Infrastructure Simplification
Iroh consolidates discovery and connectivity. It uses an aggressive QUIC implementation and a unified Relay concept. We no longer need to configure STUN, TURN, and separate signaling. Iroh Relay handles hole punching and peer discovery transparently. If two nodes can connect directly, Iroh ensures they will.
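Conceptually, the connection flow Iroh takes off our hands looks like this. The types below are illustrative only (this is not iroh’s real API): candidates learned via the relay are tried for a direct path, and if hole punching fails, traffic simply stays on the relayed path.

```rust
/// Conceptual model of connection establishment; illustrative types,
/// not iroh's actual API.
#[derive(Debug, PartialEq)]
enum Path {
    Direct(String), // hole punch succeeded: QUIC straight between peers
    Relayed,        // fall back to forwarding traffic through the relay
}

/// Try each candidate address learned through the relay; the first one
/// that is actually reachable wins. Otherwise keep the relayed path.
/// Either way, the application gets a working connection — with no
/// STUN/TURN/signaling configuration on our side.
fn establish(candidates: &[(&str, bool)]) -> Path {
    for (addr, reachable) in candidates {
        if *reachable {
            return Path::Direct(addr.to_string());
        }
    }
    Path::Relayed
}

fn main() {
    // One candidate answers: upgrade to a direct connection.
    let direct = establish(&[("192.0.2.1:4433", false), ("198.51.100.7:4433", true)]);
    println!("{:?}", direct); // Direct("198.51.100.7:4433")

    // Symmetric NATs on both sides: nothing answers, stay on the relay.
    let relayed = establish(&[("192.0.2.1:4433", false)]);
    println!("{:?}", relayed); // Relayed
}
```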
3. Rust-Native Performance
By shedding the overhead of IPFS’s complex Merkle-DAG layer and adopting BLAKE3 hashing, we can saturate network bandwidth far more easily.
The Future of GuardianDB
GuardianDB remains a decentralized database, but now it will be lighter, faster, and much easier to integrate. By adopting Iroh, we’re betting on a stack that prioritizes direct device-to-device connections and intelligent synchronization, rather than a global public DHT. This aligns perfectly with our vision of privacy and performance.
We’re excited to share upcoming benchmarks for this new architecture.
The code is already being refactored, and we invite the Rust community to follow along with this journey.