How Decentralized AI and Zero-Knowledge Proofs Will Democratize Compute

In late July, Mark Zuckerberg penned a letter explaining why “open source is necessary for a positive AI future,” in which he waxes poetic about the need for open-source AI development. The once-nerdy teen founder, now the wakeboarding, gold-chain-wearing, jiu-jitsu-fighting “Zuck,” has been branded the messiah of open-source model development.

But thus far, he and the Meta team haven’t articulated much about how these models are being deployed. As model complexity drives compute requirements higher, if model deployment is controlled by a handful of actors, have we not succumbed to a similar form of centralization? Decentralized AI promises to solve this challenge, but the technology requires advances in cutting-edge cryptographic techniques and unique hybrid solutions.

This op-ed is part of CoinDesk’s new DePIN Vertical, covering the emerging industry of decentralized physical infrastructure.

Unlike centralized cloud providers, decentralized AI (DAI) distributes the computational processes for AI inference and training across multiple systems, networks, and locations. If implemented correctly, these networks, a type of decentralized physical infrastructure network (DePIN), bring benefits in censorship resistance, compute access, and cost.

DAI faces challenges in two main areas: the AI environment and the decentralized infrastructure itself. Compared with centralized systems, DAI requires additional safeguards to prevent unauthorized access to model details or the theft and replication of proprietary information. For this reason, there is an under-explored opportunity for teams that focus on open-source models but recognize their potential performance disadvantage relative to closed-source counterparts.

Decentralized systems specifically face obstacles in network integrity and resource overhead. The distribution of client data across separate nodes, for instance, exposes more attack vectors. Attackers could spin up a node and analyze its computations, try to intercept data transmissions between nodes, or even introduce biases that degrade the system’s performance. Even in a secure decentralized inference model, there must be mechanisms to audit compute processes. Nodes are incentivized to cut resource costs by presenting incomplete computations, and verification is complicated by the lack of a trusted, centralized actor.

Zero-knowledge proofs (ZKPs), while currently too computationally expensive, are one potential solution to some of DAI’s challenges. A ZKP is a cryptographic mechanism that enables one party (the prover) to convince another party (the verifier) that a statement is true without divulging anything beyond its validity. Verifying such a proof is quick for other nodes to run, and it offers a way for each node to prove it acted in accordance with the protocol. The technical differences between proof systems and their implementations (a deep dive on this is coming later) are important for investors in the space.
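To make the prover and verifier roles concrete, here is a minimal Schnorr-style proof of knowledge of a discrete logarithm in Python. This is a classic sigma protocol made non-interactive via the Fiat–Shamir heuristic, not one of the SNARK systems discussed below, and the tiny parameters are purely illustrative:

```python
import hashlib
import secrets

# Toy parameters: p = 2q + 1 with q prime; g = 4 generates the
# order-q subgroup of squares mod p. Real deployments use groups
# hundreds of bits wide.
p, q, g = 2039, 1019, 4

def hash_challenge(*vals):
    # Fiat–Shamir: derive the challenge from a hash of the transcript.
    h = hashlib.sha256(repr(vals).encode()).digest()
    return int.from_bytes(h, "big") % q

def prove(x):
    """Prove knowledge of x (with y = g^x mod p) without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)       # fresh secret nonce
    t = pow(g, r, p)               # commitment
    c = hash_challenge(g, y, t)    # challenge
    s = (r + c * x) % q            # response
    return y, (t, s)

def verify(y, proof):
    """Accept iff the prover knows x for y, learning nothing else."""
    t, s = proof
    c = hash_challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = secrets.randbelow(q)
y, proof = prove(secret)
print(verify(y, proof))  # True
```

The verifier confirms the prover knows the secret exponent while learning nothing about it, which is the same trust-minimizing property a node would use to prove it ran a computation faithfully.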

Centralized compute makes model training exclusive to a handful of well-positioned and well-resourced players. ZKPs could be one part of unlocking idle compute on consumer hardware; a MacBook, for example, could use its spare compute to help train a large language model while earning tokens for its owner.

Deploying decentralized training or inference on consumer hardware is the focus of teams like Gensyn and Inference Labs. Unlike general-purpose decentralized compute networks such as Akash or Render, sharding the computations across many machines adds complexity, most notably the floating-point problem: the same arithmetic can yield slightly different results on different hardware or in a different order of operations, which breaks naive verification by re-execution (see the sketch below). Making use of idle distributed compute resources opens the door for smaller developers to test and train their own networks, provided they have access to tools that solve the associated challenges.
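A small Python illustration of why this matters: floating-point addition is not associative, so two honest nodes that sum the same shard of values in different orders can produce different bits, making bit-exact re-execution checks unreliable. The numbers here are synthetic, purely to show the effect:

```python
import random

# Two honest nodes sum the same values, sharded in different orders.
random.seed(0)
values = [random.uniform(-1e10, 1e10) for _ in range(100_000)]

order_a = sum(values)            # node A's shard order
shuffled = values[:]
random.shuffle(shuffled)
order_b = sum(shuffled)          # node B's shard order

print(order_a == order_b)        # usually False
print(abs(order_a - order_b))    # small, but nonzero
```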

At present, ZKP systems are roughly four to six orders of magnitude more expensive than running the compute natively, so for tasks that require heavy compute (like model training) or low latency (like model inference), using a ZKP is prohibitively slow. At a 10^6 overhead, an inference that takes 10 milliseconds natively would take nearly three hours to prove. For comparison, an overhead of six orders of magnitude means that a cutting-edge system (like a16z’s Jolt) running on an M3 Max chip proves a program 150 times slower than running it on a TI-84 graphing calculator.

AI’s reliance on processing large amounts of data makes it a natural pairing with zero-knowledge proofs, but more progress in cryptography is needed before ZKPs can be widely used. Work being done by teams such as Irreducible (which designed the Binius proof system and commitment scheme), Gensyn, TensorOpera, Hellas, and Inference Labs, among others, will be an important step toward this vision. Timelines, however, remain overly optimistic; true innovation takes time and mathematical advancement.

In the meantime, it’s worth noting other possibilities and hybrid solutions. HellasAI and others are developing new ways of representing models and computations that enable an optimistic challenge game, so that only a disputed subset of the computation needs to be handled in zero-knowledge. Optimistic proofs only work when there is staking, the ability to prove wrongdoing, and a credible threat that the computation is being checked by other nodes in the system. Another method, developed by Inference Labs, validates a subset of queries: a node posts a bond committing to generate a ZKP on demand, but only produces the proof if challenged by the client.
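As a rough illustration of the economics, here is a hypothetical sketch of such an optimistic challenge game in Python. The names and numbers (OptimisticVerifier, STAKE, CHALLENGE_WINDOW) are inventions for this sketch, not any team’s actual protocol; the point is that the expensive ZKP is produced only for disputed results, while stake and a challenge window make honest behavior the cheapest strategy:

```python
from dataclasses import dataclass

# Hypothetical sketch: stake amounts and window length are
# illustrative, not drawn from any real deployment.
STAKE = 100              # bond a node posts per submitted result
CHALLENGE_WINDOW = 10    # blocks during which a result may be disputed

@dataclass
class Result:
    node: str
    output: bytes
    block: int
    challenged: bool = False

class OptimisticVerifier:
    def __init__(self):
        self.stakes = {}     # node -> locked stake
        self.pending = []    # results awaiting finalization

    def submit(self, node, output, block):
        # Results are accepted optimistically, backed by stake;
        # no proof is generated up front.
        self.stakes[node] = self.stakes.get(node, 0) + STAKE
        self.pending.append(Result(node, output, block))

    def challenge(self, result, zk_proof_valid):
        # A challenger forces the node to produce a ZKP for this one
        # result, so proving cost falls only on disputed work.
        result.challenged = True
        if not zk_proof_valid:
            self.stakes[result.node] -= STAKE  # slash the dishonest node
            return "slashed"
        return "upheld"

    def finalize(self, current_block):
        # Unchallenged results become final once the window closes.
        done = [r for r in self.pending if not r.challenged
                and current_block - r.block > CHALLENGE_WINDOW]
        self.pending = [r for r in self.pending if r not in done]
        return done
```

Because unchallenged results finalize cheaply, the network pays proving costs only in proportion to how often results are actually disputed.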

In Sum

Decentralized AI training and inference will serve as a safeguard against the consolidation of power by a few major actors while unlocking previously inaccessible compute. ZKPs will be an integral part of enabling this vision. Your computer could earn you real money by quietly putting its spare processing power to work in the background. Succinct proofs that a computation was carried out correctly will remove the need for the institutional trust the largest cloud providers rely on, enabling compute networks of smaller providers to attract enterprise clientele.

While zero-knowledge proofs will enable this future and be an essential part of more than just compute networks (like Ethereum’s vision for single-slot finality), their computational overhead remains an obstacle. Hybrid approaches that combine the game-theoretic mechanics of optimistic games with selective use of zero-knowledge proofs are the more practical option today, and will likely become ubiquitous as a bridge until ZKPs get much faster.

For native and non-native crypto investors, understanding the value and challenges of decentralized AI systems will be crucial to deploying capital effectively. Teams should have answers to questions about node computation proofs and network redundancy. Furthermore, as we’ve observed in many DePIN projects, decentralization occurs over time, and a team’s clear plan toward that vision is essential. Solving the challenges associated with DePIN compute is key to handing control back to individuals and small developers, a vital part of keeping our systems open, free, and censorship-resistant.

Note: The views expressed in this column are those of the author and do not necessarily reflect those of CoinDesk, Inc. or its owners and affiliates.

Edited by Benjamin Schiller.
