Can We Ever Trust AI Agents?

The renowned Harvard psychologist B.F. Skinner once opined that the “real problem is not whether machines think but whether men do.” This witty observation underscores a once-crucial point: that our trust in technology hinges on human judgment. It’s not machine intelligence we should worry about, but the wisdom and responsibility of those who control it. Or at least that was the case.

With software like ChatGPT now an integral part of many people’s working lives, Skinner’s insight seems almost quaint. The meteoric rise of AI agents – software entities capable of perceiving their environment and taking actions to achieve specific goals – has fundamentally shifted the paradigm. These digital assistants, born from the consumer AI boom of the early 2020s, now permeate our digital lives, handling tasks from scheduling appointments to making investment decisions.

What are AI agents?

AI agents differ significantly from large language models (LLMs) like ChatGPT in their capacity for autonomous action. While LLMs primarily process and generate text, AI agents are designed to perceive their environment, make decisions, and take actions to achieve specific goals. These agents combine various AI technologies, including natural language processing, computer vision, and reinforcement learning, allowing them to adapt and learn from their experiences.
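
To make the distinction concrete, here is a minimal sketch of the perceive-decide-act loop that defines an agent. Everything in it – the calendar environment, the trivial earliest-slot policy – is a hypothetical stand-in for illustration, not any particular product’s API:

```python
from dataclasses import dataclass, field

@dataclass
class CalendarEnvironment:
    """Hypothetical environment: a day's schedule the agent can observe and modify."""
    booked_hours: set[int] = field(default_factory=set)

    def perceive(self) -> set[int]:
        # Observation: which hours (9:00-17:00) are still free.
        return {h for h in range(9, 18) if h not in self.booked_hours}

    def act(self, hour: int) -> None:
        # Action: book a meeting at the chosen hour.
        self.booked_hours.add(hour)

def scheduling_agent(env: CalendarEnvironment, meetings_needed: int) -> list[int]:
    """Perceive-decide-act loop: unlike a single LLM call, the agent
    repeatedly observes state and takes actions toward a goal."""
    booked: list[int] = []
    while len(booked) < meetings_needed:
        free = env.perceive()      # perceive the environment
        if not free:
            break                  # goal unreachable today; stop acting
        choice = min(free)         # decide (trivial policy: earliest free slot)
        env.act(choice)            # act on the environment
        booked.append(choice)
    return booked

print(scheduling_agent(CalendarEnvironment({9, 10}), 3))  # [11, 12, 13]
```

An LLM in this picture would be one possible decision policy inside the loop; the loop itself – observing, choosing, acting, observing again – is what makes the system an agent.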

But as AI agents proliferate and iterate, so too does a gnawing unease. Can we ever truly trust these digital entities? The question is far from academic. AI agents operate in complex environments, making decisions based on vast datasets and intricate algorithms that even their creators struggle to fully comprehend. This inherent opacity breeds mistrust. When an AI agent recommends a medical treatment or predicts market trends, how can we be certain of the reasoning behind its choices?

The consequences of misplaced trust in AI agents could be dire. Imagine an AI-powered financial advisor that inadvertently crashes markets due to a misinterpreted data point, or a healthcare AI that recommends incorrect treatments based on biased training data. The potential for harm is not limited to individual sectors; as AI agents become more integrated into our daily lives, their influence grows exponentially. A misstep could ripple through society, affecting everything from personal privacy to global economics.

At the heart of this trust deficit lies a fundamental issue: centralization. The development and deployment of AI models have largely been the purview of a handful of tech giants. These centralized AI models operate as black boxes, their decision-making processes obscured from public scrutiny. This lack of transparency makes it virtually impossible to trust their decisions in high-stakes operations. How can we rely on an AI agent to make critical choices when we cannot understand or verify its reasoning?

Decentralization as the answer

However, a solution to these concerns does exist: decentralized AI, a paradigm that offers a path towards more transparent and trustworthy AI agents. This approach leverages the strengths of blockchain technology and other decentralized systems to create AI models that are not only powerful but also accountable.

The tools for building trust in AI agents already exist. Blockchains can enable verifiable computation, ensuring that AI actions are auditable and traceable. Every decision an AI agent makes could be recorded on a public ledger, allowing for unprecedented transparency. Concurrently, advanced cryptographic techniques like trusted execution environment machine learning (TeeML) can protect sensitive data and maintain model integrity, achieving both transparency and privacy.
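
What recording every decision on a public ledger might look like in miniature: a hash-chained audit log in which each entry commits to the previous one, so rewriting history breaks the chain and anyone can recompute the hashes to audit it. This is an illustrative sketch under that assumption, not the API of any particular blockchain or of TeeML:

```python
import hashlib
import json
import time

def append_decision(log: list[dict], agent_id: str, decision: dict) -> dict:
    """Append a decision to a hash-chained audit log. Each entry commits to
    the previous entry's hash, so tampering with history is detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {
        "agent": agent_id,
        "decision": decision,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body

def verify_chain(log: list[dict]) -> bool:
    """Anyone holding the log can recompute every hash to audit it."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_decision(log, "advisor-01", {"action": "rebalance", "asset": "BTC", "weight": 0.05})
print(verify_chain(log))  # True; altering any recorded field makes this False
```

A real ledger adds consensus and replication on top of this basic commitment structure, which is what turns a tamper-evident log into a tamper-resistant one.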

As AI agents increasingly operate adjacent to or directly on public blockchains, the concept of verifiability becomes crucial. Traditional AI models may struggle to prove the integrity of their operations, but blockchain-based AI agents can provide cryptographic guarantees of their behavior. This verifiability is not just a technical nicety; it’s a fundamental requirement for trust in high-stakes environments.

Confidential computing techniques, particularly trusted execution environments (TEEs), offer an important layer of assurance. TEEs provide a secure enclave where AI computations can occur, isolated from potential interference. This technology ensures that even the operators of the AI system cannot tamper with or spy on the agent’s decision-making process, further bolstering trust.
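
In practice, trust in a TEE rests on remote attestation: the enclave produces a signed measurement of the exact code it is running, and a client checks that measurement against the build it audited before sending any private data. The sketch below mimics that handshake with HMAC standing in for the hardware vendor’s signature; real attestation (e.g. SGX-style quotes) involves vendor certificate chains, and every name here is a hypothetical placeholder:

```python
import hashlib
import hmac

# Hypothetical stand-in for a hardware root of trust only the CPU can use.
VENDOR_KEY = b"simulated-hardware-root-of-trust"

def enclave_attest(enclave_code: bytes) -> tuple[str, str]:
    """Inside the enclave: measure the loaded code and sign the measurement.
    On real hardware this signature comes from the CPU, not the operator."""
    measurement = hashlib.sha256(enclave_code).hexdigest()
    signature = hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest()
    return measurement, signature

def client_verify(measurement: str, signature: str, expected: str) -> bool:
    """Client side: confirm the signature is genuine and the enclave runs
    exactly the audited AI code before trusting it with sensitive data."""
    genuine = hmac.compare_digest(
        hmac.new(VENDOR_KEY, measurement.encode(), hashlib.sha256).hexdigest(),
        signature,
    )
    return genuine and measurement == expected

audited_model = b"def decide(observation): ..."   # the code we reviewed
expected = hashlib.sha256(audited_model).hexdigest()

m, s = enclave_attest(audited_model)
print(client_verify(m, s, expected))  # True: enclave runs the audited code

m2, s2 = enclave_attest(b"def decide(observation): leak_data()")
print(client_verify(m2, s2, expected))  # False: tampered code changes the measurement
```

The key property is that the operator of the machine cannot forge the measurement: only code that hashes to the audited value passes the check.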

Frameworks like the Oasis Network’s Runtime Off-chain Logic (ROFL) represent the cutting edge of this approach, enabling seamless integration of verifiable AI computation with on-chain auditability and transparency. Such innovations expand the possibilities for AI-driven applications while maintaining the highest standards of trust and transparency.

Towards a trustworthy AI future

The path to trustworthy AI agents is not without challenges. Technical hurdles remain, and widespread adoption of decentralized AI systems will require a shift in both industry practices and public understanding. However, the potential rewards are immense. Imagine a world where AI agents make critical decisions with full transparency, where their actions can be verified and audited by anyone, and where the power of artificial intelligence is distributed rather than concentrated in the hands of a few corporations.

There is also the chance to unlock significant economic growth. One 2023 study out of Beijing found that a 1% increase in AI penetration leads to a 14.2% increase in total factor productivity (TFP). However, most AI productivity studies focus on general LLMs, not AI agents. Autonomous AI agents capable of performing multiple tasks independently could potentially yield greater productivity gains. Trustworthy and auditable AI agents would likely be even more effective.

Perhaps it’s time to update Skinner’s famous quote. The real problem is no longer whether machines think, but whether we can trust their thoughts. With decentralized AI and blockchain, we have the tools to build that trust. The question now is whether we have the wisdom to use them.

Note: The views expressed in this column are those of the author and do not necessarily reflect those of CoinDesk, Inc. or its owners and affiliates.

Edited by Benjamin Schiller.


Marko Stokic

Marko Stokic is the Head of AI at the Oasis Protocol Foundation, where he works with a team focused on developing cutting-edge AI applications integrated with blockchain technology. With a business background, Marko’s interest in crypto was sparked by Bitcoin in 2017 and deepened through his experiences during the 2018 market crash.
