AI and Crypto (Part I): Decentralizing AI -- Big Dreams, Bigger Hype?

This is the first article in a four-part series delving into the intersection of crypto and AI. We begin with a simple yet provocative question: can crypto's decentralized architecture effectively counter AI's seemingly unstoppable drive toward centralization?

For an industry rooted in the dry, analytical domains of cryptography and game theory, crypto has an uncanny ability to draw exceptional storytelling talent. As Steve Jobs allegedly said, "The most powerful person in the world is the storyteller," and nowhere is this more evident than in crypto. If you've spent any time in the space, you've likely witnessed the ebb and flow of narratives and memes -- each wave elevating potential winners, only to crash ashore with the next market correction. Over the last decade, the industry has evolved through a series of shifting narratives: from Bitcoin as a hedge against inflation, to financial inclusion, decentralized finance, the NFT boom in arts and creative industries, DAOs, the metaverse, and most recently, stablecoins and dollarization as signs that crypto may have found its product-market fit.

The reality is that crypto is a general-purpose technology (GPT), much like the steam engine, electricity, and the internet before it -- each of which took years to weave its transformative impact through the fabric of the economy. While good storytelling in crypto moves at the speed of memes, actual deployment takes time, patience, and a willingness to tackle tedious "last mile" challenges. For instance, when it comes to crypto revolutionizing payments, the technology itself isn't the limiting factor; the real challenges lie in tackling practical but essential issues such as compliance, identity verification, and regulatory alignment. In fact, for many applications, it's the absence of clear regulation that continues to hinder broader adoption. The engineers have delivered -- now it's up to Congress to establish the regulatory framework needed to make crypto mainstream.

The latest narrative in crypto is its inevitable role as the critical infrastructure for AI. And while the love and attention predominantly flow from crypto to AI developers -- who are preoccupied with the race to AGI, securing chips, and scaling nuclear power, regardless of payment method -- the story remains, at least on the surface, compelling. But we shouldn't get carried away. As Kahneman aptly put it: "Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable."

When evaluating the intersection of crypto and AI from first principles, the prevailing narrative starts to show its cracks. Sure, the long-term potential is hard to deny. But for now -- beyond the headline-grabbing entertainment agents like Truth Terminal and Luna AI -- the reality is far more mundane than the hype machine would have you believe.

Spoiler alert for the final installment of this series: the short-term overlap isn't where most people are looking. It's less glamorous than advertised, but once you notice it, it's almost painfully obvious. Over the coming weeks, we'll explore three key areas where crypto and AI intersect: 1) using crypto to curb centralization in AI -- an issue that's top of mind for many, including regulators; 2) leveraging crypto to address generative AI's side-effects, like restoring digital scarcity, enhancing provenance, and enabling more robust identity systems; and 3) establishing crypto as the payments infrastructure for AI agents. Finally, we'll wrap up with a look at one high-impact opportunity where crypto and AI can deliver significant value right now.

AI decentralization spans at least three dimensions, none of which are easy -- or particularly practical -- with today's technology. Crypto could, in theory, decentralize: 1) the compute required to train or run AI models; 2) the data used to train or refine those models; and 3) the underlying business model itself.

Right now, all signs point to centralized compute as the key to AI scaling -- just look at the space-race scramble for capital, hardware, and even dedicated nuclear energy sources. But crypto enthusiasts argue that as decreasing returns to scale start to hit massive data centers, or as public unease grows over any one company wielding that much power, the demand for decentralization will inevitably rise.

The irony, of course, is that this vision didn't even pan out for crypto's leading networks. Satoshi's original ideal of a democratic "one CPU, one vote" consensus quickly gave way to the realities of physics and economics, resulting in massive concentration -- not just in Bitcoin mining, but even more so in hardware. Today, the top three ASIC producers dominate nearly all global supply, and they're all Chinese companies. Similarly, in Ethereum, the leading proof-of-stake network, staking is less decentralized than many would prefer, with a few large staking pools holding a substantial share of the market.

If Bitcoin's consensus didn't end up running on a fully decentralized network of toasters or industrial appliances, why should we expect AI to be any different? Some argue that decentralized AI could tap into the underutilized compute capacity of increasingly powerful consumer and business hardware -- a bit like idle black car drivers waiting for a ride before Uber came along. It's an appealing analogy, and volunteer-computing efforts like SETI@home and Folding@home have demonstrated how this capacity can be harnessed for pro-social purposes, from scanning radio telescope data to protein folding. However, at scale, these devices will inevitably operate as an extension of big tech, shaped and controlled by centralized entities.

Tech companies can rapidly deploy capital, replicate infrastructure across multiple locations, negotiate preferential deals with hardware and energy suppliers, and leverage top-down decision-making to accelerate execution. Case in point: Elon Musk's xAI assembled Colossus, billed at launch as the world's most powerful AI training cluster, in just 122 days.

While powerful smartphones, laptops, cars, and robots will increasingly handle some local AI processing, it's clear that the companies behind these devices will retain preferential access to that compute.

Apple and Google are already leveraging this dynamic with their hybrid approach to on-device and cloud-based training and inference. Meanwhile, Elon Musk has hinted at the possibility of parked Teslas forming a distributed cluster for AI inference. While connectivity might seem like a hurdle, it's not hard to imagine Tesla using Starlink for this. OpenAI and Anthropic are also venturing into hardware development. Ultimately, while edge computing will play a crucial role in AI, it's clear that incumbents will get to shape its trajectory first.
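To make the hybrid routing idea concrete, here is a toy sketch of the decision logic such systems imply -- keep sensitive or small jobs on-device, escalate heavy ones to the cloud. The thresholds and labels are invented for illustration and reflect neither Apple's nor Google's actual policies:

```python
def route(prompt: str, contains_private_data: bool,
          on_device_token_limit: int = 512) -> str:
    """Toy dispatcher for hybrid on-device/cloud inference.
    All thresholds are hypothetical."""
    if contains_private_data:
        return "on-device"  # privacy: sensitive data never leaves the device
    if len(prompt.split()) <= on_device_token_limit:
        return "on-device"  # latency: small jobs run faster locally
    return "cloud"          # capability: big jobs need the larger hosted model

print(route("summarize my health records", contains_private_data=True))  # on-device
print(route("draft a long report " * 200, contains_private_data=False))  # cloud
```

The point of the sketch is that the device maker owns this dispatcher: whoever writes the routing policy decides when third-party or decentralized compute ever gets called, which is why incumbents shape edge AI first.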

Where this leaves a network of edge devices incentivized through crypto is unclear, even before we get to the thorny issues of training on untrusted and heterogeneous hardware, effectively distributing the load to ensure parallelism and fault tolerance, and protecting the intellectual property tied to countless compute cycles -- especially if the model weights are ultimately visible to all.
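To see why untrusted hardware is so costly, consider the standard mitigation from volunteer computing (the approach platforms like BOINC take): replicate each task across several independent workers and accept a result only when a quorum agrees. The sketch below is illustrative -- the worker model and quorum parameter are invented, not drawn from any existing project:

```python
from collections import Counter

DISHONEST_WORKERS = {3}  # hypothetical adversaries for the demo

def run_task(worker_id: int, task: bytes) -> bytes:
    """Stand-in for one unit of training or inference work.
    A dishonest worker returns garbage instead of computing."""
    if worker_id in DISHONEST_WORKERS:
        return b"garbage"
    return bytes(reversed(task))  # placeholder for the real computation

def verified_result(task: bytes, workers: list[int], quorum: int = 2) -> bytes:
    """Replicate the task across untrusted workers and accept the
    majority answer only if at least `quorum` workers agree."""
    results = [run_task(w, task) for w in workers]
    value, count = Counter(results).most_common(1)[0]
    if count < quorum:
        raise RuntimeError("no quorum: reassign the task")
    return value

print(verified_result(b"gradient-shard-42", workers=[1, 2, 3]))
```

Even this toy version pays for one honest answer with three workers' worth of compute; real schemes face the same redundancy multiplier, which is a big part of why clusters of trusted, homogeneous hardware stay cheaper.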

In the end, the only projects likely to adopt decentralized, crypto-powered AI compute might be those with no other choice -- either because they are entirely grassroots, open-source initiatives without corporate backing, or because they operate in highly adversarial and contested environments where decentralization is a necessity for training or inference.

While crypto can enhance transparency and verifiability in inference, the willingness to pay for these benefits compared to just placing trust in centralized players' reputations remains uncertain. As with much of crypto, the question of who will truly value censorship resistance is hard to answer until we witness the failure modes of centralized solutions -- imagine a Cambridge Analytica moment, but for AI -- and whether such failures would meaningfully shift preferences.
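As a sketch of what verifiability in inference could mean in practice, consider a simple commit-and-audit scheme: the provider publishes a hash commitment to its exact model weights, attests each (prompt, output) pair against that commitment, and an auditor with access to the weights can check the attestation and re-run the inference. Everything below is illustrative -- the symmetric signing key in particular stands in for the asymmetric signatures a real protocol would use:

```python
import hashlib
import hmac

SECRET_KEY = b"provider-signing-key"  # hypothetical; real systems use asymmetric keys

def commit(weights: bytes) -> str:
    """Public commitment to the exact model version."""
    return hashlib.sha256(weights).hexdigest()

def attest(commitment: str, prompt: str, output: str) -> str:
    """Provider signs the claim: 'this output came from this model on this prompt'."""
    message = f"{commitment}|{prompt}|{output}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def audit(weights: bytes, prompt: str, output: str,
          commitment: str, attestation: str) -> bool:
    """Auditor verifies the commitment and the attestation; a full audit
    would then re-run inference on the committed weights."""
    if commit(weights) != commitment:
        return False
    return hmac.compare_digest(attest(commitment, prompt, output), attestation)

weights = b"model-v1-weights"
c = commit(weights)
a = attest(c, "2+2?", "4")
print(audit(weights, "2+2?", "4", c, a))  # True
```

The machinery is cheap to build; the open question raised above is whether anyone will pay for the audit rather than simply trust a brand.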

Another argument for why AI needs crypto centers on its potential to deliver greater privacy and control over user data. The problem is that this mirrors the same reasoning crypto proponents (and others before them) have long used to predict the rise of decentralization across digital platforms like social media, messaging, and marketplaces -- predictions that have yet to materialize.

The reality is that, even in the face of major scandals, most consumers simply don't care, and the trade-offs in convenience and usability have been far too steep to justify the privacy benefits. Moreover, privacy-focused tech companies like Apple are already moving the most sensitive aspects of training and inference directly onto user devices. This minimizes the privacy advantage of a decentralized solution. For enterprises, OpenAI and Anthropic can employ similar controls to those that have already made corporations comfortable with cloud solutions -- all while delivering these capabilities at a fraction of the cost of decentralization.

Monetizing data for AI training or post-inference feedback is likely to be economically insignificant. Most user data holds little value -- except in rare, high-stakes moments like major life events -- and consumers are already willing to provide it in exchange for free services. A stronger emphasis on privacy didn't help early Web3 experiments attract mainstream users away from Web2 incumbents, and it's unclear why the outcome would be any different with AI. It's an incumbents' game, and a decentralized AI alternative starts at an even greater disadvantage than a decentralized social network or creator-economy platform did.

Open-source AI models are gaining prominence and approaching state-of-the-art performance, thanks to contributions from organizations like Meta, xAI, MistralAI, and DeepSeek AI. Notably, some of these teams have achieved results comparable to the leading players' with significantly smaller budgets. In response to U.S. export controls on advanced GPUs, Chinese companies have pushed the boundaries of what's possible through a clever mix of optimization techniques and architectural changes. If these trends continue, novel business models centered around open-source AI could become increasingly attractive, including those that reshape the production of AI models themselves. While, as discussed above, this approach may not be particularly effective for large generalist models, an AI ecosystem with a dedicated crypto token tailored to a specialized domain might prove viable.

This is hardly new territory for crypto, which has been experimenting with market design and token economics since Ethereum's launch. However, open-source ecosystems have long faced challenges with developing radically new monetization models. Most contributors participate out of passion for the pro-social effects of open source, as a means to showcase their skills in the labor market, or because it aligns with their tech employers' strategic goals by producing code that complements their business models. While crypto projects have experimented with adding crypto tokens to repository contributions or rewarding maintainers of key open-source libraries, this has been mostly a fringe phenomenon. A specialized AI ecosystem that uses a native crypto token to reward scarce talent, compute, data, feedback, and other contributions is a theoretically fascinating idea, but in practice, it seems exceedingly difficult to execute successfully.
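For concreteness, here is one minimal shape such a scheme could take: a fixed token pool split pro rata across weighted contributions of compute, data, and feedback. The categories and weights are invented for illustration -- choosing weights that can't be gamed is precisely the hard mechanism-design problem described above:

```python
from dataclasses import dataclass

# Illustrative weights per contribution type; in practice these are
# the contested, gameable heart of the mechanism design.
WEIGHTS = {"compute_hours": 1.0, "data_samples": 0.01, "feedback_labels": 0.1}

@dataclass
class Contribution:
    contributor: str
    kind: str      # one of WEIGHTS' keys
    amount: float

def distribute(pool_tokens: float, contributions: list[Contribution]) -> dict[str, float]:
    """Split a fixed token pool pro rata by weighted contribution scores."""
    scores: dict[str, float] = {}
    for c in contributions:
        scores[c.contributor] = scores.get(c.contributor, 0.0) + WEIGHTS[c.kind] * c.amount
    total = sum(scores.values())
    return {who: pool_tokens * s / total for who, s in scores.items()}

payout = distribute(1_000.0, [
    Contribution("alice", "compute_hours", 50),
    Contribution("bob", "data_samples", 4_000),
    Contribution("carol", "feedback_labels", 300),
])
print(payout)  # alice ~416.7, bob ~333.3, carol ~250.0
```

Nothing here is technically hard; what's hard is measuring the inputs honestly -- compute can be faked, data duplicated, feedback farmed -- which is why these designs keep ending up as fringe experiments.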

Native crypto tokens are an effective way to give early contributors and adopters upside in a project and address the cold-start problem. However, they have mostly backfired, attracting speculators instead of builders, and distracting founders with too much funding. They're the digital equivalent of the "resource curse," pulling teams away from the messy, unglamorous task of building actual products and toward the shinier, more immediate allure of price movements. While there may be a clever way to align these incentives effectively around an AI application, it remains a very complex endeavor.

While an interesting area of academic and R&D exploration, the use of crypto to decentralize AI remains commercially limited and is likely to remain so for the foreseeable future. Moreover, hybrid solutions -- like those being developed by Apple, Tesla, and other incumbents -- can already deliver key privacy and latency benefits of edge inference without the added complexity and costs of decentralization. That said, crypto's role shouldn't be entirely dismissed: in such a fast-evolving space, breakthroughs in distributed AI model development might quickly shift the cost-benefit equation, or the downsides of centralization could materialize much sooner than anticipated.

Similarly, open-source efforts might succeed in token economic design where previous consumer-focused crypto projects fell short, potentially unlocking innovative new business models. After all, the idea of a decentralized computing platform seemed laughable just a few years ago, yet Ethereum's growth has proven the skeptics wrong. Some speculate that open-source AI could be the first to achieve AGI -- and if it does, it's possible that a native crypto token might play a role in mobilizing the resources to make it happen. Post-AGI, all bets are off anyway -- at that point, we'll probably be a lot more concerned about using crypto to keep our robot overlords aligned than debating its utility for decentralization.

