Introduction

You may read the PDF version here.

Over the past decade, the field of Artificial Intelligence (AI) has grown at an unprecedented pace. Achievements such as ChatGPT, built on generative pre-trained transformers, have expanded the horizons of AI capabilities and edged us closer to the theoretical concept of Artificial General Intelligence (AGI), a machine endowed with human-like cognitive capacities. This surge in progress, however, has raised significant concerns about the concentration of AI development.

AI research currently orbits a handful of powerful corporations and governmental bodies. These entities command the extensive training data, expensive computational resources, and substantial financial backing needed to drive pioneering research. While this centralization undeniably propels progress, it also presents pivotal challenges:

  • Monopolization: Concentrating power among a few dominant entities with extensive data and computational resources threatens competition. It can stifle innovation by limiting the diversity of perspectives and methodologies crucial for progress, risking a homogenized approach to AI development that hinders breakthroughs in pivotal domains.

  • Privacy and Security Risks: Centralizing data on a small number of servers raises significant concerns about data privacy. The consolidation of power may also incentivize AI models that prioritize the interests of those in control, potentially infringing upon individual liberties.

  • Lack of Transparency and Accountability: While regulatory frameworks such as GDPR exist to safeguard data privacy, there are no external mechanisms to verify how companies internally use data and train models. This opacity undermines trust in centralized AI systems and raises concerns about ethical data practices.

  • Incentive Mechanisms: Enterprises that reap the benefits of AI often fail to share these rewards with the users who contribute data and provide evaluation feedback. Moreover, current large language models (LLMs) consume vast amounts of data, potentially including high-quality public data, without adequately compensating contributors. Incentive mechanisms that encourage broader participation in data sharing and evaluation are essential for improving model performance and fostering a more equitable AI ecosystem.

In response to these concerns, a movement towards decentralized AI (DeAI) development is burgeoning. This paradigm advocates dispersing control and resources across a broader network of participants, fostering greater transparency, accountability, and inclusivity. DeAI endeavors to construct AI models on decentralized infrastructure, where data providers and computing resources are distributed. Importantly, it aims to ensure data privacy and model security during both training and inference.

While decentralized AI is still in its infancy, it holds immense potential for shaping the future of the field. By nurturing a more democratic and fair approach to AI development, we can ensure that this transformative technology serves the betterment of all humanity.

Motivation

With the advent of groundbreaking models like ChatGPT and Sora, the AI landscape has evolved significantly, ushering in a new era of challenges that warrant careful consideration. Intriguingly, existing reviews and surveys often narrowly equate Decentralized AI (DeAI) with blockchain applications in AI, or delve into specific technical approaches within deep learning, such as cryptography and privacy preservation. These reviews frequently overlook nuanced techniques and lack focus on solutions to the challenges DeAI poses.

DeAI intersects deep learning, cryptography, and network technologies, yet researchers within each domain often lack insight into cutting-edge progress in the others. This review therefore aims to bridge these knowledge gaps and disseminate the latest advancements in the arena. Acknowledging the rapid pace of innovation, we cannot cover every recent development and instead encourage readers to explore the continually updated resource at https://deai.gitbook.io. Contributions to this collaborative effort are warmly welcomed.

The contributions of this review can be distilled as follows:

  • Introducing a systematic definition of DeAI, a novel contribution to the field.

  • Identifying and discussing the emerging DeAI challenges posed by large-scale models.

  • Conducting a comprehensive survey of these challenges, and analyzing and comparing the methodologies proposed to address them.

  • Investigating the problem from multifaceted perspectives, encompassing the deep learning, cryptography, networking, and economics domains.
