DeAI: decentralized artificial intelligence
  • Introduction
    • General Terminology
  • Landscape
    • Data Providers
    • Computing Power
    • Model Training Task
    • Challenges
  • Privacy Preservation
    • Data Process
    • Privacy Preserved Training
    • Federated Learning
    • Cryptographic Computation
      • Homomorphic encryption
      • Multi-Party Computation
      • Trusted Execution Environment
    • Challenges
  • Security
    • Data Poisoning
    • Model Poisoning
    • Sybil Attacks
    • Impact of Large Models
    • Responsibility
  • Incentive mechanisms
    • Problem Formulation
    • Contribution Evaluation
    • Copyright
  • Verification of Computation
    • Computation on Smart Contract
    • Zero-Knowledge Proof
    • Blockchain Audit
    • Consensus Protocol
  • Network Scalability
    • Local Updating
    • Cryptography Protocol
    • Distribution Topology
    • Compression
    • Parameter-Efficient Fine Tuning
  • Conclusion
Landscape

Model Training Task

The model training task consumes computing power to train a model on the data, producing a model checkpoint that is stored on a model-hosting platform. In a centralized solution, the training task can select data relevant to its use case, then clean and deduplicate it to improve quality (a minimal sketch of this pass follows below). In a decentralized framework, ensuring data relevance and quality without disclosing the data's content is difficult, particularly because data providers may be incentivized to produce more data rather than better data. Moreover, data providers may conceal harmful information within the data, poisoning the model.

Additionally, because the computing power is not owned by the model training task, decentralized AI frameworks may source it through crowd-sourcing, which complicates the provision of stable and reliable computing services. Decentralized computing power therefore requires mechanisms to prove that the computation was carried out on the given data, while keeping the system fault-tolerant and low-latency (see the commitment sketch after the deduplication example).
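To make the quality step concrete, here is a minimal sketch of the clean-and-deduplicate pass a centralized training task can run over its records. The `clean_record` normalization and exact-hash matching are illustrative assumptions, not part of any particular DeAI framework; production pipelines typically add heavier filtering and near-duplicate detection (e.g., MinHash).

```python
import hashlib

def clean_record(text: str) -> str:
    """Normalize a raw record: collapse whitespace and casing.

    Illustrative cleaning step; real pipelines apply far richer filters.
    """
    return " ".join(text.lower().split())

def deduplicate(records: list[str]) -> list[str]:
    """Drop exact duplicates by hashing each cleaned record."""
    seen: set[str] = set()
    unique: list[str] = []
    for record in records:
        digest = hashlib.sha256(clean_record(record).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(record)
    return unique

corpus = ["Hello  World", "hello world", "Another sample"]
print(deduplicate(corpus))  # ['Hello  World', 'Another sample']
```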

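For the proof-of-computation requirement, the Verification of Computation section later in this document covers the full machinery. As a sketch of the simplest ingredient only, a trainer could publish a receipt binding its checkpoint to a commitment over the exact data shards it claims to have trained on. The names `commit` and `training_receipt` are hypothetical, a hash chain stands in for the Merkle tree a real system would more likely use, and note that this binds inputs to outputs without proving the training itself was executed correctly.

```python
import hashlib
import json

def commit(data_shards: list[bytes]) -> str:
    """Hash-chain commitment over the training shards.

    Hypothetical helper; Merkle trees are the more common choice,
    but a simple chain keeps the sketch short.
    """
    h = hashlib.sha256()
    for shard in data_shards:
        h.update(hashlib.sha256(shard).digest())
    return h.hexdigest()

def training_receipt(data_shards: list[bytes], checkpoint: bytes) -> dict:
    """Receipt binding a checkpoint to the data it was trained on.

    Verifiers holding the same shards can recompute `data_commitment`
    and compare; this does not prove the training ran correctly.
    """
    return {
        "data_commitment": commit(data_shards),
        "checkpoint_digest": hashlib.sha256(checkpoint).hexdigest(),
    }

shards = [b"shard-0", b"shard-1"]
receipt = training_receipt(shards, checkpoint=b"model-weights-bytes")
print(json.dumps(receipt, indent=2))
```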