Security

Malicious actors within DeAI environments pose significant privacy and security concerns. While malicious computing nodes and training tasks can leak data providers' private data, malicious data providers may compromise the security of DeAI models. Attacks targeting either the models themselves or the federated learning process can degrade model quality or induce models to output attacker-chosen content.

Because DeAI admits permissionless participants into the network, it is vulnerable to participant-level attacks. Common attacks from this perspective include:

  • Byzantine attack: Malicious agents upload arbitrary updates to degrade training performance.

  • Sybil attack: Attackers create multiple dummy participant accounts to gain disproportionate influence over the system (the sketch below illustrates how both attacks skew naive aggregation).
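
As a toy illustration (not taken from the source; the update dimensions, participant counts, and magnitudes are arbitrary assumptions), the sketch below shows how both attacks skew plain unweighted federated averaging: a single arbitrary Byzantine update already shifts the aggregate, and Sybil identities multiply that influence.

```python
import numpy as np

def federated_average(updates):
    """Naive FedAvg-style aggregation: unweighted mean of participant updates."""
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)

# Honest participants submit small, similar updates.
honest_updates = [rng.normal(0.0, 0.1, size=4) for _ in range(8)]

# Byzantine attack: one node uploads an arbitrary, large-magnitude update.
byzantine_update = np.full(4, 100.0)

# Sybil attack: the attacker registers several fake identities that all submit
# the same malicious update, multiplying its weight in the average.
sybil_updates = [byzantine_update.copy() for _ in range(5)]

print("honest only:   ", federated_average(honest_updates))
print("with byzantine:", federated_average(honest_updates + [byzantine_update]))
print("with sybils:   ", federated_average(honest_updates + sybil_updates))
```

Robust aggregation rules such as the coordinate-wise median or trimmed mean are the usual countermeasure to such outlier updates, though they do not by themselves address identity-based Sybil amplification.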

In DeAI, attackers typically target the model during its training or inference phases, pursuing either targeted or untargeted poisoning. Targeted poisoning manipulates the model into producing outputs of the attacker's choosing, while untargeted poisoning aims to disrupt the global model's convergence, degrade its accuracy, or cause training to diverge entirely. Data poisoning and model poisoning are common means of achieving either objective.
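
The sketch below (illustrative only; the label space, update sizes, and noise scales are assumptions) contrasts the two goals: a label-flipping data-poisoning attack that installs an attacker-chosen mapping, and a high-variance model-poisoning update whose only aim is to harm convergence.

```python
import numpy as np

rng = np.random.default_rng(1)

# Targeted poisoning: flip labels of one source class to an attacker-chosen class.
labels = rng.integers(0, 10, size=1000)   # hypothetical 10-class dataset
poisoned = labels.copy()
poisoned[poisoned == 3] = 8               # every "3" is relabelled as "8"
# A model trained on `poisoned` learns the attacker-chosen mapping 3 -> 8,
# while accuracy on the remaining classes can stay largely intact.

# Untargeted poisoning: submit a high-variance noise update to hurt convergence.
honest_update = rng.normal(0.0, 0.1, size=64)      # hypothetical honest gradient
malicious_update = rng.normal(0.0, 10.0, size=64)  # pure noise, no target behaviour
# Averaged into the global model, such updates can slow convergence, reduce
# accuracy, or make training diverge altogether.
print(np.linalg.norm(honest_update), np.linalg.norm(malicious_update))
```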
