Architecture
Architecture of the Trust Framework
To understand the approach, consider what makes an agent tick: the LLM as its brain, memory as context, tools for actions, and an orchestrator that holds it all together. Every component is designed for verifiability to enable bias detection and commitment enforcement.
Running agents fully on-chain isn’t practical due to astronomical compute costs, massive memory requirements, and quickly compounding gas fees. MoveAI Agents standardize the integration of verifiable tools—paving the way for verifiability across the entire AI stack.
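As a rough illustration of the agent anatomy described above (LLM "brain", memory as context, tools for actions, orchestrator), the sketch below shows one agent step in TypeScript. Every name and shape here is hypothetical, not part of the MoveAI SDK; it only shows where verifiability would attach in a real deployment.

```typescript
// A minimal sketch of the agent anatomy: LLM brain, memory, tools, orchestrator.
// All names and shapes are illustrative assumptions, not MoveAI APIs.

interface Agent {
  llm: (prompt: string) => Promise<string>;                    // the "brain"
  memory: {
    recall: (query: string) => Promise<string[]>;              // context in
    store: (entry: string) => Promise<void>;                   // context out
  };
  tools: Record<string, (args: unknown) => Promise<unknown>>;  // actions
}

// The orchestrator loop: gather context, let the LLM decide, act, remember.
// In a verifiable deployment each of these calls would also emit a proof or
// attestation so the step can be audited afterwards.
async function step(agent: Agent, task: string): Promise<string> {
  const context = await agent.memory.recall(task);
  const decision = await agent.llm(
    `Context:\n${context.join("\n")}\n\nTask: ${task}\nRespond with a tool name or a final answer.`,
  );
  const tool = agent.tools[decision.trim()];
  const result = tool ? String(await tool({})) : decision;
  await agent.memory.store(`task=${task}; result=${result}`);
  return result;
}
```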
Modular Verifiable Components
The framework addresses on-chain cost challenges through modular, verifiable components:
Inference co-processors for LLMs: Accelerate AI reasoning in a secure, verifiable environment.
AINET for Memory: Efficiently manage and verify contextual data.
AINET for Tool Calls: Ensure every tool invocation is authenticated and auditable.
Decentralized Runtime: Provides a scalable and secure execution environment.
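A minimal sketch of how these components could be exposed to an agent, assuming each one returns its result together with a verification artifact (for example a ZK proof or TEE attestation). The interfaces and the `Verified` wrapper are illustrative assumptions, not the actual MoveAI or AINET API.

```typescript
// Hypothetical component interfaces: each operation is paired with a proof
// so the whole agent step can be audited or settled on-chain later.

interface Verified<T> {
  value: T;
  proof: Uint8Array;          // e.g. a ZK proof or TEE attestation (assumed)
}

interface InferenceCoprocessor {
  infer(prompt: string): Promise<Verified<string>>;
}

interface VerifiableMemory {
  read(key: string): Promise<Verified<string | null>>;
  write(key: string, value: string): Promise<Verified<void>>;
}

interface VerifiableToolRouter {
  call(tool: string, args: Record<string, unknown>): Promise<Verified<unknown>>;
}

// One agent step with every component call wrapped in a verification artifact.
// A decentralized runtime would execute this and aggregate the proofs.
async function verifiedStep(
  llm: InferenceCoprocessor,
  memory: VerifiableMemory,
  tools: VerifiableToolRouter,
  task: string,
): Promise<Verified<unknown>[]> {
  const ctx = await memory.read(task);
  const plan = await llm.infer(`Context: ${ctx.value ?? "none"}\nTask: ${task}`);
  const action = await tools.call(plan.value.trim(), {});
  return [ctx, plan, action];   // proofs can be aggregated / posted on-chain
}
```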
Core Security Principles
At the heart of our security model is standardization. By offering a standardized interface for DeFAI operations, the trust framework enforces consistent security measures across all protocols and operations. Protocol whitelisting ensures that only vetted, audited protocols are integrated, establishing a solid foundation of trust while retaining the flexibility needed for innovative DeFAI operations.
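To make the whitelisting idea concrete, here is a minimal sketch of a registry that gates every operation behind the same standardized check. The types and thresholds (`ProtocolEntry`, `maxNotionalUsd`, operation names) are assumptions for illustration, not the framework's real data model.

```typescript
// Hypothetical protocol whitelist behind a standardized authorization check.

interface ProtocolEntry {
  address: string;            // on-chain address of the vetted protocol
  allowedOps: Set<string>;    // operational boundaries, e.g. "swap", "lend"
  maxNotionalUsd: number;     // standardized risk parameter (assumed unit)
  auditedAt: Date;
}

class WhitelistRegistry {
  private entries = new Map<string, ProtocolEntry>();

  register(entry: ProtocolEntry): void {
    this.entries.set(entry.address, entry);
  }

  // Every DeFAI operation passes through the same check, regardless of
  // which protocol or AI framework originated it.
  authorize(protocol: string, op: string, notionalUsd: number): boolean {
    const entry = this.entries.get(protocol);
    if (!entry) return false;                     // not vetted: reject
    if (!entry.allowedOps.has(op)) return false;  // outside its boundaries
    return notionalUsd <= entry.maxNotionalUsd;   // within risk parameters
  }
}
```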
Layered Security Architecture
Protocol Validation:
Access is restricted to whitelisted protocols with clearly defined operational boundaries.
Standardized risk parameters and continuous protocol health monitoring ensure safe interactions.
Transaction Safety:
Transaction-level security includes strict amount limits, optimized gas strategies, built-in slippage protection, and simulation before execution (see the sketch after this list).
Each transaction is validated to prevent malicious activity and unexpected behavior.
State Validation:
Advanced state validation mechanisms verify operational safety and manage risk as DeFAI operations evolve.
AI Agent Boundaries:
The roadmap includes setting operation limits based on risk levels, implementing threshold checks, and embedding automated safeguards for high-risk actions.
Monitoring and Recovery:
Real-time transaction monitoring and automated failure detection paired with clear recovery procedures ensure ongoing operational integrity.
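The sketch below shows how the transaction-safety checks listed above (amount limits, slippage protection, pre-execution simulation) might compose into a single validation step. The types, field names, and thresholds are assumptions for illustration; the actual policy shape may differ.

```typescript
// Hypothetical transaction-safety layer: each planned transaction is checked
// against a policy and simulated before it is allowed to execute.

interface PlannedTx {
  protocol: string;
  op: string;
  amountUsd: number;
  expectedOut: number;
  minOut: number;             // slippage bound the agent is willing to accept
}

interface SafetyPolicy {
  maxAmountUsd: number;       // strict amount limit per transaction
  maxSlippageBps: number;     // e.g. 50 = 0.5%
}

type SimulateFn = (tx: PlannedTx) => Promise<{ ok: boolean; out: number }>;

async function validateTx(
  tx: PlannedTx,
  policy: SafetyPolicy,
  simulate: SimulateFn,
): Promise<{ allowed: boolean; reason?: string }> {
  if (tx.amountUsd > policy.maxAmountUsd) {
    return { allowed: false, reason: "amount limit exceeded" };
  }
  const slippageBps = ((tx.expectedOut - tx.minOut) / tx.expectedOut) * 10_000;
  if (slippageBps > policy.maxSlippageBps) {
    return { allowed: false, reason: "slippage bound too loose" };
  }
  const sim = await simulate(tx);               // simulate before execution
  if (!sim.ok || sim.out < tx.minOut) {
    return { allowed: false, reason: "simulation failed or output below minOut" };
  }
  return { allowed: true };
}
```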
Framework Integration
MoveAI DAO’s trust framework consists of several core components that work together seamlessly:
Framework Adapters: Integrate with both AI frameworks and DeFAI protocols to standardize command interfaces, manage state updates, and handle error responses.
Guardians: Oversee transaction verification and secure execution.
Shield: Acts as the agent security middleware, continuously monitoring and safeguarding interactions.
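One way these three components could fit together is sketched below: the Shield inspects every command, a Guardian verifies it, and only then does a Framework Adapter execute it. The interfaces are hypothetical; the real Framework Adapters, Guardians, and Shield may expose different APIs.

```typescript
// Illustrative composition of adapters, guardians, and the Shield middleware.

interface AgentCommand {
  protocol: string;
  op: string;
  params: Record<string, unknown>;
}

interface AdapterResult {
  status: "success" | "failure";
  data?: unknown;
  error?: string;
}

interface FrameworkAdapter {
  // Translates a framework-specific command into a standardized operation
  // and reports state updates / errors back in a uniform shape.
  execute(cmd: AgentCommand): Promise<AdapterResult>;
}

interface Guardian {
  // Verifies the transaction (limits, whitelist, simulation) before it runs.
  verify(cmd: AgentCommand): Promise<boolean>;
}

interface Shield {
  // Security middleware: observes every interaction and can veto or alert.
  inspect(cmd: AgentCommand): Promise<void>;
}

async function runCommand(
  cmd: AgentCommand,
  adapter: FrameworkAdapter,
  guardian: Guardian,
  shield: Shield,
): Promise<AdapterResult> {
  await shield.inspect(cmd);
  if (!(await guardian.verify(cmd))) {
    return { status: "failure", error: "rejected by guardian" };
  }
  return adapter.execute(cmd);
}
```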
Standardization for Enhanced Security
By abstracting protocol security into standardized middleware, this approach minimizes errors and ensures predictable, secure behavior. Each vetted protocol adheres to predefined safety boundaries and consistent risk parameters. This standardization is the cornerstone of secure AI agent interactions with DeFi protocols and beyond.
Error Handling and Feedback
Trust framework adapters not only execute operations but also provide structured feedback:
They return clear operation statuses (success or failure), detailed result data, and actionable context for subsequent actions.
When errors occur, adapters format their responses so that AI agents can apply retry strategies, fall back to alternative approaches, and manage risk exposure while keeping users informed.
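A sketch of what such structured feedback might look like, assuming a shape along the lines described above (status, result data, actionable context, retryability). Field names like `retryable` and `suggestedAction` are illustrative assumptions.

```typescript
// Hypothetical structured feedback returned by a trust framework adapter.

interface AdapterFeedback<T = unknown> {
  status: "success" | "failure";
  result?: T;                      // detailed result data on success
  context?: string;                // actionable context for the next step
  error?: {
    code: string;                  // e.g. "SLIPPAGE_EXCEEDED" (assumed)
    retryable: boolean;            // can the agent simply retry?
    suggestedAction?: string;      // e.g. "reduce amount", "try another pool"
  };
}

// Example of how an agent might act on that feedback.
async function handleFeedback<T>(
  fb: AdapterFeedback<T>,
  retry: () => Promise<AdapterFeedback<T>>,
): Promise<AdapterFeedback<T>> {
  if (fb.status === "success") return fb;
  if (fb.error?.retryable) return retry();   // simple retry strategy
  // Otherwise surface the suggested alternative to the user / planner.
  return fb;
}
```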
By translating complex DeFAI operations into clear, actionable insights, our architecture ensures that every AI agent interaction is not only efficient but also secure and verifiable.