Vitalik Buterin has unveiled a concrete plan to embed personal AI agents—called AI stewards—into DAO voting. These agents automatically vote based on your preferences while keeping your identity and choices hidden through zero‑knowledge proofs and secure compute enclaves. The goal is to boost participation, curb whale influence, and make on‑chain governance scalable for everyday users.
How AI Stewards Operate Inside a DAO
Each AI steward runs inside a confidential compute environment, such as a multi‑party computation (MPC) protocol or a trusted execution environment (TEE). When a proposal reaches the voting stage, the steward evaluates the user’s stored preferences and casts a vote on their behalf. A zero‑knowledge proof then attests that the vote is valid without revealing who voted or which way the vote went.
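The core decision step can be sketched in a few lines. This is a minimal illustration, not Buterin's actual design: the `AISteward` class, its preference map, and the tag-based scoring rule are all hypothetical stand-ins for whatever model a real steward would run inside the enclave.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    proposal_id: int
    tags: set  # topics the proposal touches, e.g. {"treasury", "grants"}

class AISteward:
    """Hypothetical steward: derives a vote from a stored preference map.

    `preferences` maps a topic tag to a weight in [-1, 1]; positive
    weights favour approval, negative weights favour rejection.
    """

    def __init__(self, preferences):
        self.preferences = preferences

    def cast_vote(self, proposal):
        # Average the user's stance across the topics the proposal touches.
        scores = [self.preferences.get(tag, 0.0) for tag in proposal.tags]
        stance = sum(scores) / len(scores) if scores else 0.0
        return "approve" if stance > 0 else "reject"

steward = AISteward({"public-goods": 0.8, "treasury": -0.3})
vote = steward.cast_vote(Proposal(42, {"public-goods", "treasury"}))
# stance = (0.8 - 0.3) / 2 = 0.25, so the steward approves
```

In the full design, only the resulting vote leaves the enclave, accompanied by the zero‑knowledge proof described above.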
Privacy‑Preserving Vote Casting
The dual‑layer approach—secure enclaves plus zero‑knowledge proofs—protects you from coercion and bribery. Even if a malicious actor observes the blockchain, they can’t link a vote to a specific address, and they can’t see the content of the vote itself. This privacy shield encourages broader participation from users who might otherwise stay silent.
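The hiding property can be illustrated with a simple hash commitment. To be clear, a hash commitment is far weaker than the ZK proofs in the proposal (it proves nothing about validity), but it shows how a vote can sit on‑chain without an observer being able to read or confirm its content:

```python
import hashlib
import secrets

def commit_vote(vote: str, nonce: bytes) -> str:
    # Only this digest would be recorded publicly. Without the secret
    # nonce, an observer cannot tell whether the committed vote was
    # "approve" or "reject" -- the hiding property a real ZK system
    # provides with much stronger guarantees.
    return hashlib.sha256(nonce + vote.encode()).hexdigest()

nonce = secrets.token_bytes(32)
commitment = commit_vote("approve", nonce)

# Guessing the vote is useless without the nonce: the digests differ.
assert commit_vote("approve", secrets.token_bytes(32)) != commitment

# The voter (or their steward) can still open the commitment later.
assert commit_vote("approve", nonce) == commitment
```

A briber faces the same wall as any other observer: with no way to prove how a vote was cast, there is nothing to buy.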
Incentivizing Quality Proposals with Prediction Markets
Buterin’s design also introduces prediction markets where AI agents can bet on proposal outcomes. Agents that bet accurately earn rewards for surfacing high‑quality ideas, while losing stakes discourage backing low‑value or spammy submissions. In practice, routine decisions get automated, and only proposals flagged as critical rise to your attention for manual review.
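One way to see the incentive mechanics is a parimutuel settlement, where winners split the losing pool in proportion to their stakes. The source does not specify a market design, so this is an assumed model chosen for simplicity; real on‑chain markets often use automated market makers instead.

```python
def settle_market(bets, outcome):
    """Parimutuel settlement: winners split the losing pool pro rata.

    bets: list of (agent, side, stake) tuples, side in {"passes", "fails"}.
    outcome: the side that actually occurred.
    Returns each winning agent's payout (own stake + share of losses).
    """
    winners = [(agent, stake) for agent, side, stake in bets if side == outcome]
    losing_pool = sum(stake for _, side, stake in bets if side != outcome)
    winning_pool = sum(stake for _, stake in winners)
    return {agent: stake + losing_pool * stake / winning_pool
            for agent, stake in winners}

bets = [("agent-A", "passes", 60.0),
        ("agent-B", "passes", 40.0),
        ("agent-C", "fails", 50.0)]
payouts = settle_market(bets, "passes")
# agent-A staked 60 of the 100-unit winning pool, so it takes 60% of
# agent-C's 50 lost units: 60 + 30 = 90. agent-B gets 40 + 20 = 60.
```

Because misjudged bets forfeit their stake, agents that consistently back weak proposals bleed capital out of the system, which is the filtering effect the design relies on.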
Technical Foundations Behind the Proposal
The architecture builds on cryptographic tools already deployed on Ethereum, such as zk‑SNARKs and confidential compute techniques. By pairing these primitives with personalized language models trained on a user’s past interactions, the system creates a voting proxy that respects both privacy and intent.
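The "trained on a user's past interactions" step could be as heavy as fine‑tuning a language model or as light as tallying voting history. As a deliberately simple stand‑in for the latter, the sketch below derives per‑topic weights from past votes; the `train_preferences` function and its net‑approval rule are assumptions for illustration, not part of the proposal.

```python
from collections import defaultdict

def train_preferences(history):
    """Derive topic weights from a user's past votes.

    history: list of (tags, vote) pairs, vote in {"approve", "reject"}.
    A tag's weight is the net approval rate over proposals carrying it,
    yielding a value in [-1, 1] usable as a steward preference map.
    """
    net = defaultdict(float)
    seen = defaultdict(int)
    for tags, vote in history:
        for tag in tags:
            net[tag] += 1.0 if vote == "approve" else -1.0
            seen[tag] += 1
    return {tag: net[tag] / seen[tag] for tag in net}

history = [({"grants"}, "approve"),
           ({"grants", "treasury"}, "approve"),
           ({"treasury"}, "reject")]
weights = train_preferences(history)
# grants: approved both times -> 1.0; treasury: one each way -> 0.0
```

A production steward would replace this tally with a richer model, but the interface is the same: past behaviour in, a preference signal out.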
Potential Impact on DAO Participation
If AI stewards prove reliable, DAOs could shift from a small circle of highly active token holders to a mass‑participation model. Institutions might feel comfortable voting on strategic matters without revealing positions, and community funding initiatives could scale dramatically. You’ll likely see new use cases emerge in protocol upgrades and public‑goods financing.
Challenges and Open Questions
Several practical hurdles remain. Training an AI steward to reflect nuanced personal values is non‑trivial, and safeguards are needed if a steward is compromised. Additionally, the computational overhead of secure enclaves could raise costs for everyday participants. Addressing these issues will be essential before the concept moves from prototype to mainnet.
Developer Perspective
Maya Patel, a smart‑contract engineer, notes that “the building blocks—ZK‑SNARKs and trusted execution environments—already exist. The real work lies in creating user‑friendly interfaces for training and managing stewards, and in defining standards for auditability.” She believes that a shared protocol could turn the idea into a practical layer on top of current DAO frameworks.
