The role for ZKVMs in Verifiable AI
Verifiable AI is often misunderstood. The naive view—proving inference in zero-knowledge—sounds impressive but only shows that a model was run, not that it was trained correctly or behaves fairly.
The opposite extreme—proving full training in zero-knowledge—is elegant but infeasible at large scale. Still, it is not always out of reach: in practice, many financial and compliance models—credit-risk estimators, AML detectors, fraud scorers—are sparse, structured learners such as gradient-boosted trees. For these, proofs of fair training are already tractable, even as the field dreams of “proof of training” for large language models.
The realistic middle ground is verifiable training of small surrogate models on LLM outputs—the models used in interpretability and audit frameworks. Proving that these surrogates were trained correctly on the outputs of the system under study gives meaning to proofs of inference and makes fairness and interpretability frameworks adversarially robust—an essential step toward trustworthy AI.
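To make the surrogate step concrete, here is a minimal sketch of the kind of training run such a proof would attest to, assuming a hypothetical black-box scorer (llm_score) standing in for the system under study and a shallow scikit-learn decision tree as the surrogate. The point is that the fit is small and deterministic, exactly the kind of computation a verifiable-training proof can wrap.

```python
# Minimal sketch: fit a small, interpretable surrogate to a black-box model's
# outputs. `llm_score` is a hypothetical stand-in for the system being audited.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

def llm_score(x: np.ndarray) -> float:
    # Placeholder for the black-box model under study (e.g., an LLM-backed scorer).
    return float(np.tanh(x @ np.array([0.8, -0.3, 0.1])))

rng = np.random.default_rng(seed=0)          # fixed seed: the fit must be reproducible
X = rng.normal(size=(1_000, 3))              # audit inputs sampled from the domain of interest
y = np.array([llm_score(x) for x in X])      # black-box outputs recorded for the audit

surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, y)                          # this small, deterministic computation is what
                                             # a proof of correct training would attest to
print(export_text(surrogate))                # the human-readable artifact auditors inspect
```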
Achieving this, however, requires rethinking proof systems. Every efficient ZKML stack today is bespoke: its own circuits, proving scheme, and verifier logic. Embedding all of them into a generic ZKVM is futile, and no one will deploy separate on-chain verifiers for each. The way forward is recursion over diversity: letting a ZKVM run the ZKML verifier itself, supported by rich precompiles—cryptographic subroutines the VM can call and commit to verifiably.
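As a rough mental model (the names below are hypothetical, not Miden's API), the structure looks like this: the guest program run inside the ZKVM is a ZKML verifier, and the expensive cryptographic subroutines it needs are executed by the host as precompiles whose inputs and outputs are absorbed into a transcript that the ZKVM proof commits to.

```python
# Conceptual sketch only: a transcript binding host-side "precompile" results to
# the guest computation. Names and structure are illustrative, not Miden's API.
import hashlib
from dataclasses import dataclass, field

@dataclass
class Transcript:
    """Accumulates commitments to every host computation the guest relies on."""
    digest: bytes = field(default=b"\x00" * 32)

    def absorb(self, label: bytes, data: bytes) -> bytes:
        # Chain each (label, input/output) pair into a running commitment.
        self.digest = hashlib.sha256(self.digest + label + data).digest()
        return self.digest

def hash_precompile(transcript: Transcript, data: bytes) -> bytes:
    # The expensive cryptographic subroutine runs on the host...
    out = hashlib.sha256(data).digest()
    # ...but its input/output pair is absorbed into the transcript, so the
    # guest's proof commits to exactly this result.
    transcript.absorb(b"hash", data + out)
    return out

def verify_zkml_proof(transcript: Transcript, proof: bytes, claim: bytes) -> bool:
    # Stand-in for a bespoke ZKML verifier running *inside* the ZKVM guest.
    # Its heavy hashing is delegated to the precompile instead of VM instructions.
    binding = hash_precompile(transcript, proof + claim)
    return len(binding) == 32  # placeholder acceptance check, not a real verifier

transcript = Transcript()
ok = verify_zkml_proof(transcript, proof=b"...zkml proof bytes...", claim=b"surrogate trained fairly")
print(ok, transcript.digest.hex())  # the final transcript digest travels with the ZKVM proof
```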
We’ll close by showing how Miden’s new precompile system enables this: a transcript-based mechanism linking each host computation to the proof trace, allowing Miden, in the end, to recursively verify entire classes of bespoke ZKML systems. Verifiable AI begins when verifiable computing becomes efficiently programmable.
