2025-05-14
I’ll admit, the post is not really about provers; it’s more about a specific point in the software supply chain as it pertains to ZK provers. I think it’s a supply chain issue that, while it has historically been important in some circles, will become increasingly important as data moves to be local to users - especially when these data are never shared, only proofs about them.
Succinct proofs are being deployed and used as we speak - in blockchains, browsers, desktop apps, mobile apps and more. When the proofs don’t involve zero-knowledge and you only need integrity, the trust model is simpler: you can audit the deployed verifier, e.g. in a smart contract, and be certain that the integrity properties you were looking for hold. What about when you rely on the app that generates the proofs for privacy? That becomes trickier.
Open source code has great properties: you can read the code, verify that the security properties you want actually hold, build it yourself and run it. Of course, only a small number of users have the sophistication (or desire) to check these properties, so these checks are usually delegated to other parties. An important party in this mix is an auditor, who reads the source code instead of you and takes on the job of verifying the security properties. The auditor, sometimes, also audits operational processes within the software lifecycle:
The third property is the hardest one to achieve, and it’s good that there are tools for it in some environments, often called reproducible builds. The Reproducible Builds effort describes well both the benefits of having reproducible builds and the limits to which we can achieve it. In the Ethereum space it’s common to provide the Solidity source code and use a tool like Etherscan’s Verify Contract to recompile from source and check that the bytecode matches. You can also do this locally relatively easily.
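As a rough sketch of what such a local check involves: you recompile the source with the exact compiler version and settings used for deployment, fetch the on-chain runtime bytecode, and compare the two. One wrinkle is that Solidity appends a CBOR-encoded metadata blob (whose last two bytes give its length) to the runtime bytecode, and this blob can differ across otherwise identical builds, so comparisons often strip it first. The helper names below are my own, and the bytecode strings in the usage example are made-up placeholders, not real contract output:

```python
def strip_metadata(runtime_hex: str) -> str:
    """Drop the trailing CBOR metadata section from Solidity runtime bytecode.

    The final 2 bytes encode the byte-length of the CBOR payload, so we
    remove the payload plus those 2 length bytes.
    """
    h = runtime_hex.lower().removeprefix("0x")
    meta_len = int(h[-4:], 16)            # last 2 bytes = CBOR payload length
    return h[: -(meta_len + 2) * 2]       # strip payload + 2-byte length field


def bytecode_matches(compiled_hex: str, deployed_hex: str) -> bool:
    """Compare two runtime bytecodes, ignoring the metadata suffix."""
    return strip_metadata(compiled_hex) == strip_metadata(deployed_hex)


# Hypothetical example: same code section, different metadata blobs.
compiled = "6001600101" + "aabbcc" + "0003"   # code + 3-byte metadata + length
deployed = "6001600101" + "112233" + "0003"
print(bytecode_matches(compiled, deployed))   # the code sections are equal
```

In practice you would obtain `compiled_hex` from `solc` and `deployed_hex` from an RPC call like `eth_getCode`; tools such as Etherscan automate this whole comparison.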
One article they reference is Ken Thompson’s “Reflections on Trusting Trust”, which illustrates how hard it is in practice to trust code you didn’t create yourself - and this extends not only to high-level source code, but also to generated assembly and firmware, for example. One of the thoughts in that lecture is “Perhaps it is more important to trust the people who wrote the software.”
This is likely good advice if you want to contain the complexity of deployment to a reasonable level. The challenge is that there are way too many people and organizations involved in deploying modern software. We have compilers, developers, CI systems, hardware manufacturers and countless more. You eventually have to trust some of them, like the hardware you use. Do you have to trust all of them?
Let’s take a step back and try to list a few of the parties that could be involved in deploying a desktop app to macOS: