Verifiability in an Internet That Lies
The internet was never designed for truth. It was designed for transmission. What happens to knowledge, identity, and trust when the default state of the network is indistinguishable noise?
The default state of the network is noise.
Not malicious noise — just noise. High throughput, low signal, no intrinsic verification layer.
The internet was built to move packets. It was not built to answer: did this actually happen? Or: did this person actually say that? Or: is this the document I intended to receive, or has it been modified in transit?
These questions were left to the application layer. Which means they were left to whoever built the application, which means they were left to whoever had the resources to build the application, which means they were largely not solved.
What changed
For most of the network's history, the gap between truth and untruth was bounded by production cost.
Forging a convincing document required skill. Fabricating audio required equipment. Synthesizing a credible video at scale was prohibitive. The friction was a kind of informal verification — not because the tools to lie didn't exist, but because lying at scale was expensive.
That friction is now gone.
Generative models produce fluent text, plausible images, and increasingly convincing audio-visual content at near-zero marginal cost. The distribution channels that carry this content are the same ones that carry everything else. A fabricated statement looks no different from a real one; a synthesized voice sounds no different from a recorded one.
This is not a temporary state that will be corrected by better content moderation or detection models. Detection is structurally asymmetric: any detector reliable enough to deploy also serves as a training signal for the next generator, so generation tends to stay ahead. The adversarial dynamic is one-sided.
What this actually means
Most discussions of this problem focus on the wrong layer.
The problem is not "misinformation" as a content category. The problem is the absence of verifiability as an infrastructure property.
Think about what it would mean for the internet to have a native identity and provenance layer — one where:
- A statement could be cryptographically attributed to a specific key pair
- That key pair could be tied to a real-world identity, or left pseudonymous, at the holder's discretion
- The binding between key and identity could be certified by a trusted third party, or established through web-of-trust, or left uncertified
- A document could carry a timestamp and a signature proving it has not been modified since creation (see the sketch after this list)
- An agent — human or AI — could present credentials about itself without revealing more than necessary
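To make the first and fourth properties concrete, here is a minimal sketch of signing and verifying a timestamped statement. It assumes Python's `cryptography` package and Ed25519 keys; the bundle format and function names are invented for illustration, not part of any standard.

```python
# Minimal sketch: attribute a statement to a key pair and prove integrity.
# Assumes the 'cryptography' package (pip install cryptography).
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def sign_statement(private_key: Ed25519PrivateKey, text: str) -> dict:
    """Bundle a statement with a timestamp and sign over both."""
    payload = {"statement": text, "timestamp": int(time.time())}
    message = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(message)}

def verify_statement(public_key: Ed25519PublicKey, bundle: dict) -> bool:
    """Return True only if the payload is byte-for-byte what was signed."""
    message = json.dumps(bundle["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bundle["signature"], message)
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
bundle = sign_statement(key, "I said this, at this time.")
assert verify_statement(key.public_key(), bundle)

# Any modification after signing is detectable.
bundle["payload"]["statement"] = "I never said this."
assert not verify_statement(key.public_key(), bundle)
```

The point is not the specific encoding but the property: once the payload is signed, any later modification is detectable by anyone who holds the public key.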
This is not a speculative architecture. It exists. W3C Decentralized Identifiers and Verifiable Credentials are the relevant standards. The cryptographic primitives — digital signatures, zero-knowledge proofs, selective disclosure — are mature.
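Of those primitives, selective disclosure is probably the least familiar, so a toy version may help. The sketch below uses salted hash commitments: the issuer commits to each attribute separately, and the holder later reveals only the attributes they choose, along with the matching salts. It is a simplification of the approach used by schemes such as SD-JWT, not a production design; every name in it is invented.

```python
# Toy selective disclosure via salted hash commitments. Illustrative only;
# real schemes add signatures over the digests and careful encoding.
import hashlib
import secrets

def commit(attributes: dict) -> tuple[dict, dict]:
    """Issuer side: commit to each attribute under a fresh random salt."""
    salts = {k: secrets.token_hex(16) for k in attributes}
    digests = {
        k: hashlib.sha256(f"{salts[k]}:{v}".encode()).hexdigest()
        for k, v in attributes.items()
    }
    return digests, salts  # digests are public; salts stay with the holder

def disclose(attributes: dict, salts: dict, key: str) -> dict:
    """Holder side: reveal one attribute and its salt, nothing else."""
    return {"key": key, "value": attributes[key], "salt": salts[key]}

def check(digests: dict, disclosure: dict) -> bool:
    """Verifier side: recompute the digest for the revealed attribute."""
    d = hashlib.sha256(
        f"{disclosure['salt']}:{disclosure['value']}".encode()
    ).hexdigest()
    return digests[disclosure["key"]] == d

attrs = {"name": "Alice", "over_18": "true", "address": "..."}
digests, salts = commit(attrs)             # issuer publishes the digests
proof = disclose(attrs, salts, "over_18")  # holder reveals one claim
assert check(digests, proof)               # verifier learns nothing else
```

In a real deployment the issuer would sign the digest set and the encoding would be standardized; the sketch only shows why revealing one attribute discloses nothing about the others.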
What doesn't exist is adoption at the layer where it would matter.
The structural problem
Verifiability requires coordination.
A signature is only useful if someone checks it. A credential is only meaningful if the issuer is trusted. A DID document is only valuable if the resolver is maintained.
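For concreteness: a resolver's job is to turn an identifier into a document listing the keys that may speak for it. Below is roughly the shape of such a document, loosely following the W3C DID Core data model; the identifier and key value are placeholders.

```python
# Illustrative only: roughly what a DID resolver returns, per the W3C
# DID Core data model (serialized as JSON in practice). Placeholder values.
did_document = {
    "@context": ["https://www.w3.org/ns/did/v1"],
    "id": "did:example:alice",
    "verificationMethod": [
        {
            "id": "did:example:alice#key-1",
            "type": "Ed25519VerificationKey2020",
            "controller": "did:example:alice",
            # Multibase encoding of the public key (placeholder value).
            "publicKeyMultibase": "z6Mk...placeholder",
        }
    ],
    # Which keys may authenticate as this identifier.
    "authentication": ["did:example:alice#key-1"],
}
```

If the resolver disappears, the signature math still works, but the binding from identifier to key can no longer be looked up. That is exactly the maintenance dependence described above.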
This creates a bootstrapping problem that is partly technical and mostly social.
The web's existing trust infrastructure — Certificate Authorities, DNS, OAuth identity providers — was built on the assumption that a small number of centralized entities could be trusted to maintain it. That assumption has held imperfectly, but well enough. The CAs have mostly not gone rogue. The major identity providers have mostly not been catastrophically compromised.
But that architecture concentrates trust, and concentrated trust is a single point of failure. It's also a single point of coercion.
Decentralized identity architectures distribute trust. But distributed trust is harder to reason about, harder to recover from failure, and harder to explain to non-technical users. The tradeoffs are real.
What I actually think
The epistemic crisis of the current internet is not a problem that will be solved by literacy campaigns, platform policies, or AI detection tools.
It's an infrastructure problem.
The question of whether a piece of content is authentic — whether it originated where it claims to originate, whether it has been modified, whether the attributed speaker actually spoke it — is a question that can only be answered reliably at the infrastructure layer.
This is not a new insight. Cryptographers have known it for decades. The work of building that infrastructure is ongoing. The adoption curve is slow.
In the meantime, the rational posture toward internet content is something closer to:
Unverified claims are noise until verified. Verification requires infrastructure. Most infrastructure doesn't exist yet. Behave accordingly.
This is uncomfortable. It means operating with higher uncertainty, higher friction, and lower trust as defaults — until the tools to justify higher trust are built and deployed.
That is not pessimism. It is calibration.
Epilogue
The internet that exists is not the one that was imagined.
It was imagined as a network that would democratize access to information. What it became is a network that democratized access to distribution.
Distribution is not information. Distribution is just transmission.
The infrastructure for turning transmission into knowledge — attributable, verifiable, trustworthy knowledge — is the project. It's being built. Slowly.
Pay attention to where the work is actually happening.