In 1890, Samuel Warren and Louis Brandeis published what became one of the most cited law review articles in American history. Their argument was simple: the law needed to recognize “the right of the individual to be let alone.” They were responding to portable cameras and gossip columns, technologies that for the first time could capture and distribute private moments without consent.
More than a century later, we’re still working off their frame. And we’re stuck.
Not because the frame is wrong. Because it’s incomplete. Privacy was never about hiding. It was always about choosing: the right to decide what to reveal, to whom, under what conditions, with the ability to revoke that choice when the relationship changes. We built an entire regulatory infrastructure around protection and never built the architecture for choice.
The frame everyone inherited
The entire privacy conversation (regulatory, commercial, technical) is organized around protection. How do we keep data safe? How do we prevent unauthorized access? How do we build better walls?
GDPR. HIPAA. The EU AI Act (fully applicable August 2026). SOC 2. CCPA. They all assume the problem is that bad things happen to data when the wrong people touch it. So they define who the wrong people are, draw boundaries around what they can do, and enforce consequences when those boundaries get crossed.
Close to 80% of the world’s population now operates under modern privacy regulation. Gartner projected 75% coverage by the end of 2024; UNCTAD’s global tracker showed that figure had already been exceeded by March of that year. Europe has issued over €7.1 billion in GDPR fines since 2018. The enforcement machinery is real. Even non-democracies like China have enacted and enforce privacy laws.
But enforcement is reactive. Every fine represents a failure that already happened. GDPR contains seeds of a different approach (data portability, the right to explanation), but the enforcement architecture remains protection-based. Even newer frameworks like India’s DPDPA and Brazil’s LGPD adopt the same frame. Every regulation presupposes an architecture that puts the data in someone else’s hands and then tries to constrain what they do with it. The frame is: your data is elsewhere, and we’ll punish people who mishandle it.
That frame was the right place to start. But it was never the whole answer. There’s another half. It starts with a different question.
Two questions that sound the same but aren’t
Protection asks: who should be prevented from accessing this data?
Ownership asks: who has the right to decide?
These aren’t competing questions. They’re complementary. But we built the entire infrastructure around the first one and almost none around the second.
Not ownership as property, not a claim of absolute dominion over bits. Ownership as durable authority: the right to grant, deny, delegate, and revoke access. The right to decide what gets exposed, and to change your mind.
In 1967, Alan Westin redefined privacy for the computing age. In Privacy and Freedom, he wrote that privacy is “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.”
Jeffrey Rosen, a law professor at George Washington University, later called Westin “the most important scholar of privacy since Louis Brandeis” because he transformed the debate by reframing it around the ability to control what we reveal.
Westin had it right. But the infrastructure never caught up. We built the regulations around his insight. We never built the architecture.
Protection is defensive. It assumes the data exists somewhere: in a vendor’s database, on a cloud provider’s infrastructure, in a hospital’s EHR system. The goal is to put controls on top to limit what happens to it. The entity who owns the data is a beneficiary of those controls. A recipient of protection. A data subject, in the language of European law. A consumer, in California’s. The language itself tells the story: a subject is acted upon, a consumer receives what’s offered. Neither word describes someone who decides.
Ownership is structural. It doesn’t start with “how do we limit access.” It starts with “who has the authority to grant it.” The entity who owns the knowledge isn’t being protected. They’re making decisions.
That distinction changes everything downstream.
Protection alone creates dependency
When privacy is framed as protection, someone has to do the protecting. A vendor. A platform. A compliance team. A regulation.
Bruce Schneier laid essential groundwork: surveillance is the business model of the internet, and data is the pollution problem of the information age. His analysis of power asymmetries between individuals and institutions pointed toward a structural problem that regulation alone couldn’t solve. The protection frameworks that followed his analysis have been genuinely important — they’ve constrained the worst abuses and established that data handling carries legal consequences. But even with those frameworks in place, the entity whose data it is (the person, the team, the organization) needs someone else to keep a promise. A Terms of Service. A law. An auditor. And when any of those fail (when the vendor gets acquired, when the government issues a subpoena, when the compliance team misses a configuration) the entity has no recourse. Because they were never in control. They were being protected.
Schneier himself acknowledged the structural trap: it’s insufficient to protect ourselves with laws; we need to protect ourselves with mathematics. That line points toward the right answer. But it leaves a question open: mathematics in service of what? If the math is deployed by the same platforms that hold the data, it’s still protection. The platform chose to encrypt. The platform can choose to stop. The structural position of the user hasn’t changed.
Mathematics in service of ownership, where the keys are held by the entity who owns the data and not by the platform that stores it, is a fundamentally different architecture. Schneier opened the door to this question. What’s changed since 2015 is the urgency: the question now isn’t whether to use math instead of law. It’s who holds the keys.
Sovereignty without ownership is still someone else’s decision
The data sovereignty conversation in 2026 is still inside the protection frame. The question has evolved from “where does the data physically live” to “which jurisdiction’s laws govern it.” That’s progress. But it’s still a question about who protects you, not about who decides.
Samm Sacks, a Senior Fellow at New America and Yale Law School’s Paul Tsai China Center and one of the foremost researchers on cross-border data governance, has been documenting this tension for over a decade. Her recent Lawfare piece on China’s agentic AI controversy surfaces what I think is the most important question in privacy right now. And it’s not the question most people take away from the article.
ByteDance’s Doubao AI phone, the first smartphone with an AI agent fused into the operating system, sparked a national reckoning in China. The agent has system-level permissions that make it indistinguishable from the user. It can read your screen, navigate your apps, tap buttons, access your bank balance. Chinese banks blocked it. WeChat blocked it. Citizens posted viral videos of their financial data appearing on other devices. Legal scholars questioned whether consent, purpose limitation, and minimization can survive contact with AI agents that autonomously cross every boundary those concepts were built to enforce.
But here’s the part nobody is talking about enough. Sacks raises a question buried in the debate that reframes the whole thing: since the data belongs to the owner of the device, and the owner has already tasked the agent to act on their behalf, does the agent even need additional permission from the apps it accesses? By this logic, user intent is already granted. You bought the phone. You turned on the agent. You asked it to book dinner and pay the bill. That is the permission.
And the protection framework has no idea what to do with this.
Data protection law is built around the idea that permission is something you give to someone else (a platform, a vendor, an app) and they are then bound by the terms of that permission. Consent flows. Privacy policies. Terms of service. Purpose limitation. Every mechanism assumes a transaction between the data subject and some external processor. But when the agent is you — when it operates with your credentials, on your device, at your direction — the permission model inverts. You’re not granting access to a third party. You’re exercising your own authority over your own data through a proxy.
The question isn’t whether you consented. It’s whether the architecture recognizes that the data was yours to direct in the first place.
This is exactly where protection and ownership need each other — and where having only one of them fails. In a protection-only framework, the response is more consent dialogs, more restrictions, more regulation. Chinese scholars are already proposing dynamic consent, mandatory suspensions of agent control for sensitive transactions, risk-graded processing rules. These aren’t wrong — some of them are genuinely necessary for cases involving sensitive transactions or vulnerable users. But they can’t be the whole answer. In an ownership framework, the foundation is simpler: the owner already decided. The infrastructure should enforce their decision, not second-guess it.
Signal President Meredith Whittaker warned at SXSW in March 2025 that agentic AI is “threatening to break the blood-brain barrier between the application layer and the OS layer.” She’s describing the same structural problem from the other side: an agent that needs root-level permission across every app and database on your device, processing data in the clear because there’s no model that can do that work encrypted, almost certainly routing through a cloud server. And her concern cuts deeper than security. It’s that the combination of root-level access and cloud processing may make meaningful ownership structurally harder to achieve, not just harder to protect. That’s the right challenge. Agentic AI forces the question: is the user directing their own data, or is the platform deciding on their behalf? The protection framework alone can’t answer that. It wasn’t designed to.
Ownership completes the architecture
If the entity who owns the knowledge holds the cryptographic keys — if the infrastructure is architected so that the platform literally cannot read the data without the owner’s active participation — then the protection conversation changes. Not because protection becomes unnecessary, but because it stops being the only layer.
You don’t need a regulation to prevent the vendor from reading your data if the vendor mathematically cannot read your data. You don’t need a DPA with your cloud provider if the cloud provider stores ciphertext it can’t decrypt. You don’t need to trust a privacy policy if the architecture makes the policy’s promises irrelevant.
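To make that concrete, here’s a minimal sketch in Python using the off-the-shelf cryptography library. The StorageProvider class is something I’m inventing for illustration, not any vendor’s API; the only point is that the store holds bytes it cannot interpret, and the key never leaves the owner.

```python
from cryptography.fernet import Fernet


class StorageProvider:
    """Stands in for a vendor or cloud store: it holds bytes it cannot interpret."""

    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, name: str, blob: bytes) -> None:
        self._blobs[name] = blob  # ciphertext in, ciphertext out

    def get(self, name: str) -> bytes:
        return self._blobs[name]


# The owner generates and keeps the key; it never leaves their side.
owner_key = Fernet.generate_key()
owner = Fernet(owner_key)

store = StorageProvider()
store.put("note-1", owner.encrypt(b"clinical note: follow up in 6 weeks"))

# The provider can replicate, back up, or hand over "note-1" under subpoena;
# without owner_key it stays opaque bytes.
plaintext = owner.decrypt(store.get("note-1"))
print(plaintext.decode())
```

Everything interesting lives in where owner_key lives, not in what the provider promises to do with the blob.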
The protection doesn’t go away. It becomes a byproduct of the architecture instead of a layer bolted on top of it. Regulations still matter — for enforcement, for accountability, for the cases where human behavior breaks what the architecture can’t prevent. But the structural risk is handled by the architecture, which gives the regulations a stronger foundation to build on.
This is the question the architecture has to answer. The technical details are a separate conversation, one I’ve written about in The Unsolved Search, which details how per-entity cryptographic transformations can make multi-tenant semantic search work without the server ever reading the data. But the point for this argument is that ownership isn’t aspirational. It’s buildable. The gap is engineering, not physics.
Privacy as a right to expose
Privacy is not about hiding. It never was. Go back to Westin’s definition: “determine for themselves when, how, and to what extent information about them is communicated to others.” The emphasis is on determine for themselves. Not on withholding. On choosing.
Privacy is the right to decide what to reveal, to whom, under what conditions, and to revoke that decision when the relationship changes.
A doctor who shares clinical notes with a specialist isn’t sacrificing privacy. They’re exercising it. A deliberate choice about what knowledge to make visible, to a specific person, for a specific purpose. A company that gives an AI agent a cryptographic key to work over internal documents for 24 hours, and revokes it automatically at hour 25, isn’t compromising privacy either. They’re using it. The privacy isn’t in the withholding. It’s in the choosing.
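Here’s a hedged sketch of what that 24-hour grant could look like. The Grant object and its fields are illustrative assumptions, not an existing protocol; the point is that expiry and revocation are checked at the moment the key is released, so the window closes by construction rather than by policy.

```python
import time
from dataclasses import dataclass

from cryptography.fernet import Fernet


@dataclass
class Grant:
    data_key: bytes        # key the agent may use while the grant is live
    expires_at: float      # unix timestamp, e.g. now + 24 hours
    revoked: bool = False  # the owner can flip this at any time

    def key_for(self, now: float | None = None) -> bytes:
        """Release the key only while the grant is live; refuse otherwise."""
        now = time.time() if now is None else now
        if self.revoked or now >= self.expires_at:
            raise PermissionError("grant expired or revoked")
        return self.data_key


# Owner side: mint a fresh data key for this delegation, scoped to 24 hours.
data_key = Fernet.generate_key()
grant = Grant(data_key=data_key, expires_at=time.time() + 24 * 3600)

# Agent side: the agent can work over documents only while the grant is live.
agent = Fernet(grant.key_for())
memo = agent.encrypt(b"internal planning memo")

# Hour 25 (simulated): the same request now fails. Setting grant.revoked = True
# closes the window early in exactly the same way.
try:
    grant.key_for(now=grant.expires_at + 3600)
except PermissionError as exc:
    print(exc)
```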
This gets harder when knowledge is co-created. A clinical note reflects both the patient’s condition and the clinician’s judgment. A conversation belongs to both parties. Ownership in these cases can’t resolve to a single key holder. The architecture has to support shared authority with independent revocation, not just individual control. That’s a harder problem than the pure single-owner case, and the ownership frame needs to solve it rather than assume it away. That’s a separate essay.
The problem with today’s infrastructure is that the choice doesn’t exist. Once data enters a system, the system decides what happens to it. You can consent to the initial collection, but you can’t control the downstream use. You can request deletion, but you can’t verify it happened. You can ask for a copy of your data, but you get a dead-end export that you can’t plug into anything else.
The EU Data Act, effective September 2025, introduced rights for users to access and port data generated by connected devices and prohibited vendor lock-in. That’s a step. But it’s still law telling platforms what they must allow. Still protection, enforced from the outside.
Genuine privacy means the entity holds the keys. Not metaphorically. Literally. And when they want to share knowledge — with a doctor, with a collaborator, with an AI, with a platform — they open a door. A door that only they can open, that they can close when they’re done, and that stays closed if they walk away.
That’s not protection. That’s agency.
Why this matters now
The EU AI Act goes into full effect in August 2026. On Friday, the White House released a national AI policy framework calling on Congress to preempt state AI laws with a single federal standard. Both documents address the same question: who regulates AI, and how. Neither asks who owns the data it runs on. Every enterprise in the world is about to be forced to think about data governance in the context of AI systems. The default conversation will be compliance: what are the rules, how do we follow them, what are the penalties if we don’t.
But the deeper question — the one that will actually determine whether AI creates value or liability — is this: when an AI system reasons over your data, who decided it could?
This isn’t hypothetical. The Doubao phone answered it one way: if you use the phone, you decided. Every agentic AI system shipping in 2026 will have to answer it. The wave is already here: autonomous coding agents, research assistants that browse and summarize without asking, scheduling tools that read your email and calendar and act on both. Each one crosses consent boundaries that existing frameworks weren’t built to handle. And the answer can’t come from a consent dialog. It has to come from the architecture.
The missing layer
Twenty-five years of building data systems taught me something I couldn’t see until I stepped back far enough.
Every data play (every platform, every health information exchange, every interoperability standard, every consent management tool) is trying to solve the access problem. How do we get the right data to the right people at the right time while keeping it away from the wrong people?
The actual problem is the ownership problem. Who has the right to make the decision about what gets exposed?
Solve ownership and access resolves itself. Not completely, because disputes over emergency use, fiduciary obligations, and shared records will always require governance. These aren’t edge cases. They’re where the protection framework is most justified and most necessary, and any ownership architecture has to be designed to work alongside them rather than wish them away. But structurally. The owner opens the door. The owner closes the door. The infrastructure enforces the decision cryptographically, not contractually. Not because a privacy policy promises it. Because the math makes any other outcome impossible.
This narrows the threat model. It doesn’t eliminate it. Cryptographic ownership protects against the platform misusing data, but not against coercion, social engineering, or the owner making poor delegation choices. The remaining threats are about human behavior, not architectural betrayal. An architecture that handles the structural risk honestly is a better foundation for addressing the human risk than one that pretends the structural risk doesn’t exist.
The strongest objection to this is obvious, and worth taking seriously: most people don’t want to manage cryptographic keys. Protection frameworks exist precisely because most people prefer delegation. They want a bank, a hospital, a platform to handle the complexity for them. Ownership-first architecture, the argument goes, trades one dependency for another: instead of depending on the platform to keep its promises, you depend on the user to manage their keys, handle delegation, and not lose access to their own data.
This objection is correct about the problem and wrong about the conclusion. The answer isn’t to abandon ownership because key management is hard. It’s to build the infrastructure so that the complexity of ownership (key management, delegation hierarchies, revocation protocols, agent authorization, recovery mechanisms) is handled well enough that the user’s structural position changes even if their daily experience doesn’t.
You don’t think about TLS when you load a website. The architecture handles it. The trust model is different — TLS delegates authority to certificate authorities, not to you — but the experience should be that seamless. Making cryptographic complexity invisible to the person who actually holds the authority is itself a design problem worth taking seriously.
You can delegate custody without delegating authority. The difference is whether the entity you delegate to can override your decisions, or whether they’re constrained by the architecture to execute them. Today’s infrastructure doesn’t make that distinction. It should.
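One way to picture that distinction, as a sketch assuming simple envelope encryption (none of these names refer to a real product): the custodian stores the ciphertext and a wrapped data key, but unwrapping requires the owner’s key, so the custodian can store, back up, and serve without ever being able to override the owner’s decision.

```python
from cryptography.fernet import Fernet

# The owner's root key stays with the owner (a device, an HSM, a passphrase-derived key).
owner_key = Fernet.generate_key()
owner = Fernet(owner_key)

# A per-dataset data key is wrapped under the owner's key before anything is handed over.
data_key = Fernet.generate_key()
wrapped_key = owner.encrypt(data_key)
ciphertext = Fernet(data_key).encrypt(b"shared project notes")

# The custodian holds both pieces and can read neither.
custodian = {"wrapped_key": wrapped_key, "blob": ciphertext}

# Access requires the owner to unwrap: custody (storage, backup, availability)
# is delegated; authority (the ability to open the data) is not.
unwrapped = owner.decrypt(custodian["wrapped_key"])
notes = Fernet(unwrapped).decrypt(custodian["blob"])
print(notes.decode())
```

Refusing to unwrap for a given request, or rotating the owner’s key, is how the owner closes the door; the custodian keeps doing its job either way.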
The right to expose is the right that matters. Everything else follows from it.