AGI Privacy


Yesterday, I was discussing AGI governance with a founder working on AI safety infrastructure when they mentioned something concerning: the same systems we’re building to monitor and reduce AGI misuse could easily become the perfect tools for controlling humans.

This isn’t some distant sci-fi concern. Right now, today, major AI companies are implementing monitoring systems for their most capable models. Governments are drafting comprehensive AI oversight legislation. We’re literally watching the early infrastructure for AGI control take shape in real-time.

The question isn’t whether we need to monitor powerful AI systems. Given their potential for catastrophic misuse, from bioweapons to financial manipulation, oversight is inevitable. The question is whether we can build this infrastructure without accidentally creating a surveillance state that would make Big Brother jealous.

AGI Privacy is the socio-technical question of how we maintain personal autonomy while ensuring AGI remains a secure technology.

The Infrastructure is Already Here

Map of the Five Eyes alliance and the global surveillance infrastructure

Having grown up in Denmark, one of the world’s most surveilled democracies, I’ve seen firsthand how digital infrastructure built for convenience becomes a foundation for control. Our MitID system, digital government services, and comprehensive data integration create remarkable public services.

Personally, I’m fine with Danish mass surveillance because I trust Denmark as a government. The state goes to great lengths to protect data, under requirements far stricter than what I know researchers in the US are held to. But a state still has a monopoly on violence, and that changes the calculation.

Now scale this globally. The Five Eyes alliance already maintains fiber-optic cable taps that can store every communication for days and metadata for months. China operates over 200 million AI-powered cameras. The NSA’s XKEYSCORE system provides real-time access to internet traffic from 150 global collection sites.

This existing surveillance infrastructure provides the perfect foundation for AGI control. The networks designed to monitor human communications can easily be repurposed to monitor AI-human interactions.

Current legal frameworks for human surveillance already provide precedent for monitoring human use of AI systems. We’re not building AGI governance systems from scratch. We’re building them on top of an extensive surveillance apparatus.

Why This Should Terrify Anyone Working in AI

The way social media has enabled mass manipulation should be the first indication of the default path we are on. It is not just the threat of violence from authoritarian states against dissidents that should worry you. It is the effect of monitoring itself.

When people know they’re being monitored, they self-censor, avoid creative risks, and conform to perceived expectations. Following the Snowden revelations, Wikipedia saw a 20% decline in views of terrorism-related articles. People stopped researching legal topics simply because they knew someone might be watching.

Additionally, academic researchers report avoiding certain topics and international collaborations out of surveillance concerns. Creative professionals demonstrate measurably lower creativity when they know they’re being observed.

This isn’t new. Orwell wrote about it in “Politics and the English Language” and gave it narrative form in “1984”: the fear of being judged leads to intellectual conformity and the death of original thought. The chilling effect isn’t just about avoiding illegal activity. It’s about the slow strangulation of curiosity itself.

As an AI researcher, you’ve probably experienced self-censoring because of your employer or because of your social circle’s perspectives on AGI.

And this gets even worse when we research sensitive areas like AGI security, consciousness, human cognition, and social dynamics. It is crucial that we avoid self-censoring our way into compliance with established systems, leaving ourselves unable to escape the status quo.

This worry extends even to the level of the state AI race: a government that does not allow freedom of thought and individuality may not be able to do the creative work necessary to build AGI.

As I discussed in my post on making AI agents safe, we need robust testing and oversight of all AI systems. But if we’re not careful, the infrastructure we build for safety could become the infrastructure for oppression.

The Technical Path Forward

Centralized surveillance vs. distributed verification

The good news? We already know how to solve this. The internet’s own security evolution provides the perfect model.

When Netscape introduced SSL in the mid-‘90s, it created client-server encryption that secured trillions of transactions while remaining completely transparent to users. This is the model we need for AGI governance: verification openly built into infrastructure, not surveillance bolted on afterward.

Here’s what privacy-preserving AGI control can look like:

Zero-knowledge safety verification: AI systems prove they passed safety evaluations without revealing the evaluation criteria or their internal reasoning, much like proving you hold a valid driver’s license without handing over any other personal information (a minimal sketch follows this list).

Independent auditing before activation: Before an AI system executes potentially dangerous tasks, run it through outcome-verification systems that check safety constraints without accessing user data or system internals. Results can be deleted after the check, or privacy can be preserved algorithmically, so auditors confirm safety without ever seeing the content.

Granular permission control: Instead of giving AI systems broad access that requires constant monitoring, we build fine-grained permission systems: like smartphone app permissions, but designed for AI systems that could cause catastrophic damage (sketched below, together with distributed verification).

Distributed verification networks: Rather than centralized monitoring by single entities, we create networks of independent verifiers that provide oversight through consensus without any single party having surveillance capabilities.
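To make the zero-knowledge verification idea above more concrete, here is a minimal Python sketch of the workflow, assuming a simple hash commitment in place of a real zero-knowledge proof system. The names (SafetyAttestation, the eval suite labels, and so on) are hypothetical illustrations, not an existing standard; a production system would substitute an actual ZK proof that the hidden result satisfies the safety policy.

```python
import hashlib
import secrets
from dataclasses import dataclass

# Illustrative sketch only: a hash commitment lets an evaluator attest
# "this model completed evaluation suite E with result R" and later open the
# commitment to an auditor, without publishing R up front. It is NOT a real
# zero-knowledge proof; a production system would replace the commitment with
# a ZK proof that the hidden result satisfies the safety policy.

@dataclass
class SafetyAttestation:
    model_id: str
    eval_suite: str   # e.g. "autonomy-eval-v1" (hypothetical suite name)
    commitment: str   # sha256(model_id | eval_suite | result | nonce), published openly

def commit(model_id: str, eval_suite: str, result: str) -> tuple[SafetyAttestation, str]:
    """Evaluator commits to a result without revealing it; keeps the nonce private."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{model_id}|{eval_suite}|{result}|{nonce}".encode()).hexdigest()
    return SafetyAttestation(model_id, eval_suite, digest), nonce

def verify_opening(att: SafetyAttestation, result: str, nonce: str) -> bool:
    """Auditor checks that an opened result matches the published commitment."""
    digest = hashlib.sha256(f"{att.model_id}|{att.eval_suite}|{result}|{nonce}".encode()).hexdigest()
    return digest == att.commitment

if __name__ == "__main__":
    attestation, nonce = commit("model-x", "autonomy-eval-v1", "PASS")
    # The commitment can be published immediately; the opening is shown only to auditors.
    assert verify_opening(attestation, "PASS", nonce)
    assert not verify_opening(attestation, "FAIL", nonce)
```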
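And here is an equally simplified sketch of granular permissions combined with distributed verification: an action only runs if it falls within the agent’s explicitly granted scopes and at least two of three independent verifiers approve it. The scope names, the 2-of-3 threshold, and the verifier policies are all made up for illustration, not taken from any existing framework.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentManifest:
    """Hypothetical per-agent permission manifest, in the spirit of app permissions."""
    agent_id: str
    granted_scopes: set[str] = field(default_factory=set)

def permitted(manifest: AgentManifest, requested_scope: str) -> bool:
    """Fine-grained check: an action must be covered by an explicit grant."""
    return requested_scope in manifest.granted_scopes

# A "verifier" is any independent party that inspects a proposed action and
# votes yes/no. In a real network these would be separate organizations with
# their own policies, so no single party can allow, block, or surveil alone.
Verifier = Callable[[str, str], bool]

def approved(action: str, scope: str, verifiers: list[Verifier], threshold: int) -> bool:
    """Distributed verification: require `threshold` independent approvals."""
    votes = sum(1 for verifier in verifiers if verifier(action, scope))
    return votes >= threshold

if __name__ == "__main__":
    manifest = AgentManifest("research-agent", granted_scopes={"read_files"})
    verifiers: list[Verifier] = [
        lambda action, scope: scope != "spend_money",  # verifier A: never allow spending
        lambda action, scope: "rm -rf" not in action,  # verifier B: crude content check
        lambda action, scope: True,                    # verifier C: approves everything
    ]
    action, scope = "summarize ./notes.txt", "read_files"
    allowed = permitted(manifest, scope) and approved(action, scope, verifiers, threshold=2)
    print("allowed" if allowed else "blocked")
```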

Where Privacy and Safety Actually Align

The fascinating thing about privacy-preserving AGI control is that privacy and safety are not opposing forces. They’re complementary. When you force surveillance systems to be cryptographically verifiable and distributed, you actually get better safety properties, similar to the modern internet:

No single point of failure: Distributed verification is more robust than centralized monitoring.

Tamper resistance: Cryptographic verification is harder to game than human oversight.

Reduced insider threats: Centralized surveillance systems are incredibly vulnerable to abuse from within. Distributed systems eliminate single points of control that can be compromised.

Transparency without surveillance: Open-source verification tools provide accountability without creating surveillance infrastructure.

Innovation protection: Researchers can work freely while still enabling necessary safety oversight.

The best security systems protect both users and society without creating new attack vectors.

Next Steps for Research to Avoid the AGI Surveillance Trap

If you’re working in AI safety, here are concrete research directions that could help us avoid the surveillance state trap:

Privacy-preserving verification protocols: Develop production-ready zero-knowledge proof systems for AI safety compliance. “HTTPS for AI safety” - protocols that enable input/output verification without surveillance (a minimal certificate-style sketch follows this list).

Decentralized AI governance infrastructure: Build networks for distributed AI oversight that prevent any single entity from gaining surveillance capabilities. This includes consensus mechanisms for safety decisions and cryptographic voting systems for governance.

Privacy-preserving capability evaluation: Extend current evaluation frameworks (like inspect from AISI) to work with encrypted model interactions. We need to test AI capabilities without exposing user data, reasoning traces, or proprietary systems.

Open-source verification tools: Create auditing tools that anyone can use to verify AI safety claims. Transparency through open algorithms instead of hidden surveillance systems.

Regulatory technology for AI: Develop “RegTech” that enables compliance with AI safety regulations through cryptographic means rather than surveillance. Something along the lines of smart contracts for AI governance.
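As a rough illustration of the “HTTPS for AI safety” direction above, here is a sketch in which an accredited evaluator signs a short attestation that a model passed a given eval suite, and anyone can verify that signature offline, much like checking a TLS certificate. It assumes the third-party `cryptography` package, and the certificate fields and suite names are invented for the example rather than drawn from any existing standard.

```python
import json

# Requires the third-party `cryptography` package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def issue_certificate(signing_key: Ed25519PrivateKey, model_id: str, eval_suite: str) -> dict:
    """An accredited evaluator signs a claim that the model passed an eval suite."""
    claim = {"model_id": model_id, "eval_suite": eval_suite, "verdict": "PASS"}
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": signing_key.sign(payload).hex()}

def verify_certificate(evaluator_public_key, certificate: dict) -> bool:
    """Anyone can check the signature offline against the evaluator's public key."""
    payload = json.dumps(certificate["claim"], sort_keys=True).encode()
    try:
        evaluator_public_key.verify(bytes.fromhex(certificate["signature"]), payload)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    evaluator_key = Ed25519PrivateKey.generate()   # held privately by the evaluator
    evaluator_public = evaluator_key.public_key()  # published openly, like a CA certificate
    cert = issue_certificate(evaluator_key, "model-x", "cyber-misuse-v1")
    print(verify_certificate(evaluator_public, cert))  # True
    cert["claim"]["verdict"] = "FAIL"                  # tampering breaks the signature
    print(verify_certificate(evaluator_public, cert))  # False
```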

The Window is Closing

The infrastructure decisions we make in the next 2-3 years will determine whether AGI transforms human freedom or enables unprecedented control. Once surveillance systems are built and normalized, historical precedent shows they’re nearly impossible to dismantle.

But here’s what gives me hope: the technical community can still shape this outcome. Just as cryptographers developed SSL/TLS standards before widespread e-commerce, we can develop privacy-preserving AGI governance before surveillance-heavy alternatives become entrenched.

The best time to build privacy-preserving infrastructure is before you need it. The second-best time is now.

What You Can Do

Developers contributing to shared projects

If this sounds interesting to you, here are a few concrete next steps, so you can stop sitting around and actually do something:

Start building, start contributing: Don’t just implement privacy-preserving verification in your current AI projects; contribute to making existing open-source protocols like the Model Context Protocol (MCP) safer. Build better evaluation frameworks. Start new foundations around new projects. Share your ideas for new open-source consortiums.

Get involved with standards bodies: IEEE, ISO, EU, AISI, and IETF are all working on AI governance standards right now. Here’s the thing - anyone with a decent academic or industry affiliation can contribute to these standards. It’s not some exclusive club. Technical input from people who understand both AI and privacy is desperately needed.

Join the conversation and build community: Whether through Apart Research, AI safety conferences, or technical working groups, we need more people advocating for privacy-preserving approaches. Make sure you write about it if you begin working on it. Share this post or its ideas. Make it the topic of a lecture.

Get involved in $1B+ scale solutions: Open source projects are necessary for trustworthy protocols everyone can use, but we also need large-scale hardware and AGI security companies. If you’re interested, check out Seldon to be part of building tomorrow’s infrastructure.

Support and build open alternatives: When you see privacy-preserving AI tools, use them, contribute to them, help them succeed. When you don’t see them, build them. The market needs to see that privacy-preserving approaches can win.

Think like a cryptographer: Before building any AI oversight system, ask: “How could this be done with zero-knowledge proofs?” Often, the answer leads to better security properties anyway.

The future of AGI governance doesn’t have to be a choice between safety and privacy. With the right technical approaches, we can have both, and we can create systems that are more robust, more trustworthy, and more aligned with human values than pure surveillance could ever be.

But only if we start building immediately.


If your work is pushing this idea forward, I’d love to hear from you. Reach out on X.
