At 22, I walked out of grad school to start Apart Research - now the fastest path to impact for AI safety research worldwide. Apart is accelerating research that matters: 20+ papers, award-winning benchmarks, and over 4,000 hackers building the future.
Most recently, I co-launched Seldon to fund the infrastructure humanity needs to survive what’s coming. First bets: Andon Labs, Lucid Computing, Workshop Labs, and Asymmetric Security.
Posts
- AGI Security: Rethinking Society's Infrastructure (58 recent reads)
- "Surrounded by Enemies" (Kringsatt av Fiender), or "To The Youth" (Til Ungdommen) English Translation (38 recent reads)
- "Is The Light Merely For The Learned?" (Er Lyset For De Lærde Blot?) English Translation (35 recent reads)
- How to host a hackathon (21 recent reads)
- My Tools (20 recent reads)
- Mediocre Intelligence, Outstanding Results (19 recent reads)
- To An 18 Year Old Me (18 recent reads)
- AGI Privacy (16 recent reads)
- Sentware (15 recent reads)
- Vibe Coding: Taste Is The Last Frontier (13 recent reads)
- Raising for the Endgame: An AI Safety Founder's Primer (11 recent reads)
- Cybermorphism (10 recent reads)
- The concrete risks of AI misdeployment (9 recent reads)
- Go on a yearly hiatus (9 recent reads)
- Learning From Architecture (7 recent reads)
- The AGI endgame (5 recent reads)
- Don't Give Up (3 recent reads)
- The AI Bubble (3 recent reads)
- "Just do the thing" (2 recent reads)
- Let's make AI agents safe (2 recent reads)
- Audiovisual reading (2 recent reads)
- The Expensive Internet hypothesis (1 recent read)
- Confident optimism (0 recent reads)
Research
News
- [Interview] AI Safety From the Frontlines
- [VentureBeat] Beyond sycophancy: DarkBench exposes six hidden ‘dark patterns’ lurking in today’s top LLMs
- [Information] Our freedom of thought is under attack from the subtle manipulation of chatbots (Vores tankefrihed er under angreb fra chatbotternes subtile manipulation)
- [Die Zeit] “AI will soon be able to take over your job”
- [For Humanity Podcast] Dark Patterns in AI and Creating Secure Infrastructure
- Engineering a World Designed for Safe Superintelligence (2025)
- AI Futures & Our Paths Forward (2025)
- AI Safety Startups (2024)
- Challenges and Solutions for AI Security (2024)
- Testing Your Suitability For AI Alignment Research (2023)
- AI Safety Research Outside the Hubs: A Guide for Aspiring Researchers (2023)
- Intro to Interpretability (2022)
- Series on my Brain-Computer Interface (BCI) research project (2021)
Socials

For journalists: Bio & images