At 22, I walked out of grad school to start Apart Research - now the fastest path to impact for AI safety research worldwide. Apart is accelerating research that matters: 20+ papers, award-winning benchmarks, and over 4,000 hackers building the future.
Most recently, I co-launched Seldon to fund the infrastructure humanity needs to survive what's coming. First bets: Andon Labs, Lucid Computing, Workshop Labs, and Asymmetric Security.
Posts
- AGI Security: Rethinking Society's Infrastructure (39 recent reads)
- "Is The Light Merely For The Learned?" (Er Lyset For De Lærde Blot?) English Translation (33 recent reads)
- The concrete risks of AI misdeployment (33 recent reads)
- "Surrounded by Enemies" (Kringsatt av Fiender), or "To The Youth" (Til Ungdommen) English Translation (28 recent reads)
- To An 18 Year Old Me (26 recent reads)
- My Tools (17 recent reads)
- How to host a hackathon (16 recent reads)
- Mediocre Intelligence, Outstanding Results (14 recent reads)
- AGI Privacy (12 recent reads)
- Cybermorphism (12 recent reads)
- Confident optimism (9 recent reads)
- Raising for the Endgame: An AI Safety Founder's Primer (8 recent reads)
- Let's make AI agents safe (6 recent reads)
- Learning From Architecture (5 recent reads)
- The AGI endgame (5 recent reads)
- Go on a yearly hiatus (5 recent reads)
- Vibe Coding: Taste Is The Last Frontier (4 recent reads)
- The AI Bubble (4 recent reads)
- Sentware (4 recent reads)
- Audiovisual reading (4 recent reads)
- Don't Give Up (3 recent reads)
- "Just do the thing" (3 recent reads)
- The Expensive Internet hypothesis (2 recent reads)
Research
News
- [Interview] AI Safety From the Frontlines
- [VentureBeat] Beyond sycophancy: DarkBench exposes six hidden "dark patterns" lurking in today's top LLMs
- [Information] Our freedom of thought is under attack from the subtle manipulation of chatbots
- [Die Zeit] "AI will soon be able to take over your job"
- [For Humanity Podcast] Dark Patterns in AI and Creating Secure Infrastructure
- Engineering a World Designed for Safe Superintelligence (2025)
- AI Futures & Our Paths Forward (2025)
- AI Safety Startups (2024)
- Challenges and Solutions for AI Security (2024)
- Testing Your Suitability For AI Alignment Research (2023)
- AI Safety Research Outside the Hubs: A Guide for Aspiring Researchers (2023)
- Intro to Interpretability (2022)
- Series on my Brain-Computer Interface (BCI) research project (2021)
Socials

For journalists: Bio & images