Don’t Give Up

Today, I saw this tweet from davidad which prompted me to write this post.

[Tweet from davidad]

I’ve chatted with many founders through my partner position at Juniper Ventures and my work at Seldon, the YC for AI safety. These are the kind of people I would trust to change the world for the better. Two topics regularly come up:

  1. Whether their work can make a difference, and
  2. Whether they can reach the level of ambition needed to really make a difference

The first one especially worries me. There’s a tendency to say, “since AGI is arriving soon, I should probably just do something fun and start a high-growth tech company in AI wrapper land to earn a lot of money,” which is the opposite of what I’d like to see.

And it seems to come from a pervasive place…

The culture of AI safety

AI safety was originally shaped by contrarian academics and internet philosophers. People entering the field think about the risks of a new idea before they consider its opportunities. Additionally, there has been a tendency toward self-sacrifice and unhealthy overwork on visionary projects that tend to be unrealistic (i.e. academic).

This culture seems to persist to this day within the Berkeley sphere of AI safety[1]. There are regularly amazing, interesting, and highly impactful projects emerging from this community; however, the culture itself quickly settles on work that is either too big-brained or too unambitious.

I have a lot of respect for the D.C. crowd, Matt Clifford et al. in the UK, and Yoshua Bengio et al., because they translate these Berkeley insights into real, large-scale, and impactful action.

But there isn’t a space for founders, or for others who can truly take action, to pursue this Builder + Doomer culture besides staying close to these actors themselves. The closest might be LISA in London, but even that is slightly too junior at this stage to have the impact they’d want.

If I were to describe the culture that is needed, it is:

Bell Labs meets early YC meets Skunk Works meets the Apollo Program PLUS the vibe that superintelligence is here soon and we need to rebuild society, all digital infrastructure, and everything physical to accommodate this change.

We’re hoping to build this with Seldon in San Francisco, and we’re currently gathering the community at AISFounders.com if you want to join.

Let me give an example of how such a conversation might go:

A case

A couple of weeks ago, I chatted with Chris Canal, a successful AI safety startup founder with 10+ full-time employees who currently has a lot of impact working directly with METR and the AI Safety Institute.

He was considering whether to pivot to a very profitable client on an otherwise unimpactful project, on the grounds that we’re too late to make a real difference on existential risk anyway.

We discussed this in the context of a talk I had given where I laid out the case for actually building the new world we’re going to live in, one where we can feel secure with superintelligent AI agents in every piece of our systems.

My main point was that if we’re heading towards a completely unpredictable world, we should do the most exciting thing, which is AI safety.

In the Berkeley culture, doing AI safety is not exciting, not awesome, and not interesting. It is simply a necessary fact of the world. Optimism cannot thrive there, and if you are not optimistic, you have a much harder time solving massive problems.

So, let’s be excited! Let’s do the most interesting thing and build the new world.

A culture for AI cypherpunks

The culture that seems most related to this shift (but is also slightly too constraining) is the cypherpunk culture. It is what I would most closely associate with ARIA’s agenda for AI safety: verifying and cryptographically securing our infrastructure to prepare for generally intelligent AI. Per the Wikipedia page, cypherpunks advocate for

“the widespread use of strong cryptography and privacy-enhancing technologies as a means of effecting social and political change”

During a recent discussion with prominent cypherpunk Mark Miller in Berkeley, he laid out the view much as I would put it myself: we need to design our digital systems to ensure the autonomy and security of intelligent agents, humans and AIs alike.

A culture for founders

While cypherpunk is close, it is not nearly all-encompassing enough.

Digital infrastructure isn’t the only thing we have to change. We need to build a culture where founders can pursue the ambition necessary for world-changing technologies that prepare us for the next stage of intelligent life.

That is my plea to you, founder, rationalist, and AI safety activist.

LFG, builders of the future.

Don’t Give Up.

  1. Much of modern AI safety research happens in a few offices near downtown Berkeley and includes organizations such as OpenPhil (a major funder of AI safety), MIRI, Palisade Research, METR, FAR AI, and many others. Each of the offices contains multiple organizations and has a different culture, with FAR AI in my view being the most outcome-oriented.

Discuss this article on X