This page is an evergreen document highlighting the biggest guiding beliefs I hold. It serves as a reference that provides context when I write posts on these topics.
Beliefs
- Machine intelligence will become an existential risk to humanity within the 21st century: There is no hard limit to how far the capabilities of machine intelligence can be scaled. This effectively means we are creating a new species with an unbounded competitive advantage over humanity: replicability without lengthy reproduction cycles, the introduction of new genealogies at the push of a button, and genealogical optimization using intelligent-design methods vastly superior to evolution. Historically, when a new, vastly superior species has dominated an ecosystem, the other species have been at its complete mercy. The clearest example is humanity's relationship to the natural world, but the pattern extends to rabbits in Australia grazing so heavily that they displaced existing biodiversity, and to Brown Tree Snakes on Guam wiping out native species after their introduction. Each of these examples happened despite the inefficiency of evolution as an architecture-search algorithm; AI will quickly reach a far greater disparity with its progenitor (humanity) than any of the examples above.