If Anyone Builds It, Everyone Dies

A moderately accessible introduction to the very real risks of superintelligent AI.
Yudkowsky is not known for being concise. He’s very passionate, and has been extremely prolific online for decades, particularly in those corners of the Internet where you have to caveat and explain every last thing to an army of skeptics on the spectrum.
The existential risks around AGI can come off as some paranoid, Hollywood-influenced rant if you don't know the background. Add to that the fact that, outside a small circle in the industry, few people understand how these systems work, let alone how quickly things can change.
So the authors try to strike a balance between providing enough background and context to follow the argument (and address common rebuttals) and keeping the book to a reasonable length.
Unfortunately, I think they failed: the book is too long for the audience they need to reach. I enjoyed it, but I'm by no means part of that new audience.
After 12 chapters, including a full fictional narrative of how ASI could lead to human extinction (which I enjoyed, for what it's worth), we finally get to their policy recommendations. And even then, the proposal runs on for several pages; I'm eliding it here to keep things relatively brief:
All the computing power that could train or run more powerful new AIs, gets consolidated in places where it can be monitored by observers from multiple treaty-signatory powers, to ensure those GPUs aren’t used to train or run more powerful new AIs…
… So the safest bet would be to set the threshold low—say, at the level of eight of the most advanced GPUs from 2024—and say that it is illegal to have nine GPUs that powerful in your garage, unmonitored by the international authority
One of the book's techniques is to move a lot of the tangents and more detailed rebuttals to a dedicated website, which provides even more extensive endnotes. Ironically, the made-for-completionists companion content for Chapter 13 ends up feeling more direct than the book itself.
This was certainly a really challenging book to write, and I have no illusions that I could have done any better. Yudkowsky and Soares are extremely knowledgeable and credible here, but I think the book would have benefited from a collaboration with an elite, New Yorker-caliber writer to really succeed.
Regardless, I absolutely recommend reading it. Even if you don't believe in the risks, you'll almost certainly learn something.