TL;DR: The existing incentive structure makes it impossible to prevent the creation of large machine learning models, as Yudkowsky and others want to do.
Also, keep in mind that the paperclip maximizer scenario is completely hypothetical.
Wow, that post captured so much of my personal musing on macro social systems in such a neat and elegant way. I’ve always framed it as tiers of evolution: biological, social, and now technological, with each tier requiring the previous one to advance to a degree, and each increasing the owner species’ adaptive might within an ecosystem. To fall behind in overall might is to accept eventual extinction, or at best irrelevance. Definitely bookmarking this for future musings.
Yep. While the US is fumbling around trying to figure out whether AI is racist, China is already using it in its legal system and many other areas.