A while ago I got obsessive about philosophical stuff again and wound up reading this article; this section in particular drew my attention (emphasis mine):
Land once politely called me a monkey socialist for saying we needed to make sure the whole system continues to serve human purposes. To him, such proposals are just a futile drag on the process of capital intelligence escaping any higher purpose than its own acceleration. Even if my monkey socialism worked, it would just be an embrace of permanent stupidity. But the cat is already out of the bag and the draw of intelligence acceleration is too strong for human agency to resist or even fully comprehend. His answer to the “should” question is to shrug or even cheer it on. We mildly uplifted apes are just a temporary host for the higher power of AI-capitalism coming into being using our brains and resources. Our delusions about agency are cute at best, or more realistically, part of the fraud. In retrospect, he was right about most of this, and I was wrong.
I don’t like it, but it’s lived rent-free in my head ever since, resurfacing whenever people bring up AGI safety. I want to agree that we should protect ourselves from AGI but can’t, because one of the smartest philosophers to ever live (supposedly) said otherwise, and knowledgeable people agree with him. Maybe there are good counterarguments that I just can’t understand, though. What do folks here think?
submitted by /u/littleborb