The popular idea seems to be: our AIs are getting more and more capable, and someday they will achieve AGI, which is basically a digital version of a human consciousness that can do everything a human brain can do.
From there, since it can code itself at a human level or better, it will likely enter a positive feedback loop of self-improvement leading to ASI.
But why would an AGI ever need to be able to do everything a human brain can do? Our brains have so many complex nuances and vestiges left over from evolution and from being primates that a close match in thinking is unlikely to ever arise in silicon.
It seems to me that an AI only needs a relatively basic grasp of human thinking, and more importantly, to be really good at coding and AI architecture. It only has to surpass us at those things in order to start a runaway intelligence effect, right? And from there, its type of intelligence will certainly never become what we think of as an AGI.
So to me it seems like an AGI will never really exist, because a super-coding AI will become ASI first.
submitted by /u/spongue