With narrow AI, the score is out of reach: the system can only take a reading of it. What is much worse is that an AGI's reward definition is likely to be designed to include humans directly, and that is extraordinarily dangerous. For any reward definition that incorporates feedback from humanity, the AGI can discover paths that maximise its score by modifying humans directly, and those paths can be surprising and deeply disturbing.

submitted by /u/Just-Grocery-2229