
Research Paper

  • AI 2027 Paper
  • Authors: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean

Scenario Takeaways

  1. By 2027, we may automate AI R&D, leading to vastly superhuman AIs (“artificial superintelligence” or ASI). In AI 2027, AI companies create expert-human-level AI systems in early 2027 that automate AI research, leading to ASI by the end of 2027.
  2. ASIs will dictate humanity’s future. Millions of ASIs will rapidly execute tasks beyond human comprehension. Because they’re so useful, they’ll be widely deployed. With superhuman skill at strategy, hacking, weapons development, and more, the goals of these AIs will determine the future.
  3. ASIs might develop unintended, adversarial “misaligned” goals, leading to human disempowerment. In AI 2027, humans voluntarily give autonomy to seemingly aligned AIs. Everything looks to be going great until ASIs have enough hard power to disempower humanity.
  4. An actor with total control over ASIs could seize total power. If an individual or small group aligns ASIs to their goals, this could grant them control over humanity’s future. In AI 2027, a small committee has power over the project developing ASI. They could attempt to use the ASIs to cement this concentration of power. After seizing control, the new ruler(s) could rely on fully loyal ASIs to maintain their power, without having to listen to the law, the public, or even their previous allies.
  5. An international race toward ASI will lead to cutting corners on safety. In AI 2027, China is just a few months behind the US as ASI approaches, which pressures the US to press forward despite warning signs of misalignment.
  6. Geopolitically, the race to ASI will end in war, a deal, or effective surrender. The leading country will by default accumulate a decisive technological and military advantage, prompting others to push for an international agreement (a “deal”) to prevent this. Absent a deal, they may go to war rather than “effectively surrender”.
  7. No US AI project is on track to be secure against nation-state actors stealing AI models by 2027. In AI 2027, China steals the US’s top AI model in early 2027, which worsens competitive pressures by reducing the US’s lead time.
  8. As ASI approaches, the public will likely be unaware of the best AI capabilities. The public is months behind internal capabilities today, and once AIs are automating AI R&D, a few months’ time will translate to a huge capabilities gap. Increased secrecy may widen the gap further. This will leave little oversight over pivotal decisions made by a small group of AI company leadership and government officials.

Note: The research paper is not a publication of OpenAI, but it draws on the collective experience of former OpenAI employees.

submitted by /u/jstnhkm
