I’ve actually had wide-ranging discussions with several different LLMs, like Claude, ChatGPT, and Gemini, about this subject. I can’t make up my mind, because the answer seems to depend on what level you’re discussing it at. An LLM appears to be an informal system, yet that may only be the appearance of one, since at some level it’s probably applying formal rules in its reasoning. Even if it’s just the underlying matrix manipulation, that is a formal system, and one that should be incomplete in a Gödelian sense. Yet it’s also true that, at least from our perspective, the output has a level of unpredictability that doesn’t exist in most recognized formal systems.
If you aren’t familiar with incompleteness, then I really recommend the Numberphile video explaining it.
https://youtu.be/O4ndIDcDSGc?si=jRuakJORpY9ZZwI1
There’s also the related topic of the halting problem.
https://youtu.be/macM_MtS_w4?si=YH8J-gQm7Rfu2AYe
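If you prefer code to video, here’s a minimal Python sketch of the classic halting-problem argument. The halts() function is purely hypothetical (the whole point is that no such function can exist); the stub is only there so the file runs.

```python
# Minimal sketch of Turing's halting-problem diagonalization.
# halts() is a HYPOTHETICAL oracle; no real implementation exists.

def halts(program, data):
    """Pretend oracle: True if program(data) would halt.
    Stubbed out, since no such decider can actually be written."""
    raise NotImplementedError("no such decider can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running 'program' on its own source.
    if halts(program, program):
        while True:   # predicted to halt -> loop forever
            pass
    return            # predicted to loop -> halt immediately

# Consider paradox(paradox): if halts(paradox, paradox) is True,
# then paradox(paradox) loops forever; if it's False, it halts.
# Either way the oracle is wrong, so halts() cannot exist.
```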
I’m actually going to take a side on this and claim that the question is mathematically undecidable. If you want to replicate some of my research for yourself, you can just use the following prompt.
“How might Gödel’s incompleteness theorem apply to large language models, and other forms of generative AI?”
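If you’d rather script it than paste it into a chat window, here’s a rough sketch using the OpenAI Python SDK. The model name and environment setup are assumptions; swap in whichever provider and model you actually use, and compare the answers across a few different models.

```python
# Rough sketch: send the prompt above to a chat model via the OpenAI
# Python SDK. Assumes OPENAI_API_KEY is set in the environment; the
# model name below is an assumption, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "How might Gödel's incompleteness theorem apply to large language "
    "models, and other forms of generative AI?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; any chat model works
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```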
submitted by /u/Memetic1