Hi!
I have browsed some articles about LLMs by non-AI experts, and I have realized that even people heavily criticizing ChatGPT are dangerously underestimating the technology. Their understanding is very surface level, yet they actively use that understanding to predict its dangers. This is likely true of most policymakers as well.
Most of it boils down to something like these logical extremes:
- LLMs predict the next word with a statistical model, so they cannot understand anything. They are simple functions (text goes in, a probability distribution comes out; see the sketch after this list).
- LLMs cannot understand anything, so they are not intelligent in any way.
- LLMs have the same intelligence as a shovel; they have zero agency, so they pose absolutely zero danger, no matter how advanced they become in the future.
- LLMs are among the most highly developed "AI" systems today, so no AI system in the near future will have any intelligence, understanding, or agency at all. Therefore, for the next few decades (or centuries), we are perfectly safe from any dangers of AI.
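
To make the first bullet concrete, here is a minimal sketch of the "text goes in, probability distribution comes out" interface. It assumes the Hugging Face `transformers` library and the public GPT-2 checkpoint (my choice of example, not part of the argument above):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Illustrative only: a small public model standing in for "an LLM".
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The cat sat on the"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # logits shape: (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# Probability distribution over the whole vocabulary for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# The five most likely continuations.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>10}  p={prob.item():.3f}")
```

The interface really is that simple, which is exactly why the description is seductive: the dispute is not about the type signature of the function, but about what has to be computed internally to produce that distribution.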
Each statement has some significant truth in it, but is catastrophically wrong as a basis for policy.
My problem is that many comments and articles accept some or all of these logical extremes without any hedging.
Articles on LessWrong seem to address all of the points I have raised, and we touched on some of them in the previous discussion too. But we may want to discuss this more systematically, so that everybody is on the same page.
"Pop science" completely misunderstands LLMs, so it is difficult to become genuinely informed about this topic.
Adam Rak