I've been doing AI since I started my professional career in the '80s. We had similar reactions back then: "What's the big deal with rules? I can do the same thing with if-then statements in COBOL." We used to have a saying: "Once you know how it works, it's not AI." I think one reason I find LLMs so amazing is that, having tried to do the kind of NLP they do, I realize how difficult it is. I also never thought I would see the level of NLP and problem solving that LLMs like ChatGPT can accomplish in my lifetime. I use ChatGPT every day for research, to generate Python and SPARQL (a graph query language), to give me feedback on drafts of papers and other writing, to create images, and more. If you think that doesn't qualify as "problem solving," I would like to know what does.

I completely agree that some people (mostly people who, for various financial reasons, want to pretend that LLMs are more powerful than they are) create a lot of hype around LLMs. The idea that they are anywhere near becoming sentient is laughable. But again, the same thing happened in the first wave of AI. Marvin Minsky made the following comment in a 1970 Life magazine article:
“In from three to eight years we will have a machine with the general intelligence of an average human being.”
https://en.wikipedia.org/wiki/History_of_artificial_intelligence
I also agree that companies like OpenAI are insanely overvalued. But that's a problem with the stock market, not with the technology.
The same goes for hallucinations. In fact, one of the biggest issues in AI has always been common sense reasoning, which is exactly what LLMs are so amazingly good at. Researchers used to say that we want AI that can eventually make the kinds of mistakes that humans make, because only such systems will be able to do common sense reasoning and NLP at the level of an educated human. I don't think I've ever gotten code generated by ChatGPT that worked the first time. There are always bugs, or issues where I didn't give it enough context to do what I really needed. The same would be true if I had a good programmer sitting right next to me who listened to my requirements and handed me code: a human programmer would make very similar mistakes. That doesn't mean their work is useless. ChatGPT has made me dramatically more productive, and I'm actually kind of amazed that people just want to ignore it or downplay it.
The comment about LLMs that can't solve problems also reminds me of a quote from Turing that Chomsky likes to remind people of: “The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion.” (Alan Turing, Mechanical Intelligence: Collected Works of A.M. Turing). Chomsky often uses this quote and adds: "It's like asking 'Can submarines swim?' In Japanese they do, in English they don't, but that tells you nothing about ship design." I.e., in Japanese they use the same verb for how a submarine moves through the water as for how a person does; in English we don't. That's just a language convention and tells you nothing about how subs work. The same goes for "thinking" or "problem solving": if you want to nitpick and say that LLMs don't think because only humans think (a counterargument I've actually heard from philosophers), or for some other reason, you can do that. But don't pretend that what you are saying is in any way meaningful to computer science or cognitive science. To do that, you need rigorous definitions of thinking and problem solving that are part of some testable scientific theory. That was the point Turing was making, and why he created the Imitation Game, which has come to be known as the Turing Test. My definition of problem solving is this: if using a tool enables me to solve something in a few hours rather than a few days, then that tool is helping me solve problems, and that's what ChatGPT does for me every day.
Michael