
I recently provided the code for the LIFE model to ChatGPT and asked it to create a flowchart as a tool to help explain the code to my students. I was surprised by the result (see attached image). Has anyone else tried this before?
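For anyone reading along without the model open, the core logic the flowchart describes is roughly the following. This is only a minimal sketch of the Life rules in NetLogo, not the Models Library code verbatim, and the starting density and colors here are placeholders:

patches-own [ living? live-neighbors ]

to setup
  clear-all
  ask patches [
    set living? (random 100 < 35)   ;; ~35% of cells start alive (placeholder density)
    recolor
  ]
  reset-ticks
end

to go
  ;; pass 1: every patch counts its live neighbors before any patch changes state
  ask patches [ set live-neighbors count neighbors with [ living? ] ]
  ;; pass 2: apply the birth/survival rules to all patches in lockstep
  ask patches [
    ifelse live-neighbors = 3
      [ set living? true ]                               ;; birth (or survival with 3 neighbors)
      [ if live-neighbors != 2 [ set living? false ] ]   ;; death unless exactly 2 neighbors
    recolor
  ]
  tick
end

to recolor  ;; patch procedure
  ifelse living? [ set pcolor white ] [ set pcolor black ]
end

The part the flowchart really captures is the two-pass structure: every patch counts its live neighbors first, and only then do all patches change state, so births and deaths happen in lockstep each tick.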
On Dec 2, 2025, at 12:24 AM, John Chen <yuehanc...@u.northwestern.edu> wrote:
I guess if ChatGPT can really solve their problems, then it is fine for humans to go back to YouTube shorts? The problem is, it still cannot solve their problems... To make it worse, it pretends to solve them, and students sometimes pretend the problems are solved.
On Mon, Dec 1, 2025 at 11:21 PM Michael Tamillow <mikaelta...@gmail.com> wrote:
My real curiosity lies in the question: will your students learn the importance of outlining the flow of code to understand what happens in the course of a program, so as to better model the real world? Or will they learn that ChatGPT will solve their problem if they ever need to know it, so there is no point in thinking deeply about it now, i.e., better get back to those YouTube shorts!
I've been doing AI since I started my professional career in the '80s. We had similar reactions then: "What's the big deal with rules? I can do the same with if-then in COBOL." We used to have a saying: "Once you know how it works, it's not AI." I think one reason I find LLMs so amazing is that, having tried to do the kind of NLP that they do, I realize how difficult it is. I also never thought I would see the level of NLP and problem solving that LLMs like ChatGPT can accomplish in my lifetime. I use ChatGPT every day for research, to generate Python and SPARQL (a graph query language), to give me feedback on drafts of papers and other writing, to create images, and more. If you think that doesn't qualify as "problem solving," I would like to know what does.

I completely agree that some people (mostly people who want to pretend that LLMs are more powerful than they are, for various financial reasons) create a lot of hype around LLMs. The idea that they are anywhere near becoming sentient is laughable. But again, the same thing happened in the first wave of AI. Marvin Minsky made the following comment in a Life magazine article from 1970: "In from three to eight years we will have a machine with the general intelligence of an average human being." https://en.wikipedia.org/wiki/History_of_artificial_intelligence

I also agree that companies like OpenAI are insanely overvalued. But that's a problem with the stock market, not with the technology.

The same goes for hallucinations. In fact, one of the biggest issues in AI has always been common sense reasoning, which is what LLMs are so amazingly good at. Researchers used to say that we want AI that can eventually make the kinds of mistakes that humans make, because only such systems will be able to do common sense reasoning and NLP at the level of an educated human. I don't think I've ever gotten code generated by ChatGPT that worked the first time. There are always bugs, or issues where I didn't give it enough context to do what I really needed. The same would be true if I had a good programmer sitting right next to me who would listen to my requirements and give me code; a human programmer would make very similar mistakes. That doesn't mean their work is useless. ChatGPT has made me exponentially more productive, and I'm actually kind of amazed that people just want to ignore it or downplay it.

Actually, the comment about LLMs that can't solve problems also reminds me of a quote from Turing that Chomsky likes to remind people of: "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion." (Alan Turing, Mechanical Intelligence: Collected Works of A.M. Turing). Chomsky often uses this quote and adds: "It's like asking 'Can submarines swim?' In Japanese they do, in English they don't, but that tells you nothing about ship design." I.e., in Japanese they use the same verb for how a submarine moves through the water as for how a person does; in English we don't. That's just a language convention and tells you nothing about how subs work. The same goes for "thinking" or "problem solving": if you want to nitpick and say that LLMs don't think because only humans think (a real supposed counterargument I've heard from philosophers), or for some other reason, you can do that. But don't think what you are saying is in any way meaningful to computer science or cognitive science. For that you need rigorous definitions of what thinking and problem solving mean, definitions that are part of some testable scientific theory.

That was the point Turing was making, and why he created the Imitation Game, which has come to be known as the Turing Test. My definition of problem solving: if using a tool enables me to solve something in a few hours rather than a few days, then that tool is helping me do problem solving, and that's what ChatGPT does for me every day.

Michael
James, great points. Also, in my experience with ChatGPT, its default behavior is what people in improv theatre call "yes, and." I.e., it tends to err on the side of agreeing with you and encouraging you in the direction you are going rather than pointing out errors. I always try to keep this in mind and include something that explicitly says, for example, "please point out any points that may be in error or could be phrased more clearly."

Also, I find the Project feature in ChatGPT to be very useful. With a project you can store specific documents as part of the project, and ChatGPT will use those documents in addition to your prompt. You can also group threads into projects. E.g., I have projects for working with Semantic Web technology, so ChatGPT knows the naming style I use for IRIs and other details about the way I write code and models. This way I can pick up threads on specific topics and give ChatGPT the best context info.

One other thing about ChatGPT: it stores things about you that it considers important. This memory can get full, so I check it once in a while and delete things that were stored but are no longer important. I also have a "Do Not Remember" project for interactions that I don't want it to remember because they aren't important and would waste space in its memory. It knows that when I start a thread in that project, it should not store any info in its long-term memory.

Michael