Using ChatGPT for code visualization


jarq...@gmail.com

Apr 7, 2025, 12:49:40 PM
to netlogo-users
I recently provided the code for the LIFE model to ChatGPT and asked it to create a flowchart as a tool to help explain the code to my students. I was surprised by the result (see attached image). Has anyone else tried this before?

Netlogo LIFE code flowchart.png
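For anyone who wants to compare the flowchart against the underlying logic, here is a minimal sketch of one Game of Life update step in plain Python. To be clear, this is not the NetLogo LIFE model's own code, just the same classic rules (a live cell survives with 2 or 3 live neighbors; a dead cell is born with exactly 3) expressed in another language; edge cells here simply have fewer neighbors, whereas the NetLogo world may wrap.

```python
def step(grid):
    """Return the next generation of a 2D grid of 0/1 cells."""
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count live cells among the up-to-8 surrounding cells.
        total = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    total += grid[nr][nc]
        return total

    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = live_neighbors(r, c)
            if grid[r][c] == 1:
                # Survival: 2 or 3 live neighbors.
                nxt[r][c] = 1 if n in (2, 3) else 0
            else:
                # Birth: exactly 3 live neighbors.
                nxt[r][c] = 1 if n == 3 else 0
    return nxt

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = [
    [0, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
print(step(blinker))  # -> [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
```

Tracing a small pattern like the blinker by hand against the flowchart could be a nice exercise for students.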

Jackson Ville

Apr 14, 2025, 8:56:48 AM
to netlogo-users
G


jarq...@gmail.com

Apr 14, 2025, 3:26:35 PM
to netlogo-users
This is based on the LIFE model in the NetLogo Models Library. Please let me know if anyone spots any errors in the diagram ChatGPT made.

shbe...@gmail.com

Dec 1, 2025, 9:58:40 PM
to netlogo-users
Interesting. I haven't done that, but I'm using ChatGPT to help me code.

On Monday, April 7, 2025 at 9:49:40 AM UTC-7 jarq...@gmail.com wrote:

Michael Tamillow

Dec 2, 2025, 1:21 AM
to jarq...@gmail.com, netlogo-users
My real curiosity lies in the question: 

Will your students learn the importance of outlining the flow of a program in order to understand what happens as it runs and to better model the real world? Or will they learn that ChatGPT will solve the problem for them whenever they need it, so there is no point in thinking deeply about it now, i.e., better get back to those YouTube shorts!

--
You received this message because you are subscribed to the Google Groups "netlogo-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to netlogo-user...@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/netlogo-users/823fb559-4380-4149-a780-0ef14c74ae2bn%40googlegroups.com.

John Chen

Dec 2, 2025, 1:24 AM
to Michael Tamillow, jarq...@gmail.com, netlogo-users
I guess if ChatGPT can really solve their problems, then it is fine for humans to go back to YouTube shorts? 

The problem is, it still cannot solve their problems... Worse, it pretends to solve them, and students sometimes pretend the problems are solved.

Michael Tamillow

Dec 2, 2025, 8:23 AM
to John Chen, jarq...@gmail.com, netlogo-users
Precisely!

And what is the point of knowing anything if others can know it for you?

Nothing is a problem if you just stop thinking about it. YouTube shorts are the embodiment of zen, freeing the mind of all thought.



Michael Tamillow

Dec 2, 2025, 10:39 AM
to Michael DeBellis, John Chen, jarq...@gmail.com, netlogo-users
But…

Can HUMANS think?

My expert opinion: Some


On Dec 2, 2025, at 9:28 AM, Michael DeBellis <mdebe...@gmail.com> wrote:



I guess if ChatGPT can really solve their problems, then it is fine for humans to go back to YouTube shorts? 
The problem is, it still cannot solve their problems... Making it worse, it pretends to solve them and students sometimes pretend the problems are solved.

I've been doing AI since I started my professional career in the '80s. We had similar reactions back then: "What's the big deal with rules? I can do the same with if-then in COBOL." We used to have a saying: "Once you know how it works, it's not AI." I think one reason I find LLMs so amazing is that, having tried to do the kind of NLP they do, I realize how difficult it is. I also never thought I would see, in my lifetime, the level of NLP and problem solving that LLMs like ChatGPT can accomplish. I use ChatGPT every day for research, to generate Python and SPARQL (a graph query language), to get feedback on drafts of papers and other writing, to create images, and more. If you think that doesn't qualify as "problem solving," I would like to know what does. I completely agree that some people (mostly people who, for various financial reasons, want to pretend that LLMs are more powerful than they are) create a lot of hype around LLMs. The idea that they are anywhere near becoming sentient is laughable. But again, the same thing happened in the first wave of AI. Marvin Minsky made the following comment in a 1970 Life magazine article: "In from three to eight years we will have a machine with the general intelligence of an average human being." https://en.wikipedia.org/wiki/History_of_artificial_intelligence

I also agree that companies like OpenAI are insanely overvalued. But that's a problem with the stock market, not with the technology.

The same goes for hallucinations. In fact, one of the biggest issues in AI has always been common sense reasoning, which is exactly what LLMs are so amazingly good at. Researchers used to say that we want AI that can eventually make the kinds of mistakes humans make, because only such systems will be able to do common sense reasoning and NLP at the level of an educated human. I don't think I've ever gotten code generated by ChatGPT that worked the first time. There are always bugs, or issues where I didn't give it enough context to do what I really needed. The same would be true if I had a good programmer sitting right next to me who listened to my requirements and gave me code; a human programmer would make very similar mistakes. That doesn't mean their work is useless. ChatGPT has made me exponentially more productive, and I'm actually kind of amazed that people just want to ignore it or downplay it.

Actually, the comment about LLMs that can't solve problems also reminds me of a quote from Turing that Chomsky likes to remind people of: "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion." (Alan Turing, Mechanical Intelligence: Collected Works of A.M. Turing.) Chomsky often uses this quote and adds: "It's like asking 'can submarines swim?' In Japanese they do, in English they don't, but that tells you nothing about ship design." That is, Japanese uses the same verb for how a submarine moves through the water as for how a person swims; English doesn't. That's just a language convention and tells you nothing about how subs work. The same goes for "thinking" or "problem solving." If you want to nitpick and say that LLMs don't think because only humans think (a real supposed counterargument I've heard from philosophers), or for some other reason, you can do that. But don't think what you are saying is in any way meaningful to computer or cognitive science. For that you need rigorous definitions of thinking and problem solving that are part of some testable scientific theory. That was the point Turing was making, and why he created the Imitation Game, which has come to be known as the Turing Test. My definition of problem solving: if using a tool lets me solve something in a few hours rather than a few days, then that tool is helping me do problem solving, and that's what ChatGPT does for me every day.

Michael
