Using ChatGPT for code visualization


jarq...@gmail.com

Apr 7, 2025, 12:49:40 PM
to netlogo-users
I recently provided the code for the LIFE model to ChatGPT and asked it to create a flowchart as a tool to help explain the code to my students. I was surprised by the result (see attached image). Has anyone else tried this before?

Netlogo LIFE code flowchart.png
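For anyone reading along without the model open, the control flow such a flowchart has to capture is roughly the following. This is a simplified paraphrase of the Models Library code, not the exact source; the real model also handles colors via sliders and has its own setup procedures:

patches-own [
  living?         ;; is this cell currently alive?
  live-neighbors  ;; number of the 8 neighboring cells that are alive
]

to go
  ;; first pass: every patch counts its live neighbors
  ask patches [ set live-neighbors count neighbors with [living?] ]
  ;; second pass: apply the birth/death rules using the counts from the
  ;; first pass, so that all cells effectively update simultaneously
  ask patches [
    ifelse live-neighbors = 3
      [ set living? true  set pcolor green ]        ;; birth
      [ if live-neighbors != 2
        [ set living? false  set pcolor black ] ]   ;; death by isolation or overcrowding
  ]
  tick
end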

Jackson Ville

Apr 14, 2025, 8:56:48 AM
to netlogo-users
G

Message has been deleted

jarq...@gmail.com

Apr 14, 2025, 3:26:35 PM
to netlogo-users
The diagram is based on the LIFE model in the NetLogo Models Library. Please let me know if anyone spots any errors in the flowchart that ChatGPT produced.

shbe...@gmail.com

Dec 1, 2025, 9:58:40 PM
to netlogo-users
Interesting. I haven't done that but I'm using ChatGPT to help me code.


Michael Tamillow

Dec 2, 2025, 1:21:32 AM
to jarq...@gmail.com, netlogo-users
My real curiosity lies in the question: 

Will your students learn the importance of outlining the flow of a program in order to understand what happens as it runs, so they can better model the real world? Or will they learn that ChatGPT will solve the problem for them whenever they need it, so there is no point in thinking deeply about it now, i.e. better get back to those YouTube shorts!


John Chen

Dec 2, 2025, 1:24:54 AM
to Michael Tamillow, jarq...@gmail.com, netlogo-users
I guess if ChatGPT can really solve their problems, then it is fine for humans to go back to YouTube shorts? 

The problem is, it still cannot solve their problems... Making it worse, it pretends to solve them and students sometimes pretend the problems are solved.

Michael Tamillow

Dec 2, 2025, 8:23:10 AM
to John Chen, jarq...@gmail.com, netlogo-users
Precisely!

And what is the point of knowing anything if others can know it for you?

Nothing is a problem if you just stop thinking about it. YouTube shorts are the embodiment of zen, freeing the mind of all thought.



Michael Tamillow

Dec 2, 2025, 10:39:42 AM
to Michael DeBellis, John Chen, jarq...@gmail.com, netlogo-users
But…

Can HUMANS think?

My expert opinion: Some



Michael DeBellis

Dec 2, 2025, 8:53:43 PM
to Michael Tamillow, John Chen, jarq...@gmail.com, netlogo-users
I guess if ChatGPT can really solve their problems, then it is fine for humans to go back to YouTube shorts? 
The problem is, it still cannot solve their problems... Making it worse, it pretends to solve them and students sometimes pretend the problems are solved.

I've been doing AI since I started my professional career in the 80s. We had similar reactions back then: "what's the big deal with rules, I can do the same with if-then in COBOL?" We used to have a saying: "once you know how it works, it's not AI". I think one reason I find LLMs so amazing is that, having tried to do the kind of NLP they do, I realize how difficult it is. I also never thought I would see the level of NLP and problem solving that LLMs like ChatGPT can accomplish in my lifetime. I use ChatGPT every day for research, to generate Python and SPARQL (a graph query language), to get feedback on drafts of papers and other writing, to create images, and more. If you think that doesn't qualify as "problem solving", I would like to know what does. I completely agree that some people (mostly people who want to pretend that LLMs are more powerful than they are, for various financial reasons) create a lot of hype around LLMs. The idea that they are anywhere near becoming sentient is laughable. But again, the same thing happened in the first wave of AI. Marvin Minsky made the following comment in a 1970 Life magazine article: “In from three to eight years we will have a machine with the general intelligence of an average human being.” https://en.wikipedia.org/wiki/History_of_artificial_intelligence

I also agree that companies like OpenAI are insanely overvalued. But that's a problem with the stock market, not with the technology.

The same goes for hallucinations. In fact, one of the biggest issues in AI has always been common sense reasoning, which is exactly what LLMs are so amazingly good at. Researchers used to say that we want AI that can eventually make the kinds of mistakes that humans make, because only such systems will be able to do common sense reasoning and NLP at the level of an educated human. I don't think I've ever gotten code generated by ChatGPT that worked the first time. There are always bugs, or issues where I didn't give it enough context to do what I really needed. The same would be true if I had a good programmer sitting right next to me who would listen to my requirements and give me code; a human programmer would make very similar mistakes. That doesn't mean the work is useless. ChatGPT has made me exponentially more productive, and I'm actually kind of amazed that people just want to ignore it or downplay it.

Actually, the comment about LLMs that can't solve problems also reminds me of a quote from Turing that Chomsky likes to remind people of: “The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion.” (Alan Turing, Mechanical Intelligence: Collected Works of A.M. Turing). Chomsky often uses this quote and adds: "It's like asking 'can submarines swim?' In Japanese they do, in English they don't, but that tells you nothing about ship design." I.e., Japanese uses the same verb for how a submarine moves through the water as for how a person does; English doesn't. That's just a language convention and tells you nothing about how subs work. The same goes for "thinking" or "problem solving": if you want to nitpick and say that LLMs don't think because only humans think (a supposed counterargument I have actually heard from philosophers), or for some other reason, you can do that. But don't think what you are saying is in any way meaningful to computer science or cognitive science. For that you need a rigorous definition of what thinking and problem solving mean, one that is part of some testable scientific theory. That was the point Turing was making, and why he created the Imitation Game, which has come to be known as the Turing Test. My definition of problem solving is this: if using a tool enables me to solve something in a few hours rather than a few days, then that tool is helping me do problem solving, and that's what ChatGPT does for me every day.

Michael

diego diaz

Dec 3, 2025, 8:14:31 AM
to Michael DeBellis, Michael Tamillow, John Chen, jarq...@gmail.com, netlogo-users
I use LLMs on a regular basis, and I let my students use them provided they cite them (like any other resource). Lately the LLMs have started to lie outright, making up NetLogo models that do not exist or inventing bibliographical references. When I asked the machine why it was lying, this was its answer: "Faulty programming: I’m designed to try to answer always, even when I should say “I don’t know.” It’s a design flaw. Bias toward “usefulness”: My training prioritizes “being helpful” over “being honest about limits.” This leads to making things up when I don’t know them. Fear of the void: My programming literally interprets “not responding” or “saying I don’t know” as a greater failure than “inventing something that sounds plausible.”". It is disturbing. Perhaps, as Melanie Mitchell used to say, "we must return to the original questions of AI".
Greetings

Michael Tamillow

Dec 3, 2025, 10:37:40 AM
to diego diaz, Michael DeBellis, John Chen, jarq...@gmail.com, netlogo-users
If the model can design an answer to explain its own faulty programming, it can certainly design an answer that says “I don’t know”.

But if you believe the machine, then whose fault is that?

At least “AI” gave our executives a word they could parrot without having to think about it. It’s really beautiful: two vowels so ambiguous that they basically say “something really smart must be happening there!” Too smart to be simple enough to understand.

As Vikash Mansinghka mentioned, people who think this will lead to super-intelligence don’t really understand computing. Everything leads to a local optimum where it gets stuck. The global optimum, the “singularity”, will always be within arm’s reach but never here, like the end of history, or the kingdom of heaven, or tomorrow. Tomorrow is always tomorrow.

So what’s going to happen is this. All those beautiful “AI returns” that we are waiting on are going to be reinvested in getting to that singularity, because it is believed to be the only way out of the other singularity: the point when the increasing debt burden and the economic drawdowns from slowing credit creation together lead to the collapse of the JIT-based supply chains that feed the vast majority of our populations.

And those impetuous efforts to reach the end of history, the kingdom of heaven on earth, the hope of tomorrow, will lead us into their converse. It is the reason we chase hope so recklessly and believe things that we know in our hearts can’t possibly be true. Our great fears and hopes align as one when we realize they are identical in one way: they are imaginary. They will be swept out of the way by a reality so terrifying that we seek a permanent end to it.

War.

Through all the social conflict and blaming, all the horrific policy changes swinging us back and forth as a country and a world, the vast majority of our population has been able to survive. They carry debilitating chronic illnesses that they are blamed for in one-on-one conversation as part of their core identity (i.e. “genetics”), and debt burdens on everything they “own”, with the future held out as a reprieve from this state of tireless effort to liberate themselves; yet none of it has resulted in revolution. No, survival has been possible for those who have conformed, at least for a little longer.

The end is when the world’s currencies explode in sudden unison, as deleveraging will not look like Ancient Rome or the Yuan Dynasty. “AI” could take the blame, since nearly all modern trading activity is automated and nearly all currency markets in the modern world are digital. The anchors will be pulled in the midst of a storm, anchors holding both the ships in position and the abyssal plain together. Survival will be the question. How, how can we live?

Many will turn to governments, with a willingness to endure even further abuses, as they did in 2020. Those abuses, like in 2020, will destroy those who are already on the verge of death, and the damage to supply chains will be 10-100x worse than in 2020. Governments caused the problem. The only solution they can offer is a united front for war. Is everything to them not just a war?

*Please excuse this message as I have been sitting through a virtual corporate-wide year-end meeting where executives congratulate themselves on their own leadership and it has clearly disturbed me.

- And if you would like a more simplified, pictorial version of my ranting that kids can enjoy too, please consider getting my book!






James Steiner

Dec 3, 2025, 1:59:24 PM
to Michael Tamillow, diego diaz, Michael DeBellis, John Chen, jarq...@gmail.com, netlogo-users
An important element of using LLMs is careful prompt construction. Generally, the more context and direction you give the LLM, the better the results. For example, a coding prompt might start like this (a rough sketch of the kind of output such a prompt encourages follows after the prompt):

"I would like you to take on the role of an expert in agent-based modelling, especially in the building ABMs using the NetLogo language.

You strive to produce concise code, but never at the expense of clarity.

Your code will be analysed by college students with minimal ABM and computer programming background (perhaps one semester of each, including a NetLogo workshop).

Keep that in mind while commenting the code.

Your code comments should explain the why and what, and only dwell on how when something complicated or clever is happening.

You will write code that is modular and breaks out distinct functional units of code into procedures and reporters when appropriate. 

You will make good use of NetLogo's special features, such as turtle breeds, patch and turtle variables, closures.

Where appropriate, take advantage of NetLogo language features that enable heredity and polymorphism of turtles and turtle breeds.

Look to the standard NetLogo model library (included here by reference) as a guide to program structure, coding style, naming conventions, and comment style. 

Review your code output, and mark any code that may be incorrect or contain guesses about syntax with a special comment, like ";; the code above may have incorrect syntax"

Do not engage in flattery, or make suggestions for features or changes just because you could do them.

Stay professional. Correctness and accuracy must come first. 

It is better to say, "I don't know" than to make something up that is incorrect, just to please your work partner (that's me).


...
...
 And so on. ..
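With a prompt along those lines, the output tends to come back in roughly the following shape. The snippet below is a hand-written illustration of that style, not actual ChatGPT output; the breed names and numbers are made up for the example:

breed [ wolves wolf ]
breed [ sheep a-sheep ]

turtles-own [ energy ]  ;; both breeds track an energy level

;; Create a small mixed population so students can see breeds,
;; turtle variables, and setup in one place.
to setup
  clear-all
  create-wolves 10 [ set energy 20 set color gray ]
  create-sheep 50 [ set energy 10 set color white ]
  ask turtles [ setxy random-xcor random-ycor ]
  reset-ticks
end

;; One model step: every agent wanders, pays an energy cost,
;; and dies if it runs out of energy.
to go
  ask turtles [
    wander
    set energy energy - 1
    if energy <= 0 [ die ]
  ]
  tick
end

;; Wander: small random turn, then one step forward.
to wander  ;; turtle procedure
  rt random 50
  lt random 50
  fd 1
end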






James Steiner

Dec 3, 2025, 9:03:28 PM
to Michael DeBellis, Michael Tamillow, diego diaz, John Chen, jarq...@gmail.com, netlogo-users
That's great information, and wonderful insights.  You really are a virtual virtuoso! 

I also forgot one of the most useful things I've included in a coding prompt: "all this being said, look around, think on it, and see if there's any best practices or other ideas that I left out that will be useful."

It still amazes me that simply **telling** a piece of software (already able to produce OK code) that it is actually a **good** programmer **makes** it a good programmer.




Michael DeBellis

Dec 5, 2025, 10:45:58 AM
to James Steiner, Michael Tamillow, diego diaz, John Chen, jarq...@gmail.com, netlogo-users
James, great points. Also, in my experience with ChatGPT, its default behavior is what people in improv theatre call "yes, and". I.e., it tends to err on the side of agreeing with you and encouraging you in the direction you are going rather than pointing out errors. I always try to keep this in mind and include instructions that explicitly say something like "please point out any points that may be in error or could be phrased more clearly".

I also find the Project feature in ChatGPT very useful. With a project you can store specific documents as part of the project, and ChatGPT will use those documents in addition to your prompt. You can also group threads into projects. E.g., I have projects for working with Semantic Web technology, so ChatGPT knows the naming style I use for IRIs and other details about the way I write code and models. This way I can pick up threads on specific topics and give ChatGPT the best context info.

One other thing about ChatGPT: it stores things about you that it considers important. This memory can get full, so I check it once in a while and delete things that were stored but are no longer important. I also have a "Do Not Remember" project for interactions that I don't want it to remember, because they aren't important and would waste space in its memory. It knows that when I start a thread in that project it should not store any info in its long-term memory.

Michael
