ChatGPT and Teaching Economic Theory


Mark Whitmeyer

May 21, 2025, 9:51:11 AM
to decision_theory_forum
Dear friends,

Like many of you, I teach an undergraduate economic theory class (game theory). Over the past few years, ChatGPT and other AI tools have become something of a nuisance: designing worthwhile homework assignments is now a real challenge, because anything I could reasonably ask an undergraduate is easy work for ChatGPT. The problem is reminiscent of the infamous difficulty of designing the Yosemite park trashcans; as a ranger famously (apocryphally?) put it, designing the trashcans is difficult because "there is a considerable overlap between the intelligence of the smartest bears and the dumbest tourists."

I’m sure many of you are running into this issue as well.
 
My question is, how are you dealing with it?
 
Of course, one solution is to shift everything to being in person, but this is a bit impractical. Another solution is technological--there are various ways of detecting AI use on assignments--but I'm not that interested in this sort of fix either. So, in asking this question, I guess what I'm really after is whether you have identified ways of assessing students at home that are not totally vulnerable to zero-effort, zero-knowledge copy-and-pasting into AI. Moreover, as with any math class, I think it is essential that students "get their hands dirty": they need to struggle through proofs and derivations. How can I make sure that remains the case?

A related and perhaps more interesting question is: assuming that students will use AI, what are questions that are nevertheless mutually valuable, in the sense that they not only require students to understand the material and learn, but also credibly reveal to me that they have done so?

-M

Robert Bordley

May 21, 2025, 11:09:37 AM
to Mark Whitmeyer, decision_theory_forum
Mark,

I tell students that employers want employees they can trust (as well as employees who are capable). I argue that there are probably hackers who can identify who is using ChatGPT, and that these hackers could then provide this information to potential employers for a minor fee. I ask the students not to risk their careers.

I don't know how well this works.  I still have to report cheaters for using ChatGPT.

Like you, I find it a nuisance.

Bob



--
Robert F. Bordley, PhD, ESEP, PMP, CAP, PSTAT
Professor and Program Director
Industrial and Operations Engineering Department
University of Michigan, Ann Arbor






Jack Stecher

May 21, 2025, 11:29:13 AM
to Robert Bordley, Mark Whitmeyer, decision_theory_forum
I give students exercises and then have them have their favorite LLM do the exercise. Then I have them explain to a hypothetical ill-informed HR person what they bring to the table, that is, why a bot can't do their job. A bot will never make a hiring manager attend its kid's wedding, so the hiring manager will be predisposed toward the bot.

As a favor for a colleague, my wife and I tried two approaches. The first was to misdirect ChatGPT by stating the game as a conversation between a knight and a squire at a tavern. We thought that there are enough unemployed theater majors posting enough useless nonsense online to send ChatGPT down the wrong track. Superficially, this worked. However, when we then asked, "Can you identify the game these characters are playing, in terms of game theory?", ChatGPT found the answer.

The second approach we tried was to get ChatGPT to inform on itself. We asked it to design a syllabus for a bachelor's-level game theory course such that it would be difficult for students to use ChatGPT on the assignments. It came back with a reasonable syllabus and a list of suggestions.

Jack Stecher

May 21, 2025, 11:51:44 AM
to Robert Bordley, Mark Whitmeyer, decision_theory_forum
Forgive the typos in the last email; I need to learn to proofread when typing on my phone.

Tilman Borgers

May 21, 2025, 8:43:53 PM
to Mark Whitmeyer, decision_theory_forum
My favorite task for students is: find a question related to what we are learning to which ChatGPT gives an incorrect answer. (One year ago, it was easy to find such questions for intermediate micro topics. I am not sure whether that's still true. I do know that much of the maths advice that ChatGPT gives me these days for my research turns out to be incorrect.)

Regards
Tilman





Attila Ambrus

May 21, 2025, 9:35:33 PM
to Tilman Borgers, Mark Whitmeyer, decision_theory_forum
That’s a good one, Tilman!

Sent from my iPhone

Elias Tsakas

May 22, 2025, 1:49:08 PM
to decision_theory_forum
I ask them to videotape themselves explaining their solutions and send me a link.
This relies, of course, on the assumption that one does not want to look like a fool on camera.

Best,
Elias

PS: Maybe we can revisit this thread a year from now. I am sure that by then, more ideas and solutions will have arisen.

Mohammed Khan

May 26, 2025, 6:41:49 AM
to Tilman Borgers, Mark Whitmeyer, decision_theory_forum, econprofs...@lists.jh.edu
A very good morning to Tilman, and greetings to others, 

My distinguished co-author Eddie Schlee of ASU came down to Baltimore to work on the complementarity of commodities -- ex post, our efforts went into re-reading Samuelson's masterful 1974 JEL paper, and also John Quah's 2007 and 2024 Econometrica papers. But I stole time to talk to Eddie about the sentiments expressed in this correspondence, and he began by teaching a Luddite like me how to use ChatGPT.

As apprentice exercises, I put the following questions to ChatGPT: (i) who is Eddie Schlee? (ii) who is David Schrittesser? (iii) who is M. Ali Khan? (iv) who is M. Ali Khan, excluding economists? (v) who is Mohammed A. Khan? (vi) who is Mohammad A. Khan? (vii) who is Mohammed Aliuddin Khan?

[Forgive my narcissism, but we live in narcissistic times, and one tries to adapt.]

 

I was amused that, in answer to (iii), ChatGPT singled out as my only two notable publications my EL piece with Arthur Paul Pedersen and David Schrittesser, and an undistinguished piece with Eddie Schlee. I would not know myself by these publications. I was also amused that for (iv) and (v), which are not identical questions by any means, it gave non-identical replies with a non-empty overlap. Finally, the only quasi-complete answer to the question of who I am was the answer to (vii).

 

This allowed Eddie Schlee and me to observe the following characteristics:

 

(i) The answers are computer-dependent. I am pretty sure that if question (iii) is asked on someone else's computer, one will get a different answer.

(ii) As Eddie put it, the answers are also time-dependent. By this one means that if you ask the same question again, even without refining it, the answer the t-th time will differ from the (t+1)-th time. As Eddie put it, it keeps learning and updating.

(iii) The answers are question-dependent. If one thinks about it, this is not as banal as it may first seem.

 

I hope to continue getting to know ChatGPT, but now let me turn to the subject at issue. To begin with, I am now clear that it will transform graduate education in non-vocational schools, i.e., schools whose curricula and programs are also tied to learning and empowerment rather than simply functioning as employment agencies.

Since I began teaching, I have given students 5-10 claims and asked them to (i) give a proof if the claim is correct, or (ii) give a counterexample if it is false. Now I hope to modify this a little by giving the question together with ChatGPT's answer to it, and asking students to write a critical, writing-intensive response. This is very much in keeping with Tilman's solution. It also has the added advantage that it will lead students to attach importance to reading and writing rather than to calculating, coding, and deriving. It is my considered judgement that we economists have lost our past edge as far as reading, writing, and drawing are concerned. Look, for example, at the footnotes of Georgescu-Roegen's 1952 SEJ paper on "complementarity."

Thank you for reading, Ali 

PS: In case you are not tired of my prose, and especially if you are, I quote what David Schrittesser wrote to me this morning. Unlike me, he really knows what he is talking about.

\bqu 

I see, thanks for clarifying!

It is actually a huge issue with AI that results are not reproducible.

I think there are several unrelated ways in which this happens. Firstly, I think the algorithms that we are allowed to interact with generally include some factor of randomness. In other words, it is not always the most likely next token that is chosen, but one of several very likely tokens. I think this is to make sure the user feels the machine is being creative, which, in a sense, it is of course. Theoretically, if we had enough access to the algorithm, it's possible that we could turn this randomness off. Maybe we already can; I don't know enough about it. It's possible there is a paid-for API option where you can force it to always choose the most likely token.

But even this would not guarantee reproducibility. That's firstly because I'm sure the thing gets updated and tinkered with frequently, and secondly because I wouldn't be certain that it's even possible at this point to eliminate all sources of randomness. For example, maybe the thing is allowed to do restricted web searches. Those are simply not deterministic, and obviously that will influence how the algorithm answers in very unpredictable ways.

 \equ
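David's point about sampling randomness can be made concrete with a toy sketch. This is my own illustration, not an actual LLM decoder: the vocabulary and logit values are made up. At temperature zero the decoder always picks the highest-logit token and is reproducible; at positive temperature it samples from the softmax distribution, so repeated runs can differ.

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Toy next-token choice. temperature == 0 means greedy (argmax);
    otherwise sample from the softmax of logits / temperature."""
    tokens = list(logits)
    if temperature == 0:
        # Deterministic: always return the most likely token.
        return max(tokens, key=lambda t: logits[t])
    weights = [math.exp(logits[t] / temperature) for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Made-up logits for three candidate next tokens.
logits = {"the": 2.0, "a": 1.6, "banana": -3.0}

# Greedy decoding is reproducible across runs.
print(sample_next(logits, 0, random.Random()))    # always "the"

# Sampled decoding may return a different token on different runs.
print(sample_next(logits, 1.0, random.Random()))
```

Even with temperature forced to zero, David's other caveats (frequent model updates, non-deterministic web searches) would still break reproducibility.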

Larry Blume

May 26, 2025, 6:28:20 PM
to decision_theory_forum
I am pleased to say that chatgpt-4-turbo is the first LLM I've found that can solve a prisoner's dilemma with asymmetric payoffs. It seems that Copilot can now do it too. I've posted on Facebook about how this exercise has failed in the past: Copilot once told me that a prisoners' dilemma had an asymmetric equilibrium "because 0 > 1". This newfound skill is exciting for teaching game theory (although more investigation is needed). One of the hardest things to teach is mixed equilibrium, because it is counterintuitive: slightly change one of Column's payoffs and it does not affect Column's equilibrium mixed strategy, but it does affect Row's. My hope is that LLMs can be our 24/7 TAs. Give students 50 games that you know your favorite AI can do. Tell students to work as many as they need, checking the answers with the AI and asking for hints as needed. Drill, baby, drill!
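The counterintuitive fact about mixed equilibria can be verified numerically. The sketch below is my own illustration (using matching pennies as the example game): it solves a 2x2 game's interior mixed equilibrium from the indifference conditions. Since each player's mixture is chosen to make the *opponent* indifferent, perturbing one of Column's payoffs moves Row's equilibrium mixture but leaves Column's unchanged.

```python
def mixed_equilibrium(A, B):
    """Interior mixed equilibrium of a 2x2 game with Row payoffs A and
    Column payoffs B (both 2x2 lists of numbers). Returns (p, q): p is
    the probability Row plays the first row, q the probability Column
    plays the first column. Note p depends only on B, and q only on A.
    Assumes an interior equilibrium exists."""
    # Row mixes so that Column is indifferent between her two columns.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # Column mixes so that Row is indifferent between his two rows.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies: both players mix 50/50.
A = [[1, -1], [-1, 1]]            # Row's payoffs
B = [[-1, 1], [1, -1]]            # Column's payoffs
print(mixed_equilibrium(A, B))    # (0.5, 0.5)

# Perturb one of Column's payoffs: Row's mixture moves, Column's doesn't.
B2 = [[-0.5, 1], [1, -1]]
print(mixed_equilibrium(A, B2))   # (0.571..., 0.5): only p changed
```

This is exactly the kind of drill exercise Larry describes: an AI that can reproduce the indifference-condition calculation can check a student's answers for any number of perturbed games.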

For grading, I've moved to low-stakes in-class exercises: 10 minutes at the beginning or end of class. I also hand out problem sets that are "representative of questions that will appear on the final", publish answers, and grade them S/U: did you turn it in or not. I like the frequent quizzes because, although they burn class time, they strongly incentivize students to keep up.

Colin Rowat

May 27, 2025, 5:19:51 AM
to decision_theory_forum
Very nice!

It reminds me of some of Vince Conitzer's explorations - e.g. https://sigmoid.social/@conitzer/114501169665371150
