Postdoc opp @ MIT/Cornell


David Rand

Jun 27, 2024, 3:22:54 AM
to esa-an...@googlegroups.com
Hi all, 

Gord Pennycook, Tom Costello, and I are looking to hire one or more postdocs as we launch a research program using dialogues with LLMs as a tool to study human psychology and decision-making. Details below, and a PDF of the call at this link. We are open to candidates from a very wide range of backgrounds, including basically any social science as well as computer science, information science, etc.

Please share widely - many thanks!!

Dave


Opening for Post-Doctoral Fellow (MIT/Cornell) in Human ↔ AI Experimentation

David Rand (MIT), Thomas Costello (American University), and Gordon Pennycook (Cornell University) are seeking at least one postdoctoral researcher to join their research team for two years (with the possibility of extension, depending on funding). We are flexible on the start date, although sooner (e.g., Fall 2024) is ideal.

The researcher will hold a joint appointment in David Rand's Human Cooperation Lab at MIT Sloan and Gordon Pennycook’s Behavioral Science Lab at Cornell University while collaborating closely with Thomas Costello’s new laboratory at American University. Fellows will design and run research studies, analyze data, prepare publications, and be core members of the collective intellectual community spanning the labs. 

Our aim for this position is to leverage the considerable and growing potential of generative artificial intelligence to (1) help answer key questions in behavioral science using experiments involving interactions between humans and AI systems, (2) develop, test, and apply scalable interventions, and/or (3) help build the methodological, statistical, and infrastructural foundations of an emerging subfield devoted to exploring human-AI interactions. 

We are particularly open to candidates who are interested in using AI to understand how and why people change their minds about topics of great societal importance (as exemplified in this recent paper on conspiracy theories). Researchers interested in developing safer AI systems using the tools of behavioral science are also welcome to apply. 

Potential assets for applicants include: experience with lab/online experiments; computational skills (e.g., machine learning, web programming, natural language processing, developing and working with LLMs); experience with social media data collection/experimentation; and knowledge of fields such as social psychology, artificial intelligence, cognitive science, marketing, political science, and/or communications. That being said, we do not have a set vision of the skill sets we are looking to add to our groups, so we would encourage anyone interested in the topics of human-AI experiments, belief change/persuasion, mis/disinformation, or AI safety to apply, regardless of background!

Ideal candidates will be creative, independent, and deeply engaged. Funds for conducting experiments, travel, and equipment will be available to the fellow, as will many opportunities for outside collaboration. 

Individuals with a Ph.D., or those expecting to complete their Ph.D. by the start of the position, are encouraged to apply. Applications will be reviewed on a rolling basis. If you are interested, please apply (at any point) – there is no need to email to ask whether the position is still available!

Please send your CV, a statement of interest (two pages max), 2 reprints/preprints, and email addresses for at least 2 references to Antonio Arechar (aa.ar...@gmail.com).


--
David G. Rand (he/him)
Sloan School & Brain and Cognitive Sciences, MIT

Latest publication: Durably reducing conspiracy beliefs through dialogues with AI [Working paper] [Tweet thread] [Experimental materials]

