Agentic Interactions | 9am PT Tues, Jan 20, 2026


Grigory Bronevetsky

Jan 16, 2026, 12:03:03 AM
to ta...@modelingtalks.org
Modeling Talks

Agentic Interactions

Alex Imas, University of Chicago


Tues, January 20, 2026 | 9am PT

Meet | YouTube Stream


Hi all,


The presentation will be via Meet and all questions will be addressed there. If you cannot attend live, the event will be recorded and can be found afterward at

https://sites.google.com/modelingtalks.org/entry/agentic-interactions


More information on previous and future talks: https://sites.google.com/modelingtalks.org/entry/home


Abstract:
Do human differences persist and scale when decisions are delegated to AI agents? We study an experimental marketplace in which individuals author instructions for buyer- and seller-side agents that negotiate on their behalf. We compare these AI agentic interactions to standard human-to-human negotiations in the same setting. First, contrary to predictions of more homogeneous outcomes, agentic interactions lead to, if anything, greater dispersion in outcomes compared to human-mediated interactions. Second, crossing agents across counterparties reveals systematic dispersion in outcomes that tracks the identity and characteristics of the human creators; who designs the agent matters as much as, and often more than, shared information or code. Canonical behavioral frictions reappear in agentic form: personality traits shape agent behavior, and selection on principal characteristics yields sorting. Despite AI agents not having access to the human principal's characteristics, demographics such as gender and personality variables have substantial explanatory power for outcomes, in ways that are sometimes reversed from human-to-human interactions. Moreover, we uncover significant variation in "machine fluency" (the ability to instruct an AI agent to effectively align with one's objective function) that is predicted by principals' individual types, suggesting a new source of heterogeneity and inequality in economic outcomes. These results indicate that the agentic economy inherits, transforms, and may even amplify human heterogeneity. Finally, we highlight a new type of information asymmetry in principal-agent relationships and the potential for specification hazard, and discuss broader implications for welfare, inequality, and market power in economies increasingly transacted through machines shaped by human intent.

 

Bio:
Alex studies behavioral economics with a focus on how people understand and mentally represent the choices they are facing. His research explores topics related to how people learn and make choices in settings with risk and uncertainty. He also studies the economics of artificial intelligence and discrimination. Alex’s work utilizes a variety of methods, including controlled laboratory experiments, field experiments, analysis of observational data and theoretical modeling.


Alex Imas is the recipient of the 2023 Alfred P. Sloan Research Fellowship, the Review of Financial Studies Rising Scholar Award, the New Investigator Award from the Behavioral Science and Policy Association, the Hillel Einhorn New Investigator Award from the Society of Judgment and Decision Making, the Distinguished CESifo Affiliate Award, and the NSF Graduate Research Fellowship. He is the co-author, with Richard Thaler, of The Winner’s Curse: Behavioral Economics Anomalies, Then and Now. He is an Associate Editor at the Journal of the European Economic Association and on the editorial board of Psychological Science.

Grigory Bronevetsky

Jan 22, 2026, 9:42:20 PM
to Talks, Grigory Bronevetsky
Video Recording: https://www.youtube.com/live/xnRUQ8zcbcY

Slides:

Summary:

  • Focus: the impact of AI agents performing economic interactions

  • Insight: outputs of agentic models are not i.i.d. random draws; they depend on the user's priors and their usage history

  • Principal-agent framework: principal wants an agent to do something and needs to set up incentives and contracts to get the agent to voluntarily do the required work

    • Principal wants to minimize cost, maximize outcome

    • Agent wants to minimize effort, maximize income

  • Economy = combination of interactions among many agents

    • We don’t know how adding AI agents to the economy will affect it

    • Coding agents are very popular, can also be used to drive business workflows, like email, flight bookings, etc.

  • AI Agent

    • Perception

    • Representation and Reasoning

    • Decision making / Planning

    • Action / Output

    • Learning / Adaptation

  • Representative agent model: 

    • In reality agents are heterogeneous

    • Here you assume that they’re similar enough that you can represent them as a single homogeneous agent

    • This is a poor approximation of the real economy: it captures many aspects of steady-state behavior but misses most dynamic behavior

    • Will AI agents be more homogeneous or heterogeneous?

  • The challenge of the principal-agent relationship is that no contract can cover all scenarios

    • Each contract will vary across principal and agent pairs (more/less detailed, more/less effective incentives)

    • What does the agent do in novel scenarios?

    • The agent’s actions will be biased by the contract. In the context of AI agents, this is their prompt.

    • Incentive for human agents is money, pride, status, etc.

    • AI agents have an opaque objective function (training loss and training dataset)

    • Principals have the opportunity to learn the AI agent’s objective experimentally

    • Interaction between principal and agent in designing prompt/contract creates a tighter correlation between the two via the contract

  • Hypothesis: outcomes in agentic interactions will be a function of human heterogeneity

  • Experiment: Nash bargaining game

    • Participants will be bargaining to buy or sell cars

    • Agents negotiate on behalf of their principals

    • Bad prompts: create strict boundaries on behavior (e.g. min/max price)

    • Good prompts: rich set of instructions that describe the negotiating strategy and overall goals

    • Surplus (target of optimization): 

      • Buyer: difference between the max of the Blue Book price range and the negotiated price

      • Seller: difference between the negotiated price and the min of the Blue Book price range

      • All participants face the same monetary incentive, but dynamics are influenced by the natural heterogeneity among the human principals

    • Human principals had a chance to practice writing prompts and observing their results before their experiments began

    • Collected demographic and personal characteristics of the human principals (including playing buying games)

      • If there were no human bias, this data should not be predictive of the games' outcomes

    • Benchmark task: humans performing the same negotiation task

    • Experimental results

      • Distribution of outcomes is very broad, with multiple spikes

        • Why?

        • Null hypothesis: stochasticity from the models (testable by re-running identical prompts)

        • 73% of the variation is predicted by properties of individual humans

        • 17% due to measured individual characteristics

          • Demographics, game behaviors, and negotiation experience are the strongest predictors

        • 63% explained by differences among the prompts (i.e. unmeasured individual variability)

      • Human negotiations are very different

        • Most common outcome is 50/50 fair splits

      • Demographics:

        • A gender gap affects outcomes in both human-human and AI-AI negotiations

        • The effect has the opposite sign in the two settings

      • Changing AI models doesn’t substantially change the outcome of the negotiations

    • As AI agents become specifically trained on negotiation, heterogeneity in their behavior can be expected to drop, as the agent focuses on the principal's key directives rather than on how they are expressed or on the stated negotiating strategy

    • Heterogeneity can become much larger if the principals' preferences are very different from each other
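The surplus measure described in the summary can be sketched in a few lines of Python. This is a toy illustration only: the function names and dollar figures are mine, not from the talk, which only defines surplus relative to the Blue Book price range.

```python
def buyer_surplus(price: float, bluebook_max: float) -> float:
    """Buyer's surplus: the buyer gains by paying below the top of the Blue Book range."""
    return bluebook_max - price

def seller_surplus(price: float, bluebook_min: float) -> float:
    """Seller's surplus: the seller gains by selling above the bottom of the Blue Book range."""
    return price - bluebook_min

# Illustrative numbers: Blue Book range $18,000-$22,000, negotiated price $19,500.
bb_min, bb_max = 18_000, 22_000
price = 19_500
print(buyer_surplus(price, bb_max))   # 2500
print(seller_surplus(price, bb_min))  # 1500
```

Note that the two surpluses need not be equal: where the negotiated price lands inside the Blue Book range determines how the total surplus is split, which is exactly the dispersion the experiment measures across prompts and principals.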

