I looked over a bunch of posts and I chose one, two, three, four, five. These posts address the question of what UDT accomplishes, and why we might want to regard its decisions as rational. I'm interested in them as a starting point for the more general question of when decisions are rational. How can the criteria with respect to which UDT is shown to be optimal be generalized to give criteria for rationality in more situations? What agents would satisfy those generalized criteria?
I haven't had time to think about this, but these posts might assume a lot of provability logic. Some of this background may be covered in An Introduction to Löb's Theorem in MIRI Research (LaVictoire) or Provability logic—a short introduction (Lindström). I will also bring Boolos' The Logic of Provability to the workshop.
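For reference, the core result all of these sources build on, stated in the modal notation of provability logic (where $\Box P$ abbreviates "$P$ is provable in the theory, e.g. PA"):

```latex
% Löb's theorem: if the theory proves that provability of P implies P,
% then the theory proves P. Internalized as the modal schema:
\Box(\Box P \to P) \to \Box P
% Gödel's second incompleteness theorem is the special case P = \bot.
```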
I also want to mention this post by Benja, which I found while looking for the UDT posts, in order to disagree with it. EDT seems too simple to be right, and I know of no argument that it is right; on the other hand, I know of no proof that it is not equivalent to UDT (I think
Gaifman's arguments run into the same problems as others I have seen, though I read that paper before I had identified the problem, and I recall the paper being good). Rather, it seems that UDT just hasn't been formalized sufficiently to make the comparison; that is, the only problems for which UDT has been formalized are ones where EDT gives the same answer, or ones where EDT fails to give any answer because it would need to condition on probability-zero events. Further, I am suspicious of the claim that the agents in the problems that run into probability-zero events are really EDT agents. Anyway, this is all rather vague, since the issue is subtle and I don't have much time. If you want to practice philosophy, you can read the post by Benja and see if you can reconstruct my argument (you might want to think about simpler versions of this first, like the smoking lesion problem).
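To make the EDT mechanics concrete, here is a toy sketch of EDT's conditional-expectation calculation on a smoking-lesion-style problem. All the numbers are invented for illustration; the point is only that EDT evaluates actions by conditioning on them, and that this is undefined for probability-zero actions, which is exactly the failure mode mentioned above.

```python
from itertools import product

# Toy smoking-lesion joint distribution (all probabilities invented).
# A hidden lesion makes both smoking and cancer more likely; smoking
# itself does not cause cancer in this problem.
P_LESION = 0.1
P_SMOKE_GIVEN = {True: 0.9, False: 0.2}    # P(smoke | lesion)
P_CANCER_GIVEN = {True: 0.8, False: 0.05}  # P(cancer | lesion)

def joint():
    """Yield ((lesion, smoke, cancer), probability) for all eight outcomes."""
    for lesion, smoke, cancer in product([True, False], repeat=3):
        p = P_LESION if lesion else 1 - P_LESION
        p *= P_SMOKE_GIVEN[lesion] if smoke else 1 - P_SMOKE_GIVEN[lesion]
        p *= P_CANCER_GIVEN[lesion] if cancer else 1 - P_CANCER_GIVEN[lesion]
        yield (lesion, smoke, cancer), p

def utility(smoke, cancer):
    # Smoking is mildly pleasant; cancer is very bad.
    return (10 if smoke else 0) - (100 if cancer else 0)

def edt_value(action):
    """EDT's score for an action: E[utility | smoke = action].

    Undefined (here: raises) when the action has probability zero,
    since there is nothing to condition on.
    """
    p_action = sum(p for (_, s, _), p in joint() if s == action)
    if p_action == 0:
        raise ZeroDivisionError("EDT cannot condition on a probability-zero action")
    return sum(p * utility(s, c)
               for (_, s, c), p in joint() if s == action) / p_action

# Conditioning on smoking raises P(lesion), hence P(cancer), so EDT
# recommends not smoking even though smoking does not cause cancer.
print(edt_value(True), edt_value(False))
```

With these numbers, conditioning on smoking pushes P(lesion) from 0.1 up to 1/3, which is what drags the smoking action's score down; a causal or updateless treatment would not penalize the action for the evidence it provides about the lesion.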