Sorry for the late response; I've been on the road. The iteration I've done is along the lines that Jimmy Retzlaff describes.
I run a Python program locally that runs the MR job in a loop. The local program uses s3cmd to pull down a relatively small data set from S3 to check convergence, and then either stops or repeats. The algorithms I'm running (k-means clustering, for example) have a relatively simple data structure that characterizes the ultimate answer and the progress of the iteration; that's stored on S3. The mrjob mappers and reducers also use s3cmd to initialize at the beginning of each pass through the iteration.
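The driver loop described above can be sketched roughly as follows. This is a minimal sketch, not Mike's actual code: `run_mr_pass` and `fetch_state` are hypothetical stand-ins for launching the MR job and doing the s3cmd pull of the small result set, and the convergence test assumes k-means-style centroids.

```python
import math

def converged(old_centroids, new_centroids, tol=1e-4):
    """Stop once no centroid has moved more than `tol`."""
    return all(
        math.dist(a, b) <= tol
        for a, b in zip(old_centroids, new_centroids)
    )

def iterate(run_mr_pass, fetch_state, initial, max_passes=50):
    """Run MR passes until the iteration state stops changing.

    run_mr_pass(state) -- hypothetical: launch one MR pass on AWS
    fetch_state()      -- hypothetical: s3cmd pull of the small result set
    """
    state = initial
    for _ in range(max_passes):
        run_mr_pass(state)          # one pass of the MR job
        new_state = fetch_state()   # small convergence-check data from S3
        if converged(state, new_state):
            return new_state
        state = new_state
    return state
```

The point is that only the small summary structure (the centroids) ever comes back to the local machine; the bulk data stays on S3 between passes.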
This approach forces me to keep two versions of the mrjob mappers and reducers: a local development version and a version that runs on AWS. The local one initializes from local files, while the other initializes from S3. All in all, it's not too bad. There's a long list of algorithms that will fit into this framework.
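One way to collapse the two versions into one is to dispatch on the path scheme in a single loader. This is just a sketch under assumptions: `load_state` is a hypothetical helper (not part of mrjob), the state is assumed to be JSON, and the S3 branch shells out to s3cmd as the text describes.

```python
import json
import subprocess
import tempfile

def load_state(path):
    """Load the iteration state (e.g. k-means centroids) as JSON.

    Reads from S3 via s3cmd when given an s3:// URL, otherwise
    reads a local file -- so the same mapper/reducer init works
    both in local development and on AWS.
    """
    if path.startswith("s3://"):
        with tempfile.NamedTemporaryFile(suffix=".json") as tmp:
            # --force lets s3cmd overwrite the existing temp file
            subprocess.check_call(
                ["s3cmd", "get", "--force", path, tmp.name]
            )
            with open(tmp.name) as f:
                return json.load(f)
    with open(path) as f:
        return json.load(f)
```

With a helper like this, the mapper/reducer init just takes the state path as a job option and doesn't need to know which environment it's running in.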
Mike