It's a bit ugly, but for this you could use the same JSON endpoint that the web interface uses.
For example:
http://example.com:8988/disco/ctrl/jobinfo?name=1c4e6ec5f23d9d2239f5a06e6bc10779@5a6:ace30:3319d

The "1c4e6ec5f23d9d2239f5a06e6bc10779@5a6:ace30:3319d" part is the .name property of the Job object after you call .run() on it.
The JSON contains a pipeline object with information about each stage and how many workers are Pending, Waiting, Running, Done and Failed, just like in the web interface.
In theory you can calculate the total % from these counts.
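As a rough sketch of that calculation (the exact JSON key names for the worker states are assumptions here; inspect a real response from your master to confirm them before relying on this):

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Worker states as shown in the Disco web interface.
STATES = ("pending", "waiting", "running", "done", "failed")

def stage_progress(stage):
    """Estimate completion % of one pipeline stage from its worker counts."""
    total = sum(stage.get(k, 0) for k in STATES)
    if total == 0:
        return 0.0
    return 100.0 * stage.get("done", 0) / total

def job_progress(jobinfo):
    """Average the per-stage percentages into one overall figure."""
    stages = jobinfo.get("pipeline", [])
    if not stages:
        return 0.0
    return sum(stage_progress(s) for s in stages) / len(stages)

def fetch_jobinfo(master, jobname):
    """Fetch the jobinfo JSON; quote() encodes the @ and : in the job name."""
    url = "http://%s/disco/ctrl/jobinfo?name=%s" % (master, quote(jobname))
    return json.load(urlopen(url))

# Usage (hypothetical master address):
#   jobinfo = fetch_jobinfo("example.com:8988", job.name)
#   print("%.1f%% done" % job_progress(jobinfo))
```

The averaging across stages is a simplification; you may want to weight stages by their worker counts instead.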
If you also want a per-worker %, and your worker actually knows its own progress (for example, how many lines your map function has processed and how many remain, which you can track with a custom map_reader),
you can just print this information and use the jobevents endpoint to fetch the log for the specific job:
http://example.com:8988/disco/ctrl/jobevents?name=1c4e6ec5f23d9d2239f5a06e6bc10779%405a6%3Aace30%3A3319d&num=100&filter=

This will return a JSON feed with log messages (including things you print) that you can then parse to get a more specific %.
This endpoint already contains lines like "MSG: [map:2] 1000000 entries mapped" produced by disco itself.
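A sketch of parsing those "entries mapped" lines out of the events feed (the message format is taken from the single example above and may differ between Disco versions; treat the regex as an assumption):

```python
import re

# Matches lines like "MSG: [map:2] 1000000 entries mapped" -- an assumed
# format based on one observed example, so verify against your own logs.
ENTRIES_RE = re.compile(r"\[(\w+):(\d+)\] (\d+) entries mapped")

def entries_mapped(event_lines):
    """Sum the latest 'entries mapped' count reported by each worker."""
    latest = {}
    for line in event_lines:
        m = ENTRIES_RE.search(line)
        if m:
            stage, worker, n = m.group(1), int(m.group(2)), int(m.group(3))
            # A worker reports growing counts; keep only its most recent one.
            latest[(stage, worker)] = n
    return sum(latest.values())

# Usage: fetch the jobevents URL shown above, extract the message strings
# from the returned JSON feed, and pass them to entries_mapped().
```

If you know the total number of input entries, dividing entries_mapped() by it gives a finer-grained map %.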
Hope that helps,
Erik