You can instead use Push Task Queues to replicate a Map-Reduce job. Have a master method shard the job and 'Map' the shards to other instances by enqueuing one task per shard. The instances that accept those tasks (the shards) then perform the work and write their results to the Datastore. The master method polls the Datastore on a timer until every shard has finished computing, and finally performs the 'Reduce' phase.
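Here is a minimal, self-contained sketch of that pattern. A `queue.Queue` stands in for the Push Task Queue and a plain dict stands in for the Datastore; on App Engine you would enqueue with `taskqueue.add()` against a worker handler and write results as Datastore entities instead. All names here are illustrative.

```python
import queue
import threading
import time

task_queue = queue.Queue()   # stand-in for the Push Task Queue
datastore = {}               # stand-in for Datastore result entities
datastore_lock = threading.Lock()

def worker():
    # Stand-in for an instance's task handler: accept a shard,
    # do the 'Map' work, write the result to the 'Datastore'.
    while True:
        shard_id, shard = task_queue.get()
        result = sum(shard)  # the 'Map' work for this shard
        with datastore_lock:
            datastore[shard_id] = result
        task_queue.task_done()

def master(data, num_shards=4):
    # Shard the job and 'Map' each shard by enqueuing a task.
    shards = [data[i::num_shards] for i in range(num_shards)]
    for shard_id, shard in enumerate(shards):
        task_queue.put((shard_id, shard))
    # Poll the 'Datastore' on a looped timer until all shards are done.
    while True:
        with datastore_lock:
            if len(datastore) == num_shards:
                break
        time.sleep(0.01)
    # The 'Reduce' phase: combine the per-shard results.
    return sum(datastore.values())

for _ in range(2):
    threading.Thread(target=worker, daemon=True).start()

total = master(list(range(100)))
print(total)  # 4950, i.e. sum(range(100))
```

The key structural point survives the simulation: the master never computes shard results itself, it only enqueues work and polls a shared store for completion, which is exactly the role it plays on App Engine.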
You can also experiment with deploying separate services (i.e. separate groups of instances) and sharding across them (by enqueuing tasks targeted at each service), so that no shard waits in a pending queue for an available instance. Alternatively, you can lower the pending-latency settings of a single service to achieve the same goal: minimizing time spent in the pending queue by forcing new instances to be created sooner, for a faster Map-Reduce.
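For the single-service approach, the relevant knobs live under `automatic_scaling` in that service's `app.yaml`. This is a sketch with illustrative values and a hypothetical service name, not recommended settings:

```yaml
# app.yaml for the worker service -- values are illustrative
service: mapreduce-workers
runtime: python27
automatic_scaling:
  min_pending_latency: 30ms   # allow a new instance to be started almost immediately
  max_pending_latency: 100ms  # force a new instance if a task waits longer than this
```

For the multi-service approach, tasks can be routed to a specific service by setting the task's `target` when enqueuing, e.g. `taskqueue.add(url='/worker', target='mapreduce-workers')`.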
- As for triggering Dataflow jobs from the Console, I am not aware of that being available any time soon. If this is a real show-stopper for you, I recommend filing a feature request with the Dataflow team, describing your exact use case in detail.