There should be no real need to mass-update all of your entities with a new property set to a default value, since Datastore supports a mix of entities with and without a given property (as you noticed with the failed MapReduce job).
You can instead assume that any entity missing the property has the default value "indexed=0" and apply that default directly in your application at read time: if the property exists, read and use it; otherwise fall back to the hard-coded default and write it back in code, i.e. only when the entity is actually being read (see the sketch below).
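As a minimal sketch of that lazy-backfill pattern, using the google-cloud-datastore Python client; the "Task" kind and the "indexed" property name are placeholder assumptions, not something from your schema:

```python
from google.cloud import datastore

client = datastore.Client()

def get_task(task_id):
    """Read an entity, lazily backfilling the new property on first read."""
    entity = client.get(client.key("Task", task_id))  # "Task" is a placeholder kind
    if entity is None:
        return None
    if "indexed" not in entity:
        # Older entities never had this property; treat its absence as the
        # default and persist it only now, at read time.
        entity["indexed"] = 0
        client.put(entity)
    return entity
```

This way the property spreads through your dataset gradually, and you pay one extra write per entity only when that entity is first touched.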
Updating existing entities is documented here.
Without knowing exactly what happened, it is not possible to pinpoint the reason for the 70M reads. However, I would recommend viewing this post, which might answer your question.
The Dataflow SDKs provide an API for reading data from and writing data to a Google Cloud Datastore database. The programming model is designed to simplify the mechanics of large-scale data processing: when you program with a Dataflow SDK, you are essentially creating a data processing job to be executed by one of the Cloud Dataflow runner services. This model lets you concentrate on the logical composition of your data processing job rather than the physical orchestration of parallel processing, so you can focus on what you need your job to do instead of exactly how that job gets executed.
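For illustration, here is a minimal pipeline using the Apache Beam Python SDK (the open-source successor of the Dataflow SDKs) that reads entities of one kind from Datastore and counts them; the project, bucket, region, and kind names are placeholders:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.io.gcp.datastore.v1new.datastoreio import ReadFromDatastore
from apache_beam.io.gcp.datastore.v1new.types import Query

# Placeholder project/bucket/region; use runner="DirectRunner" to test locally.
options = PipelineOptions(
    project="my-project",
    runner="DataflowRunner",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadTasks" >> ReadFromDatastore(Query(kind="Task", project="my-project"))
     | "Count" >> beam.combiners.Count.Globally()
     | "Print" >> beam.Map(print))
```

The runner service takes care of splitting the Datastore query and distributing the reads across workers, which is exactly the physical orchestration the model hides from you.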