OK, first: reading what -fdefault-double-8 does, it only affects variables
and constants that are already DOUBLE PRECISION or KIND=8.
-freal-4-real-8, on the other hand, applies to all KIND=4 variables and
constants.
I suspect the answer depends on the program, but I believe that in many
cases double precision constants are overrated. Much depends on how well
you know your program: are all the variables already KIND=8?
Most often in physics problems, constants are expected to carry the
precision implied by the number of digits written. That goes back to
the idea of significant figures, or SIGFIGs (which my TAs used to like
to write on my papers).
How many constants does your program have that are written with more
than seven digits but no D exponent? grep should be able to find them
pretty fast.
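As a starting point, something like the following can flag suspicious
literals (the pattern and the sample file are illustrative assumptions;
adjust both to your code's conventions, and note the pattern only counts
digits after the decimal point, so it is rough):

```shell
# Tiny sample just to show what the pattern catches.
cat > sample.f90 <<'EOF'
pi = 3.14159265
e  = 2.7182818284d0
x  = 1.5
EOF
# Flag literals with 8+ digits after the decimal point, then drop the
# ones that already carry a D exponent (-i makes 'd' match 'D' too).
hits=$(grep -nE '[0-9]\.[0-9]{7,}' sample.f90 | grep -viE 'd[-+]?[0-9]')
echo "$hits"
```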
In a large fraction of physics problems, double precision is needed
to make sure that precision isn't lost in intermediate calculations.
That is especially true in matrix processing.
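The intermediate-precision point is easy to see in a sketch. This one is
Python (an assumption, since your code is Fortran), simulating KIND=4
arithmetic by rounding every operation through single precision with the
struct module:

```python
import struct

def f32(x):
    # Round a Python float (a double) to single precision and back.
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate 0.1 a million times. The double-precision sum stays close
# to 100000; the single-precision sum drifts, because once the running
# total is large, each added 0.1 gets rounded to the accumulator's
# coarser spacing.
n = 1_000_000
s32 = 0.0
s64 = 0.0
for _ in range(n):
    s32 = f32(s32 + f32(0.1))
    s64 += 0.1
print(s32, s64)
```

A dot product inside a matrix routine is exactly this kind of long
accumulation, which is why keeping the intermediates in KIND=8 matters.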
The IBM 7030 Stretch had a feature designed to help understand
precision loss in programs: one could specify the bits shifted in
during post-normalization of floating-point operations. Roughly, that
translates into rounding either down or up, as in a hand calculation.
You then run the program both ways and look for differing results.
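The same run-it-both-ways idea can be mimicked today with directed
rounding. A sketch using Python's decimal module (the 6-digit working
precision and the harmonic sum are arbitrary choices of mine):

```python
from decimal import Decimal, Context, ROUND_FLOOR, ROUND_CEILING

def harmonic(ctx, n=1000):
    # Sum 1/1 + 1/2 + ... + 1/n, rounding every operation per ctx.
    s = Decimal(0)
    for i in range(1, n + 1):
        s = ctx.add(s, ctx.divide(Decimal(1), Decimal(i)))
    return s

# Run the same computation rounding down, then rounding up.
lo = harmonic(Context(prec=6, rounding=ROUND_FLOOR))
hi = harmonic(Context(prec=6, rounding=ROUND_CEILING))
# The spread between the two runs bounds the accumulated rounding error.
print(lo, hi, hi - lo)
```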
Adding D0 won't convert 3.14159 into a 16-digit approximation to pi.
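Python's floats are the same IEEE doubles, so the point is easy to
demonstrate:

```python
from math import pi

# 3.14159 promoted to double is just the 6-digit decimal value, stored
# to double precision; it is not pi to 16 digits.
x = 3.14159
print(f"{x:.17g}")           # the double nearest 3.14159
print(f"{pi:.17g}")          # pi to double precision
print(f"{abs(x - pi):.3g}")  # error on the order of 1e-6,
                             # vastly larger than double epsilon
```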
I suspect the conversion process could be automated, and in less time
than it would take to hand-edit 850,000 lines.
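A sed sketch gives the flavor of such automation (the patterns and the
sample file are illustrative assumptions of mine, not a vetted tool:
they will also mangle things like FORMAT edit descriptors such as
F10.4, so run it under version control and review the diff):

```shell
cat > sample.f90 <<'EOF'
x = 3.14159 * y
tol = 1.0e-5
EOF
# Rule 1 turns an E exponent into a D exponent (1.0e-5 -> 1.0d-5);
# rule 2 appends d0 to plain real literals (3.14159 -> 3.14159d0).
converted=$(sed -E \
    -e 's/([0-9]\.[0-9]+)[eE]([-+]?[0-9]+)/\1d\2/g' \
    -e 's/([0-9]\.[0-9]+)([^dD0-9]|$)/\1d0\2/g' sample.f90)
echo "$converted"
```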