Ouch, forgot the Cafe.
> Would you object to this particular optimisation (replacing an
> algorithm with an entirely different one) if you were guaranteed
> that the space behaviour would not change?
Nevertheless, I objected to his opinion, claiming that if the compiler
performed such a high-level optimization - replacing the underlying
data structure with a different one and turning one algorithm into a
completely different one - the programmer wouldn't be able to reason
about the space behaviour of the program. I concluded that such a
solution should not be built into a compiler but instead turned into
an EDSL.
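To make that concern concrete, here is the classic GHC example (my
illustration, not code from the discussion; it assumes GHC's usual
behaviour at -O0 versus -O2):

```haskell
import Data.List (foldl')

-- At -O0 the lazy foldl builds ~10^7 unevaluated thunks before any
-- addition happens, so the program needs space proportional to the
-- list. At -O2 GHC's strictness analysis usually turns it into the
-- strict version below, which runs in constant space. Same source,
-- two space behaviours, chosen by the optimiser.
lazySum :: Int
lazySum = foldl (+) 0 [1 .. 10000000]

-- The version whose space behaviour is visible in the source:
strictSum :: Int
strictSum = foldl' (+) 0 [1 .. 10000000]

main :: IO ()
main = print (lazySum, strictSum)
```

Whether lazySum runs in constant space is decided by the optimiser,
not by anything visible in the source - exactly the reasoning gap I
mean.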
The code goes into production and - disaster. The new "improved"
version runs three times slower than the old one, making it
practically unusable. The new version has to be rolled back, with
loss of uptime and functionality, and management is not happy with P.
It just so happened that the old code triggered some aggressive
optimization unbeknownst to everyone, **including the original
developer**, while the new code did not. (This optimization may even
have been added to the compiler after the original code was written.)
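For one concrete (and entirely hypothetical) way this can play out in
GHC, consider list fusion; `opaque` below is just a stand-in for any
helper the optimiser cannot see through:

```haskell
-- A pipeline the optimiser can see end to end: at -O2, GHC's
-- foldr/build fusion typically collapses it into a single loop
-- with no intermediate list.
fast :: Int
fast = sum (map (*2) (filter even [1 .. 10000000]))

-- A seemingly harmless refactor routes the same pipeline through a
-- helper the optimiser may not inline. Fusion cannot fire across
-- that boundary, so the intermediate list really is built, and the
-- "same" program runs noticeably slower.
{-# NOINLINE opaque #-}
opaque :: [Int] -> [Int]
opaque = id

slow :: Int
slow = sum (map (*2) (opaque (filter even [1 .. 10000000])))

main :: IO ()
main = print (fast, slow)
```

Nothing in the source of slow says it allocates a five-million-element
list; only the optimiser knows.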
Maybe this is something that would never happen in practice, but how
can one be sure...
I guess I fall more on the "reason about the code" side of the scale
than on the "test the code" side. Testing seems to induce false hopes
of finding all defects, even to the point where the tester is blamed
for not finding a bug rather than the developer for introducing it.
> Surely there will be a canary period, parallel running of the old
> and new system, etc.?

Is that common? I have not seen it, and I do think my workplace is a
rather typical one.
Also, would we really want to preserve the old "bad" code just because
it happened to trigger some optimization?
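For what it's worth, GHC already has a mechanism in the spirit of the
EDSL/library suggestion above: rewrite rules live in library source,
not in the compiler. For example, this rule ships with base:

```haskell
-- The algorithm replacement is written down in library code, where
-- a programmer can read and audit it:
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}
```

The replacement is still automatic, but at least it is stated where
the programmer can find it.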