I agree with you, and believe the optimize_coding flag is a concern
separate from quality.
So it's hard to solve the problem without adjusting the API,
right? We should provide an explicit way to let developers specify
the behavior their app needs.
Besides the explicit option, many public APIs that depend on Skia
cannot take advantage of it without adjusting their own APIs too. Is
there a compromise that adjusts the default behavior instead, along
the lines of "a certain quality or above means use optimize_coding"?
For example (just based on my understanding; I have no experience
with public API development, so these may be bad ideas):
* The high bit of quality means optimize_coding.
This gives meaning to parameter values that were previously illegal,
and further new features could use the other high bits as new options.
* When quality exceeds a certain threshold, enable
optimize_coding.
This breaks backwards compatibility, but if we tune the threshold
value based on user experience, we may find it matches what users
instinctively expect:
quality low = low CPU/memory, short time, small size
quality medium = medium CPU/memory, medium time, medium size
quality high = high CPU/memory, long time, big size <- with
optimize_coding enabled, the CPU/memory cost is higher and the time
longer, but the size grows less and the quality is higher.
Except for the case where an application needs a high-quality image
processed within a specified time, this way seems acceptable too,
right? And if we have usage statistics, we can find out which cases
are common and which are rare, and let the new API make the common
cases better.
* Add a global environment mode variable to let developers choose
the behavior.