Look at it the other way. Assume f(max) is 400 kHz.
Would you expect that to dictate that t(hi) is 1250 ns
and t(lo) is also 1250 ns -- so that t(cyc) would be 2500 ns?
Or, that if t(lo) is 1300 ns, then t(hi) *must* be 1200 ns?
I.e., that there is no way *possible* to create a device
that would operate at f(max) > 400 kHz?
What the spec does is lay out what you can reliably expect
from a conforming set of signals. So, if you can tolerate
those extremes, you'll be golden.
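The arithmetic above can be made concrete. The low/high minimums below are the I2C fast-mode figures as I recall them -- treat them as assumed example values, not authoritative; the point is only that the published *minimums* need not sum to the nominal period:

```python
# Illustrative check: spec minimums leave slack relative to the period.
F_MAX_HZ = 400_000
T_CYC_NS = 1e9 / F_MAX_HZ            # 2500 ns nominal period at 400 kHz

T_LO_MIN_NS = 1300                   # minimum clock-low time (assumed figure)
T_HI_MIN_NS = 600                    # minimum clock-high time (assumed figure)

slack_ns = T_CYC_NS - (T_LO_MIN_NS + T_HI_MIN_NS)
print(f"period = {T_CYC_NS:.0f} ns, "
      f"min hi+lo = {T_LO_MIN_NS + T_HI_MIN_NS} ns, "
      f"slack = {slack_ns:.0f} ns")
# The minimums don't *force* t(hi) = t(cyc) - t(lo); there is slack,
# and a device may run faster than f(max) if its signals still conform.
```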
Think of a pair of FFs, one's Q output feeding the other's D input.
There's a max propagation delay from clock to Q output
on the first FF. There's a min setup time for the D
input to the clock on the second FF. Yet, the maximum
clock frequency need not be related to t(pd)+t(sup).
Furthermore, there's likely a t(hold) on the D input...
some time AFTER the clock edge that the input must remain
stable for proper operation of the FF. The t(pd) from the
first FF doesn't GUARANTEE that the output WON'T change
quicker than advertised. I.e., it could change in 0 ns.
If so (assuming no propagation delays), any t(hold) > 0
will cause the second FF's specifications NOT to be met.
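The two constraints can be sketched numerically. All the delay figures here are invented for illustration:

```python
# Setup vs. hold for two FFs (Q of the first driving D of the second).
T_PD_MAX_NS = 8.0   # max clock-to-Q of the first FF (datasheet maximum)
T_PD_MIN_NS = 0.0   # min clock-to-Q -- often unpublished, so assume 0!
T_SU_NS     = 3.0   # setup time required at the second FF's D input
T_HOLD_NS   = 1.0   # hold time required at the second FF's D input

# Setup constrains the clock period: data launched on one edge must
# arrive t(su) before the *next* edge.
t_cyc_min_ns = T_PD_MAX_NS + T_SU_NS
print(f"setup-limited period >= {t_cyc_min_ns:.1f} ns")

# Hold is checked against the *same* edge, so the period never appears:
# the earliest Q can change is t(pd,min) after the edge, yet D must stay
# stable for t(hold) after that edge.
hold_margin_ns = T_PD_MIN_NS - T_HOLD_NS
print(f"hold margin = {hold_margin_ns:.1f} ns",
      "(VIOLATION -- no clock speed fixes this)" if hold_margin_ns < 0
      else "(OK)")
```

Note that slowing the clock repairs a setup problem but does nothing for a hold problem, which is exactly why an unpublished t(pd,min) is dangerous.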
You can only count on the min/max numbers that are
published. Often, t(min) is not present and, increasingly,
t(max) is replaced by t(typ)!
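That caution can be made mechanical in a worst-case check. The helper below is a hypothetical sketch (its name and structure are mine, not from any datasheet); the rule it encodes is simply that only a *published* bound is a guarantee:

```python
# Worst-case check: trust only published min/max bounds.
# A parameter with only a "typ" value gives you no contract at all.

def worst_case_ok(required_max_ns, published_max_ns):
    """True only when a published maximum guarantees the requirement.

    Pass None when the datasheet gives only t(typ) -- a typical value
    is not a bound, so the check must fail.
    """
    return published_max_ns is not None and published_max_ns <= required_max_ns

# Example: we need clock-to-Q no worse than 10 ns.
print(worst_case_ok(10.0, 8.0))    # published max of 8 ns: contract met
print(worst_case_ok(10.0, None))   # only t(typ) given: no guarantee
```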
The manufacturer (and/or specification creator) is
giving leeway to the silicon vendor in defining the
criteria that he *must* meet (over voltage, temperature
and process variations). And, at the same time,
telling you what his "contract" with you will be.
He can do *better* than those numbers but never
*worse*. So, if you design to those limits, you are
"guaranteed" a successful implementation.