I still think that you are trying to solve the wrong problem.
By using some ad-hoc driver design, you are most likely going to
create far more compatibility problems with devices from various
vendors.
Since the RS-485 standard requires a +/-200 mV swing across the
receiver terminals, with a 54 ohm total load, the driver must be able
to supply _at_least_ +/-3.7 mA.
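That figure is just Ohm's law on the numbers above; a quick Python
sketch:

    # Minimum driver current needed to develop the 200 mV receiver
    # threshold across the standard 54 ohm total bus load.
    v_threshold = 0.200   # V, minimum differential input voltage
    r_load = 54.0         # ohms, two 120 ohm terminations plus receivers
    i_min = v_threshold / r_load
    print(f"Minimum driver current: {i_min * 1e3:.1f} mA")   # ~3.7 mA
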
With a +/-2 V Tx voltage swing, the total circuit resistance should
stay below 2 V / 3.7 mA, i.e. about 540 ohms; subtracting the 54 ohm
bus load leaves roughly 480 ohms for the cable loop, i.e. about 240
ohms in a single conductor, so at least 0.3 mm wire diameter is
required for 1000 m.
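The cable budget follows from the same numbers; a sketch (the copper
resistivity value is my assumption):

    import math

    # Loop-resistance budget with a 2 V driver swing, and the resistance
    # of 1000 m of 0.3 mm copper wire for comparison.
    i_min = 0.200 / 54.0              # A, from the previous calculation
    r_total = 2.0 / i_min             # ~540 ohms total circuit resistance
    r_per_conductor = (r_total - 54.0) / 2.0    # ~240 ohms per conductor

    rho_cu = 1.72e-8                  # ohm*m, copper at room temperature
    r_wire = rho_cu * 1000.0 / (math.pi * (0.3e-3 / 2) ** 2)
    print(f"Budget per conductor: {r_per_conductor:.0f} ohms")   # ~243 ohms
    print(f"1000 m of 0.3 mm wire: {r_wire:.0f} ohms")           # ~243 ohms
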
The power dissipation issue is worse in point-to-point RS-422
connections, in which the transmitters are constantly on and must be
able to supply 2 mA to the termination resistor at the opposite end,
even when the line is idle.
However, Modbus over RS-485 is a half-duplex protocol with at most one
transmitter in the active state feeding the termination, while all the
others are in tri-state. Thus the total system consumption is quite
low. Due to the request/response nature of Modbus, the master Tx duty
cycle would be around 50 %, while in a multidrop system the duty
cycle for each slave Tx is much lower, perhaps a few percent, keeping
the total energy consumed by each device low.
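To put rough numbers on that (the differential swing, bus load and
duty cycles below are illustrative assumptions, not measured values):

    # Rough average power delivered to the bus load per transmitter;
    # all figures here are illustrative assumptions, not measurements.
    v_diff = 2.0      # V, differential swing while the driver is active
    r_load = 54.0     # ohms, total bus load seen by the active driver
    p_on = v_diff ** 2 / r_load    # ~74 mW while transmitting

    duty_cycles = {
        "RS-422 point-to-point": 1.0,   # driver is always on
        "Modbus master":         0.5,   # request/response
        "Modbus slave":          0.03,  # a few percent, multidrop
    }
    for name, duty in duty_cycles.items():
        print(f"{name}: ~{p_on * duty * 1e3:.1f} mW average into the load")
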
By using 120/120/1500 ohms, it appears that you do not intend to use
polarization, a.k.a. pullup/pulldown, resistors. While no data is being
transmitted, no transmitter is active and the bus is in a
high-impedance state, with only the receiver leakages as a load. That
1500 ohms was for 32 slaves, but with only a few slaves the impedance
is quite high and sensitive to electrostatically coupled interference
(typically causing false start bits).
In the Modbus standard, this has been accounted for by requiring that
the pause between individual bytes must not exceed 1.5 character
times, and that when the transmitter is switched from tri-state to
active, it must send the idle (Mark, "1") state for at least 3.5
character times. This allows any reflections to die out, and during
this period any receiver will flush any spurious line noise (the 1.5
character time rule). Then the actual message transmission is
performed, followed by the Tx actively driving the idle (Mark, "1")
state for an additional 3.5 character times. This allows reliable
end-of-frame detection by suppressing any reflections etc. immediately
after the data frame.
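For reference, a quick sketch of what those silent intervals come to
in absolute time (assuming the usual 11-bit RTU character: start bit,
8 data bits, parity, stop bit):

    # Modbus RTU inter-character (t1.5) and inter-frame (t3.5) times,
    # assuming 11 bits per character (start + 8 data + parity + stop).
    BITS_PER_CHAR = 11

    for baud in (9600, 19200, 115200):
        t_char_ms = BITS_PER_CHAR / baud * 1e3
        print(f"{baud:6d} baud: t1.5 = {1.5 * t_char_ms:.2f} ms, "
              f"t3.5 = {3.5 * t_char_ms:.2f} ms")
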
However, in a multivendor environment it is not clear how well these
timing rules are followed, especially at high speeds. Thus, to ensure
maximum interoperability, I would definitely recommend using those
pullup/pulldown resistors on the bus to lower the impedance level
while all transmitters are in tri-state.
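As a quick sizing sketch for those polarization resistors (the 5 V
supply and 560 ohm values are example assumptions on my part; the
point is to hold at least roughly 200 mV across the idle bus):

    # Idle-state differential bias produced by polarization resistors:
    # one line pulled up to Vcc, the other pulled down to ground.
    # Supply voltage and resistor values are example assumptions.
    v_supply = 5.0    # V
    r_bias = 560.0    # ohms each, pullup and pulldown
    r_bus = 60.0      # ohms, two 120 ohm terminations in parallel

    v_idle = v_supply * r_bus / (2 * r_bias + r_bus)
    print(f"Idle differential voltage: {v_idle * 1e3:.0f} mV")   # ~250 mV

This keeps the receivers solidly in the Mark state between frames
instead of letting the bus float near zero.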
Regarding the 1 nF + 120 ohm termination, the RC time constant is
0.12 us, hence after a few RC time constants, say 0.5 us after the
last signal transition, practically no power is delivered to the
termination resistance. At 115k2, with bit times of about 9 us, power
is dissipated during only about 6 % of each bit time. Since there can
be consecutive "00..." or "11..." sequences in a message, without
state changes between bits (NRZ), the actual duty cycle is even lower.
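The same point in numbers (the 4 V peak-to-peak differential
transition and a worst-case transition on every bit are my
assumptions):

    # Average power dissipated in the series RC (AC) termination.  Each
    # differential transition of dv dissipates roughly C*dv^2/2 in the
    # resistor, regardless of the resistor value.
    c_term = 1e-9     # F, the 1 nF series capacitor
    dv = 4.0          # V, +2 V to -2 V differential transition
    baud = 115200     # worst case: one transition per bit

    e_per_edge = 0.5 * c_term * dv ** 2    # J per transition
    p_avg = e_per_edge * baud              # W
    print(f"Energy per transition: {e_per_edge * 1e9:.1f} nJ")    # ~8 nJ
    print(f"Worst-case average power: {p_avg * 1e3:.2f} mW")      # ~0.9 mW

Well under a milliwatt even in the worst case.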
So I really do not understand why you want to design a custom driver
to reduce power dissipation.