Greetings Neonixie group,
Anode current limiting seems to be a fundamental design element when using Nixie tubes. Yet is there anything fundamentally wrong with limiting cathode current instead of anode current, assuming a design will reliably turn on one cathode at a time?

If my math and understanding check out, one should be able to accurately limit cathode current with a common collector (emitter follower) transistor circuit and eliminate the anode resistor entirely. The circuit shown below controls current based on the equation: Ic ≈ Ie = (Vb - Vbe)/Re
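As a quick sanity check on that equation, here is a minimal sketch of the arithmetic. The component values are illustrative assumptions for the example, not taken from any actual circuit:

```python
# Cathode-current calculation for the common-collector limiter:
# Ic ~= Ie = (Vb - Vbe) / Re. Values below are assumptions.

Vb = 4.7      # base voltage (V), e.g. from a logic-level drive
Vbe = 0.65    # typical base-emitter drop for a small-signal NPN (V)
Re = 2000.0   # emitter resistor (ohms)

Ie = (Vb - Vbe) / Re   # emitter current (A)
Ic = Ie                # collector ~= emitter current for high-beta transistor

print(f"Cathode current ~= {Ic * 1000:.2f} mA")  # roughly 2 mA here
```

Note the approximation Ic ≈ Ie only holds for a reasonably high-beta transistor, since Ie = Ic + Ib.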

Controlling a Nixie this way should allow a designer to eliminate the anode resistor, ignore the voltage drop of the Nixie tube, and not be concerned with the exact value of the anode voltage. The only variables that really matter are the base voltage (Vb), the base-emitter voltage drop of the transistor (Vbe), and the emitter resistance (Re). Control these three variables and the collector, emitter, and Nixie currents should all be limited to the same value. The only requirements on the anode voltage are: it must be greater than the Nixie tube's striking voltage, the collector-emitter voltage rating of the transistor must be greater than the anode voltage, and the transistor must be able to dissipate the heat generated by the voltage dropped across it. Overall, keeping the anode voltage reasonably close to the Nixie tube's striking voltage will improve efficiency.
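To show why keeping the anode voltage near the striking voltage matters, here is a rough dissipation check. The supply voltage, tube maintaining voltage, and current are all assumed example values, not measurements:

```python
# Transistor dissipation estimate for the cathode-side limiter.
# The transistor drops whatever voltage the tube and Re do not.
# All values below are illustrative assumptions.

Vanode = 170.0   # anode supply voltage (V)
Vtube = 140.0    # Nixie maintaining (sustaining) voltage drop (V)
Ve = 4.0         # voltage across the emitter resistor (V), i.e. Ie * Re
Ic = 0.002       # limited cathode current (A)

Vce = Vanode - Vtube - Ve   # voltage the transistor must stand off (V)
P = Vce * Ic                # power dissipated in the transistor (W)

print(f"Vce = {Vce:.1f} V, dissipation = {P * 1000:.1f} mW")
```

With these numbers the transistor only dissipates tens of milliwatts, but raising the supply well above the striking voltage pushes all of the extra drop (and heat) into the transistor.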
Are there any fundamental issues with the design direction I'm considering?
Thanks,
Allen Dutra