I've gone through the earlier discussions on this topic, including the Google Groups thread and the enhancement proposal #202. The enhancement issue makes it clear that ReactorNet currently treats bulk phases as infinite reservoirs, and that surface reaction rates aren't limited when an adjacent bulk phase is depleted. That matches what I've observed, but I wanted to confirm my understanding and see what others recommend.
My initial attempt was straightforward: a single gas-phase reactor with a ReactorSurface, where the surface Interface has a BulkPhase registered as an adjacent phase. I expected that since the surface reactions involve bulk species, the reactor network would evolve the bulk moles as those reactions proceed. That didn't happen: the gas composition and surface coverages evolved as expected, but the bulk composition and moles stayed fixed. The bulk phase acts as an infinite reservoir, providing thermodynamic properties but not participating in the mass balance.
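For concreteness, the setup was essentially the following sketch (the file name, phase names, and numbers are placeholders, not my actual mechanism):

import cantera as ct
import numpy as np

gas = ct.Solution("mech.yaml", "gas")
bulk = ct.Solution("mech.yaml", "solid")                  # the BulkPhase
surf = ct.Interface("mech.yaml", "surface", [gas, bulk])  # bulk registered as adjacent

r = ct.IdealGasReactor(gas, energy="off")
rsurf = ct.ReactorSurface(surf, r, A=1.0)   # m^2
sim = ct.ReactorNet([r])

sim.advance(1.0)
# gas composition and rsurf.coverages evolve; the bulk composition and moles never change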
I also tried a dual-reactor setup with separate gas and bulk reactors sharing a surface, thinking I could manually update the bulk mass using setDensity(). That didn't work either — the bulk density isn't part of the reactor ODE state, so any changes I make get ignored or cause errors.
So my takeaway from these attempts is that in the current Reactor + ReactorSurface framework, a BulkPhase is always treated as an infinite reservoir. Its moles and mass aren't part of the state vector that ReactorNet integrates. Is that understanding correct?
Manual bulk evolution — my current workaround:
Given those limitations, I've settled on a workaround where I wrap a manual integration loop around the ReactorNet. The basic idea is to let Cantera handle the gas-phase species and surface coverages as usual, but track the bulk moles myself in a separate ledger and update them using the surface production rates.
For the area law, I compute the reactive surface area from the instantaneous bulk mass as:
A(t) = reactive_SA * M_solid(t)^alpha
Here, reactive_SA is the BET surface area multiplied by an active fraction (in m^2/g), M_solid is the total bulk mass in grams, and alpha is an exponent (I use 1.0 for a linear relationship, though 2/3 would correspond to a shrinking sphere model). The physical intuition is that as char is consumed and M_solid drops, the available surface area decreases, which in turn reduces the reaction rates — a stabilizing negative feedback.
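For reference, the bookkeeping I carry alongside the Cantera objects from the sketch above looks roughly like this (the variable names, initial mass, and reactive_SA value are illustrative, not my actual numbers):

import numpy as np

# Bulk-species ledger kept outside Cantera (illustrative values)
bulk_indices = [surf.kinetics_species_index(n) for n in bulk.species_names]
MW_bulk = bulk.molecular_weights                   # kg/kmol
M_solid_0 = 12.0                                   # g of char initially (made up)
N_bulk = bulk.X * (M_solid_0 / 1000.0) / bulk.mean_molecular_weight   # kmol per bulk species
reactive_SA = 25.0                                 # m^2/g = BET area x active fraction (made up)
alpha = 1.0                                        # 1.0 linear; 2/3 for a shrinking sphere
M_solid = 1000.0 * np.dot(N_bulk, MW_bulk)         # g (equals M_solid_0 at t = 0)
area = reactive_SA * M_solid**alpha                # m^2
rsurf.area = area
t, dt = 0.0, 1e-3                                  # s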
The integration loop looks roughly like this:
# 1. Advance Cantera (gas + surface)
sim.advance(t + dt)

# 2. Renormalize coverages (the integrated coverages can drift from sum(theta) = 1)
theta = np.array(rsurf.coverages)
rsurf.coverages = theta / theta.sum()

# 3. Bulk-species production rates from the interface kinetics, kmol/m^2/s
omega = surf.net_production_rates[bulk_indices]
# (equivalent: omega = nu_bulk @ surf.net_rates_of_progress, where nu_bulk are
#  the bulk rows of the net stoichiometric coefficient matrix)

# 4. Explicit Euler update of the bulk mole ledger, kmol
N_bulk += omega * area * dt

# 5. Non-negativity clamp (this is the step that can break mass conservation)
N_bulk = np.maximum(N_bulk, 0.0)

# 6. Recompute mass (g) and area (m^2) from the clamped ledger
M_solid = 1000.0 * np.dot(N_bulk, MW_bulk)   # MW_bulk in kg/kmol
area = reactive_SA * M_solid**alpha

# 7. Feed the new area back to the ReactorSurface
rsurf.area = area

t += dt
The feedback loop here is that the surface rates depend on area, the area depends on mass, and the mass changes based on the rates. When things work well, consumption reduces mass, which reduces area, which slows down the rates — a nice self-limiting behavior. But when the rates are very fast relative to the timestep, things can go wrong.
Where this approach breaks down:
The main issue I've encountered is with the non-negativity clamp. When a reaction rate is fast enough that rate * dt exceeds the available moles of a species, the explicit Euler update tries to drive that species negative. I clamp it to zero to avoid unphysical values, but this breaks mass conservation in a subtle way.
Here's a concrete example: suppose I have a reaction A(B) -> B(B) running at 1000 mol/s, and I'm using dt = 0.001 s. If A has only 0.01 mol left, the update wants to remove 1.0 mol from A (way more than available) and add 1.0 mol to B. After clamping, A goes to zero, but B still gets the full 1.0 mol — so I've effectively created 0.99 mol out of nothing. This is a problem.
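To make the arithmetic explicit, here is that example in a few lines. The "scaled" variant at the end is only meant to illustrate a conservative alternative (scale the whole reaction step so no reactant goes negative); it is not what my loop currently does:

# Numbers from the example above
N_A, N_B = 0.01, 0.0          # mol of A(B) and B(B)
rate, dt = 1000.0, 1e-3       # mol/s, s
dN = rate * dt                # 1.0 mol requested this step

# Per-species clamp (what the loop above does): total jumps from 0.01 to 1.0 mol
N_A_clamped = max(N_A - dN, 0.0)      # 0.0
N_B_clamped = N_B + dN                # 1.0 -> 0.99 mol created from nothing

# Scaling the whole stoichiometric update so no reactant goes negative conserves mass
scale = min(1.0, N_A / dN)            # 0.01
N_A_scaled = N_A - scale * dN         # 0.0
N_B_scaled = N_B + scale * dN         # 0.01 -> total conserved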
The issue gets worse when internal solid-phase reactions (like phase transformations or polymerization steps in the mechanism) are much faster than the gas-solid gasification reactions. The intermediates accumulate faster than they're consumed, and the clamp keeps injecting artificial mass at every overshoot.
Non-isothermal operation makes this even harder. With energy="on", the temperature can rise during exothermic reactions, which accelerates the Arrhenius rates exponentially. What was a manageable rate at the initial temperature can become impossibly fast at elevated temperatures, and the whole thing runs away.
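To put a number on "exponentially": with an assumed activation energy of 200 kJ/mol (a made-up but plausible value, not from my mechanism), a 200 K rise speeds the same reaction up by roughly a factor of 30:

import numpy as np

Ea, R = 200e3, 8.314             # J/mol, J/(mol K); Ea is an assumed value
T1, T2 = 1100.0, 1300.0          # K
ratio = np.exp(-Ea / R * (1.0 / T2 - 1.0 / T1))
print(ratio)                     # ~29: a dt that was fine at 1100 K is ~30x too coarse at 1300 K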
I've also had to deal with coverage sum drift. Cantera's ODE solver integrates the coverage derivatives, but it doesn't enforce that the coverages sum to one. I have to manually renormalize after each step, which adds another layer of bookkeeping.
What does work:
For what it's worth, the approach does work reasonably well under certain conditions. Isothermal operation at moderate temperatures (around 1100 K for my system) with a timestep of 0.001 s gives sensible results — the bulk mass decreases over time as expected. Using tighter tolerances (rtol=1e-10, atol=1e-14) helps avoid CVODES errors from the stiff surface kinetics. Non-isothermal cases need much smaller timesteps (around 1e-5 s), and even then they can fail if the temperature rises too much.
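For completeness, the tolerances mentioned above are the ReactorNet ones, set before the manual loop starts:

sim.rtol = 1e-10
sim.atol = 1e-14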
Questions for the community:
I'd really appreciate any feedback on the following: