Convergence failure with 2D Simulations


Shane Keniley

Oct 1, 2019, 7:15:23 PM
to zapdos-users
Hello all, 

Apologies in advance - this is a long message.

I'm having a lot of trouble getting Zapdos working effectively in 2D. I have a very simple model, at least in theory: it's just the same model shown in mean_en.i, but extended to 2D instead of 1D. Four variables, three reactions, a DC bias on one boundary, grounded on another, and open BCs everywhere else. I've also removed the water region so it's just a pure argon simulation with a grounded wall. For reference I've uploaded a test branch (keniley1/zapdos, branch 2d_test). The relevant input files are pin_water_new01.i and pin_water_new01.geo. (If you want to run this, note that the mesh file was made with Gmsh 4.4, and you may need to update CRANE; I might be using syntax that isn't compatible with the version bundled with Zapdos. I should submit a PR to update that.)

Here's the problem I'm having:
When I run with a Dirichlet BC on the potential, the problem runs for a little while and qualitatively looks like what I would expect (e.g. an ionization "bullet" seems to form), but it soon begins to run very slowly. It tends to hang up in the dt = 1e-10 to 2e-9 region. Eventually the simulation will fail to converge and slowly fall down to dtmin. I've attached a Paraview snapshot of the behavior I'm seeing. It looks like an ionization "bullet" in 2D.

When I try running with the NeumannCircuitVoltageMoles_KV boundary condition, the simulation simply fails to converge unless the timesteps are unbearably small (something like 10^-16...clearly unreasonable), and even then it only runs for several timesteps before failing. I've tried many combinations of preconditioning options, field splitting, variable scaling, using PJFNK instead of Newton, automatic scaling, mesh refinement, and running in second order...nothing seems to work. I also thought that my pin electrode might just be a bit complicated to resolve, so I tried a much simpler plate electrode. No luck - it still fails. I understand that advection-dominated problems are tricky, but this is a relatively simple one, so I'm curious as to why it's struggling so much.

So long story short, I guess I have three questions: 
  1. Any general suggestions for running advection-diffusion-reaction problems in 2D? 

  2. What exactly is going on with the NeumannCircuitVoltageMoles_KV boundary condition? I know that it's meant to represent a ballast resistor circuit (Vc + Vs = current * resistance). I'm just not sure why it's an IntegratedBC, since that equation seems to be a Dirichlet condition on the potential, not a Neumann condition.

  3. How do I incorporate the electrode area into this BC in 2D? I was told that in Cartesian coordinates I just need to supply the missing depth, since it's an integrated BC and the length is included by definition. That makes sense to me, but I just wanted to make sure I didn't misunderstand. I have a strange electrode shape, so it's not entirely clear to me how to supply the "missing depth" anyway.

Sorry again for the wordy message! Any help would be appreciated.
-Shane

pin_water_test_15ns.png

Corey DeChant

Oct 1, 2019, 7:52:26 PM
to zapdos-users
Good afternoon,

I see that you are still running the voltage at 1.25 kV. I know this voltage works when there is a water surface over the grounded plate, but wouldn't a voltage this high cause a spark discharge in reality? (I could be wrong; I haven't done much experimental work.) I would suggest lowering the supply voltage to around 300-500 V and seeing if that helps. As for the NeumannCircuitVoltageMoles_KV boundary condition, I believe it is set up so that the grad V term in the electron flux satisfies the circuit relation (i.e. current_em = current_ion - (Vc + Vs)/resistance), turning it into a Neumann condition. Also, Zapdos does have a Dirichlet version of this BC called CircuitDirichletPotential (if you want to try that), but it requires the current as a Postprocessor.
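Written out loosely (my own notation, and only schematically; the actual implementation may differ), the idea is

V_s + V_c = I R,    I \approx e A (\Gamma_i - \Gamma_e) \cdot \hat{n}
\Rightarrow \Gamma_e \cdot \hat{n} \approx \Gamma_i \cdot \hat{n} - (V_s + V_c) / (e A R)

and since the drift part of those fluxes carries a \nabla V \cdot \hat{n} term, the relation ends up constraining the normal gradient of the potential at the boundary, which is why it lives in an IntegratedBC rather than a Dirichlet value.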

Thank you,
Corey DeChant

Shane Keniley

Oct 1, 2019, 8:12:04 PM
to zapdos-users
Corey,

Thanks for the quick response! 

I see that you are still running the voltage at 1.25 kV. I know this voltage works when there is a water surface over the grounded plate, but wouldn't a voltage this high cause a spark discharge in reality? (I could be wrong; I haven't done much experimental work.) I would suggest lowering the supply voltage to around 300-500 V and seeing if that helps.
 
Yeah, I'm fairly certain that 1.25 kV will arc in real life, especially with no ballast resistor. The file I uploaded does have 1.25 kV but I've tried it at lower voltages and it doesn't help much. If I remember correctly the Dirichlet potential boundary condition still runs slowly and eventually breaks (it just breaks later), and the integrated circuit BC still fails very quickly. I'll run this again at lower voltages to confirm. It definitely looks like I'm arcing with a Dirichlet BC though; it will run for a very long time (I've run it overnight on 8 cores, which was something like 7000 timesteps) and the density will just keep increasing while the timestep keeps decreasing. That's not at all surprising to me. I'm more concerned that the circuit BC works so poorly in this setup. 

-Shane

Alexander Lindsay

Oct 1, 2019, 10:57:25 PM
to zapdos...@googlegroups.com
- Are you seeing oscillations in any of the solution fields?
- what line search are you using?
- you said that you tried with field split. How does it run with LU? What is the condition number if you run with -pc_type svd -pc_svd_monitor?
- If you do use the Dirichlet version of the boundary condition, then you would need to use PJFNK, because you wouldn't otherwise be able to capture the non-local dependence of the residual on the integrated current. When using PJFNK it is absolutely critical that the residual functions be scaled well so that finite differencing of the residuals doesn't yield garbage.
- have you tried ramping up the potential?
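For the ramp, I mean something along these lines (a minimal sketch; the names, the 1 ns ramp time, and the kV-scaled bias value are placeholders for whatever your input actually uses):

[Functions]
  [./potential_ramp]
    type = ParsedFunction
    # ramp the applied bias from 0 to -1.25 kV over the first nanosecond
    value = '-1.25 * min(t / 1e-9, 1)'
  [../]
[]

[BCs]
  [./potential_applied]
    type = FunctionDirichletBC
    variable = potential
    boundary = 'electrode'
    function = potential_ramp
  [../]
[]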

Shane Keniley

Oct 2, 2019, 10:42:24 AM
to zapdos-users
Hey Alex,

- Are you seeing oscillations in any of the solution fields?

I don't really see any spurious oscillations, no. Running with a straight DC bias (i.e. no circuit), the solution looks pretty smooth. When I'm running with the circuit BC I haven't been able to get the simulation past timestep 0 yet.

- what line search are you using?
I've tried the default, basic, and none. Basic and none cause the nonlinear residual to bounce up and down before it fails so I usually just stick to default.

- you said that you tried with field split. How does it run with LU? What is the condition number if you run with -pc_type svd -pc_svd_monitor?
Running with LU and without field split? Pretty much the same behavior as with other preconditioners. The solution will run with a straight DC bias, but with a circuit BC it quickly gets to a nonlinear residual of around 1e-3 and then slowly goes down from there. It almost always gets to 50 Nonlinear iterations without converging and fails.

To run with svd I needed to decrease the size of the problem dramatically so the mesh is very coarse, but this is the output I get running with pc_svd_monitor:
      SVD: condition number 7.091545651196e+06, 0 of 1840 singular values are (nearly) zero
      SVD: smallest singular values: 2.043477444858e-07 2.087756735347e-07 2.115385222108e-07 2.647895787156e-07 2.738410194476e-07
      SVD: largest singular values : 1.401713920667e+00 1.401761762183e+00 1.402421242063e+00 1.408907242099e+00 1.449141358740e+00

- If you do use the Dirichlet version of the boundary condition, then you would need to use PJFNK, because you wouldn't otherwise be able to capture the non-local dependence of the residual on the integrated current. When using PJFNK it is absolutely critical that the residual functions be scaled well so that finite differencing of the residuals doesn't yield garbage.
Good to know! 

- have you tried ramping up the potential?

 Yes. It doesn't seem to help at all.

Alexander Lindsay

Oct 2, 2019, 11:37:59 AM
to zapdos...@googlegroups.com
Sounds like plasmas are just too non-linear and we should all give up and go home! Haha I'm just kidding...

If you could create a branch with an input file that exhibits this behavior, that would be great. I'd be interested in looking at it.

Alex


Shane Keniley

Oct 2, 2019, 12:27:31 PM
to zapdos-users

Sounds like plasmas are just too non-linear and we should all give up and go home! Haha I'm just kidding...
You gave me a heart attack...

I've uploaded a 2d_test branch with the relevant files. The input file I'm talking about is pin_water_new01.i. pin_water_new01.geo is a better refined mesh, and pin_water_new02.geo is the coarser mesh I ran with pc_type = svd.

Alexander Lindsay

Oct 7, 2019, 2:20:38 AM
to zapdos...@googlegroups.com
I'm running pin_water_new01.i and not even the first time step converges. Is that representative? Based on what you are saying, I would hope to get this to go a bit farther (to the ionization bullet).

Ah, I see that if I change to the Dirichlet version, then it runs. (Just like you said in your email haha). So while debugging what's going on with the physics, e.g. the **residual** statements, I recommend running with:

Outputs/checkpoint=true Executioner/compute_scaling_once=false Executioner/verbose=true -snes_linesearch_type basic -pc_type lu -snes_mf_operator
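If you'd rather keep those in the input file than on the command line, a rough equivalent (assuming a Transient executioner, merged with whatever Executioner settings you already have; -snes_mf_operator corresponds to MOOSE's PJFNK solve type, and compute_scaling_once only matters with automatic scaling on) is:

[Executioner]
  type = Transient
  solve_type = PJFNK
  automatic_scaling = true
  compute_scaling_once = false
  verbose = true
  petsc_options_iname = '-pc_type -snes_linesearch_type'
  petsc_options_value = 'lu basic'
[]

[Outputs]
  checkpoint = true
[]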

I would recommend trying out the `CircuitDirichletPotential` object. I don't actually know whether it "works". We have a syntax check on it, but no physics tests use it. I think the concept is good though...it's a pure Dirichlet-type condition, which you seem to grasp more intuitively (and rightfully so). It takes current as a postprocessor value; because of that, PJFNK becomes especially important to use. Without the circuit feedback (i.e. just using DirichletBC), I think this discharge will just run away (and become very difficult to solve). I have the simulation still running, and I'm just seeing the density continue to grow near the electrode (after the initial plasma bullet phase).

Can you report back after you try that object?

Alex




Shane Keniley

Oct 7, 2019, 10:20:44 AM
to zapdos-users
Yes, that's exactly the behavior I'm seeing. I think I managed to get it past the first timestep once, and I can't recall what I did to get that running - I think the timestep was incredibly small (10^-16 or so) and the variable scaling was terrible (automatic scaling was turned off and I was just playing around with values). It failed immediately after the first timestep though. There's clearly something weird going on with this BC. 

I would recommend trying out the `CircuitDirichletPotential` object. I don't actually know whether it "works". We have a syntax check on it, but no physics tests use it. I think the concept is good though...it's a pure Dirichlet-type condition, which you seem to grasp more intuitively (and rightfully so). It takes current as a postprocessor value; because of that, PJFNK becomes especially important to use.

It looks like there isn't a postprocessor for calculating current in Zapdos at the moment, but there is a SideTotFluxIntegral postprocessor that, as far as I can tell, only deals with advective flux. I'll modify that really quickly to compute current and report back with the results.
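For my own bookkeeping, what I'm aiming for is roughly the total conduction current through the electrode surface,

I \approx e \int_{electrode} (\Gamma_i - \Gamma_e) \cdot \hat{n} \, dA

so both the advective (drift) and diffusive parts of the fluxes need to end up in the integrand, which is why SideTotFluxIntegral on its own isn't quite enough.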

Without the circuit feedback (i.e. just using DirichletBC), I think this discharge will just run away (and become very difficult to solve). I have the simulation still running, and I'm just seeing the density continue to grow near the electrode (after the initial plasma bullet phase).

I agree, and that's why this is so concerning to me. I ran this for hours and it just runs slower and slower until it gives up. Like Corey said, at 1 kV this would just arc. (Running at 500 V with a DirichletBC seems to run and converge to a solution though, albeit slowly. That's probably a corona discharge.) I could run a pulsed discharge with just a DirichletBC by turning off the potential BC after ~10 ns or so and do some studies with that, but most experimental setups use ballast resistors to run a stable DC discharge, so I'd really like to have this working.

Shane Keniley

Oct 7, 2019, 12:35:26 PM
to zapdos-users
Okay, I've uploaded a new test (pin_water_new04.i, with the geo file pin_water_center04.geo) and the postprocessor. (I've also made a modification to the circuit BC to make it compatible with my postprocessor...there was some double counting of _r_units.) The new mesh is a little less refined in the center and more refined along the bottom boundary. It should run a bit faster than the old one.

Good news: This setup runs with PJFNK! It's slower than Newton obviously and that's annoying, but it works. I have absolutely no idea if my equations are right in terms of signs and units, but it's changing the potential as the simulation runs so it's at least qualitatively correct. I'm running a simulation now to see if it will converge to some kind of steady state.

So I guess the question is...what's up with the Neumann boundary condition? 

Shane Keniley

Oct 7, 2019, 1:24:46 PM
to zapdos-users
Another update: I started seeing some ugly-looking oscillations in the argon ion density near the electrode tip around time = 10^-7 seconds (see attached image). The problem started running into convergence issues around this time as well, so I assume these oscillations are related. I thought the mesh might just be too coarse around the electrode, but when I tried halving the characteristic length around those nodes I got a negative Jacobian error! I've never seen that before.

I assume this BC actually ran because the postprocessor was only executing at its default time (timestep_end, I assume?), so the fluxes were not coupled as strongly as in the Neumann BC. When I executed it on every nonlinear step, the solve failed to converge very quickly. Executing it on every linear step ran, but the timestep wouldn't move above 0.1 picoseconds.
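For reference, the execute_on settings I've been toggling look roughly like this (the type stands in for my modified, current-computing postprocessor, so treat all of the names and parameters as illustrative):

[Postprocessors]
  [./current]
    type = SideTotFluxIntegral
    variable = em
    boundary = 'electrode'
    # the default is (I believe) timestep_end; adding linear/nonlinear tightens the coupling
    execute_on = 'linear nonlinear timestep_end'
  [../]
[]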
pin_water_test_circuit.png

Shane Keniley

Oct 7, 2019, 4:24:46 PM
to zapdos-users
Okay, sorry for all the updates/emails. I'm getting better results now though! I added a proper Jacobian term to the circuit BC (not sure if that's necessary with PJFNK because of the whole "Jacobian-free" part, but I thought it might help the initial preconditioning at least), and I used the executioner and preconditioning options you gave me in your last post. I also turned the postprocessor on for every nonlinear AND linear step. I forgot to turn steady_state_check on, but I got to a point where nothing changed for a couple of timesteps. The nonlinear residual was getting to about 10^-13 and wouldn't get any lower. I'm calling that steady state.

I'm not sure why the electron (and ion) density gets so large in the area to the right; I'd expect the density to be higher directly below the tip. I also don't know whether my units in the current or BC calculation are correct, like I said before. I'll need to check that. Or maybe this was a "false" steady state and the problem just wouldn't converge for some reason. I have no idea, but this is getting closer to something I can work with! I'm still unsure about why the Neumann BC is misbehaving if this one is working. If the implementation were wrong I wouldn't expect it to work in 1D, but it clearly does.
ne.png
ne_vs_narp_center.png

Alexander Lindsay

Oct 8, 2019, 6:36:23 AM
to zapdos...@googlegroups.com
Awesome that you're getting some results! The line plot for the center density is very encouraging. As far as the higher electron density on the outside...you said you're not setting any BCs on those boundaries right? So what you're implicitly doing is setting a no-flux condition, which probably isn't really valid for your mesh size. You'd probably have to make the domain very large for there to actually be no ions/electrons to flux out at those outer boundaries. I don't know what the proper solution should be for you here. Maybe Corey can chime in because I know he's done some 2D discharges. I believe Comsol used to put a dielectric or something on non-electrode boundaries that would accumulate charge and then force the discharge to the center. That would probably introduce additional convergence challenges however (maybe not).

Regarding the current units...I feel like if you switched to RZ then maybe they would be correct...? I saw that you had RZ commented out at least when I checked-out your branch. Not sure if there was a reason for that.


Shane Keniley

Oct 8, 2019, 10:39:02 AM
to zapdos-users
As far as the higher electron density on the outside...you said you're not setting any BCs on those boundaries right? So what you're implicitly doing is setting a no-flux condition, which probably isn't really valid for your mesh size. You'd probably have to make the domain very large for there to actually be no ions/electrons to flux out at those outer boundaries.
Correct, I only have boundary conditions on the electrode and the bottom surface. I can certainly try making it larger; that would be an easy (but costly) fix. Would setting a do-nothing boundary condition be better? Unless I'm mistaken, that would just let the boundary be open rather than arbitrarily forcing the flux to be zero, so it might allow things to advect out of the system more naturally. 

I believe Comsol used to put a dielectric or something on non-electrode boundaries that would accumulate charge and then force the discharge to the center. That would probably introduce additional convergence challenges however (maybe not).
I tried implementing a charge accumulation boundary condition last year, and it just would not converge at all. There should be a post somewhere on here about it...If I remember correctly, we tried including the ODE as a material property using stateful Material properties. I'm not sure how best to approach that problem. One thing at a time!

Regarding the current units...I feel like if you switched to RZ then maybe they would be correct...? I saw that you had RZ commented out at least when I checked-out your branch. Not sure if there was a reason for that.
Haha the reason is almost entirely that I don't understand that option, mostly with regards to _r_units. Actually, that's a good question in general. If I'm scaling the domain by 1e3 (e.g. the mesh length is ~1, but the physical domain size is ~1mm in each dimension), where should I scale the value of current? In 2D XYZ the postprocessor is integrated along the electrode, so my interpretation was that it was essentially a line integral and the resulting total current calculation would be off by a factor of 1e3, so I scaled the value inside the postprocessor by 1e-3. The missing "depth" of the area was included (in physical units) as the _A parameter in the CircuitDirichletPotential BC. 

In 2D RZ, would the current integral need to be scaled by 1 mm^2 instead? I'm not sure how RZ works.

Alexander Lindsay

Oct 8, 2019, 11:48:37 AM
to zapdos...@googlegroups.com
On Tue, Oct 8, 2019 at 7:39 AM Shane Keniley <smkfi...@gmail.com> wrote:
As far as the higher electron density on the outside...you said you're not setting any BCs on those boundaries right? So what you're implicitly doing is setting a no-flux condition, which probably isn't really valid for your mesh size. You'd probably have to make the domain very large for there to actually be no ions/electrons to flux out at those outer boundaries.
Correct, I only have boundary conditions on the electrode and the bottom surface. I can certainly try making it larger; that would be an easy (but costly) fix. Would setting a do-nothing boundary condition be better? Unless I'm mistaken, that would just let the boundary be open rather than arbitrarily forcing the flux to be zero, so it might allow things to advect out of the system more naturally. 

It is certainly worth a try. Yea the reflecting boundary conditions there could definitely lead to the charge accumulation we are seeing.


I believe Comsol used to put a dielectric or something on non-electrode boundaries that would accumulate charge and then force the discharge to the center. That would probably introduce additional convergence challenges however (maybe not).
I tried implementing a charge accumulation boundary condition last year, and it just would not converge at all. There should be a post somewhere on here about it...If I remember correctly, we tried including the ODE as a material property using stateful Material properties. I'm not sure how best to approach that problem. One thing at a time!

My current recommendation would be to try using NodalKernels the next time you want to tackle it.


Regarding the current units...I feel like if you switched to RZ then maybe they would be correct...? I saw that you had RZ commented out at least when I checked-out your branch. Not sure if there was a reason for that.
Haha the reason is almost entirely that I don't understand that option, mostly with regards to _r_units. Actually, that's a good question in general. If I'm scaling the domain by 1e3 (e.g. the mesh length is ~1, but the physical domain size is ~1mm in each dimension), where should I scale the value of current? In 2D XYZ the postprocessor is integrated along the electrode, so my interpretation was that it was essentially a line integral and the resulting total current calculation would be off by a factor of 1e3, so I scaled the value inside the postprocessor by 1e-3. The missing "depth" of the area was included (in physical units) as the _A parameter in the CircuitDirichletPotential BC. 

In 2D RZ, would the current integral need to be scaled by 1 mm^2 instead? I'm not sure how RZ works.

Yea, you're multiplying by r*dr as opposed to the dx line integral, so if you were scaling by 1e-3 before then hopefully 1e-6 would do the trick.
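Spelling that out with your 1 mesh unit = 1 mm scaling (just a back-of-the-envelope check):

XY:  I = L_{depth} \int j \, dx,      dx \to 10^{-3} m per mesh unit, so one factor of 10^{-3}
RZ:  I = \int j \, 2 \pi r \, dr,     r \, dr \to 10^{-6} m^2 per (mesh unit)^2, so a factor of 10^{-6}

and if I remember right the RZ coordinate system already folds the 2 \pi r weight into boundary integrals, so you shouldn't need to add that piece by hand.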


John Haase

Oct 8, 2019, 8:37:50 PM
to zapdos-users
Shane,

It might make sense to do a Robin BC to approximate the condition at infinity. Here are a few pages of notes to get you started.
p2.jpg
p3.jpg
p4.jpg
p1.jpg
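In case it helps before you dig into the scans: the general flavor is that if the potential decays roughly like 1/r far from the discharge, a first-order asymptotic condition on the outer boundary takes the form

\frac{\partial V}{\partial n} + \frac{V}{r} = 0

which ties the value and its normal derivative together instead of forcing either one to zero. (That's only the generic version; treat it as a starting point rather than the exact condition in the notes.)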

Shane Keniley

Oct 11, 2019, 12:44:07 AM
to zapdos-users
I think I managed to get "do nothing" conditions working...but I'm still seeing some weird behavior. I implemented a kind of pseudo-open BC for the potential based on the first-order expansion in this archived paper. (Equation 16, although it's hard-coded for only the right-hand boundary for now)...I'm not sure if that's correct.

I started to get some large oscillations in the bottom-right corner in one simulation, but only for the argon ions - everything else was fine. See the attached images. The line plot is along the white line shown in the 2D plot. I have a default BC on the left (for symmetry), outflow BCs on the top and right, and the ion, electron, and energy BCs on the cathode and grounded bottom. The bias was set to -800 volts with a ramp-up time of 1 ns, and I believe there are no BCs applied to the potential on the right or top wall. I'm not sure if this is a mesh refinement issue, if one (or several) of my BCs are incorrect, or what. If anyone has any insight I'd appreciate it! The input file is pin_bctest.i and should be uploaded to my 2d_test branch.

In any case it looks like I'm getting closer to having a good stable discharge in 2D. I'm going to go ahead and throw a thin water layer in to see how it works. 

It might make sense to do a Robin BC to approximate the condition at infinity. Here are a few pages of notes to get you started.
Are "do nothing" conditions not Robin BCs? I'll take a look at this, thanks for the images. Looks like it might be what I need.
em_vs_arp.png
arp_density.png

Alexander Lindsay

Oct 11, 2019, 12:44:08 PM
to zapdos...@googlegroups.com
A Do Nothing BC is pretty much just saying that the flux is equal to the flux. In the strong sense of the PDE it leaves the problem underconstrained. However, it works in the weak form and yields a condition (outlined by Griffiths in his paper here, at least for a simple advection-diffusion problem) that the (p+1)th derivative of the solution variable is equal to zero somewhere near the outflow boundary.

A Robin BC describes a relationship between the solution value and its derivative on the boundary. I haven't looked into the equations that John shared, but I feel like they would be great to try out!

What does the potential field look like in the bottom-right corner?


Shane Keniley

Oct 11, 2019, 5:22:52 PM
to zapdos-users
What does the potential field look like in the bottom-right corner?
I've attached a plot of the potential along the same line as the plot in my last post. The potential looks mostly normal, except for the fact that it gets slightly above 0 volts near the wall before falling back to 0 at the boundary. I think the argon ion oscillations were just a result of my failed attempt at an open boundary condition for the potential. My guess is that the argon ions are just getting trapped in that small potential well? I'm not sure what else would be going on, because the same doesn't happen when I simply use the default BC (i.e. zero flux) for the potential. Electrons increase there too, but they don't "explode" like the argon ions do.

I've also attached another two 2D plots of a discharge ("do nothing" on the right and top for ions and electrons, but zero flux for the potential). This is what I'm getting consistently now, and the simulations are relatively fast! I'm interested in using this to model a simple plasma-liquid interaction system, so I don't think the right boundary matters too much (I really only care about what's happening just below the needle tip, and that region is converged by the time things start moving out toward the right boundary), but it bothers me that the density increases so much on the right. It definitely looks a lot better than the one I posted three days ago, so I think the do nothing BCs are working as intended; I tested them on a simple Gaussian distribution that flowed out due to advection and diffusion, and it didn't seem to build up at the boundary, and this is what COMSOL uses as well for electron, energy, and heavy species transport. I think my only issue is the Poisson equation. An "open" boundary for elliptic problems is apparently not a trivial thing. I think I might need to look into applying a dielectric for the time being.

A Robin BC describes a relationship between the solution value and its derivative on the boundary. I haven't looked into the equations that John shared, but I feel like they would be great to try out!
Yeah, I don't know why I was confused about that. I've used Robin BCs before (on paper, not in MOOSE).
potential_bottom.png
narp_233us.png
ne_233us.png

Alexander Lindsay

Oct 11, 2019, 6:58:17 PM
to zapdos...@googlegroups.com
On Fri, Oct 11, 2019 at 2:22 PM Shane Keniley <keni...@illinois.edu> wrote:
What does the potential field look like in the bottom-right corner?
I've attached a plot of the potential along the same line as the plot in my last post. The potential looks mostly normal, except for the fact that it gets slightly above 0 volts near the wall before falling back to 0 at the boundary. I think the argon ion oscillations were just a result of my failed attempt at an open boundary condition for the potential. My guess is that the argon ions are just getting trapped in that small potential well? I'm not sure what else would be going on, because the same doesn't happen when I simply use the default BC (i.e. zero flux) for the potential. Electrons increase there too, but they don't "explode" like the argon ions do.

I've also attached another two 2D plots of a discharge ("do nothing" on the right and top for ions and electrons, but zero flux for the potential). This is what I'm getting consistently now, and the simulations are relatively fast! I'm interested in using this to model a simple plasma-liquid interaction system, so I don't think the right boundary matters too much (I really only care about what's happening just below the needle tip, and that region is converged by the time things start moving out toward the right boundary), but it bothers me that the density increases so much on the right. It definitely looks a lot better than the one I posted three days ago, so I think the do nothing BCs are working as intended; I tested them on a simple Gaussian distribution that flowed out due to advection and diffusion, and it didn't seem to build up at the boundary, and this is what COMSOL uses as well for electron, energy, and heavy species transport. I think my only issue is the Poisson equation. An "open" boundary for elliptic problems is apparently not a trivial thing. I think I might need to look into applying a dielectric for the time being.

Definitely a vast improvement! Yea I'm also bothered by the fact that the Argon and electron densities are fairly substantial out near the boundaries. I'm glad that the open boundary condition for electrons, ions, and energy is working as intended. It's definitely useful I think to be comparing your BCs to what COMSOL uses. Do you know what they do for the potential boundary condition? I believe sometimes they use that surface charge boundary condition I talked about.

This is really neat exploring that you're doing here Shane. I appreciate you keeping us updated.


John Haase

Oct 12, 2019, 8:15:38 PM
to zapdos-users
Shane,
Building on Alex's point about Ar/e densities at the boundary, you might want to do a study of the maximum domain radius to see if that value decays to something closer to background.

Shane Keniley

Oct 15, 2019, 12:03:37 PM
to zapdos-users
Shane,
Building on Alex's point about Ar/e densities at the boundary, you might want to do a study of the maximum domain radius to see if that value decays to something closer to background.
Yeah, I've thought about that. I increased the domain from ~2mm to 0.1 cm just to see if it would make a difference. It did - the problem actually ran to end_time instead of failing to converge, but the species still increased dramatically near the edge. That said, they certainly seemed to level off a bit more. I wonder if I'm actually seeing a kind of corona discharge and my domain is just too narrow to encompass it fully. 0.1cm is still very small. I'll try running even larger.

Do you know what they do for the potential boundary condition? I believe sometimes they use that surface charge boundary condition I talked about.
I don't personally have access to COMSOL so this is second-hand information, but apparently the default is a zero-charge boundary condition, or \hat{n} \cdot D = 0 (where D = \epsilon_0 E...sorry, I don't know how to put equations into these posts). So it is effectively a zero-flux condition on the potential. I've definitely seen examples with the dielectric condition you've mentioned, though.

On another note, I've run into a new issue that I have no idea how to resolve. I tried adding a water layer to the mesh and now I'm just getting a segmentation fault. I anticipated trouble the first time I tried this in 2D, but I was hoping for a more descriptive error than "Segmentation fault (core dumped)". I tried including MeshModifiers to define SideSetsBetweenSubdomains as we did in 1D for the plasma-liquid test case, but that didn't help. I've uploaded my test in my branch 2d_test (input pin_water_rxn_3.i and pin_water_rxn_3.geo).
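The interface definition I tried looks roughly like this (the block IDs and boundary names are from memory, so don't take them literally):

[MeshModifiers]
  [./water_side]
    type = SideSetsBetweenSubdomains
    master_block = '0'   # gas block
    paired_block = '1'   # water block
    new_boundary = 'gas_water_interface'
  [../]
  [./gas_side]
    type = SideSetsBetweenSubdomains
    master_block = '1'
    paired_block = '0'
    new_boundary = 'water_gas_interface'
  [../]
[]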

I found a previous thread about this in the MOOSE user group, but I don't really understand what the solution was (if there was any).

Alexander Lindsay

Oct 15, 2019, 12:18:11 PM
to zapdos...@googlegroups.com
Can you run in debug mode with a debugger and get a backtrace?


Shane Keniley

Oct 15, 2019, 1:43:11 PM
to zapdos-users
When I run in a debugger the problem doesn't seg fault at all. I get this message during the "computing initial residual" step:

Computing initial residual ....Assertion `i < _val.size()' failed.

i = 0

_val.size() = 0


and then this error: 

[0] /Users/keniley/projects/moose/scripts/../libmesh/installed/include/libmesh/dense_vector.h, line 403, compiled Oct 15 2019 at 12:15:41

We caught a libMesh error

Nonlinear solve did not converge due to DIVERGED_FUNCTION_DOMAIN iterations 0

 Solve Did NOT Converge!


The problem continues to run though, which is odd - the solve just proceeds and then fails to converge.

Alexander Lindsay

Oct 15, 2019, 2:59:11 PM
to zapdos...@googlegroups.com
Set breakpoints on `report_error` and `abort` and then run again. Once it stops, you should be able to move through the stack and see where the failed assertion is coming from. You can also send the output of `bt`


Shane Keniley

Oct 15, 2019, 3:21:14 PM
to zapdos-users
Setting breakpoint on report_error: 

Time Step 1, time = 1e-12

          old time = 0

                dt = 1e-12

            old dt = 1e-12


Computing initial residual .....Assertion `i < _val.size()' failed.

i = 0

_val.size() = 0



Process 4969 stopped

* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1

    frame #0: 0x0000000106e74ce7 libmesh_dbg.0.dylib`libMesh::MacroFunctions::report_error(file="/Users/keniley/projects/moose/scripts/../libmesh/installed/include/libmesh/dense_vector.h", line=403, date="Oct 15 2019", time="12:15:41") at libmesh_common.C:74:7

   71    // It is possible to have an error *inside* report_error; e.g. from

   72    // print_trace.  We don't want to infinitely recurse.

   73    static bool reporting_error = false;

-> 74    if (reporting_error)

   75      {

   76        // I heard you like error reporting, so we put an error report

   77        // in report_error() so you can report errors from the report.


Running bt: 

* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1

  * frame #0: 0x0000000106e74ce7 libmesh_dbg.0.dylib`libMesh::MacroFunctions::report_error(file="/Users/keniley/projects/moose/scripts/../libmesh/installed/include/libmesh/dense_vector.h", line=403, date="Oct 15 2019", time="12:15:41") at libmesh_common.C:74:7

    frame #1: 0x00000001000e338f libzapdos-dbg.0.dylib`libMesh::DenseVector<double>::operator(this=0x000000010e351b38, i=0)(unsigned int) at dense_vector.h:403:3

    frame #2: 0x00000001014ef71c libmoose-dbg.0.dylib`IntegratedBC::computeResidual(this=0x000000010e351818) at IntegratedBC.C:91:7

    frame #3: 0x0000000101dfb19f libmoose-dbg.0.dylib`ComputeResidualThread::onBoundary(this=0x00007ffeefbfd780, elem=0x0000000111c6cb60, side=2, bnd_id=5) at ComputeResidualThread.C:143:13

    frame #4: 0x000000010173dfa0 libmoose-dbg.0.dylib`ThreadedElementLoopBase<libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*> >::operator(this=0x00007ffeefbfd780, range=0x000000010d4ef170, bypass_threading=false)(libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*> const&, bool) at ThreadedElementLoopBase.h:221:15

    frame #5: 0x0000000101e79f42 libmoose-dbg.0.dylib`void libMesh::Threads::parallel_reduce<libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*>, ComputeResidualThread>(range=0x000000010d4ef170, body=0x00007ffeefbfd780) at threads_pthread.h:380:5

    frame #6: 0x0000000101e73b16 libmoose-dbg.0.dylib`NonlinearSystemBase::computeResidualInternal(this=0x000000010d828418, tags=size=3) at NonlinearSystemBase.C:1385:5

    frame #7: 0x0000000101e732cd libmoose-dbg.0.dylib`NonlinearSystemBase::computeResidualTags(this=0x000000010d828418, tags=size=3) at NonlinearSystemBase.C:704:5

    frame #8: 0x000000010185dec9 libmoose-dbg.0.dylib`FEProblemBase::computeResidualTags(this=0x000000010d826018, tags=size=3) at FEProblemBase.C:4917:8

    frame #9: 0x000000010185d059 libmoose-dbg.0.dylib`FEProblemBase::computeResidualInternal(this=0x000000010d826018, soln=0x0000000111d70650, residual=0x0000000111d70ae0, tags=size=3) at FEProblemBase.C:4755:5

    frame #10: 0x000000010185cc9d libmoose-dbg.0.dylib`FEProblemBase::computeResidual(this=0x000000010d826018, soln=0x0000000111d70650, residual=0x0000000111d70ae0) at FEProblemBase.C:4710:3

    frame #11: 0x000000010185cac6 libmoose-dbg.0.dylib`FEProblemBase::computeResidualSys(this=0x000000010d826018, (null)=0x0000000111d70060, soln=0x0000000111d70650, residual=0x0000000111d70ae0) at FEProblemBase.C:4687:3

    frame #12: 0x0000000101e66cb8 libmoose-dbg.0.dylib`NonlinearSystem::solve(this=0x000000010d828418) at NonlinearSystem.C:159:17

    frame #13: 0x000000010185abfa libmoose-dbg.0.dylib`FEProblemBase::solve(this=0x000000010d826018) at FEProblemBase.C:4444:10

    frame #14: 0x000000010143f694 libmoose-dbg.0.dylib`FEProblemSolve::solve(this=0x000000010e097d08) at FEProblemSolve.C:145:12

    frame #15: 0x0000000101444f60 libmoose-dbg.0.dylib`PicardSolve::solveStep(this=0x000000010e097de0, begin_norm_old=1.7976931348623157E+308, begin_norm=0x000000011252aff0, end_norm_old=1.7976931348623157E+308, end_norm=0x0000000112501fb0, relax=false, relaxed_dofs=size=0) at PicardSolve.C:378:22

    frame #16: 0x0000000101443510 libmoose-dbg.0.dylib`PicardSolve::solve(this=0x000000010e097de0) at PicardSolve.C:199:28

    frame #17: 0x0000000101ce6d14 libmoose-dbg.0.dylib`TimeStepper::step(this=0x000000010d38c5a8) at TimeStepper.C:162:43

    frame #18: 0x000000010144af63 libmoose-dbg.0.dylib`Transient::takeStep(this=0x000000010e097c18, input_dt=-1) at Transient.C:415:18

    frame #19: 0x000000010144a6b4 libmoose-dbg.0.dylib`Transient::execute(this=0x000000010e097c18) at Transient.C:309:5

    frame #20: 0x000000010232058f libmoose-dbg.0.dylib`MooseApp::executeExecutioner(this=0x000000010e05a818) at MooseApp.C:869:19

    frame #21: 0x0000000102321251 libmoose-dbg.0.dylib`MooseApp::run(this=0x000000010e05a818) at MooseApp.C:979:5

    frame #22: 0x00000001000020ef zapdos-dbg`main(argc=3, argv=0x00007ffeefbfee78) at main.C:33:8

    frame #23: 0x00007fff780b6015 libdyld.dylib`start + 1

    frame #24: 0x00007fff780b6015 libdyld.dylib`start + 1


The error occurs while computing the residuals of the boundary conditions, as far as I can tell. (Which makes sense, given that the only difference here from my previous simulation is a second region with an interface.) ComputeResidualThread.C calls "bc->computeResidual()", which goes into IntegratedBC.C:

   89    for (_qp = 0; _qp < _qrule->n_points(); _qp++)

   90      for (_i = 0; _i < _test.size(); _i++)

-> 91        _local_re(_i) += _JxW[_qp] * _coord[_qp] * computeQpResidual();

   92  

   93    accumulateTaggedLocalResidual();


Which is where we find our error:

400 inline

   401 T & DenseVector<T>::operator () (const unsigned int i)

   402 {

-> 403   libmesh_assert_less (i, _val.size());

   404 

   405   return _val[i];

   406 }


So I'm guessing this is an issue with the integrated boundary condition at the interface, though I still can't figure out what the issue is or how to resolve it from this.

Shane Keniley

Oct 16, 2019, 8:00:47 AM
to zapdos-users
So here's a fun fact that I am totally the first person in the history of using MOOSE to discover: it turns out that if you tell a variable to exist in block 0, but then try to apply a BC to that variable on a boundary that exists in a different block, you get a segmentation fault. In my defense, it was only a single variable out of ~6 that I forgot to change the boundary ID for. Still kind of embarrassing.
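Schematically, the mismatch was the 2D version of this (the variable, BC type, and boundary names are made up for illustration):

[Variables]
  [./em_aq]
    block = 1                  # aqueous electrons only live in the water block
  [../]
[]

[BCs]
  [./em_aq_flux]
    type = NeumannBC           # any integrated BC will trip over this
    variable = em_aq
    boundary = 'cathode_tip'   # oops: this boundary only touches the gas block
    value = 0
  [../]
[]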

The simulation is running now! This is pretty neat. I'll post some results when the simulation ends (or breaks; either way). 

Shane Keniley

Oct 18, 2019, 12:10:45 PM
to zapdos-users
Good news: the simulation works! In the attached plot you can see the water layer in that thin blue stripe at the bottom of the domain. The tiny red band just above that is the aqueous electrons. 

Bad news: Estimates I've seen of the solvated electron depth are typically around 10-100 nm, so I'm going to need nm precision near the interface. Needless to say, the simulation gets very big, very fast. I've tried using adaptive meshing to cut down on computational cost, but it just heavily refines the already thin region near the interface so it doesn't really help. This test only has two reactions in the water layer and it was already taking something like ~2 minutes per timestep, and the timesteps wouldn't go above dt = 10^-11. I wonder if I can get away with doing a box adaptive refinement of just the first 10-20 nm of the interface while leaving everything else relatively coarse. 
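The kind of box refinement I have in mind would look something like this (just a sketch; the coordinates are placeholders in my mm mesh units, with the interface sitting near y = 0):

[Adaptivity]
  marker = interface_box
  max_h_level = 4
  [./Markers]
    [./interface_box]
      type = BoxMarker
      bottom_left = '0 -2e-5 0'   # ~20 nm below the interface
      top_right = '5 2e-5 0'      # ~20 nm above it
      inside = refine
      outside = do_nothing
    [../]
  [../]
[]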

Still, the fact that I'm dealing with computational cost rather than trying to figure out how to get the simulation working is a vast improvement.

-Shane

(Sorry if you all are getting multiple notifications from this post. My responses kept getting deleted for some reason.)
electron_2d_results.png

Alexander Lindsay

Oct 18, 2019, 12:18:51 PM
to zapdos...@googlegroups.com
On Fri, Oct 18, 2019 at 9:10 AM Shane Keniley <smkfi...@gmail.com> wrote:
Good news: the simulation works! In the attached plot you can see the water layer in that thin blue stripe at the bottom of the domain. The tiny red band just above that is the aqueous electrons. 

Awesome!


Bad news: Estimates I've seen of the solvated electron depth are typically around 10-100 nm, so I'm going to need nm precision near the interface. Needless to say, the simulation gets very big, very fast. I've tried using adaptive meshing to cut down on computational cost, but it just heavily refines the already thin region near the interface so it doesn't really help.

I'm confused...isn't this where you want the mesh to be refined? E.g. isn't this where you want the precision to be?



Shane Keniley

Oct 18, 2019, 12:34:22 PM
to zapdos-users

Bad news: 
Estimates I've seen of the solvated electron depth are typically around 10-100 nm, so I'm going to need nm precision near the interface. Needless to say, the simulation gets very big, very fast. I've tried using adaptive meshing to cut down on computational cost, but it just heavily refines the already thin region near the interface so it doesn't really help.

I'm confused...isn't this where you want the mesh to be refined? E.g. isn't this where you want the precision to be?

Sorry, yeah, that was confusing. It definitely refines the right region. What I meant was that it didn't help with computational time. The vast majority of the mesh nodes were already concentrated in that thin region near the interface without refinement, so doubling the number of nodes in that ~few nm region effectively doubles the total problem size. It did exactly what I asked it to do; it was just painfully slow.

Is there any way to have different levels of refinement in each region? The mesh needs to be much finer in the water region than in the plasma region, so I'd like to set something like ~2 levels of refinement in one block but 4-5 levels of refinement in the other. 

Alexander Lindsay

Oct 18, 2019, 4:40:24 PM
to zapdos...@googlegroups.com
If using InterfaceKernels, you must have a conformal mesh at the interface, i.e. each node on the interface must be shared between the subdomains. We could, however, look at using the mortar method to set up a different refinement level on each side of the interface. If you're looking to get this done before GEC, though, we probably won't be able to get that done in time.
