I see that you are still running the voltage at 1.25 kV. I know this voltage works when there is a water surface over the grounded plate, but wouldn't a voltage this high cause a spark discharge in reality? (I could be wrong; I haven't done too much experimental work.) I would suggest lowering the supply voltage to around 300-500 V and seeing if that helps.
--
You received this message because you are subscribed to the Google Groups "zapdos-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to zapdos-users...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/zapdos-users/ae379119-d82f-4ac2-8764-6a0983f96e50%40googlegroups.com.
- Are you seeing oscillations in any of the solution fields?
- What line search are you using?
- You said that you tried field split. How does it run with LU? What is the condition number if you run with -pc_type svd -pc_svd_monitor?
- If you do use the Dirichlet version of the boundary condition, then you would need to use PJFNK, because you wouldn't otherwise be able to capture the non-local dependence of the residual on the integrated current. When using PJFNK it is absolutely critical that the residual functions be scaled well, so that finite differencing of the residuals doesn't yield garbage.
- Have you tried ramping up the potential?
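To illustrate the PJFNK scaling point: in matrix-free Newton-Krylov, the Jacobian action is approximated by finite-differencing the residual, with a perturbation size chosen from the norm of the whole solution vector. Here is a generic toy sketch (nothing Zapdos-specific; the 1e8/1e-8 scale spread is made up) of how a badly scaled system corrupts that product:

```python
import numpy as np

def residual(u):
    # Toy residual with well-separated variable scales (hypothetical numbers).
    return np.array([u[0] ** 2, u[1] ** 2])

u = np.array([1.0e8, 1.0e-8])   # badly scaled solution vector
v = np.array([1.0, 1.0])        # Krylov direction

# Matrix-free Jacobian action: J*v ~ (F(u + h*v) - F(u)) / h, with h chosen
# from the norm of u (roughly what JFNK implementations do).
h = np.sqrt(np.finfo(float).eps) * np.linalg.norm(u) / np.linalg.norm(v)

jv_fd = (residual(u + h * v) - residual(u)) / h
jv_exact = 2.0 * u * v          # exact Jacobian action for this residual

rel_err = np.abs(jv_fd - jv_exact) / np.abs(jv_exact)
# The large-scale component is accurate; the small-scale one is garbage,
# because h is enormous relative to u[1].
print(rel_err)
```

Rescaling the variables (or residuals) so all components are O(1) removes the mismatch, which is essentially what MOOSE's scaling options are for.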
Sounds like plasmas are just too non-linear and we should all give up and go home! Haha I'm just kidding...
I would recommend trying out the `CircuitDirichletPotential` object. I don't actually know whether it "works": we have a syntax check on it, but no physics tests use it. I think the concept is good, though...it's a pure Dirichlet-type condition, which you seem to grasp more intuitively (and rightfully so). It takes the current as a postprocessor value; because of that, PJFNK becomes especially important to use.
Without the circuit feedback (e.g. just using a DirichletBC), I think this discharge will just run away (and become very difficult to solve). I still have the simulation running, and I'm just seeing the density continue to grow near the electrode (after the initial plasma bullet phase).
As far as the higher electron density on the outside...you said you're not setting any BCs on those boundaries, right? So what you're implicitly doing is setting a no-flux condition, which probably isn't really valid for your mesh size. You'd probably have to make the domain very large for there to actually be no ions/electrons fluxing out at those outer boundaries.
I believe COMSOL used to put a dielectric or something on non-electrode boundaries that would accumulate charge and then force the discharge to the center. That would probably introduce additional convergence challenges, however (or maybe not).
Regarding the current units...I feel like if you switched to RZ then maybe they would be correct...? I saw that you had RZ commented out, at least when I checked out your branch. Not sure if there was a reason for that.
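Back on the no-flux point: the buildup effect is easy to see in a toy 1D upwind advection problem (generic sketch, nothing Zapdos-specific; all numbers are made up). A zero-flux wall traps everything advected toward it, while an outflow boundary lets it leave:

```python
import numpy as np

# Toy 1D upwind advection toward the right boundary: a zero-flux (wall)
# boundary piles density up, while an outflow boundary lets it leave.
nx, L, v = 200, 1.0, 1.0
dx = L / nx
x = np.linspace(0.5 * dx, L - 0.5 * dx, nx)   # cell centers
dt = 0.5 * dx / v                             # CFL-stable time step
steps = 400                                   # total time = 1.0

def run(zero_flux_right):
    n = np.exp(-((x - 0.3) / 0.05) ** 2)      # initial Gaussian blob
    for _ in range(steps):
        F_out = v * n                          # upwind flux leaving each cell
        if zero_flux_right:
            F_out[-1] = 0.0                    # wall: nothing leaves the domain
        F_in = np.concatenate(([0.0], F_out[:-1]))
        n = n + dt / dx * (F_in - F_out)
    return n

m0 = (np.exp(-((x - 0.3) / 0.05) ** 2)).sum() * dx
n_wall = run(True)
n_open = run(False)
m_wall = n_wall.sum() * dx    # all mass trapped, piling up in the last cell
m_open = n_open.sum() * dx    # nearly all mass has advected out
```

With the wall, the total mass is conserved and the last cell's density grows far above the initial peak; with outflow, almost nothing remains, which is the behavior you'd want if the physical domain is effectively open there.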
> As far as the higher electron density on the outside...you said you're not setting any BCs on those boundaries right? So what you're implicitly doing is setting a no-flux condition, which probably isn't really valid for your mesh size. You'd probably have to make the domain very large for there to actually be no ions/electrons to flux out at those outer boundaries.

Correct, I only have boundary conditions on the electrode and the bottom surface. I can certainly try making it larger; that would be an easy (but costly) fix. Would setting a do-nothing boundary condition be better? Unless I'm mistaken, that would just let the boundary be open rather than arbitrarily forcing the flux to be zero, so it might allow things to advect out of the system more naturally.
> I believe Comsol used to put a dielectric or something on non-electrode boundaries that would accumulate charge and then force the discharge to the center. That would probably introduce additional convergence challenges however (maybe not).

I tried implementing a charge accumulation boundary condition last year, and it just would not converge at all. There should be a post somewhere on here about it... If I remember correctly, we tried including the ODE as a material property using stateful material properties. I'm not sure how best to approach that problem. One thing at a time!
> Regarding the current units...I feel like if you switched to RZ then maybe they would be correct...? I saw that you had RZ commented out at least when I checked-out your branch. Not sure if there was a reason for that.

Haha, the reason is almost entirely that I don't understand that option, mostly with regard to _r_units. Actually, that's a good question in general. If I'm scaling the domain by 1e3 (e.g. the mesh length is ~1, but the physical domain size is ~1 mm in each dimension), where should I scale the value of the current?

In 2D XYZ the postprocessor is integrated along the electrode, so my interpretation was that it was essentially a line integral and the resulting total current calculation would be off by a factor of 1e3, so I scaled the value inside the postprocessor by 1e-3. The missing "depth" of the area was included (in physical units) as the _A parameter in the CircuitDirichletPotential BC.

In 2D RZ, would the current integral need to be scaled by 1 mm^2 instead? I'm not sure how RZ works.
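To sanity-check the RZ question with concrete numbers (all values made up: `j`, `R`, and `depth` are illustrative, and I'm assuming RZ mode weights boundary integrals by 2*pi*r):

```python
import numpy as np

# Sketch with made-up numbers: what a "current" postprocessor returns in
# 2D XYZ vs 2D RZ for a uniform current density over a disc electrode.
j = 10.0          # current density [A/m^2], uniform for simplicity
R = 1.0e-3        # electrode radius [m]
r = np.linspace(0.0, R, 10001)

def trapezoid(f, x):
    # Plain trapezoidal rule (avoids NumPy version differences).
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

# 2D XYZ: the side integral is a line integral -> units of A/m.
# Converting to a physical current needs the missing out-of-plane depth.
depth = 1.0e-3    # assumed out-of-plane depth [m]
I_xyz = trapezoid(j * np.ones_like(r), r) * depth    # = j * R * depth

# 2D RZ: the integrand is weighted by 2*pi*r, so the same postprocessor
# already returns a current in A, with no depth parameter needed.
I_rz = trapezoid(j * 2.0 * np.pi * r, r)             # = j * pi * R^2

print(I_xyz, I_rz)
```

If that 2*pi*r weighting is indeed what RZ does, then a mesh scaled by 1e3 would make the RZ integral off by (1e3)^2, i.e. both dr and the r weight carry mesh units, which lines up with the 1 mm^2 guess.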
It might make sense to use a Robin BC to approximate the condition at infinity. Here are a few pages of notes to get you started.
What does the potential field look like in the bottom-right corner?
A Robin BC describes a relationship between the solution value and its derivative on the boundary. I haven't looked into the equations that John shared, but I feel like they would be great to try out!
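For concreteness, here is the generic shape of the idea (my sketch, not necessarily the form in John's notes): a Robin condition combines the solution value and its normal derivative, and if you assume the potential decays like 1/r far from the discharge, matching that decay at an outer boundary of radius r gives a first-order "absorbing" condition:

```latex
% Generic Robin condition on a boundary with outward normal n:
\[
  a\,u + b\,\frac{\partial u}{\partial n} = g .
\]
% Assuming the far field behaves like \phi \approx C/r, differentiating
% along r and eliminating C gives the open-boundary approximation
\[
  \frac{\partial \phi}{\partial n} + \frac{\phi}{r} = 0
  \quad \text{on the outer boundary,}
\]
% i.e. a Robin condition with a = 1/r, b = 1, g = 0.
```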
> What does the potential field look like in the bottom-right corner?

I've attached a plot of the potential along the same line as the plot in my last post. The potential looks mostly normal, except that it gets slightly above 0 V near the wall before falling back to 0 at the boundary. I think the argon ion oscillations were just a result of my failed attempt at an open boundary condition for the potential. My guess is that the argon ions are just getting trapped in that small potential well? I'm not sure what else would be going on, because the same doesn't happen when I simply use the default BC (e.g. zero flux) for the potential. Electrons increase there too, but they don't "explode" like the argon ions do.

I've also attached another two 2D plots of a discharge ("do nothing" on the right and top for ions and electrons, but zero flux for the potential). This is what I'm getting consistently now, and the simulations are relatively fast! I'm interested in using this to model a simple plasma-liquid interaction system, so I don't think the right boundary matters too much (I really only care about what's happening just below the needle tip, and that region is converged by the time things start moving out toward the right boundary), but it bothers me that the density increases so much on the right.

It definitely looks a lot better than the one I posted three days ago, so I think the do-nothing BCs are working as intended; I tested them on a simple Gaussian distribution that flowed out due to advection and diffusion, and nothing seemed to build up at the boundary. This is also what COMSOL uses for electron, energy, and heavy-species transport. I think my only issue is the Poisson equation. An "open" boundary for elliptic problems is apparently not a trivial thing. I think I might need to look into applying a dielectric for the time being.
Shane,

Building on Alex's point about Ar/e densities at the boundary, you might want to do a study of maximum domain radius to see whether that value decays to something closer to background.
Do you know what they do for the potential boundary condition? I believe sometimes they use that surface charge boundary condition I talked about.
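For reference, the surface-charge condition I've seen in the literature (sketching from memory; symbols are mine, and the exact form COMSOL uses may differ) couples an ODE for the accumulated surface charge density sigma_s to a jump condition on the electric displacement at the dielectric wall:

```latex
% n is the wall normal pointing out of the plasma; J_i and J_e are the ion
% and electron current densities arriving at the wall.
\[
  \frac{d\sigma_s}{dt} = \mathbf{n}\cdot\left(\mathbf{J}_i + \mathbf{J}_e\right),
  \qquad
  \mathbf{n}\cdot\left(\mathbf{D}_1 - \mathbf{D}_2\right) = \sigma_s ,
\]
% so the accumulated charge feeds back on the potential through the
% displacement-field jump across the plasma/dielectric interface.
```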
Computing initial residual ....Assertion `i < _val.size()' failed.
i = 0
_val.size() = 0
[0] /Users/keniley/projects/moose/scripts/../libmesh/installed/include/libmesh/dense_vector.h, line 403, compiled Oct 15 2019 at 12:15:41
We caught a libMesh error
Nonlinear solve did not converge due to DIVERGED_FUNCTION_DOMAIN iterations 0
Solve Did NOT Converge!
Time Step 1, time = 1e-12
old time = 0
dt = 1e-12
old dt = 1e-12
Computing initial residual .....Assertion `i < _val.size()' failed.
i = 0
_val.size() = 0
Process 4969 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
frame #0: 0x0000000106e74ce7 libmesh_dbg.0.dylib`libMesh::MacroFunctions::report_error(file="/Users/keniley/projects/moose/scripts/../libmesh/installed/include/libmesh/dense_vector.h", line=403, date="Oct 15 2019", time="12:15:41") at libmesh_common.C:74:7
71 // It is possible to have an error *inside* report_error; e.g. from
72 // print_trace. We don't want to infinitely recurse.
73 static bool reporting_error = false;
-> 74 if (reporting_error)
75 {
76 // I heard you like error reporting, so we put an error report
77 // in report_error() so you can report errors from the report.
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
* frame #0: 0x0000000106e74ce7 libmesh_dbg.0.dylib`libMesh::MacroFunctions::report_error(file="/Users/keniley/projects/moose/scripts/../libmesh/installed/include/libmesh/dense_vector.h", line=403, date="Oct 15 2019", time="12:15:41") at libmesh_common.C:74:7
frame #1: 0x00000001000e338f libzapdos-dbg.0.dylib`libMesh::DenseVector<double>::operator(this=0x000000010e351b38, i=0)(unsigned int) at dense_vector.h:403:3
frame #2: 0x00000001014ef71c libmoose-dbg.0.dylib`IntegratedBC::computeResidual(this=0x000000010e351818) at IntegratedBC.C:91:7
frame #3: 0x0000000101dfb19f libmoose-dbg.0.dylib`ComputeResidualThread::onBoundary(this=0x00007ffeefbfd780, elem=0x0000000111c6cb60, side=2, bnd_id=5) at ComputeResidualThread.C:143:13
frame #4: 0x000000010173dfa0 libmoose-dbg.0.dylib`ThreadedElementLoopBase<libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*> >::operator(this=0x00007ffeefbfd780, range=0x000000010d4ef170, bypass_threading=false)(libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*> const&, bool) at ThreadedElementLoopBase.h:221:15
frame #5: 0x0000000101e79f42 libmoose-dbg.0.dylib`void libMesh::Threads::parallel_reduce<libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*>, ComputeResidualThread>(range=0x000000010d4ef170, body=0x00007ffeefbfd780) at threads_pthread.h:380:5
frame #6: 0x0000000101e73b16 libmoose-dbg.0.dylib`NonlinearSystemBase::computeResidualInternal(this=0x000000010d828418, tags=size=3) at NonlinearSystemBase.C:1385:5
frame #7: 0x0000000101e732cd libmoose-dbg.0.dylib`NonlinearSystemBase::computeResidualTags(this=0x000000010d828418, tags=size=3) at NonlinearSystemBase.C:704:5
frame #8: 0x000000010185dec9 libmoose-dbg.0.dylib`FEProblemBase::computeResidualTags(this=0x000000010d826018, tags=size=3) at FEProblemBase.C:4917:8
frame #9: 0x000000010185d059 libmoose-dbg.0.dylib`FEProblemBase::computeResidualInternal(this=0x000000010d826018, soln=0x0000000111d70650, residual=0x0000000111d70ae0, tags=size=3) at FEProblemBase.C:4755:5
frame #10: 0x000000010185cc9d libmoose-dbg.0.dylib`FEProblemBase::computeResidual(this=0x000000010d826018, soln=0x0000000111d70650, residual=0x0000000111d70ae0) at FEProblemBase.C:4710:3
frame #11: 0x000000010185cac6 libmoose-dbg.0.dylib`FEProblemBase::computeResidualSys(this=0x000000010d826018, (null)=0x0000000111d70060, soln=0x0000000111d70650, residual=0x0000000111d70ae0) at FEProblemBase.C:4687:3
frame #12: 0x0000000101e66cb8 libmoose-dbg.0.dylib`NonlinearSystem::solve(this=0x000000010d828418) at NonlinearSystem.C:159:17
frame #13: 0x000000010185abfa libmoose-dbg.0.dylib`FEProblemBase::solve(this=0x000000010d826018) at FEProblemBase.C:4444:10
frame #14: 0x000000010143f694 libmoose-dbg.0.dylib`FEProblemSolve::solve(this=0x000000010e097d08) at FEProblemSolve.C:145:12
frame #15: 0x0000000101444f60 libmoose-dbg.0.dylib`PicardSolve::solveStep(this=0x000000010e097de0, begin_norm_old=1.7976931348623157E+308, begin_norm=0x000000011252aff0, end_norm_old=1.7976931348623157E+308, end_norm=0x0000000112501fb0, relax=false, relaxed_dofs=size=0) at PicardSolve.C:378:22
frame #16: 0x0000000101443510 libmoose-dbg.0.dylib`PicardSolve::solve(this=0x000000010e097de0) at PicardSolve.C:199:28
frame #17: 0x0000000101ce6d14 libmoose-dbg.0.dylib`TimeStepper::step(this=0x000000010d38c5a8) at TimeStepper.C:162:43
frame #18: 0x000000010144af63 libmoose-dbg.0.dylib`Transient::takeStep(this=0x000000010e097c18, input_dt=-1) at Transient.C:415:18
frame #19: 0x000000010144a6b4 libmoose-dbg.0.dylib`Transient::execute(this=0x000000010e097c18) at Transient.C:309:5
frame #20: 0x000000010232058f libmoose-dbg.0.dylib`MooseApp::executeExecutioner(this=0x000000010e05a818) at MooseApp.C:869:19
frame #21: 0x0000000102321251 libmoose-dbg.0.dylib`MooseApp::run(this=0x000000010e05a818) at MooseApp.C:979:5
frame #22: 0x00000001000020ef zapdos-dbg`main(argc=3, argv=0x00007ffeefbfee78) at main.C:33:8
frame #23: 0x00007fff780b6015 libdyld.dylib`start + 1
frame #24: 0x00007fff780b6015 libdyld.dylib`start + 1
89 for (_qp = 0; _qp < _qrule->n_points(); _qp++)
90 for (_i = 0; _i < _test.size(); _i++)
-> 91 _local_re(_i) += _JxW[_qp] * _coord[_qp] * computeQpResidual();
92
93 accumulateTaggedLocalResidual();
400 inline
401 T & DenseVector<T>::operator () (const unsigned int i)
402 {
-> 403 libmesh_assert_less (i, _val.size());
404
405 return _val[i];
406 }
...
Good news: the simulation works! In the attached plot you can see the water layer in that thin blue stripe at the bottom of the domain. The tiny red band just above that is the aqueous electrons.
Bad news: Estimates I've seen of the solvated electron depth are typically around 10-100 nm, so I'm going to need nm precision near the interface. Needless to say, the simulation gets very big, very fast. I've tried using adaptive meshing to cut down on computational cost, but it just heavily refines the already thin region near the interface so it doesn't really help.
This test only has two reactions in the water layer and it was already taking something like ~2 minutes per timestep, and the timesteps wouldn't go above dt = 10^-11. I wonder if I can get away with doing a box adaptive refinement of just the first 10-20 nm of the interface while leaving everything else relatively coarse.

Still, the fact that I'm dealing with computational cost rather than trying to figure out how to get the simulation working is a vast improvement.

-Shane

(Sorry if you all are getting multiple notifications from this post. My responses kept getting deleted for some reason.)
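On the meshing cost: a geometrically graded spacing in the direction normal to the interface might be worth trying before box refinement. A back-of-the-envelope sketch (all numbers illustrative):

```python
import math

# Back-of-the-envelope (illustrative numbers): resolving a 10 nm interface
# layer with a geometrically graded 1D spacing vs a uniform 10 nm mesh.
h0 = 1.0e-8   # first cell height at the interface: 10 nm
L = 1.0e-3    # domain extent away from the interface: 1 mm
r = 1.2       # growth ratio between neighboring cells

# Smallest n with h0 * (r**n - 1) / (r - 1) >= L (geometric series sum).
n_graded = math.ceil(math.log(1.0 + L * (r - 1.0) / h0) / math.log(r))

# Uniform 10 nm cells over the same 1 mm, for comparison.
n_uniform = round(L / h0)

print(n_graded, n_uniform)   # roughly 55 vs 100000 cells per direction
```

Since the cell count only grows logarithmically with the scale separation, nm resolution at the interface costs almost nothing in 1D, and in 2D the savings multiply.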
On Tuesday, October 1, 2019 at 6:15:23 PM UTC-5, Shane Keniley wrote:

> Hello all,
>
> Apologies in advance - this is a long message.
>
> I'm having a lot of trouble getting Zapdos working effectively in 2D. I have a very simple model, at least in theory: it's just the same model shown in mean_en.i, but extended to 2D instead of 1D. Four variables, three reactions, DC bias on one boundary, grounded on another, and open BCs everywhere else. I've also removed the water region, so it's just a pure argon simulation with a grounded wall. For reference I've uploaded a test branch (keniley1/zapdos, branch is 2d_test). The relevant input files are pin_water_new01.i and pin_water_new01.geo. (If you want to run this, note that the mesh file was made with Gmsh 4.4, and you may need to update CRANE; I might be using syntax that isn't compatible with the Zapdos bundled version. I should submit a PR to update that.)
>
> Here's the problem I'm having:
>
> When I run with a Dirichlet BC on the potential, the problem runs for a little while and qualitatively looks like what I would expect (e.g. an ionization "bullet" seems to form), but it soon begins to run very slowly. It tends to hang up in the dt = 1e-10 to 2e-9 region. Eventually the simulation fails to converge and slowly falls down to dtmin. I've attached a Paraview snapshot of the behavior I'm seeing. It looks like an ionization "bullet" in 2D.
>
> When I try running with the NeumannCircuitVoltageMoles_KV boundary condition, the simulation simply fails to converge unless the timesteps are unbearably small (something like 10^-16...clearly unreasonable), and even then it only runs for several timesteps before failing. I've tried many combinations of preconditioning options, field splitting, scaling variables, using PJFNK instead of Newton, using automatic scaling, mesh refinement, and running in second order...nothing seems to work. I also thought that my pin electrode might just be a bit complicated to resolve, so I tried a much simpler plate electrode. No luck - it still fails. I understand that advection-dominated problems are tricky, but this is a relatively simple one, so I'm curious as to why it's struggling so much.
>
> So long story short, I guess I have three questions:
>
> - Any general suggestions for running advection-diffusion-reaction problems in 2D?
> - What exactly is going on with the NeumannCircuitVoltageMoles_KV boundary condition? I know that it's meant to represent a ballast resistor circuit (Vc + Vs = current * resistance). I'm just not sure why it's an IntegratedBC, since that equation seems to be a Dirichlet condition on the potential, not a Neumann condition.
> - How do I incorporate the electrode area into this BC in 2D? I was told that in Cartesian coordinates I just need to supply the missing depth, since it's an integrated BC and the length is included by definition. That makes sense to me, but I just wanted to make sure I didn't misunderstand. I have a strange electrode shape, so it's not entirely clear to me how to supply the "missing depth" anyway.
>
> Sorry again for the wordy message! Any help would be appreciated.
>
> -Shane
> Bad news: Estimates I've seen of the solvated electron depth are typically around 10-100 nm, so I'm going to need nm precision near the interface. Needless to say, the simulation gets very big, very fast. I've tried using adaptive meshing to cut down on computational cost, but it just heavily refines the already thin region near the interface so it doesn't really help.

I'm confused...isn't this where you want the mesh to be refined? E.g., isn't this where you want the precision to be?