Dear Thea,
There are a lot of small things here. I'll try to answer them all one by one, but if you want more details on any of
them, please create a new thread (since this one is really mixed up).
> I've also encountered this problem. Following Professor Maxim's advice, I tried adding “-eps 3” to the command line:
> ./adda_ocl -lambda 0.532 -m 1.3116 0 -eq_rad 6.2576 -shape read ./dpl10/droxtal15.dat -orient avg
> avg_params_32α5β4γ.dat -beam besselCS 3 10 -scat_matr both -save_geom -eps 3
>
> It turned out that when the maximum particle length (Dmax) was 10μm (eq_rad = 4.1717μm), the error “ERROR:
> (../iterative.c:1620) Residual norm haven't decreased for maximum allowed number of iterations (50000)” no longer
> occurred, and the calculation could be completed normally. However, when the particle was a bit larger (for example,
> Dmax = 15μm, eq_rad = 6.2576μm), the error still popped up. The error file has been placed in the attachment.
This is expected - see the discussion below.
> Additionally, I'm using a Bessel vortex beam, and “-init_field wkb” seems to only apply to the case of plane incident
> wave.
Yes, that's true.
> Moreover, I'm not sure if the implementation of the Bessel vortex beam is correct,
It should be fine, and if you're asking whether the Bessel beam can affect the convergence of the iterative solver - it
shouldn't. You can easily test this by running the same command with an incident plane wave.
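Concretely, such a check amounts to rerunning the command from your message with the `-beam` option removed (ADDA then defaults to a plane incident wave; all other options stay as in your original command line):

```shell
# Same simulation, but with the default plane incident wave
# (only the "-beam besselCS 3 10" option has been removed):
./adda_ocl -lambda 0.532 -m 1.3116 0 -eq_rad 6.2576 \
  -shape read ./dpl10/droxtal15.dat -orient avg avg_params_32α5β4γ.dat \
  -scat_matr both -save_geom -eps 3
```

If the number of iterations behaves similarly to the Bessel-beam run, the beam is not the cause of the convergence problem.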
> and considering our discussion in a previous email that "when calculating the scattering of large particles,
> especially after averaging over directions and sizes, many dipoles become redundant when using the DDA method," I did
> not use dpl=10|m|=13, but instead chose a smaller value of dpl=10. If there are any issues with it, I would greatly
> appreciate your pointing them out and discussing them.
This can be fine. To get any meaningful error estimate, there is no other way than to compare simulations for different
dpl. But your choice seems reasonable (at least as a first try), given all the averaging. Note, however, that the number
of iterations should not change significantly with dpl, so you can't solve the problem of large Niter by decreasing dpl.
In some rare cases (a combination of the problem, DDA formulation, etc.) you can get a significant decrease of Niter,
but that usually indicates that the accuracy has become very bad (i.e. such a small dpl is inadequate).
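The comparison of different dpl can be organized as a simple convergence check: run the same simulation at two (or more) dpl values and take the relative change of some observable (e.g. the extinction cross section) as a rough error estimate. A minimal sketch; the numbers below are illustrative placeholders, not real ADDA output:

```python
def relative_change(coarse: float, fine: float) -> float:
    """Rough discretization-error estimate: relative difference between
    the same observable computed at a coarser and at a finer dpl."""
    return abs(fine - coarse) / abs(fine)

# Hypothetical extinction cross sections from two ADDA runs
# (values are made up for illustration):
cext_dpl10 = 1.052e3   # run with -dpl 10
cext_dpl15 = 1.048e3   # run with -dpl 15

err = relative_change(cext_dpl10, cext_dpl15)
print(f"estimated relative error at dpl=10: {err:.1%}")
```

If the change between the two runs is already smaller than the accuracy you need (especially after orientation and size averaging), the smaller dpl is adequate for your purpose.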
> Meanwhile, I noticed in the manual that the maximum number of iterations can be set via "-maxiter". So I have an idea:
> for larger particles, can we appropriately increase this value to alleviate the error "Residual norm haven't decreased
> for maximum allowed number of iterations (50000)"? However, I'm not sure if this would cause any other impacts on the
> calculation.
Yes, you can change this limit in the code and recompile. However, don't expect to extend the limit very far this way.
The current value corresponds to the reliable-convergence region, where Niter is within, say, 100 000. Then stagnation
for 50 000 iterations is a strong indication that some breakdown has occurred (so waiting further does not make sense).
For very poor convergence, it can happen that Niter is much larger but the solver still converges; then the convergence
curve may include a long stagnation period. Thus, increasing the threshold may somewhat increase the maximum achievable
size, but not by much (since Niter rapidly increases with size in this range).
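The stagnation criterion behind that error message can be illustrated schematically: the solver tracks the best residual norm seen so far and aborts once it has not improved for a fixed number of consecutive iterations. A toy sketch of that logic (the function name and structure are mine for illustration; the actual check lives in ADDA's iterative.c):

```python
def stagnated(residuals, max_stagnation=50_000):
    """Return True if the residual norm fails to drop below its running
    minimum for max_stagnation consecutive iterations."""
    best = float("inf")
    since_improvement = 0
    for r in residuals:
        if r < best:
            best = r
            since_improvement = 0
        else:
            since_improvement += 1
            if since_improvement >= max_stagnation:
                return True
    return False

# A steadily decreasing curve never stagnates; a flat one does:
print(stagnated([1.0, 0.5, 0.25], max_stagnation=2))      # False
print(stagnated([1.0, 1.0, 1.0, 1.0], max_stagnation=2))  # True
```

This also shows why raising the threshold helps only marginally: a curve that is flat for 50 000 iterations almost never starts decreasing again, so a larger limit mostly just wastes computation time.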
> Another thing I want to ask: in ADDA, for the “-lambda” command, should the value be set as wavelength/refractive
> index, i.e. λ/m? Because in the command line for reproducing Fig. 12 in "adda-master\examples\papers\2022_bessel", it
> seems that dividing by the refractive index is not considered. Therefore, I'm not sure when to use λ/m and when to
> directly set it as λ.
There seems to be some confusion here. Scaling of the wavelength is required by the refractive index of the host medium
(which extends to infinity), not by that of the particle. So for any examples of particles in vacuum (or air), no
scaling of the wavelength is required.
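To make the scaling explicit: for a particle embedded in a non-absorbing host medium with refractive index n_host, both the wavelength and the particle refractive index passed to ADDA should be taken relative to the host, i.e. λ_host = λ_vacuum / n_host and m_rel = m_particle / n_host. A small sketch of that arithmetic (the water-host example below is mine, for illustration):

```python
def scale_for_host(lambda_vac: float, m_particle: complex, n_host: float):
    """Scale the free-space wavelength and the particle refractive index
    to values relative to a non-absorbing host medium."""
    return lambda_vac / n_host, m_particle / n_host

# Illustrative example: your particle (m = 1.3116) at 532 nm vacuum
# wavelength, but placed in water (n_host = 1.333):
lam, m = scale_for_host(0.532, 1.3116 + 0j, 1.333)
print(f"-lambda {lam:.4f} -m {m.real:.4f} {m.imag:.4f}")
```

For vacuum or air n_host = 1, so the values are passed to `-lambda` and `-m` unscaled, as in the 2022_bessel example.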
Maxim.