Hello, I've been looking into how long a simulation takes to run versus changes in the scan variables, and I came across a strange result.
Below are A-scans from a simulation I ran four times with different spatial resolutions (dx, dy and dz were all equal, and were changed between runs). The file names are res"x", where "x" is the value I chose for dx, dy and dz.
Examples of the .in files I ran:
1.
#domain: 0.8 2 0.8
#dx_dy_dz: 0.02 0.02 0.02
#time_window: 2e-8
#material: 5 0 1 0 half_space
#waveform: ricker 1 6e8 my_ricker
#hertzian_dipole: z 0.44 1.7 0.4 my_ricker
#rx: 0.38 1.7 0.4
#box: 0 0 0 0.4 0.95 0.4 half_space
#cylinder: 0.4 0.3 0.4 0.4 0.32 0.4 0.12 pec
2.
#domain: 0.8 2 0.8
#dx_dy_dz: 0.01 0.01 0.01
#time_window: 2e-8
#material: 5 0 1 0 half_space
#waveform: ricker 1 6e8 my_ricker
#hertzian_dipole: z 0.44 1.7 0.4 my_ricker
#rx: 0.38 1.7 0.4
#box: 0 0 0 0.4 0.95 0.4 half_space
#cylinder: 0.4 0.3 0.4 0.4 0.32 0.4 0.12 pec
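For reference, this is a rough sketch of how I estimated the expected cost scaling with cell size, assuming gprMax's default CFL-limited timestep (dt = dx / (c*sqrt(3)) for cubic cells); the function name estimate_cost and the exact numbers are just mine for illustration:

import math

C = 299792458.0  # speed of light in vacuum (m/s)

def estimate_cost(domain, dx, time_window):
    """Rough cost estimate for a cubic-cell model.

    Assumes the default CFL-limited timestep dt = dx / (c * sqrt(3)),
    so halving dx gives 8x the cells and 2x the iterations (~16x work).
    """
    nx, ny, nz = (int(round(d / dx)) for d in domain)
    cells = nx * ny * nz
    dt = dx / (C * math.sqrt(3))
    iterations = math.ceil(time_window / dt)
    return cells, dt, iterations

for dx in (0.02, 0.01):
    cells, dt, its = estimate_cost((0.8, 2.0, 0.8), dx, 2e-8)
    print(f"dx={dx}: {cells} cells, dt={dt:.3e} s, {its} iterations")

If that assumption holds, halving dx should cost roughly 16x the work (8x cells, 2x iterations), which is what I was comparing the measured run times against.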
The output files are attached below, and I have also plotted the Ez component of each A-scan on a single graph,

along with a zoomed-in view of the target and air/ground-interface signatures, which are otherwise hard to see:
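For completeness, this is roughly how I pulled the Ez traces out of the output files and overlaid them. It's only a sketch, assuming the standard gprMax HDF5 .out layout (fields under /rxs/rx1/ and the timestep in the file attribute 'dt'); the res*.out file names are placeholders following my naming above:

import h5py
import numpy as np
import matplotlib.pyplot as plt

# Overlay the Ez component of each A-scan on a single set of axes.
for fname in ('res0.02.out', 'res0.01.out'):
    with h5py.File(fname, 'r') as f:
        dt = f.attrs['dt']            # timestep of this run (s)
        ez = f['rxs/rx1/Ez'][:]       # Ez trace at the first receiver
        t = np.arange(len(ez)) * dt   # time axis for this run
    plt.plot(t * 1e9, ez, label=fname)

plt.xlabel('Time (ns)')
plt.ylabel('Ez (V/m)')
plt.legend()
plt.show()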