Re: pyphantom changing the max time and timestep of the simulation


Daniel Price

Oct 24, 2018, 6:50:54 AM
to Augustus Porter, Phantom users, Ward Homan
Dear Gus,

pyphantom exists but hasn’t had much use, so I can’t help you much with the specific issue with dtmax in the python script.

However, for what you’re trying to do, a good setup already exists and we have been actively working on it over the last few weeks. There’s a bit more left to do, but the gist of it is:

~/phantom/scripts/writemake.sh wind > Makefile
make setup
make
./phantomsetup wind
(edit .setup file as desired)
./phantom wind

One important piece is still missing: timestep control from the wind injection. This means you currently have to set dtmax to something artificially small to ensure the wind injection is resolved correctly.
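If you end up doing that often, one way to script it is to rewrite the tmax/dtmax lines in the .in file before each run. Here is a minimal sketch, assuming the `name = value ! comment` format shown in the quoted .in files below; `patch_infile` is a hypothetical helper, not part of Phantom or pyphantom:

```python
import re


def patch_infile(text, tmax, dtmax):
    """Return the .in file text with the tmax and dtmax values replaced.

    Assumes Phantom's 'name = value ! comment' layout; only the first
    occurrence of each parameter is touched.
    """
    def set_param(name, value, s):
        # Match the parameter at the start of a line and swap in the new value,
        # preserving the 'name = ' prefix exactly as written.
        pattern = rf"^(\s*{name}\s*=\s*)\S+"
        return re.sub(pattern, lambda m: m.group(1) + str(value),
                      s, count=1, flags=re.M)

    text = set_param("tmax", tmax, text)
    text = set_param("dtmax", dtmax, text)
    return text
```

Used on the file quoted below, `patch_infile(open("CP.in").read(), 53.4143693, 0.0534143693329)` would restore the intended end time and dump interval.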

Also, this module has been developed in collaboration with Lionel Siess (we have a paper in preparation), so please be sure to cite this paper if you use these modules.

I’d also encourage you to get in touch with Ward Homan (ideally via a public slack channel so other users can join too), as there’s a growing community interested in these kinds of wind-binary interactions.

Hope this helps,

Daniel

> On 24 Oct 2018, at 2:27 am, Augustus Porter <augustu...@pmb.ox.ac.uk> wrote:
>
> Dear Daniel,
>
> I am a recent beginner in using Phantom, but have already used it to make some great circumbinary disk simulations using the “disk” setup.
>
> I just noticed the “pyphantom” utility this morning, allowing a pythonic interface to Phantom simulations, and thought this looked interesting for an easy way to simulate outflows from one of my binary components. (I’m afraid I’m not good enough at Fortran to do this myself in the Phantom code!)
>
> However, when running the simulation through pyphantom, as opposed to just phantom, I have found that for some reason the simulation max time and max timestep between dumps is changed to a silly small number, and this is reflected in the *.in file.
>
> Here is what the top of the *.in file is at the beginning of the simulation:
> # Runtime options file for Phantom, written 23/10/2018 16:15:12.7
> # Options not present assume their default values
> # This file is updated automatically after a full dump
>
> # job name
> logfile = CP01.log ! file to which output is directed
> dumpfile = CP_00000.tmp ! dump file to start from
>
> # options controlling run time and input/output
> tmax = 53.4143693 ! end time
> dtmax = 0.0534143693329 ! time between dumps
> nmax = -1 ! maximum number of timesteps (0=just get derivs and stop)
> nout = -1 ! number of steps between dumps (-ve=ignore)
> nmaxdumps = -1 ! stop after n full dumps (-ve=ignore)
> twallmax = 000:00 ! maximum wall time (hhh:mm, 000:00=ignore)
> dtwallmax = 012:00 ! maximum wall time between dumps (hhh:mm, 000:00=ignore)
> nfulldump = 10 ! full dump every n dumps
> iverbose = 0 ! verboseness of log (-1=quiet 0=default 1=allsteps 2=debug 5=max)
>
>
> And here is what it is changed to by the program:
> # Runtime options file for Phantom, written 23/10/2018 16:16:21.4
> # Options not present assume their default values
> # This file is updated automatically after a full dump
>
> # job name
> logfile = CP02.log ! file to which output is directed
> dumpfile = CP_00020 ! dump file to start from
>
> # options controlling run time and input/output
> tmax = 1.063E-05 ! end time
> dtmax = 1.063E-08 ! time between dumps
> nmax = -1 ! maximum number of timesteps (0=just get derivs and stop)
> nout = -1 ! number of steps between dumps (-ve=ignore)
> nmaxdumps = -1 ! stop after n full dumps (-ve=ignore)
> twallmax = 000:00 ! maximum wall time (hhh:mm, 000:00=ignore)
> dtwallmax = 012:00 ! maximum wall time between dumps (hhh:mm, 000:00=ignore)
> nfulldump = 10 ! full dump every n dumps
> iverbose = 0 ! verboseness of log (-1=quiet 0=default 1=allsteps 2=debug 5=max)
>
> Notice the change in tmax and dtmax. The output data from the simulation is fine, but with such a minuscule timestep between dumps, nothing changes!
>
> And here is the python code that I use to begin the simulation:
> from pyphantom import Simulation
>
> if __name__ == "__main__":
>     tmax = 53.4143693
>     dtmax = 0.0534143693329
>     simulation = Simulation("CP.in", tmax, dtmax)
>     while True:
>         simulation.next()
>
> As you can see, my code is very barebones; is there anything that you can spot that I am missing which is causing this issue?
>
> I have found this same issue occurs with two different architectures: one is a Mac with the phantom libraries (libphantom.so) built with gfortran and python2.7 built with gcc, and the other is linux with both phantom libraries and python2.7 built with the intel compilers.
>
> I hope that made sense! I have been messing around with this for most of the day trying to get it to work, so any help would be much appreciated. Please get in touch if you would like more details about the issue. I am finding Phantom a great code and am looking forward to using it more :)
>
> Best wishes,
> Gus Porter
>
> Astrophysics DPhil Student, University of Oxford
