Dear Francesco,
Short answer: using "time" is better, but I think what you've been doing
is fine. In the future, I would recommend switching to "time".
Long answer:
In Basel, we generally use the downward-lab experiment framework
(https://lab.readthedocs.io/en/latest/) for paper experiments. Perhaps
someone who uses it frequently can give you more information about what
the different time measurements it reports mean (or at least the major ones).
If you want to do things more manually, it is good to be aware that
three things contribute to the overall planner runtime:
1. The wrapper script, fast-downward.py. Its contribution is generally
small because it doesn't do much, but it is larger than zero.
2. The translator component, implemented in Python, which is the first
thing that the wrapper script runs. It reports its own runtime in a line
that looks like this:
Done! [0.050s CPU, 0.046s wall-clock]
3. The search component, implemented in C++, which is the second thing
that the wrapper script runs. It reports its own runtime in a line that
looks like this:
Total time: 1.13475s
This is CPU time, not wall-clock time.
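To illustrate the CPU time vs. wall-clock time distinction with a small self-contained Python sketch (not Fast Downward code): time.process_time() measures CPU time of the current process, while time.perf_counter() measures wall-clock time.

```python
import time

start_wall = time.perf_counter()  # wall-clock timer
start_cpu = time.process_time()   # CPU timer for this process

sum(i * i for i in range(10**6))  # busy work: consumes CPU
time.sleep(0.2)                   # sleeping: advances wall-clock only

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(f"wall-clock: {wall:.3f}s, CPU: {cpu:.3f}s")
# The sleep shows up in the wall-clock time but not in the CPU time.
```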
If you are not interested in these component runtimes, I would say the
cleanest way is to use something like "time" on unix. It will include
the summed runtime of all three things, and it will also include any
additional runtime that the two components might incur at the very end,
after they print their own runtime. (Printing their own runtime is
almost the last thing that the components do in their actual code, but
both in Python and C++, the final garbage collection/cleanup/destruction
after the "main" code can conceivably take a nontrivial amount of time.)
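If you want to script the external measurement rather than prefix every call with "time", here is a rough sketch of one way to do it on unix with Python's resource module; the planner invocation in the comment is a placeholder, not something to copy verbatim.

```python
import resource
import subprocess

def run_and_time(cmd):
    """Run cmd as a child process and return its total CPU time
    (user + system) in seconds, measured externally, similar to
    what the unix "time" command reports."""
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, check=False)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return ((after.ru_utime - before.ru_utime)
            + (after.ru_stime - before.ru_stime))

# Placeholder invocation; substitute your actual planner call, e.g.:
# run_and_time(["./fast-downward.py", "domain.pddl", "problem.pddl",
#               "--search", "astar(lmcut())"])
```

Like "time", this captures everything the child and its children spend, including any cleanup after the components print their own runtimes.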
If running the "time" command is for some reason inconvenient, taking
the two lines above and summing the two times mentioned should be a very
good proxy in almost all cases.
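Summing the two times can be automated; a hedged sketch, assuming the log lines have exactly the two formats quoted above:

```python
import re

def summed_cpu_time(log: str) -> float:
    """Sum the translator's CPU time and the search component's total
    time from a Fast Downward log (line formats as quoted above)."""
    translate = re.search(r"Done! \[([\d.]+)s CPU", log)
    search = re.search(r"Total time: ([\d.]+)s", log)
    if translate is None or search is None:
        raise ValueError("expected time lines not found in log")
    return float(translate.group(1)) + float(search.group(1))

log = """Done! [0.050s CPU, 0.046s wall-clock]
Total time: 1.13475s"""
print(summed_cpu_time(log))  # 0.050 + 1.13475, roughly 1.185
```

Note this mixes the translator's CPU time with the search component's CPU time, which is consistent, but it omits the wrapper script's own (small) overhead.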
More generally, not just for Fast Downward, I think measuring runtime
externally should be the way to go for almost all experiments because it
is objective and easy. It has an issue for processes that fork or spawn
subthreads and then kill them, but Fast Downward doesn't do such things.
If something like this is a concern, perhaps something can be done with
containers. Alternatively, wall-clock time is always an option, but
requires a bit more care regarding the execution environment.
For portfolio configurations of Fast Downward, things are more
complicated. I won't elaborate; measuring runtime for the kind of
sequential portfolios Fast Downward uses makes little sense anyway.
Best,
Malte
On 29.06.20 19:28, Francesco Delfanti wrote:
> Dear FastDownward community,
>
> In your experience what is the best way to measure FastDownward runtime?
> I noticed that the logs report different kinds of time: (i) the
> translation time and (ii) the actual search time. Is it sufficient to
> sum these two time measurements to get a correct estimate, or is it
> better to use an external command (e.g. *time* in unix)?
>
> I would use the time command but, since I conducted many experiments
> without using it, I was wondering if it was possible to use the sum of
> (i) and (ii) reported in the logs to get a precise estimate.
>
> Thanks for your availability.
>
> With best regards
>
> Francesco