Ok, so here's the deal with that.
We don't support a no-op mode, and there's a pretty good reason for
it. It's my strong belief that systems that do support this aren't
providing a really high level of detail, and the results of a dry run
can instill a misleading sense of confidence that doesn't hold up
when you actually go to run the task.
For instance, if a task restarts a service, what happens when the
service's configuration is wrong, the restart fails, and that failure
breaks things further down the chain? A dry run can't tell you any of
that.
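To make that concrete, here's a rough sketch of the kind of play I'm
talking about (the file names and service name are made up for
illustration):

    tasks:
      - name: push out the app config
        action: template src=app.conf.j2 dest=/etc/app/app.conf
        notify:
          - restart app

    handlers:
      - name: restart app
        action: service name=app state=restarted

A dry run could tell you the template would change, but only a real
run tells you whether the app actually comes back up when "restart
app" fires.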
Ansible is very capable of making the actions on one host depend on
the state of another host (or on other variables), and a dry-run mode
never really takes those side effects into account. In this vein, I
believe it does users a disservice.
It would technically be possible to support a comparison tool that
attempted to find all of your templates and config files, reported
which ones changed, and listed what tasks they might notify, but it
would not be a reasonable safeguard in production -- nor is it in any
other tool.
Another problem is that every single module would then have to
implement a simulated dry-run mode, which I believe is unrealistic for
many modules, and clearly inaccurate in some others.
And because we have things like only_if and registered variables in
Ansible -- both very powerful concepts -- tasks often depend on the
results of other tasks.
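For example, something like this (a rough sketch -- the exact only_if
expression syntax may vary between versions, and the package and
script names are made up):

    tasks:
      - name: check whether the app is already installed
        action: shell rpm -q app
        register: have_app
        ignore_errors: True

      - name: install the app only if it is missing
        action: command /usr/local/bin/install_app.sh
        only_if: "${have_app.rc} != 0"

In a dry run the first task never actually executes, so there's
nothing real in have_app, and any simulation of the second task is
just a guess.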
I think we're all willing to explore a "what files and templates might
change" report, but we're much more apt to call that "--wild-guess"
mode or something. I really feel that dry-run mode is a marketing
feature, and in general, changes should always be tested in a staging
environment that closely resembles production (VMware, Vagrant, etc.,
your choice), which is easy to set up, especially if your whole
configuration is Ansible-managed.
--Michael