to tlaplus
Hi,
Is there guidance, or are there any in-depth write-ups, on optimizing specifications for model-checking runtime?
I already use state constraints in my model as the primary way of limiting TLC's runtime and resource usage. I also turn off profiling and use symmetry sets where possible.
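For concreteness, the kind of state constraint I mean looks like this (a minimal sketch; the variable `queue`, the counter `clock`, and the bounds are placeholders for whatever my spec actually uses):

```tla
\* In the spec: TLC will not explore successors of states
\* violating this predicate (placeholder variables/bounds).
StateConstraint == Len(queue) <= 3 /\ clock <= 10
```

```
\* In the model's .cfg file:
CONSTRAINT StateConstraint
```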
I would love to know of any useful state space limiting techniques people have used (beyond simple state constraints).
Beyond that, I suspect the choice of data structure matters. For example, looking up a value in a function should be more efficient than scanning a set or sequence.
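To illustrate what I mean (operator and constant names invented for the example): a function lookup is a single indexing step for TLC, whereas finding the same value in a set of pairs forces TLC to enumerate the set.

```tla
\* Function: TLC evaluates f[p] directly.
f == [p \in Procs |-> 0]
Lookup(p) == f[p]

\* Set of <<proc, value>> pairs: TLC enumerates S to find a match.
S == {<<p, 0>> : p \in Procs}
SetLookup(p) == (CHOOSE pair \in S : pair[1] = p)[2]
```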
Any guidance or write-ups on this subject would be greatly appreciated.
Jack
Markus Kuppe
Dec 24, 2019, 12:05:17 PM
to tla...@googlegroups.com
Hi Jack,
"Large Scale Model Checking 101" [5] discusses how to speed up TLC. In
addition, you might want to profile a smaller model to identify
operators that are bottlenecks ("The TLA+ Toolbox" [6] has a section
about the profiler). Afterwards, redefine those operators in the model
with more efficient ones. If this is still not good enough, consider to
implement your TLA+ operators in Java. The CommunityModules [1] have
several examples [2][3] how this can be done and recent TLC nightly
builds comes with annotations that make this easier [7].
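As a sketch of such a redefinition (the operator and its definitions are invented for illustration, not taken from your spec): suppose profiling shows a recursive set-summing operator is hot; the model can override it with an equivalent definition based on FoldSet from the CommunityModules, which has a Java implementation behind it.

```tla
\* Original, in the spec: sums a set element by element via
\* CHOOSE, which TLC evaluates slowly.
RECURSIVE SumSet(_)
SumSet(S) ==
    IF S = {} THEN 0
    ELSE LET x == CHOOSE x \in S : TRUE
         IN x + SumSet(S \ {x})

\* Definition override in the model: same value, but evaluated
\* via the CommunityModules' FoldSet.
SumSet(S) == FoldSet(LAMBDA x, acc : x + acc, 0, S)
```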
Symmetry sets and state constraints indicate that you primarily check
safety properties. If you don't mind running an early prototype, I can
share a version of TLC that scales better with more cores (on a machine
with 16 cores, it doubles throughput [4]).
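As an aside, the symmetry declaration I'm referring to is the standard one using Permutations from the TLC module (the constant `Procs` is a placeholder); note that TLC's symmetry reduction is not sound under liveness checking, which is why its use suggests you are checking safety only.

```tla
\* In the spec (EXTENDS TLC): all permutations of the model
\* values substituted for Procs are treated as equivalent.
Symm == Permutations(Procs)
```

```
\* In the model's .cfg file:
SYMMETRY Symm
```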