TLC on the GPU


Jones Martins

Feb 6, 2022, 11:46:19 AM
to tlaplus
Hi everyone,

Would a GPU model checker be useful for large TLA+ models? GPUs are much faster than multicore CPU systems in certain scenarios; would model checking be one of them?
I'm assuming that, in order to model-check, one would need to bring the parser to the GPU as well, so that reading from RAM doesn't become a bottleneck.

Best,

Jones

sgl...@sglaser.com

Feb 6, 2022, 1:07:09 PM
to tlaplus
Let me know if I can help with this.

I don’t know enough about the TLA+ internals to know what makes sense here.

I’m an occasional TLA+ user and work for Nvidia. 

Steve Glaser
sgl...@nvidia.com
--
You received this message because you are subscribed to the Google Groups "tlaplus" group.
To unsubscribe from this group and stop receiving emails from it, send an email to tlaplus+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/tlaplus/cc2b665d-9863-485e-bbac-04c652f0e92fn%40googlegroups.com.

Alex Weisberger

Feb 6, 2022, 1:35:53 PM
to tla...@googlegroups.com
You can find the details of how TLC model checking works in Section 14.3.1 of Specifying Systems: https://lamport.azurewebsites.net/tla/book-02-08-08.pdf.

I think the challenge is that TLC effectively "brute-forces" the state space exploration by performing a graph search. GPU instructions are generally optimized for parallel arithmetic, so I don't think traditional graph search algorithms translate neatly to optimizations with GPU instructions.
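For intuition, here is a minimal sketch (not TLC's actual implementation) of the explicit-state breadth-first search that such a model checker performs: enumerate all states reachable from the initial states, checking an invariant on each. The toy transition relation and invariant are made up for illustration.

```python
from collections import deque

def check_invariant(init_states, next_states, invariant):
    """Explore all states reachable from init_states via next_states,
    checking `invariant` on each; return a violating state or None."""
    seen = set(init_states)
    queue = deque(init_states)
    while queue:
        s = queue.popleft()
        if not invariant(s):
            return s
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None

# Toy transition system (hypothetical): a counter that wraps at 5.
def next_states(state):
    (x,) = state
    return [((x + 1) % 5,)]

violation = check_invariant([(0,)], next_states, lambda s: s[0] < 5)
# violation is None: the invariant x < 5 holds in every reachable state.
```

The inner loop is dominated by irregular hash-set lookups and pointer-chasing over successor states, which is exactly the memory-access pattern GPUs handle poorly.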

There may be a way to rephrase the algorithms, though, to make them amenable to vector processing. I found a couple of papers and presentations on the topic:


Also, if you are just interested in performance and aren't tied to the GPU as a solution, there is the Apalache model checker: https://apalache.informal.systems/. It translates model checking into an SMT constraint problem, which SMT solvers such as Z3 can often solve very quickly. I haven't used it, but I have always thought this project looks very interesting.
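To make the SMT approach concrete, here is a hypothetical sketch of how a bounded model checker can unroll a trivial counter spec into SMT-LIB constraints. The spec, variable names, and encoding scheme are illustrative only and do not reflect Apalache's actual output.

```python
def unroll_counter(k, bound=10):
    """Emit SMT-LIB constraints for k steps of a toy spec:
    Init: x = 0, Next: x' = x + 1, Invariant: x <= bound.
    A solver answering 'sat' means the invariant can be violated
    within k steps; 'unsat' means it holds up to depth k."""
    lines = [f"(declare-const x{i} Int)" for i in range(k + 1)]
    lines.append("(assert (= x0 0))")                    # Init
    for i in range(k):
        lines.append(f"(assert (= x{i + 1} (+ x{i} 1)))")  # Next, unrolled
    # Violation at *some* step: the negated invariant, disjoined over steps.
    disj = " ".join(f"(> x{i} {bound})" for i in range(k + 1))
    lines.append(f"(assert (or {disj}))")
    lines.append("(check-sat)")
    return "\n".join(lines)

smt = unroll_counter(3)  # feed this text to an SMT solver such as Z3
```

The point is that the search over states is replaced by constraint solving, so the model checker's performance hinges on the solver rather than on graph traversal.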


Jones Martins

Feb 6, 2022, 3:58:09 PM
to tla...@googlegroups.com
Hi Alex and Steve,

I had found articles on BFS on GPUs before, and the ones Alex referenced are also impressive.
Of course, if there's already a "very good" (as opposed to optimal) alternative to simple multicore verification, that's fine by me.

Jones



Igor Konnov

Feb 18, 2022, 11:21:40 AM
to tla...@googlegroups.com, Igor Konnov
Hi,

I am joining this conversation a bit late. It seems to me that the main challenge with parallelizing explicit-state enumeration is that transition systems of concurrent and distributed algorithms do not have much locality.

Since Apalache was mentioned in this thread, there is probably a way to exploit GPUs for speeding up symbolic model checking. The main SMT encoding implemented in Apalache is almost propositional, with the exception of integers and string constants [1]. So instead of translating a TLA+ spec to SMT, we could translate it to SAT. (If we fix the bit width, we can translate integers by bit blasting.) That would let us run a GPU-enabled SAT solver such as ParaFROST [2].
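As a toy illustration of bit blasting (not Apalache's or ParaFROST's actual encoding), the sketch below shows how an integer constraint over a fixed bit width reduces to purely boolean operations. A real encoding would emit CNF clauses for a SAT solver; here we simply brute-force the boolean assignments.

```python
W = 3  # fixed bit width for bit blasting

def bits(n, w=W):
    """Integer -> tuple of w bits, least significant first."""
    return tuple((n >> i) & 1 for i in range(w))

def add_bits(a, b, w=W):
    """Ripple-carry adder expressed with boolean operations only, so
    each output bit is a boolean function of the input bits."""
    out, carry = [], 0
    for i in range(w):
        out.append(a[i] ^ b[i] ^ carry)
        carry = (a[i] & b[i]) | (carry & (a[i] ^ b[i]))
    return tuple(out)

# "Solve" x + x = 6 (mod 8) by enumerating assignments; a SAT solver
# would search the same boolean space far more cleverly.
models = [x for x in range(2 ** W) if add_bits(bits(x), bits(x)) == bits(6)]
# models == [3, 7]
```

Once everything is boolean, the problem is in exactly the form that massively parallel SAT solvers can attack.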

Apalache was designed with alternative encodings in mind. For instance, we are currently experimenting with SMT arrays, and this required well-isolated changes in the rewriting rules. So this all sounds nice in theory. In practice an alternative encoding would require at least 6 months of devoted work. If somebody is looking for an internship on this topic at Informal Systems [3] or wants to do it as part of their research, please let me know. It is much easier to extend Apalache than to write a new model checker for TLA+ from scratch.

PS: Alex, thanks for the kind words about Apalache :)


[1] TLA+ model checking made symbolic: https://dl.acm.org/doi/10.1145/3360549
[2] https://github.com/muhos/ParaFROST
[3] https://informal.systems/


Best regards,
Igor

tlaplus-go...@lemmster.de

Feb 18, 2022, 4:11:28 PM
to tla...@googlegroups.com
The search for strongly connected components (SCC) is the (scalability) bottleneck of TLC's liveness checking.  It would be interesting to find out how GPU-based SCC search algorithms such as [1,2] perform on TLA+ liveness graphs.  In previous work, we collected statistics about the shape of the average TLA+ liveness graph.  One could use these statistics to generate synthetic graphs in addition to the existing real-world graphs.
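For reference, here is a minimal sketch of Tarjan's SCC algorithm, the classic sequential baseline that GPU-based SCC algorithms are usually compared against. The toy graph stands in for a liveness graph; this is not TLC's actual implementation.

```python
def tarjan_scc(graph):
    """Tarjan's algorithm: return the strongly connected components of a
    directed graph given as {node: [successor, ...]}."""
    index, low = {}, {}
    stack, on_stack = [], set()
    sccs = []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:       # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# Toy "liveness graph": a cycle {1, 2, 3} plus a terminal node 4.
components = tarjan_scc({1: [2], 2: [3], 3: [1, 4], 4: []})
```

The algorithm's reliance on a depth-first traversal with a shared stack is what makes it hard to parallelize directly, which is why GPU approaches typically use different decomposition strategies (e.g., forward/backward reachability).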

Markus
