Hi Antonios,
I'm "reviving" this thread because I ran into the same question: is TLC's model checking CPU-bound or I/O-bound?
I'm running TLC on a machine with over 55 GB of free RAM (64 GB total). After about 12 hours of model checking (still going), it has generated 37 GB of fingerprint files.
Here's the full command for reference (tla2tools rev: 3ea3222):
java \
-XX:+UseParallelGC \
-Dtlc2.tool.fp.FPSet.impl=tlc2.tool.fp.OffHeapDiskFPSet \
-Dtlc2.tool.ModelChecker.BAQueue=true \
-cp tla2tools.jar tlc2.TLC \
-cleanup \
-metadir .states \
-workers auto \
-fpmem 0.9 \
-fpbits 2 \
-lncheck final \
-modelcheck {tla_file} \
-config {config_file}
I increased `-fpmem` to 0.9 to let TLC use 90% of available memory for the fingerprint set, and set `-fpbits` to 2 so that TLC partitions the fingerprint set into 2^2 = 4 files, one per core, to reduce read/write contention. Maybe my reasoning is completely off.
By the way, I did try increasing the JVM's memory limits with `-Xms`, `-Xmx`, and `-XX:MaxDirectMemorySize` while running a smaller model, and it didn't help. I'm not very familiar with Java's memory handling, so maybe that's why.
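For concreteness, this is the kind of invocation I mean (the sizes below are placeholders, not the values I actually ran with):

```shell
# Illustrative only: explicit JVM memory limits added to the TLC command.
# -Xms/-Xmx bound the on-heap size; -XX:MaxDirectMemorySize bounds the
# off-heap (direct) memory, which is what OffHeapDiskFPSet allocates from.
java \
  -Xms8g -Xmx8g \
  -XX:MaxDirectMemorySize=48g \
  -XX:+UseParallelGC \
  -Dtlc2.tool.fp.FPSet.impl=tlc2.tool.fp.OffHeapDiskFPSet \
  -cp tla2tools.jar tlc2.TLC \
  -fpmem 0.9 \
  -fpbits 2 \
  -workers auto \
  -modelcheck {tla_file} \
  -config {config_file}
```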
Best,
Jones