Hi NF,
My version works; it takes a few minutes per instance. Just for fun (and having access to a machine with a lot of RAM), I tried after the fact to run it with parallelism 50 (one instance per call), and in a few hours it had solved all but 15 of the 1000 instances; those 15 exceeded Picat's stack+heap area size. For one of them I tried calling Picat with -s and giving it 20 GB; interestingly, it seemed to build the model faster (fewer GCs, I guess) and then got on with solving it.
What I find interesting, after realising that all instances are easy, is that my modelling leads to instantaneous refutation of the instances without a solution, but requires several minutes for equally easy instances for which a trivial solution exists. I thought I would just change the variable search mode to random or something like that, to "spread" the translations, but the solve predicate for sat doesn't seem to have this sort of option, only cp - and cp was very slow even on the smaller examples from the text of the problem.
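Just to make it concrete, here is a toy sketch (not my actual model, just all_different over a small domain) of the kind of call I mean; the cp options in the comment are the ones I had in mind, if I'm reading the manual right:

    import sat.

    main =>
        N = 20,
        Xs = new_list(N),
        Xs :: 1..N,
        all_different(Xs),
        % with cp I would try something like solve([rand_var,rand_val], Xs)
        % to randomise the labelling; with sat only plain solve seems to apply
        solve(Xs),
        println(Xs).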
So I'm left wondering how to model the problem, or call the solver, in such a way that the trivial positive instances are also solved quickly, not only the negative ones.
best regards,
CG