Hello,
I have read some documentation and run experiments in the TLA+ Toolbox to test how labels work.
I know that a label is meant to identify a unit of atomic execution, but I discovered that it doesn't work entirely as I expected.
Here are a few cases I would like to understand:
Q1: Assume I have two fair processes, A and B, and a globally defined sequence cache = <<>>.
Each process has a label, label_1, that performs an operation on the cache.
What does atomicity mean in this case? Can both processes access the cache at the same time?
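For concreteness, the setup I have in mind is roughly the following sketch (the Append calls stand in for the real operations, and I gave the two labels distinct names since label names must be unique in the algorithm):

```
--algorithm CacheExample
variables cache = <<>>;   \* globally shared sequence

fair process ProcA = "A"
begin
label_1:
    cache := Append(cache, "A");   \* one atomic step
end process;

fair process ProcB = "B"
begin
label_1b:
    cache := Append(cache, "B");   \* one atomic step
end process;
end algorithm;
```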
Q2: I want to be able to loop in an atomic fashion, but it appears that the label applies to a single iteration of the while loop. Is that correct? Why?
I discovered this by having the processes append tuples of <<value, pc>> to a shared sequence, where value incremented from 1 to some n, and I saw the writes of the processes interleave. Adding a mutex allows one process to finish appending before another one begins.
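A minimal version of that experiment looks roughly like this (the bound 3 and the names shared and c_lock are just illustrative; appending self in place of pc keeps the sketch short):

```
variables shared = <<>>, c_lock = 0;

fair process Writer \in {"p1", "p2"}
variables i = 1;
begin
Lock:
    await c_lock = 0;   \* remove this step to see the iterations interleave
    c_lock := 1;
Loop:
    while i <= 3 do
        shared := Append(shared, <<i, self>>);   \* each iteration is one atomic step
        i := i + 1;
    end while;
    c_lock := 0;
end process;
```

Without the Lock step, each iteration of Loop is its own step and the two writers interleave; with it, one writer drains its whole loop before the other starts.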
Q3: If a process contains, for example, three labels lb1, lb2, lb3, will that process execute them sequentially?
I implemented a mutex to lock resources, with the intent that I can lock a resource in one label and release it in the next (to mimic a prepare phase on an entity), but I discovered that sometimes the process skips the first label and goes directly to the second. For example:
process A
variables ...
begin
While_label:
    while some_condition do
L_Invalidate:
        if some_other_condition then
            entity_lock();
            cache_lock();
            call invalidate(verGentime); \* cache_unlock() is called inside this procedure
L_Gen_Version:
            \* do something
            entity_unlock();
        end if;
    end while;
end process;
I was sure that it would never execute L_Gen_Version without first executing L_Invalidate, since it is defined inside the if statement. However, I saw that the process translated to TLA+ as A == While_label \/ L_Gen_Version, and sometimes, when the condition of the if is not satisfied, it jumps to L_Gen_Version.
I am very confused by this behavior.
Clarification of the label behavior would be greatly appreciated.
In case it matters, I implemented the mutex as follows:
macro cache_lock() begin
    await c_lock = 0;
    c_lock := 1;
end macro;

macro cache_unlock() begin
    c_lock := 0;
end macro;
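The entity_lock()/entity_unlock() macros used above are the obvious analogues on a separate flag (sketch; the variable name e_lock is illustrative):

```
macro entity_lock() begin
    await e_lock = 0;
    e_lock := 1;
end macro;

macro entity_unlock() begin
    e_lock := 0;
end macro;
```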
Thanks,
Dana