Hi Ali,
This part is indeed a bit tricky to understand at first. The key point, and the one that may be misleading, is that the lemma "secret_B" is NOT a "serious" property that anyone would actually want to prove. It's only there in the manual to explain how lemmas work, by showing the difference between a valid lemma that can be proven and an invalid one that has a counterexample.
> it does not prove that "the protocol run does not provide message secrecy", but "injection and impersonation is possible".
That's the idea, yes. But again, "secret_B" is not a serious property; it was never TRYING to prove "the protocol run does provide message secrecy". In fact, we DO know that the protocol provides secrecy, because that is exactly what "secret_A" shows: that one is the "serious" version.
Yes, that's basically what "secret_A" means. It's just that the model in this example allows for a leak of the Ltk, so the rigorous property has to consider this possibility.
To be precise, "secret_A" translates to:
if A sends a value n and claims it secret, then either the adversary never learns n, or some agent that A considered honest (here, B) leaked their Ltk.
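For reference, the lemma in the manual reads roughly like this (I'm quoting from memory, so the exact variable names and action-fact names may differ slightly in your version of the manual):

```
// Reading: whenever a thread in role 'A' claims n secret (at time #i),
// then either the adversary never learns n (no K(n) at any time #j),
// or some agent X that A considered honest had its long-term key revealed.
lemma secret_A:
  "All n #i.
    Secret(n) @i & Role('A') @i ==>
      ( not (Ex #j. K(n) @j) )
    | ( Ex X #j. Reveal(X) @j & Honest(X) @i )"
```

The second disjunct is what I meant above by "the rigorous property has to consider the possibility of a leak": without it, a single Ltk reveal would make the lemma false even though the protocol itself is fine.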
___
So why is "secret_A" the right way to state this, and why is "secret_B" wrong, with the tool finding a counterexample?
The way I like to look at it, the Action fact "Secret(n)" means "the agent executing this rule thinks that n is secret". So in "secret_A", the premise translates to "when A thinks n is secret, then either...". That makes sense, because A is the source of the data and the one who should be worried about its secrecy.
But in "secret_B", on the other hand, the premise "when B thinks n is secret..." does not make sense: B just received a message from the network and has no way to know where it really came from. Secrecy is not B's problem; what B should be worried about is authentication (which is the very next section in the manual).
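Concretely, "secret_B" is literally the same formula with the role swapped (again a sketch from memory, not the exact manual text), and that swap is precisely why it fails:

```
// Identical to secret_A except the claim is now made in B's receive rule.
// Tamarin finds a counterexample here: since B cannot tell where the
// received n actually came from, the adversary can get B to accept a
// value it already knows (the injection/impersonation you mentioned),
// so the claim "n stays secret" is falsified by that trace.
lemma secret_B:
  "All n #i.
    Secret(n) @i & Role('B') @i ==>
      ( not (Ex #j. K(n) @j) )
    | ( Ex X #j. Reveal(X) @j & Honest(X) @i )"
```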
(Similarly, "Honest(X)" means "the agent executing this rule thinks that X is Honest", not that X is actually Honest.)
Hope that helps,
Simon