So 'probability' is a common-language word, and as such has multiple referents that in practice usually run together, though in certain edge cases some of them come apart.
From the standpoint of how the word 'probability' is normally used, it is clear that 'the die had a 1/6 chance of coming up 1' is something you can sensibly say, even if the person actually rolled a 4.
To the extent that the election is like throwing dice, saying that 'there was only a ten percent chance of this outcome, but Torok Gabor was unlucky' matches the way the word 'chance' is normally used.
Obviously, though, there is a sense in which the odds of a Tisza victory were never 10 percent, or 90 percent, or anything but 100 percent. In the world that we actually live in, Tisza was going to win. We just did not know several months ago whether we were in a 100 percent Tisza world or a 100 percent Fidesz world. After the election we know that we were in a 100 percent Tisza world all along.
Of course, the same thing holds for dice: throwing a die is probably a sufficiently macroscopic action that even under a many-worlds model, when a particular person starts to throw a particular die, it will come up the same way in the vast, vast majority of branches. A sufficiently powerful intelligence could simply watch the way the person was moving their arm and know, the instant the decision to release the die was made, what it would come up as.
On the other hand, I suspect that it would be impossible, or at least extremely difficult, for that intelligence to know what the die would come up as before the person has even picked it up. So at some fundamental level, how the die will come up is unknowable: being smarter will not help me know what will come up the next time you throw a die, even if the outcome is actually deterministic. The election was not like that: if I'd been smarter, I could have known that Tisza would win, instead of simply taking the prediction market estimate and rounding down a bit.
Anyway, as you are probably aware, there is a great deal of discussion about whether probability and chance are part of reality (which they seem to be in the case of quantum-level phenomena, though that may or may not be the bottom level of reality), or merely a description of how we think about reality.
Insofar as probability is a description of how we think about reality, we can judge probability estimates after the fact based on whether they still seem like they were a good map, or a good way of making decisions. Was the estimate well calibrated?
Under this view, if Torok Gabor had predicted a 100 percent chance of a Tisza victory, he would still seem foolish after the fact, because he clearly didn't know enough to justify that confidence, even though he was living in a world where that outcome was definitely going to happen. The actual result does make a 20 percent estimate look like it came from a worse model than a 90 percent estimate would have, but a single event isn't enough to tell us whether the forecaster got unlucky or had a bad model. This is why people are interested in tracking pundit predictions over many events and seeing how accurate they are. It lets us distinguish those who are well calibrated from those whose probability estimates aren't correlated with the outcomes in any way.
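One standard way to score a track record like this is the Brier score, the mean squared error between stated probabilities and actual outcomes. A minimal sketch in Python (the two "pundits" and their numbers are invented for illustration, not real forecasting data):

```python
# Brier score: mean squared error between forecast probabilities and
# outcomes (1 if the event happened, 0 if it didn't). Lower is better;
# always saying "50 percent" scores exactly 0.25.

def brier_score(forecasts, outcomes):
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track records over the same five events.
outcomes = [1, 0, 1, 1, 0]
sharp_pundit = [0.9, 0.2, 0.8, 0.7, 0.1]  # confident and mostly right
hedging_pundit = [0.5, 0.5, 0.5, 0.5, 0.5]  # says 50-50 about everything

print(brier_score(sharp_pundit, outcomes))    # 0.038
print(brier_score(hedging_pundit, outcomes))  # 0.25
```

Note that over a single event the score can't separate luck from skill; it is only across many predictions that a well-calibrated forecaster reliably beats the hedger.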
I think saying 'but the conditional probability of a Tisza victory given a Tisza victory is 1', while tautologically true, doesn't tell us anything about how we should think about a question like 'what is the probability, given what we know, of Tisza getting back the EU funds?'
Tim