So I played around a bit with
UnBBayes in an attempt to create a prototype or early design, and here's something that I came up with.
I only came up with this after thinking about the game for a few days, so please take all of this as preliminary suggestions. We're still very much at a "generate ideas in order to throw them away" stage, and it could very well be that one of you has an idea for a better design and this one sucks completely. If so, please feel free to say so!
Here is a very simple toy problem that could possibly be one of the first tasks in the game (structure-wise, at least: don't mind the specific content). We are trying to figure out the nature of some person we haven't met (call him Charles), and have background knowledge telling us that he might have been raised from the dead as a
a) normal human
b) zombie
c) angel.
A priori, we think each of these possibilities is equally likely. We also have two sources of knowledge, Alice and Bob.
Alice is pessimistic: she tends to make the worst of everything. If Charles is actually a human, she will be 60% likely to say that he is a zombie, and 10% likely to say that he is a vampire. If he is a zombie, she is 70% likely to say that he is a vampire. And if he is an angel, she is 60% likely to say that he is just a human and 10% likely to say that he is a zombie. In all cases, Alice has a 30% chance of saying things just as they are.

Bob, on the other hand,
tends to exaggerate everything that he hears. If Charles is a human, Bob's 60% likely to say that he is an angel and 10% likely to say that he is an archangel. If Charles is a zombie, Bob's 70% likely to say that he is a vampire. And if Charles is an angel, Bob is 70% likely to say that he is an archangel. Bob, too, always has a 30% chance of telling the truth.

Suppose that we know all of this. Now we can ask Alice and Bob their opinions of Charles. The player asks Alice first, and she says that Charles is a zombie.
That lets us infer 60/30/10 probabilities in favor of the human/zombie/angel hypotheses, respectively. So the "human" hypothesis is the most likely, but it still has a 40% chance of being wrong, so the player decides to seek more evidence and asks Bob. And a good thing that they did, because Bob says that Charles is a vampire. That information lets us infer that Charles must, in fact, be a zombie, with 100% certainty.
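To make the arithmetic inspectable, here is a rough sketch of the two updates in plain Python (nothing here comes from UnBBayes; the tables just transcribe Alice's and Bob's stated probabilities):

```python
# P(witness says X | Charles is actually Y), as described above.
priors = {"human": 1/3, "zombie": 1/3, "angel": 1/3}

alice = {  # pessimistic
    "human":  {"human": 0.3, "zombie": 0.6, "vampire": 0.1},
    "zombie": {"zombie": 0.3, "vampire": 0.7},
    "angel":  {"angel": 0.3, "human": 0.6, "zombie": 0.1},
}
bob = {    # exaggerating
    "human":  {"human": 0.3, "angel": 0.6, "archangel": 0.1},
    "zombie": {"zombie": 0.3, "vampire": 0.7},
    "angel":  {"angel": 0.3, "archangel": 0.7},
}

def update(beliefs, witness, statement):
    # Bayes' rule: posterior ∝ prior * P(statement | state).
    unnorm = {s: p * witness[s].get(statement, 0.0) for s, p in beliefs.items()}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

after_alice = update(priors, alice, "zombie")    # ≈ human 0.6, zombie 0.3, angel 0.1
after_bob = update(after_alice, bob, "vampire")  # ≈ zombie 1.0
```

The second step makes the "why" visible: only hypotheses that give the observed statement a nonzero likelihood survive the multiplication, which is why Bob saying "vampire" wipes out "human" and "angel" in one stroke.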

First problem: Alice saying that Charles was a zombie meant that there was a 60% chance of him being a human. Bob saying that he was a vampire caused the most likely hypothesis to become that Charles is a zombie, with 100% certainty. Why? It's no good if the game just automatically updates the network and tells the player this: the player has to understand for themselves why this is the case.
My first inclination would be to solve this by turning the act of updating the probabilities into a mini-game. I like the graphical approach that komponisto took in his "
Bayes' theorem illustrated" post - perhaps we could visualize the probabilities in a similar way and then have a small puzzle where the player needs to perform the steps of the update process, manually resizing the shapes and deleting the ones that have been eliminated? Alternatively, the style of visualization could be more like the applets in
Eliezer's Bayes' theorem article. Actually, if we could make both fun, it could be nice to alternate between both: having two different ways of representing the same information makes for better learning. There is also
this visualization, if we want to use three different ones...
Anyway, at this point it's not necessary for the player to really understand what it is that they're doing. It should be doable just as an abstract graphical manipulation thing, though there may certainly be helpful text on the side that explains what the character is doing. One could even have the player play this minigame a couple of times before doing any belief updates, and only later bring it back up in that context. In any case, we have to figure out how this minigame could be made at least moderately entertaining by itself.
So far, we have assumed perfect knowledge about the structure of the problem. We knew, among other things:
- That Charles could have been a human, zombie, or an angel (but not a vampire or an archangel)
- That each of the three possibilities of what Charles could have been were equally likely
- That Alice and Bob both knew something about Charles's nature
- That Alice tends to be pessimistic and that Bob tends to exaggerate things
- The pessimism/exaggeration hierarchy of the various possible states of Charles (e.g. pessimism shifts "angel" down first to "human", or if you're really pessimistic, to "zombie"; exaggerating "zombie" by one step gets you "vampire")
- The exact probabilities for Alice's and Bob's transformations
That's a lot of knowledge! The good thing about this is that we could simultaneously appeal both to players' natural curiosity,
and the same hoarding / collecting instinct that many other games exploit, by making these different pieces of knowledge into things that you could collect in-game. So, for example:
- Domain knowledge about various domains can be collected from books, teachers, etc. Maybe here, the character had "Basic Resurrection Magic Knowledge", which let them know the possible outcomes of somebody being resurrected (there's an equal chance that somebody will come back as a zombie, human, or angel, but that kind of a ritual isn't going to produce vampires or archangels). More sophisticated domain knowledge could reveal subgraphs of conditional probabilities relating to some domain.
- Psychological knowledge about the nature of various biases and personality traits, which gave them the probabilities of Alice's and Bob's personality traits shifting the claims by one or two steps in some direction. This can similarly be collected from books and teachers, but also from observing various characters.
- Folk lore knowledge, which let them know that non-experts generally think that one step up from a zombie gets you a vampire. The folk understanding of a domain may or may not match what is actually true, or what the experts believe: maybe one step up from a zombie actually gets you a demon. In the early stages, the folk and domain knowledge should always match, though.
- Familiarity with Alice and Bob, which let them know that Alice was pessimistic and Bob tended to exaggerate, and also that they might know something about Charles.
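One hypothetical way to represent such collectible knowledge pieces as data, just to make the idea concrete (every name below is made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgePiece:
    name: str
    kind: str    # e.g. "domain", "psychological", "folk", "familiarity"
    facts: dict = field(default_factory=dict)

# Domain knowledge: the possible outcomes of a resurrection ritual.
basic_resurrection = KnowledgePiece(
    name="Basic Resurrection Magic",
    kind="domain",
    facts={"outcomes": {"human": 1/3, "zombie": 1/3, "angel": 1/3}},
)

# Folk lore: may or may not match what the experts believe.
folk_monster_lore = KnowledgePiece(
    name="Folk Monster Lore",
    kind="folk",
    facts={"one_step_up_from_zombie": "vampire"},
)
```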
This should hopefully teach thoughtful players the value of looking for those things in real life as well, and help us make the explosion in the size of the belief network a bit more manageable: e.g. learning domain knowledge about some fact is going to make it a lot easier to figure out the truth about that fact. If the character doesn't have background knowledge about e.g. the outcomes of resurrection rituals, we could automatically prefill the priors with something weakly correlated with the truth (the character's gut feelings probably aren't pure noise), but which could still easily be completely wrong. The player may freely modify these "gut feel priors" if they wish.
Later on, the player might also need to verify the truthfulness of e.g. the domain knowledge that they collected... but in the beginning, all domain knowledge would be reliable. The game should start off with several problems where the player has perfect knowledge about the structure of the problem, and then move on to situations where they lack the necessary domain knowledge, psychological knowledge, etc.
In this design, various claims would consist of one or more atomic beliefs. In this example, there was just one atomic belief: "Charles is a resurrected creature X". A claim would be originally generated by some primary source of evidence, and then be passed on via different evidence channels. Each channel would be associated with a transformation that it might perform on the claim, with some fixed probability depending on the type of the source.
So in this example, "exaggeration" had a 30% chance of leaving the claim as is, a 60% chance of increasing the "intensity" of one of the beliefs by one step, and a 10% chance of increasing the "intensity" by two steps. (If it wasn't possible to increase the intensity by two steps, there was just a 70% chance to increase it by one step.) Other kinds of transformations could change other kinds of attributes for one or more beliefs associated with the claim, add new beliefs to it ("Charles is a resurrected creature X, and of occupation Y"), and delete beliefs.
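As a sketch of how such a channel transformation might be implemented (the intensity scale and the function name are hypothetical; the probabilities are the ones from the example):

```python
import random

# Hypothetical intensity scale for illustration; the folk-lore scale
# might instead place "vampire" one step above "zombie".
SCALE = ["zombie", "human", "angel", "archangel"]

def exaggerate(claim, rng=random):
    # 30% chance to pass the claim as-is, 60% to shift its intensity up
    # one step, 10% to shift it up two; if a two-step shift would run
    # off the scale, the whole 70% mass goes to a one-step shift.
    i = SCALE.index(claim)
    can_two = i + 2 < len(SCALE)
    r = rng.random()
    if r < 0.3:
        return claim
    if can_two and r >= 0.9:
        return SCALE[i + 2]
    return SCALE[min(i + 1, len(SCALE) - 1)]
```

A "pessimism" channel would be the mirror image (shifting intensity down), and other transformations could add or delete atomic beliefs rather than shift them.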
Thoughts?