Belief network mechanics


Kaj Sotala

Feb 19, 2013, 4:31:15 AM
to the-fundamen...@googlegroups.com
So I played around a bit with UnBBayes in an attempt to create a prototype or early design, and here's something that I came up with.

I only came up with this after thinking about the game for a few days, so please take all of this as just preliminary suggestions. We're still very much at a "generate ideas in order to throw them away" stage, and it could very well be that one of you has an idea for a better design and this one sucks completely. If so, please feel free to say so!

Here is a very simple toy problem that could possibly be one of the first tasks in the game (structure-wise, at least: don't mind the specific content). We are trying to figure out the nature of some person whom we haven't met (call him Charles), and have background knowledge telling us that he might have been raised from the dead as a) a normal human, b) a zombie, or c) an angel. A priori, we think each of these possibilities is equally likely. We also have two sources of knowledge, Alice and Bob.

Alice is pessimistic: she tends to make the worst out of everything. If Charles is actually a human, she will be 60% likely to say that he is a zombie, and 10% likely to say that he is a vampire. If he is a zombie, she is 70% likely to say that he is a vampire. And if he is an angel, she is 60% likely to say that he is just a human and 10% likely of saying that he is a zombie. In all cases, Alice has a 30% chance of saying things just as they are.

Inline image 3
Bob, on the other hand, tends to exaggerate everything that he hears. If Charles is a human, Bob's 60% likely to say that he is an angel and 10% likely to say that he is an archangel. If Charles is a zombie, Bob's 70% likely to say that he is a vampire. And if Charles is an angel, Bob is 70% likely to say that he is an archangel. Bob, too, always has a 30% chance of telling the truth.

Inline image 4

Suppose that we know all of this. Now we can ask Alice and Bob their opinions of Charles. The player asks Alice first, and she says that Charles is a zombie. That lets us infer 60/30/10 probabilities in favor of the human/zombie/angel hypotheses, respectively. So the "human" hypothesis is the most likely, but it still has a 40% chance of being wrong, so the player decides to seek more evidence and asks Bob. And a good thing that they did, because Bob says that Charles is a vampire. That information lets us infer that Charles must, in fact, be a zombie with 100% certainty.
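For concreteness, the two updates above can be sketched in a few lines of Python. The likelihood tables just transcribe Alice's and Bob's behavior as described; this is only an illustrative sketch, not a proposal for the game's actual implementation:

```python
# P(witness's report | Charles's true state) for each witness.
alice = {  # the pessimist
    "human": {"zombie": 0.6, "vampire": 0.1, "human": 0.3},
    "zombie": {"vampire": 0.7, "zombie": 0.3},
    "angel": {"human": 0.6, "zombie": 0.1, "angel": 0.3},
}
bob = {  # the exaggerator
    "human": {"angel": 0.6, "archangel": 0.1, "human": 0.3},
    "zombie": {"vampire": 0.7, "zombie": 0.3},
    "angel": {"archangel": 0.7, "angel": 0.3},
}

def update(prior, likelihoods, report):
    # Bayes: multiply each hypothesis's prior by P(report | hypothesis),
    # then normalize so the posteriors sum to one.
    unnorm = {h: p * likelihoods[h].get(report, 0.0) for h, p in prior.items()}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

prior = {"human": 1/3, "zombie": 1/3, "angel": 1/3}
after_alice = update(prior, alice, "zombie")      # 60/30/10 for human/zombie/angel
after_bob = update(after_alice, bob, "vampire")   # only a zombie can yield this report
```

Since Bob never reports "vampire" unless Charles is a zombie, the second update zeroes out the other two hypotheses, which is exactly the 100% conclusion above.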

Inline image 5

First problem: Alice saying that Charles was a zombie meant that there was a 60% chance of him being a human. Bob then saying that he was a vampire made "Charles is a zombie" the certain conclusion. Why? It's no good if the game just automatically updates the network and tells the player this: the player has to understand for themselves why this is the case.

My first inclination would be to solve this by turning the act of updating the probabilities into a mini-game. I like the graphical approach that komponisto took in his "Bayes' theorem illustrated" post - perhaps we could visualize the probabilities in a similar way and then have a small puzzle where the player needs to perform the steps of the update process, manually resizing the shapes and deleting the ones that have been eliminated? Alternatively, the style of visualization could be more like the applets in Eliezer's Bayes' theorem article. Actually, if we could make both fun, it could be nice to alternate between both: having two different ways of representing the same information makes for better learning. There is also this visualization, if we want to use three different ones...

Anyway, at this point it's not necessary for the player to really understand what it is that they're doing. It should be doable just as an abstract graphical manipulation thing, though there may certainly be helpful text on the side that explains what it is that the character is doing. One could even have the player play this minigame a couple of times before doing any belief updates, and only later bring it back up in that context. In any case, we have to figure out how this minigame could be made at least moderately entertaining by itself.

So far, we have assumed perfect knowledge about the structure of the problem. We knew, among other things:
  • That Charles could have been a human, zombie, or an angel (but not a vampire or an archangel)
  • That each of the three possibilities of what Charles could have been was equally likely
  • That Alice and Bob both knew something about Charles's nature
  • That Alice tends to be pessimistic and that Bob tends to exaggerate things
  • The pessimism/exaggeration hierarchy of the various possible states of Charles (e.g. being pessimistic about "angel" first gets "human" or if you're really pessimistic, "zombie"; exaggerating "zombie" by one step gets you "vampire")
  • The exact probabilities for Alice's and Bob's transformations
That's a lot of knowledge! The good thing about this is that we could simultaneously appeal both to players' natural curiosity, and the same hoarding / collecting instinct that many other games exploit, by making these different pieces of knowledge into things that you could collect in-game. So, for example:
  • Domain knowledge about various domains can be collected from books, teachers, etc. Maybe here, the character had "Basic Resurrection Magic Knowledge", which let them know the possible outcomes of somebody being resurrected (there's an equal chance that somebody will come back as a zombie, human, or angel, but that kind of a ritual isn't going to produce vampires or archangels). More sophisticated domain knowledge could reveal subgraphs of conditional probabilities relating to some domain.
  • Psychological knowledge about the nature of various biases and personality traits, which gave them the probabilities of Alice's and Bob's personality traits shifting the claims by one or two steps in some direction. This can similarly be collected from books and teachers, but also from observing various characters.
  • Folk lore knowledge, which let them know that non-experts generally think that one step up from a zombie gets you a vampire. The folk understanding of a domain may or may not match what is actually true, or what the experts believe: maybe one step up from a zombie actually gets you a demon. In the early stages, the folk and domain knowledge should always match, though.
  • Familiarity with Alice and Bob, which let them know that Alice was pessimistic and Bob tended to exaggerate, and also that they might know something about Charles.

This should hopefully teach thoughtful players the value of looking for those things in real life as well, and help us make the explosion in the size of the belief network a bit more manageable: e.g. learning domain knowledge about some fact is going to make it a lot easier to figure out the truth about that fact. If the character doesn't have background knowledge about e.g. the outcomes of resurrection rituals, we could automatically prefill the priors with some stuff that is weakly correlated with the truth (the character's gut feelings probably aren't pure noise), but which could still easily be completely wrong. The player could freely modify these "gut feel" priors if they wished.

Later on, the player might also need to verify e.g. the domain knowledge that they collected for truthfulness... but in the beginning, all domain knowledge would be reliable. The game should start off with several problems where the player had perfect knowledge about the structure of the problem, and then move on to situations where they didn't have the necessary domain knowledge, psychological knowledge, etc.

In this design, various claims would consist of one or more atomic beliefs. In this example, there was just one atomic belief: "Charles is a resurrected creature X". A claim would be originally generated by some primary source of evidence, and then be passed on via different evidence channels. Each channel would be associated with a transformation that it might perform on the claim, with some fixed probability depending on the type of the source.

So in this example, "exaggeration" had a 30% chance of leaving the claim as is, a 60% chance of increasing the "intensity" of one of the beliefs by one step, and a 10% chance of increasing the "intensity" of one of the beliefs by two steps. (If it wasn't possible to increase the intensity by two steps, there was just a 70% chance to increase it by one step.) Other kinds of transformations could change other kinds of attributes for one or more beliefs associated with the claim, add new beliefs to it ("Charles is a resurrected creature X, and of occupation Y"), and delete beliefs.
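A minimal sketch of that channel transformation, assuming we represent each belief's possible escalations as an ordered "ladder" (the ladder itself, e.g. human → angel → archangel, is hypothetical folk-lore knowledge, not something this function decides):

```python
def exaggeration_dist(ladder):
    """Distribution over claims after passing through an 'exaggeration' channel.

    `ladder` lists the incoming claim followed by its more-intense versions,
    e.g. ["human", "angel", "archangel"].  30% of the time the claim passes
    through unchanged, 60% it moves one step up, 10% two steps up; when two
    steps aren't available, the 10% folds into the one-step case (70% total),
    matching the rule described above.
    """
    steps_available = len(ladder) - 1
    if steps_available >= 2:
        return {ladder[0]: 0.3, ladder[1]: 0.6, ladder[2]: 0.1}
    if steps_available == 1:
        return {ladder[0]: 0.3, ladder[1]: 0.7}
    return {ladder[0]: 1.0}  # nothing to exaggerate into

# Reproduces Bob's rows: 30/60/10 over human/angel/archangel,
# and 30/70 over zombie/vampire (only one step above "zombie").
bob_human = exaggeration_dist(["human", "angel", "archangel"])
bob_zombie = exaggeration_dist(["zombie", "vampire"])
```

Other transformation types (attribute changes, belief additions and deletions) could then be similar functions from an incoming claim to a distribution over outgoing claims, which is what each evidence channel would carry.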

Thoughts?

Prototype2.png
Prototype3.png
Prototype1.png

Kaj Sotala

Feb 19, 2013, 4:41:33 AM
to the-fundamen...@googlegroups.com
On Tue, Feb 19, 2013 at 11:31 AM, Kaj Sotala <xue...@gmail.com> wrote:
Each channel would be associated with a transformation that it might perform on the claim, with some fixed probability depending on the type of the source.

Sorry, depending on the type of channel. (Though some channels may be particularly hostile towards information coming from specific sources, and intentionally twist it.)

Kaj Sotala

Apr 5, 2013, 8:30:20 AM
to the-fundamen...@googlegroups.com
Hi all,

I haven't forgotten or abandoned this project, just been very quiet about it due to being busy. But here are some brief follow-up thoughts on the previous e-mail, just so that I'll have written at least something. More to come soon, hopefully.

Upon consideration, I reached the conclusion that we should not start out with belief networks that express explicit numerical probabilities. Rather, we should start off by representing different possible worlds graphically, and then gradually move on to numbers once the worlds become too complicated to express in pictures. (Ideally, this would teach not only causal networks, but basic probabilities as well.) For example, below are some interface sketches (also available as links in case you can't see the pictures in this e-mail):

Inline image 1

Picture 1: Here's Bob, claiming that (somebody) is a vampire. We don't know where he got that information from, so the origin is marked as a white question mark within a black sphere.

Inline image 2

Picture 2: We still don't know the exact source of Bob's beliefs, but we do know that the source in question is also connected to somebody else (say Alice), although we don't know what her beliefs are. At this point, we can speculate on the claim that the source originally made: it could have claimed that (somebody) is an angel, a human, or a vampire. These three possibilities show up as pictures next to the source.

Inline image 3

Picture 3: We may hypothesize that the claim made by the original source was actually "(someone) is a human" by clicking on the "human" graphic next to the source. If we do this, the belief network will be updated to reflect the consequences of this belief. In particular, the connection from the original source to Bob will become associated with the rule, "Bob tends to claim that humans are vampires".

Inline image 4

Picture 4: By exploring the network further, you can find out that sources that you thought to be independent actually got all of their information from the same source.



Sketch2.png
Sketch3.png
Sketch4.png
Sketch1.png