where A is a set of possibilities, E is a set of sources of evidence, and i_e is the event that source of evidence e indicated whichever possibility it in fact indicated (as opposed to indicating some other possibility). It's really a general formula for deriving a probability from any number of conflicting, corroborating, and possibly biased sources of evidence. So long as the sources of evidence are conditionally independent given each possibility, and each indicates exactly one of a limited number of possibilities, the formula applies. For instance, suppose you have several people making claims about something that happened, some agreeing and some disagreeing. How do you weigh their testimony against one another? If you can find a way to estimate how likely each person was to make the claim he or she actually made given each of the possibilities, this formula provides you with a way forward.
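For reference, given the description above, the formula is presumably the usual conditional-independence form of Bayes' theorem; this is a reconstruction from the surrounding text, not a quotation from the paper:

```latex
P\left(a \;\middle|\; \bigcap_{e \in E} i_e\right)
  = \frac{P(a) \prod_{e \in E} P(i_e \mid a)}
         {\sum_{a' \in A} P(a') \prod_{e \in E} P(i_e \mid a')}
```

That is, the posterior probability of a possibility a is its prior times the product of the per-source likelihoods, normalized over all the possibilities in A.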
Here's a math paper that states the formula as a theorem and proves it: Bayes' Theorem Under Conditional Independence; the formula is Theorem 4 there. (The paper expresses it with different symbols, but it's essentially the same thing.) I haven't found the formula anywhere else in print or online. Since the numerator on the right-hand side is the basis for naive Bayes classifiers, maybe it's included in some AI course material somewhere. I suppose it falls between two stools: it isn't one of the most basic applications of Bayes' theorem, so it wouldn't appear in introductory material, but to a professional mathematician it might seem too obvious to be worth mentioning.
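To make the testimony example concrete, here's a minimal sketch in Python. Everything in it is hypothetical: the possibilities, the priors, and the per-witness likelihood estimates are made up for illustration, and the computation assumes the formula takes the reconstructed form shown above (prior times the product of likelihoods, normalized over all the possibilities).

```python
# Hypothetical example: two possibilities and three independent witnesses.
# For each witness we estimate the probability that he or she would have
# made the claim actually made, given each possibility.

possibilities = ["guilty", "innocent"]
priors = {"guilty": 0.5, "innocent": 0.5}

# likelihoods[w][a] = P(witness w makes the claim w made | possibility a)
likelihoods = [
    {"guilty": 0.8, "innocent": 0.3},  # witness 1 says "guilty"
    {"guilty": 0.7, "innocent": 0.4},  # witness 2 says "guilty"
    {"guilty": 0.2, "innocent": 0.6},  # witness 3 says "innocent"
]

def posterior(possibilities, priors, likelihoods):
    # Numerator of the formula for each possibility: prior times the
    # product of the per-witness likelihoods (the naive Bayes numerator).
    unnormalized = {}
    for a in possibilities:
        p = priors[a]
        for w in likelihoods:
            p *= w[a]
        unnormalized[a] = p
    # Denominator: the sum of the numerators over all possibilities.
    total = sum(unnormalized.values())
    return {a: p / total for a, p in unnormalized.items()}

print(posterior(possibilities, priors, likelihoods))
# {'guilty': 0.608..., 'innocent': 0.391...}
```

The unnormalized values here are exactly the numerators that naive Bayes classifiers compute; dividing by their sum is what turns them into probabilities.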