
# Confirmation theory

Confirmation theory studies the relation between evidence and hypothesis; this relation is called confirmation. We speak of a piece of evidence confirming (or disconfirming) an hypothesis: this means that the evidence makes the hypothesis more (or less) likely to be true. That is, finding out that certain pieces of evidence are true changes the probability of the hypothesis that you are interested in.

The goal of confirmation theory is to formalize and study this relation, and this is done within the project of using Bayes' Theorem as a formal model of inference.

## Inductive Probabilities

(main article: Bayes' theorem and inductive inference)

Consider an inductive argument from evidence $E_1, E_2, \dots, E_n$ to conclusion $H$. The inductive probability of this argument is measured by $\Pr(H \mid E_1 \wedge E_2 \wedge \dots \wedge E_n)$, which, via Bayes' Theorem, is equal to:

$\frac{\Pr(H) \cdot \Pr(E_1 \wedge E_2 \wedge \dots \wedge E_n \mid H)}{\Pr(E_1 \wedge E_2 \wedge \dots \wedge E_n)}$

In order to calculate the inductive probability of this argument, then, you must first have prior probabilities assigned to all of the relevant terms. What learning the new evidence does is change the probability assigned to $H$ from $\Pr(H)$ to $\Pr(H \mid E_1 \wedge E_2 \wedge \dots \wedge E_n)$.
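As a quick sketch, this Bayesian update can be computed directly. The function and the numbers below are illustrative assumptions for a single piece of evidence, not values from this article:

```python
def posterior(prior_h, likelihood, prob_e):
    """Bayes' theorem: Pr(H | E) = Pr(H) * Pr(E | H) / Pr(E)."""
    return prior_h * likelihood / prob_e

# Assumed values: Pr(H) = 0.3, Pr(E | H) = 0.8, Pr(E) = 0.5.
print(posterior(0.3, 0.8, 0.5))  # ≈ 0.48
```

Learning $E$ moves the probability of $H$ from the assumed prior 0.3 up to 0.48.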

This change is caused by the evidence, and the aim of confirmation theory is to measure this change.

## Measuring Confirmation

There are three things that could happen to the probability of $H$ after learning $E_1, E_2, \dots, E_n$:

- $\Pr(H \mid E_1 \wedge E_2 \wedge \dots \wedge E_n) > \Pr(H)$.

In this case, the evidence increases the probability of $H$; it makes $H$ more likely to be true than it was before. In such a case, the evidence confirms $H$.

- $\Pr(H \mid E_1 \wedge E_2 \wedge \dots \wedge E_n) < \Pr(H)$.

In this case, the evidence decreases the probability of $H$; it makes $H$ less likely to be true than it was before. In such a case, the evidence disconfirms $H$.

- $\Pr(H \mid E_1 \wedge E_2 \wedge \dots \wedge E_n) = \Pr(H)$.

In the final case, the evidence does not change the probability of $H$ at all. In such a case, $E_1 \wedge E_2 \wedge \dots \wedge E_n$ and $H$ are independent.
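The three cases above can be summarized in a small helper function (a hypothetical sketch, not part of any standard library), which compares the posterior $\Pr(H \mid E_1 \wedge \dots \wedge E_n)$ to the prior $\Pr(H)$:

```python
def classify(prior, posterior):
    """Classify what the evidence did to H by comparing posterior to prior."""
    if posterior > prior:
        return "confirms"
    if posterior < prior:
        return "disconfirms"
    return "independent"

print(classify(0.3, 0.5))  # confirms
print(classify(0.3, 0.1))  # disconfirms
print(classify(0.3, 0.3))  # independent
```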

These are the three different things that evidence can do to an hypothesis, but there is more to it than that. Confirmation comes in degrees: one piece of evidence can confirm an hypothesis more strongly than another. We would like to measure not only whether the probability changed, but by how much.

This suggests the following definition of a measure of how strongly evidence $E_1 \wedge E_2 \wedge \dots \wedge E_n$ supports the hypothesis $H$:

$\frac{\Pr(H \mid E_1 \wedge E_2 \wedge \dots \wedge E_n)}{\Pr(H)}$

If $\frac{\Pr(H \mid E_1 \wedge E_2 \wedge \dots \wedge E_n)}{\Pr(H)} > 1$, then the evidence confirms the hypothesis, and this number tells us by how much it confirms it. The higher the number, the greater the confirmation.

If $\frac{\Pr(H \mid E_1 \wedge E_2 \wedge \dots \wedge E_n)}{\Pr(H)} < 1$, then the evidence disconfirms the hypothesis. If the ratio is equal to zero, then the evidence has falsified the hypothesis: it has proven it to be false. Any other number between zero and one measures by how much the evidence decreases the probability of the hypothesis: the lower the number, the stronger the disconfirmation.

If $\frac{\Pr(H \mid E_1 \wedge E_2 \wedge \dots \wedge E_n)}{\Pr(H)} = 1$, then the evidence and hypothesis are independent of each other. There is no confirmation or disconfirmation: the relevant sentences do not affect each other.

Note that this degree of confirmation is NOT a probability! Probabilities are numbers between zero and one inclusive, while a degree of confirmation is merely a number greater than or equal to zero (it is not bounded above like probabilities are).
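A minimal sketch of this measure (the function name is ours; the example values are chosen to be exactly representable as floats):

```python
def confirmation_ratio(prior, posterior):
    """Degree of confirmation: posterior / prior.
    >1 confirms, <1 disconfirms, =1 independent.
    Not a probability: it is unbounded above."""
    if prior == 0:
        raise ValueError("undefined for a zero prior")
    return posterior / prior

print(confirmation_ratio(0.25, 0.5))   # 2.0 — confirmation
print(confirmation_ratio(0.5, 0.25))   # 0.5 — disconfirmation
```

The guard against a zero prior reflects the fact that the ratio measure is only defined for hypotheses that start with some nonzero probability.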

## Example

Consider the medical test example. Let $D$ be the hypothesis that you have the disease and $T$ the evidence that you tested positive. Initially, the probability of having the disease is only 0.001. After a positive test, the probability of having the disease jumps to 0.047.

One can ask at this stage how much this evidence (getting a positive test) confirms the hypothesis that you have the disease. To answer this, use the measure of confirmation given above:

$\frac{\Pr(D \mid T)}{\Pr(D)} = \frac{0.047}{0.001} = 47.$

This is incredibly strong confirmation.

What about after a second positive test? This, as detailed in the example, tells you that the probability of having the disease given that you tested positive twice is 0.7094. How strongly did this second test confirm the hypothesis that you have the disease?

$\frac{\Pr(D \mid T_1 \wedge T_2)}{\Pr(D \mid T_1)} = \frac{0.7094}{0.047} \approx 15.1,$ writing $T_1$ and $T_2$ for the first and second positive tests. The relevant prior here is 0.047, the probability of $D$ after the first test.

This is still strong confirmation, though it is not as strong as the first piece of evidence was.

What about the third positive test? This test raised the probability of having the disease to 0.99179. How strongly did the third test confirm the disease hypothesis?

$\frac{\Pr(D \mid T_1 \wedge T_2 \wedge T_3)}{\Pr(D \mid T_1 \wedge T_2)} = \frac{0.99179}{0.7094} \approx 1.4,$ with $T_3$ the third positive test.

This is still confirmation, though it is not very strong at all.
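The three ratios in this example can be checked with a few lines of Python, using only the probabilities quoted above:

```python
# Prior, then posterior after each of the three positive tests.
probs = [0.001, 0.047, 0.7094, 0.99179]

# Each confirmation ratio divides the new probability by the one before it.
ratios = [probs[i + 1] / probs[i] for i in range(3)]
print([round(r, 1) for r in ratios])  # [47.0, 15.1, 1.4]
```

The shrinking ratios illustrate the pattern in the example: each further positive test confirms the disease hypothesis, but by less than the test before it, since the probability of $D$ is already high.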