Today we will look at some common distributions used for Bayesian inference.

**Beta**

The first distribution we will look at is the Beta distribution. Its density is [B(a,b)]^{-1}π^{a-1}(1-π)^{b-1}, where B(a,b) is the beta function. We can show that:

- If the prior ~ π^{a}(1-π)^{b}
- and the likelihood ~ π^{S}(1-π)^{F}
- then the posterior ~ π^{a+S}(1-π)^{b+F}
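The conjugacy above can be checked numerically: the product of the prior kernel and the likelihood kernel equals the posterior kernel up to a constant. A minimal sketch, with the exponent values (a, b, S, F) chosen purely for illustration:

```python
# Numerical check that prior kernel * likelihood kernel is
# proportional to the posterior kernel pi^(a+S) (1-pi)^(b+F).
a, b = 2, 3        # hypothetical prior exponents
S, F = 7, 5        # hypothetical successes and failures

def prior(p):      return p**a * (1 - p)**b
def likelihood(p): return p**S * (1 - p)**F
def posterior(p):  return p**(a + S) * (1 - p)**(b + F)

for p in (0.2, 0.5, 0.8):
    ratio = prior(p) * likelihood(p) / posterior(p)
    print(ratio)   # the same constant at every value of pi
```

Because the exponents add exactly, the ratio is constant in π, which is what the '~' relation asserts.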

Where ‘~’ denotes *equal except for a constant* or *proportional to*. If:

- S* = a + S + 1
- F* = b + F + 1

Then

- n* = S* + F*
- P* = S*/n*

We can use P* and n* just like we would *p* and *n* in a classical binomial framework. P* is our expected mean, and the Bayesian confidence interval can be calculated as follows.

- Conf. Int.: P* ± (t_{α/2})·[P*(1-P*)/n*]^{1/2}
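Putting the pieces together, here is a sketch of the whole calculation with made-up data: a flat prior (a = b = 0 in the kernel π^{a}(1-π)^{b}) updated with hypothetical counts of S = 40 successes and F = 60 failures. Since n* is large, the t critical value is approximated by the standard normal quantile.

```python
from statistics import NormalDist

# Hypothetical data: flat prior (a = b = 0) plus 40 successes, 60 failures.
a, b = 0, 0
S, F = 40, 60

S_star = a + S + 1
F_star = b + F + 1
n_star = S_star + F_star          # n* = S* + F*
P_star = S_star / n_star          # P* = S*/n*

# Normal approximation to t_{alpha/2} for a two-sided 95% interval.
z = NormalDist().inv_cdf(0.975)
half_width = z * (P_star * (1 - P_star) / n_star) ** 0.5
lo, hi = P_star - half_width, P_star + half_width
print(f"P* = {P_star:.4f}, 95% interval = ({lo:.4f}, {hi:.4f})")
```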

**Normal Distribution**

With the normal distribution, we will again have a prior normal distribution and a likelihood function. The question is, which should we rely on more in creating our posterior distribution: our prior assumptions or the data we collected?

Let the variance of the prior be σ^{2}_{0} and the variance of the sample be σ^{2}. Also let μ_{0} be the prior mean and X* be the sample mean. If *n* is the number of observations in the sample, we can calculate the posterior mean as:

- Posterior mean = (n_{0}μ_{0} + nX*)/(n_{0} + n)
- where: n_{0} = σ^{2}/σ^{2}_{0}

We can see that the prior mean is weighted more heavily when the prior’s variance is small relative to the sample variance. On the other hand, the sample mean is given more weight when the sample variance is small relative to the prior variance. We can calculate the posterior standard error as:

- Posterior S.E. = σ/(n_{0} + n)^{1/2}
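The weighting described above can be seen in a short sketch. All the numbers here (prior mean and variance, sample mean, variance, and size) are assumptions for illustration; n_{0} acts as a prior "pseudo-sample size".

```python
# Hypothetical prior: N(mu0, sigma0_sq); hypothetical sample of size n
# with mean X_bar and variance sigma_sq.
mu0, sigma0_sq = 50.0, 25.0
sigma_sq = 100.0
X_bar, n = 62.0, 20

n0 = sigma_sq / sigma0_sq                     # n0 = sigma^2 / sigma0^2
post_mean = (n0 * mu0 + n * X_bar) / (n0 + n) # weighted average of mu0 and X_bar
post_se = sigma_sq ** 0.5 / (n0 + n) ** 0.5   # sigma / sqrt(n0 + n)

print(f"n0 = {n0}, posterior mean = {post_mean:.3f}, S.E. = {post_se:.3f}")
```

With these numbers the prior variance (25) is small relative to the sample variance (100), so n_{0} = 4 and the posterior mean lands between μ_{0} = 50 and X* = 62, pulled toward the sample mean because n = 20 observations still outweigh the prior's four pseudo-observations.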