
Bayesian Analysis: Binomial Discrete

A Simple Illustration of Bayesian Probability with a Binomial Distribution

Suppose we have a discrete set $q$ of possible probabilities of an event E, along with prior beliefs about which element of $q$ is the true probability of E. That is, we have a probability mass function for the random variable $p$ that places all of its mass on the values in $q$, where

$$ P(p = w) = 0, \quad \forall\, w \notin q $$

$$ \sum_{w \in q}{P(p = w)} = 1 $$

We then observe some independent events and wish to update our prior beliefs accordingly: we observe $k$ occurrences over $n$ trials.

From Bayes' Rule, we know that

$$ P(p=w | k, n) = \frac{P(k | p=w, n)P(p=w|n)}{P(k|n)} $$

We can now calculate the posterior of $p$ by using the fact that independent events occurring with some probability $p$ follow a Bernoulli distribution, so the total number of events follows a binomial distribution. Using this binomial likelihood for $P(k | p=w, n)$, we can solve for the posterior at each defined value of $p$.
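Written out with the binomial likelihood substituted in, the posterior for each $w \in q$ is

$$ P(p = w \mid k, n) = \frac{\binom{n}{k} w^{k} (1 - w)^{n - k}\, P(p = w)}{\sum_{v \in q} \binom{n}{k} v^{k} (1 - v)^{n - k}\, P(p = v)} $$

Note that the binomial coefficient $\binom{n}{k}$ appears in every term of the numerator and the denominator, so it cancels; the code below takes advantage of this.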

This is implemented below:

Code to Find the Posterior Distribution Given a Binomial Likelihood and Discrete Prior

In [1]:
## Finds the posterior distribution of p given observed values, assuming that k
## follows a binomial distribution. p is discrete in this case.

numerator <- function(k, n, posprobs, probsofprobs){
  ## Finds the numerator -- i.e. P(k|p)P(p).
  ## k (integer) : the total number of observed successes.
  ## n (integer) : the total number of observed trials.
  ## posprobs (vector) : possible p values associated with the random variable k.
  ## probsofprobs (vector) : our prior, mapping a probability to each possible p value.

  function(p){
    ## This is a function generator -- given values of k and n, the returned
    ## function finds the unnormalized posterior weight of the p value at index p.
    x <- posprobs[p]
    px <- probsofprobs[p]

    ## choose(n, k) is omitted here: it appears in both the numerator and the
    ## denominator, so it cancels.
    res <- (x^k)*((1 - x)^(n - k))*px
    return(res)
  }
}

denominator <- function(k, n, posprobs, probsofprobs){
  ## Finds the denominator P(k), the normalizing constant. Same for all values of p.
  sum(sapply(X = 1:length(posprobs), FUN = numerator(k, n, posprobs, probsofprobs)))
}

probabilityfinder <- function(k, n, p, posprobs, probsofprobs){
  ## Finds the posterior probability of a particular p value.
  numerator(k, n, posprobs, probsofprobs)(p)/denominator(k, n, posprobs, probsofprobs)
}

posterior <- function(k, n, posprobs, probsofprobs){
  ## Finds the posterior probability mass function of the p values.
  sapply(X = 1:length(posprobs), FUN = function(p) probabilityfinder(k, n, p, posprobs, probsofprobs))
}

log_transf_numerator <- function(k, n, posprobs, probsofprobs){
  ## Finds the numerator -- i.e. P(k|p)P(p) -- using a log transform to avoid machine zeros.
  ## k (integer) : the total number of observed successes.
  ## n (integer) : the total number of observed trials.
  ## posprobs (vector) : possible p values associated with the random variable k.
  ## probsofprobs (vector) : our prior, mapping a probability to each possible p value.

  ## The largest log-likelihood across all p values. Subtracting it before
  ## exponentiating keeps the terms within floating-point range; the shared
  ## factor exp(-mx) cancels between numerator and denominator.
  mx <- max(sapply(posprobs, function(pr) k*log(pr) + (n - k)*log(1 - pr)))

  function(p){
    ## As above, a function generator: given k and n, the returned function
    ## finds the (rescaled) unnormalized posterior weight of the p value at index p.
    x <- posprobs[p]
    px <- probsofprobs[p]

    res <- exp(k*log(x) + (n - k)*log(1 - x) - mx)*px
    return(res)
  }
}

log_transf_denominator <- function(k, n, posprobs, probsofprobs){
  ## Finds the denominator, the normalizing constant P(k), using the log transform.
  ## Same for all values of p.
  sum(sapply(X = 1:length(posprobs), FUN = log_transf_numerator(k, n, posprobs, probsofprobs)))
}

log_transf_probabilitydist <- function(k, n, p, posprobs, probsofprobs){
  ## Finds the posterior probability of a particular p value using the log transform.
  log_transf_numerator(k, n, posprobs, probsofprobs)(p)/
    log_transf_denominator(k, n, posprobs, probsofprobs)
}

log_transf_posterior <- function(k, n, posprobs, probsofprobs){
  ## Finds the posterior probability mass function of the p values using the log transform.
  sapply(X = 1:length(posprobs), FUN = function(p) log_transf_probabilitydist(k, n, p, posprobs, probsofprobs))
}

Example: Thunderstorms

Suppose we observe 11 positive events in 27 days -- for example, the number of days with a thunderstorm in a month. We want to estimate the probability of a thunderstorm given this new information. Assuming the weather on each day is independent (a simplifying assumption), the number of thunderstorms in $n$ days can be modeled by a binomial distribution.
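Under this model, the likelihood of the observed data at each candidate probability $w$ is

$$ P(k = 11 \mid p = w, n = 27) = \binom{27}{11} w^{11} (1 - w)^{16} $$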

Suppose we have the following prior probability mass function about the discrete values of $p$:

\begin{equation}
P(p = w) =
\begin{cases}
.07, & \text{if } w = .05 \\
.13, & \text{if } w = .15 \\
.26, & \text{if } w = .25 \\
.26, & \text{if } w = .35 \\
.13, & \text{if } w = .45 \\
.07, & \text{if } w = .55 \\
.02, & \text{if } w = .65 \\
.02, & \text{if } w = .75 \\
.02, & \text{if } w = .85 \\
.02, & \text{if } w = .95 \\
0, & \text{otherwise}
\end{cases}
\end{equation}

If we believe, prior to observing the data, that the distribution looks as it does above (with the highest belief in 25 and 35%), we can find the posterior over the probabilities after observing the 11 thunderstorms as follows:

In [2]:
posprobs <- c(.05,.15,.25,.35,.45,.55,.65,.75,.85,.95)
probsofprobs <- c(.07,.13,.26,.26,.13,.07,.02,.02,.02,.02)

pos <- posterior(11,27,posprobs, probsofprobs)
round(pos, 3)
  1. 0
  2. 0.002
  3. 0.128
  4. 0.524
  5. 0.287
  6. 0.057
  7. 0.002
  8. 0
  9. 0
  10. 0

So we now believe, with probability 0.524, that the true $p$ is .35. Given our prior beliefs and the independence of events, the data most supports a .35 chance of a thunderstorm on any given day.
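As a quick sanity check (a sketch using base R's dbinom, not part of the original code), the same posterior can be computed from the built-in binomial density; the binomial coefficient appears in every term and cancels:

lik <- dbinom(11, 27, posprobs)  ## P(k = 11 | p = w, n = 27) for each candidate w
round(lik * probsofprobs / sum(lik * probsofprobs), 3)

This reproduces the vector above, with 0.524 at $p = .35$.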

Behaviors for different $n$ and $k$

Now, let us investigate the behavior for various sample sizes $n$ and numbers of observed thunderstorms $k$.

In [3]:
## Sample sizes to examine; for each n, k runs from n/10 up to n in steps of n/10.
n <- c(10, 50, 100, 150, 200, 300, 500)
k <- lapply(n, function(n) seq(n/10, n, n/10))

col_names <- as.factor(1:10/10)             # columns labeled by k/n
row_names <- paste0('P(p=', posprobs, ')')  # rows labeled by the candidate p values
list_names <- paste('n=', n)

iter_j <- 1:10; names(iter_j) <- col_names
iter_i <- 1:7;  names(iter_i) <- list_names

## A list of 7 data frames (one per n), each 10 x 10: rows are the candidate p
## values, columns are k/n from 0.1 to 1.
dimthreedfs <- lapply(iter_i, function(i)
  data.frame(sapply(iter_j, function(j) posterior(k[[i]][j], n[i], posprobs, probsofprobs)),
             row.names = row_names, check.names = FALSE))
In [4]:
dimthreedfs = lapply(dimthreedfs, function(y) round(y, 3))
In [5]:
dimthreedfs
$`n= 10`
            0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9     1
P(p=0.05) 0.160 0.030 0.004 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.15) 0.328 0.209 0.096 0.035 0.010 0.002 0.000 0.000 0.000 0.000
P(p=0.25) 0.354 0.427 0.371 0.255 0.141 0.062 0.020 0.004 0.000 0.000
P(p=0.35) 0.137 0.266 0.373 0.415 0.372 0.263 0.136 0.043 0.006 0.000
P(p=0.45) 0.020 0.058 0.123 0.208 0.283 0.304 0.239 0.116 0.026 0.003
P(p=0.55) 0.002 0.009 0.030 0.075 0.152 0.245 0.287 0.208 0.070 0.010
P(p=0.65) 0.000 0.000 0.002 0.009 0.029 0.070 0.124 0.137 0.070 0.015
P(p=0.75) 0.000 0.000 0.000 0.002 0.011 0.043 0.123 0.219 0.182 0.064
P(p=0.85) 0.000 0.000 0.000 0.000 0.002 0.012 0.064 0.215 0.338 0.225
P(p=0.95) 0.000 0.000 0.000 0.000 0.000 0.000 0.005 0.058 0.306 0.683

$`n= 50`
            0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9     1
P(p=0.05) 0.232 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.15) 0.703 0.292 0.009 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.25) 0.065 0.646 0.471 0.054 0.001 0.000 0.000 0.000 0.000 0.000
P(p=0.35) 0.001 0.061 0.489 0.613 0.135 0.005 0.000 0.000 0.000 0.000
P(p=0.45) 0.000 0.000 0.031 0.311 0.555 0.160 0.006 0.000 0.000 0.000
P(p=0.55) 0.000 0.000 0.000 0.023 0.299 0.640 0.179 0.003 0.000 0.000
P(p=0.65) 0.000 0.000 0.000 0.000 0.010 0.180 0.408 0.047 0.000 0.000
P(p=0.75) 0.000 0.000 0.000 0.000 0.000 0.016 0.392 0.499 0.028 0.000
P(p=0.85) 0.000 0.000 0.000 0.000 0.000 0.000 0.015 0.451 0.602 0.004
P(p=0.95) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.001 0.370 0.996

$`n= 100`
            0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9     1
P(p=0.05) 0.168 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.15) 0.828 0.288 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.25) 0.004 0.706 0.479 0.005 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.35) 0.000 0.006 0.517 0.655 0.019 0.000 0.000 0.000 0.000 0.000
P(p=0.45) 0.000 0.000 0.004 0.337 0.637 0.026 0.000 0.000 0.000 0.000
P(p=0.55) 0.000 0.000 0.000 0.003 0.343 0.762 0.028 0.000 0.000 0.000
P(p=0.65) 0.000 0.000 0.000 0.000 0.001 0.211 0.505 0.005 0.000 0.000
P(p=0.75) 0.000 0.000 0.000 0.000 0.000 0.002 0.467 0.548 0.002 0.000
P(p=0.85) 0.000 0.000 0.000 0.000 0.000 0.000 0.001 0.447 0.725 0.000
P(p=0.95) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.273 1.000

$`n= 150`
            0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9     1
P(p=0.05) 0.111 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.15) 0.889 0.269 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.25) 0.000 0.730 0.471 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.35) 0.000 0.001 0.529 0.656 0.002 0.000 0.000 0.000 0.000 0.000
P(p=0.45) 0.000 0.000 0.001 0.343 0.648 0.004 0.000 0.000 0.000 0.000
P(p=0.55) 0.000 0.000 0.000 0.000 0.349 0.782 0.004 0.000 0.000 0.000
P(p=0.65) 0.000 0.000 0.000 0.000 0.000 0.214 0.527 0.000 0.000 0.000
P(p=0.75) 0.000 0.000 0.000 0.000 0.000 0.000 0.469 0.575 0.000 0.000
P(p=0.85) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.424 0.812 0.000
P(p=0.95) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.188 1.000

$`n= 200`
            0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9     1
P(p=0.05) 0.071 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.15) 0.929 0.250 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.25) 0.000 0.750 0.461 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.35) 0.000 0.000 0.538 0.653 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.45) 0.000 0.000 0.000 0.346 0.650 0.000 0.000 0.000 0.000 0.000
P(p=0.55) 0.000 0.000 0.000 0.000 0.350 0.787 0.000 0.000 0.000 0.000
P(p=0.65) 0.000 0.000 0.000 0.000 0.000 0.212 0.538 0.000 0.000 0.000
P(p=0.75) 0.000 0.000 0.000 0.000 0.000 0.000 0.461 0.600 0.000 0.000
P(p=0.85) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.400 0.876 0.000
P(p=0.95) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.124 1.000

$`n= 300`
            0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9     1
P(p=0.05) 0.028 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.15) 0.972 0.214 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.25) 0.000 0.786 0.442 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.35) 0.000 0.000 0.558 0.647 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.45) 0.000 0.000 0.000 0.353 0.650 0.000 0.000 0.000 0.000 0.000
P(p=0.55) 0.000 0.000 0.000 0.000 0.350 0.793 0.000 0.000 0.000 0.000
P(p=0.65) 0.000 0.000 0.000 0.000 0.000 0.207 0.558 0.000 0.000 0.000
P(p=0.75) 0.000 0.000 0.000 0.000 0.000 0.000 0.442 0.648 0.000 0.000
P(p=0.85) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.352 0.949 0.000
P(p=0.95) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.051 1.000

$`n= 500`
            0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9     1
P(p=0.05) 0.004 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.15) 0.996 0.153 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.25) 0.000 0.847 0.405 0.000 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.35) 0.000 0.000 0.595 0.633 0.000 0.000 0.000 0.000 0.000 0.000
P(p=0.45) 0.000 0.000 0.000 0.367 0.650 0.000 0.000 0.000 0.000 0.000
P(p=0.55) 0.000 0.000 0.000 0.000 0.350 0.802 0.000 0.000 0.000 0.000
P(p=0.65) 0.000 0.000 0.000 0.000 0.000 0.198 0.595 0.000 0.000 0.000
P(p=0.75) 0.000 0.000 0.000 0.000 0.000 0.000 0.405 0.734 0.000 0.000
P(p=0.85) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.266 0.992 0.000
P(p=0.95) 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.008 1.000
In [6]:
library(ggplot2)
library(reshape2)

create_gg_heatmap <- function(df){
    ## Rows of df are the candidate p values and columns are k/n, so melt()
    ## yields one (p, k/n, posterior) triple per cell.
    mat = as.matrix(df)
    melted = melt(mat)
    colnames(melted) = c('p', 'k', 'posterior')   # 'k' holds the k/n column labels
    plt = ggplot(data = melted, aes(x=k, y=p, fill=posterior)) + geom_tile() +
    scale_fill_gradientn(limits = c(0, 1), colors=c("navyblue", "darkmagenta", "darkorange1"))
    return(plt)
}
In [7]:
for(df in dimthreedfs){
    print(create_gg_heatmap(df))
}

Each table corresponds to one value of $n \in \{10, 50, 100, 150, 200, 300, 500\}$; each row is a candidate $p$ value, and each column is a $k$ value running from $\frac{n}{10}$ to $n$, labeled by $k/n$.

The heatmaps show that the posterior probabilities become more polarized as $n$ increases, which is what we would expect. With small $n$, the colors are fuzzy, reflecting our uncertainty about the estimate.

We see what is expected with Bayesian updating: as $n$ increases, our confidence in the posterior becomes stronger. This is reflected in posterior values that are nearly 1 at the candidate $p$ closest to $k/n$ and nearly 0 elsewhere. As we get more observations, the data overwhelm the prior.

For small values of $n$, we see the opposite: the posterior is highly skewed by our prior, so we are not confident that $p = \frac{k}{n}$ if that value was not deemed probable a priori.
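To make this concrete, here is a minimal sketch comparing posteriors under the original prior and under a flat prior (the uniform prior is introduced here for contrast; it is not part of the original analysis):

uniformprior <- rep(0.1, 10)

## Small n: the two priors produce visibly different posteriors.
round(posterior(4, 10, posprobs, probsofprobs), 3)
round(posterior(4, 10, posprobs, uniformprior), 3)

## Larger n with the same k/n = 0.4: the data dominate and the posteriors nearly agree.
round(posterior(160, 400, posprobs, probsofprobs), 3)
round(posterior(160, 400, posprobs, uniformprior), 3)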

The Machine Zeros Problem

In [8]:
posterior(1000,2000, posprobs, probsofprobs)
  1. NaN
  2. NaN
  3. NaN
  4. NaN
  5. NaN
  6. NaN
  7. NaN
  8. NaN
  9. NaN
  10. NaN

Notice that we have many machine zeros for large $n$: every likelihood term $w^k(1 - w)^{n-k}$ underflows to exactly zero, so both the numerator and the denominator of Bayes' rule evaluate to 0, and 0/0 gives NaN.
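To see the underflow directly, a minimal illustration (not from the original notebook):

0.35^1000 * 0.65^1000   ## the likelihood term underflows to exactly 0
0/0                     ## hence the NaNs above

The log-transformed functions avoid this by working with $k\log w + (n - k)\log(1 - w)$ and subtracting the maximum before exponentiating: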

In [9]:
posterior(1000,2000, posprobs, probsofprobs)
log_transf_posterior(1000, 2000, posprobs, probsofprobs)
  1. NaN
  2. NaN
  3. NaN
  4. NaN
  5. NaN
  6. NaN
  7. NaN
  8. NaN
  9. NaN
  10. NaN
  1. 0
  2. 5.59622100226468e-289
  3. 3.46746548211335e-121
  4. 3.31238703307168e-37
  5. 0.650000000000052
  6. 0.349999999999948
  7. 2.54799002543976e-38
  8. 2.66728114008719e-122
  9. 8.60957077271489e-290
  10. 0
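Note that the observed frequency $k/n = 0.5$ falls exactly between the two nearest candidate values, so the binomial likelihood is identical at $w = .45$ and $w = .55$; the posterior therefore splits in the ratio of the prior weights, $.13/(.13 + .07) = 0.65$ and $.07/(.13 + .07) = 0.35$, with all other values vanishingly small.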
Sean Ammirati - creator of Stats Works. He can be reached on Github, LinkedIn and email.


Published

Oct 23, 2018

Category

Bayesian Analysis

Tags

  • bayesian methodology
  • binomial
  • discrete
