
Learning Bayesian statistics

By Kennie Nybo Pontoppidan


My estimate of the prior probability of this dog biting me is ½

I have always struggled with the definition and meaning of p-values in statistics

The p-value is defined as the probability, under the null hypothesis, of obtaining a result equal to or more extreme than what was actually observed.

https://en.wikipedia.org/wiki/P-value

Even now, writing this, I have a hard time expressing the meaning of this probability. Let’s assume a p-value of 0.05. “If you hypothetically ran the same experiment 100 times (which I might not actually be able to do), then in about five of them you would see a result at least this extreme purely by chance, even if the null hypothesis were true. Wait, what?”

If our experiment is throwing a die fifty times to determine whether it is fair (all outcomes equally likely), then the p-value is not the probability that the die is fair. It is about how likely chance alone could produce data that fit our model this badly.
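To make the definition concrete, here is a minimal sketch of the die experiment. The counts are made up for illustration, and rather than a textbook chi-square table I use a Monte Carlo simulation, which mirrors the definition directly: the p-value is the fraction of hypothetical fair-die experiments that produce a result at least as extreme as the one observed.

```python
import random

def chi_sq(counts, expected):
    """Chi-square statistic: how far the counts deviate from the expected count."""
    return sum((o - expected) ** 2 / expected for o in counts)

observed = [12, 8, 11, 9, 13, 7]  # hypothetical face counts from 60 rolls
n, faces = sum(observed), 6
obs_stat = chi_sq(observed, n / faces)

# Monte Carlo p-value: simulate many experiments under the null hypothesis
# (a fair die) and count how often the statistic is at least as extreme
# as what we actually observed.
random.seed(0)
trials = 10_000
hits = 0
for _ in range(trials):
    counts = [0] * faces
    for _ in range(n):
        counts[random.randrange(faces)] += 1
    if chi_sq(counts, n / faces) >= obs_stat:
        hits += 1
p_value = hits / trials
print(f"statistic = {obs_stat:.2f}, Monte Carlo p-value ≈ {p_value:.3f}")
```

With these invented counts the p-value comes out large, meaning deviations like this happen all the time with a fair die, so there is no evidence of unfairness. Note what the number is not: it is not the probability that the die is fair.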

And then there is the notion of a threshold for accepting or rejecting the null hypothesis based on the calculated p-value. The 5% threshold used in many textbooks was proposed by Ronald Fisher in 1925 as a convenient limit (a 1 in 20 chance). When learning physics and statistics, I have always struggled with rules of thumb like these. And I remember having the feeling that, because of this, statistics was not real science, built on proper foundations (mathematics).

Last month I finished a four-week course on Bayesian statistics. I had always wondered why people deemed it hard, and why I heard that the computations quickly became complicated. The course wasn’t that hard, and it gave a nice introduction to prior/posterior distributions, and in many cases also how to interpret the parameters in the prior distribution as extra data points.

An interesting aspect of Bayesian statistics is that it is a mathematically rigorous model, with no magic numbers such as the 5% threshold for p-values. And I like the way it naturally caters to sequential hypothesis testing, where the sample size of each iteration is not fixed in advance. Instead, data are evaluated and used to update the model as they are collected.
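A minimal sketch of this sequential updating, using the classic Beta-Binomial model for a binary event (say, heads vs. tails). The batches of data are invented for illustration. The prior parameters can be read as pseudo-counts of previously seen successes and failures, which is exactly the "prior parameters as extra data points" interpretation from the course:

```python
# Beta-Binomial conjugate updating: the Beta(a, b) prior's parameters act
# like extra data points (roughly a prior heads and b prior tails), and the
# posterior after new data is again a Beta distribution.

def update(a, b, heads, tails):
    """Posterior parameters after observing heads successes and tails failures."""
    return a + heads, b + tails

a, b = 1.0, 1.0  # uniform prior: no prior information
batches = [(3, 1), (5, 5), (2, 4)]  # hypothetical batches of (heads, tails)
for heads, tails in batches:
    a, b = update(a, b, heads, tails)
    mean = a / (a + b)
    print(f"posterior Beta({a:.0f}, {b:.0f}), mean estimate = {mean:.3f}")
```

Nothing here depends on fixing the sample size in advance: each batch, whatever its size, simply updates the posterior, which then serves as the prior for the next batch.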

Before taking the course, I thought that the probability of me ever using these techniques was low. Now I have changed that probability to a “definitely maybe”…

Go try it out yourself. It is free (it will only cost you time and head-scratching):

