Bayesian Statistics on Jørgen Bølstad
https://www.boelstad.net/tags/bayesian-statistics/
Recent content in Bayesian Statistics on Jørgen Bølstad (j.b [AT] post.harvard.edu)

A New Website Is Born: Estimite.com
https://www.boelstad.net/post/2021_norwegian_election_forecast/
Wed, 20 Jan 2021
<p>Bayesian modeling has proven its usefulness for poll aggregation and election forecasting – for instance through <a href="https://fivethirtyeight.com">FiveThirtyEight</a>. However, in Norway, media coverage of public opinion trends still tends to focus on a single poll at a time, or a simple average of polls at best. I was convinced it would be possible to do better, and after about three times as many long days and nights as I thought it would take, the result is finally live – both in <a href="https://www.estimite.com/no/post/stortingsvalget-2021/">Norwegian</a> and <a href="https://www.estimite.com/post/norwegian-election-2021/">English</a> – at <a href="https://www.estimite.com/">Estimite.com</a>.</p>
<p>I have landed on a fully automated production process, where a script checks for new data, refits the model if necessary, and then rebuilds and publishes the website. This is all done using R, Stan, and a few shell scripts.</p>
<p>Here are some key lessons from this project:</p>
<ol style="list-style-type: decimal">
<li><p>Stan is great for this, because it lets us define generated quantities, and get all kinds of relevant outputs from a single, joint simulation.</p></li>
<li><p>You should not underestimate how many technical challenges such projects entail and how much time it takes to fix them – especially if you care about details.</p></li>
<li><p>[Bonus lesson] If you do work that the media find interesting, and you generally pick up the phone when journalists call, your research agenda may get completely derailed – but, hey, the problem is mostly self-inflicted.</p></li>
</ol>
<!--### Notes on the approach
Here is a brief description of the model:
- This is a state space model where center logratio transformed latent trends are given multivariate *t*-distributed innovations. The multivariate *t* has been given a non-centered specification through Cholesky factorization, which makes a notable difference.
- The model estimates and corrects for average polling bias relative to previous elections. This contrasts with Nate Silver's models at FiveThirtyEight, which assume that house effects have a mean of zero -- I never fully bought that idea. Looking at Norwegian data, there seems to be a consistent pattern across elections, and ignoring it might be risky. At the same time, correcting for these biases carries its own risk, as it assumes polling companies do not fix their methodology from one election to the next.
- The model estimates latent trends in party support at the national level, and predicts latent support for each electoral district based on previous election results. The predictions are restricted to sum up to the national trend. Votes at the district-level are predicted from the district-level trends using a Dirichlet-Multinomial distribution -- the same kind of distribution as the one assumed to generate the polling data.
- Predictions for future time points are based on: (1) Vote shares at the last election, (2) the degree of change in each party's vote share from one election to the next, (3) current party support, (4), the degree of change in each party's support from one week to the next, and (5) the degree of covariation in how support for each possible pair of parties evolves on a weekly basis.
### Conclusions
-->
How Efficient is Stan Compared to JAGS?
https://www.boelstad.net/post/stan_vs_jags_speed/
Wed, 02 Jan 2019
<p>For a good while, JAGS was your best bet if you wanted to do MCMC on a distribution of your own choosing. Then Stan came along, potentially replacing JAGS as the black-box sampler of choice for many Bayesians. But how do they compare in terms of performance? The obvious answer is: It depends. In fact, the question is nearly impossible to answer properly, as any comparison will be conditional on the data, model specifications, test criteria, and more. Nevertheless, this post offers a small simulation study.</p>
<p>The approach taken here is to compare the models in terms of seconds required to produce an effective simulation draw for the parameter with the least effective draws. Simulation time is taken to exclude compilation time, but include adaptation and sufficient warm-up for the chains to get into the typical set. The samplers are tested in a hierarchical setting, using six different models. The models differ in terms of pooling, conjugacy, and centered parameter specifications. The results are roughly as expected: JAGS exhibits both blazing fast and incredibly slow sampling, while Stan delivers somewhat more stable performance, being relatively efficient also in harder scenarios.</p>
<div id="data" class="section level3">
<h3>Data</h3>
<p>This mini-study generates hierarchical datasets, containing <span class="math inline">\(m\)</span> noisy observations for each of <span class="math inline">\(n\)</span> individuals. For a given individual <span class="math inline">\(i \in \{1, \ldots, n\}\)</span> and trial number <span class="math inline">\(j \in \{1, \ldots, m\}\)</span>, we have the observed outcome <span class="math inline">\(y_{ij}\)</span>. In addition to <span class="math inline">\(y\)</span>, the data contain the covariate <span class="math inline">\(x\)</span>, which also varies by <span class="math inline">\(i\)</span> and <span class="math inline">\(j\)</span>. The outcome <span class="math inline">\(y\)</span> will have normally distributed, homoskedastic errors, and depend linearly on <span class="math inline">\(x\)</span>, according to the following process: <span class="math inline">\(y_{ij} \sim N(\alpha_i + \beta_i x_{ij},\sigma^2)\)</span>. Each individual will have its own true intercept, <span class="math inline">\(\alpha_i\)</span>, as well as its own coefficient on <span class="math inline">\(x\)</span>, <span class="math inline">\(\beta_i\)</span>. These individual-specific parameters are in turn drawn from their own distributions: <span class="math inline">\(\alpha \sim N(2,1)\)</span>, and <span class="math inline">\(\beta \sim N(-2,1)\)</span>. The code to define the data-generating process (and load necessary packages) is:</p>
<pre class="r"><code>library(rstan)
library(rjags)
library(MASS)    # To sample from multivariate normal
library(dclone)  # To run JAGS in parallel
set.seed(1)
create.data <- function(n = 500, m = 5, covar = 0, xmean = 0) {
  Sigma <- matrix(c(1, covar, covar, 1), 2, 2)
  coefs <- mvrnorm(n, mu = c(2, -2), Sigma)
  x <- matrix(rnorm(n * m, xmean, 1), ncol = m)
  y <- matrix(NA, n, m)
  for (i in 1:n) {
    for (j in 1:m) {
      y[i, j] <- rnorm(1, coefs[i, 1] + coefs[i, 2] * x[i, j], 2)
    }
  }
  dat <- list(n = n, m = m, x = x, y = y)
  return(dat)
}</code></pre>
<div id="posterior-parameter-correlations" class="section level4">
<h4>Posterior parameter correlations</h4>
<p>A key issue for MCMC samplers is how strongly the parameters are correlated in the posterior, which in turn depends on both the data and the model specification. Strong correlations mean the parameters are hard to separate, posing a more difficult task for the samplers. JAGS, which relies on either Gibbs sampling or Metropolis-Hastings, is likely to suffer under such circumstances, while Stan might be expected to perform better. With the present data, we can minimize posterior correlations by centering <span class="math inline">\(x\)</span>. Conversely, we can create correlations by moving its mean away from zero. The larger the mean relative to the standard deviation, the harder it will be to separate the intercepts from the coefficients. The tests below entail two scenarios that differ in this respect: one with <strong>“weak correlations”</strong>: <span class="math inline">\(x_{ij} \sim N(0,1)\)</span>, and one with <strong>“strong correlations”</strong>: <span class="math inline">\(x_{ij} \sim N(2,1)\)</span>.</p>
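<p>To see the mechanism in miniature, consider a single regression: the sampling correlation between the intercept and the slope grows as the mean of the covariate moves away from zero. (This is a small illustrative sketch, not part of the simulation study itself.)</p>

```r
# Intercept-slope correlation for centered vs. shifted x (illustrative only).
set.seed(1)
m <- 50
x_centered <- rnorm(m, 0, 1)
x_shifted  <- x_centered + 2  # mean 2, as in the "strong correlations" scenario
y <- rnorm(m, 2 - 2 * x_shifted, 2)
cov2cor(vcov(lm(y ~ x_centered)))[1, 2]  # close to zero
cov2cor(vcov(lm(y ~ x_shifted)))[1, 2]   # strongly negative
```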
</div>
</div>
<div id="models" class="section level3">
<h3>Models</h3>
<p>We will use a likelihood function that accurately reflects the data-generating process. In other words, we have:</p>
<p><span class="math display">\[p(y|x,\alpha,\beta,\sigma) = \prod^n_{i=1}\prod^m_{j=1} N(y_{ij}|\alpha_i + \beta_i x_{ij}, \sigma^2)\]</span></p>
<p>We will test six different models to see how the results differ across key specification choices. The main choices are: whether to do some pooling of individual coefficients, whether to use conjugate priors, and whether to use non-centered parameter specifications.</p>
<div id="hierarchical-weakly-informative-fully-conjugate-centered-and-non-centered" class="section level4">
<h4>Hierarchical, weakly informative, fully conjugate (centered and non-centered)</h4>
<p>We will start with a model doing partial pooling, assuming the <span class="math inline">\(\alpha\)</span>’s and <span class="math inline">\(\beta\)</span>’s are drawn from common population distributions: <span class="math display">\[\alpha_i \sim N(\mu_\alpha,\sigma_\alpha^2), ~~~ \beta_i \sim N(\mu_\beta,\sigma_\beta^2)\]</span> This model is fully conjugate, with normal priors on the means and inverse Gamma priors on the variances. (Note that the last parameter in the inverse gamma here is a scale and not a rate parameter.) The hyperpriors and the prior on <span class="math inline">\(\sigma\)</span> are set to be weakly informative: <span class="math display">\[\mu_\alpha,\mu_\beta \sim N(0,3^2), ~~~ \sigma_\alpha^2,\sigma_\beta^2,\sigma^2 \sim \text{Inv-Gamma}(1.5,2)\]</span></p>
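<p>As a quick numerical check of this parameterization (a sketch, not from the original code): draws from the Inv-Gamma(1.5, 2) prior can be obtained by inverting Gamma draws, since a Gamma <em>rate</em> becomes an inverse-Gamma <em>scale</em>.</p>

```r
# If G ~ Gamma(shape = a, rate = b), then 1/G ~ Inv-Gamma(shape = a, scale = b).
set.seed(1)
v <- 1 / rgamma(1e5, shape = 1.5, rate = 2)
median(v)                               # Monte Carlo median of Inv-Gamma(1.5, 2)
1 / qgamma(0.5, shape = 1.5, rate = 2)  # exact median via the same transform
```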
<p>In some situations (with little data and/or correlated parameters), one can achieve more efficient and accurate sampling by using a non-centered parameterization (see Papaspiliopoulos et al. 2007, Betancourt and Girolami 2013). For the <span class="math inline">\(\alpha\)</span>-parameters, this can be done by specifying <span class="math inline">\(\alpha_i = \mu_\alpha + \sigma_\alpha \alpha^*_i\)</span>, where <span class="math inline">\(\alpha^*_i \sim N(0,1)\)</span>. The tests below entail both a centered and non-centered version of the conjugate model. (It is tricky to do non-centering without breaking JAGS’ ability to exploit the conjugacy, so the non-centered JAGS version only separates out the common mean).</p>
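<p>The transformation itself is easy to verify numerically. Below is a minimal sketch with made-up hyperparameter values:</p>

```r
# Non-centered draws: alpha_i = mu_alpha + sigma_alpha * alpha_star_i, with
# alpha_star_i ~ N(0, 1), reproduce the target distribution N(mu_alpha, sigma_alpha^2).
set.seed(1)
mu_alpha <- 2       # illustrative value
sigma_alpha <- 1.5  # illustrative value
alpha_star <- rnorm(1e5)
alpha <- mu_alpha + sigma_alpha * alpha_star
c(mean(alpha), sd(alpha))  # approximately 2 and 1.5
```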
<!--#### Hierarchical, weakly informative, partly conjugate, non-centered model
Same as partly conjugate, but non-centered alpha and beta.-->
</div>
<div id="hierarchical-weakly-informative-partly-conjugate-centered-and-non-centered" class="section level4">
<h4>Hierarchical, weakly informative, partly conjugate (centered and non-centered)</h4>
<!--The next model is very similar, but only partly conjugate, as the t-distributions are replaced with normal distributions.-->
<p>The next model is very similar, but only partly conjugate, as the inverse Gamma priors on the variances are replaced with gamma priors on the standard deviations. The new priors are set to be approximately as informative as the inverse Gamma’s (although they are necessarily different). The model is then:</p>
<p><span class="math display">\[\alpha_i \sim N(\mu_\alpha,\sigma_\alpha^2), ~~~ \beta_i \sim N(\mu_\beta,\sigma_\beta^2)\]</span></p>
<p><span class="math display">\[\mu_\alpha,\mu_\beta \sim N(0,3^2), ~~~ \sigma_\alpha,\sigma_\beta,\sigma \sim \text{Gamma}(2,.5)\]</span></p>
<p>(Note that the last parameter in the gamma is a rate parameter, i.e. inverse scale).</p>
</div>
<div id="hierarchical-weakly-informative-non-conjugate-centered" class="section level4">
<h4>Hierarchical, weakly informative, non-conjugate, centered</h4>
<!--The next is a non-conjugate partially pooled model, with weakly informative priors. To avoid conjugacy, we use student-t distributions with 100 degrees of freedom for the means, and Gamma-distributions for the standard deviations. -->
<p>The next model is completely non-conjugate, replacing the normal distributions with Student-t distributions having 100 degrees of freedom. These are practically identical to the normal distribution, but should prevent JAGS from using Gibbs sampling.</p>
<p><span class="math display">\[\alpha_i \sim \text{Student-t}(100,\mu_\alpha,\sigma_\alpha^2), ~~~ \beta_i \sim \text{Student-t}(100,\mu_\beta,\sigma_\beta^2)\]</span></p>
<p><span class="math display">\[\mu_\alpha,\mu_\beta \sim \text{Student-t}(100,0,3^2), ~~~ \sigma_\alpha,\sigma_\beta,\sigma \sim \text{Gamma}(2,.5)\]</span></p>
</div>
<div id="unpooled-with-uniform-priors" class="section level4">
<h4>Unpooled with uniform priors</h4>
<!--The simplest model entails no pooling and has uniform priors on all parameters (means and SD's): -->
<p>The final model is an unpooled model with uniform priors on all parameters. This model does not make too much sense here (as we know the individual parameters are drawn from a common distribution), but it may serve to illustrate performance in more difficult circumstances. The model is potentially challenging in that there is no conjugacy, no pooling of information across units, and no other help from the priors, which results in larger posterior variances (and potentially covariances): <span class="math display">\[\alpha_i,\beta_i \sim U(-100,100), ~~~ \sigma \sim U(0,100)\]</span></p>
<p>This model only takes a few lines of Stan code, as we get uniform priors by not specifying anything (except relevant limits).</p>
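<p>For reference, a minimal Stan sketch of such an unpooled model might look as follows (variable names are illustrative, not taken from the original code; the uniform priors arise implicitly from the declared bounds):</p>

```stan
data {
  int<lower=1> n;   // individuals
  int<lower=1> m;   // trials per individual
  matrix[n, m] x;   // covariate
  matrix[n, m] y;   // outcome
}
parameters {
  vector<lower=-100, upper=100>[n] alpha;  // implies alpha_i ~ U(-100, 100)
  vector<lower=-100, upper=100>[n] beta;   // implies beta_i ~ U(-100, 100)
  real<lower=0, upper=100> sigma;          // implies sigma ~ U(0, 100)
}
model {
  for (i in 1:n)
    y[i] ~ normal(alpha[i] + beta[i] * x[i], sigma);
}
```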
</div>
</div>
<div id="approach" class="section level3">
<h3>Approach</h3>
<p>A key question is how to compare the samplers in a way that is both fair and relevant for actual applications. As MCMC draws tend to be auto-correlated, the nominal number of simulation draws is less interesting than the number of effective draws (adjusting for auto-correlation). Furthermore, if we decide on a lower limit for how many effective draws we require, we will most likely want to apply this requirement to all parameters. In other words, the key question is how efficiently we are sampling the parameter with the fewest effective draws (after the chains have found the typical set). Finally, the most relevant measure of efficiency is probably time per effective draw.</p>
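<p>The resulting efficiency measure is simple to compute. Here is a sketch with made-up numbers (in practice the effective sample sizes would be computed from the fitted chains):</p>

```r
# Seconds per effective draw, taken for the worst-mixing parameter.
ess <- c(alpha = 950, beta = 420, sigma = 1300)  # hypothetical effective sample sizes
run_time <- 12.5                                 # seconds, including warm-up
run_time / min(ess)                              # efficiency measure used in the tests
```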
<!--mention Ac - > n_eff. after convergence.-->
<p>However, it is not clear what the most relevant specification of time consumption is either. Stan needs to compile models before running, while JAGS does not. Fortunately, Stan models only need to be compiled once, and compilation usually takes a minute or two. If you are using an already-compiled model, there is no loss of time. Similarly, if your model is sufficiently complicated to require a long run, compilation time is essentially irrelevant. There is really only one situation in which compilation time counts, and that is when you are developing a new model, testing it on a small dataset to make things faster, but still having to re-compile for every little change you make. Given the points above, however, I think it is most reasonable to compare run-times without including compilation time.</p>
<p>Another issue is whether to include adaptation and warm-up time. I think it is reasonable to include this, as these phases are typically required for any analysis. If a sampler takes very long to converge to the target distribution, this is a relevant cost of using it. Including these phases may, however, introduce some inaccuracy: As we take more post-warm-up draws, the share of time spent on warm-up decreases. This makes the results depend on the length of the post-warm-up run, which is clearly not ideal, but I am still sticking to this approach for now. (I suppose the best approach might be to set the run-length for each model aiming for a specific minimum number of effective draws for all parameters, but this might take extremely long in the slowest instances.)</p>
<p>The models have been fit to 200 datasets for each of the two posterior-correlation scenarios. The datasets have been kept fairly small: <span class="math inline">\(n = 250\)</span> and <span class="math inline">\(m=5\)</span>. The patterns reported below seem to remain when I scale up the data, but further testing may be called for (as the speed of JAGS may suffer more as the scale increases). For both programs, I use three of the four available processor cores, and run one parallel chain on each core. To calculate split Rhats and the number of effective draws, I am using the specifications from Gelman et al. 2014 (BDA3). Setting the length of the warm-up phase is a bit tricky, as a longer warm-up is required for the more challenging combinations of data and models. By trial and error, I have set specific warm-up lengths for each model in each scenario so that the models typically converge. In the most challenging situations, the models will sometimes still not fully converge in time, and in these instances the results are not reported. (This may slightly favor models that are performing poorly, but it should not change the general patterns.)</p>
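<p>As a rough illustration of the convergence diagnostic (a sketch of a BDA3-style split-<span class="math inline">\(\hat{R}\)</span> computation, not the exact code used for this study):</p>

```r
# Split each chain in half, then compare between- and within-sequence variance.
split.rhat <- function(chains) {  # chains: iterations x chains matrix
  half <- floor(nrow(chains) / 2)
  s <- cbind(chains[1:half, ], chains[(half + 1):(2 * half), ])
  n <- nrow(s)
  B <- n * var(colMeans(s))            # between-sequence variance
  W <- mean(apply(s, 2, var))          # within-sequence variance
  sqrt(((n - 1) / n * W + B / n) / W)  # split-Rhat
}
set.seed(1)
draws <- matrix(rnorm(4000), ncol = 4)  # four well-mixed "chains"
split.rhat(draws)  # close to 1
```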
</div>
<div id="results" class="section level3">
<h3>Results</h3>
<p>The results are shown on a logged scale in the boxplot below. The scenario with weak posterior correlations offers ideal conditions, and when JAGS is given a fully conjugate hierarchical model, it is sampling very fast here – showing the fastest performance in this test. Stan is also fast in this situation, but does take about three times as long as JAGS. The non-centered parameterization appears to slow down both samplers slightly, suggesting it adds to the computational load without much gain. When we move to the partly conjugate models, JAGS gets notably slower, and about as fast as Stan, whether we use the centered or non-centered specification.</p>
<p>Moving from a partly conjugate to completely non-conjugate model (replacing normals with t-distributions), makes JAGS about three times slower still, while Stan retains its performance, being several times faster. Turning to the unpooled models with uniform priors, Stan gets slightly faster, while JAGS gets a bit slower – now sampling nearly five times slower than Stan.</p>
<p>As the posterior correlations increase, the patterns are similar, but with some notable changes: JAGS gets comparatively slower for the fully conjugate models, but still delivers some of the fastest performance. It is now on par with Stan’s performance for the non-centered specification. If we look at the partly conjugate models, non-centering speeds up Stan seven-fold in this scenario, while JAGS sees no such benefit. Finally, for the unpooled model with uniform priors, Stan is pretty fast, while JAGS is very slow – in fact, JAGS is 65 times slower than Stan here.</p>
<p>In sum, JAGS’s performance varies a lot: It is about 765 times faster using a fully conjugate, centered model in the easy scenario than using an unpooled model in the harder scenario. For Stan, the equivalent factor is 4.</p>
<p><img src="https://www.boelstad.net/post/Stan_vs_JAGS_speed_files/figure-html/unnamed-chunk-1-1.svg" width="705.6" style="display: block; margin: auto auto auto 0;" /></p>
<p>Another question is how well the samplers are exploring the target distribution. The paper on NUTS by Hoffman and Gelman (2014) shows that NUTS can perform much better than Gibbs sampling and Random Walk Metropolis in exploring difficult posteriors with correlated parameters. At the most extreme, the authors use a highly correlated 250-dimensional distribution, whereas the data used here mainly entail pairwise parameter correlations (between pairs of <span class="math inline">\(\alpha\)</span>’s and <span class="math inline">\(\beta\)</span>’s, and between their hyperparameters in the partially pooled models). The plots below show 1,000 randomly selected draws of <span class="math inline">\(\alpha_1\)</span> and <span class="math inline">\(\beta_1\)</span> from a single trial using unpooled models with uniform priors. Visually, the samples produced by JAGS and Stan are very similar here. It is just that JAGS takes much longer to converge and produce effective draws.</p>
<p><img src="https://www.boelstad.net/post/Stan_vs_JAGS_speed_files/figure-html/unnamed-chunk-2-1.svg" width="700.8" style="display: block; margin: auto auto auto 0;" /></p>
</div>
<div id="final-notes" class="section level3">
<h3>Final notes</h3>
<p>A key lesson to draw from these results is that model specification matters. For instance, a non-centered specification can be of great help to Stan, and when this is the case, there is no reason not to go this route. For JAGS, it is obviously a good idea to use conjugate priors wherever possible, and thus avoid turning JAGS into Just Another Metropolis Sampler. Another point to note is that centering the predictor <span class="math inline">\(x\)</span> would have turned the scenario with high posterior correlations into one with low correlations – we could actually have avoided the posterior correlations altogether.</p>
<p>In addition to efficiency, there are other considerations that may be relevant for choosing between JAGS and Stan. Stan has a large development team and a very active user group, which makes it easy to solicit expert advice. Stan is also not reliant on a specific type of priors (conjugacy) and can handle a wide range of models. Furthermore, Stan is likely to explore complicated posterior distributions better, and thus give more accurate results (see references below). While JAGS can be very fast for certain well-suited problems, Stan may offer the best all-round performance, especially if you are analyzing large datasets using complicated models.</p>
<!--If you find any mistakes, or have suggestions for improving this post, please use the comment section at the bottom.-->
</div>
<div id="how-to-cite" class="section level3">
<h3>How to cite</h3>
<p>
This material can be cited as follows:
</p>
<div class="Bib">
<p>
Bølstad, Jørgen. 2019. “How Efficient is Stan Compared to JAGS? Conjugacy, Pooling, Centering, and Posterior Correlations”. <i>Playing with Numbers: Notes on Bayesian Statistics</i>. www.boelstad.net/post/stan_vs_jags_speed/.
</p>
</div>
<p>Here is a BibTeX-formatted reference, which should work when using the natbib and hyperref packages together with a suitable LaTeX style:</p>
<pre class="text"><code>@Misc{boelstad:2019a,
Author = {B{\o}lstad, J{\o}rgen},
Title = {How Efficient is Stan Compared to JAGS? Conjugacy, Pooling, Centering, and Posterior Correlations},
Howpublished = {Playing with Numbers: Notes on Bayesian Statistics},
Url = {\url{http://www.boelstad.net/post/stan_vs_jags_speed/}},
Year = {2019},
Month = {January 2}}</code></pre>
</div>
<div id="further-reading" class="section level3">
<h3>Further reading</h3>
<div class="Bib">
<p>
Betancourt, Michael, and Mark Girolami. 2013. “<a href="https://arxiv.org/abs/1312.0906">Hamiltonian Monte Carlo for hierarchical models</a>”. arXiv:1312.0906.
</p>
<p>
Betancourt, Michael. 2017. “<a href="https://arxiv.org/abs/1701.02434">A conceptual introduction to Hamiltonian Monte Carlo</a>”. arXiv:1701.02434v2.
</p>
<p>
Gelman, Andrew, John B Carlin, Hal S Stern, David B Dunson, Aki Vehtari and Donald B Rubin. 2014. <i>Bayesian Data Analysis.</i> 3rd ed. London: Chapman & Hall/CRC Press.
</p>
<p>
Hoffman, Matthew D., and Andrew Gelman. 2014. “<a href="http://www.stat.columbia.edu/~gelman/research/published/nuts.pdf">The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo</a>”. <i>Journal of Machine Learning Research</i>, 15(1), pp.1593-1623.
</p>
<p>
Neal, Radford M. 2011. “<a href="http://www.mcmchandbook.net/HandbookChapter5.pdf">MCMC Using Hamiltonian Dynamics</a>”. In Steve Brooks, Andrew Gelman, Galin L. Jones, Xiao-Li Meng. <i>Handbook of Markov Chain Monte Carlo</i>. London: Chapman & Hall/CRC Press.
</p>
<p>
Papaspiliopoulos, Omiros, Gareth O. Roberts, and Martin Sköld. 2007. “<a href="https://arxiv.org/pdf/0708.3797.pdf">A general framework for the parametrization of hierarchical models</a>”. <i>Statistical Science</i>, 59-73.
</p>
</div>
</div>
Bayesian Hierarchical Modeling
https://www.boelstad.net/post/bayesian_hierarchical_modeling/
Wed, 08 Aug 2018
<p>I used to think so-called multilevel models were a little boring. I was interested in causal inference, and the people using these models did not seem to have better causal identification strategies than those running plain old regressions. I have gradually come to change my mind on these models, although it is not because I think they solve challenges of causal identification. It is rather because I think a large share of our data can be thought of as hierarchical, and that proper modeling helps us make the most of such data.</p>
<p>This post illustrates the benefits of Bayesian hierarchical modeling, by expanding the Metropolis sampler from my previous post to deal with more parameters. I will refer to hierarchical rather than multilevel models, as this highlights the use of hierarchical priors. The key advantage of the hierarchical approach is that it uses information across groups of observations to reduce our lower-level parameters’ sensitivity to noise. To demonstrate how this improves our estimates, this post compares a hierarchical model to a so-called unpooled model. (If you are unfamiliar with the basics of Bayesian inference, my <a href="http://www.boelstad.net/post/bayesian_statistics_introduction/">earlier posts</a> may be a better start.)</p>
<div id="hierarchical-data" class="section level3">
<h3>Hierarchical data</h3>
<p>Hierarchical modeling is relevant when we have observations that are somehow grouped. It could be that we observe several individuals within different states, or that we have multiple observations per individual. In standard econometric terms, this will often give rise to autocorrelation and heteroskedasticity, calling for estimation of group-specific means and variances. In a regression setting, one might additionally estimate how coefficients vary between groups.</p>
<p>Here, we will create a small hierarchical dataset, containing 5 noisy observations for each of 75 individuals (i.e. 375 observations in total). For a given individual <span class="math inline">\(i \in \{1, \ldots, n\}\)</span> and trial number <span class="math inline">\(j \in \{1, \ldots, m\}\)</span> we will denote the observed outcome <span class="math inline">\(y_{ij}\)</span>. In addition to <span class="math inline">\(y\)</span>, our data will contain the covariate <span class="math inline">\(x\)</span>, which also varies by <span class="math inline">\(i\)</span> and <span class="math inline">\(j\)</span>. The outcome <span class="math inline">\(y\)</span> will have normally distributed (and homoskedastic) errors, and depend linearly on <span class="math inline">\(x\)</span>, according to the following process: <span class="math inline">\(y_{ij} \sim N(\alpha_i + \beta_i x_{ij},\sigma^2)\)</span>. A key point here is that each individual will have its own true intercept, <span class="math inline">\(\alpha_i\)</span>, as well as its own coefficient on <span class="math inline">\(x\)</span>, <span class="math inline">\(\beta_i\)</span>. These individual-specific parameters are in turn drawn from their own distributions: <span class="math inline">\(\alpha \sim N(2,1)\)</span>, and <span class="math inline">\(\beta \sim N(-2,1)\)</span>.</p>
<pre class="r"><code>set.seed(1)
n = 75  # Number of individuals
m = 5   # Number of observations per individual
alpha <- rnorm(n, 2, 1)   # True individual-specific intercepts
beta <- rnorm(n, -2, 1)   # True individual-specific coefficients
x <- matrix(rnorm(n * m, 0, 1), ncol = m)
y <- matrix(NA, n, m)
for (i in 1:n) {
  for (j in 1:m) {
    y[i, j] <- rnorm(1, alpha[i] + beta[i] * x[i, j], 2)
  }
}</code></pre>
</div>
<div id="the-likelihood" class="section level3">
<h3>The likelihood</h3>
<p>We will use a likelihood function that accurately reflects the data-generating process. In other words, we have:</p>
<p><span class="math display">\[p(y|x,\alpha,\beta,\sigma) = \prod^n_{i=1}\prod^m_{j=1} N(y_{ij}|\alpha_i + \beta_i x_{ij}, \sigma^2)\]</span></p>
<p>However, as we increase the number of observations, some of these products will get exceedingly small, so we will need to work with log-densities to prevent computational underflows. Taking the log of the densities means we will sum the point-wise log-probabilities: <span class="math display">\[\text{ln}(p(y|x,\alpha,\beta,\sigma)) = \sum^n_{i=1}\sum^m_{j=1} \text{ln}(N(y_{ij}|\alpha_i + \beta_i x_{ij}, \sigma^2))\]</span></p>
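<p>The underflow problem is easy to demonstrate with a small sketch (unrelated to the actual dataset):</p>

```r
# The product of many small densities underflows to zero in double precision,
# while the sum of the log densities remains finite.
set.seed(1)
z <- rnorm(5000)
prod(dnorm(z, mean = 10, sd = 1))             # 0: numerical underflow
sum(dnorm(z, mean = 10, sd = 1, log = TRUE))  # very negative, but finite
```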
<p>In order to keep track of the parameters when we pass their values between functions, we will pass them as a named list containing both single parameters and vectors of parameters. We will also use <code>apply</code> for somewhat efficient looping over observations:</p>
<pre class="r"><code>log.likelihood <- function(y, x, pars) {
  # Loop over each row in the data and its corresponding alpha- and beta-parameters:
  sum(log(apply(cbind(y, x, pars$alpha, pars$beta), 1, function(mat_i) {
    # Loop over each observation for this row (i.e. individual):
    apply(cbind(mat_i[1:m], mat_i[(m + 1):(2 * m)]), 1, function(dat_ij) {
      # Compute the probability density for observation ij:
      dnorm(dat_ij[1], mat_i[(2 * m) + 1] + mat_i[(2 * m) + 2] * dat_ij[2], pars$sigma)
    })
  })))
}</code></pre>
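<p>For reference, the same quantity can be computed without explicit loops, letting R recycle the length-<span class="math inline">\(n\)</span> parameter vectors down the rows of the matrices (a sketch that should be numerically equivalent to the function above):</p>

```r
# Vectorized log-likelihood: dnorm() with log = TRUE, summed over all n x m cells.
# pars$alpha and pars$beta (length n) are recycled column-wise, matching row i.
log.likelihood.vec <- function(y, x, pars) {
  sum(dnorm(y, pars$alpha + pars$beta * x, pars$sigma, log = TRUE))
}
```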
</div>
<div id="priors-for-hierarchical-data" class="section level3">
<h3>Priors for hierarchical data</h3>
<p>The key question is how we model the <span class="math inline">\(\alpha\)</span>- and <span class="math inline">\(\beta\)</span>-parameters. A simple option is to do <em>complete pooling</em>, assuming the <span class="math inline">\(\alpha\)</span>’s and <span class="math inline">\(\beta\)</span>’s are equal for all individuals, and estimating a single <span class="math inline">\(\alpha\)</span> and a single <span class="math inline">\(\beta\)</span>. This is similar to running a single regression on the whole dataset, which is not particularly interesting, so we will leave this option aside here.</p>
<div id="no-pooling" class="section level4">
<h4>No pooling</h4>
<p>At the other extreme, we can give each individual their own <span class="math inline">\(\alpha\)</span>- and <span class="math inline">\(\beta\)</span>-parameter and place wide, flat priors on each: <span class="math inline">\(\alpha_i \sim U(-100,100)\)</span>, and <span class="math inline">\(\beta_i \sim U(-100,100)\)</span>. Models of this kind are often referred to as <em>no pooling</em> models, and in our case, this is similar to running a separate regression for each individual. If we also add a flat prior on the variance, we get the following prior distribution: <!-- -- which we will do below to save a tiny bit of time.--></p>
<p><span class="math display">\[\begin{aligned}p(\alpha,\beta,\sigma) = &\prod^n_{i=1}\big[U(\alpha_i|-100,100) \times U(\beta_i|-100,100)\big] \times U(\sigma|0,10)\end{aligned}\]</span></p>
<p>However, we will be using the log-prior:</p>
<p><span class="math display">\[\begin{aligned}\text{ln}(p(\alpha,\beta,\sigma)) = &\sum^n_{i=1}\big[\text{ln}(U(\alpha_i|-100,100)) + \text{ln}(U(\beta_i|-100,100))\big] + \text{ln}(U(\sigma|0,10))\end{aligned}\]</span> In R, we have:</p>
<pre class="r"><code>log.prior.1 <- function(pars) {
sum(log(dunif(pars$alpha, -100, 100))) +
sum(log(dunif(pars$beta, -100, 100))) +
log(dunif(pars$sigma, 0, 10)) }</code></pre>
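<p>A small practical note: R’s density functions take a <code>log = TRUE</code> argument, which computes the log-density directly and is numerically safer than taking <code>log()</code> of a density that may underflow to zero. An equivalent version of the function above would be:</p>
<pre class="r"><code>log.prior.1b <- function(pars) {
sum(dunif(pars$alpha, -100, 100, log = TRUE)) +
sum(dunif(pars$beta, -100, 100, log = TRUE)) +
dunif(pars$sigma, 0, 10, log = TRUE) }</code></pre>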
</div>
<div id="partial-pooling" class="section level4">
<h4>Partial pooling</h4>
<p>Between the extremes of complete pooling and no pooling, we have the option of <em>partial pooling</em>, which is typically what is meant by hierarchical modeling. Here, we start from the assumption that the parameters come from a common population distribution – an assumption that is usually natural for hierarchical data. While we could specify a fixed prior representing our beliefs about this population distribution, there is a better option. If we choose a parametric density function – assuming, for instance, that the population follows a normal distribution – then we can let the parameters of that distribution be estimated from the data. Instead of fixing these <em>hyperparameters</em>, we give them their own, higher-level priors – <em>hyperpriors</em>. An interesting point about the hyperparameters is that they do not enter the likelihood function – they influence the posterior only through the prior on the lower-level parameters.</p>
<p>In our case, we will assume the <span class="math inline">\(\alpha\)</span>’s and <span class="math inline">\(\beta\)</span>’s come from two distinct normal distributions: <span class="math inline">\(\alpha_i \sim N(\mu_\alpha,\sigma_\alpha^2)\)</span>, and: <span class="math inline">\(\beta_i \sim N(\mu_\beta,\sigma_\beta^2)\)</span>. We further add hyperpriors on the population parameters: <span class="math inline">\(\mu_\alpha, \mu_\beta \sim N(0,3^2)\)</span>, and: <span class="math inline">\(\sigma_\alpha, \sigma_\beta \sim U(0,10)\)</span>. For the variance, we use the same flat prior as above: <span class="math inline">\(\sigma \sim U(0,10)\)</span>. Our joint prior distribution is then:</p>
<p><span class="math display">\[\begin{aligned}p(\alpha,\beta,\mu_\alpha,\mu_\beta,\sigma_\alpha,\sigma_\beta,\sigma) = &\prod^n_{i=1}\big[N(\alpha_i|\mu_\alpha,\sigma_\alpha^2) \times N(\beta_i|\mu_\beta,\sigma_\beta^2)\big]~\times \cr[3ex]
&N(\mu_\alpha|0,3^2) \times N(\mu_\beta|0,3^2)~\times \cr[3ex]
&U(\sigma_\alpha|0,10) \times U(\sigma_\beta|0,10) \times U(\sigma|0,10)\end{aligned}\]</span></p>
<p>And the log-prior is:</p>
<p><span class="math display">\[\begin{aligned}\text{ln}(p(\alpha,\beta,\mu_\alpha,\mu_\beta,\sigma_\alpha,\sigma_\beta,\sigma)) = &\sum^n_{i=1}\big[\text{ln}(N(\alpha_i|\mu_\alpha,\sigma_\alpha^2))+\text{ln}(N(\beta_i|\mu_\beta,\sigma_\beta^2))\big]~+\cr[3ex]
&\text{ln}(N(\mu_\alpha|0,3^2))+\text{ln}(N(\mu_\beta|0,3^2))~+\cr[3ex]
&\text{ln}(U(\sigma_\alpha|0,10))+\text{ln}(U(\sigma_\beta|0,10))+\text{ln}(U(\sigma|0,10))\end{aligned}\]</span></p>
<p>Writing this up as a function in R, we have:</p>
<pre class="r"><code>log.prior.2 <- function(pars) {
sum(log(dnorm(pars$alpha, pars$mu.alpha, pars$sigma.alpha))) +
sum(log(dnorm(pars$beta, pars$mu.beta, pars$sigma.beta))) +
log(dnorm(pars$mu.alpha, 0, 3)) + log(dnorm(pars$mu.beta, 0, 3)) +
log(dunif(pars$sigma.alpha, 0, 10)) + log(dunif(pars$sigma.beta, 0, 10)) +
log(dunif(pars$sigma, 0, 10)) }</code></pre>
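<p>As a quick sanity check, we can evaluate this function for a plausible parameter list (the names match those used for the starting values below). The result should be a finite log-density, while <code>-Inf</code> would indicate a parameter outside the support of its prior:</p>
<pre class="r"><code>test.pars <- list(alpha = rep(2, 5), beta = rep(-2, 5),
mu.alpha = 2, mu.beta = -2,
sigma.alpha = 1, sigma.beta = 1, sigma = 2)
log.prior.2(test.pars) # finite, since all values lie within their priors' support</code></pre>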
</div>
</div>
<div id="posterior-distributions" class="section level3">
<h3>Posterior distributions</h3>
<p>We will be working with the unnormalized log-posterior density, which is the sum of the log-likelihood and the log-prior. For the unpooled model, we have:</p>
<p><span class="math display">\[\text{ln}(p(\alpha,\beta,\sigma|y,x) \propto \text{ln}(p(y|x,\alpha,\beta,\sigma))+ \text{ln}(p(\alpha,\beta,\sigma)) \]</span></p>
<pre class="r"><code>log.unnorm.posterior.1 <- function(y, x, pars) {
log.likelihood(y, x, pars) + log.prior.1(pars) }</code></pre>
<p>For the hierarchical model, we have:</p>
<p><span class="math display">\[\text{ln}(p(\alpha,\beta,\mu_\alpha,\mu_\beta,\sigma_\alpha,\sigma_\beta,\sigma|y,x) \propto \text{ln}(p(y|x,\alpha,\beta,\sigma))+ \text{ln}(p(\alpha,\beta,\mu_\alpha,\mu_\beta,\sigma_\alpha,\sigma_\beta,\sigma)) \]</span></p>
<pre class="r"><code>log.unnorm.posterior.2 <- function(y, x, pars) {
log.likelihood(y, x, pars) + log.prior.2(pars) }</code></pre>
</div>
<div id="setting-up-the-sampler" class="section level3">
<h3>Setting up the sampler</h3>
<p>To estimate our parameters we will use a slightly modified version of the Metropolis sampler that was introduced in the previous post. Most importantly, we will make the code a bit more general, so that it can be applied to an arbitrary number of parameters. Our proposal function now takes a list of parameters – or vectors of parameters – and returns a list with the same dimensions:</p>
<pre class="r"><code>proposals <- function(pars, proposal.sd) {
sapply(pars, function(x) { sapply(x, function(x) { x + rnorm(1, 0, proposal.sd) }) }) }</code></pre>
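<p>To verify that the structure is preserved, we can pass the function a small parameter list with arbitrary illustrative values:</p>
<pre class="r"><code>example.pars <- list(alpha = c(1, 2, 3), beta = c(-1, -2, -3), sigma = 2)
str(proposals(example.pars, proposal.sd = .1))
# A list with the same names and lengths, each value perturbed by N(0, .1^2) noise</code></pre>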
<p>To use log-probabilities as long as possible, we now calculate the acceptance ratio as a difference in log-probabilities before exponentiating. We also add a counter of accepted proposals to calculate the sampler’s average acceptance rate. The sampler code now looks as follows:</p>
<pre class="r"><code>metropolis <- function(n.it,proposal.sd, y, x, log.unnorm.posterior, starting.values) {
draws <- vector()
n.accept <- 0
current.draw <- starting.values
for (t in 1:n.it) {
proposal <- proposals(current.draw, proposal.sd)
if (any(c(proposal$sigma.alpha, proposal$sigma.beta, proposal$sigma) <= 0)) {
acceptance.ratio <- 0 }
else {acceptance.ratio <- exp(log.unnorm.posterior(y, x, proposal) -
log.unnorm.posterior(y, x, current.draw)) }
random.threshold <- runif(1, 0, 1)
if (random.threshold <= acceptance.ratio) {
current.draw <- proposal
n.accept <- n.accept + 1 }
draws <- rbind(draws, unlist(current.draw)) }
return(list(draws = draws, accept.rate = n.accept / n.it)) }</code></pre>
</div>
<div id="obtaining-estimates" class="section level3">
<h3>Obtaining estimates</h3>
<p>Now, let us obtain 10,000 draws for each of our two specifications (which may take a while):</p>
<pre class="r"><code>n.it <- 10000
# Unpooled model:
starting.values.1 <- list(alpha=rnorm(n, 2, .75), beta = rnorm(n, -2, .75),
sigma = runif(1, 1.75, 2.25))
results.1 <- metropolis(n.it, proposal.sd = .125, y, x,
log.unnorm.posterior.1, starting.values.1)
# Hierarchical model:
starting.values.2 <- list(alpha = rnorm(n, 2, .75), beta = rnorm(n, -2, .75),
mu.alpha = rnorm(1, 2, .25), mu.beta = rnorm(1, -2, .25),
sigma.alpha = runif(1, .75, 1.25),
sigma.beta = runif(1, .75, 1.25),
sigma = runif(1, 1.75, 2.25))
results.2 <- metropolis(n.it, proposal.sd = .1, y, x,
log.unnorm.posterior.2, starting.values.2)</code></pre>
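<p>Before moving on, it is worth checking the samplers’ acceptance rates. For a random-walk Metropolis sampler, rates roughly between .1 and .4 are usually workable (with about .23 often cited as optimal in higher dimensions); if a rate falls far outside this range, <code>proposal.sd</code> should be adjusted – larger proposals lower the rate, smaller ones raise it:</p>
<pre class="r"><code>results.1$accept.rate # acceptance rate, unpooled model (about .31 in the run shown here)
results.2$accept.rate # acceptance rate, hierarchical model (about .10 in the run shown here)</code></pre>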
<p>We will drop the first 2000 draws to allow the sampler to find its equilibrium distribution. (I am otherwise leaving the issue of convergence aside here, so you will just have to trust me.)</p>
<pre class="r"><code>warmup <- n.it / 5
draws.subset.1 <- results.1$draws[(warmup + 1):n.it, ]
draws.subset.2 <- results.2$draws[(warmup + 1):n.it, ]</code></pre>
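<p>While the issue of convergence is left aside here, a quick visual check is to look at traceplots of a few parameters after warmup – a chain that has reached its equilibrium distribution should look like stationary noise around a stable level:</p>
<pre class="r"><code># Traceplots for the hierarchical model's hyperparameters:
par(mfrow = c(1, 2))
plot(draws.subset.2[, "mu.alpha"], type = "l", xlab = "Iteration", ylab = "mu.alpha")
plot(draws.subset.2[, "mu.beta"], type = "l", xlab = "Iteration", ylab = "mu.beta")</code></pre>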
<p>Finally, we obtain point estimates from the models by calculating posterior means:</p>
<pre class="r"><code>unpooled.est <- colMeans(draws.subset.1)
hierarchical.est <- colMeans(draws.subset.2)</code></pre>
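<p>Because we have full sets of posterior draws, interval estimates come almost for free. For instance, central 95% credible intervals can be obtained from the posterior percentiles:</p>
<pre class="r"><code># 95% credible intervals (2.5th and 97.5th posterior percentiles):
hierarchical.ci <- apply(draws.subset.2, 2, quantile, probs = c(.025, .975))
hierarchical.ci[, "mu.alpha"] # interval for the population mean of the alphas</code></pre>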
</div>
<div id="comparing-the-models" class="section level3">
<h3>Comparing the models</h3>
<p>Asymptotically, the unpooled and the hierarchical model will yield the same result: with an infinite number of observations per individual, the likelihood will dominate (as long as the prior does not rule out the true parameter values). However, the typical situation for hierarchical analysis is one where we have few observations per cluster (i.e. per individual in our case), and then these modeling choices matter. To illustrate the differences, the figure below plots the posterior means for the <span class="math inline">\(\alpha\)</span>’s and <span class="math inline">\(\beta\)</span>’s for each model. The estimates are shown in dark blue, while the true values are in lighter, transparent blue:</p>
<p><img src="https://www.boelstad.net/post/Bayesian_hierarchical_modeling_files/figure-html/plotestimates-1.svg" width="705.6" style="display: block; margin: auto auto auto 0;" /></p>
<p>One might think the hierarchical model has more informative priors on the lower-level parameters, but this would not be entirely correct. The uniform priors of the unpooled model are not “uninformative” – in fact, they attribute substantial prior probability to areas of the parameter space that we do not truly consider plausible. In other words, these priors can be seen as both informative and implausible. The practical consequence of the flat priors in the unpooled model is that nothing pulls the estimates toward each other. In frequentist terms, these estimates are unbiased, but have high sampling variance – they are very sensitive to noise, as a few large errors in the same direction are allowed to pull the estimates into implausible regions of the parameter space.</p>
<p>In contrast, the hierarchical model uses information from all individuals to estimate the population distribution from which the lower-level parameters are drawn. Compared to the unpooled model, this prior structure pulls the estimates towards the population distribution – an effect often referred to as <em>shrinkage</em>. This yields estimates that are considerably less sensitive to noise, as we require more observations and a narrower likelihood to pull the posterior probabilities into unusual regions of the parameter space. Accordingly, the plot shows hierarchical estimates that are a lot more concentrated than the unpooled ones.</p>
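<p>The shrinkage effect is easy to quantify: the spread of the individual-level estimates should be clearly smaller under the hierarchical model. Using the same layout of the estimate vectors as in the RMSE calculations below (the first <code>n</code> elements hold the <code>alpha</code>-estimates):</p>
<pre class="r"><code># Spread of the individual-level alpha estimates under each model:
sd(unpooled.est[1:n]) # wider spread
sd(hierarchical.est[1:n]) # smaller spread: pulled toward the population mean</code></pre>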
<p>Is one model better? If our sole criterion is for the lower-level parameters to be unbiased, then we would prefer the unpooled model. But this would typically not be a good choice, as it ignores the sampling variance and thus sensitivity to noise. A straightforward way to assess bias and variance together is to consider mean squared errors (MSE) or their root (RMSE). Here, it is useful to do so across the sets of <span class="math inline">\(\alpha\)</span>- and <span class="math inline">\(\beta\)</span>-estimates. For a given set of <span class="math inline">\(\alpha\)</span>-estimates, this means: <span class="math display">\[\text{RMSE}(\hat\alpha) = \sqrt{\frac{1}{n}\sum_{i=1}^n (\hat\alpha_i - \alpha_i)^2}\]</span> The RMSEs for each model are shown in the upper-right corner of the plot above. The values are considerably lower for the hierarchical model, which means this model on average yields estimates that are closer to the true parameter values.</p>
<pre class="r"><code># The RMSEs can be calculated as follows:
# alpha.unpooled <- unpooled.est[1:n]
# sqrt(mean((alpha.unpooled-alpha)^2)) # RMSE, alpha, unpooled model
# alpha.hierarchical <- hierarchical.est[1:n]
# sqrt(mean((alpha.hierarchical - alpha)^2)) # RMSE, alpha, hierarchical model
# beta.unpooled <- unpooled.est[(n + 1):(2 * n)]
# sqrt(mean((beta.unpooled - beta)^2)) # RMSE, beta, unpooled model
# beta.hierarchical <- hierarchical.est[(n + 1):(2 * n)]
# sqrt(mean((beta.hierarchical - beta)^2)) # RMSE, beta, hierarchical model</code></pre>
<pre class="r"><code># For a basic version of the plot above run: par(mfrow=c(1,2)); plot(alpha.unpooled, beta.unpooled, type = "n", xlim = c(-5, 7), ylim = c(-7, 5)); for (i in 1:n) { arrows(alpha.unpooled, beta.unpooled, alpha[i], beta[i], col = "lightgray", length = 0) }; points(alpha, beta, col = "lightgray"); points(alpha.unpooled, beta.unpooled); plot(alpha.hierarchical, beta.hierarchical, type = "n", xlim=c(-5, 7), ylim = c(-7, 5)); for (i in 1:n) { arrows(alpha.hierarchical, beta.hierarchical, alpha[i], beta[i], col = "lightgray", length = 0) }; points(alpha, beta, col = "lightgray"); points(alpha.hierarchical, beta.hierarchical)</code></pre>
</div>
<div id="final-notes" class="section level3">
<h3>Final notes</h3>
<p>In this post, I have left aside the issues of MCMC convergence and how to sample efficiently. The Metropolis sampler was used because it is easy to implement manually – not because it is well-suited. For an actual analysis of this kind, it would make more sense to use <a href="http://www.mc-stan.org" target="_blank">NUTS</a> (see the next post). It could also be noted that while this post has focused on hierarchical modeling within a Bayesian framework – which is intuitive and straightforward – it is possible to achieve something similar within a frequentist framework. As Gelman <a href="http://andrewgelman.com/2018/03/24/economist-wrote-asking-make-sense-fit-bayesian-hierarchical-models-instead-frequentist-random-effects/" target="_blank">notes</a>, “anything Bayesian can be done non-Bayesianly” – it is just not clear why you would want to.</p>
</div>
<div id="how-to-cite" class="section level3">
<h3>How to cite</h3>
<p>
This material can be cited as follows:
</p>
<div class="Bib">
<p>
Bølstad, Jørgen. 2019. “The Benefits of Bayesian Hierarchical Modeling: Comparing Partially Pooled and Unpooled Models in R”. <i>Playing with Numbers: Notes on Bayesian Statistics</i>. www.boelstad.net/post/bayesian_hierarchical_modeling/.
</p>
</div>
<p>Here is a BibTeX-formatted reference, which should work when using the natbib and hyperref packages together with a suitable LaTeX style:</p>
<pre class="text"><code>@Misc{boelstad:2018c,
Author = {B{\o}lstad, J{\o}rgen},
Title = {The Benefits of Bayesian Hierarchical Modeling: Comparing Partially Pooled and Unpooled Models in R},
Howpublished = {Playing with Numbers: Notes on Bayesian Statistics},
Url = {\url{http://www.boelstad.net/post/bayesian_hierarchical_modeling/}},
Year = {2018},
Month = {August 8}}</code></pre>
</div>
<div id="further-reading" class="section level3">
<h3>Further reading</h3>
<div class="Bib">
<p>
Gelman, Andrew, John B Carlin, Hal S Stern, David B Dunson, Aki Vehtari and Donald B Rubin. 2014. <i>Bayesian Data Analysis.</i> 3rd ed. London: Chapman & Hall/CRC Press.
</p>
<p>
McElreath, Richard. 2016. <i>Statistical Rethinking: A Bayesian Course with Examples in R and Stan.</i> London: Chapman & Hall/CRC Press.
</p>
</div>
</div>