Setting the stage
1 Introduction
1.1 What does your theory say about your data?
1.2 What do your data say about your theory?
1.3 What do your parameters say about other things?
1.4 What does your expertise say about your parameters?
2 Getting started in Stan
2.1 Installation in R
2.2 The anatomy of a Stan program
2.2.1 Data block
2.2.2 Transformed data block
2.2.3 Parameters block
2.2.4 Model block
2.2.5 Generated quantities block
2.2.6 The final product
2.3 Estimating a model
2.4 Looking at the results
3 Probabilistic models of behavior
3.1 The problem with deterministic models
3.2 What is a probabilistic model?
3.3 Example dataset and model
3.4 Optimal choice plus an error
3.4.1 Estimating a model
3.5 Utility-based models
3.5.1 Estimating a model
3.5.2 Doing something with the estimates
4 Considerations for choosing a prior
4.1 Example model and experiment
4.2 Getting the support right
4.3 Eliciting reasonable priors
4.3.1 Parameter values and the prior pushforward check
4.3.2 Predictions and other derived quantities: The prior predictive check
4.4 Assessing the sampling performance of a prior
4.4.1 Does our model recover its parameters well?
4.4.2 Do we see any pathologies in the estimation process?
4.5 R code used for this chapter
Building blocks
5 Representative agent and participant-specific models
5.1 Participant-specific models
5.1.1 Example data and economic model
5.1.2 Going to the probabilistic model
5.1.3 A short side quest into canned estimation techniques
5.1.4 Assigning priors
5.1.5 Estimating the model for one participant
5.1.6 Estimating the model for all participants
5.1.7 But we could be learning more!
5.2 Actual representative agent models (pooled models)
6 Hierarchical models
6.1 A random sample of participants walks into your lab
6.2 The anatomy of a basic hierarchical model
6.3 Accounting for unobserved heterogeneity
6.3.1 The last time you will integrate the likelihood, probably
6.3.2 Data augmentation
6.4 A multivariate normal hierarchical model
6.4.1 Decomposing the variance-covariance matrix
6.4.2 Transformed parameters and normal distributions
6.5 Example: again with Bruhin, Fehr, and Schunk (2019)
6.5.1 No correlation between individual-level parameters
6.5.2 Correlation between individual-level parameters
7 Mixture models
7.1 A menu of models
7.2 Dichotomous and toolbox mixture models
7.3 Coding peculiarities
7.4 Example experiment: Andreoni and Vesterlund (2001)
7.4.1 As basic as it gets
7.4.2 Adding some heterogeneity
7.5 Some code used to estimate the models
8 Model evaluation
8.1 Example dataset and models
8.2 Model posterior probabilities
8.2.1 Implementation using bridge sampling and the bridgesampling library
8.3 Cross-validation
8.3.1 Expected Log Predictive Density (ELPD) and other measures of goodness of fit
8.3.2 1-round cross-validation
8.3.3 Leave-one-out cross-validation (LOO)
8.3.4 Approximate LOO
8.3.5 \(k\)-fold cross-validation
9 Speeding up your Stan code
9.1 Example dataset and model
9.2 A really slow way to estimate the model
9.3 Pre-computing things
9.4 Vectorization
9.5 Within-chain parallelization with reduce_sum()
9.6 Evaluating the implementations
9.6.1 Pre-computing and vectorization
9.6.2 Within-chain parallelization
9.7 R code to estimate models
9.7.1 Slow, pre-computed, and vectorized models
9.7.2 Parallelized model
Applications
10 Application: Experience-Weighted Attraction
10.1 The model at the individual level
10.2 Some computational and coding issues
10.3 Representative agent models
10.3.1 Prior calibration
10.3.2 The Stan model
10.3.3 Results
10.4 Hierarchical model
10.4.1 Prior calibration
10.4.2 The Stan model
10.4.3 Results
10.5 Some code used to estimate the models
10.5.1 Loading the data
10.5.2 Estimating the representative agent models
10.5.3 Estimating the hierarchical model
11 Application: Strategy Frequency Estimation
11.1 Simplifying the individual likelihood functions
11.2 Example experiment: Dal Bó and Fréchette (2011)
11.2.1 The SFEM with homogeneous trembles
11.2.2 Adding heterogeneous trembles and integrating the likelihood
11.3 R code to do these estimations
12 Application: Strategy frequency estimation with a mixed strategy
12.1 Example dataset and strategies
12.2 The likelihood function
12.3 Implementation in Stan
12.4 Results
12.5 R code used to estimate the models
13 Computing Quantal Response Equilibrium
13.1 Overview of quantal response equilibrium
13.2 Computing Quantal Response Equilibrium
13.2.1 Setting up the problem
13.2.2 A predictor-corrector algorithm
13.2.3 Initial conditions
13.2.4 Algorithm tuning
13.3 The predictor-corrector algorithm in R
13.4 Some example games
13.4.1 Generalized matching pennies (Ochs 1995)
13.4.2 Stag hunt
13.4.3 \(n\)-player Volunteer’s Dilemma imposing symmetric strategies
14 Application: Quantal Response Equilibrium and the Volunteer’s Dilemma (Goeree, Holt, and Smith 2017)
14.1 Solving logit QRE and estimating the model
14.2 Adding some heterogeneity
14.2.1 Computing quantal response equilibrium with heterogeneous parameters
14.2.2 Warm glow volunteering
14.2.3 Duplicate aversion
14.2.4 Results
14.3 R code to run estimations
15 Application: A Quantal Response Equilibrium with discrete types
15.1 Example dataset and models
15.2 A note on replication
15.3 Three models that make different assumptions about bracketing
15.3.1 Broad bracketing only
15.3.2 Narrow bracketing only
15.3.3 A mixture of broad and narrow bracketing
15.4 Results
15.5 R code used to estimate these models
16 Application: QRE in a Bayesian game and cursed equilibrium
16.1 Example game and dataset
16.2 Solving for QRE
16.2.1 Baseline model
16.2.2 Cursed equilibrium
16.3 A quick prior calibration
16.4 Model results
16.5 Model evaluation
16.6 R code used in this chapter
17 Application: Level-\(k\) models
17.1 Data and game
17.2 The level-\(k\) model
17.2.1 The deterministic component of the model
17.2.2 Exact and probabilistic play
17.3 Assigning probabilities to types for each participant separately
17.3.1 The Stan program
17.3.2 Prior calibration
17.3.3 Results
17.4 Doing the averaging within one program
17.4.1 The Stan program
17.4.2 Results
17.5 A mixture model
17.5.1 Stan program
17.5.2 A prior for \(\psi\)
17.5.3 Results
17.6 A mixture over levels and hierarchical nuisance parameters
17.6.1 Prior calibration
17.6.2 Stan program
17.6.3 Results
17.7 A different assumption about mixing
17.7.1 Stan program
17.7.2 Results
17.8 R code to estimate the models
17.8.1 Participant-specific estimation conditional on \(k\) with Bayesian model averaging
17.8.2 Participant-specific estimation with a prior over \(k\)
17.8.3 Mixture model
17.8.4 Hierarchical model
17.8.5 Mixture model with beliefs consistent with truncated type distribution
18 Application: Estimating risk preferences
18.1 Example dataset
18.2 We might not just be interested in the parameters
18.3 Introducing some important models
18.3.1 Expected utility theory
18.3.2 Rank-dependent utility (Quiggin 1982)
18.3.3 Comparing the certainty equivalents estimated using EUT and RDU
18.4 A hierarchical specification
18.4.1 Population-level estimates
18.4.2 Participant-level estimates
18.5 R code used to estimate these models
19 Application: Meta-analysis using (some of) the METARET data
19.1 Data
19.2 A basic model
19.3 But the data are really interval-valued!
19.4 Heterogeneous standard deviations
19.5 Student-\(t\) distributions, because why not?
19.6 R code to estimate the models
20 Application: Ranked choices and the Thurstonian model
20.1 The Thurstonian model
20.2 Computational issues
20.3 Example dataset and model
20.4 A representative agent model
20.5 A hierarchical model
20.6 R code used to run this
Links to data
References