Preface
Setting the stage
1 Introduction
1.1 What does your theory say about your data?
1.2 What do your data say about your theory?
1.3 What do your parameters say about other things?
1.4 What does your expertise say about your parameters?
2 Getting started in Stan
2.1 Installation in R
2.2 The anatomy of a Stan program
2.2.1 Data block
2.2.2 Transformed data block
2.2.3 Parameters block
2.2.4 Model block
2.2.5 Generated quantities block
2.2.6 The final product
2.3 Estimating a model
2.4 Looking at the results
3 Probabilistic models of behavior
3.1 The problem with deterministic models
3.2 What is a probabilistic model?
3.3 Example dataset and model
3.4 Optimal choice plus an error
3.4.1 Estimating a model
3.5 Utility-based models
3.5.1 Estimating a model
3.5.2 Doing something with the estimates
4 Considerations for choosing a prior
4.1 Example model and experiment
4.2 Getting the support right
4.3 Eliciting reasonable priors
4.3.1 Parameter values and the prior pushforward check
4.3.2 Predictions and other derived quantities: The prior predictive check
4.4 Assessing the sampling performance of a prior
4.4.1 Does our model recover its parameters well?
4.4.2 Do we see any pathologies in the estimation process?
4.5 R code used for this chapter
Building blocks
5 Representative agent and participant-specific models
5.1 Participant-specific models
5.1.1 Example data and economic model
5.1.2 Going to the probabilistic model
5.1.3 A short side quest into canned estimation techniques
5.1.4 Assigning priors
5.1.5 Estimating the model for one participant
5.1.6 Estimating the model for all participants
5.1.7 But we could be learning more!
5.2 Actual representative agent models (pooled models)
6 Hierarchical models
6.1 A random sample of participants walks into your lab
6.2 The anatomy of a basic hierarchical model
6.3 Accounting for unobserved heterogeneity
6.3.1 The last time you will integrate the likelihood, probably
6.3.2 Data augmentation
6.4 A multivariate normal hierarchical model
6.4.1 Decomposing the variance-covariance matrix
6.4.2 Transformed parameters and normal distributions
6.5 Example: again with Bruhin, Fehr, and Schunk (2019)
6.5.1 No correlation between individual-level parameters
6.5.2 Correlation between individual-level parameters
7 Mixture models
7.1 A menu of models
7.2 Dichotomous and toolbox mixture models
7.3 Coding peculiarities
7.4 Example experiment: Andreoni and Vesterlund (2001)
7.4.1 As basic as it gets
7.4.2 Adding some heterogeneity
7.5 Some code used to estimate the models
8 Filling in the blanks: Imputation
8.1 Example model and dataset: yet again with Bruhin, Fehr, and Schunk (2019)
8.2 How is this going to work?
8.3 Implementation
8.4 Results
8.4.1 Using the correlation matrix
8.4.2 The correlation matrix is doing a lot of heavy lifting here
8.5 Conclusion
8.6 R code used for this chapter
Acknowledgements
9 More filling in the blanks: data augmentation
9.1 Example 1: two ways to estimate a probit model
9.2 Example 2: accounting for rounded answers
9.2.1 The Holt and Smith (2016) task
9.2.2 A model for behavior in Holt and Smith (2016)
9.2.3 A note on replication
9.2.4 Augmenting the data
9.2.5 Results
9.3 R code used for this chapter
9.3.1 Loading the data
9.3.2 Estimating the models
10 Using your structural estimates as explanatory variables in regression models
10.1 Example 1: a probit model with risk preferences measured by Holt and Laury (2002)
10.2 Example 2: a random effects logit model with risk and time preferences
10.3 Example 3: logit and ordered logit with risk preferences from Gneezy and Potters (1997)
10.4 A note of caution
10.5 Conclusion
10.6 R code used for this chapter
10.6.1 Example 1
10.6.2 Example 2
10.6.3 Example 3
11 Model evaluation
11.1 Example dataset and models
11.2 Model posterior probabilities
11.2.1 Implementation using bridge sampling and the bridgesampling library
11.3 Cross-validation
11.3.1 Expected log predictive density (ELPD) and other measures of goodness of fit
11.3.2 1-round cross-validation
11.3.3 Leave-one-out cross-validation (LOO)
11.3.4 Approximate LOO
11.3.5 \(k\)-fold cross-validation
12 Speeding up your Stan code
12.1 Example dataset and model
12.2 A really slow way to estimate the model
12.3 Pre-computing things
12.4 Vectorization
12.5 Within-chain parallelization with reduce_sum()
12.6 Evaluating the implementations
12.6.1 Pre-computing and vectorization
12.6.2 Within-chain parallelization
12.7 R code to estimate models
12.7.1 Slow, pre-computed, and vectorized models
12.7.2 Parallelized model
Applications
13 Application: Experience-Weighted Attraction
13.1 The model at the individual level
13.2 Some computational and coding issues
13.3 Representative agent models
13.3.1 Prior calibration
13.3.2 The Stan model
13.3.3 Results
13.4 Hierarchical model
13.4.1 Prior calibration
13.4.2 The Stan model
13.4.3 Results
13.5 Some code used to estimate the models
13.5.1 Loading the data
13.5.2 Estimating the representative agent models
13.5.3 Estimating the hierarchical model
14 Application: Strategy Frequency Estimation
14.1 Simplifying the individual likelihood functions
14.2 Example experiment: Dal Bó and Fréchette (2011)
14.2.1 The SFEM with homogeneous trembles
14.2.2 Adding heterogeneous trembles and integrating the likelihood
14.3 R code to do these estimations
15 Application: Strategy frequency estimation with a mixed strategy
15.1 Example dataset and strategies
15.2 The likelihood function
15.3 Implementation in Stan
15.4 Results
15.5 R code used to estimate the models
16 Computing Quantal Response Equilibrium
16.1 Overview of quantal response equilibrium
16.2 Computing quantal response equilibrium
16.2.1 Setting up the problem
16.2.2 A predictor-corrector algorithm
16.2.3 Initial conditions
16.2.4 Algorithm tuning
16.3 The predictor-corrector algorithm in R
16.4 Some example games
16.4.1 Generalized matching pennies (Ochs 1995)
16.4.2 Stag hunt
16.4.3 \(n\)-player Volunteer’s Dilemma imposing symmetric strategies
17 Application: Quantal Response Equilibrium and the Volunteer’s Dilemma (Goeree, Holt, and Smith 2017)
17.1 Solving logit QRE and estimating the model
17.2 Adding some heterogeneity
17.2.1 Computing quantal response equilibrium with heterogeneous parameters
17.2.2 Warm glow volunteering
17.2.3 Duplicate aversion
17.2.4 Results
17.3 R code to run estimations
18 Application: A Quantal Response Equilibrium with discrete types
18.1 Example dataset and models
18.2 A note on replication
18.3 Three models that make different assumptions about bracketing
18.3.1 Broad bracketing only
18.3.2 Narrow bracketing only
18.3.3 A mixture of broad and narrow bracketing
18.4 Results
18.5 R code used to estimate these models
19 Application: QRE in a Bayesian game and cursed equilibrium
19.1 Example game and dataset
19.2 Solving for QRE
19.2.1 Baseline model
19.2.2 Cursed equilibrium
19.3 A quick prior calibration
19.4 Model results
19.5 Model evaluation
19.6 R code used in this chapter
20 Application: Level-\(k\) models
20.1 Data and game
20.2 The level-\(k\) model
20.2.1 The deterministic component of the model
20.2.2 Exact and probabilistic play
20.3 Assigning probabilities to types for each participant separately
20.3.1 The Stan program
20.3.2 Prior calibration
20.3.3 Results
20.4 Doing the averaging within one program
20.4.1 The Stan program
20.4.2 Results
20.5 A mixture model
20.5.1 Stan program
20.5.2 A prior for \(\psi\)
20.5.3 Results
20.6 A mixture over levels and hierarchical nuisance parameters
20.6.1 Prior calibration
20.6.2 Stan program
20.6.3 Results
20.7 A different assumption about mixing
20.7.1 Stan program
20.7.2 Results
20.8 R code to estimate the models
20.8.1 Participant-specific estimation conditional on \(k\) with Bayesian model averaging
20.8.2 Participant-specific estimation with a prior over \(k\)
20.8.3 Mixture model
20.8.4 Hierarchical model
20.8.5 Mixture model with beliefs consistent with truncated type distribution
21 Application: Estimating risk preferences
21.1 Example dataset
21.2 We might not just be interested in the parameters
21.3 Introducing some important models
21.3.1 Expected utility theory
21.3.2 Rank-dependent utility (Quiggin 1982)
21.3.3 Comparing the certainty equivalents estimated using EUT and RDU
21.4 A hierarchical specification
21.4.1 Population-level estimates
21.4.2 Participant-level estimates
21.5 R code used to estimate these models
22 Application: Meta-analysis using (some of) the METARET data
22.1 Data
22.2 A basic model
22.3 But the data are really interval-valued!
22.4 Heterogeneous standard deviations
22.5 Student-\(t\) distributions, because why not?
22.6 R code to estimate the models
23 Application: choice bracketing
23.1 Data and model
23.2 Representative agent and individual estimation
23.3 Hierarchical model
23.4 A mixture model
23.5 What do we get out of the structural models, and what could we miss?
24 Application: Ranked choices and the Thurstonian model
24.1 The Thurstonian model
24.2 Computational issues
24.3 Example dataset and model
24.4 A representative agent model
24.5 A hierarchical model
24.6 R code used to run this
Links to data
References
Structural Bayesian Techniques for Experimental and Behavioral Economics