"Gathering, evaluating, and aggregating social scientific models"
- Sep 14, 2023
Updated: Aug 13, 2024
Co-authored with Tara Slough et al.
On what basis can we claim that a scholarly community understands a phenomenon? Social scientists generally advance many rival explanations for the phenomena they study. How best to discriminate between or aggregate these explanations raises myriad questions, because we lack standard tools for synthesizing discrete explanations. In this paper, we assemble and test a set of approaches to selecting and aggregating predictive statistical models that represent different social scientific explanations of a single outcome: original crowd-sourced predictive models of COVID-19 mortality. We evaluate social scientists’ ability to select or discriminate between these models using an expert forecast elicitation exercise. We also provide a framework for aggregating discrete explanations, including an ensemble algorithm (model stacking). Although the best models outperform pre-specified benchmark machine learning models, experts are generally unable to identify which models predict accurately. Our findings suggest that algorithmic approaches to aggregating social scientific explanations can outperform human judgment or ad hoc processes.
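To give a flavor of the stacking idea, here is a minimal sketch. It uses simulated data and off-the-shelf regressors as stand-ins, not the paper's actual crowd-sourced COVID-19 mortality models or estimation pipeline: each candidate model contributes out-of-sample predictions, which are then combined with non-negative weights chosen to best fit the observed outcome.

```python
# Minimal model-stacking sketch (illustrative only; data and models are hypothetical).
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from scipy.optimize import nnls

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                   # hypothetical covariates
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(size=200)  # hypothetical outcome

# Candidate models standing in for distinct social scientific explanations.
candidates = [LinearRegression(),
              Ridge(alpha=1.0),
              RandomForestRegressor(n_estimators=100, random_state=0)]

# Out-of-sample predictions from each candidate via cross-validation.
Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in candidates])

# Stacking weights: non-negative least squares on the held-out predictions,
# normalized to sum to one so the ensemble is a weighted average of the models.
w, _ = nnls(Z, y)
w = w / w.sum()
print("stacking weights:", np.round(w, 3))
```

The weights indicate how much each explanation contributes to the aggregated prediction; models with no out-of-sample predictive value receive weight near zero rather than being discarded by fiat.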