Jack D. Walker II

Email
CV
Google Scholar

ORCID
LinkedIn
Twitter


I’m a pre-doctoral fellow at Yale University working with Prof. Alan S. Gerber and Prof. Gregory A. Huber on behavioral research in American politics. I help coordinate the Stanford-Arizona-Yale Presidential Election Study (SAY24) (N=130,000) in collaboration with YouGov. I'm affiliated with the Institution for Social and Policy Studies, the Center for the Study of American Politics, and the Tobin Center for Economic Policy.

I’m broadly interested in public opinion, voting, and causal inference methods. My developing research agenda includes the use of large-scale data to study political behavior—particularly how individuals respond to political events—and how researchers can better measure these dynamics.

I received my B.A. in political science and art history, cum laude, from Columbia University. At Columbia, I researched voting with Prof. Donald P. Green, polarization with Prof. Justin H. Phillips, and bureaucratic politics and separation of powers with Prof. Michael M. Ting.

Publications
  1. "Measuring the Effects of Campaign Events: Specifying and Comparing Estimates of the Effect of Trump's Conviction" (with Alan S. Gerber, Gregory A. Huber, and Mackenzie Lockhart). Forthcoming at Political Science Research and Methods.
    Abstract

How can we measure the effects of campaign events? We estimate how voters respond to a prominent campaign scandal—Donald Trump being found guilty of 34 felony counts of falsifying business records—using data from a large, eight-wave panel study. The panel included waves before and after the conviction, as well as a wave in the field when the verdict was announced. We find the trial had virtually no effect on Trump supporters, even among those who previously reported that their support for Trump was conditional on his being found not guilty. We compare this precisely estimated null effect to estimates generated by popular cross-sectional methods that do not require panel data, showing that the cross-sectional methods fail to replicate this null. We find that the “change” question format estimates that the verdict increased support by 6% among pre-verdict Trump supporters. We also find that the “counterfactual” question design estimates a 10% decrease in support for Trump among the same population. We formalize the estimands that each method estimates and provide insights into how each should be interpreted in the event study literature.

Working Papers
  1. "Common Attitudinal and Social-Psychological Scales May Exaggerate the Dimensionality of Human Differences" (with Alan S. Gerber, Gregory A. Huber, and Mackenzie Lockhart).
    Abstract

Psychometric scales have proliferated in political science research to explain important outcomes, treating each new construct (e.g., racial resentment, social dominance orientation, authoritarianism) as measuring a distinct underlying factor, dimension, or trait. We analyze approximately 200 items from 39 commonly used scales to show that many of these scales are highly correlated, yielding inflated associations and false positives when tested in isolation. Simulations demonstrate that correlated but non-causal scales can appear statistically significant by inheriting variance from the true causal construct. At the item level, we find that the constituent components of scales are often correlated most strongly with items from other scales, suggesting limited independence. Exploratory factor analysis reveals six latent factors that cut across established scale boundaries. The first explains a large share of the common variance and powerfully predicts partisanship and vote choice. These results underscore the need for greater attention to redundancy in political psychology measurement.

  2. "Expanding the Social Scope of Politics: Presidential Elections Associated with High Levels of Stress" (with Alan S. Gerber, Gregory A. Huber, and Mackenzie Lockhart). Under review.
    Abstract

Elections are often described in political science as moments of civic learning and engagement central to democratic accountability. They may also be psychologically taxing collective events. This paper provides the first large-scale measurement of the stress Americans experience over the course of a presidential election, using a seven-wave nationally representative panel fielded throughout the 2024 U.S. campaign. Our results compare election stress to other stressful situations, helping to characterize both the intensity and duration of the stress associated with a core democratic process. We show that Americans reported finding 2024 more stressful than previous elections, with a growing proportion wishing it were over as Election Day neared. Americans state they would collectively pay approximately $60 billion to end the election early. We also find a significant pre-election "stress gap" between Democrats and Republicans that widened following Donald Trump’s victory, underscoring the role of partisanship in structuring responses to electoral outcomes. These findings add a new dimension to the study of elections as social experiences with measurable psychological and emotional costs.

  3. "Can Debates Matter? Evidence from the 2024 Biden-Trump Debate" (with Alan S. Gerber, Gregory A. Huber, Mackenzie Lockhart, and Douglas Rivers). Under review.
    Abstract

    How much does a large and unexpected revelation of candidate quality affect vote choice? We estimate the effect of the June 2024 debate between Joe Biden and Donald Trump, in which Biden’s poor performance was widely viewed as leading to his withdrawal from the race. Using a large-scale panel dataset, we estimate a 1.1-percentage-point decline in Biden’s support post-debate, as well as significant effects on other voting attitudes and behaviors, some of which are novel in campaign events research. Our panel randomly staggered post-debate measurement, which we use to test for effects that may develop over time. We find no evidence for slowly developing effects. Although prior research finds that most campaign events do not matter, our results demonstrate that extreme cases that arguably reveal new information can alter candidate quality assessments and vote choice, suggesting a likely upper bound on effects of debates in the current environment.

  4. "Common Psychometric Scales Are Not Exogenous to Partisanship: Evidence from a Two-Wave Panel" (with Alan S. Gerber, Gregory A. Huber, and Mackenzie Lockhart). Under review.
    Abstract

Political science increasingly uses attitudinal and social-psychological scales like racial resentment, right-wing populism, and social dominance orientation to explain important outcomes. These analyses assume that measured scale scores are not caused by the same political forces that drive outcomes and that they are fixed over the short term. We test these assumptions for 36 commonly used scales using a nationally representative two-wave U.S. panel study with waves fielded six months apart during the 2024 presidential campaign. We find that partisanship strongly predicts directional changes in most scales, with partisan gaps widening over the campaign. We also estimate the test-retest reliability (stability) of each scale against key benchmarks and find very different levels of stability. Our results raise important concerns about the exogeneity of these scales as implemented in surveys.

Research in Progress
  1. "On Americans’ Perceptions of Social Status" (with Alan S. Gerber, Gregory A. Huber, and Eric M. Patashnik).
  2. "On Americans’ Inflation Worry and Economic Vulnerability" (with Alan S. Gerber, Gregory A. Huber, and Philip Moniz).
  3. "On the Crypto Constituency: Identity and Economic Perceptions in the 2024 Election."
Updated: 01 April 2026
Website design inspired by Theo Serlin and others.