Jack D. Walker II

Email
CV
ORCID
LinkedIn
Twitter


I’m a pre-doctoral fellow at Yale University working with Prof. Alan S. Gerber and Prof. Gregory A. Huber on behavioral research in American politics. I’m affiliated with the Institution for Social and Policy Studies, the Center for the Study of American Politics, and the Tobin Center for Economic Policy.

I’m broadly interested in public opinion, voting, and causal inference methods. My developing research agenda includes the use of large-scale data to study political behavior—particularly how individuals respond to political events—and how researchers can better measure these dynamics.

I help coordinate the Stanford-Arizona-Yale Presidential Election Study (SAY24; N = 130,000) in collaboration with YouGov, working end-to-end on survey design, experimental modules, data analysis, and manuscript preparation.

I received my B.A. in political science and art history, cum laude, from Columbia University. At Columbia, I researched voting with Prof. Donald P. Green, polarization with Prof. Justin H. Phillips, and bureaucratic politics and separation of powers with Prof. Michael M. Ting.

Working Papers
  1. "How Stressful are United States Presidential Elections? Evidence from a Large-Scale 2024 Panel Study" (with Alan S. Gerber, Gregory A. Huber, and Mackenzie Lockhart).
    Abstract

    Elections are often described as moments of civic learning and engagement, yet they may also be psychologically taxing collective events. This paper provides the first large-scale measurement of the stress Americans experience over the course of a presidential election, using a seven-wave nationally representative panel fielded throughout the 2024 U.S. campaign. Our results compare election stress to other stressful situations, clarifying both its intensity and its duration. We show that Americans reported finding 2024 more stressful than previous elections, with a growing proportion wishing it were over as Election Day neared. Americans report they would collectively pay approximately $60 billion to end the election early. We also find a significant pre-election “stress gap” between Democrats and Republicans that widened following Donald Trump’s victory. These findings add a new dimension to the study of elections as social experiences with measurable psychological and emotional costs.

  2. "Can Debates Matter? Evidence from the 2024 Biden-Trump Debate" (with Alan S. Gerber, Gregory A. Huber, Mackenzie Lockhart, and Douglas Rivers). Under review.
    Abstract

    How much does a large and unexpected revelation of candidate quality affect vote choice? We estimate the effect of the June 2024 debate between Joe Biden and Donald Trump, in which Biden’s poor performance was widely viewed as leading to his withdrawal from the race. Using a large-scale panel dataset, we estimate a precise 1.1-percentage-point decline in Biden’s support after the debate, as well as significant effects on other voting attitudes and behaviors, some of which have not previously been examined in research on campaign events. Our panel randomly staggered the timing of post-debate measurement, which we use to test for effects that develop over time; we find no evidence of slowly developing effects. While most campaign events do not matter on average, our results demonstrate that extreme cases that reveal new information can alter assessments of candidate quality and vote choice, providing a likely upper bound on the observed effects of debates in the current environment.

  3. "Assessing the Stability and Orthogonality of Popular Attitudinal and Social-Psychological Scales" (with Alan S. Gerber, Gregory A. Huber, and Mackenzie Lockhart). Under review.
    Abstract

    How stable are the attitudinal and social-psychological scales commonly used in political science research? Scales like racial resentment, right-wing populism, and the Big Five personality traits are often used under the assumption that they are fixed over the short term. We test this assumption for approximately 40 commonly used scales using a nationally representative 1,400-person U.S. panel study with waves six months apart. For each scale, we compare its stability to that of partisanship and ideology, demonstrating that these scales exhibit very different levels of stability, in some cases approaching that of partisanship. Additionally, we show that most key demographic variables (e.g., age, education, and news interest) have limited ability to predict either scale stability or secular shifts over the course of the 2024 election campaign. By contrast, partisanship strongly predicts shifts in many scale scores. Our results provide useful benchmarks for future researchers looking to understand and use these psychometric scales, and they raise important concerns about the exogeneity of the scales tested.

  4. "Measuring the Effects of Campaign Events: Specifying and Comparing Estimates of the Effect of Trump's Conviction" (with Alan S. Gerber, Gregory A. Huber, and Mackenzie Lockhart). Conditionally accepted at Political Science Research and Methods.
    Abstract

    How can we measure the effects of campaign events? We estimate how voters respond to a prominent campaign scandal—Donald Trump being found guilty of 34 felony counts of falsifying business records—using data from a large, eight-wave panel study. The panel included waves before and after the conviction, as well as a wave in the field when the verdict was announced. We find the verdict had virtually no effect on Trump supporters, even among those who previously reported that their support for Trump was conditional on his being found not guilty. We compare this precisely estimated null effect to estimates generated by popular cross-sectional methods that do not require panel data, showing that these methods fail to replicate the null. The “change” question format estimates that the verdict increased support by 6% among pre-verdict Trump supporters, while the “counterfactual” question design estimates a 10% decrease in support for Trump among the same population. We formalize the estimand each method targets and provide guidance on how each should be interpreted in the event-study literature.

Research in Progress
  1. "On the Underlying Dimensionality of Popular Attitudinal and Social-Psychological Scales" (with Matt Blyth, Alan S. Gerber, Gregory A. Huber, and Mackenzie Lockhart).
Updated: 20 November 2025
Website design inspired by Theo Serlin and others.