Maximizing message response using causal analysis
Communication advice can be bewildering. Talk about the threat, but don’t talk about the threat too much. Make it personal, or make it about future generations. Connect it to responsibility, to love, to spirit, to agency, to nature, to nurture.
All of these approaches work sometimes, for some people, on some beliefs and actions.
That leaves a strategist with a familiar problem: too many plausible ideas and no clear starting point. If every message works under some conditions, the practical question is which message is the best place to start, with which audience, if the goal is to move a specific outcome.
We built an answer for US-based climate messaging from 19 waves of the Pew American Trends Panel—409 questions on climate attitudes, environmental behavior, religious practice, theology, values, and policy. Take a look and see what you think.
Audiences already maxed out on your outcome have no room to move. Those too far away won’t move enough to cross the threshold. And sometimes audiences with plenty of room on the outcome are already maxed out on believing your message. The right message depends on where the audience stands right now.
We segmented 1,168 panelists not by who they are demographically, but by how they would respond if reached.
These five audiences are not a universal taxonomy of climate publics. They are the major response patterns in this dataset, collected in the US over roughly five years: five groups of people for whom different messages do meaningfully different things.
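A minimal sketch of that idea, assuming a matrix of model-predicted responses (one row per respondent, one column per message lever); the clustering choice and all names here are illustrative, not the exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder for the real inputs: each cell is the model-predicted shift for
# that respondent if reached with that message lever.
rng = np.random.default_rng(0)
predicted_response = rng.normal(size=(1168, 8))

# Cluster people by how they would respond, not by who they are demographically.
X = StandardScaler().fit_transform(predicted_response)
audiences = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(audiences))  # size of each of the five audiences
```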
Different audiences have different amounts of “headroom”, or capacity to move, on different things. We studied how each of the message levers moved people on four different outcomes.
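Headroom is just the distance between where an audience sits now and the top of the outcome scale; a toy illustration with made-up numbers:

```python
# Toy illustration of "headroom" on a bounded outcome scale (numbers are hypothetical).
scale_max = 4.0
audience_means = {"Audience A": 3.8, "Audience B": 2.1, "Audience C": 1.2}

headroom = {name: scale_max - mean for name, mean in audience_means.items()}
print(headroom)  # A is nearly maxed out; B and C still have room to move
```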
Climate attitudes, experiences, and behaviors form a connected system. Some attitudes drive outcomes directly. Others work indirectly, shifting related beliefs that then shape what people do. We used causal discovery to learn the structure of these effects from the survey data.
Page through the story below to see how the pieces fit together, or click any node to explore on your own.
The project started with a specific question about Christian-framed stewardship: would messages about Earth as sacred, or creation as a gift, move the outcomes climate campaigns care about? They do — Earth as Sacred turned out to be a direct and indirect cause of Policy Support specifically. But the interesting claim isn’t about stewardship. It’s about the method.
The system doesn’t work the same way for everyone. Extreme Weather Attribution hits Rooted, Ready hard in one audience and barely affects Close to Home. World We Leave Behind lifts four outcomes in most audiences and only moves Policy Support for Sidewalk Weather. Different audiences get different answers from the same model — that difference is what the tool below surfaces.
Pick an outcome or a message. The model returns the best audience-lever pair, the strongest alternative, and the reasons the other three audiences aren’t the pick.
The larger claim: strategic message choice is modelable from standing survey data, not just guessable. Stewardship is the test case. The method is the point.
We used Claude Code to implement the statistical models we designed and to present these results on this website. We didn't use Claude to design research questions, determine which metrics to use, or write copy, because AI is very bad at knowing what's important and even worse at writing compelling prose.
We used 19 waves of the Pew American Trends Panel—a nationally representative panel where the same respondents answer different surveys over time. We explored over 400 questions spanning climate attitudes, environmental behavior, religious practice, theology, values, and policy. The final analytical sample is 1,168 respondents with complete data on all active variables, post-stratified on age × race joint cells to match the full 22,504-person panel. Gender and education were already balanced in the complete-case sample and did not require weighting.
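A sketch of the weighting step, assuming DataFrames with hypothetical age_cat and race_cat columns: each respondent's weight is their cell's share of the full panel divided by that cell's share of the complete-case sample.

```python
import pandas as pd

def poststratify(sample: pd.DataFrame, panel: pd.DataFrame,
                 cells=("age_cat", "race_cat")) -> pd.Series:
    """Cell weight = panel share of the age x race cell / sample share of that cell."""
    panel_share = panel.groupby(list(cells)).size() / len(panel)
    sample_share = sample.groupby(list(cells)).size() / len(sample)
    cell_weight = (panel_share / sample_share).rename("weight")
    return sample.join(cell_weight, on=list(cells))["weight"]

# sample: the 1,168 complete cases; panel: the full 22,504-person panel frame
# weights = poststratify(sample, panel)
```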
Pew Research Center bears no responsibility for the analyses or interpretations of the data presented here. The opinions expressed herein, including any implications for policy, are those of the author and not of Pew Research Center.
We used factor analysis to discover which questions measured the same underlying thing. Some things got cut—“we are living in the end times” mixed with fiscal conservatism in ways we couldn’t cleanly separate, and electric vehicle attitudes created modeling artifacts. Other things merged: “dominion” and “stewardship” loaded together as a single coherent factor, not two opposing theologies. We also added constructs that didn’t emerge from the factor analysis but represent important messaging domains—future generations is one of those. If you’re curious about where your favorite construct landed, reach out.
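A sketch of that step with the factor_analyzer package, run here on placeholder data standing in for the real item responses; the factor count and rotation are assumptions, not the published settings.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Placeholder item responses: one row per respondent, one column per survey
# question, built from a few latent traits plus noise so the example has structure.
rng = np.random.default_rng(0)
latent = rng.normal(size=(1168, 4))
items = pd.DataFrame(latent @ rng.normal(size=(4, 40)) + rng.normal(size=(1168, 40)),
                     columns=[f"q{i}" for i in range(40)])

fa = FactorAnalyzer(n_factors=4, rotation="oblimin")  # oblique: factors may correlate
fa.fit(items)

# Items loading strongly on one factor become a construct; items with split or
# muddy loadings get cut rather than forced into a scale.
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2).head())
```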
We used DirectLiNGAM (500 weighted bootstrap iterations) to learn which constructs cause which, and in which direction. We constrained the model so that stable traits (ideology, church attendance) can’t be caused by downstream attitudes, and outcomes can’t retroactively cause the messages that might shift them. The algorithm resolved everything else from the data.
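A sketch of that step with the lingam package; the variable list, which columns get constrained, and the resampling details are illustrative, but the shape is the same: a prior-knowledge matrix plus a weighted bootstrap over DirectLiNGAM fits.

```python
import numpy as np
import lingam
from lingam.utils import make_prior_knowledge

# Placeholder construct scores (rows: respondents, cols: constructs) and
# post-stratification weights; the real inputs come from the factor step above.
rng = np.random.default_rng(0)
X = rng.normal(size=(1168, 6))
w = np.ones(len(X)) / len(X)

# Constrain the search: columns 0-1 (say, ideology and church attendance) are
# exogenous, and column 5 (an outcome) is a sink, so it can't cause the levers.
pk = make_prior_knowledge(n_variables=X.shape[1],
                          exogenous_variables=[0, 1],
                          sink_variables=[5])

adjacency_draws = []
for _ in range(500):
    idx = rng.choice(len(X), size=len(X), replace=True, p=w)  # weight-proportional resample
    model = lingam.DirectLiNGAM(prior_knowledge=pk)
    model.fit(X[idx])
    adjacency_draws.append(model.adjacency_matrix_)

# Edge stability: how often each directed effect shows up across bootstrap fits.
edge_frequency = (np.abs(np.stack(adjacency_draws)) > 0.01).mean(axis=0)
```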
We included ideology and church attendance not to remove their influence but to make sure the model’s estimates of what messages can do are realistic given what stays fixed.
The “per 1,000” figures estimate how many people in a given audience would cross a behavioral benchmark if reached with a given message. These are probabilistic estimates from a simulation that respects the limits of what each person’s broader belief system allows—nobody gets pushed beyond what their other attitudes would structurally support.
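A sketch of the benchmark-crossing simulation, assuming a fitted adjacency matrix B from the causal model and a matrix X of construct scores; the boost size, benchmark, and ceiling rule are illustrative stand-ins for the real simulation.

```python
import numpy as np

def per_1000(X, B, lever, outcome, boost=1.0, benchmark=0.5, ceiling=None):
    """Of 1,000 people reached with a message, how many cross `benchmark` on `outcome`?"""
    p = X.shape[1]

    # In the linear model x = Bx + e, a shock to the lever propagates through
    # (I - B)^-1, which captures both direct and indirect paths.
    shock = np.zeros(p)
    shock[lever] = boost
    delta = np.linalg.inv(np.eye(p) - B) @ shock
    predicted = X + delta

    if ceiling is not None:
        # Nobody gets pushed beyond what their other attitudes structurally support.
        predicted = np.minimum(predicted, ceiling)

    newly_over = (predicted[:, outcome] >= benchmark) & (X[:, outcome] < benchmark)
    return 1000 * newly_over.mean()

# Example with placeholder inputs:
# rate = per_1000(X, B, lever=2, outcome=5, ceiling=X.max(axis=0))
```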