Riki Conrey · Audience Research

Tracing the Message Web

Maximizing message response using causal analysis

Communication advice can be bewildering. Talk about the threat, but don’t talk about the threat too much. Make it personal, or make it about future generations. Connect it to responsibility, to love, to spirit, to agency, to nature, to nurture.

All of these things work sometimes for some people on some beliefs and actions.

That leaves a strategist with a familiar problem: too many plausible ideas and no clear starting point. If every message works under some conditions, the practical question is which message is the best place to start, with which audience, if the goal is to move a specific outcome.

We built an answer for US-based climate messaging from 19 waves of the Pew American Trends Panel—409 questions on climate attitudes, environmental behavior, religious practice, theology, values, and policy. Take a look and see what you think.

Audiences Defined By What Works For Them

Audiences already maxed out on your outcome have no room to move. Those too far away won’t move enough to cross the threshold. And sometimes audiences with plenty of room on the outcome are already maxed out on believing your message. The right message depends on where the audience stands right now.

We segmented 1,168 panelists not by who they are demographically, but by how they would respond if reached.

These five audiences are not a universal taxonomy of climate publics. They are the major response patterns in this dataset, collected in the US over roughly five years: five groups of people for whom different messages do meaningfully different things.

Different audiences have different amounts of “headroom,” or capacity to move, on different things. We studied how all the message levers moved people on four outcomes.

Policy Support
  • Are you optimistic we can address climate change?
  • Do you favor providing a tax credit to encourage businesses to develop carbon capture technology?
  • Do you favor requiring oil and gas companies to seal methane gas leaks from oil wells?
  • If the U.S. shifts from fossil fuels to renewables, how would that affect the environment?
  • If the U.S. shifts from fossil fuels to renewables, how would that affect energy prices?
Civic Action
  • Have you contacted an elected official to urge them to address climate change?
  • Have you attended a protest or rally to show support for addressing climate change?
  • Have you volunteered for an activity focused on addressing climate change?
  • Have you donated money to an organization focused on addressing climate change?
Social Media Action
  • Have you interacted with posts about the need for action on climate change (liking, commenting)?
  • Have you posted or shared a post about the need for climate action?
  • Do you follow an account or organization that focuses on the need for climate action?
Daily Eco Habits
  • Do you use fewer single-use plastics to help protect the environment?
  • Do you reduce your food waste to help protect the environment?
  • Do you reduce the amount of water you use to help protect the environment?

The Causal System

Climate attitudes, experiences, and behaviors form a connected system. Some attitudes drive outcomes directly. Others work indirectly, shifting related beliefs that then shape what people do. We used causal discovery to learn the structure of these effects from the survey data.

Page through the story below to see how the pieces fit together, or click any node to explore on your own.

Try It

The project started with a specific question about Christian-framed stewardship: would messages about Earth as sacred, or creation as a gift, move the outcomes climate campaigns care about? They do — Earth as Sacred turned out to be a direct and indirect cause of Policy Support specifically. But the interesting claim isn’t about stewardship. It’s about the method.

The system doesn’t work the same way for everyone. Extreme Weather Attribution hits Rooted, Ready hard but barely registers for Close to Home. World We Leave Behind lifts all four outcomes in most audiences yet moves Policy Support only for Sidewalk Weather. Different audiences get different answers from the same model; that difference is what the tool below surfaces.

Pick an outcome or a message. The model returns the best audience-lever pair, the strongest alternative, and the reasons the other three audiences aren’t the pick.


The larger claim: strategic message choice is modelable from standing survey data, not just guessable. Stewardship is the test case. The method is the point.

Methods & Data — For Serious Nerds

We used Claude Code to implement the statistical models we designed and to present these results on this website. We didn’t use Claude to design research questions, determine which metrics to use, or write copy, because AI is very bad at knowing what’s important and even worse at writing compelling prose.

Data

We used 19 waves of the Pew American Trends Panel—a nationally representative panel where the same respondents answer different surveys over time. We explored over 400 questions spanning climate attitudes, environmental behavior, religious practice, theology, values, and policy. The final analytical sample is 1,168 respondents with complete data on all active variables, post-stratified on age × race joint cells to match the full 22,504-person panel. Gender and education were already balanced in the complete-case sample and did not require weighting.
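The mechanics of post-stratification on joint cells can be sketched in a few lines of pandas: compute each age × race cell’s share of the full panel and of the complete-case sample, and weight each respondent by the ratio. This is a generic illustration, not Pew’s weighting procedure — the cell labels and counts below are invented.

```python
import pandas as pd

def poststratify(sample: pd.DataFrame, panel_counts: pd.Series) -> pd.Series:
    """Weight each sample row so age x race cell shares match the full panel.

    A cell's weight = (panel share of cell) / (sample share of cell).
    When every sample cell appears in the panel, weights average to 1.
    """
    sample_counts = sample.groupby(["age", "race"]).size()
    panel_share = panel_counts / panel_counts.sum()
    sample_share = sample_counts / sample_counts.sum()
    cell_weight = panel_share / sample_share
    idx = pd.MultiIndex.from_frame(sample[["age", "race"]])
    return pd.Series(cell_weight.reindex(idx).to_numpy(), index=sample.index)

# Invented toy data: two age bands x two race groups.
sample = pd.DataFrame({
    "age":  ["18-49", "18-49", "50+", "50+", "50+"],
    "race": ["white", "other", "white", "white", "other"],
})
panel_counts = pd.Series(
    [8000, 6000, 5000, 3504],
    index=pd.MultiIndex.from_tuples(
        [("18-49", "white"), ("18-49", "other"),
         ("50+", "white"), ("50+", "other")],
        names=["age", "race"]),
)
weights = poststratify(sample, panel_counts)
```

Rows from cells that are underrepresented in the complete-case sample relative to the panel get weights above 1; overrepresented cells get weights below 1.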

Pew Research Center bears no responsibility for the analyses or interpretations of the data presented here. The opinions expressed herein, including any implications for policy, are those of the author and not of Pew Research Center.

Constructs

We used factor analysis to discover which questions measured the same underlying thing. Some things got cut—“we are living in the end times” mixed with fiscal conservatism in ways we couldn’t cleanly separate, and electric vehicle attitudes created modeling artifacts. Other things merged: “dominion” and “stewardship” loaded together as a single coherent factor, not two opposing theologies. We also added constructs that didn’t emerge from the factor analysis but represent important messaging domains—future generations is one of those. If you’re curious about where your favorite construct landed, reach out.
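As a sketch of how factor analysis groups questions, here is a minimal principal-components approximation in numpy: two items driven by the same latent construct end up loading on the same factor, the pattern that led “dominion” and “stewardship” to merge. A real EFA toolkit would add rotation and proper estimation; the item names and loadings here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two latent constructs drive four observed survey items (names invented):
# items 0-1 ("dominion", "stewardship") share latent_a; items 2-3 share latent_b.
latent_a = rng.normal(size=n)
latent_b = rng.normal(size=n)
items = np.column_stack([
    0.9 * latent_a + 0.3 * rng.normal(size=n),   # "dominion"
    0.9 * latent_a + 0.3 * rng.normal(size=n),   # "stewardship"
    0.6 * latent_b + 0.5 * rng.normal(size=n),   # policy item 1
    0.6 * latent_b + 0.5 * rng.normal(size=n),   # policy item 2
])

# Principal-components approximation to factor loadings:
# eigendecompose the correlation matrix, keep the top two factors,
# and scale each eigenvector by sqrt(its eigenvalue).
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)          # eigh returns ascending order
top = np.argsort(eigvals)[::-1][:2]
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])

# "Dominion" and "stewardship" load on the same factor with the same sign --
# one coherent construct, not two opposing theologies.
```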

Causal Discovery

We used DirectLiNGAM (500 weighted bootstrap iterations) to learn which constructs cause which, and in which direction. We constrained the model so that stable traits (ideology, church attendance) can’t be caused by downstream attitudes, and outcomes can’t retroactively cause the messages that might shift them. The algorithm resolved everything else from the data.
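In the lingam Python package, which implements DirectLiNGAM, constraints like these are expressed as a prior-knowledge matrix: entry (i, j) is 0 if variable j is forbidden from causing variable i, 1 if that path is required, and -1 if the algorithm should decide from the data. A sketch with a handful of invented variable labels standing in for the real constructs:

```python
import numpy as np

# Invented variable order standing in for the real constructs:
# 0-1 stable traits, 2-3 message-adjacent attitudes, 4 an outcome.
names = ["ideology", "church_attendance",
         "earth_as_sacred", "weather_attribution", "policy_support"]
STABLE, ATTITUDES, OUTCOMES = [0, 1], [2, 3], [4]

# lingam convention: pk[i, j] = 0 means "j cannot cause i",
# 1 means "j causes i", -1 means "let the algorithm decide".
pk = -np.ones((len(names), len(names)), dtype=int)
np.fill_diagonal(pk, 0)                  # nothing causes itself

for trait in STABLE:
    for downstream in ATTITUDES + OUTCOMES:
        pk[trait, downstream] = 0        # attitudes/outcomes can't cause traits
for msg in ATTITUDES:
    for outcome in OUTCOMES:
        pk[msg, outcome] = 0             # outcomes can't retroactively cause messages

# With the lingam package installed, this would be passed in as (not run here):
#   model = lingam.DirectLiNGAM(prior_knowledge=pk)
#   model.fit(X)
```

Note the asymmetry this leaves in place: `pk[4, 2]` stays -1, so the algorithm is still free to find (or reject) a path from Earth as Sacred to Policy Support; only the reverse direction is ruled out.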

We included ideology and church attendance not to remove their influence but to make sure the model’s estimates of what messages can do are realistic given what stays fixed.

The Numbers

The “per 1,000” figures estimate how many people in a given audience would cross a behavioral benchmark if reached with a given message. These are probabilistic estimates from a simulation that respects the limits of what each person’s broader belief system allows—nobody gets pushed beyond what their other attitudes would structurally support.
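A heavily simplified sketch of that kind of simulation — the real model is richer, and every number and the linear effect below are invented: shift the message construct, propagate a total causal effect to the outcome, cap each person at the ceiling their other attitudes structurally allow, and count who newly crosses the benchmark.

```python
import numpy as np

def movers_per_1000(outcome, ceiling, total_effect, shift, benchmark):
    """Estimate how many of 1,000 audience members cross `benchmark`
    when a message shifts its construct by `shift`.

    outcome:      each person's current outcome score
    ceiling:      max outcome their broader belief system allows
    total_effect: direct + indirect causal effect of the construct on the outcome
    """
    pushed = np.minimum(outcome + total_effect * shift, ceiling)
    newly_crossed = (pushed >= benchmark) & (outcome < benchmark)
    return 1000 * newly_crossed.mean()

# Invented audience: 5,000 simulated people with room-to-move ceilings.
rng = np.random.default_rng(1)
outcome = rng.uniform(0, 1, size=5000)
ceiling = np.minimum(outcome + rng.uniform(0, 0.5, size=5000), 1.0)
est = movers_per_1000(outcome, ceiling, total_effect=0.4, shift=0.5, benchmark=0.7)
```

The ceiling cap is what keeps the estimate honest: someone whose other attitudes would only support a score of 0.65 stays below a 0.7 benchmark no matter how hard the message pushes.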