The Goldilocks "theory" of public opinion on climate change
We are often told that "dire news" about climate change provokes dissonance-driven resistance.
Yet many commentators who credit this account also warn us not to raise public hopes by even engaging in research on -- much less discussion of -- the feasibility of geoengineering. These analysts worry that any intimation that there's a technological "fix" for global warming will lull the public into a false sense of security, dissipating political resolve to clamp down on CO2 emissions.
So one might infer that what's needed is a "Goldilocks strategy" of science communication -- one that conveys neither too much alarm nor too little but instead evokes just the right mix of fear and hope to coax the democratic process into rational engagement with the facts.
Or one might infer that what's needed is a better theory -- or simply a real theory -- of public opinion on climate change.
Here's a possibility: individuals form perceptions of risk that reflect their cultural commitments.
Here's what that theory implies about "dire" and "hopeful" information on climate change: what impact it has will be conditional on what response -- fear or hope, reasoned consideration or dismissiveness -- best expresses the particular cultural commitments individuals happen to have.
And finally here's some evidence from an actual empirical test conducted (with both US & UK samples) to test this conjecture:
- When individuals are furnished with a "dire" message -- that substantial reductions in CO2 emissions are essential to avert catastrophic effects for the environment and human well-being -- they don't react uniformly. Hierarchical individualists, who have strong pro-commerce and pro-technology values, do become more dismissive of scientific evidence relating to climate change. However, egalitarian communitarians, who view commerce and industry as sources of unjust social disparities, react to the same information by crediting that evidence even more forcefully.
- Likewise, individuals don't react uniformly when furnished with "hopeful" information about the contribution that geoengineering might make to mitigating the consequences of climate change. Egalitarian communitarians -- the ones who ordinarily are most worried -- do become less inclined to credit scientific evidence that climate change is a serious problem. But the normally skeptical hierarchical individualists respond to the same information about geoengineering by crediting such scientific evidence more.
Am I saying that this account is conclusively established & unassailably right, that everything else one might say in addition or instead is wrong, and that therefore this, that, or the other thing ineluctably follows about what to do and how to do it? No, at least not at the moment.
The only point, for now, is about Goldilocks. When you see her, watch out.
Decision science has supplied us with a rich inventory of mechanisms. Afforded complete freedom to pick and choose among them, any analyst with even a modicum of imagination can explain pretty much any observed pattern in risk perception however he or she chooses and thus invest whatever communication strategy strikes his or her fancy with a patina of "empirical" support.
One way to avoid being taken in by this type of faux explanation is to be deeply skeptical of Goldilocks. Her appearance -- the need to engage in ad hoc "fine tuning" to fit a theory to seemingly disparate observations -- is usually a sign that someone doesn't actually have a valid theory and is instead abusing decision science by mining it for tropes to construct just-so stories motivated (consciously or otherwise) by some extrinsic commitment.
The account I gave of how members of the public react to information about climate change risks didn't involve adjusting one dial up and another down to try to account for multiple offsetting effects.
That's because it showed there really aren't offsetting effects here. There's only one: the crediting of information in proportion to its congeniality to cultural predispositions.
The account is open to empirical challenge, certainly. But that's exactly the problem with Goldilocks theorizing: with it anything can be explained, and thus no conclusion deduced from it can be refuted.
Reader Comments (4)
I fully agree regarding your skepticism about Goldilocks. Not least because this is the way most "climate experts" do their work -- i.e., ad hoc fine tuning to fit a theory to seemingly disparate observations. To quote von Neumann: "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."
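The elephant quip can be made concrete: give a polynomial as many free parameters as there are data points and it will "fit" pure noise exactly. A minimal sketch (the data here are invented for illustration):

```python
# With as many parameters as observations, a polynomial interpolates
# pure noise perfectly -- a "fit" that carries no information at all.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(5.0)        # five observations
y = rng.normal(size=5)    # pure noise -- there is no signal to find

coeffs = np.polyfit(x, y, deg=4)   # five free parameters
assert np.allclose(np.polyval(coeffs, x), y)  # a "perfect" fit

# ...yet the fitted curve is an arbitrary extrapolation of noise,
# so its prediction at any new x is worthless.
```

The fit is flawless on the data it was tuned to and meaningless everywhere else, which is the whole complaint about after-the-fact story fitting.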
@Sven--
I disagree.
Goldilocks is not iterative dynamic modeling.
Goldilocks is just ad hoc story telling disguised as explanation. No data is ever collected & anything that happens in the future will necessarily "fit" the pseudo-science "model" involved.
Iterative dynamic modeling involves model fitting after the fact followed by prediction, recalibration, & then more prediction.
Unlike Goldilocks, its predictions can be proven wrong. Indeed, they inevitably *are* wrong.
But if the predictions get progressively *better* over successive calibrations, the enterprise is a success & has genuine value.
If its predictions don't get progressively better, it is a failure.
But in the latter case, it's not failing in any way comparable to the way Goldilocks fails -- b/c Goldilocks could never have succeeded to begin with.
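The fit → predict → recalibrate loop described above can be sketched in a few lines. Everything here -- the data-generating process, the batch size, the noise level -- is assumed purely for illustration:

```python
# Iterative dynamic modeling, schematically: score genuine
# out-of-sample predictions, then recalibrate on the new data.
import numpy as np

rng = np.random.default_rng(1)
TRUE_SLOPE = 2.0  # the (unknown-to-the-modeler) parameter

xs, ys = np.empty(0), np.empty(0)
slope_hat = 0.0   # initial, uncalibrated model: y = slope_hat * x

for step in range(5):
    # a new batch of observations arrives
    x_new = rng.uniform(0, 10, size=20)
    y_new = TRUE_SLOPE * x_new + rng.normal(0, 1.0, size=20)

    if step > 0:
        # predict BEFORE recalibrating: this is what makes the
        # model falsifiable, unlike a retrodictive just-so story
        mse = np.mean((slope_hat * x_new - y_new) ** 2)
        print(f"step {step}: out-of-sample MSE = {mse:.2f}")

    # recalibrate: refit the parameter on all data seen so far
    xs, ys = np.concatenate([xs, x_new]), np.concatenate([ys, y_new])
    slope_hat = np.sum(xs * ys) / np.sum(xs * xs)

# with accumulating data the estimate converges toward the truth
assert abs(slope_hat - TRUE_SLOPE) < 0.1
```

The point of the sketch is the ordering: each prediction is scored before the parameters see the new observations, so the model can genuinely fail -- and genuinely improve.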
This is a pretty basic distinction. I can accept someone arguing that climate science isn't succeeding b/c predictions aren't getting better, etc. But if someone thinks iterative dynamic modeling is just ad hoc retrodictive model fitting, they actually just don't get what's going on.
Go ahead & tell me why you think climate modelers are failing, but for sure don't create the risk that people will think you just don't understand what it would look like if they were succeeding! I'll be astonished if you tell me you think there's a problem w/ the whole enterprise of iterative dynamic modeling.
Dan
There you are. This is the major problem in this case, as in many others. People do not want to digest the fact that the data needed for a reasonably good regression analysis are missing. Better to look away from the facts -- pick out pretty ad hoc data and claim you are certain. Now that is the really interesting matter in the climate debate.
@Steve:
Is the problem "missing" as in "missing observations" or as in omitted variables?
But in either case, if the model is being validated by use for genuine prediction -- not just by being fitted to past data -- what exactly is the problem?
If as a result of omitted variables the models are not valid or are biased, then they will never get better as the parameters are progressively adjusted. The enterprise fails.
If as a result of missing data, they perform poorly now but get progressively better in the future -- b/c fewer observations are missing & b/c parameters are being revised to accommodate new observations -- then great: the enterprise is succeeding.
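The omitted-variables case can be illustrated directly: regress y on x alone when the true process also involves a variable z correlated with x, and the estimated slope stays biased no matter how much data accumulates. All the numbers below are invented for the sketch:

```python
# Omitted-variable bias: more data sharpens the estimate around
# the WRONG value, so recalibration never makes the model better.
import numpy as np

rng = np.random.default_rng(2)

def fitted_slope(n):
    z = rng.normal(size=n)             # the omitted variable
    x = z + 0.5 * rng.normal(size=n)   # x is correlated with z
    y = 2.0 * x + 3.0 * z + 0.1 * rng.normal(size=n)
    return np.sum(x * y) / np.sum(x * x)  # regress y on x only

# the slope converges to 2 + 3*cov(x,z)/var(x) = 2 + 3/1.25 = 4.4,
# not the true coefficient 2.0 -- and extra data doesn't help
small, large = fitted_slope(200), fitted_slope(200_000)
assert abs(large - 4.4) < 0.1   # tightly pinned to the biased value
assert abs(small - 2.0) > 1.0 and abs(large - 2.0) > 1.0
```

This is the signature to watch for: under mere missing data the out-of-sample error shrinks as observations arrive; under omitted variables it plateaus at the bias.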
But that's not Goldilocks -- which can never be defeated b/c it makes no determinate predictions.