Sunday, April 23, 2017

A token (or 2) of the Liberal Republic of Science

In honor of the march:

Reader Comments (31)

"People don't care about what you know until they know what you care about."

That was the key point of a rally speaker here in Madison, WI yesterday, the one I found most thought-provoking. Clearly communicating our own spheres of concern is not a strategy that we've talked about very much on this blog, and I think it may deserve more attention. It may help scientist-citizens step outside the authoritative voice and be heard as informed citizens.

April 23, 2017 | Unregistered Commenterdypoon

@dypoon-- do you mean something like Gould's NOMA?

April 24, 2017 | Registered CommenterDan Kahan

"Science and technology are making our lives healthier, easier, and more comfortable."

What would you have hypothesized as the response to this statement? It's such a motherhood-and-apple-pie sounding statement that I am not at all surprised that it gets nearly the same high support everywhere. Maybe if you broke it down by rural vs. urban, you'd find that is the primary reason for the very slightly higher left vs. right and non-religious vs. religious support.

If you're looking for a statement that is more likely to show non-uniform support, try one about the relative authority of scientists vs. others. Something like "I am more likely to believe scientists' pronouncements about nature than religious leaders'" would likely show non-uniformity. After all, isn't it the relative strength of authority figures that matters?

April 24, 2017 | Unregistered CommenterJonathan

Dan,

More about that Wood and Porter anti-backfire-effect paper - they and Nyhan and Reifler are all working together now:

http://nymag.com/scienceofus/2016/11/theres-more-hope-for-political-fact-checking.html

and together they have converged against the backfire effect. I think this may not bode well for CC.

April 24, 2017 | Unregistered CommenterJonathan

=={ I think this may not bode well for CC. }==

Perhaps within an experimental condition, but how about in the real world? In the real world, people sort through fact checks and corrections and counter fact checks. Watch what happens when Trump lies and gets fact checked on it and gets labeled with his pants on fire and then Spicey comes out and explains why the fact-checkers are wrong and Trump didn't lie and Hannity comes on and shows how the supposed lies are fake news promoted by librul media that just wants to hurt Trump. Everyone goes on their merry way, thinking exactly what you would predict based on their preexisting ideological orientation.

Methinks the reports of the death of CC are greatly exaggerated.

April 24, 2017 | Unregistered CommenterJoshua

@ Dan: No, not at all NOMAs. I'm surprised you went there from what I said; I'm curious why.

I guess I'm familiar with the trope that "Science can't possibly have such-and-such answers because that's Religion's domain", though I wouldn't have thought to attribute it to Gould. I guess that speaks to Gould's internalization of a Christian worldview on religion and its role in philosophy and society, and perhaps also his Englishness. The English religious class has been trotting out this old horse ever since it had to deal with divine-watchmaker deism in the 17th century.

In my experience, Christians and people from dominantly Christian cultures are the only ones arguing for NOMAs, and I disagree with the idea completely. For one thing, it's completely nonsensical from a Buddhist perspective. That's my usual diagnostic for something that's a distinctive problem with Christianity, as opposed to all religion. The first step along the Eightfold Path is to find the right worldviews. The questions of what scientists do know, and how that should influence one's choice of worldview, can immediately follow from within a Buddhist religious framework. There's no need for non-overlapping magisteria when one can in fact contain the other.

Nor do I think that the idea of NOMAs really jibes with the thrust of Sunday's speaker: that people are not as divided as partisans would believe, and that even the most partisan people have more common interests with the other side than they think. I think he would say that if you set up a NOMA framing, you instantly give a bunch of people who care about their religion an excuse to just stop listening. If you don't really care about what religion might say, and are just presenting a bunch of things you know from science, why should people who do care about religion care about you?

Scientists, if they want to speak politically, have to speak as citizens. It is, after all, their status as humans and citizens, and not as scientists, that gives them the standing to participate in the court of public discourse. How do -normal- non-scientist citizens speak about issues of political significance? They tell you all the reasons why they care about what the government is doing, and stories they've heard about how it affects people like them. From there, they opine on what the government ought to be doing. Is that how scientists typically frame their arguments? No. Could it be more so? Maybe. Would it be more persuasive? Could be tested.

For example, I remember how debates about conservation in my significant other's freshman science class often turned on the valuation of natural capital itself. There were often people who didn't care about the environmental risks not because they didn't think the risks were real, but because they didn't care about what would be lost!

April 25, 2017 | Unregistered Commenterdypoon

Joshua,

"Methinks the reports of the death of CC are greatly exaggerated."

The Wood and Porter study's primary refutation was of the backfire effect itself, not of overall CC. It only says that people do seem to (roughly but robustly) follow Bayes, but it does not address why priors are so skewed by politics - in fact, it shows this skew of priors in its results. Inasmuch as CC helps explain the skew in priors, yes, it is still very much alive.

But, without cognitive backfire reinforcing the skew, is any part of the skew due to cognition effects, or instead just completely due to differences in environment - the news bubble, amplified by social connections, for instance? Without a backfire effect, it becomes harder to show that this is a cognitive problem.

April 25, 2017 | Unregistered CommenterJonathan

"With a backfire effect, it becomes harder to show that this is a cognitive problem."

With should be Without.

Using Preview post doesn't guarantee good results...

April 25, 2017 | Unregistered CommenterJonathan

Jonathan -

=={ The Wood and Porter study's primary refutation was of the backfire effect itself, not of overall CC. }==

Agreed - but it seemed to me that you were extrapolating from a finding that there is no backfire effect to question the existence of CC...

=={ Without a backfire effect, it becomes harder to show that this is a cognitive problem. }==

...and so you were, right? (assuming that by "this" you mean CC, and that you are thinking of CC as a cognitive problem? Do I understand you correctly?)...

But I'm not entirely convinced that if we eliminate the "backfire effect" we've really made a serious dent in CC. For example, ...

=={ or instead just completely due to differences in environment - the news bubble, amplified by social connections, for instance? }==

How do you separate out news bubbles, social connections, etc., from CC...and how do you determine that they, likewise, aren't explained as cognitive effects (i.e., a product of built in pattern seeking as a building block of our cognitive processing)?

April 25, 2017 | Unregistered CommenterJoshua

Joshua

"How do you separate out news bubbles, social connections, etc., from CC...and how do you determine that they, likewise, aren't explained as cognitive effects (i.e., a product of built in pattern seeking as a building block of our cognitive processing)?"

What cognitive processes are left in CC without backfire? As you say - pattern seeking, also selective memory. But, neither is anti-Bayes, as was backfire. If there is no anti-Bayes component in CC, then there is hope for public discourse to be a corrective, without science-curious training.

April 25, 2017 | Unregistered CommenterJonathan

Jonathan -

=={ What cognitive processes are left in CC without backfire? As you say - pattern seeking, also selective memory. }==

I suppose confirmation bias would fit under pattern seeking, as would apophenia, which is my new favorite word. Perhaps the number of CC-related cognitive biases is small, but they are biggies. But I also think that the psychological processes involved in identity-protective cognition are not separable from "cognitive processes".

=={ But, neither is anti-Bayes, as was backfire. If there is no anti-Bayes component in CC, then there is hope for public discourse to be a corrective, without science-curious training. }==

I don't understand what you're saying there. What do you mean by "anti-Bayes," and why would Bayesian reasoning be required for there to be hope for correction? And why would we not want to rely on science-curious training?

April 25, 2017 | Unregistered CommenterJoshua

"I don't understand what you're saying there. What do you mean by "anti-Bayes," and why would Baysean reasoning be required for there to be hope for correction? And why would we not want to rely on science-curious training?"

I think he's talking about what Dan calls the 'bounded rationality thesis' - the position that the biases are the result of actual irrationality as opposed to different groups having different priors, models, and data within a rational framework.

Bayesian reasoning is the best, most mathematically justifiable method science has found for drawing conclusions from evidence, and is used to represent optimum 'rationality'. The backfire effect as commonly understood is supposedly inconsistent with it (more evidence in favour reduces belief?!), and if it is an ineradicable part of human psychology, then there is no hope for justifiable decisionmaking on culturally contended issues - even if you find ways to evade the effect itself, it means the decisionmaking process is fundamentally irrational and therefore unreliable for finding the truth. If the backfire effect isn't real, but instead either an artifact of precise question wording, or a rational response to contextual information not considered by the experimenters, and if all the other elements of motivated reasoning can be explained in the same way, then the problem reduces to information availability, which is something that a debate exchanging and criticising models, priors and data can in principle solve.

I'm not sure I agree. I think that there are plenty of elements of human reasoning that are Bayes-inconsistent heuristics that are used because truth-seeking isn't the only consideration (minimising mental effort is too), and I think that it's quite possible for arguments about models and priors to be unresolvable by debate even within the Bayesian framework. But I can understand the point.

April 28, 2017 | Unregistered CommenterNiV

Joshua:

"What do you mean by "anti-Bayes," and why would Baysean reasoning be required for there to be hope for correction? And why would we not want to rely on science-curious training?"

By anti-Bayes, I mean a phenomenon that causes people exposed to the same evidence to move further apart on their belief related to that evidence (Bayes theorem, if followed, would never allow that). This is the backfire effect.

I'm not suggesting that Bayesian reasoning is required - it's just my way of pointing out what I think is the most frightening aspect of CC. If people will always move their beliefs closer together when exposed to the same evidence, then exposure alone might be sufficient.

This doesn't mean people aren't resistant to exposure to evidence that conflicts with their beliefs - as in that article you linked to recently, and many others.

If there is no backfire effect for it to fend off, SC seems less important than Dan's data might lead one to believe. It still might be important as a way to counteract resistance to exposure to conflicting evidence.

Sorry for the hyperbole.

April 28, 2017 | Unregistered CommenterJonathan

"By anti-Bayes, I mean a phenomenon that causes people exposed to the same evidence to move further apart on their belief related to that evidence (Bayes theorem, if followed, would never allow that)."

I'd be interested to see a proof of that! :-)

There are three separate inputs to Bayesian updating, one of which is commonly ignored. These are the evidence, the priors, and the statistical model. The evidence is the same. The issue of defining priors for Bayesian methods is one of the acknowledged and widely discussed major philosophical issues with the approach, but everyone tends to assume that the statistical model - the means by which you calculate the probability of outcomes given each hypothesis - is obvious and agreed. In general, it's not.

And people using different statistical models can indeed move further apart on the same evidence, using Bayes theorem. If I rank P(O|H1) above P(O|H2) for some observation O, while you do vice versa, then we will update our Bayesian beliefs in opposite directions given the same observation.
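
To make that concrete, here is a minimal sketch in Python with made-up numbers: two observers share a prior, but because their statistical models disagree about P(O|H1) versus P(O|H2), they assign reciprocal likelihood ratios to the same observation and Bayes' theorem moves them in opposite directions. The priors and ratios below are illustrative assumptions, not figures from this thread.

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior P(H1|O) from prior P(H1) and likelihood ratio P(O|H1)/P(O|H2)."""
    posterior_odds = prior / (1.0 - prior) * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

shared_prior = 0.5  # both observers start in the same place (assumed)

# Observer A's model says O is three times likelier under H1 than under H2;
# Observer B's model says the reverse. Same observation, different models.
posterior_a = bayes_update(shared_prior, 3.0)        # rises to 0.75
posterior_b = bayes_update(shared_prior, 1.0 / 3.0)  # falls to 0.25

print(posterior_a, posterior_b)  # same evidence, same rule, opposite directions
```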

In practice, statistical models are also constructed on the basis of evidence. So on seeing some evidence about a topic, you might change your beliefs about the topic, or you might alternatively revise your statistical model about the probabilities of observations. You might switch to a more sophisticated and detailed existing model, having seen evidence that the model you were using previously is inadequate, or you might create an entirely new one as a result of finding apparent flaws in the old one.

Scientists switch models with such frequency and facility that they often don't even notice they're doing it. If rigid body dynamics doesn't work, they switch to elastic body dynamics. If Newtonian physics gives nonsensical answers, they can switch to relativistic physics. If classical physics doesn't work, they'll switch to quantum mechanics.

Human intelligence is built on multiple models of the world, and switches between them constantly to find one with enough fidelity to give sufficiently accurate results, but which is simple enough to calculate with. It was one of the early issues discovered in AI research - the initial assumption was that experts would have one set of rules to assign probabilities and make deductions, but it was found that this didn't work. Rules that worked in one situation failed in another, which humans had no problem understanding. Marvin Minsky concluded that humans build "frames" which are mental models of some specialised aspect of the world, and switch between them depending on context. Physicists have a 'Newtonian' frame and a 'relativistic' frame. People will interpret a word differently depending on the discussion that went on before, which determines what frame they're currently operating in.

In particular, people with different cultural weltanschauungen use different models to assign probabilities to events. A socialist would presume that price caps benefit the poor by making staple goods cheaper; an economic "neoliberal" would assume they harm the poor by creating shortages and black markets. The poor are observed to be starving - is that because of the price caps the government introduced, or because of the neighbouring US waging economic war to defeat the glorious socialist revolution? The exact same evidence can give rise to opposing conclusions, because people are using different models of how the world works.

The model is itself part of the hypothesis, and subject to being updated on the basis of new evidence. However, most treatments of Bayesian reasoning ignore this, and simply assume the model as a given.

April 28, 2017 | Unregistered CommenterNiV

Incidentally, even using the same model, it's not generally true that Bayesian updating will move people closer on the same evidence. The Bayesian update can be considered in terms of log-odds: the log-odds of the prior is updated by adding a fixed log-likelihood-ratio term based on the experimental evidence to give the posterior. On the log-odds scale, the separation between initially different beliefs remains the same after updating. In terms of probability, the probabilities can either converge or diverge, depending on where you start. Someone who is firmly convinced that the claim is false (prior is 0.000001, say) might update this to a slightly higher but still negligible probability (like 0.00001). Someone with a prior around 0.5 will, on the same evidence, update to a much bigger number (about 0.909). They've got further apart.
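
As a check on those numbers, here is a minimal sketch in Python assuming a likelihood ratio of 10 in favour of the claim (a value chosen only because it reproduces the figures quoted above; the comment doesn't specify one). Both parties move in the same direction, and by the same amount in log-odds, yet the gap between them in probability widens.

```python
import math

def bayes_update(prior, likelihood_ratio):
    """Posterior probability via odds: posterior odds = prior odds * likelihood ratio."""
    posterior_odds = prior / (1.0 - prior) * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

def log_odds(p):
    return math.log(p / (1.0 - p))

LR = 10.0  # assumed strength of the evidence (illustrative only)

sceptic_prior, undecided_prior = 1e-6, 0.5
sceptic_post = bayes_update(sceptic_prior, LR)      # ~0.00001
undecided_post = bayes_update(undecided_prior, LR)  # ~0.909

# Both shift by the same amount in log-odds, log(10) ~ 2.30 ...
print(log_odds(sceptic_post) - log_odds(sceptic_prior))
print(log_odds(undecided_post) - log_odds(undecided_prior))

# ... but the gap between them in probability grows from ~0.5 to ~0.91:
print(undecided_prior - sceptic_prior, "->", undecided_post - sceptic_post)
```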

But I understood what you meant - that people moving in opposite directions on the same evidence and the same model is inconsistent with Bayes. I'm just being pedantic now. :-)

April 28, 2017 | Unregistered CommenterNiV

NiV,

I meant Bayes doesn't allow two people to go in opposite directions from their priors on the same evidence. I do understand how they can get further apart while going in the same direction - which is not backfire at all. I didn't think I was assuming that the people had the same model. But I certainly was partially doing that.

OK - I will refrain from calling the backfire effect anti-Bayes - when it might just mean radically different models are in play. I really should think more carefully about how I phrase things. However, my point is that the Wood & Porter work calls into question (bigly) the backfire effect, and the backfire effect is (for me) the most alarming issue in CC. Where the backfire effect is defined as people moving in opposite directions on the same evidence (for whatever reason).

April 28, 2017 | Unregistered CommenterJonathan

@NiV -- see proof in Politically Motivated Reasoning, sections 2 & 3. But if one accepts that Bayesianism is mute on how to determine the likelihood ratio -- and merely says what to do with it: use it as a factor for multiplying prior odds -- then people polarizing when given the same evidence isn't anti-Bayesian. The only normatively defensible conception of Bayes is one that insists the likelihood ratio be determined by valid truth-seeking criteria.

April 28, 2017 | Registered CommenterDan Kahan

Dan,

I think there are cases where valid truth-seeking criteria can lead to likelihood ratios that diverge (<1 vs. >1). Consider criminal cases where the defendant is being framed. Some will just weigh the evidence without noticing that the evidence is too perfect, and increase their suspicion of the defendant. Others will notice that the evidence is too perfect and suspect the defendant less as a result.

Regardless, nothing like this is happening in Wood & Porter's results.

April 29, 2017 | Unregistered CommenterJonathan

@Jonathan-- All I can say is that after reading Wood & Porter I am even more convinced there is a backfire effect.

April 29, 2017 | Registered CommenterDan Kahan

"I meant Bayes doesn't allow two people to go in opposite directions from their priors on the same evidence."

I thought so. But like I said in the first comment, it's not necessarily true.

"I do understand how they can get further apart while going in the same direction - which is not backfire at all. I didn't think I was assuming that the people had the same model. But I certainly was partially doing that."

It's one of those unstated assumptions that people often don't even realise is an assumption! No blame accrues for doing what everyone else in the field does.

"Where the backfire effect is defined as people moving in opposite directions on the same evidence (for whatever reason)."

I think it's more usually defined as people believing less because of being told to believe more, for whatever reason. The original communication is meant to have one effect on belief, but "backfires" by having the opposite effect to the one intended. It would still be a backfire even if everyone who heard it went in the same direction. It's only the subset of backfires where people do go in opposite directions that raise suspicions about irrationality. That's just my understanding of the semantics, though.

"@NiV -- see proof in Politically Motivated Reasoning sections 2 & 3."

Yes, that includes some classic examples:

In effect, someone who is reasoning this way derives the likelihood ratio from his priors. It is as if he were reasoning: I think the odds that [controversial theory] is 10^-4:1. The National Academy of Sciences just issued an “expert consensus report” concluding that [controversial theory]. That’s not right - so I’ll assign the report a likelihood ratio of 1, or less [cf. Nyhan, Reifler, Richey & Freed 2014] since obviously the authors of the report were not genuine experts.

So, take an extreme example. I think the odds that 2+2=5 are less than 10^-6, but a billion experts from the Ultimate Source of All Scientific Authority Organisation have just announced in a definitive report that 2+2=5. What's more likely? That 2+2=5 and I've been counting my fingers wrongly all these years? Or that the "experts" are wrong, or having a joke, or have been misreported, or something?

The point is that the naive model only takes the truth of 2+2=5 as its hypothesis, but real people are more sophisticated than that. They also include in their hypothesis things like "... and the experts have done their sums correctly, and have been reported accurately, and have not been caught in Stephen Schneider's "double ethical bind",..." and so on. And the evidence can modify the level of belief in any part of the compound hypothesis - not just the bit you think it ought to.

Other classic examples are the stage magician ("did he really saw that lady in half and then stick her back together?!"), and the common paradoxes and fallacies scientists enjoy as puzzles. (Every mathematician knows a few "proofs" of 2+2=5 - if you can't spot the trick, should you believe that they actually did it, or that you're somehow being tricked?). Most people are sensible enough not to believe - even when they can't see how the trick works - because of the strength of their priors. That they're being deceived is a far more likely hypothesis.

In a world where even 'experts' can mislead you, truth-seeking behaviour can and must allow belief in experts to be discounted if what they say appears to be untrue or inconsistent. The priors in cases like this are themselves based on evidence previously seen - the question is not whether the priors per se should influence the interpretation of new evidence, but whether previously seen evidence should be taken into account too. It's true that people do it inconsistently - if they were following Bayes, then the order in which the evidence is presented shouldn't matter, and it does. But the fact they can discount new evidence because of its inconsistency with priors is not itself evidence that they're not truth-seeking in their behaviour, or that other explanations need to be sought.

Most of these examples in the literature use various forms of Argument from Authority as their new evidence. In the absence of strong priors, most people will accept that - it's a reasonable heuristic when trading mental effort against truth-seeking reliability. But even for people who use/trust the AfA heuristic, it's a pretty weak form of evidence, and if authorities conflict with stronger evidence previously seen, it's much easier to discount one's belief in their authority than one's belief in the evidence of one's own eyes.

And given the "Nullius in Verba" principle, I'd say they're a lot more 'scientific' to do so than all the researchers chasing new ways to prop up acceptance of AfA and the public's unquestioning trust in their own pronouncements. It's very easy to see the motivation behind that!

"In effect, someone engaged in motivated reasoning derives the likelihood ratio for new information not from truth-convergent criteria independent of her priors but from the impact that crediting the new information would have on her standing within a group whose members share identity-defining political commitments."

That's why I found the article Joshua found so interesting - "The UIC researchers and Jeremy A. Frimer, a corresponding author from the University of Winnipeg, indicate the divide goes beyond political topics. Respondents also had a “greater desire to hear from like- versus unlike-minded others on questions such as preferred beverages (Coke vs. Pepsi), seasons (spring vs. autumn), airplane seats (aisle vs. window), and sports leagues (NFL vs. NBA),” they wrote."

I really don't think a preference for a particular season or type of airplane seat triggers much concern about one's social standing within one's political group. I think it's more likely people are judging expertise (and their willingness to listen to it) partly on the basis of whether what the experts say appears to be true, rather than judging truth on the basis of what the experts say. And they'll include their own political beliefs as "truths" with which to test them.

Do you have a specific reason for thinking they're not? If you tried your National Academy of Sciences example on a non-political question like 2+2=5, do you think you'd get a different answer?

April 29, 2017 | Unregistered CommenterNiV

Anecdotally related, a link drop:

https://theintercept.com/2017/04/28/how-a-professional-climate-change-denier-discovered-the-lies-and-decided-to-fight-for-science/

April 29, 2017 | Unregistered CommenterJoshua

Dan,

"All I can say is that after reading Wood & Porter I am even more convinced there is a backfire effect."

LOL! A self-fulfilling prophet is the only believable prophet!

April 29, 2017 | Unregistered CommenterJonathan

=={ "All I can say is that after reading Wood & Porter I am even more convinced there is a backfire effect." }==

Is that because of a backfire effect?

April 29, 2017 | Unregistered CommenterJoshua

NiV,

About your 2+2=5 example - it's a bad example because math is outside the domain of discourse of Bayes. The reason is simple: if math is in doubt, then Bayes theorem itself is in doubt, because Bayes theorem is math. It's also considered outside because math is deductive, not inductive/empiricist.

Your magician example is similar to my framed defendant example. I agree that there are such "going meta" cases. But, even then, there are valid truth-seeking examples of "going meta" and non-valid ones. It would not be valid for someone to construct a meta-level counter-argument just because they're experiencing cognitive dissonance at the base level, for example.

I suspect that "going meta" is not involved in the ways most people challenge orthodoxy (in those rare cases when they do). That doesn't mean they would not be receptive to meta-level counter-arguments from others, just that they are not likely the source of such meta-level counter-arguments. I further suspect that most people don't differentiate among the meta levels - hence live in a "logical flat-earth". In terms of your magician example, this means that most people are not deceived because others have informed them that magicians use deceptive tricks. However, most people are still fascinated by magicians because they perceive a great skill is needed to deceive that well, and that they should be able to see through such deceptions - they have not been informed by the cognitive biases folks yet that such deceive-ability is natural, expected, ubiquitous and mostly "in the mind of the deceived".

However, I also suspect that "going meta" is often used by high OSI types to aid in their challenges to orthodoxy. It is also sometimes used by orthodoxy itself to dismiss challenges.

The problem with reckless use of "going meta" is that such cases are less correctable. A sufficiently strong urge/ability to go meta would probably lead one to become a conspiracy theorist. The ultimate case would be nihilism.

April 29, 2017 | Unregistered CommenterJonathan

"Anecdotally related, a link drop:"

Funny! Thanks for that!

What were the two devastating arguments that persuaded this conservative, lifelong professional sceptic and veteran debater to change his mind? That "some" of Hansen's predictions (which ones?) were "spot on", and that "Just because the costs and the benefits are more or less going to be a wash, he said, that doesn’t mean that the losers in climate change are just going to have to suck it up so Exxon and Koch Industries can make a good chunk of money."

I'm not sure what point you're trying to make, though. Did you really mean this as an example of a "backfire" communication, one intended to persuade people of one position, and through its utter implausibility (to anyone who has ever argued with a climate sceptic) having precisely the opposite effect? Does it?

It was very funny, anyway. :-)

April 29, 2017 | Unregistered CommenterNiV

"About your 2+2=5 example - it's a bad example because math is outside the domain of discourse of Bayes. The reason is simple: if math is in doubt, then Bayes theorem itself is in doubt, because Bayes theorem is math."

And why is that a problem? Godel's theorem considers the consistency and correctness of mathematics, and is mathematics. Its own truth/consistency is within the domain of its own discourse.

"It's also considered outside because math is deductive, not inductive/empiricist."

Actually, that's rather questionable if you get deeply enough into the philosophy. For one, mathematics chooses axioms aimed at modelling the world, and that's physics and therefore empirical. The "number of things" is a physical observation, that we formalise with our choice of mathematical rules, but in a universe with different physics we would pick different rules. (And in fact, in quantum mechanics "the number of things" is a quantum operator that can give fuzzy answers. Even in this universe, "number" is more complicated.) But that's all off-topic.

"I suspect that "going meta" is not involved in the ways most people challenge orthodoxy (in those rare cases when they do)."

Why do you think that?

"I further suspect that most people don't differentiate among the meta levels - hence live in a "logical flat-earth"."

You do?

Do *you* live in a logical flatland?


"In terms of your magician example, this means that most people are not deceived because others have informed them that magicians use deceptive tricks.

So they have a prior belief, yes? One which causes them to draw opposing conclusions from the same evidence compared to someone who hasn't been told? A lot of people believed Uri Geller. A lot of others didn't - and the more magic tricks he did the less they trusted him. Isn't that the sort of thing we're talking about?

"However, I also suspect that "going meta" is often used by high OSI types to aid in their challenges to orthodoxy. It is also sometimes used by orthodoxy itself to dismiss challenges."

Yes.

"The problem with reckless use of "going meta" is that such cases are less correctable. A sufficiently strong urge/ability to go meta would probably lead one to become a conspiracy theorist. The ultimate case would be nihilism."

Did you mean Solipsism?

Yes, I agree. Conspiracy theories become self-sustaining, because the absence of any solid evidence is exactly what you would predict from an all-controlling powerful conspiracy able to cover it up. Valid scepticism about sources has gone wild, and rejects anything outside the belief system as part of the cover up.

A lot of mental illnesses are actually about perfectly normal cognitive processes being taken to an extreme. The fact that conspiracy theories are so common (like climate scepticism being secretly funded by Exxon/Koch...!) among humans - even highly educated ones - suggests that sort of meta-scepticism is a big part of people's cognitive machinery.

The trick is to be aware of the possibility, and to consider the competing standards of evidence before rejecting a claim. This is why, as I keep pointing out, the fact that less science-literate partisans are less polarised is so important. People only reject claims they don't like if they can construct a "truth-seeking" reason to do so. They're no more comfortable with the conclusions than their high-OSI co-partisans, but everyone likes to believe they're rational.

April 29, 2017 | Unregistered CommenterNiV

NiV,

"Godel's theorem considers the consistency and correctness of mathematics, and is mathematics. It's own truth/consistency is within the domain of its own discourse."

The domain of discourse of Godel's Theorems is Peano arithmetic (or any system that contains Peano arithmetic), but the deductive rules and conclusions of Godel's Theorems are not in Peano arithmetic (or whatever Peano-arithmetic-containing system is chosen as their domain of discourse). Furthermore, Godel's Theorems are the very proof of the reason why Bayes Theorem cannot be about Peano arithmetic - because Bayes theorem is a deductive rule that is a formula in Peano arithmetic, and Godel's theorem (part 2) states that mathematical systems that are at least as strong as Peano arithmetic cannot consistently reason about their own validity. The formula "2+2=5" is also in Peano arithmetic, hence Bayes cannot consistently reason about its validity. Godel's theorems were the end of the "logical flat-earth" project in mathematics, undertaken famously by Russell and Whitehead. As a result, mathematical logic has since then adopted a "leveled" approach, where one must be extremely careful about which level any particular formula is at, so as not to create a level cycle.

The word "math" is itself a flat-earth category - as I should have said "Peano arithmetic" all along. However, I do suspect that you attempted to use Godel's theorems without understanding enough about them.

April 29, 2017 | Unregistered CommenterJonathan

Oops - actually, Bayes, being about real probabilities, is not in Peano arithmetic, but instead in a system of reals that includes Peano arithmetic. But, no matter - Godel's theorems still apply to that system, and prevent Bayes from being a valid rule of inference about the very system it is confined within.

Also, slight oops - one could write a Bayes-looking formula that is quantified over whatever arithmetic system one chooses, but it wouldn't (according to Godel's theorems) be able to get all of its support from that system, hence it would not be Bayes theorem (as its proof, if it existed, would not be the same proof as any used to prove Bayes).

Mixing formal and informal talk about math is a bitch...

April 29, 2017 | Unregistered CommenterJonathan

"However, I do suspect that you attempted to use Godel's theorems without understanding enough about them."

Hmm. I think I could say the same. :-)

Godel originally proved his theorems for the arithmetic system in Russell and Whitehead's Principia Mathematica, not Peano arithmetic. However, it works for Peano arithmetic, Zermelo-Fraenkel (ZF), Zermelo-Fraenkel with the axiom of choice (ZFC) and any extension of those. Since ZF/ZFC are the most common axiomatisations of mathematics used, it can be said to apply to mathematics generally - bar a few very restricted corners of it.

"but the deductive rules and conclusions of Godel's Theorems are not in Peano arithmetic (or whatever Peano-arithmetic-containing system is chosen as their domain of discourse)"

Yes they are. That was the entire point of Godel's theorem - that he encoded the deductive rules and conclusions of the theorem as statements and relations in number theory. Thus, anything that could do number theory thereby included the theorems.

"Furthermore, Godel's Theorems are the very proof of the reason why Bayes Theorem cannot be about Peano arithmetic - because Bayes theorem is a deductive rule that is a formula in Peano arithmetic, and Godel's theorem (part 2) states that mathematical systems that are at least as strong as Peano arithmetic cannot consistently reason about their own validity. "

Mathematical systems cannot prove the consistency of the *entire* system, but they *can* reason about the consistency of subsets of them - and individual theorems in particular. For example, Peano arithmetic can be proved consistent using ZFC, which contains it.

The mechanisms of Godel's theorem *do* provide a way for a mathematical system to reason about such matters as proof and consistency of their own theorems. There are limits to what it can achieve, but it's like humans doing mathematics. Humans cannot prove that human-constructed mathematics is consistent, but humans can most definitely reason about many human-constructed proofs. Godel numbering is just a way to automate part of it.

"Godel's theorems were the end of the "logical flat-earth" project in mathematics, undertaken famously by Russell and Whitehead. As a result, mathematical logic has since then adopted a "leveled" approach, where one must be extremely careful about which level any particular formula is at, so as not to create a level cycle."

I'm guessing that by "levels" you're talking about Russell's theory of types, which was already incorporated into Russell and Whitehead's work by 1908. Subsequent mathematics tends to regard it as too clumsy and restrictive, and has worked to relax Russell's rules considerably.

Bayes theorem is a straightforward theorem in measure theory, and it's easy to assign measures to the space of theorems (they can be mapped onto numbers by Godel numbering, and you can certainly apply measures to sets of numbers). The only subtlety you have to be careful of is to distinguish Bayesian probability from Bayesian belief. The philosophical questions about whether probability is real are fraught enough in physics (what with the universe being deterministic according to some interpretations), but it's highly questionable what it means to say a theorem is "probably" true. Is the Riemann Hypothesis "probably" true? Most mathematicians would say so. But most of the problems clear away if one recognises that it is actually Bayesian belief they're talking about.

Also, the uncertainty people might have about arithmetical questions is not purely an issue of mathematical truth - it's largely an issue of the potential fallibility of the computation done by the brain, so we're actually talking about physics and biology, not mathematics. Humans are prone to a range of errors where there are 'bugs' in the algorithms the brain uses - like optical illusions, where you can *see* spots where there are none. It's legitimate to ask whether other 'direct perceptions' of facts like 2+2 being equal to 4 could be the result of the same sort of universal 'bugs' in human reasoning, and to admit some (very small) uncertainty about the question. Uncertainty means we can apply Bayes. We can never be sure that we're not making a mistake in doing so, since if maths is inconsistent then Bayes is unreliable. But that applies to all human reasoning anyway. We reason with fallible computers made of meat. The best we can ever do is approximate the truth with some high level of confidence. But being fallible is no reason to give up reason entirely.

"Mixing formal and informal talk about math is a bitch..."

Yes, it is! :-)

April 29, 2017 | Unregistered CommenterNiV

NiV,

When you say that Godel "encoded the deductive rules and conclusions of the theorem as statements and relations in number theory," so that "anything that could do number theory thereby included the theorems" - that's certainly true, but not the issue here. The issue is the inference step necessary to prove Godel's theorems. The Godel sentence G for system S cannot be proven by system S - precisely because the Godel sentence G for system S is "The Godel sentence G for system S cannot be proven by system S". Thus, one can infer that if G can be created in S, then S cannot prove G without becoming inconsistent. This last inference step is valid for us, but not within system S. If it was valid in S, then S could prove G, which would be a contradiction.

So, here's a rough sketch of a proof of my point: Pick a system S that can be used to prove Bayes theorem. Can Bayes then be used to infer the validity or invalidity of system S, such as by probing contradictory statements like "2+2=5" in S? It would be a problem if it was able to infer "2+2=5" is true, as that would imply S is inconsistent (I'm assuming you didn't pick a system S that doesn't include Peano arithmetic). It would also be a problem if it was able to infer any false statement like "2+2=5" is false - as that would allow it to infer the negation of the Godel sentence G is false, which would be equivalent to inferring that G is true, which is a contradiction. Note that G, in order to be the Godel sentence for S, would include encoding about inference by Bayes theorem itself (which is the really weird part for me - normally mathematical inference isn't done evidentially/probabilistically - but if Bayes is to be included as an inference system of mathematical validity such that statements like "2+2=5" are open to Bayes, then it must be encoded in G's 'provable' predicate. You'd also need to choose some probability <1 above which G considers things "proven", since Bayes can't reach 1. If you choose 1, that is effectively ruling Bayes out as a rule of inference.).

Hmmm - but at one point you say you're talking specifically about Bayesian belief. I guess that means your case about "2+2=5" is not meant to be about the consistency of some system of mathematics, but is instead a question about the beliefs of some person about statements in that system, as those beliefs are maintained by Bayesian inference. In that case, the probabilistic values that are being manipulated by Bayes' formula are not about the likelihood that 2+2 objectively is or is not 5. That is like saying that G's provable predicate does not include Bayesian inference. In that case, such use of Bayes is not then trapped as above.

However, as someone undergoing belief maintenance, I would not use Bayes theorem on mathematical statements like "2+2=5" - because the moment I thought that "2+2=5" or some similarly obviously inconsistent mathematical statement was at all likely, I'd have to question the validity of Bayes theorem itself. And, you sort of agree, I think: "We can never be sure that we're not making a mistake in doing so, since if maths is inconsistent then Bayes is unreliable." I would instead say "if maths is inconsistent then Bayes is utter rubbish". One could use any formula at all just as (un)reliably. Or just make up results - and they don't even have to conform to probabilities (0<=p<=1). "But that applies to all human reasoning anyway" - unreliable, yes; utter rubbish, I hope not! Although, by Godel's 2nd theorem, I'll never know.

What about open questions in math, such as P = NP? I'm still not really Bayesian here, because the moment I see a formal proof (assuming it has a humanly-understandable proof, and understandable by me) one way or the other, I recognize a mathematical truth and I act like its probability is pinned at 1. So, when I say something like "I think P <> NP is more likely than P = NP", that isn't a belief formed by Bayesian inference - I am instead making some non-mathematical informal guess. It's a useful fiction. I'm not at all under the impression that I'm maintaining this belief in strict accordance with Bayes theorem. I may appear to be maintaining it in a roughly Bayesian-like way, but only because that's all I can think of doing.

Finally, if math statements are in the Bayesian domain of discourse, with probs < 1, then what is your prior for the validity of Bayes theorem, and what kind of evidence makes you modify that belief? If you have a Bayesian-inferred belief about something at 0.9, and you encounter good evidence that it is false, do you modify that belief, or use that evidence to modify your belief in Bayes theorem, or both? Since your belief in Bayes theorem is less than 1, what other conflicting inference technique(s) are you open to as a result, and how do you combine their usage?

Which is why "2+2=5" is a bad example of how Bayes' theorem could allow for the effect we were originally discussing - of which the magician and the framed defendant cases are better examples. Neither of those examples throws all of mathematics (rather: our belief in all of mathematics), including Bayes' theorem itself, into turmoil.

April 30, 2017 | Unregistered CommenterJonathan

"The Godel sentence G for system S cannot be proven by system S"

The Godel sentence isn't the Godel theorem. The Godel theorem is that the Godel sentence cannot be proved if the system is consistent. That can be proved.

" Can Bayes then be used to infer the validity or invalidity of system S, [...]?"

No. Bayes can be used to assign a measure to the belief in the consistency of S, that interacts with other beliefs in a way consistent with probability. That's nothing at all like proving the validity or invalidity of S.

Being able to say we believe the Riemann Hypothesis is "probably true", does not constitute a proof of the Riemann Hypothesis. Being able to say we believe that mathematics is "probably consistent" does not constitute a proof that it is.

"You'd also need to choose some probability <1 above which G considers things "proven", since Bayes can't reach 1."

Nothing can. We always have to assume that we made no mistakes in our proof. If the probability of making an error in a step is p, and the proof has n steps, then the probability of correctness is Pc = (1-p)^n, and if k people check the proof and their errors can be considered to be independent, the probability of correctness is 1 - (1-Pc)^k. It's easy to get a very high probability of correctness this way, but there's always a remote chance that everyone who checks it makes exactly the same set of errors in the same places when calculating/proving it.
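
A back-of-the-envelope run of that arithmetic, with assumed values for the per-step error rate, proof length, and number of independent checkers (none of these figures come from the comment; they are only there to show the scale):

```python
# Assumed, purely illustrative numbers:
p = 0.001  # probability of an error in any single step
n = 200    # number of steps in the proof
k = 5      # independent people checking the proof

Pc = (1 - p) ** n               # chance one pass through the proof is error-free (~0.82)
confidence = 1 - (1 - Pc) ** k  # chance at least one of the k checkers made no error (~0.9998)

print(f"Pc = {Pc:.3f}, confidence after {k} independent checks = {confidence:.4f}")
```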

Note - our confidence in the truth of the theorem is based on observing a number of fallible proof-machines proving it, with small but non-zero probabilities of error. Looks very Bayesian, doesn't it?

"." I would instead say "if maths is inconsistent then Bayes is utter rubbish". "

If we found maths to be inconsistent, we'd just modify the axioms to deal with the problem.

When Russell found Russell's paradox rendered naive set theory inconsistent, he just changed the rules to keep the results he wanted and exclude the inconsistency. It's the same with human reasoning. If you discover you have misunderstood something and are reasoning incorrectly, you just change the way you reason to fix it. Stop using the invalid rule.

If we found an inconsistency that broke all systems with arithmetic, though, it would probably break human reasoning, too, since human reasoning includes numbers as a built-in feature. There's no guessing what the consequences of that would be, though.

". So, when I say something like "I think P <> NP is more likely than P = NP", that isn't a belief formed by Bayesian inference "

Why do you believe it, then?

I'd say something like: "Given the amount of interest and research in the question, and given that most simple questions have simple answers, if there was a simple algorithm for converting NP to P someone would have found it by now. Nobody has, so I think they're probably unequal." I've modelled a predicted consequence from their equality, made an observation to see if the prediction occurred, and found that it didn't. I've then modified my belief in the hypothesis in (rough) accordance with my assessment of the probability of researchers failing to find an algorithm when there is one, versus the probability of them failing to find an algorithm when there is none.

That's an application of Bayes theorem. My assessment of the probability of researchers finding an algorithm if there is one is similarly based on observation and experience. I've seen how often people are able to find algorithms to answer interesting questions, and how long it usually takes them. I've seen how often simple questions have simple or irreducibly complicated answers. And yes, there's a certain amount of guesswork/assumption in there too, but Bayes says nothing about how the statistical model is to be constructed. It seems pretty straightforward to me.

--
By the way, thanks for the interesting discussion. I don't often get the chance to argue at this level about interesting stuff with people who understand it. :-)

April 30, 2017 | Unregistered CommenterNiV