Newsletter 159 - October 3, 2023
by Heidi Burgess and Guy Burgess
Sorting out Facts
As important as it is that we learn how to communicate in ways that limit misunderstandings, it is equally important that we learn how to ensure that the information we communicate is trustworthy and accurate. In today's political environment, it is very hard to distinguish "true facts" from "fake facts" (and here we are referring to information that is really fake, not information that is actually true but that someone has labeled "fake" because they don't like it or don't believe it because of various cognitive biases). Given that most people choose to get their information from sources that agree with their worldview and, generally, do not challenge their core beliefs (even when they are wrong), it is very hard to know what is really true and what is not. But it matters. The real world can really hurt you if you do not take adequate precautions — precautions that increase in effectiveness in direct proportion to the accuracy of the information upon which they are based. COVID can really kill you (or ruin your health), even if you don't believe it exists. Measles is a real threat to your children, particularly if you don't vaccinate them because of unfounded fears that the vaccine causes autism. Climate change threatens our children and grandchildren even if we think otherwise. In short, it matters what we believe and how we act on that information.
That said, it is important to understand that not everything "science" says is trustworthy, and that much non-scientific knowledge is valid. Look at the science around nutrition, for example. For a long time, we were told that high-carbohydrate, low-fat diets were good for us. Then they weren't. We were told butter was bad for us. Then it wasn't. Now it may be again (I haven't followed this recently). As Heidi searched Google for examples of "bad science," she came up with a story published in the venerable journal Science claiming that gay political canvassers could change the minds of long-time anti-gay voters with just a short conversation at the door. It turns out, though, that this isn't true — the data were faked.
The problem, unfortunately, goes deeper than a relatively modest number of cases of scientific fraud. Psychology, for example, is now struggling with the fact that many important research findings cannot be replicated. There are disturbing allegations that scientific journals are subtly pressuring authors to alter their findings in ways that more closely correspond to favored narratives. Even more serious problems plague much of medical research, as one especially comprehensive review of sources of error has explained.
This, unfortunately, is just the tip of the iceberg. There are lots of reasons why reasonable people (and, especially, journalists) might question "scientific" findings, especially in today's hyper-polarized environment. To protect ourselves, we all need to be much more careful about what we believe and what we don't. Consider the source. Is it reputable? Does it have a vested interest in a particular answer? Does it have a political agenda? (Sometimes organizations with vested interests will still do good science, but one should be more wary.) Get second and third opinions on things that matter. And don't spread information further unless you know the source and have good reason to believe it is true. Be especially wary of sources that share the kind of inflammatory content that is likely to get lots of "likes" when shared on social media. These outrageous stories are, almost always, false or highly misleading, and they are big drivers of hyper-polarization.
Scientific Disagreements and Uncertainties
What do you do when scientists disagree with each other, or seemingly keep changing their minds? Years ago Senator Ed Muskie, exasperated over the unwillingness of scientists to provide definitive answers to complex public policy questions, was said to have wished for "a one-armed scientist" — someone who wouldn't keep saying "on the other hand..." Unfortunately for Sen. Muskie, responsible scientists need to openly acknowledge their disagreements and the inevitable uncertainties associated with their work. The issues they study are immensely complex, and even the best and most lavishly funded scientists don't have the ability to sort out all the uncertainties enough to be 100% sure of their answers. Different scientists, using different assumptions, data sources, and methods, can and often do reach very different conclusions.
In conflict situations, it is common for competing advocacy groups to seek out experts whose analyses support their predetermined policy preferences. The result is that competing interest groups each have their own "scientists" — scientists who advocate for different policies based on (sometimes) the same science and (other times) different scientific "facts." In such circumstances, it is helpful to have scientific experts from all political camps work together to sort out their differences. Are they simply interpreting the same data differently, and could they come to a consensus conclusion if they worked together? Or, if they are using different data, how was each data set collected? Were the methods reliable? Are there joint research projects they could conduct that would resolve these disagreements? Bottom line: if the competing scientists are willing to work together, they are much more likely to come closer to a consensus interpretation of "the facts."
It is also important to remember, however, that all science involves irreducible uncertainties and, as additional information accumulates, it is both normal and desirable for past conclusions to be disproven and for new facts to take on the role of "best currently available information." That's the way science advances (especially in situations where we are confronting a new, quickly changing, and poorly understood phenomenon like the COVID-19 pandemic).
Differentiating between Facts and Values
Just as it has become difficult to differentiate between "real facts" and "fake facts," it has also become very difficult to differentiate between facts and values. But this distinction, too, is essential. Facts are accurate descriptions of some aspect of objective reality. For example, the number of people who voted for each candidate in any election is a knowable fact. The changes in average temperature in various parts of the world, and the speed with which glaciers are disappearing and contributing to sea level rise, are (while perhaps hard to measure) knowable facts. The number of people who have died of COVID is another theoretically knowable, but hard to measure, fact. (If someone died of a heart attack while they had COVID, is that a death from COVID? That is debatable, but as long as one makes the criteria clear, the counts should be reasonably objective. This assumes, of course, that the people attending these deaths have the needed time and expertise and that there is a reliable reporting system available for collecting this information.) Measurement costs and difficulties mean that even relatively straightforward facts like these involve significant uncertainties (e.g., we are 95% sure that the true number falls between X and Y).
These are just two relatively simple examples. It turns out that, as Jon Askonas explains in his excellent article on the history of facts, the very idea that there are facts is surprisingly recent and is currently under attack; as Yascha Mounk documents in his new book on identity politics, many now question the very existence of objective reality.
Values, on the other hand, are people's opinions about what is right or wrong, good or bad. People might prefer the agenda of Republicans or Democrats — that preference is a value. Who legally won the election, though, is a fact, even though you may disagree with the judicial judgments that ratified that election. People may believe that climate change is the most important problem facing the world today — that's a value judgment. Or they may believe that climate is not nearly as important as another issue, for instance, the economy. That, too, is a value judgment. It is the judgment regarding what is and is not important that places this in the realm of values. But whether the Arctic is getting warmer and glaciers are disappearing — that's a fact. That's even a fact that is fairly easy to measure.
Now, whether that warming and melting is caused by human behavior, and whether it is a long-term trend or a short term "blip," is also a theoretically knowable fact. But it is a fact that is harder to measure and prove, since the climate system is so extremely complex. This means that the uncertainties are much larger than those surrounding simpler factual questions. The vast majority of scientists believe it is true, which means (it seems to us) that it probably is, and it is our opinion that we should behave assuming that it is correct, because further delay, waiting for proof, is simply too dangerous. But that's our opinion, reflecting our values. It is not a fact.
The decisions we make about what to believe and how to act in the face of scientific uncertainty are value judgments. Similarly, people may think that forcing people to get vaccinated is unfair. That's an opinion, a value. But the idea that vaccines help reduce hospitalizations and deaths from COVID or measles or flu — that's a fact that has been firmly established, at least according to our understanding of the current science. While science (and rigorous legal and policy analysis) can predict, with much more accuracy than other methods, the outcomes that are likely to result from different policy choices and courses of action, in a democracy it is up to the citizens, through their elected officials, to decide which policies (and expected outcomes) to pursue.
This distinction can be visualized by thinking of science and fact-finding as focused on finding the answer to a series of "if.../then..." statements: if you do A, then X is likely to result; if you do B, then Y is likely to result; and if you do C, then Z is the likely result. Value judgments are, by contrast, focused on making decisions about whether X, Y, or Z is most preferable and, on that basis, choosing to do A, B, or C.
People, particularly politicians, but many other people as well, muddle the distinction between facts and values. For example, consider those who claim that science says we should require that all students be vaccinated against COVID-19. What is really happening is that such individuals are making a policy recommendation based on their personal beliefs about how we should balance the costs, benefits, and risks that scientists have told us are associated with COVID-19 and the vaccines. Other people, with different risk profiles, are likely to look at the same information and come to different conclusions. What policymakers need to do is balance these competing judgments — that's not a job for science. Science's role is to make the information upon which those judgments are based as accurate as possible.
Translating Science into Widely Understandable Language
The above discussion raises another critical issue: How does the general public find out what the science says? And how do we know what science to believe? One big problem with science is that it is generally published in journals that are written for other scientists in the same field. So the articles are written using terminology that is only understandable to people with similar, usually advanced, training in that particular field. Other people often cannot understand it.
Some journalists make the effort to learn enough scientific language to become science writers, and they then try to translate the technical writing into something the general public can understand, in ways that separate findings with important societal implications from academic curiosities of little interest to the broader society. Some of these folks are better than others, and all are better in some fields than others. So while some errors occur this way, reading their stories is usually more helpful than trying to wade through volumes of jargon-laden scientific documents.
In presenting scientific information it is, of course, important that journalists do so in ways that accurately represent what is currently known and not slant their stories or focus on only those sources that reinforce predetermined narratives designed to do things like attract a wider audience or advance a personal or organizational political cause. For example, Roger Pielke argues, in this article, that most of the stories that are now attributing extreme weather to climate change are presenting as established scientific fact conclusions that differ substantially from the conclusions of the prestigious Intergovernmental Panel on Climate Change.
Even better than articles by science writers are articles written by the scientists themselves who take the time to write generally-accessible and understandable articles that explain what they have learned and why it is important for the larger society to understand their findings.
Despite such efforts, though, there is still a large segment of the public that does not understand or trust anything labeled as "scientific." Given the politicizing of science described above, this is understandable. Plus, many people have seen many reversals and don't understand that reversal is the nature of science (as scientists learn more, they revise or disprove what was previously believed). That is part of what is so confounding and confusing about the science around COVID. When the pandemic first started in January 2020, we knew very little about the virus. Scientists and doctors made their best guesses about how it would behave based on knowledge about how other coronaviruses behaved. Over time, and after many more studies, scientists began to learn more. So advice about what people should or shouldn't do to stay safe (or safer) changed over time.
The Politicization of Science
Some of that advice also may have been tainted by political (or practical) concerns. When Americans were first told that we didn't need masks, that only health care workers needed them, that didn't make sense to us. We agree that health care workers probably needed masks more than others, since they were coming into close contact with infected people far more often. But that wasn't what was said. What was said was that ordinary people didn't need masks to keep themselves safe.
Then, when scientists learned more about the aerosol transmission of the virus and, coincidentally, we became able to produce more masks, government "experts," scientists, doctors, and science writers began telling people we all needed to wear masks. It is not hard to see why some people would not trust these "experts," who first said one thing, but then another. But that's the nature of science. With it, we learn. We become smarter and more able to interact effectively with complex systems such as pandemics.
But added to that, of course, is the fact that wearing masks and getting vaccinated became political, at least in the U.S. If you wore a mask, or got the vaccine, that was a signal, in many contexts, that you were a Democrat. Many on the right were told that masks and vaccines weren't needed and were being needlessly imposed by "big government" and lying scientists. This distrust of science and the politicization of the pandemic gave the U.S. one of the highest COVID death rates in the world.
Just as we were finishing up this article, we read a newly published article by Jessica Weinkle, entitled "Don't hate the player, hate the game." The essay points out the degree to which scientific investigation, and particularly fact-reporting, has become politicized. Journal "peer reviews" are twisted to favor publishing articles that meet the journal's values, while declining those that don't. Weinkle focused on the journal Nature, saying: "FOIA [Freedom of Information Act] investigation into policymaker communications about the potential lab origins of COVID-19 reveal serious problems in authors misrepresenting the state of knowledge and pressure from political interests on the Editor in Chief, Magdalena Skipper." Weinkle also cites the work of Patrick Brown, who, she says, has
expertise in extreme weather attribution science and publishing in high powered journals. Brown succinctly outlined the whole racket step by step. 'The worse offense, and Brown’s overarching point, is that this entire scheme from politicized science to the business of publishing does little to nothing to improve society’s ability to advance towards its publicly stated value goals.'
Brown's article, not surprisingly, spurred outrage. Weinkle's observation:
By articulating the structure of knowledge production, the underlying belief system and motivations, by outlining a how-to, he shows how the game is played.
Brown makes it difficult to ignore the decades worth of abundant observations that mainstream climate change science is not just politicized, it is big business. And elite journals are in on it.
So what does this mean? Does this mean that we shouldn't trust any science? That we should believe whatever we prefer to believe because our "lived experience" is better than the "crooked science"? The truth is that, because of the unavoidable limitations of our personal, lived experiences, we have no choice but to rely upon the systematic compilation and scientific analysis of data about our complex world. "Lived experience" did not bring us the wonders of the modern world — it took science and technology to do that. We can slam COVID science all we want, but it did develop COVID vaccines faster than anyone thought possible and clearly saved hundreds of thousands of lives.
But we also have no doubt that politicians should stay out of the lab and out of the scientific publishing business. There should be a firewall between the two. We used to be skeptical of studies funded by industry — studies of the safety of power plants, for instance, that were funded by energy companies. But it now appears we need to be much more vigilant in our efforts to protect science from political pressures while simultaneously working to address perverse incentives that have arisen within the scientific community — incentives that are distorting its findings and undermining its critically important contribution to society.
This mess needs to be cleaned up — because facts DO matter. If we are pursuing politically expedient solutions to climate change that won't really work because the climate is not a political animal, we're in trouble. We need to understand, as best as humanly possible, what is really going on, and then we need to have as accurate an image, again, as humanly possible, of which responses will likely make a difference and which will not. It will take "clean science" to do that.
Another idea that might be tried is to complement peer reviews, which we acknowledge are essential for keeping science trustworthy, with reviews by people outside the author's area of expertise to help assure that the paper addresses topics of broader social concern in an understandable way. Such explanations wouldn't need to be throughout the paper — the methods section, for instance, could be highly technical while the discussion and conclusion sections should be written in a way lay people could understand. This would go a long way toward making science more trustworthy and trusted.
At the same time, experts and science writers need to take care not to "talk down" to non-experts. A ballot initiative discussed in our hometown of Boulder, Colorado, a couple of years ago argued that a large annexation of property into the city should be put up for a vote of the citizens before it went forward. The argument made against that initiative was that the issue was "too technical" for ordinary people to understand, and that the decision should be left to "the experts." We can't think of anything that would make the general public more suspicious! How does someone know what we can and cannot understand? We certainly understand that this annexation would lead to significant growth of our community, straining community services significantly. It would also urbanize a large piece of open space that is vital to many non-human species and is enjoyed by a great many humans who regularly recreate there. Most of our neighbors thought that they were capable of comparing the likely costs of annexation with the expected benefits.
Part of what was at issue here, again, was the confusion of facts and values. The scientists could study the technical impacts of annexation. Flood mitigation was one of the big issues, as were growth and environmental impacts. But once the likely impacts are presented by scientists, someone has to make a value judgment about the relative importance of the various issues. Is better flood mitigation more important than slowing growth? Is it more important than retaining the natural wetlands? Questions about what is better or worse, more important or less important, are value questions. And those questions must be answered by the citizens, or the representatives citizens choose to represent them, not solely by scientists.
Conclusion
In this essay we have tried to highlight two major points. First, our ability to successfully navigate the many complex problems that currently plague society will ultimately depend upon the accuracy and sophistication of our collective understanding of these problems and the relative merits of possible solutions. This will require 1) the systematic collection of reliable data, 2) trustworthy analyses of that data, 3) the understandable presentation of policy-relevant findings to the public (and its democratically selected decision-makers), and 4) the sensible use of that data to guide societal problem-solving efforts. Our second point was that society's systems for doing these things are, in many ways, failing and in need of major reforms. This is an extraordinarily complex and multifaceted problem which we simply must find a way to effectively address. As we have argued in previous editions of this newsletter, the only way that we see to meet this challenge is through a massively parallel series of initiatives, each designed to address one or a few of the many problems outlined above. We would greatly appreciate learning about (and forwarding to our readers) any information you might have about people who are working to address some of these problems.
Please Contribute Your Ideas To This Discussion!
In order to prevent bots, spammers, and other malicious content, we are asking contributors to send their contributions to us directly. If your idea is short, with simple formatting, you can put it directly in the contact box. However, the contact form does not allow attachments. So if you are contributing a longer article, with formatting beyond simple paragraphs, just send us a note using the contact box, and we'll respond via an email to which you can reply with your attachment. This is a bit of a hassle, we know, but it has kept our site (and our inbox) clean. And if you are wondering, we do publish essays that disagree with or are critical of us. We want a robust exchange of views.
About the MBI Newsletters
Once a week or so, we, the BI Directors, share some thoughts, along with new posts from the Hyper-polarization Blog and useful links from other sources. We used to put this all together in one newsletter which went out once or twice a week. We are now experimenting with breaking the Newsletter up into several shorter newsletters. Each Newsletter will be posted on BI, and sent out by email through Substack to subscribers. You can sign up to receive your copy here and find the latest newsletter here or on our BI Newsletter page, which also provides access to all the past newsletters, going back to 2017.
NOTE! If you signed up for this Newsletter and don't see it in your inbox, it might be going to one of your other email folders (such as Promotions, Social, or Spam). Check there or search for beyondintractability@substack.com, and if you still can't find it, first go to our Substack help page, and if that doesn't help, please contact us.
If you like what you read here, please ....