Ep. 44: The Science Of Misinformation And How To Prevent It With Dr. Sander van der Linden



In today's environment, anything, from the pandemic to climate change to science itself, can be politicized. People need to understand what misinformation is so that they can avoid fake news. Join Dr. Alaina Rajagopal and her guest, Dr. Sander van der Linden, in this discussion on how to fight misinformation and learn what is real and what isn't. Dr. van der Linden is a Professor of Social Psychology at the University of Cambridge and the Director of the Cambridge Social Decision-Making Lab. Learn all about medical misinformation, fake news, bias, how social media helps inflate misinformation, and the role science has in decision-making and information. Discover what Dr. van der Linden is doing to help people spot misinformation. Tune in and learn what you can do to avoid getting duped by fake news and stop the spread of incorrect information.


Listen to the podcast here:




The Science Of Misinformation And How To Prevent It With Dr. Sander van der Linden


We're talking to Dr. Sander van der Linden who's an expert on misinformation. Over the last few years, it's become increasingly apparent how the spread of incorrect information can be damaging on a societal level. We'll talk a little bit about where misinformation comes from, what it is and how you can stop it. Welcome, Dr. Van der Linden.


It’s a pleasure to be on the show.


Dr. Van der Linden is a Professor of Social Psychology at the University of Cambridge and Director of the Cambridge Social Decision-Making Lab. Prior to Cambridge, Dr. Van der Linden was a Postdoctoral Research Associate and Lecturer in the Department of Psychology at Princeton University. Before that, he was a visiting Research Scholar at Yale University. He received his PhD from the London School of Economics and Political Science.


His research centers on the psychology of human judgment, communication, and decision-making. He's also interested in the psychology of fake news, media effects, and belief systems such as conspiracy theories, as well as the emergence of social norms and networks, attitudes and polarization, reasoning about evidence, and the public understanding of risk and uncertainty. Let’s start with the basics. What is misinformation? How does misinformation differ from disinformation? Let’s get some definitions.


There are a lot of debates about these definitions. I use a definition that I think is okay; it's pretty decent as far as definitions go. I view misinformation as anything that's incorrect. It's the broadest, overarching concept. It could be a simple error. Disinformation, by contrast, is misinformation coupled with some psychological intention to harm or deceive other people, or to manipulate them.


Disinformation is misinformation plus some intent to deceive people. People use the term fake news for both misinformation and disinformation, and sometimes also for propaganda, which is, to me, disinformation plus a political agenda: you're deceiving people in the service of a political agenda. I try to distinguish between those three layers, although the distinction isn't perfect.


One of my favorite examples is a headline from a reputable outlet, The Chicago Tribune, which ran with the story that a doctor died after receiving the COVID vaccine. These were two independent events, but it was framed as if there were some connection, which was unknown at the time. Is that disinformation? Is that misinformation?


If it's an error and they weren't thinking about it, maybe it's misinformation. Was it an explicit clickbait attempt to get people into clicking on the story? Maybe it's disinformation. Trying to discern the intent can be quite difficult. Just keeping that distinction in mind is potentially useful because I'm much more concerned about disinformation than I am about misinformation.


Especially when looking at vaccines, a lot of people will draw an association from an event that happened one time to one person and say, “I got my COVID vaccine, then I got sick afterward. I must have gotten COVID from it.” There's a lot of danger in drawing those associations. That newspaper headline highlights how common that is.


It's a big issue. Scientific or medical misinformation is particularly tricky, especially as our scientific understanding evolves. Some things you can say are categorically true or untrue regardless of what stage of the scientific process we're in. Other things that are emerging can be a bit trickier; does the virus originate from a lab, for example?


How does the study of misinformation relate to behavioral science? You've already touched on this a little bit.


There are two sides to it. One is we can try to understand the psychology of it. How does the brain process information more generally, but misinformation in particular? Why do we make errors when it comes to judging the veracity of news media content? What's going on there in terms of the psychological mechanisms of what makes us think that something is true versus false? That's a very interesting line of study.


There are also the consequences of individuals believing, endorsing and sharing misinformation. Both consequences to the individual as well as to society at large. Some of those consequences range from not supporting action on climate change to not vaccinating yourself or your children, which also has larger societal effects. If not enough people are vaccinated, it compromises herd immunity.


Misinformation has led to people ingesting harmful substances. Here in the UK, people have set phone masts on fire because they think that 5G is somehow connected to the spread of COVID-19. Viral rumors on WhatsApp led to mob lynchings in India. There are behavioral consequences to misinformation as well.

Misinformation spreads faster, further, and deeper in social media than factual information.

The pandemic has been an interesting example of how humans interact with one another, balance personal risk, and personal independence with societal needs. Do you have any insight into the choices people are making? You named some examples.


In social psychology, we call it a social dilemma. One of the key examples is the decision to vaccinate. The logic behind the social dilemma is that what seems rational or in your personal interest is to avoid taking any risk and to maximize your own personal preference. If everyone does that, then collectively we're all worse off. That's the case with vaccinations, if people say, "I don't want to get a jab. I don't want to take a small risk. I don't want to expose my child to potential risks."


When we take an aspirin, we also take risks. There's a risk with everything. The key point is that if everyone takes a tiny amount of risk, then we're all going to be protected. That’s what's in the public interest. That’s the nature of social dilemmas, and you see them everywhere: vaccination, climate change, recycling. Anything we do is a trade-off between, “Am I going to do what's in my interest, or am I going to do something that's also in the interest of the collective, the planet, my family, or my neighbors?” It's interesting to study how people make these trade-offs.


With vaccination, what we've seen is that people don't always think about the pro-social angle, or what we call the benefits to other people. We explained to people what the benefits of herd immunity are. If you take a tiny risk, you can protect people who are vulnerable and can't get a vaccine, maybe because of a condition, or elderly individuals. You can help other people by getting the vaccine and keeping them safe. We found that people find that framing to be quite persuasive.


Very few people intentionally want to be a bad person or not care about other people. When you frame the decision to vaccinate as something that benefits other people, as an act of kindness because you're vaccinating maybe not for yourself but to protect others, it works, though not for everyone. It works for a lot of people, explaining the benefits of herd immunity and the pro-social nature of the decision. That's been an interesting insight for me in terms of the research.


We found this for people who score higher on pro-sociality, a trait that ranges from doing pretty much only what's in your own interest to also caring about other people. People who are high toward the "I care about other people" end of the spectrum are much more likely to see vaccination in that context. Helping people shift their thinking toward the societal benefits can be quite useful.


It doesn't always work. When you're in a society that's highly polarized, with very strong expressions of individual freedom and counter-narratives suggesting that it's all about exercising your individual rights, it gets more complicated. For the most part, though, that kind of message has been helpful. I'm based in the UK, and there’s been a lot of pro-social framing throughout the pandemic. Why do you have to wear the mask? Why do you have to keep your distance? If you're young and healthy, why do you have to take on that burden? It's because you're helping vulnerable people. Most people are receptive to that.


Speaking of polarization, it does seem like society, at least in the United States and potentially also in the UK, is becoming more and more polarized about topics that haven't typically been that polarizing. Do you have any insight as to why this is happening? Why are we becoming so polarized?


At the beginning of the pandemic, I was wondering whether COVID-19 was going to become a topic that would be politicized. In a way, you start out by thinking, “This is a virus. It's a pandemic. It's real. Nobody's going to deny that it's real, because it’s so obvious. Everyone knows it's a threat. We’re all going to get on board and have bipartisan support.” A few months in, you start to see the politicization happening. People become more and more polarized. People are denying that it's real. They’re denying the extent to which we should do something about it. It becomes another way for people to express their political identity, by either endorsing or protesting against COVID-19 measures.


It’s interesting that sometimes, even though we study it, you wouldn't expect it to happen, but in hindsight it makes sense. There are a few factors that can help explain it. One is that the inherent uncertainty of science is used as a vehicle to politicize science. You see that with other issues like climate change, for example, where there have been concerted campaigns to cast doubt on the link between fossil fuels and global warming. The same technique was used by the tobacco industry to cast doubt on the link between smoking and cancer.


It's a strategy that's clever because they're not necessarily spreading fake news but they're saying, "We don't know enough yet. Science is emerging. It's evolving. Science is very uncertain. We should wait and see." That sounds much more plausible to an otherwise reasonable individual who might go and say, “Maybe we don't know everything yet. Maybe it's over-hyped. Maybe we should wait and see.” That is the more dangerous disinformation or what we call manipulation tactics because it's not as obvious. It’s much more subtle and it sounds much more reasonable.


The uncertainty of science is used as a weapon against science, and that's a well-known technique. That's definitely been happening with COVID-19. We've seen it with masks. Especially when science is rapidly evolving, it's also very hard for scientists to talk about uncertainty. Governments get very flustered about whether or not to admit uncertainty. You get the problem that if you don't communicate enough uncertainty, people are going to get upset because you've miscommunicated, or you said one thing and it turned out to be another. That's a really big issue.


The other thing, especially in the United States and also here in the UK, is political elites. They have a real influence on the formation of public opinion. If you looked at social media data, which some people have analyzed, you could see polarization happening in the tweets elites were sending out about COVID-19. In fact, Bernie Sanders also said some polarizing things. Early on, you could see the political elites debating it, which then trickles down into public opinion and polarizes people. The elites are sending cues.


The third factor is the spread of misinformation that feeds into existing polarization. Fox News was quite instrumental in disseminating misinformation at least at the beginning of the pandemic. Social media has been a huge source of misinformation about COVID-19 and the pandemic. That also plays into it. At the end of the day, you have a situation where people want to express their social identities in certain ways. The issues become less important.


Preventing Misinformation: Misinformation is anything that's incorrect, whereas disinformation is misinformation coupled with psychological intentions to harm, deceive, or manipulate people.

It becomes more about expressing the group that you belong to. If you're a Conservative, you want to express the values that Conservatives are endorsing and that the elites are signaling. If you're a Liberal, you want to do the same. When you already have that environment in place and you're already polarized on so many issues, it becomes very easy for the next issue to fit into that category: “This is going to be another one of those issues that's a Liberal hoax or an alarmist thing. We’re going to counter-campaign against that narrative.” Now you get two opposing camps.


It's surprising because with your neighbors in Canada, there was bipartisan support for COVID-19 measures pretty quickly. It has a lot to do with the unique political context of the United States. Although, here in the UK, there has also been a lot of misinformation and polarization around COVID. It's definitely not unique to the United States; Australia, the UK, and the US are fairly similar in terms of the emergence of polarization.


You've mentioned social media several times as far as tweets of some of the political leaders and the role of WhatsApp in some violent crimes in India. Social media has made it so much easier for people to peddle misinformation. Can you talk a little bit more about the role of social media in some of the misinformation campaigns?


It’s a difficult question. In a way, everyone assumes that social media is a huge vector for spreading misinformation. It is, but the causal question is not so easy. Are people more polarized because of social media, or were people already polarized and then handed access to social media, which amplified the existing polarization? It's the same with misinformation, although there's more evidence that some of the misinformation emerges on social media where it didn't exist before. There's been research, and it quite clearly shows that falsehoods spread at many times the rate of factual stories on social media.


There are a lot of good studies on that, showing that misinformation spreads much faster, further, and deeper on social media than factual information. In fact, that Chicago Tribune story was one of the top stories on Facebook in the first quarter of the year. Misinformation gets five or six times as much engagement on the platform as factual information. The companies contest these claims by saying that we don't have access to all of the data, which is true. That's part of the problem in trying to make these inferences.


We did a study where we looked at millions and millions of data points on both Twitter and Facebook, from US Congressional accounts as well as all of the popular media accounts. One of the things that was already well known is that emotional content tends to go viral more. If you're highly emotional in your language, it tends to rile people up, both positively and negatively.


In addition to that, there's what we call moral-emotional language: things that tap into moral transgressions and get people excited. It'd be a story like, “Old lady assaulted on the street.” Even if it's fake, it gets people really worked up because it was an innocent old lady and there was a violent criminal. It's the same with stories like, “Baby died from a horrific side effect of the new formula.” Some of these fabricated stories are designed to get people upset. That was pretty well known.


One of the things that we added in the study, after replicating these known patterns in our data, was coding whether the tweets and the Facebook posts were from Liberals or Conservatives. What we found as the top predictor of engagement on these platforms is the amount of group derogation. The more you talk about the other group, and the more negative things you say, the more likely you are to get engagement on your posts.


This is on top of using moral or emotional language. The number one predictor was trash talk about the other group. That gets the most likes and shares on these platforms. It illustrates that social media amplifies political tensions. When you have that, it becomes easier for misinformation to spread as well, because people use it as a way to reaffirm their political identity. If you're angry at the Liberals and there's a story that's fake, and you know it's fake, but it says something negative about Liberals, you're more likely to want to share it and forget about its accuracy or the motivations behind it, because you have an overriding social motivation: to join the bandwagon and express your identity. That's where social media comes in and harnesses some of these effects.


There's also filtering and things like that. Fact-checking and corrections are all useful, but when you have segregated information patterns on social media, what happens is that people who are like-minded tend to share content with each other. That creates a very biased flow of information. People who are similar to you are retweeting similar content. If you want to get the facts across, you're not penetrating those echo chambers. You can't get across to the audiences that you need to reach, because the flow of information is biased toward like-minded others.


People who are spreading misinformation and engaging with that content are not going to share a factual story. You don't get traction with the facts on these platforms, because the groups are polarized and the flow of information is interrupted; it’s heavily biased toward a particular type of content. That makes it very hard to actually debunk and fact-check content, because of the way that social media is structured. That partly explains why misinformation goes viral and why there's so much engagement.


That leads to the question of whether we fundamentally need to redesign social media. I should say to the audience that I'm a consultant for Facebook, I work with WhatsApp, and I do a lot of work with Google. I try to help them counter misinformation within their institutional confines. At the same time, we do a lot of work that's critical of social media companies. It also gives me some insight into how these companies work and what the issues are.


What's most interesting to me is that they don't envision these platforms as a wonderful place where people just share accurate and factual content. That's not their mission, and they're quite explicit about that. They say, “This is not a platform that's meant to promote accurate content. We want people to have whatever conversations they want to have. We have our policies; you can’t violate the terms of service and use policies around hate speech and things like that. As long as you don't do that, you can talk about anything.”


People don't realize that it's a fundamental shift in thinking. I'm thinking, we can redesign these platforms to be really useful tools to disseminate all kinds of important factual, accurate and constructive conversations. The people running these platforms, that's not their goal. Even though they want to counter misinformation and they don't want bad things to happen on the platform, their goal is not necessarily to promote accuracy or facts. Their goal is to promote conversations of all kinds.

You can spot a conspiracy theory by how it uses emotions to manipulate people.

It seems like it's easier for people to have angrier conversations through these social media platforms than they would in real life. If you run into somebody in a grocery store line, you're probably not going to say the same things that I see people say to one another on social media platforms. Do you think that has played a role in this bias, where people just latch on to ideas that they choose to support?


I think so. There are gradations of this. If you look at the early days of the internet, in the '90s and beyond, there were a number of quite interesting studies with insights about anonymity: people expressing themselves in different ways online because they could assume any identity. They weren't afraid of the consequences of saying something because it wasn't face-to-face. You could say anything; people wouldn't know where you live, who you are, and so on. That was very prevalent then.


On social media, unless you're a troll or a bot, you're not anonymous. The incentive there is a bit reduced in the sense that people do care about their reputation on social media. They are a bit more careful than if they were a completely anonymous blogger or netizen. But it's still the case that it's not face-to-face. People still feel emboldened to say things on social media that they wouldn't say to somebody face-to-face. That leads people to be more emotional and aggressive, and that's why you see more of these flame wars and things like that on social media than you would face-to-face.


Some research shows what happens when people tune out from social media for a week: people get less polarized when they're not online. That's causal evidence, which we didn't really have for a long time on social media. People are going to say, “Even though they’ve tuned out, they've had social media before.” You can never do a completely clean experiment with social media the way you maybe can in medicine. There are all these confounders that are always going to be present. It's a social system; it's difficult to study.


What we can do is ask people to tune out of social media for a week, and when they do, they get less angry and less polarized. At least that says something about what might be going on. It should send a signal to these companies that they might want to rethink some of the ways in which their platforms work. They're notorious for putting up defensive press releases. They say, “Polarization existed before social media.”


They argue that a lot of the evidence is correlational and not causal, that you can't know for sure because you don't have access to the platform, and that they have different data internally. You get this back and forth. Some of their points are worth considering; we don't have access to all the data. But from what we can see from the outside, with public data, there are lots of reasons to be concerned.


Just thinking about the issue, it makes sense that if you're seeking out somebody who agrees with what you're saying, it's much easier to connect with those people through online platforms than at your town’s grocery store or a restaurant or bar where people may have connected in the past. You may run across more people who don't necessarily agree with you in real life, whereas online, you can instantly connect with other people who have the same biases you do. That affirms those biases as correct.


In all fairness, some of these counter-arguments are not completely bogus. They would say, "There are offline echo chambers too, which suggests that we are not the cause of echo chambers." That's a distortion of what's going on, but it's partly true. We have interesting studies that show that even in New York City, in areas close to Central Park, Democrats live closer to each other than Republicans do. They might not even know it, but they segregate. Even in the same buildings, there will be segregation.


You look at an election map of the United States and you can see areas where people predominantly have a political leaning toward one side versus the other.


You can see that on the map very clearly, and it's true even when you zoom in at the city level. It's prevalent. They use this as a counter-argument, but I don't think it necessarily is one. It's true, but social media still makes it easier for people to do this online as well. You have online echo chambers mapping onto geographical echo chambers.


You have a feedback system where it enhances polarization. It speeds things up and makes it worse. You can quibble about how much worse they make it. They like to say, “We don't have any counterfactual”: if they had not existed, would it have been worse? But it's very clear that they're not adding positive value to the conversation about polarization. It's true that we come to the platforms with our own biases, but we don't expect them to be amplified to such an extent that we end up in a space that's so polarizing and negative, one we may not have anticipated or signed up for. That's the difference.


I wanted to ask you about a study that you and your colleagues did regarding the Oregon Petition and climate change. This was an interesting bit of research.


In our field, people quibble about the effects of misinformation. They say, "How can we know that misinformation is the cause?" We did a very controlled experiment several years ago that was quite fundamental in establishing that misinformation has a very negative causal effect on people. What we did is expose one group of people to the facts about climate change. That was pretty easy. We just said, “Most scientists agree that climate change is happening and humans are the cause.”


In another condition, we showed people the petition that you mentioned, the Oregon Petition. It’s a very strange project that started in the '90s. A bunch of politicians and pseudoscientists started a petition claiming that thousands upon thousands of scientists had signed an agreement saying that global warming isn't real. When you examine this petition, there are all kinds of strange things going on. The signatories included the Spice Girls and Charles Darwin; people were fooling around, signing up for this bogus petition. There's no quality control on it.


Preventing Misinformation: It's useful for the public to not see something as true or false, but to weigh the evidence to inform their judgment of how trustworthy or reliable a source is.

There are some people on there with degrees, but they're not in climate science; their expertise is totally irrelevant to making claims about climate science. Yet it's been very influential. It formed the basis of the most viral story on Facebook in 2016, claiming, “Scientists have declared global warming a hoax.” Even though it's from the ‘90s, it's being recycled all the time. It went viral on social media, and the petition is being kept up to date. Its purpose is to cast doubt on the scientific consensus by suggesting that there are thousands of scientists who say they don't agree with the science.


In this experiment, we first wanted to see what effect that has on people. The second question was whether we could inoculate, vaccinate, or immunize people against this content. What we found was that if you expose people to this petition, they become very confused about the scientific consensus on climate change. They were already a bit confused, but they downgrade their perception of whether or not climate change is a settled issue.


We know that matters, because if people think the science isn't settled, they won't want to take action. That's the strategy this petition plays into. In fact, it's called the fake expert technique: you use fake credentials to try to convince people that you have expertise. The petition is modeled on what the tobacco industry did when they trotted out fake doctors saying that their favorite brand of cigarettes was Camel or Lucky Strike. They would run campaigns; they called it the White Coat Project.


As a doctor, you probably know like no one else that they used the white coat to fake expertise and dupe people. From research, we know doctors are the most trusted experts, even more so than college professors. The issue here is what you can do about it. What we found is that the petition did have a very negative effect on people. Instead of fact-checking and debunking, which has been the main strategy used to try to counter misinformation, we wanted to see, “Could we preemptively vaccinate people against this technique?” It follows the vaccination analogy exactly: we preemptively injected people with a weakened dose of the virus.


We told people, “There's a petition online that's going to try to convince you that global warming isn't real. You should know that Charles Darwin signed it. It's totally bogus. The people on there don't actually have any expertise in climate science. Be warned.” Later on in the experiment, we let people browse the website and look around. What we found was that if you preemptively inoculate people, it doesn't fully immunize them.


It's not that the misinformation then has no impact on people at all. It did still make them doubt the science a little bit more, but the effect was tiny compared to when we didn't inoculate people. When we didn't inoculate people, they were massively confused. When we did inoculate them, they were still on board with the science, just a little bit less than before.


That was the idea: we can psychologically inoculate people against these techniques. It's not like a real vaccine such as Pfizer's, where you get 95% efficacy or some exact percentage; there's probably some range here. But it was still pretty useful. This was true whether you were Conservative or Liberal; it didn't really matter what your affiliation was. We found that to be useful. That's what the Oregon study was about.


By preparing people for the fact that there may be misinformation present, they were more aware of it and able to combat it like a vaccination.


There are a few important psychological mechanisms here. One of the things I've realized over the years is that the main response, debunking and fact-checking, is based around the notion that people don't know enough, that we don't have enough facts, and that we don't understand the science. Education is important, and this is true to some extent, but the misperception is that somehow bolstering knowledge alone is going to protect people from misinformation. It doesn't, because it's not specific enough. It's not targeted. It doesn't give people the right antibodies they need.


What is useful is simulating the attack for people in the weakened dose and giving them the specific refutations they need beforehand so that they can retrieve them and access them from their memories and rehearse them in advance. When they're challenged on their beliefs, they can retrieve the right information. They've been warned before and had time to think about it. This is what we call resistance to persuasion. People can now resist attempts to persuade them of false information.


You don't get that when you try to retroactively debunk something or fact-check it. I'm not saying that’s bad. The problem is that once you've been exposed to a falsehood for a few years, like “vaccines cause autism,” it settles into your associative memory networks. We link claims with other concepts, and the longer a claim sits there, the more strongly those links become melded in our memories. It becomes very difficult to get rid of it.


I can tell you something is wrong and your brain will mark it as incorrect, but that doesn't do much in itself, because it's still linked to all sorts of other things. People continue to make inferences based on false information even when they acknowledge that they've seen the correction. One way of illustrating this: suppose I told you, “I went to your favorite restaurant down the street. I got food poisoning. I had a terrible night. Don't go there.” A month from now, I tell you, “It wasn't your street. I totally got that confused. It was somebody else's street.”


Every time you walk past that restaurant, even though you know the story was false, you're going to think of food poisoning. It works the same with misinformation. This is the brain's way of keeping track of things, and that's why it's so difficult to get rid of misinformation after the fact, even though corrections can be helpful. There are ways to do that. We found that there is a lot of value in the prevention metaphor, as with vaccines: prevention is better than cure. It does depend on the incubation period. If we follow the viral analogy of the misinformation pathogen, you can pre-bunk or inoculate. We call it pre-bunking because that's easier for people to understand. That works pretty well. There's a timeline.


Sometimes the inoculation is more therapeutic, like a therapeutic vaccine. People have already been exposed, but it hasn't settled into their brains yet. You've heard of the misinformation, but you're not infected to the degree that you're sharing it. There's therapeutic value in boosting your immune response with inoculation, but it's better if it's totally prophylactic, in the sense that we can get there before the misinformation arrives in the first place. If we come very late to the story, in the end we're just debunking after the fact. That's the spectrum we deal with. What we found is that it's much more effective to try to go the vaccination route.

For people to perceive a communicator as trustworthy, the communicator has to admit uncertainty, provided it's communicated in ways people can understand.

One thing that I've been asked repeatedly is how people can know that a source of information is valid or trustworthy. When I'm looking at an information source, I will look at the primary scientific literature, evaluate the methods of the studies, and decide whether the article has reached its conclusions appropriately. That’s a tough set of skills to learn. Usually, it takes at least two to four years of graduate education, or in my case eleven years, to learn to do it properly.


I'd like to be able to tell people that they can trust physicians, but I've also seen a lot of physicians out there peddling misinformation which probably started in the era of Andrew Wakefield. Do you have any advice or good ways that the general public can validate information to determine if it comes from a reliable source and is trustworthy?


On one hand, there are useful tips: check the source, make sure context is provided, ask whether the source is credible and whether you can identify it, and try to find other lines of evidence that support it. These tips are useful to follow. What we found in all of our research, though, is that rather than trying to tell people what's true or false, it's better to train people to recognize the techniques of manipulation so they can spot information that may not be true. The reason is that when the heuristic is at the level of the publisher, it works pretty well, but not always.


Take the Chicago Tribune, for example. If you use the heuristic "Is the source reliable?", the Chicago Tribune has very high independent fact-checker ratings, but then they publish a headline like "Doctor died after getting COVID vaccine." The heuristic isn't working, because even though it's a reliable source, the headline is misleading. What we tend to do is try to familiarize people with the most common techniques that are used to spread misinformation.


Some of these we've already talked about, and one is polarization. Is the headline polarizing? Then you should probably be suspicious. What we try to do is get people into the mindset of not asking, “Is this true or false?” but calibrating their judgment in terms of how reliable something is. We can chat more about evolving science and how we as scientists are trained to constantly update our opinions about the weight of evidence on something.


It's useful for the public to not see something as true or false like,