People have been getting it wrong or flat-out lying since they were old enough to play “whisper down the lane” or “telephone” at recess. Now that the story is being passed tweet by retweet and post by repost, rather than mouth to ear, we use terms like “misinformation” and “disinformation” to describe, respectively, the accidental or purposeful mucking of the message. A high-profile report on this phenomenon, recently released by the Aspen Institute, suggests that social media has accelerated the relay and reach of misleading and biased information to an audience that longstanding social inequality has left increasingly isolated and credulous of questionable information — a combination that has created what the researchers call “information disorder.”
Denise Agosto, PhD, a professor in the College of Computing &amp; Informatics, has been studying information behaviors online for over two decades. She suggests that to start fixing the problem, we all need to become more reliable and discerning receivers and conduits of information from a young age. Teaching students to be versed not just in understanding information presented to them, but in assessing the credibility of its source, is a lesson primarily emphasized in the context of writing research papers or in library skills courses. But Agosto contends that information literacy skills – gathering, organizing, assessing and communicating information – should be a foundational part of curricula from the beginning. Agosto and her team have been looking at how schools currently teach these skills with respect to information gathering on the internet, and they have created a number of resources to help teachers, librarians and parents guide children toward becoming more information-savvy and maintaining a healthy relationship with technology during the pandemic.
Agosto recently took some time to review the Aspen study and to provide some insight on how teaching information literacy could address the roots of our society’s information disorder.
The report makes a point of explaining that our struggles with mis/disinformation are not happening in a vacuum – suggesting that in some ways our broader social problems (e.g., structural inequities) have primed people to fall victim to the spread of bad information and misleading narratives. Is this something you’ve observed in your research as well? Why is this important context for guiding any efforts that might begin to address our “information disorder”?
Yes, it’s crucial to understand that misinformation and disinformation are not, in themselves, root causes of the problems we’re seeing in online speech. Rather, they are manifestations of existing societal problems. Typically, these are long-standing social problems, such as social power imbalances, political bullying, economic inequity and racism. The spreading of disinformation through media is not new either. It can be traced in the mass media back at least to the height of yellow journalism in the United States in the late 19th century, and probably much further back than that.
If these issues and the existence of disinformation aren’t new, why, then, are misinformation and disinformation currently of such urgent interest? Part of the current upsurge of disinformation stems from the highly polarized political spheres in the U.S. and in several other countries today. This adversarial political partisanship feeds into, and is fed by, the nature of social media itself, which enables biased, deceptive and inflammatory information to spread much faster and to many more people than ever before. Thus, social media serves to multiply the speed, reach and influence of misinformation and disinformation. And the damage it creates can be far worse than in previous media environments, when the news and information cycle was much, much slower and was more often run by professional news organizations with editors, fact-checkers and established journalistic standards.
So, what can we do to address the current rapidly growing spread of misinformation and disinformation online? The most effective approach must be a multi-stakeholder strategy. To date in the U.S., we have ceded the responsibility for monitoring online discourse to access providers – social media companies. This ad hoc regulation has proven to be uneven and ineffective, giving disproportionate power to the corporations that run these platforms and undue influence to advertisers and other self-promoting entities. We must explore effective government interventions that would more evenly reflect the values of the communities they serve. Educational intervention is also crucial. Educators at all levels should teach their students not only to identify bias in media systems and resources, but also to challenge the existing power structures that give rise to the social problems and social injustices that these systems often inflame. And of course, economic intervention is necessary as well: all of these partial solutions require funding for successful development and implementation.
How has this reality guided your research and your approach to the problem?
Issues of power, privilege and equity are fundamental to my work, which uses information literacy and library services to combat this crisis. I also build on concepts of media literacy, teaching people to identify the bias encoded in media messages and to understand that all information has a perspective.
If you could institute one regulation on platforms or technology companies right now to stop the spread of dis- and misinformation, what would it be?
My one move right now to reduce the spread of misinformation and disinformation online would not be a technological regulation of my own creation. Effective technological regulations need to be built on community needs and multi-stakeholder input. Instead, I’d move immediately to reform social media education in schools.
I have worked in several high schools around the U.S. to study students’ reactions to school-mandated social media education. Nearly always, schools – and the state governments that regulate their curricula – translate “social media education” into “internet safety education.” Unfortunately, this typically means relying on scare tactics that show students only the negative sides of social media use while neglecting any educational, social and emotional benefits. In addition, the schools that I have visited lean heavily on videos to teach internet safety lessons. Both the scare tactics and the non-interactive video delivery tend to turn students off from learning.
So, my first fix would be to reframe internet safety education in U.S. schools: reduce scare tactics and present a more balanced discussion of the risks and benefits of internet and social media use; teach students thoughtful risk-and-benefit decision-making; add more balanced media literacy content; and build content delivery on personal conversations with peers and with members of communities outside the schools, to increase student interest, learning outcomes, and community understanding and empathy.
What seem to be the best messages or techniques you’ve found in researching information literacy in the age of dis- and misinformation and putting together recommendations for teaching information literacy to children and teens?
The students I have worked with respond best to peer-to-peer learning – active discussions with peers about their shared and unique experiences online – and hands-on, lab-based learning, which enables instructors to show real examples of suspicious and harmful online conversations and to share best practices for engaging in meaningful online discussion. Lastly, building a sense of community and responsible community membership is key to the effective teaching of good digital citizenship.
Some of the key approaches we can take toward teaching effective information literacy to combat misinformation and disinformation include:
- Explaining how information is created and shared in networked environments. Many people lack a clear understanding of how information is created and shared online, which can lead to their placing unwarranted trust in things they read or see online.
- Acknowledging the strong role of cultural and social group membership in information sharing…and believing. We tend to believe information that confirms our existing beliefs or fits with our cultural knowledge frameworks. This “confirmation bias” can diminish our critical thinking.
- Framing conversations about online discourse around power, privilege and equity as:
- embedded in people’s everyday information practices
- embedded in algorithms and information systems design
- motivation for spreading disinformation
- Teaching people to consider who created a piece of information and why it was created before sharing it.
- Encouraging people to calm down before sharing emotionally charged or political content. Emotionally charged content tends to be more compelling than facts or statistics, so it’s important to take a few minutes to calm down before deciding to believe or share information.
- Emphasizing empathy to reduce anger-fueled discourse. Often if we understand another person’s perspective or motivation, we can engage in more thoughtful, reasonable discourse.
For more information and resources on information and media literacy, Agosto recommends:
- Pen America’s “How to Talk to Friends and Family who Share Misinformation”
- The Center for Media Literacy’s “5 Key Questions”
- Common Sense Media’s “How to Spot Fake News (and Teach Kids to Be Media-Savvy)”
Reporters interested in speaking with Agosto should contact Britt Faulstick, assistant director of media relations, email@example.com or 215.895.2617.