Distinguishing evidence from analysis: A student’s perspective on the first step in source evaluation

Sara Zoroufy is a junior and the Research Teaching Assistant for the Castilleja School library. Inspired by Nora Murphy’s work on source literacy, Sara chose to spend this year observing research lessons and unpacking how she and other students think about sources. Her work helps inform lesson planning. Here, she shares an idea she has been contemplating recently.

“CNN reports that the Justice Department found the following statistics…”

During a presentation in our tenth-grade government class, this phrase caught my attention. Why would a speaker attribute a statistic to two different sources? I have been thinking about this turn of phrase for a long time, trying to understand precisely why it troubled me. Recently, I realized that students struggle to distinguish factual evidence from a source’s analysis of that evidence. In the example above, the student was having trouble determining what type of information she was citing and which source was responsible for creating it. Without separating evidence from analysis, we can neither evaluate nor properly cite a source. I tried to draw a visual to help myself understand how a source breaks down into these components, which culminated in this flowchart:

Mapping these concepts in this way helped me identify a number of key points in the process of evaluating a source. I began to think that the essential questions that students should initially ask when faced with a particular source are: what is the evidence that is presented, what is the corresponding analysis, and what sources shaped each component?

Asking these questions is the first step in unpacking a source, and the answers are not always immediately clear when students encounter unfamiliar genres of writing. This year, my grade was presented with an excerpt from a Pulitzer Prize-winning piece of investigative journalism about the diagnosis of black lung in coal miners. We were asked to identify the sources of the statistics in the article. No one was able to locate this information because journalistic convention dictates integrating the names of sources into the text, as opposed to employing the parenthetical citations that students use in their own writing. For example, just prior to starting a bullet-pointed list of statistical evidence, the article said, “The Center [for Public Integrity] recorded key information about these cases, analyzed [the medical expert’s] reports and testimony, consulted medical literature and interviewed leading doctors.”[1] Since we weren’t accustomed to this particular form of citation, many of us responded that no sources were given.

Students had been instructed to pinpoint the evidence in the article and label it with an “E” and to identify and label its sources with an “S.” As I sat with Tasha Bergson-Michelson, our instructional librarian, and considered my flowchart in relation to the lesson, we realized that the instruction had skipped over several crucial steps in the process of identifying the evidence. This experience made it clear that identifying the sources of evidence can be confusing, and that simply telling students to exercise that skill is not effective. Rather, developing this skill requires explicit instruction and focused opportunities to practice it.

Once we’ve identified the source’s evidence and where it came from, we are able to evaluate it further. Depending on the type of evidence, we can investigate its quality and veracity in different ways: reading the methodology behind a study or poll, for example, or comparing the details of anecdotal evidence across various sources. Another factor to take into consideration is the original publication venue of the evidence itself. Recognizing the background of that publication adds to our understanding of the ethos of the evidence, as well as the sponsor’s motivation for collecting it.

After examining the evidence, we can begin to consider the analysis of that evidence. The analysis reflects the perspective of the author and the publication in which it appears. Often, students stop their investigation into a source once they have determined its bias or perspective, but that is only the beginning. The real importance lies in the source’s purpose: why and how that perspective is being argued. Our history department uses the acronym SOAPA (Subject, Occasion, Author, Purpose, and Audience) to remind us to critically evaluate each aspect of a source.[2] This strategy has been particularly helpful in reminding us to think about the author’s purpose and how it shapes the analysis of the evidence.

I find it useful to think of every source, be it a journal article or a photograph, as an essay that selects and interprets evidence to support its thesis, but that comparison is not necessarily intuitive. The idea that all sources make an argument is easily overlooked, especially when we students are presented with historical documents, which we sometimes subconsciously perceive as pure fact. In our 8th-grade science classes, students first encounter the idea that nonfiction can be analyzed like literature. The lesson teaches students to consider the language of a source to determine what argument the author is making and what they want the audience to think, feel, or do.

Differentiating between evidence and analysis is the first step in considering the three essential questions: what is the evidence that is presented, what is the corresponding analysis, and what sources shaped each component? Answering these questions helps us understand:

  • Sources make arguments using evidence and analysis.
  • Evidence tells us what the source is using to make its argument.
  • Evaluating the origin and quality of the evidence contributes to our understanding of the strength of the argument.
  • Critically evaluating the publication venue of the source itself helps us recognize the perspective the analysis will try to validate.
  • Doing a close reading of the analysis in the source gives us insight into the author’s intention in making the argument.

In the case of the quote that started this whole journey, knowing that the evidence came from the Department of Justice and the analysis from CNN allows students to draw on any credibility offered by the DOJ’s statisticians and CNN’s popularity as a source of reporting. The students themselves attain credibility by demonstrating that their thinking is based upon rigorous sources.

________________

  1. Chris Hamby, Brian Ross, and Matthew Mosk, “Breathless and Burdened: Dying from black lung, buried by law and medicine,” The Center for Public Integrity, last modified October 30, 2013, accessed March 2, 2017, https://www.publicintegrity.org/2013/10/30/13637/johns-hopkins-medical-unit-rarely-finds-black-lung-helping-coal-industry-defeat.
  2. The College Board recommends a similar version, SOAPSTone, for its AP history courses.

 

Reading News across the Political Spectrum

Last spring, I had to confront a gaping hole in my professional knowledge. But that is jumping ahead. Let’s start at the beginning, with our students.

It began with a project in the 10th-grade American Political Systems class. Working in pairs, students were to select two articles with differing viewpoints on a contemporary issue, then lead a current events discussion. Last spring was the first time I had the opportunity to meet with students about their article choices. Inspired by Nora Murphy’s work on source literacy, I asked students to talk to me about what types of sources they were looking at, and why they felt each article was sufficiently authoritative.

Repeatedly, I found myself facing students’ inability to distinguish between what felt good and what was good quality. Articles that aligned with what students already believed, generally falling along a political spectrum, earned a rating of “reliable” … and the other sources were all considered equally foreign and indigestible. Certainly, students were applying no particular standards to finding authoritative expressions of the opposite viewpoint, beyond maybe a shrugged “Well, it was in the library databases,” as if that assured authority, or “It was ranked high in my search results.” In these discussions, students generally knew how to think about the authority of authors, but it was clear that it did not occur to most of them to consider how the identity of the publisher shaped a piece. As their research skills educator, I found myself insisting, again and again, that there are standards of rigor to which we must hold every publication, and that those standards provide a way to identify quality sources at many points along the political spectrum.

The first step we covered was simply learning a bit more about the possible points along the political spectrum and identifying where a particular publication situated itself. Wikipedia was invaluable in these investigations. In many cases, the first few sentences of entries on specific media outlets provided a lot of vocabulary (progressive, libertarian, neoconservative, paleoconservative) that gave us good talking points for getting started. Sections on editorial staff, board members, funding, and past controversies were also helpful, if not taken in isolation. “About” pages could sometimes be useful, but were often so full of obfuscating language as to be impossible to parse (we first addressed this difference in readability in 9th grade, when students tried to decode what Sputnik News was). However, a publication’s media kit for advertisers and its submission guidelines for writers were often much clearer indicators. These tools helped us build a crucial understanding of the nature of the source we were encountering. They did less to help us understand rigor.

One trouble is that the standards of rigor remain, well, elusive. One high school student and I spent the entire subsequent summer debating how to define those qualities in a manner that would be findable and useful: If we cannot find out about a publication’s editorial process, what is a reasonable way to measure the same thing? What role do the size and demographics of its readership play in our understanding of a publication? How does it matter if a publication was “born digital”? If most people view media by clicking through from social networks, what weight do we give to how clickbait-y a publication’s headlines feel? What does the publication articulate as its own claim to authority? But we barely scratched the surface.

Bill Kovach and Tom Rosenstiel’s Blur: How to Know What’s True in the Age of Information Overload helped me better frame my thinking. It identifies four forms that “news” reporting might take. Two, in particular, caught my eye. “Journalism of verification” requires journalists to examine evidence relating to a question to determine what conclusions to draw, whereas “journalism of affirmation” is “a new political media that builds loyalty less on accuracy, completeness, or verification than on affirming the beliefs of its audiences, and so tends to cherry-pick information that serves that purpose” (34). While I wanted my students to be selecting sources that followed the model of verification, they tended to encounter and be attracted to sources that were rooted in affirmation. Reading journalism of affirmation is rewarding; it articulates the reader’s feelings clearly, makes her feel smart, and is widely available. So not making a practice of identifying a source’s model (verification, affirmation, or something else) is how we end up with a gulf between what feels good and what is good quality.

As a result of these conversations, this fall I had the opportunity to spend an hour with the now-11th graders, helping them set up news feeds. My assignment was to make sure every student was regularly following some kind of news. After thinking long and hard, I turned back to Blur and decided to break the lesson into two parts: (1) practicing how to identify sources and evidence in journalistic writing, and (2) giving every student an opportunity to identify a method for following news on a regular basis. It was a first time through this topic; the lesson plans linked above have a great deal of room for growth. The lesson culminated in a plea that students commit to reading a variety of good quality sources that could help them experience a range of perspectives.

Which brings us back to the gaping hole in my professional knowledge that I mentioned at the outset. Because the truth was that last spring, while I was urging students to pick sources based on rigor rather than emotions, I found myself limited in the rigorous sources I knew to recommend. Let me be more honest: because I had not cultivated my own knowledge of good quality conservative sources, I knew of only three. Three sources that I kept pointing to, again and again. Frankly, I had to struggle continually with my own snap judgments. My knowledge proved completely insufficient, and I barely sounded convincing, even to myself. How could I stand for a rule of rigor across the spectrum if I was unable to point students to acceptable options?

So, I decided I had to educate myself, which is what I was trying to do when I attended my first AISL conference in Los Angeles last year. Many of you kindly allowed me to question you about your collections, and your thoughts on a variety of news sources.

Slowly, my list of sources from around the spectrum began to grow. It is still smaller than I would like. But I am fortunate to have three learners in my life who are particularly committed to notions of source literacy, reading broadly, and engaging with multiple narratives. One student and I had conversations about how floods of articles from two or three favorite news sources started to feel like seeing the same stories over and over again; it was taking 90 minutes a day to wade through them and find new ideas or events. We determined that it was better to select a small number of media outlets representing differing perspectives, so that each article would offer a new point of view, even if the topic repeated. Another argued that there were more varieties of narratives than just international and political; sources from various US geographic regions and affinity groups (news from ethnic, religious, and other communities) went on my list. Work with each of those students, including my Research TA, who chose to focus on source literacy, is messy and ongoing, and would be a post unto itself. Each of them did, however, respond to the questions I was asking of myself with a passion and commitment that transformed my curiosity into something I actually needed to live up to. They made me do more than just make lists; they made me expand my reading.

The final element of the 11th graders’ lesson this fall looked at what it might mean to access multiple perspectives through news. Given the fractious environment at the time, I decided simply to share a selected list from the sources I was following (I was testing close to 150 different news sources at that point) and let students explore for themselves. I am grateful to my colleague Connie Williams for making me realize that the list I shared with students is not founded on consistent standards. For example, my list of international sources includes many options that fall under what Murphy would term “necessary bias,” aimed at ensuring exposure to the various narratives being promoted around the world. My affinity group media options similarly look to expand my own ability to access and hear the multiplicity of narratives experienced by the United States’ diverse population, while my regional newspapers attempt to balance that same need with a sense of editorial rigor. My political spectrum list is where I look most critically at the use of sources, evidence, and careful argumentation. It seems that, in the process of writing these words, I have discovered the next stage of work I have to do on my own thinking.

Nevertheless, I am gaining something deeply important from both investigating potential sources and reading those I selected on a somewhat regular basis. It is a time-consuming process, and it can be quite unsettling to encounter religious, political, or ethnic viewpoints to which I have not really been exposed before. Yet I also find it immensely enlightening to read about an issue in an expressly paleoconservative, African-American, or Catholic-leaning publication, or any publication outside my regular media diet. Seeing a reasoned argument proceeding from a different set of values or experiences often provides me with crucial details that are lost when filtered through more familiar media. Often, I encounter statements that make a lightbulb go off for me; I have found myself reading and re-reading a portion of the Constitution until I can finally figure out what someone else is seeing in the words.

Ultimately, it comes down to a real question of what my professional values, and my human values, really are. Just as journalistic ethics require not neutrality but conclusions that arise from fact-based evidence, every time I work with students I drill into them that research is not about starting with something you believe to be true and cherry-picking evidence to prove it. Rather, we look at a range of rigorously supported viewpoints and draw evidence-based conclusions. Our library teaches students to use close reading and a knowledge of logical fallacies to unpack how word choice and argumentation impact readers’ emotional responses to nonfiction. In our program, those are becoming core source evaluation skills. If I actually stand by all parts of this curriculum, then it seems reasonable to expect that I should be accessing multiple, rigorous, and diverse viewpoints in my own reading of news before drawing conclusions. When I hold myself to those standards, I can do more than just tell students to avoid fake news; I can offer them a positive range of news options on which to draw. When I do that, I model for students that I live by what I teach.

NOTE: I should have asked this initially, but I would love for anyone to share ideas for sources with me. I hope to build a much more rigorous list over time.