Do you have that pink book about Rosa Parks? On “impossible” questions becoming “possible”

I was fully looking for something cheerful to post about today, and it turns out that “cheerful” in this instance means finding a use for something about which I have historically felt little enthusiasm: the new-ish top “result” in Google search.

When I worked at Google, one of the realizations I had revolved around questions that we librarians had a tendency to (among ourselves) view as “stupid.” First among those was asking for a book by the color of its cover. Essentially, we felt it was an unreasonable question, because it was one we could not answer. (Also because people remember green books as red and yellow books as blue, but I don’t yet have a solution for that problem.) Sometimes, technology allows us to solve a problem, as I discovered when I went to figure out what use color filtering in image searching could really be:



Well, this morning I was grappling with a question and I decided to try using Google AI to answer it, and look what happened:


Asking Google’s AI to tell me in which databases to find The Atlantic and JAMA in full-text

Are these responses complete? Completely correct? Did I burst into flame from typing a long-form question into a search box? The answer to each of these questions may well be “no.”

Nonetheless, I think about all the times that I wished I knew which databases to search to find x source, and I was pleasantly surprised to have this tool to try and help me.

So – hope this brings some joy or at least ease to your week. Take care, and search on!

Unpacking AI and Wikipedia quality

With gratitude for the collaboration of Amy Pelman (Harker), Robin Gluck (Jewish Community High School of the Bay), Margi Putnam (Burr and Burton Academy), Hillel Gray (Ohio University), Sam Borbas, Cal Phillips, and a special librarian whose name we will not share due to their work.

Thank you so much to Alex for posting to our listserv about an article they read entitled “The Editors Protecting Wikipedia from AI Hoaxes.” Since our Wikipedia editing group meets Wednesday nights on Zoom, we decided to take a look and see if we could come up with a lesson plan for teaching students to recognize AI-generated content when they see it in Wikipedia.

We read, experimented, and chatted for a few hours, trying to figure out what would be most helpful. Ultimately, we did not construct a lesson plan, but we have a set of burgeoning ideas and thoughts about approach. We look forward to collaborating with other members of this community to move forward, as needed.

Overall, while we see that some fallacious AI-generated content is making its way into Wikipedia, as it is into so many sources, we do not yet feel there is evidence that it is currently causing particular danger to information quality within Wikipedia.

A very quick, vastly informal review of the literature investigating the quality of Wikipedia content reveals other themes entirely. Overall content-quality checking was common in the late 2000s/early 2010s. At that time, most researchers found that Wikipedia tended to be fairly high quality, often higher than potential users perceived it to be. Over time, the understanding of “quality” and the research on Wikipedia have moved more into questioning the same issues we question in more traditional research sources: identity-related gatekeeping – who is included, who is excluded, and how the identities of editors and of the creators of cited source materials impact the completeness of coverage on a given topic. As in the early days, articles that get more traffic tend to measure up well when quality checked (e.g., anatomy), meaning that more obscure articles (and, I would argue, those less used by students for schoolwork) have a greater chance of retaining misinformation and errors. One study that looked closely at hoaxes reminded readers that, as of 2016, Wikipedia editors running “new article patrols” meant that 80% of new articles were checked within an hour of posting, and 95% within 24 hours.

Thus, a significantly larger issue facing Wikipedia today is the substantial fall-off in the number of editors in recent years, which means that page patrolling and other quality-supporting behaviors are also suffering. This is a very real issue. 

On the bright side, there are many more tools that help editors doing quality-sustaining work figure out where problems lie. I get notified whenever a page I (or my students) have worked on is edited, and when the changes are malicious, the vandalism has usually been corrected in the few minutes it takes me to get to the page to check it. While one of the first lines of defense – the “recent changes” page and its sophisticated, bot-driven advanced search – does not yet have a set of choices for suspected AI-created content, I am guessing that we will see that option before too long. Here is how editors can currently filter the list of recent changes, and from the vandalism training I did I observed that the bigger problems tend to be dealt with extremely quickly:
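(For the curious: the same kind of filtering is also available programmatically through MediaWiki’s public API. Here is a minimal sketch – my own illustration, not an official Wikipedia workflow – that pulls recent changes carrying a particular edit tag; the tag chosen here is just one example.)

```python
# Minimal sketch: fetch tagged recent changes from the public MediaWiki API.
# The endpoint and parameter names are real; the specific tag is an
# illustrative choice, not a recommendation.
import requests

API = "https://en.wikipedia.org/w/api.php"

params = {
    "action": "query",
    "list": "recentchanges",
    "rcprop": "title|timestamp|tags|comment",
    "rctag": "mw-reverted",  # only show edits carrying this tag
    "rclimit": 25,
    "format": "json",
}

resp = requests.get(API, params=params, timeout=30)
resp.raise_for_status()
for change in resp.json()["query"]["recentchanges"]:
    print(change["timestamp"], change["title"], change["tags"])
```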

Ultimately, given that genAI content is showing up in so many places, there is no reason to suspect Wikipedia any more than, say, content in many of our databases. In fact, depending on the type of database, articles may have fewer eyes on the lookout for problematic content than does Wikipedia. Certainly, the high-profile Elsevier case and the growing use of AI in our “trusted” news outlets suggested to our editing group that we do not so much need to warn students off of Wikipedia as we need to teach them about the overall changing information landscape and how to work within it.

Here is our brainstorm of potential topics that we might integrate into our teaching that address the increased use of genAI in all sources and in Wikipedia:

Teach about:

– Critical reading of all potential source materials, including – but not limited to – Wikipedia 

– Recognizing AI-created content

– Identifying what on Wikipedia is “good information,” or learning when to use and not use Wikipedia

– Understanding that AI may be one of several factors adding a level of inaccuracy to Wikipedia, and is one of many things editors watch out for with regularity

– Teaching about ethics of academic honesty

– Teaching about ethics of AI

– Teaching about AI and academic honesty

AI more generally:

– How do we recognize AI content?

– Google search now has AI-generated responses to queries; does that make Wikipedia less relevant in our students’ information lives?

Wikipedia:

– Are there patterns on Wikipedia that are repeated with AI-generated content?

Wikipedia:WikiProject AI Cleanup/AI Catchphrases is a wonderful source that records a number of phrases that may appear in AI-generated content, as does the Wikipedia:WikiProject AI Cleanup main page.

Category:Articles containing suspected AI-generated texts – Wikipedia

– There have been instances where text has appeared on Wikipedia pages that even our group members who have almost no knowledge of generative AI recognized immediately, such as the (long-ago fixed) page on the I Ching:

that even gives itself away quite explicitly:

– What are positive uses of AI on Wikipedia? (examples: helping with grammar, helping with sources, flagging possible vandalism)

– Look, as a class, at Wikipedia:WikiProject AI Cleanup and follow links to read and discuss the various impacts of AI on Wikipedia, and possibly extend that learning to other types of sources as well

– How do Wikipedia reviewers recognize vandalism?

– How quickly is Wikipedia “cleaned up” after an issue is flagged?

– How quickly is AI “cleaned up”?

– Look at recent changes page

– What are Wikipedia’s rules regarding AI-generated content?

– Does AI-created content violate the “No original research” rule? (based on Village Pump article)

So, we apologize that this is kind of a quick-and-dirty set of thoughts without many clear answers. Once more, however, we were all in agreement: Wikipedia appears no more riddled with AI-generated disinformation than other types of information sources, so learning to assess the quality of whatever you are reading is key.

A new way to record (and share) library statistics

Of course one can track stats on various elements of library life…but what kind of audience and attention do they actually receive?

In July, I wrote about embracing joy — tracking everyday joyful experiences with a simple quilting, paint, or paper craft project — as imagined and shared by Kitty (@nightquilter), the founder of the Quilt Your Life Crew aspirational data visualization project. In addition to tracking joy, members of the group pick something to track for a period of time (usually a year) and we support each other in designing an effective visualization. The visualizations can be simple patterns or more complicated tangible or abstract designs. A favorite of mine tracked what kinds of tacos a quilter ate over the course of a year.

I tend to track something from my work life. For the 2023-2024 school year, I chose to track the library’s instructional collaborations:

A brief legend of what each square represents. This pattern, “Renew” by @jitterywings, was perfect to convey a very complex data set. To see a lengthier legend, click here.

I worked hard to complete and compile the sixty-nine blocks of the quilt face and also the legend before returning for our new school year. (I have been told to communicate that construction lasted through 10 audiobooks + 7 seasons of the Great British Baking Show + a weekend-long quilting retreat + a live SF Giants game + the summer Olympics.)

In our opening days, before students officially returned to campus, I displayed the front and back of the quilt outside the library.

The work paid off! Many of my colleagues stopped to look it over, to try to identify their block(s), and — in at least two cases — note: “Oh, I did not have you in my class very much, did I?”

And now? I am in their classes on a regular basis this year. I believe that the collaborations displayed here (across all departments and all grades) normalized, for some colleagues, the idea of having research skills instruction.

Another fun outcome is that the colleague from maintenance who helped me hang the quilt commented that it might be helpful for him to make a visualization of the work orders he undertakes for the school. I offered to help him (though not to make a quilt), and am looking forward to the rather unique collaboration that will spring from that conversation.

Of course, being a librarian, I felt it important to cite my sources, and I think I will try to do this for any quilt using new fabrics in the future! (Most fabrics have an edge, called a “selvedge,” that gives the title, creator, and manufacturer of the fabric.)

The bibliography for the quilt, showing what fabrics and pattern were used in its construction.

Not everyone can — or wants to — spend a gazillion hours making a quilt, but as the Quilt Your Life Summer Joyfest has proven, there are many ways to undertake such a visualization. Pick a medium that works for you! It is, however, helpful to find a nontraditional form of visualization that will engage your colleagues and make them want to stop, look, and engage.

What might you want to track for public consumption? How might you like to construct a data visualization?

“Just teach the databases”: Better responses than eye-rolling?

A recent conversation with a colleague about that perpetual, once-a-year “collaboration” request for “just a quick introduction to databases” made me reflect carefully on why I don’t really get that particular gem of an assignment anymore.

This colleague had just received that same ask and felt saddened – as it did not resonate with what she thought students actually needed.

So, we began discussing what skills her particular students do need to move forward in the world, and then we began plotting a “database lesson” that would deliver one of those skills instead. The process reminded me of a closely held principle I’ve had since before entering school librarianship: what we teach is mostly thinking skills; any technical skills will need to be about flexibly adapting to change over time and across tools, in any event.

This is where I began to reflect on strategies I used in the early years at my school when this was a frequent instructional request. Now, I do teach the basic intro in ninth grade (and my colleague in sixth). Otherwise, whenever I was asked to teach databases, I instead taught a skill that was useful in a broad range of research situations. Of course, we used the databases to practice, so I was delivering on my colleague’s desires. These lessons include, but are not limited to:
*How search tools work (I’ve pivoted to using Stephanie Gamble’s Lego method, far superior to my prior attempts);
*Mind mapping pre-existing knowledge to expose potential search terms;
*Using stepping stone sources (reading for useful search terms);
*Imagining sources (for example: most newspaper articles on sports do not mention the name of the sport, but tend to mention team names; articles on psychology do not tend to use the word “psychology” – unless it is in the journal title – but instead refer to specific conditions and possibly the subject group tested);
*Close reading of non-fiction to determine POV;
*Accessing multiple perspectives;
and so forth.

I have recently realized that this approach not only delivers skills to my students that are more flexible across their needs, but it has also demonstrated to my colleagues the greater range of what I have to offer. It has led to many fewer requests for “just the databases,” and to colleagues coming in the door looking for more meaningful, applicable (and less repetitive) engagements.

What I Mean When I Say Information Literacy

When I arrived at my current school months before Covid, I was told that the only department that had traditionally collaborated with the library was the history department. This was shared in a self-evident way–the history classes were the only ones that did research. My gut reactions were 1) I/the library can collaborate on more than traditional research, 2) surely there is research happening in other classes, and 3) my goal is to start collaborating with more departments. So, I started reaching out to department chairs to come pitch the library in department meetings. Some chairs were happy to let me have some time. Others were friendly but skeptical in the “we don’t do research” kind of way. 

When I said “library” or “information” or variations of that, all anybody could hear was “formal research.”

And then, on this very day in 2020, we started teaching virtually and my goals and priorities were radically altered. Which is how I found myself not fully revisiting my goal of building stronger relationships with all departments until this past fall. With a new chair in our Arts Department, I reached out again and heard a similar response – our arts classes are performance/product oriented: the chorus sings, the ensembles play, the theater students act, the photography students take pictures, etc. – so they don’t really need instruction from the library. Of course, my librarian brain could think of loads of ways our arts students use information and need information literacy, but what I realized, in this case and others, is that something kept getting lost in translation. When I said “library” or “information” or variations of that, all anybody could hear was “formal research.”

https://xkcd.com/1576/

To remedy this I’ve taken a two-pronged approach. The first step has been to address the semantics challenge. Starting with our Vice Principal for Academic Affairs, I’m working to develop a broader, shared understanding of information literacy (IL), drawing on the ACRL Framework. We also discussed how to develop a mutual understanding of IL so that faculty can start to see how they already teach IL within their disciplines, as well as possibilities for collaboration that they had not considered before. Next, I will be joining a department heads meeting to explain IL, and later in the spring I will do a mini-PD at a faculty meeting. When our faculty and I are speaking the same language, we will be able to have more productive conversations and, hopefully, collaborations.

The second step is a targeted approach of pitching hypothetical IL lessons to teachers and departments who don’t expect to have a need for library instruction. A fruitful example from this fall turned into a two-day collaboration with our Advanced Photography class. I approached the Photo teacher and asked if/to what extent her class discussed ethical use of images, particularly in light of the spread of AI image generators, or how students are copyright holders of the images they take. By offering an idea that I saw as a potential intersection of IL and the work the photo students were doing, we were able to design a teaching collaboration. In the first class period, I introduced students to copyright, their rights as copyright holders of the photos they create, Creative Commons licenses and how to include those on works they share online, and how to understand some of the issues in determining ethical ways of engaging with other people’s images. On our second day, we discussed the impact of AI on the authority of photographs in photojournalism and the bias in AI image generators. This collaboration would never have developed if we had stayed at the misunderstanding of library=research.

By recognizing this bottleneck in library outreach, I have been able to take steps to build a shared understanding among our faculty about the broader possibilities of what the library can mean for them and their students. But shared understanding is only one step. By offering new ideas of how to build students’ IL skills in their own disciplines, I have helped faculty start to see what that broader definition of “library” can look like in their own classes. These demonstrations of non-research information skills in action are already starting to spread roots in departments, opening doors to new collaboration opportunities by showing, rather than just telling, what teaching our students IL can really include.

What lessons do you teach outside the traditional research projects? How have you engaged with less obvious (to them) classes or departments?

Bringing Sources into Conversation: Teaching Literature Review to High School Students (Part 2)

As I mentioned in November, I have become a huge fan of having students read and write literature reviews before heading off to college. Working with students in those upper-level electives that use scholarly sources, I have found that they completely misinterpret what that section of a paper is doing and how they are meant to interact with it. More importantly, I find that literature reviews help with the basic and highly specific skill-building for which alums express appreciation when they transition to college. In addition, I now have several highly collaborative colleagues (in our AP-equivalent Advanced Topics Statistics, Biology, and History Research and Writing classes) who collaborate on teaching how to build lit reviews, and who also invite me to hang around as students work and involve me in draft reading, feedback, and assessment.

For my first several years at this school, AP/AT Statistics was the only class that undertook functional literature reviews, and the teacher made time available some years for me to come in and teach students what a lit review was before they wrote it. So, I had several opportunities to experiment. I will admit that, in part, this process has gotten easier as students have had an increasing number of years building relationships with me prior to my appearing for this lesson (in year two, students stared at me stony-faced over a sample lit review about whether dogs feel jealousy, and in year three the lit reviews on women and swearing got the same response – in years nine, ten, and eleven, the same lit reviews go over very well among my gender-diverse girls school students, because they are unsurprised that I plumb the Ig Nobel award-winning papers for funny, readable, and informative examples).

In any event, over the years I found some methods that worked better than others at teaching students particular skills inherent in lit review writing, but I still found the outcomes of student work quite inconsistent. No matter how I explained the basic building blocks of lit reviews, not all students seemed to get it – or, at least it took more, one-on-one discussion over time to drive the concepts home. So, this year I took on a new approach – and this one seemed to yield much stronger results.

What is a lit review?

This year, I did not tell students what lit reviews are for or how they are organized. Working in pairs or table groups, students read sample lit reviews. Each student would have a different paper. Their task was to compare, discuss, and answer: 

1. What job is the lit review doing? and 

2. What are the building blocks of lit reviews? 

We would then work to synthesize their observations as a class, which gave the classroom teacher and me opportunities to add observations, clarify details, answer questions, and correct misconceptions. We always pause to look at an example of a sentence that addresses a single study and one that reflects on several studies that arrive at similar findings.

We do this work on paper — lots of annotating takes place, and we want them focused — so most students had their computers closed. One student took notes for the whole class to refer back to as they worked (examples). I also gave them Assiya’s (my dedicated Lit Review Research TA) FAQ that I shared back in November, of course!

Creating conversations

In the second round, students looked for signs of “conversation.” How could you tell that authors are bringing sources into conversation with each other? What words did they use to demonstrate a conversation was taking place? Students discovered signal phrases – a concept I learned from The Harker School’s Lauri Vaughan – and transitions in their texts, and I gave them hard copies of the transitions template from They Say, I Say, and a handout on signal phrases with lists of sample verbs. 

(Sidenote: I get these documents into the hands of students every chance I get. They really help students to bring sources into conversation. A former Research TA and I analyzed multiple grade-levels of History writing from the same cohort of students, looking for how they were using evidence and hallmarks of strong skills. We found that precise and varied verb selection was at least highly correlated with good use of evidence. Since then, I encourage those students who do not naturally jive with synthesizing from multiple sources to let verbs lead their way; it is really helpful for them to pull out the list and just ask themselves which fit what they are seeing: are these sources contradicting? building upon? supporting? advocating for? Classroom teachers love that students use more variety than “said….said….said.” I encourage students to keep these docs next to their computers for reference whenever they are working to bring multiple sources into conversation.)

I do not know why I did not try this method years ago. Clearly, having students observe for themselves and puzzle out the “rules” of lit review was so much more effective than telling them.

Organizational schema

The final step of the lesson, which I have used for the last eight years or so, was to give students a set of notecards and have them practice organizing lit reviews based on different prompts. (I have two sets I use, here and here.) For each set of cards, I have three questions, and students work in their groups to pile notecards into the paragraphs they would create to answer each. For dogs, the questions this year were:

  1. Do dogs feel jealousy only over “their person,” or any person?
  2. Do dogs distinguish between social and non-social recipients of their person’s attention?
  3. What method is most effective for testing secondary emotions in dogs?

For each of these questions, most of the studies conveyed on the cards could be used in a lit review. However, for each of these questions, how the sources would be grouped would vary. A lit review might be organized thematically, methodologically, chronologically, etc. This exercise reinforces the idea they discovered earlier in the class that lit reviews are not “serial book reports” (a paragraph going into depth on each source) but synthetic documents.

I’ve come to love working with students on lit reviews, and feel quite passionate about the feelings of agency and accomplishment that they engender. Do you collaborate on any lit review instruction or creation? How do you approach this work?

Lessons with Legos

One of my favorite teaching tools is a box of Legos. I’ve built several lessons around Legos, and it is a guaranteed way to get my upper school students excited about a library session. The lesson I’m sharing here is one I use with 9th graders. The objective is to have students understand what a controlled vocabulary is, how it works in the context of searching, and how that applies to LOC Subject Headings and subject searches.

The set-up: I pre-sort my Legos into standard bricks and irregular pieces, providing a pile of standard bricks, randomly, to each student (or small group, depending on the student:Lego ratio). I tell them we are building a database of Legos and solicit volunteer input to arrive at a definition of what a database is. I then give students about 4 minutes to decide with a partner/small group how they will categorize their Legos so we can search our database to find the right bricks.

Depending on the space that I have, students may write their categories on the board as they discuss, or share them out after and I will write. Typically they offer categories like color, shape, size. For each, I press a bit further and we get lists like:

  • Color
    • Red, green, blue, white, yellow
  • Shape
    • Square, rectangle
  • Size
    • Number of studs (yes, that’s what the bumps on Legos are called)
    • Stud dimensions (1×2, 2×2, 2×4, etc.) 
    • Short or tall (in Lego lingo this would be plate or brick)

Next, we try “searching” our database. I’ll call out a search and the students will push forward their “results” on their desks. I start easy with things like “red” or “square.” I point out how they can combine things “red AND 2×2” and bam, we get the brick we want. 

But, as librarians we know it’s not so easy to search and get what you want, so I point out that there are, in fact, three different shades of blue in my Lego set and that I may do a search for “turquoise,” which, based on what we established as a class, is not an option: zero results. This creates the opportunity to discuss the challenges of controlled vocabularies for searchers – if I don’t know the language used for the colors, my search for turquoise will leave me thinking there are no results for me, when there are a lot of turquoise Legos; they are just called blue. So, do we keep it broad and say I should just search for blue and then sort through all the blue results to find the ones that are turquoise, or do we want our Lego database to specify what our three different shades of blue should be called? And will that always help? What if I call the lightest shade turquoise but they call it “light blue” or “sky blue”? And how would I know what words to use? When we work through it like this, students catch on quickly. At this point, I let them build a creation from the bricks they have as we plow forward.
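(For readers who like to tinker, here is a toy sketch of the lesson’s Lego “database” in Python – my own illustration, not part of the lesson materials – showing how a controlled vocabulary that only knows “blue” makes a search for “turquoise” come up empty.)

```python
# Toy model of the Lego "database" with a controlled vocabulary:
# the field values below are the only terms the vocabulary knows.
bricks = [
    {"color": "blue", "shape": "rectangle", "studs": "2x4"},
    {"color": "blue", "shape": "square", "studs": "2x2"},
    {"color": "red",  "shape": "square", "studs": "2x2"},
]

def search(**terms):
    """Return bricks matching every search term exactly (AND logic)."""
    return [b for b in bricks
            if all(b.get(field) == value for field, value in terms.items())]

print(search(color="red", studs="2x2"))  # one result: the red 2x2 brick
print(search(color="turquoise"))         # [] - "turquoise" is not in the vocabulary
```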

New information gets created all the time, so our database expands – I give them a few more Legos from the bits set aside earlier and we upload this new data into our system. We quickly hit complications. How, for example, am I supposed to search for a wheel when our data structure doesn’t have a way to do that – wheels are not square or rectangular, and they don’t have studs. Or how would we find a sloped piece? Or other irregular pieces? My goal here is for them to see that, while imperfect, adding more specific category titles for our blue issue seemed like a fairly simple fix; if we try to come up with names and categories for all the irregular shapes, the vocabulary gets unwieldy and it becomes even more confusing to know what to call things. How we choose to include information, label it, and organize it impacts how it is used.

Now I introduce LOC Subject Headings and how that language can be obscure, biased, and difficult to find as a novice searcher. But also, knowing how information is labeled and organized helps you know how you can search for it, as well as how some questions may not be readily answered by the way information is organized. We do exploratory searching in our catalog (we use AccessIt) so I can show them how to find the Subject Headings in their search results, that those headings are clickable links that rerun the search, and how to backtrack to the stem if a subject is too specific.

The best part is I get to do a lesson on searching that engages my students without relying on walking them through searches projected on the board, and it connects to the ACRL Frame, Searching as Strategic Exploration, through the knowledge practices: understand how information systems are organized in order to access relevant information; and use different types of searching language (e.g., controlled vocabulary, keywords, natural language) appropriately.

Knowing the author of a source matters: Gilmore Girls explains why

Anna Birman is a (graduated) senior and Research Teaching Assistant at the Castilleja School Library. She has spent the past two years observing and teaching research lessons to understand how middle school students best learn about media literacy, databases, and citations. She has been developing lesson plans such as this one based on those experiences.

From my collaboration with my school librarians, I hear that it can be frustrating when lesson plans are not met with the same enthusiasm my librarians feel about them. Personally, I enjoy drawing connections in class to TV shows. In this presentation, I use a pop culture theory about the popular 2000s TV show Gilmore Girls to illustrate how an author or narrator’s point of view can affect the way the reader understands the source. Gilmore Girls (2000-2007) follows the everyday lives of the fiercely independent single mom Lorelai and her studious teenage daughter Rory, living in the eccentric small Connecticut town of Stars Hollow. The theory states that the reason Lorelai and Rory’s behavior seems so different in the 2016 reboot A Year in the Life is that the original series is narrated by Rory herself, while the reboot is told by an omniscient narrator. Lorelai and Rory did not change; the narrator did, and that made all the difference. Looking at the author in the context of SOAPA – subject, occasion, author, purpose, and audience – can help enhance our understanding of a source, because the world view of the author impacts the evidence used and the conclusions drawn in the source.

Building Knowledge in the Age of AI

I was tempted, but this blog post was not written by AI or any chatbot, one that loves me or not. But this piece is all about AI and its implications for librarians and education. It seems we can expect a flood of texts written by AI from now on. The question is: how reliable will they be? Will the programs pull from authoritative sources?

As of now, AI has no access to the “invisible internet” of database resources or to print books that have not been digitized. Nor does it have materials uploaded after 2021. When these programs scan sources, how will they determine the value of the sites? Just look for similar language and phrases? These questions have important consequences: for example, a recent Nature article noted that scientists were fooled by such texts.

The increasing usage and acceptance of AI presents challenges and new opportunities. Perhaps the most important skill our students will need going forward will be to assess the accuracy and relevance of texts. Yesterday, for example, the International Baccalaureate (IB) program announced that it would accept AI-generated material if cited properly. Matt Glanville observed that “When AI can essentially write an essay at the touch of a button, we need our pupils to master different skills, such as understanding if the essay is any good or if it has missed context, has used biased data or if it is lacking in creativity.” So, assessing content will be vital. Glanville states, “These will be far more important skills than writing an essay, so the assessment tasks we set will need to reflect this.” This approach is fine as long as students have time, at school and at home, to acquire this content in the age of distraction.

Emphasizing skills rather than content has become a trend lately. Memorizing facts is seen as boring and unnecessary. The idea is that students should learn the skills to “do” history and science like the professionals. Content could be learned later, or just by “googling” something as the need arose. But if you don’t have a solid foundation of basic facts, how can you judge the credibility of AI-generated content? Will readers take the time to assess each fact? Of course, these demands were present with human-generated content, but now the need is greater. Perhaps it will help that the National Council of Teachers of English is placing greater emphasis on reading nonfiction.

Of course, the role of librarians is clear: acquire and highlight noteworthy, human-authored background content and nonfiction so that students can build this important reservoir of background knowledge to draw on when they encounter new texts, regardless of who or what created them. Encourage the idea that reading for information can be fun, especially if connected with previous knowledge and interesting facts. It will be essential in a world dominated by texts produced in 5 minutes by AI.


Research Season is Here

For me, the third quarter of the school year is my Research Season. Teachers of course assign small research projects all year long, and I work with them on most of those, but this time of year is when we do the big US History Research Paper. This is the biggest research project that many of our students do in their high school careers, and it is also the project where I get to collaborate the most with the teachers who teach it. Each year, we take a look at the results from the previous year, and what we’ve learned in professional development opportunities that year, and make any changes to the process that we think will help our students learn the process of research better. We’ve been tweaking this project together, year by year, for 7 years now, and here are 2 recent changes that we feel have made a positive impact.

The Synthesis Matrix

For several years we tried to incorporate an annotated bibliography into the project, but the students never quite understood it or its place in the research process. Students would find things that had something to do with their topic in order to write the annotated bibliography entry, but when they started actually writing the paper, they would often need to find all new sources because they hadn’t been paying attention to how the sources answered their research questions. Then, in 2021 at the AASL conference, I attended a session that talked about using a synthesis matrix as an alternative to an annotated bibliography. We added it to the project last winter with great success.

Image from University of Arizona Global Campus Writing Center, https://writingcenter.uagc.edu/synthesis-matrix

In a synthesis matrix, you place the research questions or themes in the top row, and then add each source down the side of the grid. For each source, you answer how it fits each of the research questions/themes across the top, leaving a blank if that source doesn’t fit one of your questions. Our students create their synthesis matrix as soon as we start looking for sources and fill it in as we go. If a source is blank across all of their questions, they discard that source and keep looking. It helps students see right away that just because a source talks about the Civil War doesn’t mean that it’s useful for their specific research. It also helps them see which of their research questions aren’t addressed with the sources they have so that they can tailor their future searches for those questions. As a personal bonus, I end up with fewer freaked-out students who suddenly don’t have enough sources the day before the paper is due.
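(To make the structure concrete, here is a hypothetical miniature matrix for a Civil War paper – the sources and questions below are invented purely for illustration:)

```
Source                  | Q1: Causes of the war?   | Q2: Soldiers' experiences?
------------------------|--------------------------|----------------------------
Soldier's letter (1862) |                          | firsthand account of camp life
Newspaper editorial     | blames economic tensions |
Modern monograph        | argues political causes  | cites soldiers' diaries
```

A source whose row is blank across every question gets set aside, and a question whose column is mostly empty tells the student where to aim their next searches.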

Free Research Goals + 1 Minute of Knowledge

Both of the following tips came from the AISL community in some way, and they go hand-in-hand. Shoutout to Erinn Salge, who got this tip from Dave Wee and then shared it on the list-serv – every time you have students do free research in class, set a goal for students to reach by the end of class. You could do this as an exit ticket, or like Erinn you could work with teachers to add it into the classroom participation for the day. I usually just have students tell me something they found. For example, in 2 recent biography projects, students had to tell me an interesting fact about their chosen person at the end of class.

For the US history research paper, I’ve combined this with the 1-minute goal from William Badke’s Research Strategies, a book that several of us read together last spring in a discussion group (it’s worth a read, though none of us agreed with everything Badke says). Badke points out that you need a working knowledge of a topic before you can dive into full-on research, and a rule of thumb for what constitutes working knowledge is being able to talk about a topic for 1 minute without repeating yourself. Today, we are exploring possible topics for the US history paper, and students are reading reference sources about whatever topic/s they’re interested in. The students’ daily goal is to be able to talk about their potential topic to a partner for 1 minute; if they run out of things to say, they know that they need to read a bit more. This is all taking place before students even turn in their topic proposals, so by the time we start looking for primary sources, students should have a decent working knowledge of their topic.