Mid-Year Check In

Back in August, I shared my 3 big goals for the year – weeding, dynamic shelving, and signage. Now that we’ve reached the end of the semester, I thought I’d share my progress on these goals.

Goal 1: Weeding

As of Tuesday, the weeding is finished! I worked my way through all of our fiction genres one by one, starting with the criteria that a book had to have been purchased more than 8 years ago (just before I started at Webb – not sure I’m emotionally ready to admit fault in my own purchases yet) and that it hadn’t been checked out since the library moved to the upper school building 5 years ago. I pulled a lot of series this go-round that I had tried to keep together during past weeds – I kept the first 3 Wheel of Time books but gave away the other dozen in the series, and all of the Princess Diaries got the boot since the first one went missing last year. There were also a few that I kept after all because I think they’ll get some traction once they’re front facing.

I’ll admit, this step would have gone more quickly if I’d had more access to boxes and more places to store them (I lost my work room in the move), but I think a semester is pretty good timing for a solo librarian. Most of the books were shipped to ThriftBooks a few days ago; they will handle the selling and share the profits with us. The others are going to our local Friends of the Library, and those few boxes are the only thing standing between me and totally clear shelves.

Goal 2: Dynamic Shelving

This goal was almost entirely dependent on the weeding happening first to make room for more front-facing displays. I started by moving our year-round state book award display to a free-standing shelf, which gave me an entire stack of extra shelves to work with. So far, I’ve shifted and re-organized Adventure, Fantasy, Historical Fiction, and Humor, and all the extra space from that former display has now been used up. The dynamic shelving worked really well in Fantasy because there were tons of series to stack together; I even ended up with more free space in Fantasy than I started with! But in Historical Fiction, where there are very few series, all of the extra space was eaten up really quickly. Humor ended up exactly where it started, which isn’t going to give me much room when I get to our largest genre, Realistic Fiction. I might have to go back and rework some of these first sections later.

I also realized very quickly that I didn’t have enough bookends to make books front-facing, which is what Kelsey Bogan recommends in her blog series about how to do dynamic shelving. I ordered more from Demco – these are the ones I like – and have already used up all the new ones too. I think I need to find a cheaper option for this project, because I actually like the way they work better than a traditional book display stand. They keep the book much more upright than a display stand does, which seems to put less stress on the spine, especially for hefty tomes like the Harry Potters and Game of Thrones.

The shelves look very busy, but students do seem to be more engaged with what’s on them. I love that series are very clearly together. I think this might even help to speed up shelving, or at least help students make fewer mistakes when replacing books they decided not to check out after all. I haven’t had any real feedback from anyone yet, but I’m inviting teachers to a drop-in “grab a book and coffee” on the half day before break starts (thanks for the idea, Lee De Groft!), and I’m interested to see what they say.

Goal 3: Signage

And this is where the progress ends, because I haven’t thought any more about signage since I wrote that blog post. It’s still on my list, but it will have to happen entirely next semester. If you love your signage, especially for nonfiction, I’d love to see it!

So that’s where I’m at midway through the year – 1.5 out of 3 – but that’s why we make annual goals, right? How are you doing on your goals this year? Anything AISL can help you with?

Oh, and remember that evil printer I mentioned in August? It’s still up to its hijinks and hasn’t worked 2 consecutive days yet this year. Would I be farther along in my goals if it worked reliably? I fear we shall never know…

The Joy of Museum Exhibition Design

Well, we are deep in History Final Projects season once again! I have an inbox full of research assistance requests, so I am going to share the joyous discovery I made through a new collaboration this year.

One of our History teachers moved to 9th grade, taking the lead on what is basically a completely new curriculum for that class. About half of this term has focused on ethnic studies, mostly using historical thinking skills to weigh multiple perspectives and doing close reading of primary source material. The teacher came to me with the idea of having students make museum exhibits, feeling there was something about the storytelling, the human experience, that she wanted students to strive to capture with their final project. Their topic is a social movement from an ethnic group living in California in the 20th century or earlier.

Of course, that gave me a perfect rabbit-hole to go down! The available materials on exhibition design are wonderful, and I quickly discovered that it offers a fantastic framework for developing and demonstrating many historical thinking skills. Not only does it allow students to choose a voice for their writing, but the strictures and low word counts push back against the writing-in-curlicues to “sound smart” that so often plagues our 9th graders.

Some of the excellent sources I drew upon for lesson planning included guides from the Smithsonian and the BC Museums Association. (NOTE: Debbie Abilock just pointed me to the classic work that almost every guide I read referenced, but the Internet Archive was down when I was designing the project.) Students will be making a digital exhibit with ThingLink. (Here is my very in-process example.) I’ll be vulnerable and tell you that I am due in class now, even though I have citations missing from pretty much everything at this point. The links here should point you to most of what I used to build the curriculum, if not the exhibit.

In any event, here is our first draft of class materials. I’m very, very open, as always, to ideas and feedback. (Please NOTE: We decided we absolutely want to keep this project in the curriculum, and we would definitely take more days to do it next year.)

I’m off to support storytelling about the Chicano Tattoo Movement, gospel music in Los Angeles, and more!

It’s time for Library Olympics!

Greetings, all, and happy almost-end of the year. As we wind down these final weeks, the students’ and teachers’ brains are all but toast, so for our final visits with grades 3-6 we host our annual Library Olympics: a series of games reviewing skills and concepts covered throughout the semester (we do one before Winter Break and one before Summer Break). Below are a few of the events we play, remixed and revamped every year. Enjoy!

Mini-Shelves

We play this game to review organizational strategies. Each table gets 7-8 books, and students simply rearrange them in call number order. (Fun fact: there is ALWAYS one team who arranges the books in reverse order.)

Catalog Relay

So that I don’t have twenty-four 9-year-olds storming the stacks, this is a true relay: a small stack of cards is placed on each table, with a title, author, or subject for each student to search in the catalog. One student from each table scoots to the shelf and brings back their book, then tags a table-mate, who retrieves their book, and so on until either time or cards run out. We play this game to review organizational strategies and location & access.

Codebreaker Word Scramble

This game involves a bit of prep on my part, but it sure is fun (and it usually takes me longer to explain than it does for the kiddos to actually play). It reviews text features, and if we have time, students must find the correct book on the shelf first (location & access). Each book has a corresponding worksheet, and students must use the table of contents, glossary, and/or index to find the correct words. One letter from each word corresponds to a blank on the whiteboard (prepped beforehand), and once every team has filled in their letters, the whole message unfolds.
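For fellow puzzle-preppers, here is a minimal sketch (all words and the message are invented for illustration) of the answer key I otherwise build by hand: each blank in the whiteboard message gets one letter from one answer word.

```python
def build_key(secret_message, answer_words):
    """Assign each blank a (word, letter position) pair,
    e.g. blank 3 = letter 1 of 'VOLCANO'."""
    key = []
    remaining = list(answer_words)
    for blank, letter in enumerate(secret_message, start=1):
        for word in remaining:
            if letter in word:
                key.append((blank, word, word.index(letter) + 1))
                remaining.remove(word)  # one worksheet/book per blank
                break
        else:
            raise ValueError(f"no remaining answer word contains {letter!r}")
    return key

# Example: a 4-letter message needs 4 answer words.
for blank, word, pos in build_key("LAVA", ["GLACIER", "PYRAMID", "VOLCANO", "ISLAND"]):
    print(f"Blank {blank}: letter {pos} of {word}")
```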

Kahoot!

No Library Olympics would be complete without a Kahoot! game. Depending on the grade level and concepts covered, these questions can range from simple catalog screenshots (“Is this book available?”) to fill-in-the-blank questions (“A digital subscription source with articles edited by experts is a ____”). Madness ensues. It is spectacular.

What about you? What games / activities do you use to review concepts with your students? Thanks for reading!

On Collaboration and Success….

I love my job. I began my career as a university librarian and soon realized that the part of my job I liked best was the student and faculty interactions, but I was not able to spend as much time as I wanted in those areas. I became a school librarian almost by accident and can honestly say that it was the best move I could have made. The connections that I make with colleagues and students genuinely bring me joy. After a tough couple of years (professionally), I am thrilled to have this feeling again in my work life.

As I shifted to serving in the middle school building full time, collaboration with faculty was a primary focus. After several years of my being only a part-time presence in the building, the faculty was not used to having a librarian available daily. Last year was a building year that is paying off in 2024-25. During the first quarter, I collaborated with 7 teachers and taught a total of 60 class sessions. In addition to a few one-day introductory classes, I spent multiple days with sixth grade English and Science classes laying a foundation for larger projects in future months. One major success was working with a social studies teacher who had long been convinced that they did not need any assistance from the library regarding resources or information literacy skills. For the first time in my 8 years working with middle school staff, I finally scheduled a one-day collaboration with this teacher – and it was a success!

I also count my connections with students as a success. During my time in the middle school, three of my children have been students in the building, and I do my best to be sensitive to the potential embarrassment of having your mom as one of your teachers. As a member of Gen X, I find myself moving into the “get off my lawn” phase of life and am trying not to become too curmudgeonly. In one of my recent English lessons, I jokingly used Gen Z slang to open the lesson – to the slight embarrassment of my current 7th grader. The students called me on the cringe factor of using their language, but also valiantly attempted to teach me the correct syntax and phrasing. As I walked through the hallway later in the day, I overheard a couple of students say that I was the G.O.A.T. and that they couldn’t wait to see what I do in my next lesson. During lunch duty yesterday I was informed that my “drip is slaying today!”, which I understand to be a compliment. 😉

As I continue to build the library program at the middle school level, it is easy to focus on the disappointments or the times that administration does not understand my duties.  Refocusing my attention on the connections that I am building with staff and students reminds me of the reasons I chose this career. I remain thankful for all of these connections built through learning.

Indexing Knowledge

Up until recently, I was fine with acknowledging that, while our students don’t know how to use a print encyclopedia (it’s just sad to watch, really), searching in our online reference is what they will mostly use now and in the future, and that keyword searching is fine for that.

Then, while doing an activity with some of my 9th grade history classes that asked them to look in a print encyclopedia, I realized that the added research value of a good index is something that is missing from the digital reference we have. For example, if you look for maroons in our Encyclopedia of Latin American History and Culture, the index provides not only the page numbers for several entries, but also the cross-reference cimarrónes and the “see also” terms Palenque, Quilombo, and Miskitos. The thorough researcher who ventures to those index terms as well will find valuable information. Under the Miskitos heading is a subheading for “enslavement of African fugitives” that isn’t otherwise referenced in the “maroons” entries, and quilombo leads to subheadings for the specific community of Palmares.

For digital subject encyclopedias in history, we use InfoBase’s History Research Center. Taking my search online, “maroons” yielded 69 results that are encyclopedia entries, with the leading entries being “maroons,” “Maroon’s rebellion,” “Maroons in America,” and “Brazilian maroons,” before continuing on with headings that don’t contain the word maroon. None of the 69 results includes the entries for “quilombo” or “Palmares,” which the print index pointed us to, despite HRC containing articles for both. Throughout this process, there are no suggested terms provided the way an index cross-references terms and nests them within larger headings. Reading through the individual articles, you will come across those terms, but they are not flagged, linked, or otherwise marked as associated. What HRC does provide is a box of unstructured tag clouds for each entry, and it is unclear how those are determined.

As a side-note, I tried asking ChatGPT for help with the prompt: I’m interested in researching maroons and maroon communities. What are some other terms I should use for searching, beyond “maroon?” ChatGPT provided 20(!) additional terms to search, some of which would be too broad on their own to get me good results (“Creole societies,” “African diaspora,” “guerrilla warfare,” “underground resistance,” and “ethnogenesis”) and some that mirrored the cross-references and subheadings from the print encyclopedia index (“Cimmarones,” and “palenques.”) However, Quilombo, Palmares, and Miskitos did not appear. 

Reference sources, to me, require the ability to use an index, but indexes are incredibly helpful in other works as well. While Command-F and other “search within” features will find words in your text, these searches can lead to at least three less-than-desirable outcomes:

* An over-abundance of results if the term is ubiquitous to your topic,
* A dearth of results because you have the wrong term,
* An incomplete set of results because there are places where your topic is discussed without using the search term.

A good index ameliorates these issues by directing you to where the topic is discussed regardless of the terms used in that part of the text, by providing alternate terms, and, where the topic requires nuance, by offering more specific subheadings.
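To put the contrast in concrete terms, here is a minimal Python sketch (pages, passages, and headings invented, loosely modeled on the maroons example above) of why full-text matching and a curated index behave so differently:

```python
# "Search within": literal matches of the exact term only.
text_pages = {
    12: "The cimarrones built fortified settlements in the mountains.",
    47: "Palmares, the largest quilombo, resisted attack for decades.",
    88: "Maroons traded salt and cloth with the Miskitos.",
}

def full_text_search(term):
    return [p for p, passage in text_pages.items()
            if term.lower() in passage.lower()]

# A human-curated index points to the topic even where the word
# "maroons" never appears, and suggests alternate and narrower terms.
book_index = {
    "maroons": {"pages": [12, 88],
                "see also": ["Palenque", "quilombo", "Miskitos"]},
    "quilombo": {"pages": [47], "subheadings": {"Palmares": [47]}},
    "Miskitos": {"pages": [88],
                 "subheadings": {"enslavement of African fugitives": [88]}},
}

print(full_text_search("maroons"))  # [88] -- misses pages 12 and 47
print(book_index["maroons"])        # pages plus cross-references to follow
```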

To pull back out of the weeds here, a good index provides advantages to a researcher that are distinct from full-text searching. And yet, my 9th graders came in with no idea what an index is, much less how to navigate and leverage one. Convincing them of the nuanced advantages of an index when they are so accustomed to full-text searching may be a challenge, particularly as I myself was ready to capitulate to the ease of digital reference sources (which I still prefer for their currency in many cases).

My take-away isn’t necessarily that I need to keep print reference works just so students can practice researching with an index. Rather, now that I’m aware of the skill gap and reminded of the important role a good index can play in effective research, I can target my instruction to specifically teach students how to use an index and why they will want that skill. I’ll also bear this gap in mind when working with students on identifying keywords, recognizing that they may need to work harder at it without the useful suggestions of an index at the pre-research stage. Beyond the instructional elements, I’ve also realized an important feature to look for and advocate for in our digital reference sources, so we don’t lose something useful in the translation from print to digital.

Citation Puzzles

One of my orientation sessions with our 5th graders covers citations. To start, we talk about what citations are, why it’s important to cite your sources, and what sort of information citations include. While I mention that formatting is important, I tell them we won’t worry about that today since our NoodleTools software formats for us. Instead, we focus on finding the elements of citations for different types of sources, because different sources need different types of information. I want them to come away from the lesson knowing that: 1. You cite your sources to give credit to the creator, and so other people can find your sources; 2. Different types of sources require different types of information in citations.
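For that second takeaway, here is a minimal sketch of the idea in code; the field lists are my own simplification for illustration, not NoodleTools’ actual forms:

```python
# Different source types require different citation elements.
REQUIRED_ELEMENTS = {
    "print book": ["author", "title", "publisher", "year"],
    "website": ["author", "page title", "site title", "URL", "date accessed"],
    "database article": ["author", "article title", "source title",
                         "date", "database name"],
}

def missing_elements(source_type, filled_in):
    """Return the blanks a student still needs to find."""
    return [field for field in REQUIRED_ELEMENTS[source_type]
            if field not in filled_in]

# A student who found only the author and page title of a website
# still has three blanks to go.
print(missing_elements("website", {"author", "page title"}))
# ['site title', 'URL', 'date accessed']
```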

I love to gamify my lessons, so I start by dividing the class into two teams. They then compete to assemble a twelve-piece puzzle featuring a blank country map and to correctly identify the country. I created the puzzles by printing out country maps that were blank except for capital cities. Then, I colored in the selected country to distinguish it from the surrounding countries, laminated the maps, and cut them into twelve symmetrical pieces. I’m sure there’s a more impressive, less craft-y way to do it, but my handmade puzzles work pretty well!

To earn puzzle pieces, each team must fill in twelve forms, each guiding them to a different source. They may work together or individually. The source is listed at the top of the form (which, to exactly no one’s surprise, they rarely read), and I provide blanks for the specific information needed for that source. After years of struggling with computers that wanted to update right in the middle of a lesson, wouldn’t log on, lost database access, etc., I threw up my hands and now use only printouts and print books.

I post or set the materials around the classroom, grouped by type: print books, website pages, eBook pages, database article pages, and online encyclopedia pages. On the printouts, I highlight the labels for the information the students need, though they still struggle with it. Especially in the last couple of years, I’ve found that students can’t identify a magazine or newspaper title on the database article printouts, even highlighted: I need to clarify that “source” = magazine/newspaper title for my chosen articles.

Once the students have completed a form, they bring it to me to check. If they have completed it well, I give them a puzzle piece for their team. If they have missing or incorrect information, I send them back to find it. They are welcome to help others on their team. Once they have completed the forms, they work on completing the puzzle, and looking up the capital city to identify the country. I always hand out the piece with the capital on it last!

This lesson has held up pretty well, though I do find myself making it easier and more obvious each year, mostly to streamline the game and ensure we finish on time. I did notice that this year the students struggled more with the puzzle than in years past, and they also struggled in a session that involved filling out a blank library map. I wonder: is students’ overall spatial awareness declining? In any case, I don’t expect them to remember details from this lesson. If they come away knowing what a citation is, that they need to cite their sources, and that different sources require different information in citations, then they have completed their first step on the citation road. How has your teaching of citations changed in the last few years? Please share in the comments!

Do you have that pink book about Rosa Parks? On “impossible” questions becoming “possible”

I was fully looking for something cheerful to post about today, and it turns out that “cheerful” in this instance means finding a use for something about which I have historically felt little enthusiasm: the new-ish top “result” in Google search.

When I worked at Google, one of the realizations I had revolved around questions that we librarians had a tendency to (among ourselves) view as “stupid.” First among those was asking for a book by the color of its cover. Essentially, we felt it was an unreasonable question because it was one we could not answer. (Also because people remember green books as red and yellow books as blue, but I don’t yet have a solution for that problem.) Sometimes, technology allows us to solve a problem, as I discovered when I went to try to understand what use color filtering in image searching could really be.



Well, this morning I was grappling with a question and I decided to try using Google AI to answer it, and look what happened:


(Screenshot: asking Google’s AI to tell me in which databases to find The Atlantic and JAMA in full text)

Are these responses complete? Completely correct? Did I burst into flame from typing a long-form question into a search box? The answer to each of these questions may well be “no.”

Nonetheless, I think about all the times I wished I knew which databases to search to find x source, and I was pleasantly surprised to have this tool to try to help me.

So – hope this brings some joy or at least ease to your week. Take care, and search on!

Predictability, Personal Curation, and a Radio Station

I’ve continued to build on my “Christina’s Highlights 2024” playlist since my last post. Yet several times this fall, when music was playing in the background while I worked, a seemingly unfamiliar song would disrupt my focus. I’d turn to my phone and scroll through Spotify looking for the title highlighted in green. But there was none. Like any librarian, I turned to the random wisdom of the Internet, where a targeted search showed that hundreds of Threads and Reddit users shared my frustration.

Vindication. Except, is it?

I know that I’m typically solidly in the “majority” camp for adopting new technology, so I try to step back and ask myself whether I dislike something because it’s new or because I actually don’t like it. Or perhaps it’s just semantics.

To me, a playlist is a personal creation. Radio/stations are where I want to be introduced to new music in keeping with a genre or theme. I’ve spent hundreds of hours curating my personal playlists; they’re not where I’m looking to meet new music. But so many words have changed over time. Technology makes it easier than ever for meanings to morph and spread and morph again.

The AI Overview, however, agrees with me that playlists are human-created and radio stations are driven by algorithms. Not that I exactly trust the Overviews for accuracy, but I do tend to note when responses match what I already believe.

This leads me to my bigger takeaway about how the information landscape has changed since I graduated from library school, back when people were already asking about the value of the degree when information had become so much more readily available.  

Theoretically, there is a world of music within arm’s reach most of my waking hours. In practice, I’ve just gone deeper into the same genres I’ve been listening to since middle school. Ironically, most of the music that’s intrigued me outside of those genres in recent months has been discovered through the radio. The Radio radio. Shaboozey, anyone?

Even when the radio is on in the car, for most stations I can now see the artist and title as songs play. (I can’t be the only one who remembers a childhood of waiting through commercial breaks for DJs to announce a new artist or title.) Perhaps this is why I am so much better at music trivia or games like Podquiz now, with a visual representation of what I’m hearing constantly reflected on my screen. But there’s also still so much I don’t remember. That I don’t know. Or that I didn’t even realize I didn’t know. Simply having information available doesn’t mean we access it or that we can remember it elsewhere.

I feel like there is a never-ending promise from technology companies that improved access to information will improve human quality of life. Frankly, sometimes that’s overwhelming. That’s why it can be easier to retreat to the playlists we know, the ones created by our labor, the ones that don’t distract us while humming away in the background. But there is something to the serendipity of the radio/station: the opportunity to discover something we didn’t know we’d like but that has been curated for us by an outside source, whether it’s a librarian or an algorithm. Just not through Smart Shuffle on my playlists.

AI Poem Comparison Lesson

At the start of the year, I have eight thirty-minute sessions with the fifth grade to introduce them to the library and its resources. This year, I decided to swap out one of our sessions for an intro to AI. I started with a fifteen-minute lesson from Common Sense Media, but for the other half, I wanted to try something I read about in a Knowledge Quest article. The article’s author showed students a human-written poem and an AI-written poem and discussed the differences. As I could not locate the article, I decided it would be appropriate to have ChatGPT write the lesson plan as well as the poems, and I would adapt as necessary. Here is the lesson plan I ended up with:

Lesson Plan: Comparing AI-Written Poems to Human-Written Poems
Grade Level: 5th Grade
Duration: 15 minutes

Objective:
Students will compare and contrast a human-written poem and an AI-written poem that explore similar themes, examining the style, tone, and emotional depth of each.
Materials:

1. Two short poems expressing similar themes (dreams)
   - One human-written (“The Dream Keeper” by Langston Hughes)
   - One AI-generated poem
2. Printed copies of both poems for each student, or the poems displayed on the board/screen
3. Chart or Venn diagram for comparison

Lesson Breakdown:

1. Introduction (2 minutes)
   - Teacher: “Today, we’re going to look at two poems: one written by a person and one created by a computer program, called AI (Artificial Intelligence). We’ll compare them to see how similar and different they are.”
   - Briefly explain what AI is and how it can be used to write poetry.
     - This is generative AI.
     - I gave it the prompt to write a poem in the style of the author of the human-written poem, of about the same length, and with the same theme.
2. Read the Poems (3 minutes)
   - Do not reveal which is the human-written poem.
   - Read the first poem aloud, encouraging students to listen for emotions, imagery, and tone.
   - Read the second poem aloud. Ask students to listen carefully for similarities and differences compared to the first poem.
3. Group Discussion (4 minutes)
   - Ask students to share their initial thoughts on both poems:
     - “Which poem do you like better? Why?”
     - “What feelings or pictures come to mind when you hear each poem?”
     - “Do they sound similar or different? How?”
4. Comparison Activity (4 minutes)
   - Teacher: Lead students in filling out a comparison chart or Venn diagram.
     - Similarities: themes, structure, words, or phrases used.
     - Differences: tone, emotional depth, word choice, or whether one feels more “natural” than the other.
   - Prompt with questions like:
     - “Do both poems talk about the same idea in the same way?”
     - “Does one poem feel more personal or emotional?”
     - “Which one uses more descriptive or creative words?”
5. Vote (1 minute)
   - Have students close their eyes and vote on which is the human-written poem.
   - Reveal how they did.
6. Closing (2 minutes)
   - Summarize the discussion by highlighting that AI can write poetry, but it may sound different from human writing because AI doesn’t experience emotions the way people do.
   - Encourage students to think about how both human creativity and technology can create interesting things.

Assessment:
- Participation in the discussion and completion of the comparison chart or Venn diagram.

For the poems, I chose a Langston Hughes poem, since the 5th grade teacher loves Langston Hughes and shares his poetry with the students. I directed ChatGPT to write a poem in the style of Langston Hughes with the theme of dreams, and no longer than ten lines.
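(I used the regular ChatGPT interface; for anyone who prefers to script the experiment, a minimal sketch with the OpenAI Python SDK might look like the following, with the model choice being arbitrary.)

```python
# A hypothetical sketch of the same request via the OpenAI Python SDK.
# (I actually used the ChatGPT interface; the model name is an arbitrary
# choice, and an OPENAI_API_KEY must be set in the environment.)
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": ("Write a poem in the style of Langston Hughes, "
                    "on the theme of dreams, no longer than ten lines."),
    }],
)
print(response.choices[0].message.content)
```

Here are the two poems: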

A Dream on the Wind

By ChatGPT

I heard a dream whisper, soft and low,
Telling me where I ought to go.
“Climb,” it said, “the mountain high,
Touch the stars in the endless sky.”

But when the day brings shadows near,
Dreams fade, held back by fear.
Still, the dream calls, steady and strong—
A hope, a wish, a waiting song.
I rise, I run, I will not stay—
For dreams will lead me all the way.

The Dream Keeper

By Langston Hughes

Bring me all of your dreams,
You dreamers,
Bring me all of your
Heart melodies
That I may wrap them
In a blue cloud-cloth
Away from the too-rough fingers
Of the world.

When asked which poem they preferred, the students almost universally chose the AI poem, citing its flow and rhyme. We didn’t get to the Venn diagram as there wasn’t board space for it, and the teacher stepped in to help the kids see the differences between the two. The kids were quite perspicacious, once they really focused. However, about half did not identify the human-written poem when we voted.

For the next class, I decided to change out the poems so that the teacher wouldn’t be familiar with the human-written poem, either. I found a short poem called “Grandfather” by Syrian-born poet Mohja Kahf, and directed ChatGPT to write a poem of similar length, with similar themes, in the style of a modern Middle Eastern poet. While I cannot reprint her poem, as it is still copyrighted, you can find it in The Space Between Our Footsteps: Poems and Paintings from the Middle East, selected by Naomi Shihab Nye (Simon & Schuster Books for Young Readers, c1998). Here is the AI-written poem:

Whispers of Time

By ChatGPT

I hold time in my hands like sand,
slipping through each creased finger—
you see it in my eyes, the years,
the suns that rose and fell.

I was once like you,
running toward a horizon too far to catch,
but now, I sit with the dusk,
listening to the world slow.

I wish I could gift you the patience
to wait for the wind to tell its story—
but youth is fire, burning for tomorrow,
while I learn to love yesterday.

Again, most of the class preferred the AI-written poem to the human-written poem, and actually, the teacher did too, and said he felt very conflicted about that. After we discussed the poems, though, almost everyone identified the human-written poem. Specificity was one commonality I noted in both of the human-written poems, as opposed to the AI poems.

While it was a successful and interesting lesson, I find that I, too, am conflicted about the results! If any of you have tried a similar experiment, I would love to hear about it in the comments.

Unpacking AI and Wikipedia quality

With gratitude for the collaboration of Amy Pelman (Harker), Robin Gluck (Jewish Community High School of the Bay), Margi Putnam (Burr and Burton Academy), Hillel Gray (Ohio University), Sam Borbas, Cal Phillips, and a special librarian whose name we will not share due to their work.

Thank you so much to Alex for posting to our listserv about an article they read entitled “The Editors Protecting Wikipedia from AI Hoaxes.” Since our Wikipedia editing group meets Wednesday nights on Zoom, we decided to take a look and see if we could come up with a lesson plan for teaching students to recognize AI-generated content when they see it in Wikipedia.

We read, experimented, and chatted for a few hours, trying to figure out what would be most helpful. Ultimately, we did not construct a lesson plan, but we have a set of burgeoning ideas and thoughts about approach. We look forward to collaborating with other members of this community to move forward, as needed.

Overall, while we see that some fallacious AI-generated content is making its way into Wikipedia, like it is in so many sources, we do not yet feel there is evidence that it is currently causing particular danger to information quality within Wikipedia.

A very quick, vastly informal review of the literature investigating the quality of Wikipedia content reveals other themes entirely. Overall content-quality checking was common in the late 2000s and early 2010s. At that time, most researchers found that Wikipedia tended to be fairly high quality, often higher than potential users perceived it to be. Over time, the understanding of “quality” and the research on Wikipedia has moved toward the same issues we question in more traditional research sources: identity-related gatekeeping – who is included, who is excluded, and how the identities of editors and of the creators of cited source materials affect the completeness of coverage on a given topic. As in the early days, articles that get more traffic tend to measure up well when quality-checked (e.g., anatomy), meaning that more obscure articles (and, I would argue, those less used by students for schoolwork) have a greater chance of retaining misinformation and errors. One study that looked closely at hoaxes reminded readers that, as of 2016, Wikipedia editors running “new article patrols” checked 80% of new articles within an hour of posting, and 95% within 24 hours.

Thus, a significantly larger issue facing Wikipedia today is the substantial fall-off in the number of editors in recent years, which means that page patrolling and other quality-supporting behaviors are also suffering. This is a very real issue. 

On the bright side, there are many more tools that help editors doing quality-sustaining work figure out where problems lie. I get notified whenever a page I (or my students) have worked on is edited, and when the changes are malicious, the vandalism has usually been corrected in the few minutes it takes me to get to the page to check it. While one of the first lines of defense – the “recent changes” page and its sophisticated, bot-driven advanced search – does not yet have a filter for suspected AI-created content, I am guessing that we will see that option before too long. Editors can currently filter the list of recent changes in a host of ways, and from the vandalism training I did, I observed that the bigger problems tend to be dealt with extremely quickly.
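For the curious, the same feed is also available programmatically; here is a minimal sketch against the public MediaWiki API (the endpoint is real, while the parameter choices are mine):

```python
# Pull the same "recent changes" feed that editors patrol on-site.
import requests

resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "query",
        "list": "recentchanges",
        "rcprop": "title|user|timestamp|comment|tags",
        "rcshow": "!bot",   # hide bot edits, as the on-site filters can
        "rclimit": 10,
        "format": "json",
    },
    timeout=10,
)
for change in resp.json()["query"]["recentchanges"]:
    print(change["timestamp"], change["title"], change.get("tags", []))
```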

Ultimately, given that genAI content is showing up in so many places, there is no reason to suspect Wikipedia any more than, say, the content in many of our databases. In fact, depending on the type of database, articles may have fewer eyes on the lookout for problematic content than Wikipedia does. Certainly, the high-profile Elsevier case and the growing use of AI in our “trusted” news outlets suggested to our editing group that we do not so much need to warn students off of Wikipedia as we need to teach them about the overall changing information landscape and how to work within it.

Here is our brainstorm of potential topics that we might integrate into our teaching that address the increased use of genAI in all sources and in Wikipedia:

Teach about:

– Critical reading of all potential source materials, including – but not limited to – Wikipedia 

– Recognizing AI-created content

– Identifying what on Wikipedia is “good information,” or learning when to use and not use Wikipedia

– Understanding that AI is one of several factors that may add a level of inaccuracy to Wikipedia, and is one of many things editors watch out for with regularity

– The ethics of academic honesty

– The ethics of AI

– AI and academic honesty

AI more generally:

– How do we recognize AI content?

– Google search has AI generate responses to queries; does that make Wikipedia less relevant in our students’ information lives?

Wikipedia:

– Are there patterns on Wikipedia that are repeated with AI-generated content?

Wikipedia:WikiProject AI Cleanup/AI Catchphrases is a wonderful source that records a number of phrases that may appear in AI-generated content, as does the Wikipedia:WikiProject AI Cleanup main page; a toy catchphrase scanner appears after this list

Category:Articles containing suspected AI-generated texts

– There have been instances where text has appeared on Wikipedia pages that even our group members who have almost no knowledge of generative AI recognized immediately, such as the (long-ago fixed) page on I Ching, which gave itself away quite explicitly

– What are positive uses of AI on Wikipedia? (examples: helping with grammar, helping with sources, flagging possible vandalism)

– Look, as a class, at Wikipedia:WikiProject AI Cleanup and follow links to read and discuss the various impacts of AI on Wikipedia, and possibly extend that learning to other types of sources as well

– How do Wikipedia reviewers recognize vandalism?

– How quickly is Wikipedia “cleaned up” after an issue is flagged?

– How quickly is AI-generated content “cleaned up”?

– Look at recent changes page

– What are Wikipedia’s rules regarding AI-generated content?

– Does AI-created content violate the “No original research” rule? (based on Village Pump article)
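To make the catchphrase idea above concrete, here is the toy scanner mentioned earlier; the phrase list is a small sample of my own choosing, modeled on the kinds of tells collected at Wikipedia:WikiProject AI Cleanup/AI Catchphrases:

```python
# Flag sentences that contain suspected AI-generated boilerplate.
AI_CATCHPHRASES = [
    "as an ai language model",
    "as of my last knowledge update",
    "i hope this helps",
    "it is important to note that",
]

def flag_suspect_sentences(text):
    """Return (phrase, sentence) pairs for sentences with a known tell."""
    hits = []
    for sentence in text.split("."):
        lowered = sentence.lower()
        for phrase in AI_CATCHPHRASES:
            if phrase in lowered:
                hits.append((phrase, sentence.strip()))
    return hits

sample = ("The I Ching is an ancient divination text. As of my last "
          "knowledge update, scholars disagree about its origins.")
print(flag_suspect_sentences(sample))
```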

So, we apologize that this is kind of a quick-and-dirty set of thoughts without many clear answers. Once more, however, we were all in agreement: Wikipedia appears no more riddled with AI-generated disinformation than other types of information sources, so learning to assess the quality of whatever you are reading is key.