Canva, AI, and the biases baked into everything

Welcome back to “thinking out loud with Sara.” Today, and most days, I’m thinking about biases within AI-generated content.

One of my summer projects was to create some materials to support faculty in their use of Canva with students. As part of that, I wanted to explore some of the new generative AI tools that Canva has introduced.

Before I started exploring, I heard a story on There Are No Girls on the Internet about Canva’s text-to-image tool flagging the prompt “black woman with bantu knots” as possibly resulting in unsafe or offensive content. This article from People of Color in Tech covers the story in more detail – and I highly recommend reading it. 

Since I’m already a day late with this post, I’m just going to share some images from my initial searches and offer the same reflection prompts I hope to give students:

What do you see? What does it make you think? What do you wonder?

All images below are from prompts run on July 26, 2023.

Canva seems to be getting better at producing images, but I’ve run this prompt many times, and it has yet to produce an image of a Black woman with actual Bantu knots.

These are all concerning (but not that surprising) in different ways. The search that really surprised me, though, was this one:

What? I reached out to Canva support about this, but was unable to get past canned responses to my questions and concerns.

As I started writing this post, I decided to try again and see whether Canva had addressed this. And I actually got results!

Then I decided to push my luck…

My response to this is probably not appropriate for a professional blog.

There’s a lot here to discuss with faculty and students, obviously, and there is a part of me that’s grateful to have such clear examples of bias in generative AI to use in conversations. But we all know that bias is not always this obvious – and it is easily missed if we’re not consciously looking for it. How do we equip ourselves and our students to be on the lookout for these things? How do we craft prompts that account for these possibilities? How do we put the brakes on the rush to use generative AI while acknowledging that it is going to play a significant role in our lives? I don’t have good answers, but I know I need to keep asking these questions.

What are you wondering about when it comes to bias in generative AI? What questions are you asking? What questions are your students and faculty asking?
