Digital Alchemy - Alex Hanna on Combating AI Injustice
Moya Bailey 0:02
ICA presents.
Hello and welcome to this episode of Digital Alchemy, a production of the ICA Podcast Network. My name is Moya Bailey and I'm an associate professor in the Department of Communication Studies at Northwestern University, Director of the Digital Apothecary Lab, and today's host. With us today is Dr. Alex Hanna, whose work centers on how data used in computational technologies can exacerbate intersectional inequality. That work has led her to serve as the current Director of Research at the Distributed AI Research Institute, co-chair of Sociologists for Trans Justice, and a Senior Fellow at the Center for Applied Transgender Studies. I'm so delighted to have Alex Hanna with me today. I always like to start with: how did you get started? How did you find yourself in the work that you're doing? What made it exciting for you?
Alex Hanna 1:12
Thanks for having me, Moya. I'm so excited to be here. What a complicated question, because it first means defining what the work is and how to get there. There's a part of the work that's a political journey and also an academic journey, and I've been fortunate enough to be doing a bit of both. I can give you the history of the political work, which started with labor organizing with United Students Against Sweatshops and getting involved with my union at the University of Wisconsin. Then I also started to understand my gender more and transitioned into trans activism, working to get health benefits for government workers. I then started doing more machine learning, first as a practitioner and as a data science person, and quickly realized how this work could be used for ends that would support existing power structures, and knew that there needed to be more critical perspectives on it. So I started transitioning toward work in the field that could be considered AI ethics, or algorithmic fairness, although those are two terms I don't really like myself, because I think they really constrain or mischaracterize what we're talking about. That's how I got my start: trying to really think about what it means to talk about the social ramifications of technology and what those implications are.
Moya Bailey 2:38
Do you have a term you prefer that encapsulates the capaciousness of all the things we're talking about when we talk about ethical AI or algorithmic injustice?
Alex Hanna 2:52
Algorithmic injustice is a good one. I often talk about harms in sociotechnical systems; I think that's generally how I characterize it. Sociotechnical is still a bit of a confusing word, but it's intended to expand the frame beyond the algorithmic, which I think constrains a lot of thinking about what's happening, or beyond AI, which is envisioned to be a lot of things depending on who you ask. Sociotechnical implies that these are technical systems that interact with people in very critical and dramatic ways.
Moya Bailey 3:26
I think that really plays into the brilliance you're bringing to DAIR. I wonder if there's something you can say about what DAIR is, what it's doing, and how it's getting at these questions in ways that other conversations haven't?
Alex Hanna 3:43
So, DAIR was really born out of trial and a bit of fire. When Dr. Timnit Gebru was fired from Google after publishing a paper about large language models, it was an upsetting moment for this field that we call AI ethics, because it was really saying, "Well, if you want to really tango with this work, you have to think about what the vested interests are of corporate actors and other people in quote, unquote, AI." And so, DAIR is a bit of an experiment, which is both exciting and also terrifying. But it's great; I wouldn't change a thing about the journey so far. DAIR, the Distributed AI Research Institute, is an experiment in understanding what can happen with AI if we resituate it from a conversation about technology to a conversation about what it would mean to use technology in ways that make it work for our communities. The idea here is that it's a nonprofit, because corporations have their own issues with this, and universities have a lot of constraints as well. Timnit and I have worked in both; she's worked more on the corporate side, I've worked more on the university side. We wanted to really understand what it would mean to focus on AI that would do two things: first, focus on harms coming from AI and other sociotechnical systems, pointing those out and doing research on where they exist; and second, develop new technologies that would work for people. Our focus is on Africa and the African diaspora, the idea being that we really need to situate Black folks worldwide in thinking about AI, since so much of AI leaves Black people out of it. What DAIR brings to this conversation is really thinking about what it would mean to do both of these things and to focus on the most marginalized people in these conversations.

We started out focusing on three groups of people: refugees, data laborers, and gig workers. These folks are often really at the bottom of where these technologies are being deployed, or subject to their harms. We have people involved in DAIR who are researchers, but we also have people involved who have that lived experience. Meron Estefanos, one of our Fellows, is a longtime refugee advocate who has done advocacy for refugees escaping Ethiopia and Eritrea, typically along different routes either within Africa or to the EU, who are often kidnapped and held hostage. She has literally freed hundreds of refugees by raising funds for them and by putting things on the radar of the West. We see these technologies being deployed at the border and within refugee camps for ostensibly humanitarian ends, but it's really a testing ground on refugees, who have very few rights. In terms of gig workers, we have Adrienne Williams, a former Amazon driver and current labor organizer, who has been focusing on the use of surveillance technologies, often AI-powered, in monitoring driver motions, even driver eye contact in the vehicle. If a driver adjusts their phone while parked, it'll ding them, call them out by name, and note when they're not driving or not looking, and it adds more to their workload.
We also have a fellow, Mila Miceli, who's been working intensively with data annotators in places where they're the most precarious: in the lower-class areas of Argentina where Mila's from, as well as with data annotators who are refugees from places like Syria and are doing these annotation jobs, typically given instructions in English even though their main language is Arabic, and having a hard time interpreting them. So often, when we're in this conversation about AI, people are like, "Well, why would you hire this person?" Well, this person understands AI more intimately than any engineer, because they are experiencing this stuff firsthand and they're experiencing the outcomes firsthand. That's an incredibly important perspective that needs to be drawn into the conversation.
Moya Bailey 8:20
I love that. People aren't thinking about refugees and gig workers together; they're thinking about those as individual issues.
Alex Hanna 8:29
Exactly. It's an amazing moment of international solidarity, recognizing that these are two sides of the same coin, whether it's surveillance at the border or surveillance in the delivery car.
Moya Bailey 8:40
Given your connection to the academy, I wonder what thoughts you have on how academic researchers can support the work of DAIR, but also what kinds of things people should be teaching their students to better prepare them for addressing these issues, or thinking about them if they go into industry?
Alex Hanna 9:03
I can speak more to the latter part of the question, about academics teaching their students. We're in such a precarious job market, and I teach a class about data and power. A lot of my students are going into industry, and they're like, "What can I do here? You know, I still have to get a job." Which is legit; it's very difficult. I think the important thing to consider is: what are the real material impacts of these technologies on people? What are the things that are going to reduce violence against people? Often, within the constraints of the academy or industry, you might not have the opportunity to do work that leads to liberatory outcomes. It's a bit of a scale, from things that heighten surveillance to things that are outright weapons. For students, it's often thinking about, well, where's the line that you're gonna draw for yourself, and what's the line that you think will do some kind of mitigation and open the door for other people? What does that mean? I was having this conversation on Wednesday with my class. We were reading a book that I like to read, Dean Spade's "Normal Life". In that book, Dean Spade, who's an organizer and legal scholar, talks about liberal reforms, especially in mainstream gay and lesbian organizations, versus a critical trans politics. It was written a few years before the legalization of gay marriage. Spade talks about the legalization of marriage and how that itself becomes a pathway to things like health care or child rearing. Thinking on a different horizon: what would it mean to not tie healthcare to employment? We had a really productive conversation in which I kind of rubbed some of the class the wrong way. They're like, "Well, I feel like it's hopeless." And I'm like, "Well, we can think about this as an analytic. What gets us closer, and how can this orient your thinking? What gets us closer to a world we want to live in, and what might just be putting more money into institutions that are preventing us from getting there?"
Moya Bailey 10:58
That actually takes me to my last question, which is: what's keeping you hopeful in this moment? How are you keeping that horizon in mind, despite the fact that we do have to go to the job every day?
Alex Hanna 11:16
I mean, it's such a hard question, right? We live in such dire times. We're recording this two days after a few hundred authors wrote to the New York Times telling it to stop writing so poorly about trans people, while trans femmes are trying to get the medical care they need and many trans women of color are trying to stay alive. What gives me hope is that these days, at least from my corner of the world, there is an articulated desire to get beyond what has been considered reform. More and more people have been turning to different kinds of frameworks: to abolition, to Afrofuturism, to Octavia Butler. I think that gives me hope. I love teaching because it puts me in contact with lots of students and lots of people who are much younger than me at this point. They are really enthusiastic, they want to learn more, and they do things in such creative ways. That gives me an enormous amount of hope; I think I'm often getting more out of that endeavor than what I put in. So, talking to students gives me so much hope, and I think the best thing that I can do is to keep on teaching, to keep on opening the door for other people.
Moya Bailey 12:36
Thank you so much for taking the time to talk to me. It's been such a pleasure.
Alex Hanna 12:42
Moya, this has been wonderful.
Moya Bailey 12:46
Digital Alchemy is a production of the International Communication Association Podcast Network. This series is sponsored by the Northwestern University School of Communication. Our producer is Sharlene Burgos. Our executive producer is DeVante Brown. The theme music is by Matt Oakley. Please check the show notes in the episode description to learn more about myself, today's digital alchemist, and Digital Alchemy overall. Thanks for listening!