Discrimination in the Digits: A Conversation with Ruha Benjamin, Author of “Race After Technology”
Ruha Benjamin is a Professor of African American Studies at Princeton University and is the author of the recent book Race After Technology: Abolitionist Tools for the New Jim Code. She is the founder of the JUST DATA Lab and a Faculty Associate in the Center for Information Technology Policy, Program on History of Science, Center for Health and Wellbeing, Program on Gender and Sexuality Studies, and Department of Sociology. Additionally, Dr. Benjamin serves on the Executive Committees for the Program in Global Health and Health Policy and Center for Digital Humanities and is a member of the Institute for Advanced Study.
Dr. Benjamin’s studies of the social dimensions of science, technology, and medicine have taken her all across the globe, from San Francisco to London to Nairobi, to lecture on the relationship between innovation and inequity. She has spoken at multiple venues, including TEDx Baltimore, the 53rd Annual Nobel Conference, Data for Black Lives, and countless universities across the United States. Her work has also been featured in major news and media sources such as National Geographic, The Guardian, and NPR’s Science Friday.
Dr. Benjamin has been awarded numerous fellowships and grants from institutions including the American Council of Learned Societies, National Science Foundation, and Institute for Advanced Study. In 2017, she received the President’s Award for Distinguished Teaching at Princeton. Dr. Benjamin received her B.A. in Sociology and Anthropology at Spelman College and holds an M.A. and Ph.D. in Sociology from the University of California, Berkeley.
Arjun Jagjivan (BT): As I understand it, “Race After Technology” isn’t the only book you have written about technology and sociology; as a scholar of African American Studies, what inspired you to explore this relationship in the first place?
Ruha Benjamin (RB): Many of us think of science and technology as asocial and apolitical, as existing in a bubble removed from social and historical processes. As an undergrad, when I first started learning about the history of scientific racism and the eugenics movement, I grew more and more interested in bursting that bubble through research, writing, and teaching.
Fast forward a couple decades, and I study not only how science, medicine, and technology impact society, but also how society shapes science, medicine, and technology, so the query goes both ways. How do existing laws, norms, and structures make certain scientific and technical achievements possible and seemingly inevitable? What are the values and assumptions guiding science and technology? Who benefits and who is harmed? And can we imagine and design things differently? The fields of African American studies, sociology, and STS [science and technology studies] offer powerful conceptual tools to think about these questions from multiple perspectives.
For example, Black people have historically been the object of harmful scientific experimentation, from J. Marion Sims’ surgical procedures on enslaved Black women to the U.S. Public Health Service Study at Tuskegee to the non-therapeutic experiments on people imprisoned at Holmesburg Prison in Philadelphia. These experiences offer an incisive perspective from the underbelly of modernity which tech practitioners, policy makers, and the broader public can learn from today.
BT: As you were writing “Race After Technology,” was there anything surprising that you discovered? Based on your personal experience of this process, what do you consider the most valuable takeaway from delving into such a unique subject?
RB: What surprised me most is how dramatically the public conversation around emerging technologies has shifted in such a short time. In part due to the Facebook-Cambridge Analytica scandal surrounding the 2016 US presidential election, more people seem to question the idea that “technology is neutral.” When I first started the book, I was expecting a lot more pushback against the premise that technologies reflect the values and assumptions of those who create them. I’m happily surprised that many people I talk to today are critical of the dominant narrative that "technology equals progress" and that they want to discuss how we can design technology with equity and justice in mind.
BT: As influential and ubiquitous as the Internet is in our daily lives, it is not necessarily available to everyone in the same capacity. Do you believe that poor and racialized communities are placed at a disadvantage in terms of accessibility?
RB: Due to the COVID-19 pandemic, schools around the world have been forced to move classes online. This has exposed the glaring inequalities when it comes to access to technology. Even in a relatively wealthy town like Princeton, the public school district has to reckon with the fact that not all students have easy access to the Internet or computer devices at home. This has always been the case, but due to the pandemic, the stakes of this inequality are much higher.
BT: You’ve been a recent victim of what is known as Zoom-bombing, in which people can hijack business-related or educational video conferences to post explicit, racist, or sexist materials. Why is this happening, and what can we learn from it?
RB: As troubling as some of the examples are, technology can act like a mirror, bringing issues to light that we’ve been unwilling to address and look at clearly before; it throws it in our face... the question is, what do we do when we see it? Many people will explain it away or create excuses for why these things continue to happen rather than trying to wrestle with the [reality] of it. In my view, it offers an opportunity to look at the underlying issues, because it’s not the technology that’s creating the problem in and of itself; it’s exposing it and in some ways facilitating it, but it’s not inventing racism. What I experienced is very minor compared to the racism that is being exposed due to this pandemic, especially the targeting and violent attacks on people who are perceived to be Chinese, or the more structural forms of violence experienced by poor and racialized people who must work under unsafe conditions or who have lost their jobs and healthcare. But the latter is on a spectrum with the digital forms of harassment that I and many of my colleagues routinely experience.
BT: Technological prowess and social progress are two measurements for human advancement, yet it is unclear how much they do or do not overlap. To what extent do you think the two notions are interrelated? How much do you see this relationship evolving in the near future, and what factors do you think will be most relevant to such development?
RB: People often seem to conflate these two processes or they assume that technological prowess necessarily leads to social progress. Instead, evidence suggests that in many cases technological development exacerbates social inequalities unless those in power make a concerted effort to prioritize equity in tech development. In other words, the default setting of innovation is inequity.
I think that the average person experiences the problems with these default settings on a routine basis through social media, because we spend so much of our time online. For example, online harassment is a huge issue, and the default settings of most social media companies – and I have started to shift my language from calling them “platforms” to “media companies” because “platforms” in some ways lets them off the hook; it’s like they’re just providing the space for this bad stuff to happen, but their decisions actually facilitate it – [such as] Twitter and Facebook, the decisions that they make and what they choose to prioritize and value, facilitate racial harassment, sexist online targeting, and so on, often under the guise of free speech and ease of communication. But it’s only easy for those who are not targeted.
So I was thinking about the Zoom situation, and the default setting is that people can share their screens automatically unless the host goes in and changes that setting, and the argument on the part of this company is that [it’s] part of the ease with which Zoom operates; that’s why they are so popular, because they don’t have a lot of hoops you have to jump through... but the underside of that ease is that it makes it easy to then target people, and so for the people who are regularly targeted, their interests and concerns are not being prioritized by Twitter, by Zoom, by any of these media companies. One of the major assumptions is that there’s a kind of universal user who is not racialized, is not marked by gender, sex, or class.
I don’t focus as much on social media in my book because I think more scholarly attention has been placed there when it comes to thinking about racism and technology. What I try to do is to show all of the areas of our lives that are impacted by these default settings, by the way that we design technology, and how they are more hidden from [our] view, things that people don’t realize are happening behind their backs. The examples I draw on span healthcare, criminal justice, hiring, and education, where automated decision systems are being used to make really consequential decisions about our lives that we’re hardly aware of, and by drawing attention to it, I hope that I can raise awareness so that people can take a more proactive approach to that.
BT: In an era where diversity is highly valued as a metric of potential and success for an institution, the implementation of diversity ‘quotas’ by companies has received both praise and criticism for how it is treated in the workplace. Why do you think this notion has become increasingly common, and how does it reflect the values and beliefs of the industry?
RB: First, I think very few institutions actually use rigid quotas, even though that’s the idea that people invoke to undermine diversity efforts. In reality, most institutions celebrate vague platitudes of inclusion and equity without implementing concrete plans to hire, retain, and promote people who don’t fit the narrow image of a tech bro. The tech industry is a unique context because of powerful narratives that have been shaped by scientific racism to explain why there are few Black, Latinx, and indigenous people working in STEM fields.
Too often the presumption is that members of racialized groups lack the cognitive ability to excel. But what if we flipped the script and focused on how the tech industry lacks the social acuity to address the racism and sexism that shape these fields? The challenge is not simply attracting a more diverse workforce, but making a concerted effort not to push out existing talent because of the everyday racism and sexism that is normalized throughout the tech industry. Keep in mind, plantations were “diverse.” Diverse and hierarchical. Diverse and exploitative. Diverse and deadly. Diversity can coexist with injustice. In fact, celebrating diversity as a vague ideal can keep us from addressing deep-seated power dynamics.
BT: Building off of that, there tends to be a discrepancy in perspective between the corporate level and the employee level. To what extent can this be an issue, and how can it be resolved?
RB: It’s really important to acknowledge that the power dynamics within most companies make it difficult to address many of the issues we are witnessing, not only in terms of who sees the problem, but even once everyone is made aware of a particular problem, who’s empowered to act on it and to offer solutions. I think part of what we need right now are people in the higher echelons of these companies being willing to empower others within their organization to speak up when they notice or experience issues of inequity and injustice, to proactively reveal them rather than bury them in the interest of the bottom line or because it’s going to hurt their public image.
An example that came to light a few months ago in the late fall was when Google was rolling out its new Pixel 4 phone, and it was hiring contract workers to go and target Black homeless people in Atlanta to get their facial images, their selfies, so that they could diversify their training data so that the phone’s facial recognition system would work more readily. I was thinking about that scenario and who was in the room when that decision was made to go target a vulnerable population, given the history of racial science in the country, the history of exploiting and experimenting on Black people, in particular Black poor people. Maybe someone in that room was aware of it and thought “Hmmm, this is probably not going to go well for us,” but [I wonder] whether they even felt empowered to bring it up in that meeting, or maybe they thought that someone else with more power wouldn’t think it was that consequential. Thus the issue is not simply making people aware of problems, but the power to act on that awareness is something that I think a lot of organizations have to reckon with and really take an honest assessment of… The person with the lowest title perhaps may be the first one to notice an issue that’s coming down the pike, and if they don’t feel like people are going to take them seriously or act on it, then it really disincentivizes people from speaking up, so in part this is a question of the culture of organizations and the power dynamics within them.
BT: Datasets fed into machine learning algorithms and other parts of the tech industry carry human biases and other discriminatory factors. What are some less well-known ways that discriminatory datasets have proven harmful to lower-income and minority groups?
RB: In the last few years, there’s been a lot of attention directed towards specific arenas in which these algorithms are clearly harming people. One of the main areas is in the context of the criminal justice system, where there’s already widespread public concern about unjust policies and practices, even before the technology comes into the mix. At the other end of the spectrum, where I would draw more attention, are arenas where we are less suspecting, in part because we have more trust in the goodwill of the people who work in a particular arena, and so there I’m thinking of the healthcare system, where automated decision systems are increasingly being used. A study that came out last fall showed that a widely-used healthcare algorithm that affects millions of people throughout the country favored White patients over sicker Black patients. By using past spending to predict future healthcare needs, this digital triaging system unwittingly reproduced racial disparities because on average Black people incur fewer costs for a variety of reasons, including systemic racism. In fact, more than double the number of Black patients would have been enrolled in programs designed to help them stay out of the hospital if the predictions were actually based on need rather than cost. Race neutrality in the design of systems, it turns out, can be a deadly force.
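The proxy effect Dr. Benjamin describes can be demonstrated with a minimal simulation. The sketch below is purely illustrative and is not the algorithm or data from the study: it assumes two hypothetical patient groups with identical distributions of medical need, but where group B incurs lower costs for the same level of need (e.g., due to barriers to accessing care). Ranking patients by cost, rather than by need, then systematically under-enrolls group B.

```python
import random

random.seed(0)

# Hypothetical patients: need is the true severity of illness,
# cost is the observed proxy. Group B spends less per unit of need.
patients = []
for _ in range(1000):
    need = random.uniform(0, 10)
    patients.append({"group": "A", "need": need, "cost": need * 100})
for _ in range(1000):
    need = random.uniform(0, 10)
    patients.append({"group": "B", "need": need, "cost": need * 60})

# Enroll the top 20% of patients, ranked by the cost proxy
# versus ranked by true need.
k = len(patients) // 5
by_cost = sorted(patients, key=lambda p: p["cost"], reverse=True)[:k]
by_need = sorted(patients, key=lambda p: p["need"], reverse=True)[:k]

def share_b(selected):
    """Fraction of the enrolled patients who belong to group B."""
    return sum(p["group"] == "B" for p in selected) / len(selected)

print(f"Group B share when ranking by cost: {share_b(by_cost):.0%}")
print(f"Group B share when ranking by need: {share_b(by_need):.0%}")
```

Although both groups are equally sick, ranking by cost nearly excludes group B, while ranking by need enrolls the two groups at roughly equal rates. The groups, cost multipliers, and enrollment cutoff are all invented for illustration; the real disparities documented in the study arose from far messier data.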
BT: If you were to write a follow-up or new book after “Race After Technology,” what would be its title and subject? Would you examine any different issues or delve deeper into those previously introduced, and why?
RB: One of the books I’m working on is tentatively titled The Emperor’s New Genes: On Borders, Belonging and Biopolitics. It examines how genomics is used to arbitrate political disputes in different parts of the world. Race After Technology is more US-focused, so this next project will expand the frame and think comparatively and globally about how the life sciences reflect, reinforce, and sometimes transform existing social processes.
Also, given the profound changes underway, I've started working on a project related to this crisis. The global spread of a microscopic virus places the ravages of racism and inequity under the microscope. But predictably, some commentators, including the U.S. Surgeon General, are placing the blame for higher Black mortality from COVID-19 on Black people themselves. In contrast, I situate this data within the broader context of social inequity and the longer history of scientific and medical racism. I also highlight how Black communities are proactively engendering justice, joy, and mutual aid in the midst of crisis. Finally, I encourage deeper reflection on how the “pandemic is a portal,” to borrow a phrase from one of my favorite writers, Arundhati Roy, for creating a new world in which we can all thrive.