
Promise of New Technologies against the Peril of Racial Bias

Updated: Feb 28, 2021

By Amanda Luz, Jeremy Pineda, Loren Crone, Stephanie Hilton


Introduction: When AI fails to recognize humanity, it is time to face it

Oprah Winfrey is one of the most recognizable faces in the United States. As an American talk show host, television producer, actress, author, and philanthropist, she is considered an iconic Black woman, alongside other successful figures such as Michelle Obama and Serena Williams. However, while humans recognize these faces easily, algorithms often do not: facial analysis services from technology companies such as Microsoft, Amazon, and Google have failed to classify their faces as women's faces.


According to MIT researcher and digital activist Joy Buolamwini, this is not an isolated incident: there is gender and skin-type bias in the facial analysis technology of leading tech companies (Buolamwini, 2018). With the increasing adoption of artificial intelligence and new technologies for analyzing humans, we would like to discuss 1) what we should know about this topic, 2) what concerns the biases in new technologies raise, and 3) what creative openings civil society and governments can pursue to press for more accountable and equitable AI.


1. What should we know about algorithms and new technologies?

Because we are going to discuss structural threats posed by new information technologies, we think it is important to take a step back and understand how concepts such as "coding", "machine learning", and "algorithms" work, given their intrinsic presence in our daily lives. We chose two short videos to introduce this discussion: the trailer for the "Coding" episode of Vox and Netflix's "Explained" series, and the "Ain't I a Woman" video. In the first, Zeynep Tufekci, sociologist and professor at UNC-Chapel Hill, explains what coding is and how society's structural inequality is reflected in, and built into, the design of the technologies and data we use. In the second, Joy Buolamwini performs a spoken-word piece based on the findings of the Gender Shades project (MIT), which uncovered gender and skin-type bias in the facial analysis technology of leading tech companies. Both videos make the same underlying point: who codes matters, how we code matters, and why we code matters. In other words, a biased society creates biased software.
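To make the idea of "biased software" concrete, here is a minimal sketch, in Python, of the kind of disaggregated audit the Gender Shades project popularized: reporting accuracy per demographic subgroup rather than as a single overall number. The records and subgroup labels below are entirely illustrative, not taken from the actual study.

```python
# Minimal sketch of a disaggregated accuracy audit, in the spirit of the
# Gender Shades methodology: instead of one overall accuracy figure,
# error rates are reported per demographic subgroup.
# All records below are illustrative toy data, not study results.

from collections import defaultdict

# (true_label, predicted_label, subgroup) for a hypothetical gender classifier
results = [
    ("woman", "woman", "lighter-skinned"),
    ("woman", "man",   "darker-skinned"),
    ("man",   "man",   "darker-skinned"),
    ("woman", "woman", "lighter-skinned"),
    ("woman", "man",   "darker-skinned"),
    ("man",   "man",   "lighter-skinned"),
]

correct = defaultdict(int)
total = defaultdict(int)
for true, pred, group in results:
    total[group] += 1
    correct[group] += int(true == pred)

for group in sorted(total):
    acc = correct[group] / total[group]
    print(f"{group}: accuracy {acc:.0%} ({correct[group]}/{total[group]})")

# A large gap between subgroup accuracies signals that the overall
# accuracy number hides who the system is failing.
```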


Building on this initial assessment, we recommend the article "Wrongfully Accused by an Algorithm" (Hill, 2020) and, as optional reading, "Machine Bias" (Angwin et al., 2016) to deepen the conversation about the implications of biased AI. The cases reported in these articles show that software used to predict future criminal behavior is biased against Black individuals. New technologies are therefore not just reinforcing existing racial bias; they are enabling novel and pervasive ways of producing inequality on an unprecedented scale. In this knowledge review, we propose that the pathway does not end at making AI more inclusive and ethical. Data-driven technologies should be equitable and accountable, and should be considered not solutions in themselves but helpful tools for constructive dialogue and solutions in a biased society.
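The central statistical finding of the "Machine Bias" investigation was a disparity in false positive rates: Black defendants who did not reoffend were far more likely to be flagged as high risk. The following minimal sketch, using toy records rather than the actual COMPAS data, shows how such a disparity is measured.

```python
# Minimal sketch of the kind of fairness check ProPublica ran on the
# COMPAS risk scores: comparing false positive rates across groups,
# i.e. how often people who did NOT reoffend were flagged as high risk.
# The records below are toy data for illustration only.

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("A", True,  False),
    ("A", True,  True),
    ("A", False, False),
    ("B", True,  False),  # wrongly flagged
    ("B", True,  False),  # wrongly flagged
    ("B", False, True),
]

for group in ("A", "B"):
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in non_reoffenders if r[1]]
    fpr = len(false_positives) / len(non_reoffenders)
    print(f"group {group}: false positive rate {fpr:.0%}")

# Similar overall accuracy can coexist with very unequal false positive
# rates, which is exactly the disparity "Machine Bias" reported.
```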


2. Concerns: Racial biases embedded in new technologies & Tech solutionism

In this knowledge review, we raise two main concerns: how racial biases (among other structural inequalities) are embedded in new technologies by design, and how our cultural beliefs about technology can compound the structural threats that biased technologies pose to the rights of groups that are already under- or misrepresented in society.


2.1 New technologies are biased by design

The relationship between technology and inequality is multifaceted. As Tufekci points out, technology is neither good nor bad, nor is it neutral: "technology alters the landscape in which human social interaction takes place, shifts the power and the leverage between actors, and has many other ancillary effects" (Tufekci, 2017). Data-driven technologies are increasingly shaping all aspects of our lives around the world and determining outcomes in employment, education, health care, and criminal justice. This dynamic introduces the risk of systematized discrimination on an unprecedented scale: it reinforces existing structural inequalities, amplifies them, and creates new ways for them to harm specific groups in society.


Structural inequalities such as institutional racism are reinforced by incomplete and biased datasets. The Human Rights Council report by Special Rapporteur Tendayi Achiume (2020) explains that such datasets can be called "dirty data" because of what they tell us about contemporary forms of racism, racial discrimination, xenophobia, and related intolerance. The biases of tech creators are embedded at every level of the design process, from conception to production and distribution, and they ultimately lead to products and algorithms that reflect and reinforce gender, racial, religious, and ideological intolerance. Like all technologies before it, an emerging technology such as artificial intelligence reflects the values of its creators, so inclusivity matters at every stage: who designs the technology, who sits on company boards, and which ethical perspectives are included (Walch, 2020).
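As one illustration of how incomplete or "dirty" data enters the pipeline, the representation of subgroups in a training set can be compared against a reference population before any model is trained. The subgroup names, counts, and threshold below are hypothetical, chosen only to show the shape of such an audit.

```python
# Minimal sketch of a dataset representation audit: comparing subgroup
# shares in a training set against a reference population before training.
# Subgroup names, counts, and the flagging threshold are hypothetical.

from collections import Counter

training_labels = (
    ["lighter-skinned male"] * 700
    + ["lighter-skinned female"] * 200
    + ["darker-skinned male"] * 70
    + ["darker-skinned female"] * 30
)

reference_share = 0.25  # e.g. roughly equal subgroups in the target population

counts = Counter(training_labels)
n = len(training_labels)
for group, count in counts.most_common():
    share = count / n
    flag = "  <- underrepresented" if share < reference_share / 2 else ""
    print(f"{group}: {share:.1%} of training data{flag}")

# A model trained on this set would see darker-skinned faces far less
# often, one mechanism behind the disparities discussed above.
```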


The failure to recognize Oprah's face is not merely a matter of being recognized or not. The underlying problem is that, as companies, governments, and law enforcement agencies use AI to make decisions about opportunities and freedoms, Black people are subjected to under- or misrepresentation (Buolamwini, 2018).


2.2. Technology is neither an isolated institution nor a solution in itself

It is important to challenge "technological determinism", that is, the belief that technology can influence society but that its impact is largely insulated from social, political, and economic forces (Morozov, 2020). When biased tools are introduced into schools or the criminal justice system, social institutions already affected by racial injustice, structural racial inequality is accentuated by biased algorithms. A table by the Algorithmic Justice League illustrates the potential harms of algorithmic decision-making at both the individual and collective levels.


Additionally, it is important to challenge "technological solutionism" or "technological chauvinism" (Morozov, 2020): the tendency to treat emerging digital technologies as a panacea for all of society's current and future structural problems, such as failing political systems, hunger, and homelessness. In believing so, tech solutionists risk delegating the democratic exercise of power from representative public institutions to a few private platforms with interests of their own, which in turn deepens inequality because the structures that create it are never targeted.


3. Creative openings

In debating technology, it is easy to focus on the tools: the opaque processes of a machine learning model, or the app designed to hold our attention captive. By centering the discussion on the platforms, however, we emphasize the "new, bright and shiny things" leading our future while overlooking the ecosystem into which we are introducing these technologies. As an exercise in searching for creative openings for civil society and governments, we propose two main themes for approaching data-driven technologies:


3.1 Shifting the civil society debate from "the tools in themselves" to "their relation to ourselves"

By shifting the focus to people and communities in relation to technology, we can highlight technologies' intrinsic flaws in an open, public discussion about the growing use of data-driven technologies in our daily lives. While amplifying efforts toward inclusiveness in AI, we can also critically examine the social implications of the technologies and processes we create or use. For example, this can be done "by documenting failure cases and working with impacted communities to determine if, when, and how to advance technological innovations" (Buolamwini, 2018).


Additionally, efforts to better document datasets and models, including their performance limitations and other properties, should use "bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation", noting that "bounties for other properties (such as security, privacy protection, or interpretability) could also be explored" (Johnson, 2020).


There is also an opportunity to expand initiatives for more inclusive technologies beyond the Global North. As Branka Panic, founder of AI for Peace, points out: "likely because of the concentration of the AI talent, one of the challenges is connecting AI expertise with communities and problems around the world and investing in talent in the Global South" (Walch, 2020).


In sum, we would like to highlight the AI for Peace mission statement as a roadmap for this creative opening: "We believe that with a technology as powerful and complex as AI, constructive dialogue and engagement between academia, industry, and civil society is critical to maximizing the benefits and minimizing the risks to human rights, democracy, and human security. We want to make sure that peace-builders, humanitarians, and human rights activists are well informed, and their critical voices heard in this process".


3.2 Pressuring tech companies for accountable and equitable technologies


Reframing data-driven technologies to be equitable and accountable requires a multifaceted approach encompassing the technology platforms themselves, governments, and investigations that enhance transparency. There is a need to move beyond nonbinding principles that fail to hold developers to account (Johnson, 2020). A "post-solutionist path", one that gives the public sovereignty over digital platforms, begins by breaking the artificial binary between the agile start-up and the inefficient government that limits our political horizons today (Morozov, 2020). In other words, independently of political ideologies, we should ask: "what institutions do we need to harness the new forms of social coordination and innovation afforded by digital technologies?" (Morozov, 2020)


In some cases, such as the use of facial recognition in the justice system, the discriminatory effect of data-driven technologies will require their outright prohibition. In other cases, moderation and rights-protection measures during development and deployment could support safety-critical applications of AI and other emerging technologies. Other possibilities include open-source alternatives to commercial technologies; greater pressure from governments, researchers, and civil society for equitable practices and accountability in software; and the use of data-driven technologies themselves to create possibilities for dialogue and solutions.


4. Conclusion

The false or misconstrued assumption of machine neutrality, as well as technological solutionism and determinism, threatens the exercise of our human rights if we don't demand increased transparency and accountability. We need to significantly change how we approach the design, development, deployment, maintenance, and oversight of data-driven technologies, "and most urgently we need to have more POCs – poets of code, people of color, persons of conscience – positioned to shape the technology that is shaping society" (Buolamwini, 2018).


By shifting how we think of tools, seeing them not as innovative in themselves but in relation to how we understand ourselves as societies and individuals, we can begin to take measures, as civil society, companies, and governments, to ensure that the development and deployment of AI are beneficial, not detrimental, to our humanity.

 

This piece was prepared as a Mini Case Knowledge Review by Amanda Luz, Jeremy Pineda, Loren Crone, and Stephanie Hilton.
