A Move for ‘Algorithmic Reparation’ Calls for Racial Justice in AI

Supporters of algorithmic reparation suggest taking lessons from curation professionals such as librarians, who’ve had to consider how to ethically collect data about people and what should be included in libraries. They propose considering not just whether the performance of an AI model is deemed fair or good but whether it shifts power.

The suggestions echo earlier recommendations by former Google AI researcher Timnit Gebru, who in a 2019 paper encouraged machine learning practitioners to consider how archivists and library science have dealt with issues involving ethics, inclusivity, and power. Gebru says Google fired her in late 2020; she recently launched a distributed AI research center. A critical analysis concluded that Google subjected Gebru to a pattern of abuse historically aimed at Black women in professional environments. The authors of that analysis also urged computer scientists to look for patterns in history and society, in addition to data.

Earlier this year, five US senators urged Google to hire an independent auditor to evaluate the impact of racism on Google’s products and workplace. Google did not respond to the letter.

In 2019, four Google AI researchers argued the field of responsible AI needs critical race theory because most work in the field doesn’t account for the socially constructed aspect of race or recognize the influence of history on data sets that are collected.

“We emphasize that data collection and annotation efforts must be grounded in the social and historical contexts of racial classification and racial category formation,” the paper reads. “To oversimplify is to do violence, or even more, to reinscribe violence on communities that already experience structural violence.”

Alex Hanna, the paper’s lead author and one of the first sociologists hired by Google, was a vocal critic of Google executives in the wake of Gebru’s departure. Hanna says she appreciates that critical race theory centers race in conversations about what’s fair or ethical and can help reveal historical patterns of oppression. Since then, she has coauthored a paper, published in Big Data & Society, that confronts how facial recognition technology reinforces constructs of gender and race that date back to colonialism.

In late 2020, Margaret Mitchell, who with Gebru led the Ethical AI team at Google, said the company was beginning to use critical race theory to help decide what’s fair or ethical. Mitchell was fired in February 2021. A Google spokesperson says critical race theory is part of the review process for AI research.

Another paper, by White House Office of Science and Technology Policy adviser Rashida Richardson, due to be published next year, contends that you cannot think about AI in the US without acknowledging the influence of racial segregation; the legacy of laws and social norms built to control, exclude, and otherwise oppress Black people is too influential to set aside.

For example, studies have found that algorithms used to screen apartment renters and mortgage applicants disproportionately disadvantage Black people. Richardson says it’s essential to remember that federal housing policy explicitly required racial segregation until the passage of civil rights laws in the 1960s. The government also colluded with developers and homeowners to deny opportunities to people of color and keep racial groups apart. She says segregation enabled “cartel-like behavior” among white people in homeowners associations, school boards, and unions. In turn, segregated housing practices compounded disadvantage, or privilege, in education and generational wealth.

Historical patterns of segregation have poisoned the data on which many algorithms are built, Richardson says, such as for classifying what’s a “good” school or attitudes about policing Brown and Black neighborhoods.

“Racial segregation has played a central evolutionary role in the reproduction and amplification of racial stratification in data-driven technologies and applications. Racial segregation also constrains conceptualization of algorithmic bias problems and relevant interventions,” she wrote. “When the impact of racial segregation is ignored, issues of racial inequality appear as naturally occurring phenomena, rather than byproducts of specific policies, practices, social norms, and behaviors.”
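Richardson’s argument about poisoned data can be made concrete with a toy simulation. The sketch below is hypothetical and not drawn from her paper: it invents a loan-approval history in which applicants from two neighborhoods have identical incomes, but the recorded decisions encoded redlining. A model trained on that history reproduces the gap even though race never appears as a feature.

```python
# Hypothetical sketch: how labels shaped by segregation bias a model
# even when race is never a feature. All names and numbers are invented.
import random

random.seed(0)

def historical_decision():
    # Two segregated neighborhoods; income is drawn from the same
    # distribution in both.
    neighborhood = random.choice(["A", "B"])
    income = random.gauss(50_000, 10_000)
    # The labels encode redlining: applicants from the historically
    # redlined neighborhood "B" faced a stricter bar.
    bar = 45_000 if neighborhood == "A" else 60_000
    return neighborhood, income, income > bar

history = [historical_decision() for _ in range(100_000)]

# The simplest "model" trainable on this history: per-neighborhood
# approval rates, which is what a classifier effectively memorizes
# when neighborhood is its most predictive feature.
for hood in ("A", "B"):
    outcomes = [approved for n, _, approved in history if n == hood]
    print(f"Neighborhood {hood}: {sum(outcomes) / len(outcomes):.0%} approved")
```

With identical income distributions and no race column, the replayed history still yields a wide approval gap, which is exactly Richardson’s point: absent the historical context, the inequality looks like a naturally occurring property of the data.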
