Wednesday, October 20, 2021

A.I.-Led Race Detection: How Socially Irresponsible Is Our Technology?

Decades ago, A.I. was still a subject mainly reserved for works of science fiction. With the onset of the fourth industrial revolution, it is now amplifying its presence in every nook and corner of the modern world. With such a pervasive presence of a technology that holds the potential to transform generations, it is important to look back at how AI has changed our lives and how it is shaping our future.

We have come a long way sailing on the waves of machine learning, but we have only just started. Other approaches, such as reinforcement learning and generative adversarial networks (GANs), are next in line to take up space in a future that seems less and less distant. Soon enough, machines may learn in much the same way as human beings.

With such an explosive impact on the present and the future of the human race, how socially responsible are these technologies towards communities, genders, and, well, races?

Turns out, we are soon going to find out the answer.

We have all heard about facial recognition technology, which is part of our daily lives through our smartphones and smart safety devices. However, this innovation has now taken a troubling turn: AI-led race detection technology.

Yes, you heard that right. The handful of people who currently have access to this technology can identify individuals in crowds, analyze their expressions, and detect their gender, age, and the facial features that signify their race. In the future, this might just become a common part of our lives.

Or it already is! Businesses are already using this technology to personalize customer experiences and evolve their marketing practices to strike a chord with consumers.

What is this race detection technology, and how does it work?

Race-detection software is a subset of facial analysis, a type of artificial intelligence that scans faces for a range of features, from the arch of an eyebrow to the shape of the cheekbones, and uses that information to infer gender, age, race, emotions, even attractiveness. This is different from facial recognition, which relies on an A.I. technique called machine learning and is used to identify particular faces, for example to unlock a smartphone or spot a troublemaker in a crowd.
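
To make the distinction concrete, here is a minimal Python sketch of what such a facial-analysis pipeline looks like conceptually. Everything in it (the feature names, the thresholds, the labels) is a hypothetical placeholder for illustration, not any real vendor's model or API:

```python
# Conceptual sketch of facial *analysis*, as distinct from facial *recognition*.
# All feature names, thresholds, and labels here are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class FaceFeatures:
    # Toy geometric measurements a real system might derive from facial landmarks.
    eyebrow_arch: float      # curvature of the eyebrow, 0.0 to 1.0
    cheekbone_width: float   # cheekbone width relative to face width
    jaw_angle: float         # angle of the jawline, in degrees

def analyze_face(features: FaceFeatures) -> dict:
    """Map extracted geometry to demographic guesses.

    A real system would use a trained classifier; these hard-coded rules
    exist only to show the shape of the pipeline, and why any such mapping
    amounts to a crude generalization.
    """
    return {
        "estimated_age": round(40 - 20 * features.eyebrow_arch),
        "estimated_gender": "female" if features.jaw_angle < 120 else "male",
        # The step critics object to: inferring a race label from geometry.
        "estimated_race": "<label inferred from geometry>",
    }

if __name__ == "__main__":
    face = FaceFeatures(eyebrow_arch=0.4, cheekbone_width=0.9, jaw_angle=115)
    print(analyze_face(face))
```

Even in this caricature, the core problem is visible: a trained model merely learns a statistical version of the same leap from geometry to demographic label.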

Factors such as makeup have a significant impact on the precision of these systems.

According to available data, facial recognition devices deployed at U.S. airports have captured and stored the facial data of more than half of U.S. citizens. This information, stored in a centralized database, can be searched freely by the authorities. In more than half of America's big cities, police are equipped with body cameras that allow facial recognition in real time.

Despite assurances about the technology's reliability and ethics, AI race detection is already bringing the possibility of aggravated racism into plain view.

The misuse of technology has already begun

In China, AI-led race detection technology is bedeviling the already persecuted community of Uighur Muslims. According to The Washington Post, the Chinese company Huawei has tested technology that immediately sends an "alarm" to Chinese authorities when it detects a Uighur face. The country is already carrying out a brutal crackdown on the community, coercing religious conversion and putting people into detention camps.

The communist regime has established a "social credit" system, which is nothing more than surveillance technology in the hands of a government. People in China are under round-the-clock surveillance in public places, their civic behavior monitored through a network of facial recognition cameras known as Skynet. Consequently, they face penalties or receive privileges based on what the government likes and what it doesn't. Payment of fines, debts, and civil infractions, even smoking in a prohibited public place, all feed into an assessment carried out by algorithms whose code is not known to the public.

A low score, which marks a citizen as having "low reliability," can lead, for example, to that person being barred from traveling by plane or train.

The growing reach of race detection in the Americas

In the United States, facial recognition software is increasingly used by the police. Yet several researchers warn that these systems exhibit a severe bias against African Americans.

Race-detection software has already been used to justify the arrest of black men who were later found to be innocent. Phone apps and social media filters encourage photo edits that let you slim your nose and lighten your skin, reinforcing discriminatory stereotypes of beauty. Facial recognition not only opens the door to mass surveillance, it also carries a prejudiced tendency against certain groups in our societies, with black women being the most affected.

According to a survey carried out by the Security Observatory Network, 90% of the 151 facial recognition arrests in Brazil were of black people. The data was collected from Bahia, Rio de Janeiro, Santa Catarina, Paraíba, and Ceará.

Data bias is another aggravating factor. More than 2,400 professors, researchers, and students signed a petition against a system created in the U.S. to predict the likelihood of people committing crimes by cross-referencing facial biometric data with criminal records. Because the technology is fed racially skewed crime data, it can legitimize violence against marginalized groups.

A failure in a race-detection computer system led to the arrest of an African-American man in Detroit, Michigan, according to a complaint that highlights concerns about a technology critics say reinforces racial bias. The American Civil Liberties Union (ACLU) said it was the first known case of an "illegal" arrest based on race-detection technology, which its detractors consider inaccurate at distinguishing African-American faces.

The mechanism of race detection technology is fundamentally discriminatory. Talk about socially responsible tech!

If you think this technology merely enables racism, here is a bigger problem: the fundamental structure and functioning of race detectors are themselves biased.

In 2016, the investigative site ProPublica conducted a major investigation into the bias of the COMPAS software, used to predict the risk of recidivism, and concluded that the software was "biased against blacks." Not explicitly, but because it took into account variables strongly correlated with "race" (understood not as a biological reality but as a social construct) in a society where there are substantial economic and social differences between the African American community and the white community.
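
How can software be biased without ever seeing race? The short synthetic example below (all data and variable names are invented for illustration; this is not COMPAS's actual model) shows how a classifier trained on a correlated proxy, such as an over-policed neighborhood, can still produce racially skewed scores:

```python
# Toy demonstration of proxy bias: synthetic data, no real-world figures.
# Race is never given to the model, yet a correlated proxy (neighborhood)
# lets the model reproduce the disparity baked into the training labels.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: group membership (0/1) is hidden from the model.
group = rng.integers(0, 2, n)

# Proxy feature: living in a heavily policed neighborhood, correlated with group.
neighborhood = (rng.random(n) < np.where(group == 1, 0.8, 0.2)).astype(float)

# Historical "recidivism" labels reflect policing intensity, not behavior:
# people in over-policed neighborhoods end up in arrest records more often.
label = (rng.random(n) < np.where(neighborhood == 1, 0.6, 0.3)).astype(int)

# The model sees only the proxy; race is absent from the feature set.
model = LogisticRegression().fit(neighborhood.reshape(-1, 1), label)
scores = model.predict_proba(neighborhood.reshape(-1, 1))[:, 1]

# Yet predicted risk differs sharply by the hidden group.
print(f"mean predicted risk, group 0: {scores[group == 0].mean():.2f}")
print(f"mean predicted risk, group 1: {scores[group == 1].mean():.2f}")
```

The model never receives the group variable, yet its risk scores diverge sharply by group, because the neighborhood proxy and the historically skewed labels carry the correlation for it.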

In 2018, during a self-test, the ACLU found that Amazon's Rekognition software mistakenly matched 28 members of Congress with mugshots of criminals. Those misidentified were mainly people of African-American or Latino origin. This lack of precision for minorities increases the risk that innocent people will be wrongly accused.

Lack of racial diversity in Silicon Valley is churning out tech that is 'insensitive' and 'brutal'

A Google Photos user reported in 2015 that the platform had tagged a photo of a black couple with the caption "gorillas." The company said it would take the necessary measures to prevent such errors from recurring. A similar problem occurred with Flickr, which labeled pictures of black people with the word "monkeys."

In an interview, Clare Garvie, a researcher at Georgetown University and director of a research project on the effectiveness of race-detection software, notes that there is no consensus within the scientific community on the efficacy of these tools.

The study she led notes: "We know very little about these systems. We do not know their effect on privacy and individual freedoms. We do not know how they can ensure the accuracy of their results. And we do not know how these systems, at the local, state, or federal level, affect racial and ethnic minorities."

One of the biggest reasons for these fundamental flaws in the technology is the lack of racial diversity in Silicon Valley, which leads to the formation of "racialized codes."

Time to rethink technology

The artificial intelligence revolution is still in its infancy. Through the 2030s, this technology will continue to develop and take up ever more space in our daily lives.

But, faced with all these ethical unknowns, companies like IBM have completely stopped developing race-detection software, while Amazon and Microsoft have temporarily halted sales of their systems to the police.

After all, it is dangerous to hand the fox the keys to the henhouse.

As Cathy O’Neil, another activist for ethical A.I., puts it: “Companies cannot be trusted to verify their work, especially when the result may conflict with their financial interests.”

If we are not to perpetuate, or even amplify, social patterns of injustice by consciously or unconsciously codifying human prejudices, we have to move from "reactive mode" (patching things up when problems are found) to "proactive mode," incorporating intersectional analysis from the very conception of a project.
