Over the past decade, there has been no shortage of examples of human bias creeping into AI processes.
In 2020, Robert Williams, a Black resident of Farmington Hills, was arrested and jailed after a police facial recognition algorithm misidentified him as a shoplifting suspect in security footage – an example of these systems' well-documented weakness at accurately identifying people with darker skin. In 2019, researchers demonstrated that a software system widely used by hospitals to identify at-risk patients gave white patients preference for many types of care. And a few years earlier, Amazon all but scrapped a system it used to screen job applicants after discovering that it consistently favored men over women.
How human biases are integrated into AI algorithms is a complicated phenomenon.
Bias doesn’t have just one source, but bias problems are often rooted in how AI systems classify and interpret data. The power of most AI systems lies in their ability to recognize patterns and categorize things, and that ability usually begins with a training period in which they learn from us. For example, think of the image recognition algorithm that lets you find all the pictures of cats on your phone. Its intelligence began with a training period during which the algorithm analyzed known photos of cats selected by a human. Once the system had seen enough correct examples, it acquired a new ability: it could generalize the characteristics essential to “cat-ness,” which allowed it to determine whether a photo it had never seen before was a cat photo.
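The training process described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not how a real phone's photo search works: the feature vectors and labels below are made up, and real systems learn features from raw pixels rather than from hand-picked numbers. What it does show is the key point of the paragraph: the model's "knowledge" of cats is entirely derived from examples a human labeled.

```python
# Toy supervised classifier: learn "cat" from human-labeled examples,
# then generalize to a photo the system has never seen.
# Features are hypothetical stand-ins (e.g. ear pointiness, whisker density).

def centroid(vectors):
    """Average of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labeled_photos):
    """labeled_photos: list of (features, label) pairs.
    The labels encode a human's judgment calls about what counts as a cat."""
    cats = [f for f, label in labeled_photos if label == "cat"]
    others = [f for f, label in labeled_photos if label == "not_cat"]
    return {"cat": centroid(cats), "not_cat": centroid(others)}

def classify(model, features):
    """Label an unseen photo by its nearest learned class average."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], features))

# Human-selected training set: [ear_pointiness, whisker_density]
training = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
            ([0.2, 0.1], "not_cat"), ([0.1, 0.2], "not_cat")]
model = train(training)
print(classify(model, [0.85, 0.75]))  # an unseen photo → "cat"
```

Notice that if the human-chosen training photos were skewed – say, only fluffy white cats – the model would inherit that skew. The bias enters through the labels, not through any malice in the math.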
The important thing to note about the example above is that the algorithm's intelligence is fundamentally built on a foundation of human judgment calls. In this case, the key human judgment is the initial selection of photos that a person has determined to be cats, so the machine's intelligence is built on our “bias” for what a cat looks like. Sorting photos of cats is fairly innocuous, and if the algorithm makes a mistake and decides your dog looks more like a cat, that's okay. But when you start asking AI to perform more complex tasks, especially ones entangled with weighty human concepts like race, sex, and gender, mistakes made by algorithms are no longer harmless. If a facial recognition system is unreliable at identifying darker-skinned people because it was trained mostly on white faces, and someone ends up being wrongfully arrested because of it, that is obviously a huge problem. For this reason, figuring out how to limit bias in our artificial intelligence tools, which are now widely used in banking, insurance, healthcare, hiring and law enforcement, is considered one of the most crucial challenges facing AI engineers today.
Desmond Patton, a professor at the University of Pennsylvania and an alumnus of the UM School of Social Work, has developed an interesting approach to combating AI bias. In his recent lecture as part of our Thought Leaders Lecture Series, Patton argued that one of the biggest problems – and one that can largely be solved – is that we haven't had all the relevant voices at the table when these technologies are developed and the key human judgments that shape them are made. Historically, AI systems have been the domain of technology companies, data scientists, and software engineers. And while this community has the technical skills to build AI systems, it typically lacks the sociological expertise that can help protect systems from bias or expose uses that might harm people. Sociologists, social workers, psychologists, health workers — these are people experts. And since the AI bias problem is both technical and human, it makes sense for human experts and technology experts to work together.
Columbia University’s SAFE Lab, which Patton directs, is a fascinating example of what this can look like in practice. The team is trying to create algorithmic systems that can use social media data to identify indicators of psychosocial phenomena like aggression, substance abuse, loss and grief – with the ultimate goal of being able to intervene positively in people's lives. It's a hugely complex AI problem, and they're bringing a diverse team to it: social workers, computer scientists, computer vision experts, engineers, psychiatrists, nurses, youth, and community members. One of the really neat things they do is enlist social workers and local residents to qualitatively annotate social media data, so that the programmers building the algorithms work from appropriate interpretations. For example, Patton says, one day he received a call from one of their programmers concerned that the system was flagging the N-word as an “aggressive” term. That might be an appropriate classification if they were studying white supremacist groups. But since their focus communities are the Black and brown neighborhoods of major cities, the word was used in a different way. Having that kind of contextual awareness gave them a way to tweak the algorithm and improve it.
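The workflow described above – human annotators supplying context-specific interpretations that correct a naive keyword flagger – can be illustrated with a small sketch. To be clear, this is not SAFE Lab's actual code; the lexicon, the example term ("beef," which can mean a conflict or simply food, depending on community), and the override table are all hypothetical stand-ins for the kind of contextual knowledge annotators provide.

```python
# Illustrative sketch of context-aware term flagging (hypothetical data).
# A naive lexicon assigns one meaning per term; human annotations keyed by
# community context override it where the naive reading would be wrong.

NAIVE_LEXICON = {"fight": "aggressive", "beef": "aggressive"}

# Context-specific labels contributed by annotators who know the community:
CONTEXT_OVERRIDES = {
    ("beef", "food_discussion"): "neutral",
}

def flag_term(term, context):
    """Return the label for a term, preferring human contextual annotations."""
    override = CONTEXT_OVERRIDES.get((term, context))
    if override is not None:
        return override
    return NAIVE_LEXICON.get(term, "neutral")

print(flag_term("beef", "food_discussion"))  # neutral: annotators corrected it
print(flag_term("beef", "unknown"))          # aggressive: naive fallback
```

The design point is that the algorithm stays simple; what improves it is routing human expertise into the data it consults, which mirrors how SAFE Lab's annotators changed the system's reading of a term.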
Patton says SAFE Lab’s work also draws on the hyper-local expertise of community members. “The difference in how we approach this work is who we appoint as subject matter experts,” Patton said. “We [hire] young Black and brown people from Chicago and New York as research assistants in the lab, and we pay them like we pay graduate students. They spend time helping us translate and interpret the context. For example, street names and institutions have different meanings depending on the context. You can’t just look at a street on the south side of Chicago and say ‘it’s just a street.’ That street can also be an invisible border between two rival gangs or cliques. We wouldn’t know if we hadn’t talked to people.”
Patton thinks approaches like this could fundamentally transform artificial intelligence for the better. He also sees today as a pivotal moment of opportunity in the history of AI. If the internet as we know it transforms into something akin to the metaverse – an all-encompassing, virtual-reality-based space for work and social life – then we have a chance to learn from the mistakes of the past and create a more useful, fair and joyful environment. But that will mean seeing our technologies not as strictly technical, but as human creations that require input from a broader spectrum of humanity. It means universities training programmers to think like sociologists in addition to being great coders. It means police and social workers finding meaningful ways to work together. And it means creating more opportunities for community members to work alongside academic experts like Patton and his SAFE Lab team. “I think social work gives us a framework for how we can ask questions to initiate processes of building ethical technical systems,” says Patton. “We need hyper-inclusive involvement of all members of the community – disrupting who can be at the table, who is educated and how they are educated – if we are really going to fight bias.”