
Google Cloud's AI image recognition code 'biased' against darker-skinned people


Google Cloud’s computer vision algorithms stand accused of bias: an experiment probing Google’s commercial image recognition models, via its Vision API, suggested their training data may be skewed.

Algorithm Watch fed an image of someone with dark skin holding a temperature gun into the API, and the object was labelled as a “gun.” But when a photo of someone with fair skin holding the same object was fed into the cloud service, the temperature gun was recognized as an “electronic device.”

To verify the difference in labeling was caused by the difference in skin color, the experiment was repeated with the image of the darker-skinned person tinted using a salmon-colored overlay. Google Cloud’s Vision API said the temperature gun in the altered picture was, bizarrely, a “monocular.”
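The kind of probe Algorithm Watch ran can be reproduced with a few lines of Python. Below is a minimal sketch, assuming the official google-cloud-vision client library and placeholder image file names; the actual images and credentials used in the experiment are not public.

```python
# Minimal sketch: ask the Vision API for labels on two photos that
# differ only in the subject's skin tone, then compare the results.
# File names below are placeholders, not Algorithm Watch's images.
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # needs GOOGLE_APPLICATION_CREDENTIALS set

def top_labels(path):
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    # Each label comes with a confidence score between 0 and 1.
    return [(label.description, round(label.score, 2))
            for label in response.label_annotations]

print(top_labels("dark_skin_thermometer.jpg"))   # hypothetical file
print(top_labels("fair_skin_thermometer.jpg"))   # hypothetical file
```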

Tracy Frey, director of product strategy and operations at Google, apologized and called the results “unacceptable,” though she denied the mistake was down to “systemic bias related to skin tone.”

“Our investigation found some objects were mis-labeled as firearms and these results existed across a range of skin tones. We have adjusted the confidence scores to more accurately return labels when a firearm is in a photograph,” she told Algorithm Watch.

Intel and Georgia Tech win four-year DARPA AI contract:

Research has shown adversarial examples fool machine-learning systems into making wrong decisions – such as mistaking toasters for bananas and vice versa – by confusing them with maliciously crafted data. So far, adversarial examples that hoodwink one AI system tend to fail against any other, even a similar one, because the perturbations are so narrowly tailored to their target.
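The classic way to craft such examples is the fast gradient sign method: nudge every pixel a tiny step in the direction that increases the model's loss. The PyTorch sketch below illustrates that general technique only; it is not anything Intel, Georgia Tech, or DARPA are building under GARD.

```python
# Fast gradient sign method (FGSM): a standard way to craft an
# adversarial example against a single differentiable classifier.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, true_label, epsilon=0.03):
    # Return a perturbed copy of `image` that the model is more likely to misclassify.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Move each pixel a small step (epsilon) in the direction that raises the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```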

However, DARPA set up the Guaranteeing AI Robustness against Deception (GARD) program to fund research into defending machine-learning systems against such attacks, including adversarial examples capable of fooling multiple similar models at once. Now, researchers at Intel and the Georgia Institute of Technology are leading that effort.

“Intel is the prime contractor in this four-year, multimillion-dollar joint effort to improve cybersecurity defenses against deception attacks on machine learning models,” Intel announced this week.

The eggheads will look at how to make machine-learning models more robust against adversarial attacks in realistic settings, and how to prepare them for “potential future attacks.”

UCSD to analyze coronavirus lung scans with machine learning:

AI researchers at the University of California, San Diego are studying chest X-rays to look for telltale signs of pneumonia associated with the COVID-19 coronavirus.

It’s unclear whether COVID-19 lung infections look particularly distinctive compared with those caused by other diseases, so it’s difficult to use machine learning as a diagnostic tool.

But researchers at UCSD can help combat the disease by suggesting patients with early signs of pneumonia be tested for COVID-19. “Patients may present with fever, cough, shortness of breath, or loss of smell,” Albert Hsiao, an associate professor of radiology at UCSD’s School of Medicine, told The Register.

“Depending on the criteria, they may or may not be eligible for RT-PCR testing for COVID-19. False negative rate on RT-PCR is estimated around 70 per cent in some studies, so it can be falsely reassuring.”

“However, if we see signs of COVID-19 pneumonia on chest x-ray, which may be picked up by the AI algorithm, we may decide to test patients with RT-PCR who have not yet been tested, or re-test patients who have had a negative RT-PCR test already. Some patients have required 4 or more RT-PCR tests before they ultimately turn positive, even when x-ray or CT already show findings.”
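Hsiao’s team hasn’t detailed its model here, but the broad recipe for this kind of work is well established: fine-tune a pretrained image classifier on labelled chest X-rays. The sketch below is a rough illustration under that assumption (and assumes a recent torchvision), not a description of UCSD’s actual system or data.

```python
# Hedged sketch of a generic chest x-ray classifier via transfer learning.
# Illustrative only; this is not UCSD's model or training pipeline.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = no pneumonia, 1 = pneumonia-like opacities

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: a batch of pre-processed x-rays; labels: ground-truth classes
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```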


by Katyanna Quach, The Register
