Some notes about the article, and about what I see as its politically motivated statements.
"Facial recognition software has improved greatly over the last decade thanks to advances in artificial intelligence. At the same time, the technology — because it is often provided by private companies with little regulation or federal oversight — has been shown to suffer from bias along lines of age, race, and ethnicity, which can make the tools unreliable for law enforcement and security and ripe for potential civil rights abuses."This isn't accurate. The reason that the tools show bias has nothing to do with lack of regulation or federal oversight. It's because some ethnic groups show a lesser degree of facial morphism than others. If you have ever taken part in a study to check a computer's findings against that of a human evaluator -- and I have -- you have seen that for yourself. Human evaluators do not get it right 100% of the time either. And the bias among humans is similar to that of computers, no matter who is doing the evaluating. Some subjects are just harder to tell apart. That's a fact that no amount of oversight, regulation, wishful thinking, or outrage can eliminate.
Facial recognition software doesn't correctly identify members of any group 100% of the time, but that shouldn't be its purpose. Rather, properly employed, it would be used to narrow a search so that human evaluators are not flooded with obvious mismatches. The problem isn't the technology; it's the use to which it's put. By failing to make this distinction clear, the article misinforms.
More properly stated, the tools are unreliable for law enforcement and security because they are being improperly employed. That's where oversight is necessary.
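To make that distinction concrete, here is a minimal Python sketch of the workflow I'm describing, under my own assumptions: the embeddings, threshold, and gallery below are invented for illustration and are not any vendor's API. The point is the shape of the pipeline: the software filters candidates, and a human makes the identification.

    import math

    def cosine_similarity(a, b):
        """Similarity of two face embeddings; 1.0 means same direction."""
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def shortlist(probe, gallery, threshold=0.80, max_results=10):
        """Return at most max_results gallery entries scoring above threshold.

        The result is a lead list for a human examiner, never an
        identification by itself. An empty list is a valid answer.
        """
        scored = [(subject_id, cosine_similarity(probe, embedding))
                  for subject_id, embedding in gallery.items()]
        hits = [(sid, score) for sid, score in scored if score >= threshold]
        hits.sort(key=lambda pair: pair[1], reverse=True)
        return hits[:max_results]

    # Toy usage: three enrolled subjects and one probe image. Real
    # embeddings have hundreds of dimensions; these 3-vectors are made up.
    gallery = {
        "subject-A": [0.9, 0.1, 0.2],
        "subject-B": [0.1, 0.9, 0.3],
        "subject-C": [0.8, 0.2, 0.1],
    }
    probe = [0.85, 0.15, 0.15]
    for subject_id, score in shortlist(probe, gallery):
        print(f"{subject_id}: {score:.3f} -> forward to a human examiner")

Notice that an empty shortlist is a legitimate answer. A deployment that must always return a "match" is exactly the improper employment I'm talking about.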
"In 2018, research by Joy Buolamwini and Timnit Gebru revealed for the first time the extent to which many commercial facial recognition systems (including IBM’s) were biased. This work and the pair’s subsequent studies led to mainstream criticism of these algorithms and ongoing attempts to rectify bias."
They are referring to results posted at gendershades.org, and I invite you to follow the link to see them. All products exhibit similar bias, which is exactly what you would expect to see if what I stated above is accurate. Some groups are simply more difficult to classify by gender than others, even for neural networks. This alone is not an indication of either sexism or racism. Gendershades.org makes this insightful comment: "However, accuracy is not the only issue. Flawless facial analysis technology can be abused in the hands of authoritarian governments, personal adversaries, and predatory companies." Again, it is the use to which the products are put that is the primary concern.
The Verge continues:
"IBM has tried to help with the issue of bias in facial recognition, releasing a public data set in 2018 designed to help reduce bias as part of the training data for a facial recognition model. But IBM was also found to be sharing a separate training data set of nearly one million photos in January 2019 taken from Flickr without the consent of the subjects — though the photos were shared under a Creative Commons license."
In other words, they did have the consent of the subjects. The statement strongly suggests that the author of the Verge article is unclear on what "Creative Commons" means. It is a license to use the material without having to bother the copyright holder for permission. An appropriate Creative Commons license is permission. The article is again inaccurate in phrasing this as if it were a cause for concern.
In IBM's letter to Congress [PDF], Arvind Krishna (CEO of IBM) makes it clear why IBM is withdrawing the products: "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies." It's being misused.
"Bias" in the products is inherent. You can't make it perfect. People can't do it perfectly. So to use it properly, you have to know where the weaknesses are and allow for them. Used properly, it could be a valuable tool. Unfortunately, we have people in positions of authority who don't know how to use their tools properly.
And that, folks, is why you can't have nice things.
--==oOo==--
Aside: The difficulties inherent in using facial recognition software remind me of a business rules engine I designed for a major mortgage insurance company. At the turn of the century, lenders were pressed by the Federal government to increase lending to groups that wouldn't typically be regarded as strong financial risks. This increased the number of questionable applications that the underwriters had to evaluate. To an extent, we had to introduce bias to deal with this, since purely objective financial criteria were themselves the source of the perceived bias Congress was legislating against.

The purpose of the rules engine was not to do the job of underwriters. However, some decisions are slam-dunks... either an easy "yes" or an easy "no". That leaves a grey area of uncertainty where human evaluation is required. So the engine gave trinary results. The easy yes/no decisions were automatically processed, and the difficult decisions were left to humans qualified to deal with uncertain judgment calls. This didn't absolve humans of responsibility for the slam-dunk yes/no decisions; humans still wrote the rules. But once the rules were written and approved, the software allowed thousands more transactions to be processed in a day than would otherwise be possible with human underwriters bogged down mechanically responding to loan applications that didn't require their talents. Software exists to make people more efficient, not to replace them.
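For the curious, here is a minimal Python sketch of that trinary pattern, under my own assumptions: the rule names and thresholds below are invented for illustration and are not the actual rules, which were written and approved by human underwriters.

    from enum import Enum

    class Decision(Enum):
        APPROVE = "approve"  # slam-dunk yes: processed automatically
        DECLINE = "decline"  # slam-dunk no: processed automatically
        REFER = "refer"      # grey area: routed to a human underwriter

    def evaluate(application):
        """Apply human-written rules and return a trinary result.

        The thresholds here are invented for illustration only.
        """
        score = application["credit_score"]
        dti = application["debt_to_income"]  # ratio: 0.35 means 35%

        if score >= 740 and dti <= 0.36:  # easy yes
            return Decision.APPROVE
        if score < 580 or dti > 0.55:     # easy no
            return Decision.DECLINE
        return Decision.REFER             # uncertain: needs human judgment

    applications = [
        {"id": 1, "credit_score": 790, "debt_to_income": 0.28},
        {"id": 2, "credit_score": 540, "debt_to_income": 0.60},
        {"id": 3, "credit_score": 660, "debt_to_income": 0.45},
    ]
    for app in applications:
        print(app["id"], evaluate(app).value)
    # Prints: 1 approve, 2 decline, 3 refer -- only #3 reaches an underwriter.

The design choice worth noticing is the third value. A binary engine would be forced to guess in the grey area; the trinary engine is allowed to say "I don't know" and hand the case to someone qualified to judge it.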