Lights and shadows of facial recognition
These systems still fail far too often with Asian people and with Black men
Looking at your phone to unlock the screen. A photo application that recognizes the people who appear in your pictures. A social network that alerts you when someone has uploaded a photo you appear in.
Video surveillance systems in airports or smart glasses that identify the most wanted criminals. The possibilities of facial recognition are numerous. But there are still issues to be solved, such as data protection, security and fairness in recognizing people of all races.
Facial recognition technologies are being used in many fields, from everyday domestic uses to national security. For example, the Government of India is using it to find missing children. In the United Kingdom, some media use it to detect the presence of celebrities at royal weddings.
Many phones use it to unlock the screen, as an alternative to a fingerprint, a numeric code or an unlock pattern. And, not without controversy, this technology is also being used in a growing number of contexts by law enforcement agencies.
White men
The controversy that often surrounds the use and application of this technology is not new. In fact, some time ago Google Photos and Flickr labeled images of Black people as "gorillas" or "apes", which (understandably) generated a great deal of controversy. Since then, developers of facial recognition systems have worked hard to improve them, but they still have significant failures today.
According to a study on artificial intelligence (AI), algorithms and bias by Joy Buolamwini, a researcher at the MIT Media Lab, facial recognition is more accurate for white men and fails much more often for people with darker skin, especially women: the applications tested misidentified darker-skinned women in up to 35% of cases.
All facial recognition systems work by analyzing new images and comparing them with those already stored in their databases. As the number of images grows, the software improves its ability to find patterns and identify individuals. The problem, as in other areas of artificial intelligence, often lies in the bias of the data they start from.
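As a rough illustration of that matching step, the sketch below compares an embedding of a new face against a small in-memory database and returns the closest stored identity. The embedding values, the identity names and the similarity threshold are placeholders for illustration, not any vendor's actual system.

```python
import numpy as np
from typing import Dict, Optional

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(new_embedding: np.ndarray,
             database: Dict[str, np.ndarray],
             threshold: float = 0.6) -> Optional[str]:
    """Return the closest stored identity, or None if nothing is similar enough."""
    best_name, best_score = None, -1.0
    for name, stored in database.items():
        score = cosine_similarity(new_embedding, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy database: in a real system the embeddings come from a trained face model.
database = {
    "person_a": np.array([0.9, 0.1, 0.3]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}
print(identify(np.array([0.85, 0.15, 0.35]), database))  # matches "person_a"
```

The more representative faces such a database contains, the better the system can find patterns; the quality of the match depends entirely on the data it has seen.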
Thus, these facial recognition applications are 'trained', in most cases, on photographs of white men. Photographs of women are provided in smaller numbers, and white women predominate among them. Precisely for this reason, the error rates in the MIT study are higher for women, because the database of available faces these machines learn from is smaller.
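A quick way to see this kind of skew is simply to count how a training set's images are distributed across demographic groups before training. The group labels and counts below are invented for illustration only.

```python
from collections import Counter

# Hypothetical demographic labels attached to a training set's images.
training_labels = (
    ["lighter-skinned male"] * 700
    + ["lighter-skinned female"] * 180
    + ["darker-skinned male"] * 80
    + ["darker-skinned female"] * 40
)

counts = Counter(training_labels)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group:24s} {n:4d} images ({100 * n / total:.1f}%)")
```

A model trained on a distribution like this simply sees far fewer examples of some groups, so it is not surprising that it performs worse on them.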
In recent months, leading providers have said they have diversified their data sets to include darker and more diverse faces and have made progress in reducing bias. Microsoft claims that the new version of its Face API software tool now has an error rate of only 1.9% for women with darker skin.
A study by MIT claims that applications misidentified 35% of women with dark skin
IBM, for its part, says that Watson Visual Recognition is wrong 3.5% of the time. Both IBM and Microsoft acknowledge that their results have not been independently verified and that error rates in the real world may differ from those measured on their own image collections.
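Figures like the 35%, 1.9% and 3.5% above come from measuring misidentification rates separately for each demographic group rather than over the whole test set. A minimal, hypothetical version of that per-group audit might look like this.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples.
    Returns the fraction of misidentifications per demographic group."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, true in records:
        totals[group] += 1
        if predicted != true:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy test set: a single overall accuracy figure would hide the gap between groups.
records = [
    ("lighter-skinned male", "id_1", "id_1"),
    ("lighter-skinned male", "id_2", "id_2"),
    ("darker-skinned female", "id_3", "id_7"),
    ("darker-skinned female", "id_4", "id_4"),
]
print(error_rates_by_group(records))
```

Breaking results down this way is what reveals the disparities that a single headline accuracy number would conceal.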
This problem also occurs with Asian people. On many occasions these systems do not recognize them well, or claim that they have their eyes closed. In certain uses, this leads the systems to reject the photos as invalid.
Meanwhile, many Asian countries, especially China, are using these facial recognition technologies in airports and in video surveillance systems. In Malaysia, for example, the technology's success rate is said to reach 80%, and it is helping to reduce waiting times in boarding operations (from an average of between eleven and thirteen minutes down to between nine and ten minutes).