Meta released the FACET dataset to examine computer vision models for bias

By: Bohdan Kaminskyi | 01.09.2023, 21:01

Meta has unveiled FACET, a new tool for evaluating the fairness of artificial intelligence models that classify and recognise objects in photos and videos, including people.

Here's What We Know

FACET consists of 32,000 images containing 50,000 people, labelled by human annotators. The annotations cover classes related to occupations and activities, as well as demographic and physical attributes.

Meta applied FACET to its own computer vision model, DINOv2. The evaluation surfaced several biases, including against people with certain gender presentations; it also found that DINOv2 tended to stereotype women as "nurses".
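The article does not describe Meta's evaluation method, but the general idea behind benchmarks like FACET is to compare a model's performance across demographic groups. A minimal sketch of that idea, assuming hypothetical record fields (`label`, `prediction`, `group`) rather than FACET's actual schema:

```python
from collections import defaultdict

def per_group_recall(records, target_class):
    """Recall for `target_class`, broken down by a demographic attribute."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        if r["label"] == target_class:
            totals[r["group"]] += 1
            if r["prediction"] == target_class:
                hits[r["group"]] += 1
    return {g: hits[g] / totals[g] for g in totals}

# Toy records standing in for FACET-style annotations (hypothetical fields).
records = [
    {"label": "doctor", "prediction": "doctor", "group": "A"},
    {"label": "doctor", "prediction": "doctor", "group": "A"},
    {"label": "doctor", "prediction": "nurse",  "group": "B"},
    {"label": "doctor", "prediction": "doctor", "group": "B"},
]

recalls = per_group_recall(records, "doctor")
disparity = max(recalls.values()) - min(recalls.values())
print(recalls, f"disparity={disparity:.2f}")
```

A large gap between groups on the same class is the kind of signal that flags a model as biased; this sketch is illustrative only, not Meta's actual methodology.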

Meta acknowledges that FACET may not adequately capture real-world concepts and demographic groups, and that the way many professions are depicted in the dataset may no longer match how they look today.

For example, most doctors and nurses photographed during the COVID-19 pandemic wear more personal protective equipment than they did before it.

Alongside the dataset itself, Meta also released a tool for exploring the data. To use it, developers must agree to use FACET only to evaluate, test, and validate computer vision models, not to train them.

Source: TechCrunch