Facebook’s content moderation algorithms block many advertisements aimed at disabled people.
What’s new: The social media platform’s automated systems regularly reject ads for clothing designed for people with physical disabilities. The algorithms have misread such ads as pornography or as sales pitches for medical devices, The New York Times reported.
How it works: Automated systems at Facebook and Instagram examine the images and words in ads that users try to place on the sites, turning down any ad they deem to violate the company’s terms of service. The system tells would-be ad buyers when it rejects their ads, but not why, making it difficult for advertisers to bring rejected materials into compliance. Companies can appeal rejections, but appeals often are reviewed by another AI system, creating a frustrating loop (sketched in code after the examples below).
- Facebook disallowed an ad for a sweatshirt from Mighty Well bearing the words “I am immunocompromised — please give me space.” The social network’s algorithm had flagged it as a medical product. Mighty Well successfully appealed the decision.
- Facebook and Instagram rejected ads from Slick Chicks, which makes underwear that clasps on the side as a convenience for wheelchair users, saying the ads contained “adult content.” Slick Chicks’ founder appealed the decision in dozens of emails and launched an online petition before Facebook lifted the ban.
- The social-networking giant routinely rejects ads from Yarrow, which sells pants specially fitted for people in wheelchairs. Facebook doesn’t allow ads for medical equipment, and apparently the algorithm concluded that the ads were for wheelchairs. Yarrow has successfully appealed the rejections, but each appeal takes an average of 10 days.
- Patty + Ricky, a marketplace that sells clothing for people with disabilities, has appealed Facebook’s rejections of ads for 200 adaptive fashion products.
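The dynamic is easy to reproduce in miniature. The sketch below is our illustration, not Facebook’s actual system: a toy keyword scorer stands in for the learned image-and-text classifiers, and the category names, terms, and function names are all hypothetical. It shows how a rejection that names no policy category, combined with an appeal routed back through an automated check, produces the loop advertisers describe.

```python
from dataclasses import dataclass

# Toy stand-in for learned image-and-text classifiers. The categories,
# terms, and matching rule are hypothetical, chosen to mirror the
# failure modes in the cases above.
POLICY_TERMS = {
    "adult_content": {"underwear", "lingerie"},
    "medical_products": {"wheelchair", "immunocompromised"},
}

@dataclass
class Decision:
    approved: bool
    message: str  # what the advertiser sees: a verdict, but no reason

def review_ad(ad_text: str) -> Decision:
    words = set(ad_text.lower().split())
    for category, terms in POLICY_TERMS.items():
        if words & terms:
            # The matched category is known internally but never surfaced,
            # which is what makes compliance guesswork for advertisers.
            return Decision(False, "Your ad doesn't comply with our policies.")
    return Decision(True, "Approved.")

def appeal(ad_text: str) -> Decision:
    # An appeal routed through the same automated check simply
    # reproduces the original verdict -- the loop described above.
    return review_ad(ad_text)

ad = "underwear with side clasps, designed for wheelchair users"
print(review_ad(ad).message)  # rejected, with no policy category named
print(appeal(ad).message)     # the same automated verdict, again
```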
Behind the news: Other social media platforms have been tripped up by well-intentioned efforts to control harmful speech.
- YouTube blocked a popular chess channel for promoting harmful and dangerous content. Apparently, its algorithm objected to words like “black,” “white,” “attack,” and “threat” in descriptions of chess matches (a toy version of this failure appears after this list).
- In 2019, TikTok admitted to suppressing videos made by users who were disabled, queer, or overweight, purportedly in an effort to discourage bullying.
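To see why context-free keyword matching misfires on chess commentary, consider the sketch below. It is our guess at the general style of filter involved, not YouTube’s actual system; the term list and threshold are assumptions.

```python
import re

# Hypothetical blocklist and threshold for a context-free keyword filter.
DANGEROUS_TERMS = {"black", "white", "attack", "threat", "kill"}
FLAG_THRESHOLD = 3  # flag a video when enough terms co-occur

def flags_as_harmful(transcript: str) -> bool:
    words = re.findall(r"[a-z']+", transcript.lower())
    hits = sum(1 for w in words if w in DANGEROUS_TERMS)
    return hits >= FLAG_THRESHOLD

commentary = "White will attack the black king, and the threat of mate is real"
print(flags_as_harmful(commentary))  # True: ordinary chess talk trips the filter
```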
Why it matters: Millions of small businesses, many of which serve niche communities, advertise on Facebook and Instagram. For such companies, being barred from promoting their wares on these platforms is a major blow.
We’re thinking: Moderating content on platforms as big as Facebook would be impossible without AI. But these cases illustrate how far automated systems are from being able to handle the job by themselves. Humans in the loop are still required to mediate between online platforms and their users.