Moderating a Facebook gardening group in western New York is not without challenges. There are complaints about woolly bugs, inclement weather and the novice members who insist on using dish detergent on their plants.
And then there's the word “hoe.”
Facebook's algorithms sometimes flag this particular word as “violating community standards,” apparently referring to a different word, one without an “e” at the end that is nonetheless often misspelled as the garden tool.
Normally, Facebook's automated systems will flag posts with offending material and delete them. But if a group's members — or worse, administrators — violate the rules too many times, the entire group can get shut down.
Elizabeth Licata, one of the group’s moderators, was worried about this. After all, the group, WNY Gardeners, has more than 7,500 members who use it to get gardening tips and advice. It's been especially popular during the pandemic when many homebound people took up gardening for the first time.
A hoe by any other name could be a rake, a harrow or a rototiller. But Licata was not about to ban the word from the group, or try to delete each instance. When a group member commented “Push pull hoe!” on a post asking for “your most loved & indispensable weeding tool,” Facebook sent a notification that said “We reviewed this comment and found it goes against our standards for harassment and bullying.”
Facebook uses both human moderators and artificial intelligence to root out material that goes against its rules. In this case, a human likely would have recognized that a hoe mentioned in a gardening group is not an instance of harassment or bullying. But AI is not always good at context and the nuances of language.
It also misses a lot: users often complain that they report violent or abusive language, only for Facebook to rule that it doesn't violate its community standards. Misinformation about vaccines and elections has been a long-running and well-documented problem for the social media company. On the flip side are groups like Licata's that get tripped up by overly zealous algorithms.
“And so I contacted Facebook, which was useless. How do you do that?” she said. “You know, I said this is a gardening group, a hoe is gardening tool.”
Licata said she never heard from a person at Facebook, and found that navigating the social network's system of surveys and other ways to try to set the record straight was futile.
Contacted by The Associated Press, a Facebook representative said in an email this week that the company found the group and corrected the mistaken enforcements. It also put an extra check in place, meaning that someone — an actual person — will check offending posts before the group is considered for deletion. The company would not say if other gardening groups had similar problems. (In January, Facebook mistakenly flagged the U.K. landmark of Plymouth Hoe as offensive, then apologized, according to The Guardian.)
“We have plans to build out better customer support for our products and to provide the public with even more information about our policies and how we enforce them,” Facebook said in a statement in response to Licata's complaints.
Then, something else came up. Licata received a notification that Facebook automatically disabled commenting on a post because of “possible violence, incitement, or hate in multiple comments.”
The offending comments included “Kill them all. Drown them in soapy water,” and “Japanese beetles are jerks.”
Copyright 2021 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.