Katy Steinmetz reports for Time magazine on how Instagram is trying to use AI to curb cyberbullying on the platform, but as she notes, “it’s much easier to recognize when someone in a photo is not wearing pants than it is to recognize the broad array of behavior that might be considered bullying.” Oh, and the person in charge of this whole effort, Adam Mosseri, previously oversaw the development of Facebook’s News Feed, so this should inspire confidence. (How does your AI read sarcasm, he asked.)
One problem with Steinmetz’s article is that she accepts the framing shared by all the blitzscaled platforms: that connecting the entire world online requires massively open platforms, which unfortunately create massive toxic effects. But cyberbullying isn’t, as Steinmetz writes, “a problem that crops up anywhere the people congregate online.” It’s a problem that crops up wherever a platform has been optimized for engagement over every other value, and where there is little to no human moderation. For example, as a recent study found, users of Front Porch Forum in Vermont, where each instance is centered on a neighborhood of roughly 1,000 households and a paid part-time moderator helps keep the conversation civil, do not experience cyberbullying.