The World Economic Forum shared a report last week outlining a plan to mitigate “the dark world of online harm” by using human and artificial intelligence to censor bad actors who produce and promote child abuse, disinformation, and hate speech.
Inbal Goldberger, vice president of Trust and Safety at ActiveFence, a company that detects malicious online content, published an op-ed on the global organization’s website proposing a solution to online abuse: a blend of AI and so-called “subject matter experts” that could “detect nuanced, novel online abuses at scale, before they reach mainstream platforms.”
Goldberger said an intelligence-fueled approach to content moderation would work by allowing human and AI teams to flag or remove high-risk items after feeding millions of sources into training sets.
“Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives and then feeding those findings back into training sets will allow us to create AI with human intelligence baked in,” she wrote. “This more intelligent AI gets more sophisticated with each moderation decision, eventually allowing near-perfect detection, at scale.”
Goldberger argues that online access has played a vital role in shaping public perception of events like recessions, viruses, and wars, while also enabling extreme opinions, the spread of misinformation, and the wide reach of child sexual abuse material since the birth of the Internet.
“Before reaching mainstream platforms, threat actors congregate in the darkest corners of the web to define new keywords, share URLs to resources and discuss new dissemination tactics at length,” Goldberger said. “These secret places where terrorists, hate groups, child predators and disinformation agents freely communicate can provide a trove of information for teams seeking to keep their users safe.”
The National Center for Missing and Exploited Children revealed that over 29.3 million child sexual abuse material reports were made to the CyberTipline in 2021 — a 35% increase from 2020.
Given both the scale of online child sexual abuse material and the push to silence disinformation and hate speech, many have argued that the automated censorship idea shared by the Davos-based elite group could produce a slippery slope toward greater authoritarianism.
“He who controls the information controls the world,” Young Americans for Liberty said in a tweet referencing the plan.
Dave Reaboi, a national security and political warfare consultant and senior fellow at the Claremont Institute, said in a tweet that the content moderation approach would be “the most monstrous tyranny history has ever seen.”