Just last week, a report from the World Economic Forum detailed a new plan to mitigate “the dark world of online harm” by using both artificial and human intelligence to apply censorship to bad actors who produce and promote child abuse, hate speech, and disinformation.
Inbal Goldberger, vice president of Trust and Safety at ActiveFence, a company that works to detect malicious online content, published an op-ed on the global organization’s website proposing a solution to online abuse. That solution would blend AI with so-called “subject matter experts” to “detect nuanced, novel online abuses at scale, before they reach mainstream platforms.”
Goldberger stated that an intelligence-fueled approach to content moderation would allow combined human and AI teams to flag or remove high-risk posts after millions of sources have been fed through training sets.
“Supplementing this smarter automated detection with human expertise to review edge cases and identify false positives and negatives and then feeding those findings back into training sets will allow us to create AI with human intelligence baked in,” she explained. “This more intelligent AI gets more sophisticated with each moderation decision, eventually allowing near-perfect detection, at scale.”
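The cycle Goldberger describes (automated scoring, human review of edge cases, and those verdicts fed back into the training data) could be sketched roughly as follows. Everything here is illustrative: the function names, thresholds, and toy keyword scorer are assumptions for the sketch, not ActiveFence’s actual system, which would use a trained model rather than a keyword lookup.

```python
# Toy sketch of a human-in-the-loop moderation feedback cycle.
# All names and thresholds are hypothetical.

AUTO_REMOVE = 0.9   # score above which content is removed automatically
AUTO_ALLOW = 0.1    # score below which content is allowed automatically

training_set = []   # (text, label) pairs accumulated for the next retraining run

def score(text):
    """Toy risk scorer standing in for an ML classifier."""
    risky = {"badword": 0.95, "slur": 0.95, "spamlink": 0.5}
    return max((v for k, v in risky.items() if k in text.lower()), default=0.0)

def human_review(text):
    """Stand-in for a subject-matter expert's decision on an edge case."""
    return "remove" if "slur" in text.lower() else "allow"

def moderate(text):
    s = score(text)
    if s >= AUTO_REMOVE:
        decision = "remove"          # high-confidence: act automatically
    elif s <= AUTO_ALLOW:
        decision = "allow"           # high-confidence: act automatically
    else:
        # Edge case: route to a human reviewer, then feed the verdict
        # back into the training set so the next model iteration improves.
        decision = human_review(text)
        training_set.append((text, decision))
    return decision
```

In this sketch, only the ambiguous middle band of scores consumes human attention, which is the scale argument Goldberger makes: each reviewed edge case becomes labeled training data, so the automated layer handles a growing share of decisions over time.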
Goldberger argued that online access has played a pivotal role in the public perception of events such as viruses, wars, and recessions, while extreme opinions, the spread of misinformation, and the wide reach of child sexual abuse material have persisted since the birth of the Internet.
“Before reaching mainstream platforms, threat actors congregate in the darkest corners of the web to define new keywords, share URLs to resources and discuss new dissemination tactics at length,” stated Goldberger. “These secret places where terrorists, hate groups, child predators and disinformation agents freely communicate can provide a trove of information for teams seeking to keep their users safe.”
The National Center for Missing and Exploited Children stated that more than 29.3 million reports of child sexual abuse material were submitted to its CyberTipline in 2021, a jump of over 35% from 2020 levels.
Between the problem of online child sexual abuse imagery and the push to silence disinformation and hate speech outright, quite a few have argued that the automated censorship plan shared by the Davos-based elite group could produce a slippery slope toward greater authoritarianism.
“He who controls the information controls the world,” wrote Young Americans for Liberty in a social media post about the plan.
Dave Reaboi, a national security and political warfare consultant and senior fellow at the Claremont Institute, stated via social media that the content moderation approach would be “the most monstrous tyranny history has ever seen.”