To combat the spread of terror-related content online, Facebook announced Thursday that it will bolster its automated and human-powered efforts to flag and take down extremist posts, and will develop data-sharing systems across its family of social media and messaging apps.
“Making Facebook a hostile place for terrorists is really important to us,” said Monika Bickert, Facebook's head of global policy management.
Facebook announced earlier this year that it would add 3,000 people to its community operations team, expanding the number of reviewers who handle flagged posts on the social network, including instances of bullying, hate speech and terrorism. One hundred and fifty employees at Facebook count counterterrorism as their primary responsibility, the company said.
Facebook also will deploy artificial intelligence to weed out extremist content. Through image matching, AI can keep images and videos that have already been flagged from being uploaded again. The company is also building an algorithm that aims to analyze written text to keep terrorism-related language off the platform. But Facebook acknowledged that human expertise is key to its new measures. “AI allows us to remove the black-and-white cases very, very quickly,” said Brian Fishman, the lead policy manager for counterterrorism at Facebook. But he added that human experts are better at analyzing the context of a post, and at grappling with the evolving methods used to bypass Facebook's counterterrorism measures.
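For readers curious what "image matching" means in practice, the core idea is a fingerprint lookup: each flagged file is reduced to a compact digest, and every new upload is checked against a blocklist of known digests. The sketch below is illustrative only, not Facebook's actual system; it uses an exact SHA-256 match for simplicity, whereas production systems rely on perceptual hashes that tolerate re-encoding and cropping. All names and the sample digest here are hypothetical.

```python
import hashlib

# Hypothetical blocklist of digests for previously flagged images.
# A real deployment would use perceptual hashes, which survive
# re-encoding; an exact cryptographic hash is used here for clarity.
FLAGGED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder
}

def fingerprint(image_bytes: bytes) -> str:
    """Return a hex digest identifying this exact file."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block_upload(image_bytes: bytes) -> bool:
    """Reject an upload whose fingerprint matches a known flagged image."""
    return fingerprint(image_bytes) in FLAGGED_HASHES
```

In a real system, near-matches from perceptual hashing would likely be routed to the human reviewers Fishman describes, since similarity scores, unlike exact hits, require contextual judgment.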
Facebook is also developing systems to block terrorists' accounts across its flagship social network and its sister apps, Instagram and WhatsApp. The company declined to say what types of customer data will be shared between its apps, but said the cross-platform systems being developed for counterterrorism purposes are separate from its commercial data sharing.
In recent years, Facebook has been criticized for not doing enough to combat propaganda and extremist content online. After the terrorist attack in London this month, British Prime Minister Theresa May attacked Web companies for providing a “safe space” for people with violent ideologies. Under pressure from governments around the world, the tech industry has responded to this type of criticism before. Facebook, Twitter, Google and Microsoft said they would begin sharing unique digital fingerprints of flagged images and video, to keep them from resurfacing on different online platforms.
In a separate post Thursday morning, Facebook said it will be seeking public feedback and sharing its own thinking on thorny issues, including the definition of fake news, the removal of controversial content, and what to do with a person's online identity when they die.
Corporate-sponsored censorship at the behest of a government is nothing more than tyranny outsourced to the private sector. Facebook's actions to censor accounts during the elections in France, the UK, and Germany, along with its efforts to work with the Chinese government in the same vein, are a testament to its disregard for the rights of its users. Many will say that if users don't like it they can leave the platform, since it's privately owned rather than public, but I disagree. When a platform has become this massive, with billions of users, and is known to shape public opinion by altering its algorithm, it should be called out and held accountable to its users and the public at large. It's up to individuals to decide which views they disagree with and condemn them, not a private company or a government.
absolutely