When AI labels the comment section, it is quietly taking away our equal right to speak

in #science • 7 years ago

As the old Chinese saying goes, it doesn't hurt your back to talk while standing — criticism costs the critic nothing. In a social atmosphere that encourages questioning, picking fights can pass for insight. In the era of decentralization, everyone has a microphone, and the cost of speaking online has never been lower.

A recent example is typical. Luo Yonghao's remarks about "jingri" (精日, a derogatory label for Chinese people said to worship Japan) were criticized in succession by Beijing Daily and Guangzhou Daily. The brawl between "Hammer fans" and "Hammer haters" in the Weibo comment sections was more spectacular than Luo Yonghao's own remarks, but that spectacle was fueled entirely by verbal abuse hurled back and forth. Under Guangzhou Daily's official Weibo post, a comment about "jingri" sat at the top of the thread, and insults like "running dog" and "SB" were everywhere. The flame war raged so fiercely that even neutral passers-by were not spared, to say nothing of any rational analysis of what "jingri" actually means and whether Luo Yonghao's remarks fit the label. Weibo is still a long way from being the "public sphere" of China's speech environment.

Content moderation: the winding road begins in the comment section

A comment section that outshines the content itself is no longer just a netizens' joke. Major social and Q&A platforms have begun to pay attention to how their comment sections are run. To improve moderation efficiency and manage comments more effectively, Weibo earlier opened a comment-review feature to head users and paying members: once it is enabled, a user can review comments on their own Weibo posts before those comments are published.

If the article itself is the "first content area", then the comment section has become a flourishing "second content area", and managing it is increasingly important. At the GMIC conference in April, Li Dahai, a partner and senior vice president of Zhihu, disclosed that AI in the Zhihu community can already perform semantic recognition and contextual judgment — because relying on human moderators alone is simply not an efficient choice.

Zhihu now runs a two-year-old moderation bot called "Wali". The bot can quickly act online on "off-topic" and "unfriendly" content, reducing the interference that discrimination, malicious labeling, insults, and other low-quality content cause users. However, limited both by the current state of NLP (natural language processing) and by how the platform operates, the results are still far from ideal.

If teaching AI algorithms to recognize human emotion from the acoustic features of speech — pitch, volume, changes of tone — already counts as a great technical feat, then identifying emotion directly from text with NLP is harder still. An abusive sentence may carry not a single profanity or sensitive word, which makes it very hard for AI to generalize what such sentences have in common. For now, then, AI can only pick off the most clear-cut negative remarks, serving as the "scavenger" of the comment section.
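The difficulty described above can be made concrete. Below is a minimal sketch (the comments and the blocklist are invented for illustration): a naive blocklist filter catches comments containing listed sensitive words but misses abuse that uses none of them — which is exactly why platforms reach for statistical classifiers instead.

```python
# Naive blocklist moderation: flags a comment only if it contains a
# word from a fixed sensitive-word list. Abusive comments that avoid
# those words slip straight through.

BLOCKLIST = {"sb", "idiot", "scum"}  # invented example list

def blocklist_flag(comment: str) -> bool:
    """Return True if any blocklisted word appears in the comment."""
    words = comment.lower().split()
    return any(w.strip(".,!?") in BLOCKLIST for w in words)

caught = blocklist_flag("You are an idiot.")
missed = blocklist_flag("People like you should not be allowed online.")

print(caught)   # True  — contains a blocklisted word
print(missed)   # False — abusive in tone, but no sensitive word to match
```

The second comment is plainly hostile, yet no string match can see that; recognizing it requires a model of how hostile sentences tend to look, not a word list.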

In addition, the current machine-learning approach is to mobilize users to manually label a Chinese corpus and use it to train the AI — an indispensable step in supervised learning. But newer techniques are emerging, such as reinforcement learning and in-stream supervision, in which data gets tagged in the course of natural use.
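That supervised step can be illustrated with a toy example (the tiny hand-labeled corpus below is invented; real platforms need many thousands of labeled comments). A minimal Naive Bayes classifier learns word statistics from the labeled examples, which is why the size and freshness of the labeled corpus limits what the model can catch:

```python
import math
from collections import Counter, defaultdict

# A tiny hand-labeled corpus (invented for illustration).
LABELED = [
    ("great point thanks for sharing", "ok"),
    ("totally agree well said", "ok"),
    ("you are trash get lost", "abusive"),
    ("shut up nobody asked you", "abusive"),
]

def train(corpus):
    """Multinomial Naive Bayes training: count words per label."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()
    for text, label in corpus:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, label_counts, vocab

def classify(text, model):
    """Pick the label with the highest log-probability (add-one smoothing)."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total_docs)
        n = sum(word_counts[label].values())
        for w in text.split():
            if w in vocab:  # ignore words never seen in training
                lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train(LABELED)
print(classify("shut up you are trash", model))   # "abusive"
print(classify("thanks well said", model))        # "ok"
```

The model only knows the words its labelers gave it — a new slur or a new euphemism that never entered the corpus is invisible, which is the corpus-freshness problem the next paragraph turns to.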

Alternatively, platforms could follow the example of the voice cloud platform built by iFlytek and set up a shared corpus platform, so that new corpora can be discovered faster and through broader channels. If a platform waits for new expressions to enter the standard Chinese corpus before the machine can learn them, those expressions will already be widespread on the platform — and if they are malicious, the comment-section ecology will already have been polluted.

The relationship between AI and people in the comment section deserves study

Look at the situation abroad. Top foreign media realized the importance of the comment section earlier and began using technology to manage it. The New York Times collaborated with Jigsaw (a technology incubator under Google's parent company Alphabet), and together with The Washington Post and the Mozilla Foundation founded The Coral Project to study how to improve online commentary, incubating a series of open-source tools that provide technical support to major newsroom editors. In 2015, The New York Times adopted a review system that relies on algorithms to prioritize the comments of different users and, over time, to decide which users' comments can be published without manual review.
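The prioritization idea described above can be sketched roughly as follows (the thresholds, histories, and decision rule here are invented for illustration, not the newsroom's actual algorithm): a commenter whose past comments were almost always approved earns automatic publication, while everyone else stays in the human moderation queue.

```python
# Sketch of reputation-based comment triage. Users with a long record
# of approved comments get published without manual review; thresholds
# below are invented example values.

MIN_HISTORY = 20        # need enough past decisions to trust the rate
AUTO_APPROVE_RATE = 0.95

def triage(approved: int, rejected: int) -> str:
    """Decide whether a new comment auto-publishes or awaits a moderator."""
    total = approved + rejected
    if total >= MIN_HISTORY and approved / total >= AUTO_APPROVE_RATE:
        return "auto-publish"
    return "manual-review"

print(triage(approved=48, rejected=1))   # long, clean record -> auto-publish
print(triage(approved=3, rejected=0))    # too little history  -> manual-review
print(triage(approved=30, rejected=10))  # low approval rate   -> manual-review
```

Note the trade-off such a rule encodes: it rewards established commenters with faster publication, which is precisely the concentration of speaking power the second half of this article worries about.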

However, whether artificial intelligence here is a blue ocean or a deep pit still needs the verification of time. Yang Suying, an analyst at Smart Relativity (WeChat ID: aixdlun), would like to raise two points for discussion about using AI to manage the comment-section content ecology:

  1. Behind AI there are still people. If the people don't want to solve a problem, AI is just a gimmick and a decoration. Recently I read a piece on a small independent WeChat public account discussing paid content on the Ximalaya (Himalaya FM) app. The account's owner had uploaded audio of his own reading and interpretation of the book "The Umbrella" to Ximalaya, only to have it abruptly taken down over a copyright complaint.

Ximalaya's customer service explained: "Because a paid audio interpretation of that book already exists on the platform, uploading your own reading of it counts as infringement." Leaving aside whether a single piece of paid knowledge content should be allowed to monopolize the market, the comments under the platform's one approved paid audio suggest that listeners are far from satisfied with it.

Yet the platform neither cared about listeners' feedback in the comment section nor showed any intention of promoting better-quality content. Many platforms now try to use AI to manage their comment sections, but AI only ever carries out human intentions. Even if AI surfaces a problem through all its recognition capabilities, what good is building the AI if the people behind it have no wish to solve the problem?

  2. When AI labels each user in the comment section by analyzing what they say, it is hard to argue that this is not ranking everyone's right to speak from high to low. Many platforms that use AI to manage comments boast that they can now label users, which sounds a bit like the much-discussed social credit system. But the question lurking behind this seemingly reasonable credit system is: after hundreds of years of fighting for equality and the right to speak, are we letting AI quietly take them away again?

From a sociological perspective, we often discuss how labels repress human nature and the pain they cause, and we think about how to "de-label" the social environment. What we have failed to notice is that AI is using the data we leave behind on the network to label us. If someone once posted some negative remarks, then after AI recognition, his later comments may forever sink to the bottom of the comment section. Isn't that, in effect, an indirect weakening of his right to speak? You can speak, but nobody will see it. It also means the right to speak in online communities will be further centralized. AI systems should therefore take the labeling problem seriously and build a mechanism for removing labels. Everyone deserves the chance to be forgiven — but will the data ever forget?
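The "de-labeling mechanism" argued for above can be made concrete with one simple design (the scoring scheme and numbers are invented): weight each past violation by how long ago it happened, so a user's penalty decays over time instead of following them forever.

```python
# Invented scheme: each past violation contributes a penalty that
# halves every 90 days, so old offenses fade rather than sinking a
# user's comments permanently.

HALF_LIFE_DAYS = 90.0

def user_penalty(violation_ages_days):
    """Sum of exponentially decayed penalties, one per past violation."""
    return sum(0.5 ** (age / HALF_LIFE_DAYS) for age in violation_ages_days)

recent_offender = user_penalty([5, 12])      # two fresh violations
reformed_user   = user_penalty([400, 500])   # same count, long ago

print(round(recent_offender, 2))   # ~1.87 — heavily penalized
print(round(reformed_user, 2))     # ~0.07 — label has almost faded
```

With a fixed lifetime label, both users would rank identically forever; with decay, the reformed user's comments can surface again — a mechanical form of forgiveness.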

Instead of letting information flow freely, technology has built ever larger information islands. On WeChat, only mutual friends can see each other's comments under the same Moments post. This mechanism carves WeChat into countless small circles, each walled off from the others. WeChat helped us connect with more people, yet it isolated our parents. All a parent can see of your life is your Moments feed; they want to know your friends and your world through it, but the technology blocks even that simple wish. They don't know how you and your friends usually joke around, can't follow what you're talking about, find no explanation in the comment section, and may even jump to wild conclusions — thinking you're about to become a monk — and hurriedly call to ask what's going on.

And to spare you the trouble of explaining, the technology offers still more "trouble-saving" mechanisms, such as making posts visible only to selected groups. For us, this improves efficiency and helps us manage our own lives. For our parents, it simply cuts off their right to know — which is, in effect, one step short of depriving them of the right to speak.
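The "visible to selected groups" mechanism works roughly like this (a simplified sketch; all names and groups are invented): each post carries a set of allowed groups, and a viewer sees the post only if they belong to one of them — anyone placed outside those groups, such as a parent in the "family" group, simply never sees it.

```python
# Simplified sketch of Moments-style group visibility. All users and
# groups here are invented for illustration.

GROUPS = {
    "close_friends": {"amy", "ben"},
    "family": {"mom", "dad"},
}

def can_see(post_allowed_groups, viewer):
    """True if the viewer belongs to any group the post is shared with."""
    return any(viewer in GROUPS[g] for g in post_allowed_groups)

post = {"text": "weekend photos", "allowed": ["close_friends"]}

print(can_see(post["allowed"], "amy"))  # True  — in close_friends
print(can_see(post["allowed"], "mom"))  # False — family group excluded
```

The filtering is invisible from the outside: "mom" gets no notification that a post exists, which is exactly why the excluded party's right to know disappears silently.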

In everyday information consumption, the deprivation of the right to speak is hidden: it usually starts by depriving you of the right to know. Toutiao's AI-driven recommendation algorithm, with its blind faith in its own accuracy, in fact traps users on the information island it creates. The information barriers the Internet once broke down have been rebuilt inside Toutiao. Deeper still, the bombardment of homogeneous information blocks the upward path of those who lack the ability to seek out information actively.

In such a large information community, a platform needs to establish its own humanistic values and social responsibility. Once values blur, money becomes the only yardstick. Artificial intelligence is another upgrade of technology, but it cannot become a refuge for every corrupt old idea; it cannot paper over a confusion of values, and it cannot excuse a platform's deprivation of users' right to speak. How to strike a balance between protecting the right to speak and protecting the platform community's environment is the first question to consider when using AI to manage comment sections.

The "house of gold" and "beauty like jade" hidden in the comment section — its real value — are plain to see today, but how platform content operators can harness that value still needs exploring. AI helping to manage comment sections intelligently is a good thing, yet too many unsolved problems still hide behind the AI. Managers must keep their question marks handy if they are to use AI well and make it a genuine helper for content management.


wow, interesting and very heady write-up (-: peace