YouTube is the largest video platform in the world. The Google subsidiary has more than 1.9 billion users per month, around 47 million of whom come from Germany. But which videos individual users get to see differs completely from person to person. The YouTube algorithm builds a profile from each user's search and viewing history so that it can suggest content it predicts will be interesting. A Mozilla study has now shown that this very algorithm violates YouTube's own guidelines. What has long been suspected is now backed up by new findings.
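To make the mechanism concrete, here is a minimal sketch of a profile-based recommender of the kind described above. It is an illustration only: the function names, the keyword-overlap scoring, and the catalog data are assumptions for the example, not YouTube's actual system.

```python
from collections import Counter

# Hypothetical catalog: video id -> descriptive tags (illustrative data only).
CATALOG = {
    "v1": {"python", "tutorial", "programming"},
    "v2": {"news", "politics"},
    "v3": {"python", "data", "science"},
}

def build_profile(watch_history):
    """Aggregate tag frequencies from the videos a user has watched."""
    profile = Counter()
    for video_id in watch_history:
        profile.update(CATALOG.get(video_id, set()))
    return profile

def recommend(watch_history, top_n=2):
    """Score unwatched videos by their overlap with the user's tag profile."""
    profile = build_profile(watch_history)
    scores = {
        vid: sum(profile[tag] for tag in tags)
        for vid, tags in CATALOG.items()
        if vid not in watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(["v1"]))  # ['v3', 'v2']: tag overlap ranks v3 first
```

Real systems use learned embeddings and engagement signals rather than simple tag counts, which is part of why their behavior is so hard to audit from the outside.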
A Ten-Month Crowdsourcing Study
The Mozilla Foundation is a non-profit organization that has been active worldwide since 1998. Its goal is to examine the online world in order to help shape the future of the Internet for everyone. A ten-month study has now looked at the YouTube recommendation algorithm and brought a big problem to light. For the study, thousands of reports from users, submitted via Mozilla's RegretsReporter browser extension, were gathered and evaluated. The result: the recommendation algorithm of the world's largest video platform recommends fraudulent, disturbing, and violent content on its own platform, and in doing so violates the platform's own guidelines. The research also revealed that users from non-English-speaking regions of the world are affected even more strongly by this problem.
Covid-19 – The Trigger For The Investigation
During the coronavirus pandemic, people around the world had to spend a lot of time at home. Because of the restrictions intended to prevent the further spread of the respiratory disease, digital media were used significantly more. During this time, some people encountered unpleasant video content that the recommendation algorithm had served them. This included videos spreading fear and misinformation about Covid-19, as well as inappropriate cartoons aimed at children. Mozilla collected these reports and evaluated them together with the associated data.
First Actions By YouTube
The video platform YouTube, which belongs to the Alphabet Group, reacted quickly after the study was published and removed almost 200 of the videos the algorithm had suggested. According to Mozilla, that is around 9% of the problematic videos that were discovered. However, this reaction was long overdue and, in hindsight, is more of a cosmetic fix than a long-term solution. After all, the deleted videos had already been viewed more than 160 million times. This fact alone shows how serious the problems with the recommendation algorithm are.
Non-English-Speaking Countries Are Much More Affected
The research found that users were 60% more likely to be recommended far-right, misogynistic, or otherwise disturbing videos when they lived in a non-English-speaking country. Brandi Geurkink, Mozilla's senior advocacy manager, hypothesized that "the algorithm must be much better trained on the English language than on others." YouTube has also confirmed that developments and changes are rolled out first in the US and reach the rest of the world only later. Currently, videos with misinformation about Covid-19 are the biggest problem: around 14% of the reports from English-speaking regions and 36% from non-English-speaking regions concern this issue.
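To put those percentages side by side, here is a small sketch of the arithmetic. The figures come from the study as reported above; the comparison function itself is just illustrative.

```python
def relative_increase(base_rate, other_rate):
    """How much higher other_rate is than base_rate, as a percentage."""
    return (other_rate - base_rate) / base_rate * 100

# Share of reports concerning Covid-19 misinformation (figures from the study).
english, non_english = 0.14, 0.36

print(f"{relative_increase(english, non_english):.0f}% higher")  # 157% higher
```

Note that this roughly 157% gap in the misinformation share is a separate figure from the overall 60% higher rate of disturbing recommendations reported for non-English-speaking countries.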
Mozilla Demands More Transparency From YouTube
With the publication of its results, the Mozilla Foundation called on YouTube to make the recommendation algorithm much more transparent. Specifically, Mozilla demands regular transparency reports that publish information about how the system works. In addition, an option should be introduced that lets users deactivate personalized recommendations. Furthermore, the recommendation algorithm should be equipped with risk-management systems so that such recommendations no longer occur. The authors of the study also call on political decision-makers to recognize the problem and introduce appropriate legislation: the transparency of AI systems should be anchored in law, and investigative researchers should be protected at the same time.
YouTube's Reaction
A YouTube spokesperson said: "As more than 200 million videos are uploaded to the platform every day, YouTube's recommendation algorithm is designed to connect users with interesting content. It is one of the cornerstones of the service." In addition, the company says it has worked intensively on the algorithm for several years; it is now said to draw on 80 billion pieces of information for its evaluation.
In the last year alone there have been 30 changes intended to reduce recommendations of harmful content. With its "Violative View Rate" (VVR), YouTube itself states that only 0.16 to 0.18% of all views fall on content that violates its guidelines. In other words, out of 10,000 views, 16 to 18 go to infringing content on the platform. According to YouTube, this value has been reduced by 70% since 2017.
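As a quick sanity check on those figures, the sketch below converts the stated VVR into views per 10,000 and back-calculates the 2017 rate implied by the claimed 70% reduction. The numbers are from the article; the helper functions are illustrative.

```python
def per_ten_thousand(rate):
    """Convert a violative view rate (given as a fraction) to views per 10,000."""
    return rate * 10_000

def rate_before_reduction(current_rate, reduction):
    """Back-calculate the earlier rate implied by a relative reduction."""
    return current_rate / (1 - reduction)

for vvr in (0.0016, 0.0018):  # YouTube's stated range: 0.16% to 0.18%
    print(f"VVR {vvr:.2%}: {per_ten_thousand(vvr):.0f} per 10,000 views, "
          f"implied 2017 rate {rate_before_reduction(vvr, 0.70):.2%}")
```

So even by YouTube's own metric, the claimed 70% reduction implies a rate of roughly 0.53 to 0.60% in 2017.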
How Can Users Protect Themselves?
There is currently no real protection. The recommendation algorithm works autonomously, and how it functions is opaque. Users can only report unwanted content. Beyond that, the risk can be reduced somewhat by using YouTube via a VPN, which limits how much personal data the platform can tie to a profile; reputable VPN providers are available in Austria, Germany, and Switzerland. Otherwise, it is to be hoped that the study will prompt changes from both YouTube and politicians.