A Black Hole Called Telegram
Is regulation of mainstream social networks the way to go, or should there be free rein? Or would it be best to have both moderated and un-moderated ecosystems available and leave the choice in the hands of the user? The example of Telegram presents some unexpected answers to these questions.
By: Miro Yanev
The world of social media has changed radically in the last few years. A split has emerged between platforms introducing regulation to protect users from misinformation and almost anarchistic platforms which view regulation as censorship.
The process towards heavier regulation of traditional digital platforms began with policies to tackle misinformation related to Covid-19. It accelerated further after Russia’s invasion of Ukraine and the explosion of war propaganda on both sides of the conflict. In this context, Telegram has emerged as the most popular and important alternative social media platform.
Telegram was created by the Russian entrepreneur Pavel Durov in 2013 and began life as a messaging platform comparable to WhatsApp, enabling individual and group communication. As with WhatsApp, Telegram does not charge its users or display ads. But there are substantial differences between the two. Telegram presents itself as a non-mainstream platform that aims to protect the user’s privacy. Telegram groups can have up to 200,000 members, while WhatsApp groups are capped at 256. Telegram also allows the creation of channels that broadcast messages from a single source to multiple users, and it does not interfere with the content posted on them.
As time has progressed, however, the lack of moderation combined with robust security has encouraged thriving sub-ecosystems with a pronounced dark side. In Indonesia, terrorists have used Telegram to promote radicalism and give instructions for carrying out attacks. Neo-Nazi groups have leveraged the platform to share their ideologies. Crypto investors coordinate large groups through Telegram channels to arrange market manipulations like pump-and-dump frauds. Revenge porn and channels with child pornography content are also not uncommon.
These developments raise the question of where to draw the line. Is regulation of mainstream social networks necessary to protect users, or does it encourage censorship? Is Telegram a black hole encouraging malign and illegal activity, or is it a secure platform that promotes free speech? Do we need to find a balance between these poles, or should both moderated and un-moderated ecosystems be available to users?
The Experiment
In search of answers to these questions, a digital research team and I performed an empirical test on an open-access dataset that includes 317 million Telegram messages sent to 28,000 public Telegram channels between 2017 and 2021. Telegram does not have a central directory of all channels, so the dataset was built with a snowball method: starting from a list of 250 English-language channels of users discussing politics, local news or crypto-currencies, its creators identified more channels and groups by looking at the shared posts and grew the dataset incrementally.
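To picture how such a snowball crawl works, here is a minimal sketch in Python. It is illustrative only: get_channel_messages is a hypothetical stand-in for a Telegram API client or a message dump, the seed channel name is invented, and a real pipeline would also follow forwarded posts rather than t.me links alone.

import re
from collections import deque

# Hypothetical helper: in a real pipeline this would call the Telegram API
# or read from a message dump; here it just returns the text of each post.
def get_channel_messages(channel: str) -> list[str]:
    return []  # placeholder

# Public-channel links of the form t.me/<name> embedded in message text.
TME_LINK = re.compile(r"(?:https?://)?t\.me/([A-Za-z0-9_]{5,32})")

def snowball(seed_channels: list[str], max_channels: int = 28_000) -> set[str]:
    """Breadth-first discovery of public channels via links shared in posts."""
    discovered = set(seed_channels)
    frontier = deque(seed_channels)
    while frontier and len(discovered) < max_channels:
        for message in get_channel_messages(frontier.popleft()):
            for match in TME_LINK.finditer(message):
                name = match.group(1)
                if name not in discovered:
                    discovered.add(name)
                    frontier.append(name)
    return discovered

channels = snowball(["example_politics_channel"])  # invented seed name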
From this dataset, we focused on messages covering the most recent two-year period available – from October 1, 2019, to September 30, 2021. This period is the most relevant for our research aims: before it, the English-speaking audience of Telegram was smaller and the problem of misinformation was less prevalent, as many actors spreading misleading content had not yet been “purged” from mainstream social media platforms and so were not focusing their attention on largely un-moderated alternatives like Telegram.
The dataset contained 24.7 million messages from this period. For our analysis we extracted all messages that contained hyperlinks to any website – 6.8 million in total – and focused only on content posted in channels rather than in groups. Using these posts, we assessed the distribution of both misleading and professional news sources.
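Extracting the link-bearing posts and tallying the domains they point to is conceptually simple. The following Python sketch shows one way to do it; the sample post and domain are invented, and the real input would be the 24.7 million messages described above.

import re
from collections import Counter
from urllib.parse import urlparse

URL = re.compile(r"https?://[^\s]+")

def tally_link_domains(posts):
    """Keep only posts containing hyperlinks and count the domains linked to."""
    counts = Counter()
    for channel, text in posts:  # hypothetical (channel, text) pairs
        for url in URL.findall(text):
            counts[urlparse(url).netloc.lower().removeprefix("www.")] += 1
    return counts

sample = [("some_channel", "More details: https://www.example.com/story")]
print(tally_link_domains(sample))  # Counter({'example.com': 1})

Mapping each domain against lists of known misleading sources and professional news outlets then gives the distribution we were after.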
The Findings
We found that although links to known sources of misleading information were shared more often than links to professional news sources, misleading content did not attract more Telegram views than posts with links to professional news. We also found that Telegram channels with a high proportion of misleading sources were more active, and their posts had greater reach than those sharing links to credible news outlets. Overall, however, misleading sources were confined to a smaller set of channels than professional news sources.
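The distinction between “shared more often” and “viewed more” can be made concrete with a toy calculation, sketched below in Python. The domain names and view counts are invented; the actual study relied on curated lists of misleading and professional sources.

from statistics import mean

# Invented labels and numbers, for illustration only.
MISLEADING = {"misleading-example.com"}
PROFESSIONAL = {"professional-example.com"}

posts = [  # (domain, views) pairs, as extracted from link-bearing posts
    ("misleading-example.com", 300),
    ("misleading-example.com", 250),
    ("professional-example.com", 900),
]

for label, domains in [("misleading", MISLEADING), ("professional", PROFESSIONAL)]:
    views = [v for d, v in posts if d in domains]
    # Shares = how often links were posted; mean views = the audience reached.
    print(label, "shares:", len(views), "mean views:", mean(views))

In this toy example, misleading links are shared twice as often, yet each one attracts fewer views per post – mirroring the first part of the finding.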
Several important conclusions can be derived from our findings. First, contrary to widespread attitudes based on non-systematic observations, our research demonstrated that Telegram has not actually become an environment where misinformation outperforms professional news. The audience of US-focused professional news sources was potentially more extensive than the audience of US-focused sources that shared misleading content. Misinformation may outperform in terms of content shares, but its scale and reach seem less dramatic than pundits and commentators frequently portray.
These findings have theoretical implications as well. Previous studies have argued that misleading information has the potential to reach a larger audience due to its sensationalized nature. Our data challenge this theory, showing that, while misleading information can enjoy success, high-quality news seems to be more successful even on a largely un-moderated platform.
Second, platforms without one of the key preconditions for content virality – algorithmically curated timelines – can still see misinformation disseminated virally. We found that misleading sources were shared on Telegram more often than professional news, and communities sharing links to misleading information were more active content contributors than communities sharing information from trusted sources. These results are consistent with previous studies of user engagement with misleading information versus professional news on platforms like Facebook or Twitter that showed that the total audience of misleading sources was typically smaller, but more engaged.
Unlike Facebook or YouTube, Telegram offers no algorithmic timeline or recommendations that could surface content to users who are not subscribed to particular channels. Despite the absence of the algorithmic curation that encourages the viral spread of information, Telegram misinformation communities managed to disseminate content across their network by sharing posts with links to their sources. One of the most commonly proposed solutions to misinformation on social media is to curb the power of platform owners to encourage the virality of content on their algorithmically curated timelines. However, our experiment showed that misinformation can be virally distributed even on platforms without an algorithmic timeline if active communities are involved in spreading such content.
Third, our findings correspond to previous research suggesting that a few active sources of misleading information can contribute the majority of all such information distributed on a platform; indeed, similar patterns have been found in other domains of online life, such as comments under news articles. The process of motivated reasoning can account for the existence of these normally tiny but active communities. The main risk is that, in the case of a global event like the pandemic or the war in Ukraine, Telegram activist networks can quickly and easily become influential in linking an information network to the outside world, as well as in building large cohesive communities. Similar methods can be employed by state actors to influence public opinion by building alternative media ecosystems based on misleading or fabricated sources. Such actors can potentially build larger networks and spread information with greater speed.
Taken together, these findings emphasize the role of user agency, rather than algorithmic control, in the battle between professional and misleading news on all digital platforms.
In conclusion, there is no substitute for the active and engaged end user, employing critical thinking and a healthy dose of skepticism to both mainstream and alternative sources of information. Both the heavily regulated traditional social media ecosystem and the emerging anarchistic alternatives have their risks and benefits for the vigilant reader.
Miro Yanev is a Digital Media and Artificial Intelligence expert based in Sofia, Bulgaria