[Law & Tech] Does content promoting Vaccine Hesitancy fall under 'misleading' information?

  • Disha Patwa
  • 08:44 AM, 15 Jun 2021


With the invention, production and availability of vaccines for the Coronavirus, governments across the world face another obstacle to overcome: vaccine hesitancy. The World Health Organisation defines vaccine hesitancy as a “delay in acceptance or refusal of vaccines despite availability of vaccine services”.[1]

Although vaccine hesitancy is not a novel issue, having existed well before the advent of social media, platforms (including but not limited to Twitter) have made it far easier to circulate and express this hesitancy.

Social media platforms are mediums not only of expression but also of influence. They are spaces where questions can be raised, but also where communities form around information and beliefs through acts of imitation and confirmation. Be that as it may, Twitter acts not only as a medium to voice concerns regarding vaccination(s), but also as a medium to influence people.

The pandemic has raised numerous issues. Making sense of facts amidst the bombardment of conspiracies, inaccurate information, and baseless claims has been a constant struggle.

How, then, should the correct mechanism for content moderation be devised to deal with the obstacle of vaccine hesitancy? Can social media platforms like Twitter apply their content moderation policies uniformly and effectively in such circumstances?

Amidst the pandemic, we currently inhabit a highly mediated world. Social media has been a major source of connection between individuals and their communities. During the pandemic, social media platforms proved to be among the major sources of information regarding necessary supplies such as oxygen, medicines, and hospital beds, with numerous leads helping people in dire situations. The very platforms connecting us socially in a state of social distancing and isolation are also the platforms that make sharing unsubstantiated information and opinions easier. The problem of misinformation, albeit pre-existing, surfaced as a major problem right from the onset of the pandemic. Conspiracy theories regarding the virus, myths about masks and sanitizers, misinformation about vaccination, and false or contradictory opinions on issues of public health and medicine have all emerged as pertinent issues.

The ongoing pandemic has therefore brought the already problematic arena of content moderation on social media platforms to the fore. The problem of content moderation is not only one of efficiency but, as this article asserts, also one of accountability and transparency. Although Twitter has a policy in place to counter misinformation, the process and reasons for taking down, tagging, or deleting content are not made available, so the basis on which such action is taken cannot be examined.

Moreover, on one hand, broader censorship may stifle speech and the right to question as it exists in a democracy; on the other hand, inefficient content moderation may leave no substantial impact on how social media influences opinion on vaccination.

Twitter and Content Moderation:

In March 2020, following Facebook's steps to remove content with false information about COVID, Twitter implemented a similar policy, stating,
“We’re expanding our safety rules to include content that could place people at a higher risk of transmitting COVID-19.”

Subsequently, vaccination and the information around it were also brought within this content moderation policy.

Existing content moderation policy of Twitter concerning COVID vaccination:

Twitter has a policy to tackle misinformation related specifically to the pandemic. It has put a "strike system" in place, under which action is taken depending on a user's repeated violations. According to this policy, vaccine hesitancy is dealt with as follows:

False and misleading information about:

  • “The pandemic or COVID-19 vaccines that invoke a deliberate conspiracy by malicious and/or powerful forces.
  • Vaccines and vaccination programs which suggest that COVID-19 vaccinations are part of a deliberate or intentional attempt to cause harm or control populations.
  • How vaccines are developed, tested, and approved by official health agencies as well as information about government recommendations.
  • Twitter has also introduced a strike system based on two actions: Tweet deletion and labelling. For severe violations of the above, Twitter requires deletion of the content, which accrues two strikes, whereas labelling the content with additional context from Twitter regarding the nature of the content accrues one strike.”[2]

“Repeated violations of this policy are enforced against the user on the basis of the number of strikes an account has accrued for violations of this policy:

  • 1 strike: No account-level action
  • 2 strikes: 12-hour account lock
  • 3 strikes: 12-hour account lock
  • 4 strikes: 7-day account lock
  • 5 or more strikes: Permanent suspension”[3]
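The enforcement ladder above is, in effect, a simple mapping from accrued strikes to an account-level penalty. The following is a minimal sketch of that logic in Python; the function names and the `STRIKES_PER_ACTION` table are illustrative assumptions for this article, not Twitter's actual implementation.

```python
# Sketch of the published strike-to-penalty mapping in Twitter's COVID-19
# misinformation policy. Names here are hypothetical, for illustration only.

def penalty_for_strikes(strikes: int) -> str:
    """Map an account's accrued strike count to the account-level action."""
    if strikes < 0:
        raise ValueError("strike count cannot be negative")
    if strikes <= 1:
        return "no account-level action"
    if strikes in (2, 3):
        return "12-hour account lock"
    if strikes == 4:
        return "7-day account lock"
    return "permanent suspension"  # 5 or more strikes

# Per the policy: a severe violation (requiring Tweet deletion) accrues
# 2 strikes, while a labelled Tweet accrues 1 strike.
STRIKES_PER_ACTION = {"deletion": 2, "label": 1}

def accrue(current_strikes: int, actions: list[str]) -> int:
    """Total strikes after a sequence of enforcement actions."""
    return current_strikes + sum(STRIKES_PER_ACTION[a] for a in actions)
```

Under this sketch, for example, one deleted Tweet followed by one labelled Tweet would leave an account at three strikes, triggering a 12-hour lock.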

A pertinent question arises here: with the above policy in place, is Twitter able to handle the misinformation issue on its platform?

Is there parity in the way in which it deals with all content on the platform?

Do the steps taken follow due process, and is the action in consonance with Twitter's stated community guidelines and policies?

Various recent incidents bring to light the opaque manner in which the platform’s content moderation operates. Given this lack of accountability and transparency, Twitter has several times been accused of acting on ‘whims and fancies’.[4]

Issues with the current policies:

A recent analysis by the Bureau of Investigative Journalism (the Bureau), an independent, not-for-profit organisation, reveals the incongruous manner in which the platform operates. As the Bureau states:

“An analysis of just a week on Twitter revealed more than 60 examples of misinformation shared by accounts with a combined reach of more than 3.5 million followers. More than two weeks after the tweets were collected, just five of the accounts had been suspended.”[5]

In one instance of content moderation, Twitter added a “manipulated media” tag to a tweet by the BJP’s spokesperson Sambit Patra concerning an alleged ‘Congress toolkit’. Although Twitter stated that its action on such tweets remains impartial, the process through which it evaluates and flags content remains unknown.

With no transparency in the process, there is no way to examine whether the flagging of content was in line with Twitter's policy, and consistency in its content moderation system cannot be established.

Additionally, in another instance, no action was taken against tweets promoting Baba Ramdev’s brand Patanjali. Apart from his advertising campaigns through other means, Baba Ramdev has made numerous posts on Facebook and Twitter promoting his brand's ayurvedic products as alternatives to the allopathic treatments used for COVID-19. As recently reported by the Bureau, Ramdev is actively associated with “…false claims on Twitter to his more than 2.4m followers. One video on Twitter, in which he mocks COVID-19 patients for not being able to breathe properly, has been viewed almost 800,000 times.”[6] No content has been removed from his Twitter handle, and he remains a hugely popular icon on the platform. This further highlights the incongruity in Twitter's actions.

Moreover, the manner in which content moderation operates on a platform like Twitter has huge repercussions. As stated by Amber Sinha, a researcher with the Centre for Internet and Society,

“Platforms have far too much power, and operate in a state of opacity that prevents complainants and respondents alike, as well as the general public from being able to understand how and why it takes decisions which have an impact on freedom of expression.”[7]

Vaccine Hesitancy and content moderation:

In such an ecosphere of content moderation, wherein decisions around regulating online speech are taken by social media giants in the absence of any transparency and accountability, how does one situate tweets about vaccine hesitancy?

Vaccines are a major way in which herd immunity can be reached, a fact backed by substantial scientific data. At the same time, concerns around vaccination remain for the public at large.

In such a scenario, the role of platforms like Twitter in content moderation becomes all the more crucial.

How, then, can content moderation be balanced with the right to voice an opinion or concern?

It remains a question all platforms are currently grappling with. With healthcare issues, especially in a pandemic, misinformation can endanger the public at large. At the same time, stifling relevant questions put to the government can prove to be an undue overreach of content moderation policies.

With constantly changing policies regarding vaccination, new strains of the virus emerging, and ongoing local discourse and awareness campaigns, it is all the more crucial for Twitter to adhere to its community guidelines and COVID misinformation policies in a non-arbitrary manner.

Taking down tweets, flagging content, and deleting accounts promoting vaccine hesitancy needs to be done in a proactive yet vigilant manner. Amidst the public health crisis, Twitter needs to place the public interest at the forefront, rather than governing by a logic that favours the platform commercially and politically.

- Disha Patwa is the Law & Tech Correspondent at Lawbeat