India intends to regulate social media and has notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021. However, India is not the only country to have imposed social media regulations. Let’s see how social media regulations in other major countries stack up against India’s hotly debated rules.
The United States

The First Amendment of the United States Constitution prohibits the government from restricting most forms of speech. However, Section 230 of the Communications Decency Act is the most important provision in the United States for regulating social media. It states that an “interactive computer service” cannot be treated as the publisher or speaker of third-party content. This shields websites from legal action if a user posts something illegal, with exceptions for copyright violations, sex work-related material, and violations of federal criminal law.
While the provision exempts online services, including intermediaries, from liability for the transmission of third-party content, there is growing agreement in the United States that these social media intermediaries should be regulated. Even the current US President, Joe Biden, has publicly proposed repealing Section 230 entirely.
The Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) aims to combat sex trafficking by weakening the legal protections available to online platforms. It makes clear that Section 230 immunity does not apply in cases of sex trafficking or conduct that promotes or facilitates prostitution.
In addition, the US Congress has proposed multiple bills to regulate social media. One proposed bill, establishing a National Commission on Online Child Exploitation Prevention, lays out guidelines for detecting and removing child exploitation content. Noncompliance with these guidelines may result in the loss of immunity under Section 230 of the Communications Decency Act.
Other Bills and Recommendations
Further, lawmakers introduced the Lawful Access to Encrypted Data Act of 2020 in the US Senate. The bill would effectively bar providers from offering end-to-end encryption in online services, or any encryption that they cannot unlock for law enforcement purposes. When a court serves a search warrant or an order on such providers for a user’s data or device, the providers must be able to decrypt the relevant data and hand it over to law enforcement authorities in intelligible form.
The United States Department of Justice has also issued a series of recommendations to reform Section 230. Among them are new carve-outs for cyberstalking and terrorism, which are likely to prompt more proactive moderation efforts, as well as measures to deter arbitrary or discriminatory moderation. The recommendations go on to say that the law should grant immunity for moderation decisions only when they are made in accordance with clear and specific terms of service and accompanied by a reasonable explanation.
Clarence Thomas, a US Supreme Court Justice, has stated that Congress should consider updating laws to better regulate social media platforms that have “unbridled control” over “unprecedented” amounts of speech. He contended that while digital platforms like Twitter allow for historically unprecedented amounts of speech, they also concentrate control of so much speech in the hands of a few private parties. Twitter has the ability to remove anyone from the platform, including the President of the United States. As Twitter demonstrated, the right to silence speech is wielded most powerfully by private digital platforms.
It is clear from the above that the wind is blowing in the direction of regulating social media in the United States as well.
The United Kingdom
The Draft Online Safety Bill was introduced in the UK Parliament in May 2021. The proposed legislation requires digital service providers to moderate user-generated content in order to protect users from exposure to illegal and/or harmful content online. If enacted, the law will require companies to have easily accessible and effective complaint mechanisms in place, allowing users to file a complaint if they believe the service provider removed their content unfairly.
The bill does not prohibit end-to-end encryption – which prevents anyone other than the sender and recipient from reading a private message – or anonymous internet use, but the government has stated unequivocally that end-to-end encryption can facilitate child abuse. The Duty of Care enshrined in the Online Safety Bill comes into play here: introducing a high-risk design feature such as end-to-end encryption without corresponding safeguards would be a clear violation of that duty.
The Investigatory Powers Act 2016 is another such piece of legislation that establishes and, in some ways, expands the electronic surveillance powers of British intelligence agencies and police. The Act authorizes targeted equipment interference or communication interception. It also requires CSPs (communication service providers) in the UK to be able to remove CSP-applied encryption. However, it does not require foreign companies to remove encryption. The European Court of Human Rights (ECHR) recently ruled that the law “partially violated” the European Convention on Human Rights. The surveillance program did not violate human rights by itself, but violations stemmed from a lack of safeguards and protections.
Interim Code of Practice
The government has established interim codes of practice until it issues and implements a full code of practice. These outline the steps that companies should take to combat terrorism and child sexual exploitation and abuse online. The interim codes, and all of the principles they contain, are entirely voluntary and non-binding.
The Interim Code of Practice on Terrorist Content and Activity Online addresses law enforcement in Section 4. The goal is to prioritize the removal of any terrorist content and activity that violates UK terrorism legislation and to comply with removal and retention requests from recognized UK law enforcement bodies. It also asks service providers to voluntarily report to the authorities where they suspect an imminent threat to life or of serious physical injury arising from activity on their platforms.
The United Kingdom has led the push to restrict end-to-end encryption. With the enactment of the new intermediary rules, India is said to have joined this drive.
France

In May 2020, France passed a new law requiring social media platforms to remove certain hateful and illegal content within 24 hours. The law requires large operators of online platforms – those whose activity consists of connecting several parties for the sharing or referencing of content – to remove, within 24 hours of being notified by users, content that condones certain crimes; incites discrimination, hatred, or violence; denies crimes against humanity; or constitutes aggravated insults, sexual harassment, or child pornography.
However, the new law reduces this period to one hour for terrorist and child pornography content, once the public authorities notify the platform about it. Platforms could face fines of up to €1.25 million ($1.36 million) in the event they fail to follow the regulations.
Germany

Germany enacted the Network Enforcement Act, which applies to companies with more than two million registered users in the country. The law requires social media platforms to remove hate speech and fake news within 24 hours of being flagged or face fines of up to $60 million. They must also publish reports every six months detailing the number of complaints received about illegal content. Like the Indian rules, the German law requires platforms to designate a contact person for the authorities and to offer a complaint mechanism.
The German Justice Ministry is reportedly asking messaging applications as well to comply with the Act.
There has been much discussion about whether the new intermediary rules are adequate or overreaching. Social media has gained a lot of traction in our country, and with that comes a lot of power to influence people; such power can’t be left unregulated. Countries all over the world have recognized the importance of regulating these platforms in order to limit their unbridled power.
But as a sovereign state and the world’s largest democracy, India should endeavour to protect fundamental rights at all costs. As for the contentious decryption and traceability provisions, balancing them with individual privacy is indeed quite difficult. Perhaps any law forcing decryption should embed inherent safeguards, something that the ECHR says the Investigatory Powers Act lacks. More discussion is needed, and in the ongoing lawsuit between WhatsApp and the Union of India, the Court’s decision will play an important role in helping us understand how the contentious provisions are to be interpreted.
It should also be ensured that the removal of content is not politically motivated and is entirely in the best interests of the country and its people. However, I am convinced that Indian law is a step in the right direction for regulating social media platforms. It is a necessary step. It is self-evident that countries around the world enact laws that are in the best interests of their citizens, and social media entities must follow those laws.
The rules have now come into force, and platforms such as Google and Facebook have stated that they will comply with the law. Twitter has also appointed a Compliance Officer, though it is reported that he is not an employee of the company, as the rules require. It will be interesting to see how this all plays out and in whose favour the Court rules: absolute freedom, self-moderation, or legally regulated moderation.
This article was written by Mrunal Pol, a student of ILS Law College, Pune.