Google had received 36,934 complaints from users and removed 95,680 pieces of content based on those complaints in July. It had also removed 576,892 pieces of content in July as a result of automated detection. The US-based company has made these disclosures as part of compliance with India's IT rules, which came into force on May 26.
Google, in its latest report, said it received 35,191 complaints in August from individual users located in India via designated mechanisms, and that removal actions resulting from those complaints stood at 93,550.
These complaints relate to third-party content that is believed to violate local laws or personal rights on Google’s significant social media intermediaries (SSMI) platforms, the report said.
“Some requests may allege infringement of intellectual property rights, while others claim violation of local laws prohibiting types of content on grounds such as defamation. When we receive complaints regarding content on our platforms, we assess them carefully,” it added.
The content removal was done under several categories, including copyright (92,750), trademark (721), counterfeit (32), circumvention (19), court order (12), graphic sexual content (12), and other legal requests (4).
Google explained that a single complaint may specify multiple items that potentially relate to the same or different pieces of content, and each unique URL in a specific complaint is considered an individual “item” that is removed.
Google said that, in addition to acting on user reports, it invests heavily in fighting harmful content online and uses technology to detect and remove such content from its platforms.
“This includes using automated detection processes for some of our products to prevent the dissemination of harmful content such as child sexual abuse material and violent extremist content.
“We balance privacy and user protection to: quickly remove content that violates our Community Guidelines and content policies; restrict content (e.g., age-restrict content that may not be appropriate for all audiences); or leave the content live when it doesn’t violate our guidelines or policies,” it added.
Google said automated detection enables it to act more quickly and accurately to enforce its guidelines and policies. These removal actions may result in removing the content or terminating a bad actor’s access to the Google service, it added.
Under the new IT rules, large digital platforms (those with over 5 million users) must publish monthly compliance reports detailing the complaints received and the action taken on them.
The report must also include the number of specific communication links or parts of information that the intermediary has removed or disabled access to through proactive monitoring using automated tools.
Recently, Facebook and WhatsApp also released their compliance reports for August.
Facebook said it had "actioned" about 31.7 million pieces of content proactively across 10 violation categories in the country during August, while its photo-sharing platform Instagram proactively took action against about 2.2 million pieces across nine categories in the same period.
“Actioned” content refers to the number of pieces of content (such as posts, photos, videos, or comments) where action has been taken for violation of standards. Taking action could include removing a piece of content from Facebook or Instagram or covering photos or videos that may be disturbing to some audiences with a warning.
Facebook also said it received 904 user reports for Facebook through its Indian grievance mechanism between August 1 and 31, while Instagram received 106 reports through the same mechanism during that period.
In its report, WhatsApp said it had banned over two million accounts in India and received 420 grievance reports in August.