How Meta Is Preparing For Indian General Elections 2024

As the world’s largest democracy prepares for the 18th General Elections, Meta will continue efforts to limit misinformation, remove voter interference, and enhance transparency and accountability on our platforms to support free and fair elections. To that end, we have around 40,000 people globally working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016. This includes 15,000 content reviewers who review content across Facebook, Instagram, and Threads in more than 70 languages — including 20 Indian languages. With lessons learnt from hundreds of elections globally, and with so many more important elections approaching this year, we have developed a comprehensive approach for our platforms.

Over the last eight years, we’ve rolled out industry-leading transparency tools for ads about social issues, elections or politics, developed comprehensive policies to prevent election interference and voter fraud, and built the largest third-party fact-checking programme of any social media platform to help combat the spread of misinformation. More recently, we have committed to taking a responsible approach to new technologies like GenAI. We’ll be drawing on all of these resources in the run-up to the elections.

As with all major elections, we’ll also activate an India-specific Elections Operations Center, bringing together experts from across the company from our intelligence, data science, engineering, research, operations, content policy and legal teams to identify potential threats and put specific mitigations in place across our apps and technologies in real time.

We are closely engaged with the Election Commission of India via the Voluntary Code of Ethics that we joined in 2019, which gives the Commission a high-priority channel to flag unlawful content to us.

Addressing Online Misinformation 

We remove the most serious kinds of misinformation from Facebook, Instagram and Threads, such as content that could suppress voting or contribute to imminent violence or physical harm. During the Indian elections, based on guidance from local partners, this will include false claims about someone from one religion physically harming or harassing another person or group from a different religion. For content that doesn’t violate these particular policies, we work with independent fact-checking organizations. We are continuing to expand our network of independent fact-checkers in the country – we now have 11 partners across India covering 15 languages, making it one of our largest fact-checking networks of any country.

Ahead of the election period, we will make it easier for all our fact-checking partners across India to find and rate content related to the elections, because we recognize that speed is especially important during breaking news events. We’ll use keyword detection to make it easier for fact-checkers to find and rate misinformation. Our fact-checking partners are also being onboarded to our new research tool, Meta Content Library, which has a powerful search capability to support them in their work. Indian fact-checking partners are the first amongst our global network of fact-checkers to have access to Meta Content Library.
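To illustrate the kind of keyword detection described above, here is a minimal sketch of how election-related posts might be surfaced to fact-checkers for review. The keyword list, post structure and matching logic are assumptions for illustration only, not Meta’s production system.

    # Minimal illustrative sketch of keyword-based surfacing of election-related
    # content for fact-checker review. The keyword list and Post structure are
    # assumptions for illustration, not Meta's production pipeline.
    from dataclasses import dataclass

    ELECTION_KEYWORDS = {  # hypothetical multilingual keyword list
        "polling booth", "evm", "voter list", "election commission",
        "matdan", "chunav",  # transliterated Hindi terms
    }

    @dataclass
    class Post:
        post_id: str
        text: str

    def flag_for_fact_check(posts: list[Post]) -> list[Post]:
        """Return posts whose text contains any election-related keyword."""
        flagged = []
        for post in posts:
            text = post.text.lower()
            if any(keyword in text for keyword in ELECTION_KEYWORDS):
                flagged.append(post)
        return flagged

    if __name__ == "__main__":
        sample = [
            Post("1", "Rumour: the Election Commission has changed polling booth timings"),
            Post("2", "Weekend cricket highlights"),
        ]
        for post in flag_for_fact_check(sample):
            print("Queue for fact-check review:", post.post_id)

In practice, a system like this would only narrow the pool of candidate content; the rating decision itself stays with the human fact-checkers.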

a) Countering risks emanating from the misuse of GenAI 

We recognize the concerns around the misuse of AI-generated content to spread misinformation, and we actively monitor emerging content trends to keep our policies up to date. Our Community Standards and Community Guidelines govern the types of content and behaviors that are acceptable on Facebook and Instagram, applying to all content on our platforms, including content generated by AI. When we find content that violates our Community Standards or Community Guidelines, we remove it whether it was created by AI or a person.

AI-generated content is also eligible to be reviewed and rated by our network of independent fact-checkers. Many of our fact-checking partners are trained in visual verification techniques, such as reverse image searching and analyzing the image metadata that indicates when and where a photo or video was taken. They can rate a piece of content as ‘Altered’, which includes “faked, manipulated or transformed audio, video, or photos.” Once a piece of content is rated as ‘Altered’, or we detect it as near identical, it appears lower in Feed on Facebook and we dramatically reduce its distribution. On Instagram, altered content gets filtered out of Explore and is featured less prominently in Feed and Stories. This significantly reduces the number of people who see it.
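As a concrete example of the metadata analysis mentioned above, the sketch below reads the EXIF tags that can indicate when and on what device a photo was taken. It assumes the Pillow library and a placeholder file path; it is not a tool Meta or its partners are confirmed to use, just one common way such checks are done. Absent or stripped metadata can itself be a useful signal during verification.

    # Illustrative EXIF check for image verification. Requires Pillow
    # (pip install Pillow); the file path is a placeholder. Finer-grained tags
    # (e.g. DateTimeOriginal, GPS) live in sub-IFDs reachable via exif.get_ifd().
    from PIL import Image, ExifTags

    def summarize_exif(path: str) -> dict:
        """Return a few EXIF tags useful for verifying an image's origin."""
        exif = Image.open(path).getexif()
        summary = {}
        for tag_id, value in exif.items():
            tag_name = ExifTags.TAGS.get(tag_id, str(tag_id))
            if tag_name in {"DateTime", "Make", "Model", "Software"}:
                summary[tag_name] = value
        return summary

    if __name__ == "__main__":
        print(summarize_exif("suspect_photo.jpg"))  # placeholder filename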

For content that doesn’t violate our policies, we still believe it’s important for people to know when photorealistic content they’re seeing has been created using AI. We already label photorealistic images created using Meta AI with visible markers on the images, as well as invisible watermarks and metadata embedded within the image files. We are also building tools to label AI-generated images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock that users post to Facebook, Instagram and Threads.
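To make the embedded-metadata idea concrete, here is a deliberately crude, hedged sketch that scans an image file’s raw bytes for industry provenance markers such as the IPTC DigitalSourceType value “trainedAlgorithmicMedia” or a C2PA manifest hint. The marker list and approach are illustrative assumptions; Meta’s actual labelling pipeline is not public, and invisible watermarks require separate, generator-specific detectors.

    # Crude illustrative check for embedded AI-provenance metadata in an image
    # file. Marker strings are assumptions based on public industry standards
    # (IPTC DigitalSourceType, C2PA); this is not Meta's labelling system.
    AI_PROVENANCE_MARKERS = [
        b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType for generative AI
        b"c2pa",                     # hint that a C2PA/Content Credentials manifest is present
    ]

    def has_ai_provenance_metadata(path: str) -> bool:
        """Return True if the file contains any known provenance marker string."""
        with open(path, "rb") as f:
            data = f.read()
        return any(marker in data for marker in AI_PROVENANCE_MARKERS)

    if __name__ == "__main__":
        print(has_ai_provenance_metadata("downloaded_image.jpg"))  # placeholder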

Starting this year, we also require advertisers globally to disclose, in certain cases, when they use AI or digital methods to create or alter a political or social issue ad. This applies if the ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to depict a real person as saying or doing something they did not say or do. It also applies if an ad depicts a realistic-looking person that does not exist or a realistic-looking event that did not happen, alters footage of a real event, or depicts a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

b) Consumer education initiatives to combat the spread of misinformation

We understand that it is important to educate people on the role they can play in curbing the spread of misinformation. We have been running an integrated eight-week safety campaign, ‘Know What’s Real’, since the end of February. The campaign focuses on educating users on identifying and addressing misinformation on WhatsApp and Instagram by promoting digital best practices and highlighting available safety tools, and it encourages people to double-check information that sounds suspicious or inaccurate by sending it to WhatsApp tiplines.

Recently, we joined forces with the Misinformation Combat Alliance (MCA) to introduce a WhatsApp helpline to deal with AI-generated misinformation, especially deepfakes, providing a platform for reporting and verifying suspicious media. The service will support multiple languages, enhancing accessibility for users across India. We’re also working with MCA to conduct training sessions for law enforcement officials and other stakeholders on advanced techniques for combating misinformation, including deepfakes, using effective open-source tools.

Addressing Virality On WhatsApp 

WhatsApp will continue to limit people’s ability to forward messages, and last year announced that any message that has been forwarded once can only be forwarded to one group at a time, rather than the previous limit of five. When we introduced the same limit for highly forwarded messages in 2020, we reduced the number of these messages sent on WhatsApp globally by more than 70%.
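The forwarding limit described above can be thought of as a simple rule keyed on a per-message forward count. The sketch below models that behaviour; the Message class, counters and the five-chat baseline are assumptions for illustration, not WhatsApp’s actual implementation.

    # Illustrative model of the forwarding limit described above: a message that
    # has already been forwarded once can only go to one group at a time.
    # Data structures and the baseline limit are assumptions, not WhatsApp code.
    from dataclasses import dataclass

    MAX_GROUPS_FIRST_SHARE = 5        # assumed baseline for a message not yet forwarded
    MAX_GROUPS_ALREADY_FORWARDED = 1  # limit once a message carries a "forwarded" flag

    @dataclass
    class Message:
        text: str
        forward_count: int = 0  # how many hops this message has already travelled

    def allowed_group_targets(message: Message) -> int:
        """How many groups this message may be forwarded to in one action."""
        if message.forward_count >= 1:
            return MAX_GROUPS_ALREADY_FORWARDED
        return MAX_GROUPS_FIRST_SHARE

    def forward(message: Message, group_ids: list[str]) -> Message:
        limit = allowed_group_targets(message)
        if len(group_ids) > limit:
            raise ValueError(f"Can only forward to {limit} group(s) at a time")
        return Message(text=message.text, forward_count=message.forward_count + 1)

    if __name__ == "__main__":
        fresh = Message("Polling starts at 7am")
        once_forwarded = forward(fresh, ["group-a"])
        print(allowed_group_targets(once_forwarded))  # prints 1

The design intent is friction rather than prohibition: each additional hop makes mass re-sharing slower, which is what drove the reported drop in highly forwarded messages.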

People can also control who can add them to group chats and have options to block and report unknown contacts, giving them even more control over their privacy.

Preventing Voter Interference and Encouraging Civic Engagement 

We’re continuing to connect people with details about voting while enforcing our policies against voter interference, electoral violence and misinformation about when, where, and how to vote in an election. We continually review and update our election-related policies, which prohibit election and voter interference. We have zero tolerance for these violations, such as misleading polling dates or incorrect information about the voting process, and we take action when content violates our Community Standards.

We don’t allow ads that contain content debunked by third-party fact-checkers. We also don’t allow ads that discourage people from voting in an election, that call into question the legitimacy of an upcoming or ongoing election, or that make premature claims of election victory. Our ads review process has several layers of analysis and detection, both before and after an ad goes live.

One key area we focus on during elections is civic engagement and supporting efforts by the Election Commission of India to drive voter participation. Recently, on National Voters’ Day in January, we launched a nationwide alert to encourage users to visit the ECI website to access authentic information about elections. As with previous elections, including last year’s five state elections, we will run Voting Day reminders and encourage users to share that they voted.

Apart from this, we will launch the ‘Celebrate Each Vote’ campaign in partnership with national and regional creators to encourage voter awareness and tackle voter apathy among their communities in local languages across the country. The campaign will begin in late March and will target all voters, especially first-time voters, while also debunking election-related misinformation.

Promoting Transparency & Accountability

Since 2018, we have provided industry-leading transparency for ads about social issues, elections or politics, and we continue to expand those efforts. We have long believed in the role that transparency plays in bringing more accountability to Meta and our advertisers, which is especially important for ads that can influence the way people think, act and vote. Since 2020, people have also been able to choose to see fewer of these ads.

Advertisers who run these ads are required to complete an authorization process and include a “paid for by” disclaimer. We provide information about advertiser targeting choices and ad delivery in the publicly available Ad Library. All social issue, electoral and political ads information is stored in the Ad Library for seven years.
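The Ad Library can also be queried programmatically. The sketch below shows one way to search political and social-issue ads that reached India via the publicly documented Ad Library API; the endpoint path, parameter names, fields and API version reflect the public documentation but may vary over time, and the access token is a placeholder that requires advertiser identity verification to obtain.

    # Hedged sketch of querying the public Ad Library API for political and
    # social-issue ads that reached India. Requires the requests package;
    # endpoint, parameters and field names follow Meta's public documentation
    # but may change by API version. The access token is a placeholder.
    import requests

    AD_LIBRARY_URL = "https://graph.facebook.com/v19.0/ads_archive"

    def search_political_ads(search_terms: str, access_token: str) -> list[dict]:
        """Return a page of political/social-issue ads matching the search terms."""
        params = {
            "search_terms": search_terms,
            "ad_type": "POLITICAL_AND_ISSUE_ADS",
            "ad_reached_countries": '["IN"]',
            "fields": "page_name,ad_delivery_start_time,bylines,ad_creative_bodies",
            "access_token": access_token,
        }
        response = requests.get(AD_LIBRARY_URL, params=params, timeout=30)
        response.raise_for_status()
        return response.json().get("data", [])

    if __name__ == "__main__":
        ads = search_political_ads("election", access_token="YOUR_ACCESS_TOKEN")
        for ad in ads[:5]:
            # "bylines" carries the "paid for by" disclaimer shown in the Ad Library
            print(ad.get("page_name"), "|", ad.get("bylines"))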
