March 21, 2025

Meta Vows to Combat Deepfakes Ahead of Australian Election

As Australia prepares for a national election this May, Meta, the parent company of Facebook and Instagram, has pledged to step up efforts to detect and curb false content and deepfakes across its platforms. While this move appears to run counter to Meta’s recent shift toward lighter content moderation in the U.S., the company is acting in compliance with Australian legislation.

In a blog post released Tuesday, Meta’s Head of Policy in Australia, Cheryl Seeto, outlined the company’s election integrity plans, which include partnering with fact-checkers from Agence France-Presse and the Australian Associated Press. The company confirmed it will remove any content that could lead to imminent violence or interfere with voting, while false or misleading content will be demoted in visibility across the platform.

“When content is debunked by fact-checkers, we attach warning labels to the content and reduce its distribution in Feed and Explore so it is less likely to be seen,” said Seeto.

Meta also stated that AI-generated deepfake content will either be removed if it violates community policies or labelled as “altered” and pushed down in users’ feeds. Importantly, users will be prompted to disclose when sharing AI-generated or photorealistic content.

This transparency pledge comes shortly after Meta shut down its U.S. fact-checking programs and loosened restrictions around politically sensitive topics such as immigration and gender identity, an overhaul reportedly driven by political pressure. The contrast highlights Meta’s country-by-country approach to policy enforcement, shaped more by local regulation than by a unified global standard.

With opinion polls indicating a tight race between the ruling centre-left Labor Party and the opposition Liberal-National coalition, Australia’s election will be closely watched. Meta confirmed that this move aligns with similar misinformation prevention efforts it carried out in recent elections in the U.S., U.K., and India.

However, Meta is also navigating growing regulatory pressure in Australia. The government plans to introduce a levy on major tech firms to compensate publishers for the use of local news content. Additionally, social media companies are expected to enforce a ban on users under 16 before the end of 2025.

As public concern over misinformation and AI-generated content continues to grow, Meta’s actions in Australia signal a more proactive, if not fully consistent, approach to content moderation on a global scale.
