Meta reaffirms commitment to content integrity and introduces new changes in policy


Will Easton: ‘It’s essential that we remain agile and respond to evolving needs, ensuring that our approach stays relevant and effective.’

Meta announced key changes to its approach to content moderation last month, including a shift from third-party fact-checking to "community notes" as a way to combat misinformation on its platforms. Mia Garlick, Meta's regional director of policy for Australia, Japan, Korea, New Zealand & Pacific Islands, explained the company's ongoing commitment to maintaining content integrity while adapting its strategies to evolving user engagement patterns.

One of the key developments involves the transition from third-party fact-checkers to a more community-driven model in the US, which Garlick explained as a response to previous challenges with misinformation management. “When we first rolled out fact-checking, putting large labels on false content actually led to it being shared more. People were suspicious and claimed that Meta was part of a conspiracy,” she said. “With community notes, which is only being rolled out in the US at present, we’re aiming for a more constructive dialogue between users with differing views, which we believe may be more effective than traditional fact-checking in changing people’s minds.”

Community Notes will not be implemented in Australia this year. Any potential introduction would only occur after a thorough assessment of local laws and after the model has been trialled in the US.

The changes are not limited to fact-checking. In a bid to evolve with societal shifts, Meta also announced a refinement of its hate speech policies, pivoting towards a focus on “hateful conduct.” “We’re allowing more expression, especially around topics that are highly debated. For instance, some people have used slurs to reclaim them, and previously, we removed that content,” Garlick said. “Now, we’re focusing on removing harmful content, such as attacks based on protected characteristics, but allowing more speech that could be deemed offensive but is used for purposes like political debate.

“You don’t have to agree or disagree with what someone else says, but we are changing our rules to allow for this discussion and discourse. For example, someone stating that women should not be allowed to serve in combat roles in the military was until now not permitted on our platforms, yet this could be discussed in other forums like Parliament.”

As Meta continues to roll out these changes, beginning in the US, the company says it remains committed to ensuring its platforms are safe and responsible spaces. “We have a commercial incentive to ensure the integrity of our systems. If our users don’t trust our platforms, they won’t use them,” Garlick said.

Will Easton, MD of Meta ANZ, added: “These updates reflect our commitment to providing a balance between user expression and protecting against harmful content. It’s essential that we remain agile and respond to evolving needs, ensuring that our approach stays relevant and effective.”

Keep on top of the most important media, marketing, and agency news each day with the Mediaweek Morning Report – delivered for free every morning to your inbox.
