Google has launched the latest iteration of its Ads Safety Report, building on the progress made with policies for ads and publishers, as well as its accountability for maintaining a healthy ad-supported internet.
In a Mediaweek exclusive, Google’s general manager of Ads Safety, Alex Rodriguez, spoke about how the tech platform blocks and suspends bad ads and advertiser accounts, the reliance on human moderation, and how policies around AI-generated election ads are being enforced.
Advanced AI tools
The report highlighted that Google’s investment in AI-powered tools such as large language models accelerates enforcement against bad ads, helps prevent fraud, improves its ability to uncover networks of bad actors and repeat offenders, and frees up people for more complex tasks.
In 2024, Google said it blocked or removed 5.1 billion bad ads globally. The majority of these violated its abusing the ad network policy (793.1 million), followed by trademark (503.1 million), personalised ads (491.3 million), legal requirements (280.3 million) and financial services (193.7 million).
The tech giant also reported it suspended over 39.2 million accounts, the majority of which were suspended before they ever served an ad.
In Australia, Google reported 205.7 million ads were removed while 841,000 ad accounts were suspended in 2024.
Google’s Advertiser identity verification tool has also helped prevent suspended bad actors from returning and provided transparency about who is behind an ad.
The program claims to cover more than 200 countries and territories, with over 90% of ads seen by people on Google coming from verified advertisers, on average.
Collaborating with the industry to fight scams
Google has also taken a collaborative approach with the industry to combat sophisticated and ever-changing scam techniques, teaming up with the Global Anti-Scam Alliance to create the Global Signal Exchange, an enhanced cross-industry information-sharing network.
The tech giant also updated its misrepresentation policy to fight public figure impersonation ads, a change Google claims has led to a 90% drop in reports of such scam ads over the last year.
Safeguarding election integrity
Google has expanded its efforts in supporting election integrity through identity verification and transparency requirements for election advertisers in new countries.
This includes ‘paid for by’ disclosures and a public transparency report that lets users identify election ads and who paid for them.
Google is also launching disclosure requirements for AI-generated content in election ads, building on its existing transparency efforts around elections. The tech giant said it verified more than 8,900 new election advertisers and removed 10.7 million election ads from unverified accounts. It also enforced policies against false election claims around the world.
Rodriguez on Google AI and the reliance on human moderation
Google’s general manager of Ads Safety, Alex Rodriguez, explained to Mediaweek that Google trains its AI models with information, signals and human reviews. After enough training, he said, the models can make certain judgments on their own.
“Then we routinely sample and test its decisions versus good known human reviews to make sure that it’s acting in a way that we feel is appropriate and we do measurements and have metrics all around that.”
He noted that information and signals can continue to be added, but human moderation comes into play when something seems risky, in which case a person will review and make a call.
“Humans are still very, very much so in the loop and we must continue to do that because we want to make sure that we’re continually testing and probing this model to make sure that it’s doing what it’s supposed to be doing and acting in a good way.
“We see that too, by any number of signals, both our human reviews, we also take user feedback very seriously. If a user goes and reports an ad, we have humans that go look at that report or that look at those, the initial report on any given ad. So that’s critical for us to kind of maintain and build our models and make them more effective.”
Rodriguez on enforcing policies on AI-generated election ads
Google was among the first to introduce disclosure requirements for AI-generated election ads. Those policies are enforced by asking advertisers to be honest and transparent and to disclose AI-generated content themselves.
Rodriguez said: “If it is, and they’re not providing disclosures, we do follow up and we engage to make sure that they are correctly disclosing as we see fit for our platform.”
He also noted that the tech giant has tools capable of detecting whether content is AI-generated.
On whether there is potential for transparency standards and disclosure requirements to be expanded to non-political advertising, Rodriguez said: “We’re currently considering how and what makes the most sense for public disclosure across all ads.
“It’s nothing that we’ve made necessarily a decision on just yet, but we are definitely in conversations to figure out what we think would make the best sense for the platform.”