Is your ad budget funding disinformation? If you use programmatic media buying, the answer is most probably yes. A recent report by NewsGuard, a media research organisation, found that more than 140 major brands were paying for ads that ended up on junk websites written by AI. 

Most of the adverts on these AI-produced news websites were served by Google, even though the company’s standards forbid publishers from displaying its ads on pages with “spammy automatically generated content.” The practice threatens to waste enormous sums of ad money and hasten the emergence of a glitchy, spammy internet filled with AI-generated content.  

The rise of “made for advertising” websites, where low-paid humans churn out low-quality content to generate advertising income, is only accelerating with the spread of AI. The Association of National Advertisers in the US estimates that around US$13 billion is wasted globally on these sites each year.  

With programmatic media buying, ads are placed across numerous websites by algorithms that run intricate calculations to maximise the number of potential customers a given ad might reach. Because there is so little human oversight, major brands end up paying for placements on websites they may never have heard of. 
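To illustrate why unvetted placements slip through, here is a minimal sketch (all domain names and data structures are hypothetical, not any real exchange's API) of an automated buyer that ranks placements purely on predicted reach per dollar. Note that no quality or credibility field exists anywhere in the calculation, which is exactly how a cheap junk site can outrank a reputable publisher:

```python
# Hypothetical sketch: reach-maximising placement selection with no
# notion of publisher quality -- the algorithm only sees audience
# reach and price.

def rank_placements(placements, budget):
    """Pick placements that maximise expected matched-audience reach per dollar."""
    ranked = sorted(placements,
                    key=lambda p: p["expected_reach"] / p["cpm"],
                    reverse=True)
    chosen, spent = [], 0.0
    for p in ranked:
        if spent + p["cost"] <= budget:
            chosen.append(p["domain"])
            spent += p["cost"]
    return chosen

placements = [
    {"domain": "quality-news.example", "expected_reach": 900, "cpm": 8.0, "cost": 400},
    {"domain": "ai-junk-site.example", "expected_reach": 800, "cpm": 1.5, "cost": 60},
]
# The cheap junk site wins the top slot on reach-per-dollar alone.
print(rank_placements(placements, budget=500))
```

The point of the sketch is structural: as long as the objective function contains only reach and cost terms, low-quality inventory is not an edge case but the optimum.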

X marks the spot for disinformation 

Fake news, conspiracy theories, propaganda: they are everywhere these days. Just last week, X (formerly Twitter) was found to have the highest rate of disinformation posts of all large social media platforms. So much so that the EU has warned Elon Musk that his platform must comply with new laws on fake news and Russian propaganda.

But where do advertisers stand in all of this? Many have been found to be funding the rise in disinformation through their programmatic media buys, but what, if anything, is being done to prevent this? 

Harrison Boys, head of sustainability and investment standards, IPG Mediabrands APAC, says that maintaining a governance strategy within biddable media buys goes a long way to mitigating the risks of inadvertently funding disinformation.

“This involves domain and app vetting processes which use various signals to detect the quality of the inventory that we are using,” says Boys. “By identifying the quality of the inventory (brand safety risk, fraud risk, traffic sources, etc), we can go a long way to mitigating the risk of disinformation.” 


Melissa Hey, chief investment officer at GroupM Australia and New Zealand, says that programmatic advertising isn’t the problem in itself — it allows advertisers to automate the buying process and reach valuable audiences across publishers effectively and at scale. But it’s vitally important to employ the highest levels of brand safety standards within programmatic buying practices.  

“We set up our governance to meet all our clients’ buying priorities and campaign goals, ensuring we buy only the best and most relevant inventory on behalf of our clients,” says Hey. “For example, we only use Media Rating Council (MRC) accredited verification vendors, and apply inclusion and exclusion lists across all media buys, just to name a few.” 
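The inclusion- and exclusion-list governance Hey describes can be sketched as a pre-bid filter. This is a hypothetical illustration, not GroupM's actual tooling: an exclusion list always blocks a domain, while an inclusion list, if one is set, restricts bidding to approved publishers only:

```python
# Hypothetical pre-bid list filtering. The exclusion list takes
# precedence; an inclusion list (when supplied) acts as a whitelist.

def filter_inventory(domains, exclusion, inclusion=None):
    """Return the domains eligible for bidding after list checks."""
    eligible = []
    for d in domains:
        if d in exclusion:
            continue                      # blocked outright
        if inclusion is not None and d not in inclusion:
            continue                      # not on the approved list
        eligible.append(d)
    return eligible

domains = ["trusted-news.example", "made-for-ads.example", "blog.example"]
print(filter_inventory(domains,
                       exclusion={"made-for-ads.example"},
                       inclusion={"trusted-news.example", "blog.example"}))
# → ['trusted-news.example', 'blog.example']
```

The design choice worth noting is the precedence order: a domain on both lists is still blocked, so a stale inclusion list cannot quietly re-enable a site that vetting has since flagged.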

Ensuring ad dollars are invested effectively 

We have all witnessed the harm that misinformation can cause to society. The Ukrainian War, regional and international election cycles, and the pandemic have all demonstrated how crucial it is to support reputable, fact-based journalism. 

But sadly, some estimates say that ‘made for advertising’ websites could be siphoning US$17 billion a year away from quality journalism. To try and rectify this major issue, GroupM launched ‘Back to News’, an industry-first program to help brands support quality journalism by re-investing media budgets in credible news publishers. 

“We, as buyers and advertisers, have a role to play to ensure that advertising investment supports credible, fact-based journalism,” says Hey. “Advertising allows publishers to invest in journalists, which leads to responsible and reliable information that consumers can trust. This then attracts quality audiences and provides a safe space for advertisers.” 

For the ‘Back to News’ program, GroupM is working with Internews, the world’s largest media support non-profit, as part of a global partnership announced in February. 

“In Australia, we have a growing list of more than 200 diverse local, regional and metro publishers on board,” says Hey. “It provides an extra layer of vetting for journalistic integrity, credibility and brand safety as well as checks against disinformation and propaganda. This goes beyond any generic brand safety checks.”

But while initiatives like Back to News address the drop in ad investment in news publications by re-investing media budgets in credible news publishers, disinformation continues to grow outside that ecosystem.  

Could this growing wave of disinformation be the result of a trade-off marketers have made over the past decade in pursuit of the promise of ‘programmatic advertising’: more scale, more reach and lower costs, but a greater risk of funding disinformation through automated digital ad buys? 

“A decade ago, advertisers bought space on specific media outlets, but now they buy eyeballs of their target group, regardless of where their target group happens to be on the web,” says Clare Melford, co-founder, The Global Disinformation Index. “Their ideal customer can be reached both on a high-quality news site, but also [and more cheaply] when that same customer visits a lower quality and potentially disinforming news site.” 

IPG Mediabrands’ Harrison Boys believes the rise of ad-funded disinformation is largely due to the ease with which a website can monetise through online advertising. 

“The creators of these pages are seeking to influence and also create profits and, in some cases, there is very little in the way of vetting processes for monetisation,” says Boys. “To combat this, we must employ greater control over our inventory sources and essentially have our own monetisation standards. However, my concern for the industry is only agencies over a certain size, like ourselves, would typically have the capabilities to employ these kinds of defense tactics, which leaves the majority of the programmatic ecosystem open to more risk.” 

Will generative AI generate even more disinformation?

There’s no question that emerging technologies are making it easier and faster for sites that are ‘made for advertising’ to spring up. And in this space, AI can be a double-edged sword. 

“On the one hand, AI certainly brings the cost to create and proliferate disinformation across the web down to essentially zero, and without ad placement transparency, it is easy to monetise AI-generated disinformation that is highly engaging,” says Melford. “But on the other hand, AI is allowing us to more accurately detect massive amounts of disinformation in real-time across countless languages and domains. Properly harnessed, it can actually be a powerful tool to fight back against the rise of junk sites and the disinformation they spread.”

AI makes it possible to produce content at scale with minimal effort, and it has created a robust market for con artists and disinformation agents. By 2025, digital advertising is anticipated to be “second only to the drugs trade as a source of income for organised crime,” according to the World Federation of Advertisers.

What can be done? 

The funding of disinformation by the advertising industry continues largely unabated. Thanks to self-serve application processes and middleman companies that turn a blind eye, it has become far too easy for website owners to plug into the advertising system without any human inspection or even an after-the-fact audit. Do advertisers need to take back control of their own advertising? Are there too many middlemen?

“One solution is for advertisers to demand greater transparency and control from the companies that buy and place their online campaigns,” says Melford. “In the absence of that, there are free-market tools out there such as GDI’s Dynamic Exclusion List, among others, which can help advertisers ensure their brands are not funding content that goes against their brand values.”

Melford also suggests that, in the longer term, a powerful solution will involve technology companies that rank content or place ads algorithmically incorporating an independent, third-party quality signal for that content into their algorithms, and giving the “quality” signal greater weight relative to the “engagement” signal than it receives today. 

“If tech companies had to use third-party quality signals in their algorithms, we would see a prioritisation of quality content online, and a safer overall environment for advertisers and brands.” 
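Melford's reweighting idea can be made concrete with a toy scoring function. This is a hypothetical sketch of the principle, not any platform's actual ranking system: blend a third-party quality score with raw engagement, and give quality the larger weight:

```python
# Hypothetical ranking score blending engagement with an independent
# third-party quality signal, both normalised to [0, 1]. The weight
# w_quality > 0.5 encodes Melford's suggestion that quality should
# outweigh engagement.

def rank_score(engagement, quality, w_quality=0.7):
    """Weighted score favouring the quality signal over engagement."""
    return w_quality * quality + (1 - w_quality) * engagement

# A highly engaging but low-quality item vs moderately engaging,
# high-quality reporting.
clickbait = rank_score(engagement=0.95, quality=0.10)   # 0.355
reporting = rank_score(engagement=0.55, quality=0.90)   # 0.795
print(clickbait < reporting)  # quality-weighted ranking favours the reporting
```

With the weights reversed (w_quality below 0.5, roughly today's engagement-first ranking), the clickbait item would win, which is the behaviour the quote argues against.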

Boys points out that it’s also important to note what not to do.  

“I think there are some who would err on the side of purely stopping advertising on news content to combat this issue, which is wholeheartedly not recommended.  

“This is really a situation where brands can look to identify who their trusted sources of news are in the markets they operate, and combat disinformation by advertising on trusted information,” adds Boys. “Every step in the chain has a role to play to ensure that we aren’t funding disinformation as an industry. It can’t be solved by only one link putting processes in place.”

This story first appeared on Campaign Asia-Pacific.