The brand-safety industry is facing unprecedented challenges. One industry veteran says he can fix it—with the help of AI, naturally.
Brian O’Kelley, CEO and co-founder of the ad-tech platform Scope3 and a co-founder of AppNexus, said Thursday that Scope3 will expand into the ad-verification and brand-safety business through the release of a tool called Brand Standards.
The tool, an AI model built to a brand’s specifications, crawls publisher pages to pull text and images and evaluate brand suitability. It aims to offer a more nuanced approach than blunter tactics like keyword or category blocking, or avoiding news altogether.
“It’s about precision,” O’Kelley said. “I want to make sure I block only the things that I really don’t want my brand next to, because everything I block is costing me eyeballs, and, from the publisher’s perspective, costing them money.”
Advertisers can prompt the model with guidelines, like “sensitive to natural disasters,” and the tech will scan a page to determine whether it’s suitable.
“Every brand is going to have a different understanding of that page, because every brand actually has different needs from a suitability perspective,” he said.
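Scope3 hasn’t published how Brand Standards works under the hood, but the workflow O’Kelley describes, per-brand guidelines applied by an AI model to a page’s text, maps onto a familiar LLM-classification pattern. Below is a minimal sketch of that pattern; the prompt wording, the `classify_suitability` helper, and the stub model are illustrative assumptions, not Scope3’s actual system.

```python
# Illustrative sketch of prompt-based brand-suitability checks.
# This is NOT Scope3's Brand Standards API; the prompt, guideline,
# and model call are assumptions for demonstration only.
from dataclasses import dataclass


@dataclass
class SuitabilityResult:
    suitable: bool
    reason: str


PROMPT_TEMPLATE = """You review web pages for advertisers.
Brand guideline: {guideline}
Page text: {page_text}
Answer SUITABLE or UNSUITABLE on the first line, then one sentence explaining why."""


def classify_suitability(guideline: str, page_text: str, llm) -> SuitabilityResult:
    """Ask a language model whether a page fits one brand's guideline.

    `llm` is any callable that takes a prompt string and returns the
    model's text response (e.g., a thin wrapper around a hosted model).
    """
    reply = llm(PROMPT_TEMPLATE.format(guideline=guideline, page_text=page_text))
    verdict, _, reason = reply.partition("\n")
    return SuitabilityResult(
        suitable=verdict.strip().upper().startswith("SUITABLE"),
        reason=reason.strip(),
    )


if __name__ == "__main__":
    # Stub model so the sketch runs offline; swap in a real model client.
    fake_llm = lambda prompt: "UNSUITABLE\nThe page covers an ongoing hurricane."
    result = classify_suitability(
        guideline="sensitive to natural disasters",
        page_text="Hurricane damage estimates climb as floodwaters rise...",
        llm=fake_llm,
    )
    print(result)
```

Because the guideline is a per-brand input rather than a fixed keyword list, two brands can get different verdicts on the same page, which is the nuance O’Kelley is pitching.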
The tool, which grew out of Scope3’s acquisition of Adloox, a verification company, is already integrated within The Trade Desk and Google, and will be sold by the publisher Dotdash Meredith, O’Kelley said.
Brand-safety breakdown
Scope3 was founded in 2022 with the original goal of measuring and limiting the carbon emissions created by digital advertising. Its venture into brand safety, which follows a $25 million funding round late last year, arrives as the category’s leaders, DoubleVerify and Integral Ad Science (IAS), face increasing pressure from advertisers. Multiple recent research reports from Adalytics have questioned the effectiveness of those firms’ technology, and the findings have drawn the attention of Congress and some federal agencies, including the Department of Justice.
Last month, Adalytics published a report finding that advertisers including Domino’s, PepsiCo, and Amazon ran ads next to explicit content on an image-sharing website that had previously been found to host child sexual abuse material (CSAM); some of those ads, Adalytics found, appeared to include code from DoubleVerify and Integral Ad Science.
DoubleVerify said in a statement at the time that it had “strict policies and processes in place to ensure illegal content is handled in accordance with the law,” and that these ads represented 0.000047% of the company’s total; IAS said in a statement that it “has zero tolerance for any illegal activity, and we strongly condemn any conduct related to child sexual abuse material.”
Meanwhile, tech platforms like Meta and X have loosened their own content moderation policies, a sign that brand safety may be on the back burner. Advertisers appear to feel similarly: one Forrester survey found that 59% of marketing executives don’t think consumers care about brand safety “as much as they used to.”
Brand safety has also become a political cudgel, as right-wing activists have claimed the practice censors conservative media. In August, the House Judiciary Committee published a report that accused organizations like the Global Alliance for Responsible Media of participating in “boycotts and other coordinated action to demonetize platforms,” like X. Days later, X sued the group, which soon disbanded.
O’Kelley said he views the state of the industry as a business opportunity.
“The fact that bias in brand safety is becoming a political issue in the US, this has become something of a grenade,” O’Kelley said. “This is a moment where you can’t afford to use a category-based system…because you could literally be called in front of Congress to be asked, ‘Why are you making these decisions?’ If you can’t really explain how your system works…that’s a real risk.”
Of course, AI models can be biased, too, O’Kelley acknowledged.