Agencies

Here’s how three people in the ad industry are using AI

An agency exec, a creative, and a data analytics expert weighed in.

2023 was the year everyone’s parents asked them if AI was going to put them out of a job.

We’re not quite there yet, but Marketing Brew thought it might be worthwhile to ask a few different folks in the advertising industry—an agency exec, a creative, and a data analytics expert—how AI has impacted their day-to-day operations, and where they think the tech is headed in the new year. Here’s what they told us.

Marketing Brew: How are you using AI today?

Dave Meeker, head of design and innovation at Dentsu: Dentsu has started leveraging generative AI across the business. In 2023, we prioritized compliance and established legally and ethically responsible frameworks. We have also established global technology infrastructure, enabling access to common AI platforms from Microsoft/OpenAI, Amazon, and Google. Additionally, we collaborate closely with partners like Nvidia and Meta to incorporate their offerings into client solutions and our own products. In terms of innovation, we have implemented guidelines and empowered local market leaders to explore, test, and build proofs of concept on various platforms and technologies—both commercial and open-source. The scope of AI activity at Dentsu is broad, focusing on internal efficiencies, innovative work delivery for clients, and the transformative impact on our business.

Jamie Carreiro, director of creative engineering, Wieden+Kennedy: I’ve been using AI for image creation, mainly as a tool for generating mockups and specific reference images. It fits into my workflow mostly during the idea phase, taking on the role that Photoshop comps previously held. I’ve also used it to a lesser extent as a stylization tool for finished content, and I’ve used the chatbot AIs as a way to automate simple data-based tasks—like listing a bunch of hex color codes I need, or giving me syntax for snippets of code.

Soren Larson, co-founder and CEO of Crosshatch: We’re using AI for almost everything. We use it to answer questions on code syntax, provide feedback on system designs, and write easy-to-describe functions. I’ve been increasingly using it to evaluate logic or remind me of references to things I can’t quite remember. I used AI to check to make sure references in this email were correct.

Marketing Brew: Where do you see AI going as it pertains to your current job?

Meeker: I see the core of generative AI and related technologies as enablers for us to scale and design experiences for our clients that we simply couldn’t do in the past. Our team members are gaining additional capabilities that make them faster, stronger, and more diverse, and we are able to move much more quickly across all that we do—from strategy to idea generation and concepting to prototyping, testing, and bringing things to market. We see generative AI as a great enabler. We like the Microsoft concept of “co-piloting,” and giving us scale where it simply didn’t exist before.

Carreiro: I think we’re past the point where AI can generate specific finished images that are suitable for publication, so I think the next place it will go is the creation of better, more powerful tools for communicating your intent to the AI. In my current job, I think as soon as I can rely on the AI to know what I want (and rely on my ability to tell it), it will replace large portions of image creation during the beginning and middle of projects. The illustration of ideas, exploration of forms, experimentation—all of this will become heavily AI-powered as soon as the control problem is solved. I think I’ll still turn to humans or finished artworks, but the AI will take over a lot of the pen-and-paper/whiteboard middle phases, and will do so while increasing the vividness of those intermediate drawings and illustrations.

Larson: It seems like we’re on our way to some general intelligence. I like Vinod Khosla’s definition. I also like a Wittgenstein-inspired definition: “Whereof one cannot speak, thereof one must be silent,” (e.g., say AI is generally intelligent when it admits it doesn’t know something). Given that, I spend my time thinking about things a general intelligence can’t readily touch, namely—apologies for being so abstract but—things of infinity and security. Reductively, I previously worked as a data scientist, and now I’ve gone upstream.

Get marketing news you'll actually want to read

Marketing Brew informs marketing pros of the latest on brand strategy, social media, and ad tech via our weekday newsletter, virtual events, marketing conferences, and digital guides.

Marketing Brew: Where does it fall short?

Meeker: Models aren’t perfect. There are simply things that they don’t do well at this stage. Granted, we see constant improvement, but in some cases, it’s two steps forward, and one step back. There can’t be a full reliance on generative AI. It’s not there (yet). For example, it is still suffering from bias, and the data we use to train these systems is a critical element; data is hard. In terms of quality, humans win. In terms of scale and speed, AI is taking the lead.

Carreiro: AI, in its current form, lacks good iterative qualities. In image gen, for example, you can’t really make an image and then iterate on that image without creating a whole new image. It might look similar in style or content, but it won’t be literally the same image with your suggested changes. This prevents you from collaborating with the AI the way you would with a human designer or illustrator. The AI doesn’t remember the image it just created. In its current form, it can’t even really “see” images yet. But I think this is going to get solved really soon. Multimodal AI systems are being built that will add new kinds of awareness, like being able to see images while listening to spoken language at the same time, and combining data from both.

Larson: I don’t think it’s very interesting to speak of the empirical idiosyncrasies of where AI doesn’t do particularly well. They feel like they’re bound to be fleeting observations. That said, I’ve come to believe there may exist true limits on AI capability given not by technical constraints but rather by philosophical and linguistic ones corresponding to the constraints of AI’s interface with the world—language. Put casually, AI cannot understand things it cannot consistently refer to. Data lives at a specific address in computer memory, but rigid designators (following Saul Kripke), like “Aristotle” or “Ryan Barwick,” are constructs whose reference or meaning is established at “birth” and maintained through use in language, irrespective of any descriptive properties associated with the name. AI does not observe all such references and so cannot understand our world, the thinking could go.

Marketing Brew: What’s an AI application that you didn’t expect?

Meeker: There is so much happening in automation around generative AI. The speed at which we are moving is also unexpected. It seems as if every week we see advancements that are sometimes far beyond expectations. There is this synergistic effect in play, and that is what we didn’t expect to be as powerful as it is. As AI models improve, as people understand more, and as our training data gets better, everything seems to be leading to a place where the experiences we help broker between our clients and their customers happen in real time. That seems to be happening “now,” years ahead of where we thought we’d be if you’d had this conversation 16 months ago.

Carreiro: One of my favorite applications of AI that took me a little by surprise was the language localization work done for the 2022 film The Fall. In order to create the foreign-language dubs for that movie, they used AI not only to do the voice work, but also to make it match the sound of the original actress. Then they used their AI tools to replace her mouth in all speaking shots to match the mouth movements/shapes of whatever language they were translating into. The result is that on the Blu-ray, you can pick a language and see the same performers speaking their lines with perfect lip sync in any of a long list of languages. They basically made deepfakes of people but mapped their own mouths onto their own mouths, just changing the words they are saying. Totally changes the experience of watching a movie from another country.

Larson: Nothing. We expect a general intelligence.

