The single biggest reason most marketing AI projects deliver a fraction of the value they could is that the team automated the wrong layer. They automated the interaction. The leverage was in the workflow. Same technology, same model, completely different return profile.
The distinction isn't complicated once you've seen it, but it's hard to spot from inside a project, because the obvious place to put AI is where humans currently interact with systems: chat interfaces, search bars, content drafting tools. That's the layer where the AI is most visible. It's also, almost always, the layer where the leverage is smallest.
The interaction-vs-workflow distinction
Imagine a marketing operations team that produces 40 campaign briefs a month. The current process: a senior marketer fills in a brief template, hands it to the design team, gets feedback, iterates, sends it to the agency, the agency produces creative, the team reviews, sign-off happens.
The interaction-automation version of "AI for this team": a chatbot that helps the senior marketer write the brief faster. The marketer types prompts, the chatbot suggests language, sections come together more quickly, and the brief is finished in 30 minutes instead of 60.
The workflow-automation version: an agent that takes a structured input (campaign goals, audience, channel, budget) and produces a complete brief, formatted to the team's template, with audience research embedded, channel-specific recommendations, prior-campaign learnings referenced, and a list of suggested creative directions. The senior marketer reviews the agent's output, makes edits, signs off. The brief is finished in 10 minutes.
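To make the structural difference concrete, the workflow-level version puts a structured input in front of the brief process. A minimal sketch; the field names are illustrative, and the bracketed placeholders stand in for the model calls and retrieval a real system would do:

```python
from dataclasses import dataclass

@dataclass
class CampaignInput:
    # Structured fields the agent consumes; names are illustrative.
    goals: str
    audience: str
    channel: str
    budget: float

def generate_brief(inp: CampaignInput) -> str:
    """Assemble a complete brief, formatted to the team's template.

    In a real system, the placeholder sections would be filled by model
    calls plus retrieval over prior-campaign data.
    """
    sections = {
        "Objectives": inp.goals,
        "Audience": inp.audience,
        "Channel plan": f"{inp.channel} (budget: {inp.budget:,.0f})",
        "Prior-campaign learnings": "[retrieved from campaign archive]",
        "Suggested creative directions": "[model-generated options]",
    }
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections.items())

brief = generate_brief(CampaignInput(
    goals="Grow trial signups 20%",
    audience="SMB finance leads",
    channel="Paid social",
    budget=15000,
))
```

The point of the sketch is the shape, not the contents: the human's role moves from filling in the template to reviewing a complete draft of it.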
Both versions use the same underlying model. Both look superficially like "AI in the brief workflow". The leverage is dramatically different.
In the interaction version, the senior marketer is still the system of record. They're still doing the work. The AI is making them slightly more efficient at the work. The savings: maybe 30 minutes per brief, 20 hours a month across the team.
In the workflow version, the agent is doing the work. The senior marketer is the editor of the agent's output. The savings: maybe 50 minutes per brief, 33 hours a month, plus a quality floor higher than the team's inconsistent human output.
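The back-of-envelope arithmetic behind those two numbers, at 40 briefs a month:

```python
BRIEFS_PER_MONTH = 40

def monthly_hours_saved(minutes_saved_per_brief: float) -> float:
    # Total minutes saved across the month, converted to hours.
    return BRIEFS_PER_MONTH * minutes_saved_per_brief / 60

interaction = monthly_hours_saved(30)  # 60 min -> 30 min per brief
workflow = monthly_hours_saved(50)     # 60 min -> 10 min per brief

print(f"Interaction automation: {interaction:.0f} hours/month")  # 20 hours
print(f"Workflow automation: {workflow:.0f} hours/month")        # 33 hours
```

The hours are only part of the gap; the arithmetic doesn't capture the quality-floor effect or the compressed handover cycle described below.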
Multiply this distinction across an organisation's processes and the gap between the two approaches becomes substantial.
Why teams default to interaction automation
Three reasons.
Interaction automation is the default mental model. When most people think about AI in their workplace, they picture a chat interface. ChatGPT, Copilot, the in-app assistants. Those are the visible reference points. The first instinct is to recreate that experience inside their own context.
Interaction automation is easier to ship. It requires nothing more than a good prompt, a reasonable interface, and an existing model. Workflow automation requires process design, data integration, agent orchestration, error handling, testing — significantly more engineering work.
Interaction automation is easier to demo. "Here's a chatbot that helps you write briefs faster" is a five-minute demo. "Here's an agent that produces complete briefs from structured inputs" requires more setup, more nuance, more explanation of how it fits into the broader workflow. Stakeholders respond to the visible thing, the chat interface, which gives interaction automation a political advantage in most organisations.
The result: most teams build the chatbot. They feel productive. The output is real. The leverage is small.
How to identify which layer to automate
A few questions I use, working with clients, to figure out where the workflow automation opportunity actually is.
What's the most senior person's most repetitive task? If a senior marketer spends 30% of their week on briefs, the brief workflow is a leverage candidate. If a junior person spends 30% of their week on briefs, the math is much weaker. Workflow automation captures the most value when it offsets the most expensive time.
What workflow has a structured input and a structured output? Brief generation. Campaign reporting. Audit compilation. Pitch deck production. Content briefing. These all have the property that you could, in principle, describe the inputs as a structured form and the outputs as a structured artefact. Anything with that property is a workflow-automation candidate. Anything without — pure ideation, pure judgment, pure relationship work — isn't.
Where does the team currently lose time to handovers? Workflows with multiple human handovers — strategist drafts brief, designer reviews, marketer signs off, agency receives — accumulate latency at each handover. An agent that produces a more-complete first draft compresses the cycle. The bigger the team and the more handovers, the larger the leverage.
What's the worst-quality version the team currently ships? Workflow automation tends to raise the floor. The work that previously got done badly when a junior was overloaded gets done better, consistently. If a team's outputs vary in quality based on who happens to be producing them, the agent version provides quality consistency that the interaction version doesn't.
These questions don't always point to the same workflow. They do, in combination, point to the workflows where the actual leverage is.
The diagnostic that catches teams early
A more pointed diagnostic, useful in early-stage AI strategy conversations: once the team has decided where to deploy AI, ask them what changes operationally if it works. If the answer is "people will be 20% more efficient", the team has chosen interaction automation. If the answer is "we won't need this role" or "this workflow won't be in our org chart anymore", the team has chosen workflow automation.
Most projects that start as the first kind never become the second. The capability gets built, the chatbot ships, the team uses it, the operational shape stays the same. The 20% efficiency gain is real and minor. Two years later, the team is still doing the same workflows in the same shape, with the same headcount.
The teams that capture the bigger leverage have made the operational decision before they've started building. They've decided the workflow will be different, and they're using AI to enable the difference. The technology is the implementation detail. The operational redesign is the investment that captures the value.
Where to start if you want the bigger leverage
If you're running a marketing function and you've been doing interaction-automation projects so far, the move I'd suggest is to identify one workflow — just one — that meets the criteria above and rebuild it agentically.
Don't try to do five at once. Don't try to retrofit every existing AI project into workflow automation. Pick one workflow. Map it. Find the verification points. Build the agent version end-to-end. Run it in shadow mode for two weeks. Compare the output to the human version.
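The shadow-mode step can be as simple as logging the agent's output alongside the human-produced version for the same inputs, then reviewing the pairs at the end of the run. A minimal sketch; the JSON Lines storage and field names are assumptions, not a prescribed format:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("shadow_log.jsonl")

def record_shadow_pair(workflow: str, inputs: dict,
                       human_output: str, agent_output: str) -> None:
    """Append one human/agent output pair for later side-by-side review.

    Nothing the agent produces is shipped during shadow mode; the human
    version remains the system of record throughout.
    """
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,
        "inputs": inputs,
        "human_output": human_output,
        "agent_output": agent_output,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

record_shadow_pair(
    "campaign_brief",
    {"channel": "paid_social", "budget": 15000},
    human_output="[brief as written by the senior marketer]",
    agent_output="[brief as produced by the agent]",
)
```

Two weeks of these pairs is usually enough to judge whether the agent's first drafts clear the quality bar the team actually ships against.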
If it works — and if you've picked the right workflow, it will work — the team will have a reference architecture that can be replicated across other workflows. More importantly, leadership will have a concrete demonstration that workflow-level automation is possible in their context, which is the political prerequisite for the bigger investment that the leverage actually requires.
The 90% of marketing AI projects that miss this leverage point aren't missing it because the teams are unintelligent. They're missing it because the visible layer is interaction, the path of least resistance is interaction automation, and the political and operational work required to do workflow automation properly is more substantial than chatbot deployment.
The 10% that don't miss it are getting returns that the 90% can't match. The gap is going to widen. The right time to start was 18 months ago. The next-best time is now.