What 1,000+ marketers told ADMA about AI adoption – and what it means for 2026

AI is no longer knocking at the door of the marketing profession. It’s inside. Embedded. And, for many, open on the desktop right now.

But governance and capability haven’t kept pace with its rapid uptake. And that imbalance leaves organisations exposed.

According to ADMA’s State of AI in Marketing survey, 77% of marketers use AI tools at least weekly, but only 13% have received formal training.

We spoke to our very own Dr Sage Kelly, ADMA’s Regulatory and Policy expert, to unpack what this means for the profession – and why clear guardrails are essential moving forward.

High adoption, low assurance

More than half of Australian marketers are using AI every day, yet only a small minority have received formal training.

That’s one of the key findings from ADMA’s State of AI in Marketing survey, which captured insights from 1,092 marketing professionals across roles, seniority levels and sectors.

‘We ran this survey to understand how marketers are using AI now: which tools they use, what level they’re using them at and what kind of governance structures exist at their companies,’ says Dr Sage Kelly, Regulatory and Policy Manager at ADMA.

The results show high uptake; professionals across creative, content, analytics and planning roles are embedding AI into everyday workflows.

Of those surveyed, 3 in 4 said they’ve used ChatGPT. And 52% said they use AI every day.

But using AI regularly isn’t the same as knowing how to use it well.

‘If over half our survey respondents are using AI every day, but only 13% have been trained in it, that points to a significant capability gap,’ Sage says. ‘One that could have material implications for businesses.’

What the AI capability gap means in practice

When AI literacy varies widely across a workforce, productivity and output suffer.

Some employees confidently embrace new tools – exploring their full functionality, quickly mastering the features and embedding them into daily workflows. Others hesitate, misuse the tools or avoid them altogether.

That inconsistency can slow delivery and weaken outcomes.

‘There’s a wide disparity in how well people can use the different programs,’ Sage explains. ‘ChatGPT is quite intuitive. Lots of people feel comfortable using it. But other AI programs – particularly those built for specific functions – are more complex.

‘There’s a lack of understanding of how AI systems work. And if you don’t understand how a tool functions, you’re not going to get good results.’

And training isn’t filling the gap.

‘Often, there’s no adequate training because senior leaders themselves don’t know how to use these tools. That means there’s no consistent baseline for capability across an organisation,’ Sage continues.

‘Leadership needs to make a deliberate shift to prioritise AI education and consistent practice.’

In environments where AI use is encouraged, employees may also feel pressure to overstate their capabilities.

‘Fear of job displacement is significant, especially in Australia, where trust in AI is relatively low,’ explains Sage. ‘At the same time, organisations increasingly expect teams to be AI-proficient, even when they provide no guidance.

‘As a result, some people claim confidence out of self-preservation, even when their capability isn’t there. And that’s where vulnerability and risk start to build.’

The broader risks of unchecked AI use

The consequences extend beyond gaps in internal capability.

‘There are all sorts of legal, ethical and regulatory risks that come with unregulated AI use,’ says Sage. ‘Everything from copyright infringement and loss of creativity to bias, privacy breaches and actual harm to human beings.

‘Organisations need to consider all these factors when deciding whether to use an AI tool – and put structures in place to mitigate them.’

These challenges demand a more deliberate approach to AI governance. One that the industry must now rise to meet.

‘Australia has announced that there won’t be AI-specific regulation due to the risk of legislative duplication,’ Sage explains.

In the absence of dedicated legislation, the onus is on leadership to ensure their teams are AI-literate – and equipped to use AI responsibly.

The question is what that looks like in practice.

How to put effective guardrails in place

With no AI-specific governance framework in Australia, organisations need to proactively adopt voluntary standards.

That begins by embedding accountability across the organisation.

‘To establish consistent and responsible AI capability, you need to work from the ground up,’ Sage says. ‘We recommend embedding accountability in role descriptions, ensuring there's a designated team member responsible for overseeing AI use.

‘Telling people to go and educate themselves isn’t enough because, realistically, not everyone will. You need someone to be held accountable – whose role is to ensure AI guidelines are embedded into day-to-day work.’

Next comes legal literacy.

‘We need to educate people on how existing legislation applies to AI use,’ says Sage. ‘That includes everything from copyright law to Australian Privacy Principles to international legislation, if you’re operating overseas.

‘These laws haven’t changed; in fact, they’ve been in place for decades. But marketers need to understand how they now apply in an AI world.’

That legal context becomes even more important as regulatory scrutiny continues to rise.

Finally, organisations must move beyond informal experimentation and implement structured governance frameworks. That means clear, industry-aligned guidance on proficient, responsible AI use. And shared standards that set expectations and create consistency across teams.

‘AI governance is no longer optional,’ says Sage. ‘Leaders need to operationalise responsible AI practices so that governance and compliance sit alongside creativity and strategy in 2026.’

A clear appetite for guidance

The encouraging signal from ADMA’s survey? Marketers are already asking for clarity.

Respondents told us they want practical guidance on how to use AI. They want help navigating hallucinations, bias and brand risk. And they want clearer direction on regulatory and ethical obligations.

The appetite for industry-aligned guidance is there. It’s now up to marketing leaders to meet it.

ADMA is committed to supporting the industry through AI transformation. For further guidance on AI adoption and governance, you can refer to ADMA’s AI Toolkit, exclusively available for members.