By Dr Rob Nicholls, Manager, Regulatory and Advocacy at ADMA
The AI arms race is in full swing. Across most industries, businesses are racing to harness AI’s potential. For marketers, AI has quickly become the latest enabling tool – helping to craft campaigns, predict consumer behaviour and personalise experiences at unprecedented speed, accuracy and scale.
But as adoption skyrockets, so do the big questions: How do we use AI responsibly? Where do we draw the line between innovation and risk? And with regulation still taking shape, what rules should we be following right now?
While these questions have been bubbling away for a while, it is becoming more urgent to answer them as this technology becomes more deeply embedded within marketing practices. The ability of AI to automate targeted digital marketing is fuelling particular concerns around bias, fairness, transparency and accountability.
After all, what happens when an AI-powered campaign unintentionally excludes certain demographics? Or when an algorithm optimises for short-term conversions at the expense of brand reputation?
These aren’t just hypothetical questions. Around the world, regulators are scrambling to put controls (commonly called AI guardrails) in place, from the EU’s AI Act to US state-level legislation. In Australia, however, AI regulation remains a work in progress.

While it’s almost certain that we will see new regulatory measures emerge, businesses don’t need to sit idly by until that happens. The reality is that Australia’s existing legal frameworks – specifically those covering consumer protection, privacy, competition and anti-discrimination – already provide a solid foundation for responsible AI adoption.
The Australian Consumer Law, for instance, prohibits misleading or deceptive conduct – a prohibition that applies as much to AI-driven decision-making as it does to traditional business practices.
Similarly, the Privacy Act covers data collection, usage and storage, which directly affects how AI models are trained and deployed. The Australian Human Rights Commission also outlines the need for AI to comply with anti-discrimination laws, addressing fairness in automated decision-making.
Rather than rushing to replicate European or U.S. approaches, Australia must develop AI regulation that reflects the specific needs of our market and legal traditions. Unlike other countries, Australia’s regulatory system tends to take a principles-based approach – one that balances innovation with consumer protection.
To be clear, principles-based does not mean voluntary or unenforceable by regulators. Rather, principles-based rules state what businesses must achieve; they may or may not prescribe how those requirements are to be met, or specify the particular technologies or business activities to which the rules apply.
A rigid, one-size-fits-all framework may stifle AI’s potential benefits before they can be fully realised. Additionally, Australia’s unique market characteristics, such as its concentrated business landscape and smaller economy, require an approach that won’t unduly burden local businesses.
The key challenge is ensuring that any new regulation addresses actual risks rather than creating unnecessary complexity. AI has enormous potential to improve business efficiencies, customer experiences and economic productivity. However, an over-engineered regulatory framework could introduce burdens that discourage businesses from experimenting with AI-driven innovation.
There is also the risk that excessive regulation could lead to regulatory arbitrage, where companies move AI operations offshore to avoid compliance, ultimately reducing local competitiveness.
For now, Australian businesses should focus on compliance with existing laws, while adopting best practices for AI governance. This means maintaining transparency in AI-driven decisions, ensuring data privacy obligations are met and being proactive in assessing the ethical implications of AI use.
Companies should also establish clear accountability measures, ensuring properly trained human oversight where AI is used in critical automated decision-making processes. A strong internal AI governance framework, including risk assessments and bias detection mechanisms, can help mitigate potential legal and reputational risks.
That governance framework also needs to address the processes in which AI is used, the quality of the data used by AI systems and assurance controls to ensure that those processes are reliable and reliably followed. Australian and international standards addressing AI, data and information security also provide useful guidance on good practice for mitigating legal and reputational risks.
Regulators, too, have a role to play beyond creating new laws. Greater regulatory guidance, industry collaboration and the development of voluntary AI governance frameworks could provide businesses with the clarity they need to move forward confidently.
Regulators may recommend that specific standards or codes of practice are followed, or state that adoption of a particular standard or code will be taken into account in assessing whether an entity has complied with legal requirements.
It’s also crucial, given the unpredictable changes currently happening in the US, that AI regulation in Australia doesn’t copy and paste laws from other countries without consideration of local realities. We may not fully understand the ramifications of the recent US AI policy changes for some time.
The EU AI Act has been criticised for undue complexity and rigidity. The UK government is canvassing development of a middle course, between the highly prescriptive and detailed EU AI Act and the light-touch AI regulation mooted by the US White House. Australia may wish to find its own middle course on AI regulation, possibly aligned with the UK government’s approach.
Alignment with international best practices can be beneficial, but we are seeing diverse approaches to AI regulation and no clear international norms. Australian businesses need regulations that are fit for purpose and that enable them to develop innovative, AI-enabled business practices within clear and predictable guardrails.
Our regulatory approach should focus on supporting responsible AI use while ensuring that AI-driven Australian businesses remain competitive in a rapidly evolving global economy.
If regulation is to come, it should be carefully designed to enhance trust without stifling innovation. Australia has a unique opportunity to take the best from international approaches when tailoring a framework which suits our economy and regulatory environment.
Rather than waiting for the government to set the rules, businesses that take a proactive approach to AI governance today will be better positioned for whatever regulations come tomorrow.