Shadow AI exposes the cost of slow governance
It doesn’t start with a data breach or a compliance scandal. It starts with someone pasting sensitive code into ChatGPT to speed up a demo. Or dropping customer data into an open-source model to test a feature. No approvals. No oversight. Just everyday shortcuts with enterprise-sized consequences.
This is shadow AI. And whether organisations realise it or not, it’s already woven through their workflows, codebases and content stacks, operating without guardrails and outpacing governance and company policies at every turn.
It’s not fringe behaviour: when a global consumer electronics company suffered a data breach in 2023, it emerged that employees had been using a public SaaS-based AI application to write their code. Among the data they put into the AI platform was the source code for some of the company’s proprietary software, meaning other ChatGPT users could potentially surface it, exposing confidential company trade secrets.
Here in Australia, Telstra has issued internal warnings and guidelines around the use of AI. NSW Health has paused high-risk innovation investment. The public service is drafting new guidelines, and ASIC has been sharpening its focus on AI operational risk and resilience. Shadow AI may not be on their radar yet, but if organisations continue to let unmonitored tools run wild inside their walls, it soon will be.
So what’s the trigger? People want to move quickly, and IT can’t move fast enough. So teams go around it just like they did in the early days of shadow IT, when unsanctioned cloud tools flooded the workplace.
But there’s a key difference: cloud apps might have been invisible, but AI acts. It generates code, rewrites content, makes decisions and, increasingly, executes them. Without oversight, that’s not innovation; that’s a serious potential liability.
And yet, banning AI isn’t the fix; that would be the fastest way to lose visibility completely.
The real problem isn’t that people are using AI. It’s that they’re using it in the dark. Without knowing where models are running, what data they’re trained on, or how decisions are being made, organisations are gambling with blind spots, and regulators won’t accept ‘we didn’t know’ for very long after guidelines and guardrails are set.
Shadow AI is what happens when governance fails to keep pace with experimentation. The solution isn’t to slow down; it’s to match the speed of innovation with the structure to support it.
That starts with visibility: not just in code, but across teams, from developers building with open models to marketers using AI to draft content. If you can’t see where AI lives, what it touches, or who’s using it, you can’t govern it. You can’t secure it. And you certainly can’t defend against the liabilities it creates.
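As a concrete starting point, visibility can be as simple as watching outbound traffic for known AI endpoints. The sketch below assumes proxy logs in a simplified ‘timestamp user domain’ format; the domain list and log path are illustrative placeholders, not an authoritative inventory.

```python
# Minimal sketch: inventory shadow AI usage from outbound proxy logs.
# Assumes a simplified "timestamp user domain" log format; the domain
# list and log path below are illustrative, not exhaustive.
from collections import Counter

AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
    "huggingface.co",
}

def inventory_ai_traffic(log_path: str) -> Counter:
    """Count requests per (user, domain) for known AI endpoints."""
    hits: Counter = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue  # skip malformed lines
            _, user, domain = parts[:3]
            if domain in AI_DOMAINS:
                hits[(user, domain)] += 1
    return hits

if __name__ == "__main__":
    # Surface the heaviest users of unsanctioned AI endpoints first.
    for (user, domain), count in inventory_ai_traffic("proxy.log").most_common(10):
        print(f"{user:20} {domain:40} {count}")
```

Even a crude tally like this turns ‘we didn’t know’ into a ranked list of teams to talk to.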
From there, it’s about building guardrails that flex. Static rules won’t survive in a landscape where new models launch every week. What works is principle-based governance: clear standards around data handling, model validation, transparency and accountability that can scale with use, not fight against it.
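To make those standards enforceable rather than aspirational, data-handling principles can be expressed as code. What follows is a hypothetical pre-submission check that screens prompts for likely secrets before they leave the building; the patterns are illustrative examples, not a complete policy.

```python
# Hypothetical guardrail: screen outbound prompts for likely secrets
# before they reach any external model. The patterns are illustrative;
# real deployments would pair this with approved-tool allow-lists.
import re

BLOCK_PATTERNS = [
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),   # key material
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                       # AWS access key IDs
    re.compile(r"(?i)\b(password|api[_-]?key)\s*[:=]\s*\S+"),  # credential assignments
]

def check_prompt(prompt: str) -> list[str]:
    """Return the patterns a prompt trips, empty if it looks clean."""
    return [p.pattern for p in BLOCK_PATTERNS if p.search(prompt)]

violations = check_prompt("deploy with api_key=sk-live-abc123")
if violations:
    print("Prompt blocked by data-handling policy:", violations)
```

The principle (no credentials leave the network) stays fixed while the pattern list evolves with the tools, which is exactly the flexibility static rules lack.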
Just as important: stop leaving teams to fend for themselves. If sanctioned AI tools are clunky or locked down, people will keep turning to public models. That’s why I’m seeing a shift toward private AI environments: purpose-built, enterprise-grade platforms where people can work with AI securely, without compromising data, IP or compliance. These platforms don’t just reduce risk; they give organisations control over their models, the ability to customise algorithms, and confidence that decisions are being made on solid foundations, not scraped data and mystery maths.
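The mechanics of moving people onto sanctioned tooling can be surprisingly light-touch. As a sketch: many self-hosted inference servers (vLLM and Ollama, for example) expose an OpenAI-compatible API, so redirecting an existing workflow can be a matter of changing a base URL. The internal endpoint and model name below are hypothetical placeholders.

```python
# Sketch: point an OpenAI-compatible client at a private, self-hosted
# inference endpoint so prompts and data never leave the network.
# The base URL and model alias are hypothetical placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # hypothetical internal gateway
    api_key="internal-token",                        # issued by the platform team
)

response = client.chat.completions.create(
    model="company-approved-model",                  # hypothetical model alias
    messages=[{"role": "user", "content": "Summarise this incident report."}],
)
print(response.choices[0].message.content)
```

When the sanctioned path is a one-line change rather than a procurement saga, the incentive to reach for a public model largely evaporates.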
And there’s an open-source lesson in all of this: decentralised innovation can work, but it only works sustainably with shared responsibility and transparency. Because shadow AI isn’t a trend; it’s a symptom. Of unclear policies. Of clunky tooling and rigid development processes. Of a widening gap between innovation and oversight. The fix isn’t fear. It’s visibility, structure, and giving people tools that work as well as the ones they’d reach for on their own.