We keep being told AI is dangerous, that we should be careful and invest in loads of ramp-up projects and tools to help us “adopt” AI. But what does reality look like?

It took ChatGPT just five days to reach one million users back in 2022; now it has over a billion.
The term “AI readiness” therefore seems a bit redundant, doesn’t it? How can we become more ready for something millions of us have been using without guidance, with relative success, for years?
Decoding Business Speak
What “AI readiness” really means is: how can we use this modern tech on our ageing business systems, saddled as they are with technical debt, to make more money?
Nobody really knows. Much of it is hysteria: many confuse AI with automation, most are really asking for automation, and some want a mixture of both. But the supposed path to simply enabling AI isn’t any different from what those of us in security have been trying to tell people for years.
Security fundamentals are the same for everyone; it’s just that our Christmas tree now has a Copilot-shaped star. Nothing on that top spot has ever been as exciting as GenAI. It used to be Purview, which is something we only do when forced (let’s be honest). You could argue items like Application Control (WDAC) also belong up there in terms of business maturity.

What’s the actual risk?
I’d argue the bigger risk is using GenAI that’s already attached to your business data.
That is a pretty damning thing to say out loud, but we’re putting all this security effort in to prevent people surfacing, or using, documents they shouldn’t have access to when prompting for new outcomes or performing work tasks.
How can you lower that risk?
I’d argue it is to use GenAI that’s not attached to your business data while you sort out your fundamentals. Yeah, I’m talking about the free Copilot Chat, Claude, DeepSeek, Qwen, Gemini, ChatGPT, etc., or paid versions if you need more usage.
I’m scared they’ll train on my sensitive data
I don’t think we’re really in a position to debate the ethics of uploading content to AI when practically all models were trained on unconsented scraping of the entire internet. The problem this created was so vast that it once again presented scenarios our feudal-age legal systems were never designed to tackle. Add to that the fact that the companies committing these crimes against creativity are also the wealthiest, backed by the other wealthiest investors and people, and our laws have never been successful against that type.

The Risk
The risk isn’t zero. But it’s almost zero, isn’t it?
If I exercised some diligence, I could go ahead and redact client names or intellectual property. The chances of the content I upload being read or used by anyone who presents a material risk back to me or my company are so low that I’d bet it’s less of a risk than letting a standard user have complete access to an M365 Copilot that’s connected to a complete mess of SharePoint permissions.
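If you want that diligence to be repeatable, a tiny helper can do the first pass for you. Here’s a minimal sketch, assuming you maintain your own list of client names and codenames (the terms below are made up). Regexes won’t catch everything, so this supports judgement rather than replacing it:

```python
import re

# Placeholder examples: maintain your own list of client names / codenames.
SENSITIVE_TERMS = ["Contoso", "Project Falcon"]

def redact(text: str) -> str:
    """Blank out known sensitive terms (case-insensitively) before pasting into a chat tool."""
    for term in SENSITIVE_TERMS:
        text = re.sub(re.escape(term), "[REDACTED]", text, flags=re.IGNORECASE)
    return text

print(redact("Contoso want Project Falcon delivered by Q3."))
# [REDACTED] want [REDACTED] delivered by Q3.
```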
The approach
Some of the most useful prompting comes from what M365 Copilot knows about you and your business data. We should keep up the security and governance effort in our own and customer tenants. Hint: SharePoint will be the hardest bit.
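To put a rough shape on that SharePoint work, here’s a minimal sketch using Microsoft Graph to flag org-wide or anonymous sharing links in one document library. The site ID and token handling are placeholders, it only walks the root folder of the default drive, and what counts as “too broad” is your call. Treat it as a starting point, not a scanner:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<app-token-with-Sites.Read.All>"  # placeholder: acquire via MSAL in real use
HEADERS = {"Authorization": f"Bearer {TOKEN}"}
SITE_ID = "<site-id>"  # placeholder: find it via GET /sites?search=<name>

def flag_broad_sharing(site_id: str) -> None:
    """List root items in the site's default drive and print overly broad sharing links."""
    items = requests.get(
        f"{GRAPH}/sites/{site_id}/drive/root/children", headers=HEADERS
    ).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/sites/{site_id}/drive/items/{item['id']}/permissions",
            headers=HEADERS,
        ).json().get("value", [])
        for perm in perms:
            link = perm.get("link", {})
            # "organization" scope = anyone in the tenant; "anonymous" = anyone with the link
            if link.get("scope") in ("organization", "anonymous"):
                print(f"{item['name']}: {link.get('scope')} link ({link.get('type')})")

flag_broad_sharing(SITE_ID)
```

A real audit would page through results and recurse into folders; this just shows the shape of the problem you’re looking for.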
While we take that journey, we should not deny end users the use of other tools, but provide better education around them and gather insights about internal usage with tools like Microsoft Defender for Cloud Apps.
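For the usage insights, something like the sketch below could pull recent activity for one discovered app via the Defender for Cloud Apps API. The tenant URL, token, and app ID are placeholders, and the filter shape is my assumption based on the legacy-style API, so check it against the current docs before relying on it:

```python
import requests

BASE = "https://<tenant>.portal.cloudappsecurity.com/api/v1"  # placeholder tenant URL
HEADERS = {"Authorization": "Token <api-token>"}  # placeholder API token
APP_ID = 0  # placeholder: numeric ID of the discovered app you want to watch

# Assumed filter shape: restrict activities to one app, newest first.
resp = requests.post(
    f"{BASE}/activities/",
    headers=HEADERS,
    json={"filters": {"service": {"eq": [APP_ID]}}, "limit": 50},
)
for activity in resp.json().get("data", []):
    print(activity.get("user", {}).get("userName"), activity.get("description"))
```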
We all use them anyway, and I’d prefer not to pretend otherwise.