Nearly a Third of Your Employees Are Using AI at Work

Published on May 4, 2026

We pulled DNS data from 2,000 user devices across our client base to see what AI use actually looks like inside Calgary businesses right now. The headline number is that 27.9% of users had hit at least one AI platform during the 30-day sample period. The number behind that one is more interesting: only 15% of those users were inside an organization with any kind of written policy governing AI use.

In other words, AI is already being used in the majority of the organizations we looked at, and the overwhelming majority of that use is happening with no rules, no guardrails, and — almost universally — on personal accounts that the organization can’t see, audit, or control.

If you’re reading this and you’re not sure what your staff are doing with AI, you’re not alone. But you probably should find out.

What we looked at, and what we found

DNS filtering sits between every device on a network and the public internet. Every time someone tries to reach a website — chatgpt.com, gemini.google.com, anything — that request passes through DNS. We don’t see what’s typed into those tools. We see which tools are being reached, by whom, and how often. For this review, we sampled 2,000 devices across our client environments and looked specifically at traffic to the major AI platforms.
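
If you want a rough version of the same measurement on your own network, the analysis is simple enough to sketch. The following assumes a hypothetical CSV export of DNS logs with device and domain columns; the domain list is illustrative rather than exhaustive, since each platform answers on more hostnames than these.

```python
import csv
from collections import Counter

# Illustrative hostname map; real platforms resolve many more domains than these.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "grok.com": "Grok",
    "claude.ai": "Claude",
    "suno.com": "Suno",
}

devices_seen = set()          # every device that appears in the log
ai_devices = set()            # devices that reached at least one AI platform
hits_by_platform = Counter()  # raw request counts per platform

# Hypothetical log format: one row per DNS request, 'device' and 'domain' columns.
with open("dns_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        devices_seen.add(row["device"])
        platform = AI_DOMAINS.get(row["domain"].lower().strip("."))
        if platform:
            ai_devices.add(row["device"])
            hits_by_platform[platform] += 1

print(f"Adoption: {len(ai_devices) / len(devices_seen):.1%} of devices")
total = sum(hits_by_platform.values())
for platform, hits in hits_by_platform.most_common():
    print(f"  {platform}: {hits / total:.1%} of AI traffic")
```

The same counting is where the adoption figure above and the platform breakdown below come from.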

The breakdown is roughly what you’d expect, with one exception. ChatGPT accounted for 92.6% of all AI traffic. Gemini came in second at 2.9%, followed by Grok at 2.0%, Claude at 1.6%, and Suno at 0.8%.

ChatGPT isn’t winning the workplace. ChatGPT has won the workplace. Whatever’s happening in the broader conversation about AI competition and capability, on actual employee devices, the contest is over by roughly an order of magnitude. Gemini, Grok, and Claude combined accounted for less than 7% of AI traffic. Suno, for the curious, is an AI music generator — somebody is working on a side project.

The 15% problem

The adoption number is interesting. The policy number is the one that should bother you.

Of the users we saw reaching AI platforms, only 15% were inside organizations that had written down any kind of policy on AI use. The other 85% were operating in a vacuum. No guidance about which tools were approved. No rules about what data could be pasted into them. No expectation, even informally, that employees should think about it before sending company information to a third-party service.

This isn’t because anyone is being reckless. It’s because AI moved into the workplace faster than most organizations could think about it. A few years ago, ChatGPT was something a handful of people had heard of. Now it’s quietly embedded in how staff draft emails, summarize documents, prep for meetings, write code, and rough out client proposals. Most leaders we talk to didn’t decide to adopt AI. Their employees did, individually, on their own initiative, because it makes their jobs easier.

That’s not necessarily a bad thing. The problem is that nobody told those employees what’s safe to put into it.

Where the data actually goes

When an employee opens a personal ChatGPT account and pastes in a client contract, a list of vendor pricing, a draft of a sensitive email, or a spreadsheet of customer information to ask “can you summarize this,” they are sending that data to OpenAI’s servers under the terms of a free consumer account.

On a free or even a Plus account, those inputs can be used to train future versions of the model unless the user digs into the settings and turns that off, which almost nobody does. The user agreed to it when they signed up — or more accurately, they clicked through the terms without reading them, the same way everyone does. The result is that pieces of your business — quotes, contracts, internal communications, customer lists — get absorbed into a model that millions of other people will use, without anyone at your organization ever deciding that should happen.

This is the part that tends to land hard when we explain it to executive teams. Most of them assume their data isn’t going anywhere meaningful. It is. And it isn’t hypothetical. We’ve shown several clients exactly which AI platforms their staff are using and roughly how often, and the conversation that follows is usually some version of: nobody told us this was happening.

And then the employee quits

There’s a second problem with personal AI accounts that gets less attention. It shows up when an employee leaves.

When someone uses ChatGPT or Claude or Gemini on a personal account to do their job, they aren’t just sending data out. They’re building a profile. These platforms have memory now — they remember context across conversations, hold onto preferences, retain project history, and adapt to the person using them. The longer an employee uses one of these tools for work, the more the tool knows about how that work is done.

That includes the things you’d expect a competitor to want: your sales process, the way you price, the objections you hear most often and how your team handles them. It includes the product roadmap people have been kicking around. It includes the specific clients the employee has been managing and the internal language you use to describe them. No single piece of that is especially sensitive on its own. Taken together, it’s the actual operating knowledge of your business, captured in an AI account the business doesn’t own.

When that employee leaves, the account goes with them. They don’t have to copy anything or forward themselves a single email. They sign into the same ChatGPT account at their next job, on a new device, and everything they’ve built with that AI is still there. The model still remembers. So does the chat history. Two months later, they’re using your sales playbook, in your phrasing, to close deals against you.

The fix here is the same fix we keep coming back to: the account has to belong to the organization. On a business or enterprise tier, you control who has access and what’s retained. When an employee leaves, you offboard their AI account the same way you’d offboard them from email or remote access, and the work history stays with the company. Their personal AI account, if they have one, is theirs. It just doesn’t have your business inside it.
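
What that offboarding step looks like varies by platform, but the shape is the same everywhere: find the seat, remove it, keep a record. Here is a minimal sketch against OpenAI’s Admin API; the endpoint paths and response fields are our assumptions from the published docs and worth verifying before you rely on them, and pagination is skipped for brevity.

```python
import os
import requests

# Admin-scoped key, distinct from a normal API key; treat it like any other credential.
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_ADMIN_KEY']}"}
BASE = "https://api.openai.com/v1/organization"

def offboard(email: str) -> None:
    """Remove a departing employee's seat from the organization workspace."""
    # List current members (assumed endpoint; ignores pagination for brevity).
    users = requests.get(f"{BASE}/users", headers=HEADERS, timeout=30).json()
    matches = [u for u in users.get("data", []) if u.get("email") == email]
    if not matches:
        print(f"No workspace seat found for {email}")
        return
    for user in matches:
        resp = requests.delete(f"{BASE}/users/{user['id']}", headers=HEADERS, timeout=30)
        resp.raise_for_status()
        print(f"Removed {email} from the workspace; work history stays with the org")

offboard("departing.employee@example.com")  # hypothetical address
```

The specific API call matters less than the fact that on a business tier this is possible at all. There is no equivalent for a personal account.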

What we’ve been doing about it

When we walked a number of executive teams through these findings, the reaction was consistent. None of them knew the scale of what was going on inside their own organizations. A few were surprised it was that low. Most were surprised it was that high.

The work that came out of those conversations isn’t dramatic. It’s mostly procedural. We’ve helped clients draft AI usage policies that reflect how their employees actually work, rather than blanket bans that get ignored within a week. We’ve moved organizations from personal AI accounts onto licensed business tiers, where the terms of service explicitly prohibit training on company inputs and where administrators can see what’s being used. We’ve walked teams through what is and isn’t safe to share with these tools, which is more nuanced than people tend to assume.
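
The visibility piece can start as something very small. A sketch, reusing the hypothetical DNS log export from earlier: flag any device reaching an AI platform that isn’t on the organization’s approved list, where the approved list comes straight out of the written policy.

```python
import csv

# Both lists are illustrative. APPROVED would come from the written AI policy,
# e.g. the domains behind a licensed ChatGPT business tier.
APPROVED = {"chatgpt.com", "chat.openai.com"}
AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com",
              "grok.com", "claude.ai", "suno.com"}

flagged: dict[str, set[str]] = {}
with open("dns_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        domain = row["domain"].lower().strip(".")
        if domain in AI_DOMAINS and domain not in APPROVED:
            flagged.setdefault(row["device"], set()).add(domain)

for device, domains in sorted(flagged.items()):
    print(f"{device}: unapproved AI platforms: {', '.join(sorted(domains))}")
```

A report like that, run weekly, is usually all it takes to turn the policy from a document into a practice.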

This isn’t about cracking down on AI. The genie is well out of the bottle, and trying to stop employees from using AI at this point is roughly as effective as trying to stop them from using Google. The point is to get visibility, get the right licenses in place, and give people clear guidance so they can use these tools without quietly creating risk that nobody is tracking.

If you take one thing from this

You almost certainly have employees using AI right now. The real question is whether they’re using it on accounts your organization controls, with terms you’ve agreed to, on a platform you can actually see — or whether they’re using it on personal accounts that train on whatever gets pasted in and walk out the door with whoever owns them.

If you don’t know which one it is, that itself is the answer. Find out. The rest of the conversation gets considerably easier once you can see what’s actually happening on your network.
