Why We Blocked DeepSeek on Every Client Network

Published on March 20, 2025

When the DeepSeek story broke in early 2025, we didn’t wait for a committee meeting or send out a survey asking clients how they felt about it. We blocked it across every managed client environment within a day, then we picked up the phone and explained why. That’s the short version of what this post is about. The longer version is why DeepSeek specifically — and AI tools generally — deserve a level of scrutiny most businesses aren’t applying yet, and what a reasonable internal policy actually looks like once you stop treating AI like a productivity hack and start treating it like any other vendor with access to your data.

The Problem Isn’t That DeepSeek Is Chinese. It’s Where Your Data Ends Up.

DeepSeek processes and stores user data on servers in China. That on its own wouldn’t be disqualifying — plenty of legitimate services operate from countries that aren’t your own. The disqualifying piece is China’s National Intelligence Law, which obligates companies operating in the country to hand over data when the government asks, without the kind of transparency or due process you might expect from, say, a Canadian court order.

In practical terms, anything an employee pastes into DeepSeek — a draft contract, a customer list, internal financials, a piece of source code — ends up on infrastructure where the operating company is legally required to cooperate with state requests. You don’t have to be a paranoid security professional to see why that’s a problem for a business with confidentiality obligations to its clients, regulatory requirements under PIPEDA or industry-specific privacy rules, or just a reasonable expectation that its private information stays private.

The U.S. Commerce Department banned DeepSeek from government devices for exactly this reason. That’s not a fringe position. It’s the same conclusion most security teams arrive at once they read past the marketing copy.

And Then There Was the Leak

In early 2025, security researchers found a DeepSeek database — over a million log entries — sitting on the open internet without authentication. No password, no access controls, just public. It contained user chat histories, API keys, and backend configuration data. Someone with the right URL could have read every conversation users were having with the platform and walked away with credentials sufficient to interfere with the service itself. DeepSeek closed the hole after being notified, which is the bare minimum, but the underlying issue isn’t that they patched it. The issue is that a company building infrastructure to handle other people’s sensitive information left a database wide open in the first place.

Separately, researchers found hidden code in the DeepSeek platform transmitting user data to infrastructure tied to CMPassport.com — a service operated by China Mobile, a state-linked telecom. That’s a different category of problem than the leak. The leak was incompetence. The hidden code was design.

Why We Acted on Day One

This is the part that most IT companies handle differently than we do, and it’s worth being direct about it. A lot of MSPs would have waited. They’d have queued the issue up for the next quarterly review, sent a generic newsletter, or — most commonly — done nothing and assumed clients would figure it out on their own.

We blocked DeepSeek across every managed environment within 24 hours of the story breaking, and then we called the people who actually make decisions at each of our clients. Not because we needed permission — the security case was clear enough that waiting wouldn’t have served anyone — but because part of how we work is keeping decision-makers informed about threats as they evolve, so they can make their own calls about anything that requires one. If a client had a specific reason to need DeepSeek access, we’d have had that conversation. None of them did.
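
For anyone wondering what "blocked" actually means at the technical layer: the fastest lever in a managed environment is usually DNS filtering. As an illustration, not our production configuration, here is a minimal sketch that emits a BIND response policy zone (RPZ) returning NXDOMAIN for a blocked domain; the zone boilerplate and the domain list are assumptions, not a complete inventory of hostnames worth covering.

```python
# Minimal sketch, not production config: emit a BIND response policy
# zone (RPZ) that returns NXDOMAIN for each blocked domain. Zone
# boilerplate and the domain list are illustrative assumptions.

BLOCKED_DOMAINS = ["deepseek.com"]

ZONE_HEADER = """$TTL 60
@  IN SOA localhost. hostmaster.localhost. (1 3600 600 86400 60)
   IN NS  localhost.
"""

def rpz_zone(domains):
    # In RPZ, "CNAME ." is the standard action meaning "answer NXDOMAIN".
    lines = [ZONE_HEADER]
    for domain in domains:
        lines.append(f"{domain}\tCNAME .")    # the domain itself
        lines.append(f"*.{domain}\tCNAME .")  # and every subdomain
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(rpz_zone(BLOCKED_DOMAINS))
```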

This is what we mean when we say we stay in constant contact with our clients about emerging risks. It isn't a marketing line. It's how the relationship works. A new threat surfaces, we act, we explain, and the client decides what they want to do from there. Most of the time the answer is obvious and we've already moved. Occasionally there's a real conversation to have. Either way, nobody finds out from a news article that something they rely on has a problem.

DeepSeek Is the Specific Case. AI Governance Is the General One.

Here’s what concerns us beyond DeepSeek: most businesses don’t have a policy on which AI tools their employees can use. Staff sign up for whatever’s free or interesting, paste sensitive information into systems nobody has vetted, and the company finds out months later when someone notices customer data appearing in unexpected places. It’s the same shadow IT problem businesses have been dealing with for twenty years, just on a faster news cycle.

A workable AI policy doesn’t have to be long, but it should answer a few specific questions. Which platforms are approved, and which aren’t? What categories of information can be processed through them, and what can’t? How does a new tool get evaluated before it joins the approved list? Who’s responsible for monitoring and enforcement? And — this is the one most policies skip — what happens when an employee genuinely needs a capability that no approved tool provides? Because if there’s no path forward, people will route around the policy. They always do.
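
One way to keep those answers from living only in a document nobody reads is to record them as data that tooling can actually check. A minimal sketch of that idea, with hypothetical field names and a placeholder entry rather than any standard schema:

```python
# Hypothetical approved-tools register reflecting the questions above.
# Field names, data classifications, and the placeholder entry are
# illustrative, not a standard schema.

APPROVED_TOOLS = {
    "example-assistant": {                        # placeholder, not a real vendor
        "data_allowed": ["public", "internal"],   # what may be pasted in
        "data_forbidden": ["client", "financial", "source_code"],
        "last_reviewed": "2025-03-01",            # when it was last evaluated
        "owner": "it-security",                   # who monitors and enforces
    },
    # Tools that aren't listed aren't approved. Requests for a missing
    # capability go through review, not around the policy.
}

def may_use(tool, data_class):
    """True only if the tool is approved AND cleared for this data class."""
    entry = APPROVED_TOOLS.get(tool)
    return entry is not None and data_class in entry["data_allowed"]
```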

When we evaluate an AI platform on behalf of a client, we ask the same questions any reasonable buyer would ask of any vendor. Where is the data stored? What jurisdiction governs it? Is the provider subject to laws that compel data disclosure? Are there published policies on retention, access, and sharing? Does the platform meet recognized standards like SOC 2 or ISO 27001? If the answers don’t add up, it doesn’t go on the list. DeepSeek didn’t, and we don’t expect that to change.
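
Expressed as a gate, that evaluation looks something like the sketch below. The question keys paraphrase our criteria rather than follow any formal framework, and a missing or uncertain answer counts as a no.

```python
# The vendor questions above, expressed as a simple gate. Keys
# paraphrase the prose, not a formal framework; an unanswered or
# uncertain question counts as a "no".

EVALUATION_QUESTIONS = (
    "data_stored_in_acceptable_jurisdiction",
    "not_subject_to_compelled_disclosure",
    "publishes_retention_access_sharing_policies",
    "holds_recognized_certification",  # e.g. SOC 2 or ISO 27001
)

def passes_review(answers):
    # Any missing answer defaults to False: if the answers
    # don't add up, the platform stays off the list.
    return all(answers.get(q, False) for q in EVALUATION_QUESTIONS)
```

A DeepSeek-style profile fails the first two questions outright, which is the whole decision.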

What to Actually Do

If you’re running a business and don’t currently have an AI policy, the practical first step isn’t writing one — it’s finding out what your team is already using. Most owners are surprised by the answer. From there, decide which tools you’re comfortable with, communicate it clearly to staff, and put basic technical controls in place to enforce the decision. The companies that handle this well treat AI like any other category of software: vetted, approved, documented, monitored. The ones that don’t are usually a few months away from finding out they have a problem they could have prevented.
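
As a starting point for that discovery step, resolver logs usually tell you most of what you need. A rough sketch, assuming a plain-text DNS query log; the log path, log format, and domain list are all assumptions about a particular environment, not a turnkey tool.

```python
# Rough first pass at discovering AI usage from resolver logs. The log
# path, log format, and domain list are assumptions about a particular
# environment.

from collections import Counter

AI_DOMAINS = ("openai.com", "deepseek.com", "claude.ai")  # illustrative

def tally_ai_lookups(log_path):
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            for domain in AI_DOMAINS:
                if domain in line:
                    hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in tally_ai_lookups("/var/log/dns/queries.log").most_common():
        print(f"{count:6d}  {domain}")
```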

DeepSeek is one tool. There will be another, and another after that. The point isn’t to chase each one as it surfaces. The point is to have a way of evaluating them before they end up on someone’s laptop.
