Zero Tolerance for CSAM

Published on March 2, 2026

by Lumitiv

This month, we published a formal policy on how Lumitiv handles any discovery of child sexual abuse material (CSAM) during the course of our work. The full document is available here, but the short version is straightforward: if credible indicators surface, we preserve the relevant metadata, limit further exposure, and report to the appropriate authorities in Canada or the United States. We don’t investigate. We don’t sit on it. We don’t make discretionary calls about whether something is “serious enough” to act on. There’s no exception.

That’s the policy. The rest of this post is about why we wrote it down and why we put it on the website.

Why an IT company would encounter this at all

The starting point is that we have access. Managed IT services aren’t a hands-off arrangement. To support a client’s environment — to monitor security, respond to incidents, recover from outages, deploy software — we need visibility into systems and logs. That access is necessary to do the work, and it’s also the reason any technology provider has to think carefully about what happens when something illegal surfaces inside the systems they’re entrusted with.

We don’t conduct proactive surveillance. We’re not scanning client data looking for criminal activity, and we’re not in the business of making judgments about what people do on their own networks. But the kind of work we do — remote support, security monitoring, incident response — generates information about what’s happening on a system. Very rarely, that information includes something that demands a response.

CSAM is one of those things. It’s not a category where reasonable people disagree about the appropriate action. The legal obligation is clear, the moral obligation is clearer, and the only question worth thinking through carefully is the procedural one: what do you do, exactly, when you encounter it?

What the policy actually requires

Discovery would happen incidentally. A technician working through an alert, an analyst reviewing logs, a support engineer responding to an issue — these are the contexts in which something might be noticed. The policy starts at that moment.

When a credible indicator appears, the response is fixed. We preserve the relevant metadata so it can be handed to investigators. We limit further exposure, which means we stop doing whatever we were doing that surfaced it and we don’t expand the search. We do not download, copy, or examine the material itself — that’s not our role, and doing so would compromise both any future investigation and the people involved. We report promptly to the appropriate authorities, which in Canada means the National Child Exploitation Crime Centre and Cybertip.ca, and in the United States means NCMEC’s CyberTipline.

The policy formalizes this so there’s no ambiguity about what happens next, no internal debate about thresholds, and no opportunity for the situation to be quietly handled in a way that protects a client relationship at the expense of an actual child.

Why we’re publishing it

Most managed service providers don’t talk about this publicly. Some have internal policies. Some don’t. Either way, the question of how an IT firm handles credible indicators of child exploitation tends to be treated as a back-office detail rather than something clients, partners, or the public deserve to see in writing.

We think that’s wrong. Organizations with broad access to other people’s systems should be transparent about how that access is used, including in the rare cases where it intersects with serious crime. A reporting policy that exists only as an internal document is one that can be revised, weakened, or selectively applied. A published policy is one that has to be lived up to.

To our knowledge, Lumitiv is the first Alberta-based IT services firm to publish a zero-tolerance CSAM reporting policy. We don’t think that’s because other firms don’t care. We think it’s because the technology sector hasn’t built a culture of public accountability on this issue, and the absence of any peer doing it makes it easier to keep things internal. That’s the pattern we’re trying to break.

The takeaway

If you’re a Lumitiv client, the policy doesn’t change anything about your day-to-day relationship with us. We aren’t going to start scanning your systems, and we aren’t adding new oversight. What changes is that the rules we follow if something does surface are now written down, public, and not subject to negotiation.

If you’re with another provider, it’s worth asking them what their policy is. Not as a gotcha — most will give you a reasonable answer — but because the question itself is the point. Any company with that level of access to your systems should be able to tell you, plainly, what they do when something serious comes up. If they can’t, that’s worth knowing.

The full policy is available here: https://lumitiv.com/child-exploitation-policy/
