When a client joins one of our discovery or technical sessions with Read.ai, Otter, or Fireflies running in the background, we remove the bot. That’s a default position, not an exception, and we don’t soften how we explain it. Most IT companies don’t do this; given what tends to come up in technical conversations, most of them probably should.
The reasoning fits in a sentence. The conversations we have with clients tend to surface details that shouldn’t end up on a server we don’t administer, governed by a privacy policy we didn’t write, accessible to people we haven’t vetted. Network topology. The shape of someone’s directory environment. Which line-of-business application is running on an unsupported OS because replacing it would cost more than the company wants to spend this year. Vendor names mentioned in passing. The specific gaps in a security posture that a competent attacker could use as a starting map.
Once an AI notetaker has captured all of that, transcribed it, summarized it, and indexed it on someone else’s infrastructure, you don’t get to take it back. The conversation has been copied. Where it lives, who can search it, how long it’s retained, and whether it gets fed into model training — those questions are answered by the bot operator, not by you and not by your client.
This is the part most leadership teams haven’t worked through yet. Microsoft is about to make them work through it.
The Bot Was Never on the Invite
AI notetakers are tied to the account of the person who configured them, not the meeting host. A client, vendor, or contractor can join your Teams meeting with Read.ai active on their side, connected to their workspace, syncing to their cloud. The meeting organizer never approved any of that. There’s no alert in your tenant. The audit log doesn’t record the capture. The bot simply shows up as a participant, often with a generic display name like “Meeting Notetaker,” and most people scroll past it because they assume someone else invited it.
For a long time, there was no native control in Teams that addressed this — only tenant-wide app permission policies that controlled what your users could install, not what external participants could bring in on their own accounts.
That’s what changes soon.
Recording Changes the Meeting Itself
There’s a second cost to permissive notetaker policies that gets less attention than the data question, and in practice it matters more often. If you knew every meeting you walked into was being recorded, transcribed, and indexed somewhere you couldn’t fully audit, would you say the same things you say now? Would your team? Would the technical person who currently flags problems honestly start framing them more carefully? Would the manager who pushes back on an unrealistic timeline start agreeing more quickly?
Most leadership teams haven’t asked themselves what their own meetings would look like if everyone in them assumed every word could be searched by someone outside the room six months later. The answer, in most organizations, is that meetings get more political and less accurate. People hold things back. They reach for safer phrasings. The version of the truth that comes out of someone’s mouth becomes the version that’s defensible in a transcript — which isn’t the same thing as the version that gets the decision right.
That’s the loss most organizations aren’t thinking about. Not the leaked strategy call — that’s bad, but it’s recoverable. The conversation that didn’t happen because someone decided it wasn’t safe to say out loud is the one you can’t replace. Recording every meeting by default isn’t free; the cost just gets paid quietly, in the things that no longer get said.
A Law Firm Client Started Seeing Them in Real Meetings
One of our clients is a law firm. The conversations that cross Teams in their tenant in any given week include privileged communication, settlement strategy, and matter-specific detail that should never leave that environment. AI notetakers recently started appearing in their meetings — not introduced by their own staff, but brought in by external participants joining the calls.
It was concerning enough to act on right away, before Microsoft’s rollout reached their tenant. We tightened their Teams meeting policy so the lobby — the waiting room participants land in before being admitted — is on by default for every meeting. Anonymous and external joiners no longer flow directly into the call. They land in the lobby, and the organizer decides who gets admitted — including the unfamiliar “Meeting Notetaker” tile that would have slid in unnoticed before. It’s a human decision point made by the person hosting the meeting, rather than a default that lets everyone in automatically. The bot doesn’t get to decide whether it’s in the room. You do.
If your tenant is showing the same pattern — unfamiliar participants in meetings, or clients mentioning they’ve started using AI summarization tools — adjusting your lobby defaults is worth doing now rather than waiting. It’s a few clicks in the Teams admin centre. It doesn’t replace what’s coming, but it’s the right interim posture for any organization where the conversations crossing Teams are sensitive enough to care about.
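For tenants managed at the command line, the same lobby tightening can be scripted. This is a minimal sketch assuming the MicrosoftTeams PowerShell module is installed and the account has Teams admin rights; the `Global` policy target applies tenant-wide, and stricter `AutoAdmittedUsers` values are available if your risk posture calls for them.

```powershell
# Sketch: tighten the tenant-wide lobby default so anonymous and
# external joiners wait for organizer admission instead of flowing
# straight into the call. Assumes the MicrosoftTeams module.
Connect-MicrosoftTeams

# Send everyone outside the organization, including guests, to the lobby
Set-CsTeamsMeetingPolicy -Identity Global `
    -AutoAdmittedUsers "EveryoneInCompanyExcludingGuests"

# Confirm the change took effect
Get-CsTeamsMeetingPolicy -Identity Global |
    Select-Object Identity, AutoAdmittedUsers
```

Organizers can still loosen the lobby per meeting where it makes sense; the point of changing the tenant default is that admission becomes an explicit decision rather than the absence of one.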
What’s Actually Rolling Out: Microsoft’s Bot Detection in Teams
Microsoft notification MC1251206 covers a new Teams meeting policy that detects external AI meeting assistants attempting to join meetings hosted in your tenant. Targeted Release tenants get it mid-May 2026; worldwide General Availability follows in early to mid-June.
The behaviour is reasonable. When a third-party bot tries to join, Teams flags it in the lobby under a “Suspected threats” section with an “Unverified” trust indicator. The organizer sees the bot separated from human participants and has to explicitly decide what to do with it. An admin policy in the Teams admin centre lets you configure the default tenant-wide, with the initial options being “do not detect bots” or “require organizer approval.” More granular configuration is supposed to follow.
A few practical notes worth knowing. Detection is enabled by default — no action required to turn it on. Microsoft is upfront that detection isn’t perfect; bots designed to mimic human join patterns may slip through. And the PowerShell module hasn’t caught up yet, so early configuration will happen through the admin portal rather than being scripted at scale.
Here’s where this falls apart for most organizations.
Once Microsoft enables this feature, IT leaders will walk into the Teams admin centre and find a control with two options: detect or don’t, approve or don’t. What happens next depends almost entirely on whether their organization has done any real thinking on this. In the mature ones, the conversation has already happened. Leadership has made a call, the policy is written down, and the admin centre setting is the implementation of a decision somebody else already owns.
In most smaller and mid-sized organizations, that’s not how it will play out. The IT team sticks with the stricter default out of caution, and within a week or two the help-desk volume picks up. A partner’s notetaker can’t join the call. A sales lead can’t run Otter on their own meetings. A VP escalates because an external collaborator got blocked. The IT person, who has been worn down by years of being told to “just make it work,” doesn’t want the conversation with leadership about whether the friction is the point. So the setting gets quietly relaxed. The bots are back in by default — not because anyone decided that’s the right answer, but because nobody wanted the conversation.
The half hour worth spending before this feature turns on by default isn’t the configuration work. It’s a conversation with leadership about what the position is supposed to be — made by whoever owns the risk decision, before the help desk gets its first complaint. IT can enforce a policy. They shouldn’t be expected to write one alone after an employee has already raised their voice about why their workflow broke.
The mature organizations are already having this conversation. The rest are going to have it the hard way.
Five Questions Worth Settling With Leadership
This framework borrows from a piece by Floor 16 that we thought handled the governance side of this well. The questions are good ones, and most of them haven’t been asked inside the average mid-market organization yet.
Can your own employees use AI notetakers in internal meetings?
Three positions to choose from: allowed with disclosure, allowed only with active consent from all participants, or blocked entirely. Most professional services firms with routine internal collaboration land somewhere around disclosure. Companies with frequent commercially sensitive discussions — energy, land services, anything with regular contract or pricing exposure — tend to lean stricter, especially for any meeting above project-manager level.
There isn’t a universal answer here. The real questions are what the typical sensitivity of your internal conversations actually is, whether you trust your staff to assess that on the fly, and what happens to the conversation itself when people know it’s being captured. A meeting where everyone knows a transcriber is running is a different meeting than one without — quieter on disagreement, more careful in how problems get framed, and noticeably less useful at the moment somebody needs to say the unwelcome thing out loud. That isn’t a reason to ban the tools by default, but it is a reason not to default-allow them in the rooms where candor is the point.
Some of your staff are already running these tools, by the way, whether you know about it or not. The default reality at most companies is that AI notetaking is happening; the policy question is whether it should be — and whether the meetings in which it’s happening are the right ones.
Should bots brought by external participants be allowed in internal meetings?
Different question entirely. Your policy for what your employees do is one decision. Your policy for what a vendor or contractor can bring into your meeting on their own account is a separate one — and it’s the harder of the two, because until Microsoft’s detection feature is fully rolled out, you can’t enforce it through your tenant settings even if you wanted to.
The common middle position is that external bots aren’t permitted in high-sensitivity internal meetings (financial reviews, HR, executive strategy, legal) but are tolerated on routine project calls where the discussion is lower-risk and most participants are external-facing anyway. Whatever the answer, it should be a different decision than the one above, made on its own terms.
What’s the policy when a client is running a bot in a meeting they’re hosting?
This is the scenario that catches professional services firms most often, and it has the largest legal surface area. You’re a guest. The host has Read.ai active. What does your organization do?
Options range from requiring mutual disclosure before substantive discussion begins, to accepting it without comment, to treating it as a contract matter for any engagement covered by a confidentiality clause. For sectors with engagement-level NDAs or regulatory confidentiality obligations, this answer should exist in writing before the technology forces the question, not after. If your engagement terms include confidentiality language, an undisclosed third-party transcription tool may already put one of the parties in breach of them.
What does your meeting organizer do when the new control flags a client’s bot?
You’re hosting. A client joins with a notetaker. The lobby flag fires. Your organizer needs to know what to do — admit it, ask for consent and then admit, remove it quietly, or stop the meeting — without having to invent the answer in front of the client.
Whatever the policy is, document it explicitly and brief client-facing staff before the feature goes live. Asking a client to disable their notetaker is a meaningfully easier conversation when the person doing the asking can point to a written organizational policy rather than relying on personal judgment in front of someone paying the bills.
Does the answer vary by meeting type?
It probably should. Executive reviews, HR conversations, contract negotiations, and legal discussions warrant the strictest posture — require approval, default to deny. Routine project check-ins and general collaboration calls can usually proceed with an approval prompt that surfaces awareness without grinding work to a halt.
Tier this now, before the setting lands. When the option arrives in your admin centre, you should be selecting a configuration that reflects a decision your leadership team has already made — not improvising one because Microsoft put a dropdown in front of you.
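Tiering can be expressed as distinct Teams meeting policies assigned to the people whose meetings warrant each posture. A rough sketch, again assuming the MicrosoftTeams PowerShell module — the policy name and the user identity below are illustrative, not prescriptive:

```powershell
# Sketch: a stricter meeting-policy tier for roles whose meetings
# warrant it (executive reviews, HR, legal). Assumes the
# MicrosoftTeams module; names are illustrative.
Connect-MicrosoftTeams

# Strict tier: only the organizer bypasses the lobby
New-CsTeamsMeetingPolicy -Identity "Restricted-Meetings" `
    -AutoAdmittedUsers "OrganizerOnly"

# Assign the strict tier to an individual (hypothetical address)
Grant-CsTeamsMeetingPolicy -Identity "cfo@contoso.com" `
    -PolicyName "Restricted-Meetings"
```

Everyone else stays on the tenant default, which can carry the lighter approval-prompt posture. The value of doing this before the rollout is that the bot-detection options, when they arrive, slot into tiers that already reflect a leadership decision.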
Decide Before It Walks in the Back Door
The AI bot question is the one making headlines, but the underlying pattern is older and broader. New technology arrives. Defaults get set. If no one in the organization has made an actual decision before the rollout, the configuration that goes live on day one becomes policy whether anyone intended it that way or not. The change walks in through the back door, the help desk fields a few complaints, the friction gets smoothed over, and six months later the organization is operating in a way nobody explicitly chose.
That’s the pattern worth getting in front of, and the meeting bot rollout is a useful place to practice. The question isn’t whether your organization should allow these tools — that’s a separate decision with a different answer for every business. The question is whether the answer arrives by accident or on purpose.
For our part, we agree with Microsoft’s default. We don’t allow AI bots into our own client meetings, and we’ll tell anyone who asks why. But our position isn’t the one that matters here. Yours is. What we’d ask is that you decide it before the next help-desk ticket forces an improvised answer, and that you take the same approach with whatever the next default-changing rollout turns out to be. Technology has a way of walking in through the back door when nobody’s standing at it. The work is being the person who’s there when it does.

