Kesslernity · M365 Copilot Field Guide
Interpreter Agent
Speech-to-speech translation during Teams meetings and calls.
A Microsoft Teams feature that provides AI-assisted real-time speech-to-speech translation during meetings and calls. Each participant selects a target language independently. Translation is delivered as audio using either voice simulation or preset voices.
Status Generally Available
Updated April 2026
Languages 9 supported
Usage limits Monthly per licensed user (Microsoft-defined)
Licence Microsoft 365 Copilot (pricing varies by region and agreement)
Published by kesslernity.com
Licence required before you start

Interpreter requires the Microsoft 365 Copilot paid add-on: $30/user/month (Enterprise) or $21/user/month for organisations with up to 300 users (Business). Pricing reflects standard US list pricing and may vary by region, currency, or enterprise agreement.

Monthly usage limits apply per licensed user and are tracked against the user who enables Interpreter in the meeting. Check current limits in your M365 admin centre or Microsoft documentation; these figures are subject to change.

Your IT admin enables Interpreter via the Set-CsTeamsMeetingPolicy cmdlet in PowerShell; refer to Microsoft Learn for required parameters and syntax. Voice simulation (translating using the speaker's own voice) can be enabled or disabled separately.

At a glance
What it is
Real-time speech-to-speech translation
in Teams meetings and calls.
How it works
Speech → recognition → machine translation → synthesised audio in the target language.
Voice simulation
Option to hear translation in the speaker's
own voice, not a preset AI voice.
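The translation path in the "How it works" card above can be sketched as a chain of three stages. This is a toy illustration only; the function names and the stand-in implementations are hypothetical, not Microsoft's actual API.

```python
# Illustrative model of the Interpreter pipeline:
# speech -> recognition -> machine translation -> synthesised audio.
# Every function here is a hypothetical placeholder.

def recognise(audio: bytes) -> str:
    """Stand-in for speech recognition: audio frames to source-language text."""
    return audio.decode("utf-8")  # toy: pretend the audio is already text

def translate(text: str, target: str) -> str:
    """Stand-in for machine translation into the listener's target language."""
    toy_dictionary = {("Hello", "fr"): "Bonjour"}
    return toy_dictionary.get((text, target), text)

def synthesise(text: str, voice: str) -> bytes:
    """Stand-in for text-to-speech in the chosen voice (preset or simulated)."""
    return f"[{voice}] {text}".encode("utf-8")

def interpret(audio: bytes, target: str, voice: str) -> bytes:
    """Compose the three stages, as in the card above."""
    return synthesise(translate(recognise(audio), target), voice)

print(interpret(b"Hello", "fr", "Ava"))  # b'[Ava] Bonjour'
```

The point of the sketch is the composition: a recognition error in stage one cascades through translation and synthesis, which is why accent and pace matter (see troubleshooting below).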
🇨🇳 Chinese (Mandarin)
🇬🇧 English
🇫🇷 French
🇩🇪 German
🇮🇹 Italian
🇯🇵 Japanese
🇰🇷 Korean
🇧🇷 Portuguese (Brazil)
🇪🇸 Spanish

Additional languages are on Microsoft's roadmap. Check Microsoft's Teams feature roadmap for the latest additions.

1
Join a scheduled or channel Teams meeting
Interpreter works in scheduled meetings, channel meetings, and Teams calls (VoIP and PSTN, added January 2026). It does not work in 1:1 calls, Teams Rooms, webinars, or town halls.
2
Open Language & Speech settings
In the meeting window, click the three-dot menu → Language and speech → Turn on live interpretation. This enables Interpreter for yourself — other participants do the same independently.
3
Select your target language
Choose the language you want to hear the meeting in. Each participant makes this choice independently — one person can hear English while another hears French, in the same meeting.
4
Choose your voice preference
Select Voice simulation to hear translation approximating the speaker's vocal characteristics. Or choose from available preset voices — current options include Ava (female) and Andrew (male). Additional preset voices may be available depending on tenant configuration.
5
Listen and participate
Interpreter runs continuously once enabled. The translated audio plays in your earpiece alongside (or replacing) the original speaker audio. Speak normally in your own language — Interpreter handles the translation for other participants.
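Because each participant picks a target language independently (step 3), one utterance effectively fans out into one audio stream per listener. A minimal sketch of that fan-out, with made-up participant names and a placeholder translation step:

```python
# Hypothetical fan-out of a single utterance to per-participant target
# languages. Listeners who chose the source language hear the original.

def fan_out(utterance: str, source_lang: str, listeners: dict[str, str]) -> dict[str, str]:
    """Return one (toy) audio stream per listener, keyed by listener name."""
    streams = {}
    for name, target in listeners.items():
        if target == source_lang:
            streams[name] = utterance  # original audio, no translation needed
        else:
            # placeholder for the recognise -> translate -> synthesise chain
            streams[name] = f"{utterance} [{source_lang}->{target}]"
    return streams

participants = {"amelie": "fr", "jonas": "de", "maria": "es", "sam": "en"}
streams = fan_out("Good morning", "en", participants)
```

Here `streams["sam"]` carries the untranslated original, while each of the other three listeners gets a stream translated into their own choice — matching the source's point that one person can hear English while another hears French in the same meeting.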
Voice simulation
Translates using an approximation of the speaker's own voice characteristics, so the translation sounds like the original speaker rather than a generic AI voice. Best for meetings where vocal consistency matters — executive calls, client meetings, negotiations. Results vary by voice and language.
Ava (female preset)
Preset AI voice. Clear, neutral female voice. Reliable when voice simulation is not enabled by admin, or when you prefer consistent audio quality over vocal authenticity.
Andrew (male preset)
Preset AI voice. Clear, neutral male voice. Same use case as Ava — use when voice simulation is unavailable or when you prefer a stable, predictable voice output.
Additional neutral preset voice
A gender-neutral preset voice is available in supported tenants. Preset voice availability may vary by region and tenant configuration.
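The voice options above imply a simple fallback rule: voice simulation only applies when the admin has enabled it, otherwise a preset voice is used. A hedged sketch of that logic — the flag name and function are illustrative, not a documented Teams setting:

```python
# Illustrative voice-preference fallback: simulation if allowed,
# otherwise one of the preset voices named in the text.

PRESETS = ("Ava", "Andrew")

def choose_voice(prefers_simulation: bool, admin_allows_simulation: bool,
                 preset: str = "Ava") -> str:
    """Pick the effective voice for a listener's translated audio."""
    if prefers_simulation and admin_allows_simulation:
        return "voice-simulation"
    if preset not in PRESETS:
        raise ValueError(f"unknown preset voice: {preset}")
    return preset

# A user who wants simulation in a tenant where the admin disabled it
# falls back to their chosen preset.
print(choose_voice(prefers_simulation=True, admin_allows_simulation=False))  # Ava
```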
Scenario / Supported?
Scheduled Teams meetings: Yes
Channel meetings: Yes
Teams calls (VoIP and PSTN): Yes
Windows / Mac desktop app: Yes
iOS / Android mobile: Yes
Web browser (Chrome, Edge, Safari, Firefox): Yes
1:1 calls: No
Teams Rooms (physical meeting rooms): No
Town halls: No
Webinars: No
Microsoft Teams Free: No
Voice data stored or retained: No (processed in real time, not stored)
Usage limits: Monthly per licensed user (Microsoft-defined; check admin centre for current figures)

Interpreter is not available in the meeting menu. Your IT admin has not enabled it. Admins enable it via Set-CsTeamsMeetingPolicy in PowerShell — see Microsoft Learn for the full syntax and required parameters.

The translation has noticeable lag. Interpreter works in real time, but there is an inherent delay of 1–3 seconds between the original speech and the translated audio. This is normal and expected, not a fault. In complex discussions, speakers should pause slightly between sentences to let the translation catch up.

Heavily accented or fast speech is mistranslated. Interpreter uses Azure Cognitive Services for speech recognition. Very strong regional accents or very fast speech reduce recognition accuracy, which cascades into translation errors. Speaking clearly and at a moderate pace improves output quality significantly.

Technical vocabulary or acronyms are incorrect. Machine translation handles general language well but can mistranslate industry-specific terms, abbreviations, and proper nouns. For high-stakes technical discussions, brief key participants on critical terminology before the meeting or follow up with a written summary.

The monthly usage limit is reached. Usage is tracked per licensed user. When a user hits their limit, Interpreter becomes unavailable until the next billing period. Microsoft defines the exact threshold and it may change — check current limits in the M365 admin centre. For heavy users, prioritise which meetings require live interpretation.
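The limit behaviour described above — minutes accrue against the licensed user who enabled Interpreter, then reset at the next billing period — can be modelled in a few lines. The class, its methods, and the 60-minute figure are all illustrative placeholders; Microsoft defines the real threshold.

```python
# Illustrative per-user monthly usage tracker. The limit value is a
# placeholder, not Microsoft's actual (undisclosed, changeable) figure.

class UsageTracker:
    def __init__(self, monthly_limit_minutes: int):
        self.limit = monthly_limit_minutes
        self.used: dict[str, int] = {}

    def record(self, user: str, minutes: int) -> None:
        """Accrue minutes against the user who enabled Interpreter."""
        self.used[user] = self.used.get(user, 0) + minutes

    def available(self, user: str) -> bool:
        """Interpreter stays available until the user's limit is reached."""
        return self.used.get(user, 0) < self.limit

    def reset_billing_period(self) -> None:
        """At the next billing period, everyone's allowance resets."""
        self.used.clear()

tracker = UsageTracker(monthly_limit_minutes=60)
tracker.record("alex", 45)
tracker.record("alex", 20)  # alex is now over the placeholder limit
```

After the second meeting, `tracker.available("alex")` is `False` until `reset_billing_period()` runs — mirroring why heavy users should prioritise which meetings need live interpretation.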