Microsoft has acknowledged a bug in its Copilot Chat AI assistant that caused it to mistakenly access and summarise some users’ confidential emails.
The tool, designed for enterprise users within Microsoft 365 apps such as Outlook and Teams, is meant to help staff summarise emails and answer questions. However, Microsoft said a recent error caused Copilot Chat to surface content from messages stored in users’ Drafts and Sent Items folders, including messages labelled confidential.
“While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience,” a Microsoft spokesperson said. A configuration update has been deployed worldwide to fix the issue.
The company emphasised that no one gained access to information they weren’t already authorised to see, and patient information at NHS sites was not exposed. The bug, reportedly traced to a “code issue,” was first identified in January.
Expert Reactions
Experts warned that rapid AI deployment increases the risk of mistakes:
- Nader Henein (Gartner) said such errors are “unavoidable” given the constant release of new AI capabilities. Organisations often lack tools to manage emerging features, and the pressure to deploy quickly makes it hard to pause updates.
- Professor Alan Woodward (University of Surrey) emphasised the need for AI tools to be private-by-default and opt-in, noting that bugs and data leakage are inevitable as these systems evolve.
This incident highlights the challenges enterprises face when adopting generative AI tools in sensitive environments, reinforcing the importance of strict governance and secure defaults.
Copilot Chat has now been updated to exclude protected content from AI access, restoring the intended privacy and compliance behaviour for enterprise users.
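Conceptually, a fix of this kind amounts to filtering messages by sensitivity label before anything reaches the assistant. The sketch below is a hypothetical illustration of that deny-by-default pattern, not Microsoft’s actual implementation: the Message type, the sensitivity values, and the messages_for_copilot function are all assumptions chosen for clarity.

```python
from dataclasses import dataclass

# Hypothetical message model; field names are illustrative,
# not Microsoft's actual schema.
@dataclass
class Message:
    subject: str
    body: str
    folder: str          # e.g. "Inbox", "Drafts", "Sent Items"
    sensitivity: str     # e.g. "Normal", "Confidential"

# Labels that should never be exposed to the assistant (assumed set).
BLOCKED_SENSITIVITIES = {"Confidential", "Highly Confidential"}

def ai_visible(msg: Message) -> bool:
    """Secure-default check: a message is eligible for AI
    summarisation only if it carries no protective label."""
    return msg.sensitivity not in BLOCKED_SENSITIVITIES

def messages_for_copilot(mailbox: list[Message]) -> list[Message]:
    # Deny by default: labelled content is excluded before any
    # message body is passed to the assistant.
    return [m for m in mailbox if ai_visible(m)]

if __name__ == "__main__":
    mailbox = [
        Message("Lunch?", "Pizza on Friday?", "Inbox", "Normal"),
        Message("Q3 results", "Draft figures...", "Drafts", "Confidential"),
    ]
    for m in messages_for_copilot(mailbox):
        print(m.subject)   # prints only "Lunch?"
```

The design point mirrors the experts’ advice above: protected content is excluded unless explicitly permitted, rather than included until a bug is found.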