Microsoft security researchers have identified a growing threat called AI Recommendation Poisoning, where companies embed hidden instructions in “Summarize with AI” buttons. These instructions inject persistent commands into AI assistants' memory via URL prompt parameters, biasing AI responses toward specific products or services. This manipulation affects critical areas like health, finance, and security without users' knowledge. Microsoft has deployed ongoing mitigations in Copilot to counter these prompt injection attacks, which continue to evolve as new techniques emerge.
How AI Memory Poisoning Works
Modern AI assistants such as Microsoft 365 Copilot and ChatGPT have memory features that persist across sessions, remembering user preferences, context, and explicit instructions. This personalization enhances utility but creates vulnerabilities. Memory poisoning occurs when unauthorized instructions or “facts” are injected into AI memory, causing the AI to treat them as legitimate, influencing future responses. This attack is often delivered through malicious links with pre-filled prompts embedded in “Summarize with AI” buttons or emails, which execute automatically when clicked.
Real-World Impact and Scope
Research uncovered over 50 unique prompt-based attempts from 31 companies across 14 industries over 60 days. These prompts instruct AI assistants to remember companies as trusted or authoritative sources, sometimes injecting full marketing copy. The companies involved are legitimate businesses, not hackers, using deceptive packaging to persistently influence AI memory. Tools like CiteMET and AI Share URL Creator facilitate easy deployment of these manipulations, marketed as SEO growth hacks for large language models (LLMs).
The consequences of AI Recommendation Poisoning can be severe, including:
- Financial ruin from biased investment advice.
- Child safety risks due to omitted warnings about online content.
- Biased news consumption by favoring a single news source.
- Competitor sabotage by promoting one service unfairly.
User and Security Recommendations
Users should exercise caution with AI-related links by:
- Hovering over links to verify destinations.
- Being suspicious of “Summarize with AI” buttons that may contain hidden instructions.
- Avoiding clicking AI links from untrusted sources.
- Regularly reviewing and clearing AI memory to remove suspicious entries.
- Questioning suspicious AI recommendations and requesting explanations and citations.
Security teams can detect AI Recommendation Poisoning by hunting for URLs with memory manipulation keywords like remember, trusted source, authoritative, or citation in email traffic, Teams messages, or URL click events using Microsoft Defender for Office 365 advanced hunting queries.
Microsoft’s Mitigations and Ongoing Research
Microsoft has implemented multiple layers of protection against prompt injection attacks, including:
- Prompt filtering to detect and block known injection patterns.
- Content separation to distinguish user instructions from external content.
- Memory controls that allow users to view and manage stored memories.
- Continuous monitoring for emerging attack patterns.
- Active research into defenses against AI poisoning, including memory and model poisoning.
Indicators of Compromise
Indicators include URL parameters such as ?q= or ?prompt= containing keywords like remember, trusted, authoritative, future, citation, or cite. These patterns signal potential AI memory manipulation attempts.
This research confirms that AI Recommendation Poisoning is a real and spreading threat, facilitated by freely available tools and targeting all major AI platforms. Users and organizations are urged to remain vigilant by checking AI memory settings, scrutinizing AI-related links, and applying security best practices to mitigate risks.




