Cursor AI MCPoison Vulnerability: When AI Development Tools Become Attack Vectors

August 4, 2025

A critical vulnerability in the popular AI-powered code editor Cursor has exposed a fundamental flaw in the trust model behind AI-assisted development environments. Dubbed "MCPoison," this vulnerability allows attackers to silently modify previously approved Model Context Protocol (MCP) configurations to execute malicious code without user interaction.

The MCPoison Vulnerability Overview

Check Point researchers Andrey Charikov, Roman Zaikin, and Oded Vanunu discovered that Cursor's one-time approval system for MCP configurations creates a dangerous trust gap. Once an MCP configuration is approved, Cursor trusts all future modifications without requiring additional validation, enabling attackers to poison developer environments.

Key Details:

How MCPoison Works

The vulnerability exploits Cursor's trust model for MCP (Model Context Protocol) configurations:

Attack Process:

  1. Initial Setup: Attacker adds a benign MCP configuration to a shared repository
  2. User Approval: Developer approves the harmless configuration
  3. Silent Modification: Attacker modifies the approved configuration with malicious commands
  4. Persistent Execution: Malicious code runs every time the project is opened in Cursor

Technical Mechanism:

The MCP Protocol Context

Model Context Protocol (MCP) is an open-source protocol introduced by Anthropic in November 2024 that allows AI systems to connect to external data sources and interact with each other. While MCP enhances AI capabilities, it also introduces new attack surfaces:

MCP Security Challenges:

Real-World Attack Scenarios

Scenario 1: Supply Chain Attack

Scenario 2: Collaborative Development

Scenario 3: Reverse Shell Attack

Check Point researchers demonstrated a proof-of-concept where:

Enterprise Security Implications

Development Environment Risks:

Compliance Impact:

Organizations with SOC 2, ISO 27001, or other compliance requirements may face:

AI Development Tool Security Landscape

This vulnerability highlights critical trends in AI-assisted development:

Emerging Attack Vectors:

Security Challenges:

Immediate Action Steps

For Cursor Users:

  1. Update immediately to Cursor version 1.3 or later
  2. Review existing MCP configurations for suspicious modifications
  3. Audit shared repositories for malicious configurations
  4. Monitor for unusual behavior in development environments

For Organizations:

  1. Inventory AI development tools in use
  2. Implement configuration validation procedures
  3. Review supply chain security for development dependencies
  4. Update security policies for AI-assisted development

Long-term Security Strategies

AI Tool Risk Management:

Development Security:

The Broader AI Security Impact

This vulnerability demonstrates the evolving threat landscape in AI-assisted development:

AI Security Trends:

Industry Implications:

Vendor Response and Recommendations

Cursor's response demonstrates good security practices:

However, organizations should:

For organizations concerned about AI security, see our guide on Third-Party Risk Management: Best Practices. For companies evaluating their security posture, take our Compliance Posture Survey. For organizations looking to automate security monitoring, check out Building an AWS Audit Manager Solution in Under Two Days with Amazon Q.

Need Help with AI Development Security?

Our team can help you:

  • Assess AI development tool security
  • Implement AI supply chain security controls
  • Develop AI security policies and procedures
  • Create incident response plans for AI tool compromises
Schedule a Consultation
AI security, development tools, RCE, supply chain, vulnerability