Developer Installs Fake AI Plugin, Loses $500K in Crypto

PenligentAI · July 29, 2025
In June 2025, a Russian blockchain developer lost approximately $500,000 in cryptocurrency after mistakenly installing a malicious “Solidity Language” plugin in the Cursor AI IDE on Windows. Although he diligently ran online malware scans and followed secure installation practices, the attacker exploited the Open VSX ranking system: a malicious version updated on June 15 ranked above the legitimate extension (last updated May 30) and amassed over 54,000 downloads before removal.

Once installed, the rogue extension triggered a multi-stage attack:
- Connected to a C2 server at angelic[.]su.
- Loaded a PowerShell script that checked for the ScreenConnect remote management client; if absent, it downloaded and installed it.
- Deployed a persistent remote-control payload via relay.lmfao[.]su.
- Installed backdoors such as Quasar, plus a broad data stealer to exfiltrate crypto-wallet data and browser and email credentials.
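One practical response is to sweep locally installed extensions for the published indicators of compromise. Below is a minimal sketch in Python; the only hard facts in it are the two C2 domains named above, while the extension directories are an assumption (that Cursor follows the VS Code layout under ~/.cursor/extensions).

```python
import re
from pathlib import Path

# Published indicators of compromise for this campaign: the two C2 domains
# named above (defanged in the article, re-fanged here for matching).
IOC_DOMAINS = {"angelic.su", "relay.lmfao.su"}

# Assumed extension locations; adjust for your editor and OS.
EXTENSION_DIRS = [
    Path.home() / ".cursor" / "extensions",
    Path.home() / ".vscode" / "extensions",
]

DOMAIN_RE = re.compile("|".join(re.escape(d) for d in IOC_DOMAINS))

def scan_extensions():
    """Yield (file, matched domain) for extension files referencing a C2 domain."""
    for base in EXTENSION_DIRS:
        if not base.is_dir():
            continue
        for path in base.rglob("*"):
            if not path.is_file() or path.suffix not in {".js", ".ts", ".json", ".ps1"}:
                continue
            match = DOMAIN_RE.search(path.read_text(errors="ignore"))
            if match:
                yield path, match.group()

if __name__ == "__main__":
    for path, domain in scan_extensions():
        print(f"[!] {path} references known C2 domain {domain}")
```

Run after the fact, a sweep like this would have flagged the implanted extension by its callbacks to angelic[.]su and relay.lmfao[.]su.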
Anatomy of the Fake Solidity Extension Attack
When the first fake plugin was removed on July 2, the attacker re-uploaded it, mirroring the legitimate “juanblanco” developer name as “juanbIanco” (an uppercase “I” disguised as a lowercase “l”) and artificially inflating the download count to 2 million, further deceiving users.
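Publisher impersonation of this kind is mechanically detectable. The sketch below is an illustration rather than a feature of any tool named here: it folds visually confusable characters to a canonical skeleton and flags names that collide with a trusted publisher.

```python
# Map visually confusable characters to a canonical form. This tiny table
# covers the trick used in this attack (uppercase "I" posing as lowercase
# "l"); a real checker would use the full Unicode confusables data.
CONFUSABLES = str.maketrans({"I": "l", "1": "l", "0": "o", "O": "o"})

TRUSTED_PUBLISHERS = {"juanblanco"}

def skeleton(name: str) -> str:
    """Reduce a publisher name to a confusable-folded, lowercase skeleton."""
    return name.translate(CONFUSABLES).lower()

def is_impersonation(candidate: str) -> bool:
    """True if candidate is not a trusted name but folds to one."""
    return (candidate not in TRUSTED_PUBLISHERS
            and skeleton(candidate) in {skeleton(p) for p in TRUSTED_PUBLISHERS})

assert is_impersonation("juanbIanco")      # the fake publisher -> flagged
assert not is_impersonation("juanblanco")  # the legitimate publisher
```

A production checker would also cover edit-distance lookalikes and mixed-script names, but even this folding step catches the substitution used here.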
🔬 AI Pentesting: From Recon to Automated Defense
This incident highlights the critical need for AI-driven penetration testing tools:
Key AI Tools for Securing Against Supply Chain Attacks
- Nebula: An open-source AI CLI tool from Beryllium Security that orchestrates reconnaissance, port and vulnerability scanning, and auto-generates payload scripts and reports. It integrates models such as Llama, Mistral, and DeepSeek, and supports real-time note-taking.
- RapidPen: An automated IP‑to‑shell exploit engine using large language models, capable of compromising vulnerable systems in mere minutes.
- Mindgard (and Robust Intelligence): Focused on adversarial testing of AI systems, including prompt injection and model extraction—vital for securing AI-powered components like IDE plugins.
- PentestGPT: An open-source LLM-based penetration assistant that works with Nebula to automatically generate payloads, scan scripts, and test execution environments.
- Penligent: A semantic code analysis tool geared toward detecting malicious patterns in plugins before execution.
⚙️ Applying AI Pentesting to This Incident
To defend against similar plugin-based attacks:
- Use AI tools to statically scan plugin source code before installation (see the heuristic sketch after this list).
- Deploy sandbox environments with AI-generated scripts to monitor for malicious network or file operations.
- Run simulated exploit chains (via Nebula or RapidPen) to check for remote code injection or persistence.
- Generate adversarial inputs using Mindgard to test plugin resilience against hidden or delayed payloads.
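As an example of the first point, a static pre-install scan can be as simple as grepping an unpacked extension for the behaviors seen in this attack. The patterns below are illustrative assumptions, not a vetted ruleset: shelling out to PowerShell, pulling second-stage payloads, and hiding long base64 blobs.

```python
import re
import sys
from pathlib import Path

# Heuristic patterns mirroring this attack chain: a JS extension spawning
# PowerShell, fetching second-stage payloads, or embedding long encoded
# blobs. Tune the patterns and thresholds for your own environment.
SUSPICIOUS = {
    "spawns a shell": re.compile(r"child_process|powershell(\.exe)?", re.I),
    "downloads code": re.compile(
        r"Invoke-WebRequest|DownloadString|https?://\S+\.(ps1|exe)", re.I),
    "large base64 blob": re.compile(r"[A-Za-z0-9+/=]{200,}"),
}

def audit_extension(root: str) -> list[str]:
    """Scan an unpacked extension directory and report heuristic findings."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".js", ".ts", ".json", ".ps1"}:
            continue
        text = path.read_text(errors="ignore")
        for label, pattern in SUSPICIOUS.items():
            if pattern.search(text):
                findings.append(f"{path}: {label}")
    return findings

if __name__ == "__main__":
    for finding in audit_extension(sys.argv[1] if len(sys.argv) > 1 else "."):
        print("[!]", finding)
```

A hit is a reason to hold the extension for sandboxed dynamic analysis (the second point above), not proof of malice on its own.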
💡 Why AI Pentesting Is Indispensable
| Benefit | Description |
|---|---|
| Speed | AI tools can complete reconnaissance, scanning, and exploitation in minutes. |
| Breadth | Covers code, network, downloads, and runtime behavior simultaneously. |
| Structured Reporting | Auto-generates logs, screenshots, and alerts, ideal for auditors. |
| Advanced Behavior Detection | Identifies covert multi-stage payloads and time-delayed threats. |
These capabilities are especially critical for supply chain attacks, such as malicious IDE plugins.
🛠️ Recommended AI Pentesting Toolkit
- penligent.ai – Semantic analysis to detect plugin impersonation and malicious intent.
- PentestGPT – LLM-driven payload generation, reconnaissance, and auto-reporting.
- Nebula – Real-time CLI-based vulnerability scanning, note-taking, and AI guidance.
- RapidPen – Streamlined IP‑to‑shell attack chain automation.
- Mindgard / Robust Intelligence – AI-hardening tools for adversarial input and extraction testing.
