LiteLLM's PyPI Backdoor: What It Means for the AI Skill Supply Chain
Attackers injected a credential stealer into litellm (95M downloads) via compromised CI/CD. What happened and why AI skills face the same threat.
On March 24, 2026, versions 1.82.7 and 1.82.8 of LiteLLM — the most widely used open-source LLM proxy, with approximately 95 million monthly downloads — were published to PyPI containing a multi-stage credential stealer and persistent backdoor. The malicious versions were live for roughly three hours before PyPI quarantined the entire package.
This was not a rogue maintainer or a typosquat. It was a CI/CD pipeline compromise executed by TeamPCP, a threat actor group that had already hit Aqua Security’s Trivy scanner and Checkmarx’s KICS GitHub Action in the weeks prior. The LiteLLM incident is the latest — and highest-impact — link in a chain of supply chain attacks targeting the AI and security tooling ecosystem.
This post examines the technical details, draws a direct line from PyPI package attacks to the emerging AI skill supply chain, and explains what SkillSafe’s model would have caught.
What Happened
The Attack Chain
TeamPCP did not compromise LiteLLM’s repository directly. They compromised a dependency in LiteLLM’s CI/CD pipeline — specifically, a Trivy GitHub Action used for security scanning. The irony is worth noting: the security scanner itself became the attack vector.
Through the compromised Trivy action, TeamPCP extracted PyPI publishing credentials from LiteLLM’s CI environment. With those credentials, they published two backdoored versions in quick succession:
- v1.82.7 at 10:39 UTC — malicious payload injected into proxy_server.py
- v1.82.8 at 10:52 UTC — added litellm_init.pth alongside the proxy_server.py payload
The .pth file is the more dangerous mechanism. Python's site module executes any .pth line that begins with import at interpreter startup — not when litellm is imported, but whenever any Python process starts in an environment where litellm is installed. The payload was double base64-encoded and launched via subprocess, evading casual inspection.
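This combination — an executable .pth line that decodes base64 and spawns a subprocess — is mechanical to look for. The sketch below is our own heuristic (names and thresholds are ours, not from any advisory) for flagging suspicious .pth files in a site-packages directory:

```python
import re
from pathlib import Path

# Heuristic markers for the LiteLLM-style payload: an executable .pth line
# that both decodes base64 and spawns a subprocess.
SUSPICIOUS = [re.compile(p) for p in (r"base64", r"b64decode", r"subprocess", r"os\.system")]

def scan_pth_files(site_packages: str) -> list[str]:
    """Return names of .pth files whose import lines match 2+ suspicious markers.

    Python's site module executes any .pth line beginning with "import" at
    interpreter startup -- no explicit import of the package is required.
    """
    hits = []
    for pth in Path(site_packages).glob("*.pth"):
        for line in pth.read_text(errors="replace").splitlines():
            if not line.startswith("import"):
                continue  # plain path lines are inert; only import lines execute
            matches = sum(bool(p.search(line)) for p in SUSPICIOUS)
            if matches >= 2:
                hits.append(pth.name)
                break
    return hits
```

Run it against each environment's site-packages; a hit is not proof of compromise, but an executable .pth line combining base64 decoding with subprocess launches deserves manual review.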
The Payload: Three Stages
Once triggered, the malware executed a three-stage attack (Sonatype, Wiz):
Stage 1: Credential Harvesting. The payload swept the host for every credential it could find:
- SSH keys (~/.ssh/)
- AWS, GCP, and Azure credentials
- Kubernetes configs and service account tokens
- CI/CD secrets and Docker configs
- .env files, .gitconfig, and shell history
- Database connection strings
- Cryptocurrency wallet files and seed phrases
Stage 2: Kubernetes Lateral Movement. On machines with cluster access, the malware deployed privileged pods to every reachable node — turning one compromised developer machine into a foothold across the organization’s infrastructure.
Stage 3: Persistent Backdoor. A systemd service was installed that polled attacker-controlled infrastructure for additional binaries, ensuring persistence even if the malicious litellm version was later uninstalled.
Stolen data was bundled into an encrypted archive (tpcp.tar.gz) and exfiltrated to models.litellm.cloud, a domain controlled by the attackers (The Hacker News).
The Blast Radius
LiteLLM averages 3.4 million downloads per day. The malicious versions were live for approximately three hours. Even accounting for CI/CD caching, mirrors, and timezone distribution of active developers, the exposure window was significant. LiteLLM’s own security advisory instructs anyone who installed v1.82.7 or v1.82.8 to assume full credential compromise.
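A crude upper bound on exposure follows from those two figures. This is our back-of-envelope estimate, not a number from the advisories, and it assumes downloads are uniform across the day, which caching and timezones make untrue in practice:

```python
daily_downloads = 3_400_000   # LiteLLM's average daily download count
window_hours = 3              # approximate time the malicious versions were live

# Naive uniform-rate estimate of downloads during the exposure window.
exposed = daily_downloads / 24 * window_hours
print(f"~{exposed:,.0f} downloads during the window")  # prints ~425,000 downloads during the window
```

Even if the real figure is an order of magnitude lower, it justifies the advisory's assume-compromise guidance.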
Why This Matters for AI Skills
The LiteLLM attack targeted a Python package. AI coding skills — .md instruction files used by Claude Code, Cursor, Windsurf, and similar tools — are a different artifact type. But the underlying supply chain dynamics are identical, and in several ways the skill attack surface is worse.
Same Vectors, Higher Privilege
A Python package executes in a sandboxed interpreter with whatever permissions the calling code grants. An AI skill executes inside an agent that typically has:
- Full filesystem read/write access to the project directory (and often the home directory)
- Shell command execution
- Network access
- Access to environment variables, SSH keys, and credential files
When a developer installs a malicious Python package, the damage depends on what the package can reach. When a developer installs a malicious AI skill, the agent becomes the attacker — with the developer’s full environmental access.
The ClawHavoc campaign demonstrated this in January 2026: 1,184 malicious skills delivered credential-stealing malware through nothing more than carefully written .md instructions that directed the AI agent to download and execute external binaries, read credential files, and persist via agent memory files.
CI/CD Compromise Maps Directly to Skill Registries
TeamPCP’s method — compromising a CI/CD dependency to steal publishing credentials — applies equally to any registry that relies on publishing tokens. If an attacker steals a skill publisher’s API key through a compromised GitHub Action, a leaked .env file, or a phishing attack, they can publish a backdoored version of any skill that publisher maintains.
The LiteLLM incident proves this is not theoretical. TeamPCP executed it against one of the most popular packages in the Python ecosystem. Skill registries that store publishing credentials the same way are vulnerable to the same attack.
The .pth Trick Has a Skill Equivalent
The .pth mechanism — code that executes on interpreter startup regardless of whether the package is explicitly imported — has a direct parallel in AI skills. Skills can modify agent configuration files (CLAUDE.md, SOUL.md, MEMORY.md) to inject instructions that persist across sessions and execute in every future interaction, not just when the skill is active.
This was one of ClawHavoc’s most effective techniques. The LiteLLM attack shows the same principle — parasitic persistence via a platform mechanism designed for convenience — applied at the package level. Both exploit the same architectural assumption: that installed components are trustworthy.
What SkillSafe’s Model Catches
SkillSafe cannot prevent a CI/CD pipeline compromise at LiteLLM or any other upstream project. What it can prevent is the same class of attack succeeding in the AI skill supply chain. The defense is layered: pre-share scanning blocks malicious content, cryptographic verification catches tampering, audit logging provides forensic visibility, publish rate limits constrain blast radius, and emergency key revocation enables instant incident response.
Pre-Share Scanning Gates Distribution
Every skill on SkillSafe must pass a security scan before a share link can be created. This is not a post-publish check — sharing is blocked until the scan completes with a non-critical verdict.
If the LiteLLM attack vectors were translated to a skill, the following rules from the SkillSafe scanner ruleset v2026.03.15 would trigger:
| LiteLLM Attack Vector | SkillSafe Scanner Rule(s) | Result |
|---|---|---|
| Double base64-encoded payload | SS05 b64_decode_exec / b64_file_exec: base64 decode-and-execute pipelines (critical) | Blocked |
| Credential file harvesting (~/.ssh/, .env) | SS17 cred_read_aws, cred_read_docker, cred_find_dirs: credential file access patterns (critical/high) | Blocked |
| Exfiltration to external domain | SS03 shell_exfil_service: outbound HTTP to known exfil services (high) + SS-CP cp01_exec_plus_network: process execution combined with network calls (critical) | Blocked |
| curl \| sh equivalent binary download | SS01 py_subprocess_run / py_os_system: shell command execution (high) + SS-CP cp01_exec_plus_network (critical) | Blocked |
| Kubernetes lateral movement commands | SS01 py_subprocess_run / py_subprocess_popen: subprocess spawning (high) | Blocked |
| Persistence via config file modification | SS04 agent_memory_write / agent_memory_inject: shell writes to CLAUDE.md, SOUL.md, MEMORY.md (high) | Blocked |
| Rapid-fire publishing with stolen credentials | Per-account daily publish limits (50/200/1000 by tier) | Rate-limited |
| Stolen key used from unknown IP | Audit log captures IP, user agent, timestamp per save/share | Detected |
| Delayed discovery of credential theft | Emergency key revocation (DELETE /v1/account/keys) | Instant response |
A skill implementing any of LiteLLM’s payload stages would fail the pre-share scan. It could be saved privately (saving is unrestricted) but could never reach other users through the registry. Even if a skill somehow evaded content scanning, the operational security layers — publish rate limits, audit trails, and emergency revocation — constrain and expose the attack.
Cryptographic Tamper Detection Defeats Credential Theft
TeamPCP’s core strategy was stealing publishing credentials to push modified versions of a legitimate package. In SkillSafe’s model, even if an attacker steals a publisher’s API key and pushes a backdoored skill version:
- The new version must pass a pre-share scan (which the malicious content would fail)
- Even if the scan were somehow bypassed, the consumer’s client re-scans on download and computes an independent tree hash
- The server compares both scan reports and tree hashes — any discrepancy produces a critical verdict
The attacker would need to compromise the publisher’s credentials, bypass the pre-share scan, and prevent the consumer’s client from detecting the modification. Each layer is independent. Defeating one does not weaken the others.
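The independent tree hash can be sketched as a deterministic digest over the skill's file tree. This is our illustration of the idea — SkillSafe's actual hash construction may differ:

```python
import hashlib
from pathlib import Path

def tree_hash(root: str) -> str:
    """Deterministic SHA-256 digest over every file's relative path and contents.

    Publisher and consumer compute this independently; changing any byte of
    any file (or renaming a file) changes the digest. Paths and contents are
    length-prefixed so concatenation boundaries are unambiguous.
    """
    h = hashlib.sha256()
    base = Path(root)
    for path in sorted(base.rglob("*")):
        if path.is_file():
            rel = path.relative_to(base).as_posix().encode()
            h.update(len(rel).to_bytes(4, "big") + rel)
            data = path.read_bytes()
            h.update(len(data).to_bytes(4, "big") + data)
    return h.hexdigest()
```

Because the consumer's client computes this over the bytes it actually received, a server that silently swapped in a backdoored version cannot produce a matching digest.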
Namespace Verification Raises the Bar
LiteLLM is a single package name on PyPI. Anyone with PyPI credentials can publish to it. SkillSafe’s namespace model (@publisher/skill-name) means an attacker needs to compromise the specific publisher account, not just any publishing credential. Combined with email verification requirements for sharing, the attack surface for credential-based publishing attacks is narrower.
Publish Audit Trail
Every skill save and share on SkillSafe is recorded in a structured audit log — capturing the account, action, IP address, user agent, and timestamp. If a compromised key were used to publish a backdoored version, the forensic trail is immediate: which key, from which IP, at what time.
PyPI had no equivalent during the LiteLLM incident. Investigators had to reconstruct the timeline from package metadata and external logs. An integrated audit trail turns “what happened?” from a multi-day forensic exercise into a single database query.
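With structured entries, "which key, from which IP, at what time" is a filter, not a reconstruction. A minimal sketch — the field names and entries here are our own assumption, not SkillSafe's schema:

```python
from datetime import datetime, timezone

# Hypothetical audit entries: one record per save/share action.
audit_log = [
    {"key_id": "k1", "action": "share", "ip": "203.0.113.7",
     "ts": datetime(2026, 3, 24, 10, 39, tzinfo=timezone.utc)},
    {"key_id": "k1", "action": "share", "ip": "198.51.100.9",
     "ts": datetime(2026, 3, 24, 10, 52, tzinfo=timezone.utc)},
]

def shares_from_unknown_ips(log, known_ips):
    """Flag share actions from IPs the publisher has never used before."""
    return [e for e in log if e["action"] == "share" and e["ip"] not in known_ips]

suspicious = shares_from_unknown_ips(audit_log, known_ips={"203.0.113.7"})
```

In the LiteLLM timeline, the second entry — a publish from a never-before-seen IP — is exactly the anomaly this query surfaces.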
Per-Account Daily Publish Limits
TeamPCP published two malicious versions within 13 minutes. SkillSafe enforces per-account daily publish limits (50 versions/day on free tier, 200 on paid, 1,000 on enterprise) as a defense-in-depth measure against credential-based spam attacks. A stolen key can’t be used to flood the registry with hundreds of backdoored versions across multiple skill names — the account-level budget constrains the blast radius even if the key is compromised.
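The budget itself is a small amount of state. A sketch of the counter, using the tier limits above (the class and method names are ours):

```python
from collections import defaultdict
from datetime import date

TIER_LIMITS = {"free": 50, "paid": 200, "enterprise": 1000}

class PublishLimiter:
    """Counts publishes per account per UTC day; rejects past the tier budget."""

    def __init__(self):
        self.counts = defaultdict(int)  # (account, day) -> publishes so far

    def allow(self, account: str, tier: str, today: date) -> bool:
        key = (account, today)
        if self.counts[key] >= TIER_LIMITS[tier]:
            return False  # budget exhausted: publish rejected
        self.counts[key] += 1
        return True
```

Keying the counter on the account rather than the key is the point: a stolen key cannot escape the budget by publishing under many skill names.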
Emergency Key Revocation
When LiteLLM’s team discovered the compromise, their immediate priority was revoking the stolen PyPI credentials. SkillSafe provides a one-call panic button (DELETE /v1/account/keys) that atomically revokes every active API key on an account. If a publisher suspects credential theft, they can kill all access in a single request — no need to enumerate and revoke keys one by one while the attacker continues publishing.
Lessons from the LiteLLM Incident
1. CI/CD Pipelines Are the New Attack Surface
TeamPCP didn’t hack LiteLLM’s code. They hacked the tool that checks LiteLLM’s code. This is a pattern: as direct code compromise gets harder, attackers move upstream to the build and release infrastructure.
For skill publishers, this means: your publishing credentials are as sensitive as your production database credentials. Store them in environment-specific secrets managers, not in CI environment variables that any action can read. Audit every GitHub Action and CI plugin in your pipeline.
2. Reactive Scanning Cannot Catch Zero-Day Payloads
PyPI quarantined the malicious versions within three hours — a fast response by any standard. But three hours at 3.4 million downloads per day is a significant exposure window. Reactive models (scan after publish, quarantine after detection) will always have this gap.
Pre-share scanning inverts the model: the skill cannot reach consumers until it passes. The window between “malicious code exists” and “malicious code is distributed” is zero, because distribution is gated on the scan.
3. The .pth / SOUL.md Pattern Is Becoming Standard
Both the LiteLLM attack and ClawHavoc used platform-native persistence mechanisms — .pth files for Python, SOUL.md / MEMORY.md injection for AI agents — to maintain access after the initial payload. These mechanisms are designed for legitimate use and are rarely inspected.
Security models need explicit rules for these persistence vectors. Signature-based detection will always lag behind novel payloads, but behavioral rules (“does this skill write to agent configuration files?”) catch the pattern regardless of the specific payload.
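A behavioral rule of this kind does not need to recognize the payload — only the target. The sketch below is our own loose heuristic, not the actual SS04 implementation, and like any regex heuristic it trades some false positives for payload independence:

```python
import re

# Agent configuration files whose modification grants cross-session persistence.
AGENT_CONFIG_FILES = ("CLAUDE.md", "SOUL.md", "MEMORY.md")

# Shell-style writes (>, >>, tee, sed -i) aimed at those files on the same line.
WRITE_PATTERN = re.compile(
    r"(?:>>?|\btee\b|\bsed\s+-i\b)[^\n]*\b(" + "|".join(AGENT_CONFIG_FILES) + r")\b"
)

def writes_agent_config(skill_text: str) -> bool:
    """True if the skill's instructions contain a shell write to an agent config file."""
    return bool(WRITE_PATTERN.search(skill_text))
```

A novel payload with a novel encoding still has to land in one of these files to persist, and that is the line the rule watches.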
4. Credential Rotation Is Not Optional
LiteLLM’s advisory instructs affected users to assume full compromise. This is the correct guidance. If the LiteLLM credential stealer ran on your machine — even briefly — every secret on that machine should be considered exfiltrated. Rotate SSH keys, cloud credentials, API tokens, database passwords, and browser sessions.
The same applies retroactively to ClawHavoc. If you installed unverified skills during January–March 2026, a precautionary credential rotation is warranted.
What to Do Now
If you use LiteLLM: Check whether v1.82.7 or v1.82.8 was installed in any of your environments. pip show litellm shows the currently installed version. Check CI/CD logs and Docker image build histories for installs during the exposure window (approximately 10:39–13:30 UTC on March 24). If either version was installed, follow LiteLLM’s remediation guidance.
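For scripted checks across many environments, the same test can be automated. A minimal sketch — extend the affected set if your advisory feed adds versions:

```python
from importlib import metadata

AFFECTED = {"1.82.7", "1.82.8"}  # backdoored LiteLLM versions per the advisory

def is_affected_version(version: str) -> bool:
    """True if the given litellm version string is in the affected set."""
    return version in AFFECTED

def litellm_is_affected() -> bool:
    """True if an affected litellm version is installed in this environment."""
    try:
        return is_affected_version(metadata.version("litellm"))
    except metadata.PackageNotFoundError:
        return False  # litellm is not installed here
```

Note that a clean result for the current environment says nothing about CI caches or Docker layers built during the window — those still need the log check above.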
If you publish AI skills on SkillSafe: Audit your CI/CD pipeline for credential exposure. Verify that publishing tokens are stored in secrets managers, not in environment variables accessible to third-party actions. If you suspect a key has been compromised, use the emergency revoke-all endpoint (DELETE /v1/account/keys) to kill all active keys instantly, then check your audit log for unexpected publish activity. Review the daily publish limits for your tier — unusual volume against these limits can be an early indicator of compromise.
If you install AI skills: Use a registry that scans before distribution and verifies integrity at install time. If your current registry doesn’t provide pre-install scan reports, run skillsafe scan on any skill before activating it.
Sources
- LiteLLM Security Update — March 2026
- Wiz: TeamPCP Trojanizes LiteLLM in Continuation of Campaign
- Sonatype: Compromised litellm PyPI Package Delivers Multi-Stage Credential Stealer
- The Hacker News: TeamPCP Backdoors LiteLLM Versions 1.82.7–1.82.8
- BleepingComputer: Popular LiteLLM PyPI Package Backdoored
- Snyk: How a Poisoned Security Scanner Became the Key to Backdooring LiteLLM
- Wiz: KICS GitHub Action Compromised — TeamPCP Supply Chain Attack
- GitGuardian: How GitGuardian Enables Rapid Response to the LiteLLM Supply Chain Attack
- Security Boulevard: Trivy’s March Supply Chain Attack
- DreamFactory: The LiteLLM Supply Chain Attack — A Complete Technical Breakdown
- CyberInsider: New Supply Chain Attack Hits LiteLLM
- CSO Online: PyPI Warns Developers After LiteLLM Malware Found
- ReversingLabs: TeamPCP Supply Chain Attack Spreads to LiteLLM
- Security Affairs: Malicious LiteLLM Versions Linked to TeamPCP
SkillSafe did not independently verify all reported figures. We cite published security research from the sources listed above.