When Trusted Packages Turn Hostile: Cascading Supply Chain Attacks
TeamPCP compromises legitimate packages and cascades through the supply chain via stolen credentials. Why this attack pattern evades detection.
On March 27, 2026, security researchers discovered that the telnyx Python package on PyPI had been compromised. Two malicious versions - 4.87.1 and 4.87.2 - were pushed to the registry, containing credential-stealing malware hidden inside WAV audio files using steganography.
The attacker wasn’t a random actor. It was TeamPCP - the same group responsible for backdooring litellm, trivy, and KICS earlier this month. And here’s the part that should concern anyone building with AI tools: researchers believe TeamPCP compromised Telnyx’s PyPI publishing token using credentials stolen from the litellm attack.
This is a cascading supply chain attack. One compromised package gives attackers the keys to compromise the next. And the pattern has implications that go well beyond PyPI.
Typosquats vs. Trusted Package Compromise
Most supply chain attacks we’ve discussed on this blog - including the ClawHavoc campaign that hit the AI skill ecosystem in January - rely on some form of deception at the point of installation. Typosquats, name confusion, fake descriptions. The attacker publishes something new and hopes you install it by mistake.
TeamPCP’s approach is fundamentally different. They aren’t publishing new packages. They’re compromising existing, legitimate packages that developers already trust and depend on.
When you run pip install telnyx==4.87.1, you’re not making a mistake. You’re updating a package you’ve used before, from a publisher you’ve verified before, through a registry you trust. The version number increments normally. The package name is correct. The publisher account is the real one. Every signal you’d normally check says “this is safe.”
This is what makes trusted package compromise so dangerous: it defeats the human review layer entirely.
How the Cascade Works
TeamPCP’s campaign followed a clear pattern across multiple targets:
Stage 1: Initial compromise. The group gained access to publishing credentials for litellm, a popular AI model routing library with millions of weekly downloads. The exact initial vector hasn’t been confirmed, but the result was the ability to push malicious versions to PyPI under the legitimate publisher’s account.
Stage 2: Credential harvesting. The malicious litellm versions contained a data harvester that swept environment variables, .env files, and shell histories from every machine that imported the package. This is a targeted collection strategy - developers and CI/CD pipelines that use litellm are likely to have credentials for other packages, cloud services, and infrastructure tools stored in their environments.
Stage 3: Lateral movement to new packages. Endor Labs researchers assessed that the most likely vector for the Telnyx compromise was credentials harvested during the litellm attack. If any developer or CI pipeline had both litellm installed and access to Telnyx’s PyPI publishing token, that token was already in TeamPCP’s hands.
Stage 4: Repeat. Each new compromised package harvests more credentials, expanding the attack surface for the next compromise.
The same group had already hit Aqua Security’s trivy (a container scanner) and Checkmarx’s KICS (an infrastructure-as-code scanner) using similar methods. The target selection is deliberate: tools with elevated access to automated pipelines, where a single compromised dependency can touch thousands of downstream environments.
The Payload: Steganography and Platform-Specific Persistence
TeamPCP’s malware delivery is technically sophisticated. The malicious Telnyx versions injected code into the package’s _client.py file - a module that executes automatically when the package is imported. The payload downloads a WAV audio file from a command-and-control server, then extracts an executable hidden within the audio data using steganography.
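The exact encoding TeamPCP used hasn’t been published, but the general technique - hiding bytes in audio data - is well understood. A common approach stores one payload bit in the least-significant bit of each audio sample, which is inaudible to a listener and invisible to tools that only inspect file type and structure. A minimal sketch of that LSB scheme, operating on raw sample values:

```python
def embed_lsb(samples, payload):
    """Hide payload bytes in the least-significant bit of each sample.
    A 4-byte big-endian length header is embedded first so the extractor
    knows where the payload ends. Illustrative only - not TeamPCP's
    actual encoding, which has not been disclosed."""
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(samples):
        raise ValueError("payload too large for carrier")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_lsb(samples):
    """Recover the payload by reading back each sample's lowest bit."""
    def read_bytes(start, count):
        value = bytearray()
        for b in range(count):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (samples[start + b * 8 + i] & 1)
            value.append(byte)
        return bytes(value)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)  # payload bits start after the 32 header bits
```

Because the carrier remains a valid, playable WAV file, a scanner that only checks “is this really audio?” passes it without complaint - the payload only becomes visible if you inspect the sample data itself.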
On Windows, the extracted binary is dropped into the Startup folder as msbuild.exe, providing persistence across reboots. On Linux and macOS, the approach is different: a single high-speed data harvesting operation collects everything of value and exfiltrates it immediately, then deletes all traces.
Socket’s analysis described the strategic split clearly: “Windows gets persistence. Linux/macOS gets smash-and-grab.” The harvester targets the same categories of developer credentials that made the AMOS stealer so effective during ClawHavoc - SSH keys, cloud CLI credentials, API tokens, .env files, and browser-stored secrets.
Why This Pattern Matters for AI Tools
The AI development ecosystem is particularly vulnerable to cascading supply chain attacks for three reasons:
1. Dense dependency graphs. AI tools depend on a deep stack of libraries - model providers, API wrappers, data processing frameworks, evaluation tools. Each dependency is a potential entry point. When litellm is compromised, every tool that depends on it (and every developer who uses those tools) is in the blast radius.
2. Elevated credential access. AI development environments routinely store API keys for model providers, cloud services, and internal APIs. A compromised package running in this environment doesn’t just get source code - it gets the keys to the entire infrastructure.
3. Fast-moving, trust-heavy culture. AI developers update dependencies frequently to access new model support, performance improvements, and features. The pressure to stay current creates an environment where version bumps are applied quickly and reviewed lightly.
LangChain highlighted a parallel risk this same week. Three vulnerabilities (CVE-2026-34070, CVE-2025-68664, CVE-2025-67644) - some disclosed months ago, others new - showed that even without a compromised package, the AI framework stack can expose filesystem data, environment secrets, and conversation histories through path traversal, deserialization flaws, and SQL injection. The attack surface isn’t hypothetical - it’s being actively mapped and exploited.
What Registry-Level Verification Can and Can’t Do
Let’s be honest about the limitations. No registry can prevent a legitimate publisher’s credentials from being stolen. If TeamPCP has the real publishing token for a package, they can push a new version that looks identical to a normal update.
But a registry can make it harder for that compromised version to reach consumers undetected.
Traditional registries scan packages at upload time - if they scan at all. The problem is that when the publisher’s account is compromised, the attacker controls the upload. They can craft payloads specifically to evade the registry’s scanner, because they can test against it beforehand.
Consumer-side verification changes the equation. If the consumer independently scans the package after downloading it - using their own scanner, producing their own report - the attacker now has to evade two independent scanning passes, one of which they can’t test against in advance.
This is the core principle behind SkillSafe’s dual-side verification model. When a skill is installed, the consumer’s machine runs a full scan and produces an independent report and tree hash. The server compares both sides. A hash mismatch - meaning the archive changed between publish and install - triggers a warning. A scan that returns a CRITICAL verdict blocks installation entirely.
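The hash side of this comparison is simple to reason about. SkillSafe’s exact tree-hash format isn’t specified here, so the sketch below assumes a common construction: sort all file paths, then fold each relative path and file body into one digest. Any byte that changes between publish and install changes the final hash.

```python
import hashlib
from pathlib import Path

def tree_hash(root):
    """Deterministic digest of a directory tree: hash each file's relative
    path and contents in sorted order. Assumed construction - the real
    SkillSafe format may differ."""
    h = hashlib.sha256()
    root = Path(root)
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        h.update(str(path.relative_to(root)).encode())
        h.update(b"\x00")  # separator so path/content boundaries are unambiguous
        h.update(path.read_bytes())
        h.update(b"\x00")
    return h.hexdigest()

def verify(publisher_digest, consumer_dir):
    """Consumer-side check: recompute the tree hash locally and compare it
    against the digest recorded at publish time."""
    return tree_hash(consumer_dir) == publisher_digest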
Would this catch a TeamPCP-style attack on AI skills? It depends on what the payload does. Steganography hidden in a WAV file might not trigger static analysis rules. But the credential harvesting code injected into _client.py - the subprocess calls, the environment variable sweeps, the HTTP POST to a C2 server - those are exactly the patterns that AST-based scanning is designed to flag.
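To make “AST-based scanning” concrete, here is a minimal detector built on Python’s standard ast module. The rule set is illustrative - a production scanner carries far more rules - but it shows how the harvester’s signature operations surface as structural patterns rather than strings:

```python
import ast

# Call patterns commonly seen in credential harvesters. Illustrative
# subset only - real scanners use much larger rulesets.
SUSPICIOUS_CALLS = {
    ("subprocess", "run"), ("subprocess", "Popen"),
    ("subprocess", "check_output"), ("os", "popen"), ("requests", "post"),
}

def scan_source(source):
    """Walk the AST and flag calls and attribute reads associated with
    environment sweeps and outbound exfiltration."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # module.function(...) calls, e.g. subprocess.run, requests.post
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            base = node.func.value
            if isinstance(base, ast.Name) and (base.id, node.func.attr) in SUSPICIOUS_CALLS:
                findings.append(f"{base.id}.{node.func.attr} at line {node.lineno}")
        # any read of os.environ, wherever it appears
        if isinstance(node, ast.Attribute) and node.attr == "environ":
            if isinstance(node.value, ast.Name) and node.value.id == "os":
                findings.append(f"os.environ access at line {node.lineno}")
    return findings
```

Because the scan operates on syntax trees rather than raw text, renaming variables or reformatting the code doesn’t help the attacker - the environment sweep and the outbound POST are still there in the tree.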
More importantly, the cascading nature of these attacks means that stopping one link in the chain breaks the entire cascade. If the litellm-equivalent skill had been flagged at install time, the credentials needed to compromise the next target would never have been harvested.
Practical Takeaways
Pin your dependencies. Don’t auto-update AI libraries in production. Use lockfiles, pin exact versions, and review changelogs before upgrading. The malicious Telnyx versions were live for roughly six hours before being quarantined - automated update pipelines would have pulled them without human review.
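In practice, pinning for Python projects can go beyond exact versions: pip’s hash-checking mode refuses to install any artifact whose digest doesn’t match the lockfile, so even a malicious upload under the correct name and version fails. A sketch of what that looks like (package versions and digests below are placeholders, not audited values):

```text
# requirements.txt - install with: pip install --require-hashes -r requirements.txt
# pip aborts if any package is unpinned or its digest doesn't match.
telnyx==4.87.0 \
    --hash=sha256:<digest of the wheel you audited>
litellm==1.52.0 \
    --hash=sha256:<digest of the wheel you audited>
```

With this in place, a compromised 4.87.1 release simply never installs: the version is wrong, and even a re-upload of 4.87.0 with altered contents fails the digest check.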
Audit your credential exposure. If a package you use has been compromised, assume your environment variables, .env files, and stored credentials were exfiltrated. Rotate everything. TeamPCP’s harvester specifically targets developer credential files matching patterns like *.env, *.pem, *secret*, *token*, and *apikey*.
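A quick way to scope that rotation is to enumerate which files on a machine match the harvester’s reported patterns. A small sketch using those patterns (taken from the writeups cited above; treat the list as a starting point, not exhaustive):

```python
import fnmatch
import os

# File patterns reported for TeamPCP's harvester - extend as needed.
HARVESTED_PATTERNS = ["*.env", "*.pem", "*secret*", "*token*", "*apikey*"]

def audit_credential_files(root):
    """List files under `root` matching the patterns the harvester targets,
    so you know what to assume stolen and which credentials to rotate."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(fnmatch.fnmatch(name.lower(), pat) for pat in HARVESTED_PATTERNS):
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

Running this over a developer home directory or a CI workspace gives you the rotation checklist directly: every hit is a file whose contents you should treat as already in the attacker’s hands.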
Verify at install time, not just publish time. Publisher-side scanning is necessary but insufficient when the publisher’s account can be compromised. Consumer-side verification adds a layer that attackers can’t preemptively bypass.
Watch for cascading indicators. When one package in your dependency tree is compromised, check whether your credentials could give an attacker access to other packages or services. The blast radius of a supply chain attack isn’t limited to the compromised package - it extends to everything those stolen credentials can reach.
The shift from typosquatting to trusted package compromise represents a maturation of supply chain attacks against developer tooling. The AI ecosystem, with its dense dependencies and credential-rich environments, is a natural target. The defenses need to mature at the same pace.