Last week, a supply chain attack on LiteLLM compromised the credentials of thousands of organizations through a single poisoned PyPI package. The attacker used stolen credentials from one security tool to hijack another, then cascaded across five package ecosystems in two weeks.
The incident was a reminder that every tool in your pipeline is a trust decision. When you install a GitHub Action, a Docker image, or a package, you're trusting that the maintainer's account hasn't been compromised and that the artifact hasn't been tampered with.
We take that trust seriously. Here's how Rosentic is designed.
Your code is analyzed and discarded. It is never persisted, never transmitted to storage, and never accessible after the scan completes.
This isn't just a policy. It's an architectural decision that holds across every deployment model we'll ever ship.
| Deployment | How code is handled | Status |
|---|---|---|
| GitHub Action | Code lives on your GitHub runner. The engine runs as a Docker container on that runner, parses the AST, posts results to the PR, and the runner is destroyed. Your code is never transmitted anywhere. | Live |
| GitHub App | Code is transmitted to our analysis server, parsed in an isolated ephemeral container, results are posted, and the code is immediately deleted. Nothing is persisted to disk or database. | Planned |
| VPC Deployment | The engine runs inside your infrastructure. Code never leaves your network. Same trust model as the GitHub Action, with full dashboard features. | Planned |
The LiteLLM attack worked because a maintainer's credentials were stolen and a malicious package was pushed to a trusted registry. For Rosentic, the equivalent risk would be someone compromising our GitHub account and pushing a malicious Docker image.
Here's what we do about it:
Two-factor authentication on all maintainer accounts. The most common supply chain attacks start with credential theft. 2FA makes that significantly harder.
Docker images pinned by SHA digest. The GitHub Action references a specific image digest, not a mutable tag like :latest. Even if an attacker pushed a malicious image to the registry, pinned installs would keep pulling the exact verified build.
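In action metadata, digest pinning looks roughly like the sketch below. The image name is an illustrative placeholder and the digest is deliberately elided — this shows the shape of the pin, not the real published values:

```yaml
# action.yml (sketch): the container image is pinned by immutable digest.
# Even if the registry's tags were hijacked, this reference cannot move.
runs:
  using: docker
  image: docker://ghcr.io/rosentic/engine@sha256:<digest>
```

Because a digest is content-addressed, changing a single byte of the image changes the hash, so a swapped image simply fails to resolve.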
Minimal credential surface. The GitHub Action uses the standard GITHUB_TOKEN that GitHub provides to every Action automatically. No additional secrets, tokens, or configuration required to get started.
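A minimal workflow might look like the following sketch. The action reference is a placeholder; the point is that the ambient GITHUB_TOKEN is supplied by GitHub automatically, so nothing extra appears in your secrets:

```yaml
# Sketch of a minimal setup (the action ref is a placeholder).
name: rosentic-scan
on: pull_request
permissions:
  contents: read
  pull-requests: write   # lets the action post PR comments via GITHUB_TOKEN
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: rosentic/action@<pinned-sha>
```

The explicit permissions block also scopes the token down: even a compromised step could only read contents and write PR comments.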
No network calls. The engine doesn't phone home, doesn't send telemetry about your code, and makes no outbound requests beyond posting results. If you monitored its network traffic, you'd see PR comment posts to the GitHub API and nothing else.
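Claims like this are verifiable, not just takeable on faith. One way to check that an analysis step needs no network: run it with socket creation disabled and confirm it still succeeds. A minimal sketch using only Python's standard library — the engine itself isn't a Python import, so ast.parse stands in for the analysis step here:

```python
import ast
import socket

def run_offline(fn, *args):
    """Call fn with socket creation disabled; any network attempt raises."""
    real_socket = socket.socket
    def blocked(*a, **kw):
        raise RuntimeError("unexpected network access")
    socket.socket = blocked
    try:
        return fn(*args)
    finally:
        socket.socket = real_socket

# Pure parsing succeeds even with networking disabled.
tree = run_offline(ast.parse, "def f(x):\n    return x + 1\n")
```

The same idea works at the container level: run the image with networking turned off and see whether analysis still completes.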
Deterministic analysis. The engine does AST parsing via tree-sitter, not LLM inference. There's no model to poison, no prompt to inject, no inference endpoint to intercept. The same input produces the same output every time.
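Determinism is easy to demonstrate. The engine uses tree-sitter; to keep this sketch dependency-free it uses Python's stdlib ast module instead, but the property is the same: parsing is a pure function of the source text.

```python
import ast
import hashlib

def fingerprint(source: str) -> str:
    """Parse source and hash the canonical dump of its AST."""
    tree = ast.parse(source)
    return hashlib.sha256(ast.dump(tree).encode()).hexdigest()

src = "def add(a, b):\n    return a + b\n"
# Two independent parses of the same input yield identical fingerprints.
assert fingerprint(src) == fingerprint(src)
```

A deterministic engine is also auditable: you can replay any scan and get bit-identical results, which is impossible with sampled LLM output.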
The promise of AI coding agents is enormous. Teams shipping 10x faster. Engineers freed from mechanical work to focus on architecture and design. Every company in the world becoming a software company because the cost of building just dropped by an order of magnitude.
But right now, every headline is about risk. AI code has more bugs. AI agents caused an outage. A supply chain attack was vibe-coded. The backlash is making engineering leaders cautious when they should be accelerating.
Rosentic exists to make that acceleration safe. Not by slowing agents down, but by checking their work at the seams: the boundaries between branches where invisible breaks hide. When engineering managers trust that incompatible code will be caught before it merges, they're free to add more agents, run more experiments, and ship faster.
The more agents you run, the more you need this verification layer. And the better that layer works, the more confidently you can adopt AI across your engineering organization.
We're not slowing anyone down. We're making it safe to go faster.
Your code is never stored. Your agents are free to run. The seams are checked. That's the deal.