
Open Source Security at Astral

Astral builds tools that millions of developers around the world depend on and trust.

That trust includes confidence in our security posture: developers reasonably expect that our tools (and the processes that build, test, and release them) are secure. The rise of supply chain attacks, typified by the recent Trivy and LiteLLM hacks, has developers questioning whether they can trust their tools.

To that end, we want to share some of the techniques we use to secure our tools in the hope that they're useful to:

  1. Our users, who want to understand what we do to keep their systems secure;
  2. Other maintainers, projects, and companies, who may benefit from some of the techniques we use;
  3. Developers of CI/CD systems, so that projects do not need to follow non-obvious paths or avoid useful features to maintain secure and robust processes.

CI/CD security

We sustain our development velocity on Ruff, uv, and ty through extensive CI/CD workflows that run on GitHub Actions. Without these workflows we would struggle to review, test, and release our tools at the pace and to the degree of confidence that we demand. Our CI/CD workflows are also a critical part of our security posture, in that they allow us to keep critical development and release processes away from local developer machines and inside of controlled, observable environments.

GitHub Actions is a logical choice for us because of its tight first-party integration with GitHub, along with its mature support for contributor workflows: anybody who wants to contribute can validate that their pull request is correct with the same processes we use ourselves.

Unfortunately, there's a flipside to this: GitHub Actions has poor security defaults, and security compromises like those of Ultralytics, tj-actions, and Nx all began with well-trodden weaknesses like pwn requests.

To secure our CI/CD processes, we leverage GitHub's own settings as well as tools like zizmor (for static analysis of our workflows) and pinact (for automatically pinning the actions we use to commit hashes).
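To illustrate the kind of check that pinact automates, here's a minimal sketch (not Astral's actual tooling) that scans a workflow for `uses:` references that aren't pinned to a full 40-character commit SHA:

```python
import re

# A `uses:` reference is considered pinned only when it points at a full
# 40-character commit SHA, e.g. `actions/checkout@<sha>`. Tags and branches
# (e.g. `@v5`) can be moved by an attacker who compromises the action.
USES_RE = re.compile(r"^\s*(?:-\s+)?uses:\s*([^\s#]+)")
PINNED_RE = re.compile(r"^[^@]+@[0-9a-f]{40}$")

def unpinned_actions(workflow_text: str) -> list[str]:
    """Return `uses:` references that aren't pinned to a commit SHA."""
    findings = []
    for line in workflow_text.splitlines():
        match = USES_RE.match(line)
        if not match:
            continue
        ref = match.group(1).strip("\"'")
        # Local reusable workflows (./...) and docker:// references are
        # out of scope for this sketch.
        if ref.startswith(("./", "docker://")):
            continue
        if not PINNED_RE.match(ref):
            findings.append(ref)
    return findings

workflow = """
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
      - uses: actions/setup-python@v5
"""
print(unpinned_actions(workflow))  # -> ['actions/setup-python@v5']
```

Real tools handle more cases (composite actions, reusable workflows, Dockerfile references), but the core idea is the same: a tag like `@v5` is mutable, while a commit SHA is not.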

Repository and organizational security

Beyond our CI/CD processes, we also take a number of steps to limit both the likelihood and the impact of account and repository compromises within the Astral organization, including branch and tag protection rulesets.

To help others implement these kinds of branch and tag controls, we're sharing a gist that shows some of the rulesets we use. These rulesets are specific to our GitHub organization and repositories, but you can use them as a starting point for your own policies!
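As a sketch of what such a ruleset looks like, the following builds a payload in the shape of GitHub's repository rulesets REST API (the ruleset name and branch patterns here are illustrative, and the exact schema should be checked against GitHub's current documentation):

```python
import json

def branch_protection_ruleset(branch_patterns: list[str]) -> dict:
    """Build a ruleset payload that blocks deletions and force pushes
    on the given branches.

    The payload shape follows GitHub's repository rulesets REST API
    (POST /repos/{owner}/{repo}/rulesets) at the time of writing.
    """
    return {
        "name": "protect-release-branches",  # illustrative name
        "target": "branch",
        "enforcement": "active",
        "conditions": {
            "ref_name": {
                # `~DEFAULT_BRANCH` is GitHub's alias for the default branch.
                "include": branch_patterns,
                "exclude": [],
            }
        },
        "rules": [
            {"type": "deletion"},          # nobody can delete the branch
            {"type": "non_fast_forward"},  # nobody can force-push
        ],
    }

print(json.dumps(branch_protection_ruleset(["~DEFAULT_BRANCH"]), indent=2))
```

Keeping rulesets as data like this makes them easy to review, version, and share, which is exactly what a gist of rulesets enables.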

Automations

There are certain things that GitHub Actions can do, but can't do securely, such as leaving comments on third-party issues and pull requests. Most of the time it's better to just forgo these features, but in some cases they're a valuable part of our workflows.

In these latter cases, we use astral-sh-bot to safely isolate these tasks outside of GitHub Actions: GitHub sends us the same event data that GitHub Actions would have received (since GitHub Actions consumes the same webhook payloads as GitHub Apps do), but with much more control and much less implicit state.
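A prerequisite for any such app is authenticating those webhook deliveries: GitHub signs each delivery with an HMAC-SHA256 over the raw request body, keyed on the app's webhook secret, and sends the digest in the `X-Hub-Signature-256` header. A minimal verification sketch (the secret and payload below are placeholders):

```python
import hashlib
import hmac

def verify_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Check GitHub's `X-Hub-Signature-256` header against the raw payload.

    GitHub sends `sha256=<hex digest>`, computed as HMAC-SHA256 over the
    request body, keyed with the webhook secret configured on the app.
    """
    expected = "sha256=" + hmac.new(
        secret.encode("utf-8"), body, hashlib.sha256
    ).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature_header)

secret = "webhook-secret"  # placeholder, not a real secret
body = b'{"action": "opened"}'
header = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, header))       # -> True
print(verify_signature(secret, b"{tampered}", header))  # -> False
```

Rejecting unsigned or mis-signed deliveries before dispatching any handler ensures the app only acts on events that genuinely came from GitHub.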

However, there's still a catch with GitHub Apps: an app doesn't eliminate the sensitive credentials needed for an operation, it just moves them into an environment that doesn't mix code and data as pervasively as GitHub Actions does. For example, an app won't be susceptible to a template injection attack like a workflow would be, but it could still contain SQL injection, prompt injection, or other weaknesses that allow an attacker to abuse the app's credentials. Consequently, it's essential to treat GitHub App development with the same security mindset as any other software development. This also extends to untrusted code: using a GitHub App does not make it safe to run untrusted code, it just makes it harder to do so unexpectedly. If your processes need to run untrusted code, they must use pull_request or another "safe" trigger that doesn't provide any privileged credentials to third-party pull requests.
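One concrete guard in this vein is distinguishing fork pull requests from same-repository ones before doing anything privileged. The field names below come from GitHub's `pull_request` webhook payload; a production handler would also need to handle edge cases (e.g. a deleted fork, where `head.repo` is null) that this sketch omits:

```python
def is_fork_pull_request(event: dict) -> bool:
    """Return True when a `pull_request` webhook event comes from a fork.

    In GitHub's payload, `pull_request.head.repo` describes where the
    proposed code lives; if it differs from the base repository, the code
    is third-party and must not see privileged credentials.
    """
    head_repo = event["pull_request"]["head"]["repo"]["full_name"]
    base_repo = event["repository"]["full_name"]
    return head_repo != base_repo

event = {
    "repository": {"full_name": "astral-sh/uv"},
    "pull_request": {"head": {"repo": {"full_name": "contributor/uv"}}},
}
print(is_fork_pull_request(event))  # -> True
```

A check like this belongs at the very top of any handler that might touch credentials, so the privileged path is unreachable for third-party code by construction.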

With all that said, we've found that the GitHub App pattern works well for us, and we recommend it to other maintainers and projects who have similar needs. The main downside to it comes in the form of complexity: it requires developing and hosting a GitHub App, rather than writing a workflow that GitHub orchestrates for you. We've found that frameworks like Gidgethub make the development process for GitHub Apps relatively straightforward, but that hosting remains a burden in terms of time and cost.

It's an unfortunate reality that there still aren't great GitHub App options for one-person and hobbyist open source projects; it's our hope that usability enhancements in this space can be led by companies and larger projects that have the resources needed to paper over GitHub Actions' shortcomings as a platform.

We recommend this tutorial by Mariatta as a good introduction to building GitHub Apps in Python. We also plan to open source astral-sh-bot in the future.

Release security

So far, we've covered aspects that tie closely to GitHub, as the source host for Astral's tools. But many of our users install our tools via other mechanisms, such as PyPI, Homebrew, and our Docker images. These distribution channels add another "link" to the metaphorical supply chain, and each requires discrete consideration.
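One baseline mitigation that applies across channels is publishing checksums alongside artifacts, so downstream consumers can verify that what they downloaded matches what was released. A sketch of the consumer side (the temporary file here stands in for a downloaded release tarball):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large artifacts needn't fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(artifact: Path, expected_hex: str) -> bool:
    """Compare a downloaded artifact against its published digest."""
    # Lowercase the expected digest to tolerate case differences.
    return sha256_of(artifact) == expected_hex.lower()

# Demo: a temporary file stands in for a downloaded release artifact.
with tempfile.TemporaryDirectory() as tmp:
    artifact = Path(tmp) / "uv.tar.gz"  # example name
    artifact.write_bytes(b"release contents")
    published = sha256_of(artifact)  # in practice, fetched from the release page
    print(verify_artifact(artifact, published))  # -> True
```

Checksums only prove integrity, not provenance; that's the gap that signing (including the codesigning plans below) is meant to close.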

Our release processes also involve "knock-on" changes, like updating our public documentation, version manifests, and the official pre-commit hooks. These are privileged operations that we protect through dedicated bot accounts and fine-grained PATs issued through those accounts.

Going forward, we're also looking at adding codesigning with official developer certificates on macOS and Windows.

Dependency security

Last but not least is the question of dependencies. Like almost all modern software, our tools depend on an ecosystem of third-party dependencies (both direct and transitive), each of which sits in an implicit position of trust. We take a number of steps to measure and mitigate the risk that each upstream poses.

Concluding thoughts

Open source security is a hard problem, in part because it's really many problems (some technical, some social) masquerading as one. We've covered many of the techniques we use to tackle this problem, but this post is by no means an exhaustive list. It's also not a static list: attackers are dynamic participants in the security process, and defenses necessarily evolve in response to their changing techniques.

With that in mind, the points above that we believe deserve the most attention are securing CI/CD workflows against well-known weaknesses, isolating privileged automations from untrusted inputs, and protecting release and distribution channels.

Finally, we're still evaluating many of the techniques mentioned above, and will almost certainly be tweaking (and strengthening) them over the coming weeks and months as we learn more about their limitations and how they interact with our development processes. In other words, this post represents a point in time, not the final word on how we think about security for our open source tools.