Supercharge your Turborepo builds with our dedicated GitHub Actions caching service, designed to make your CI workflows faster and more efficient.
This GitHub Action provides an alternative approach to Turborepo's Vercel remote caching in CI/CD pipelines. While Vercel's official solution works well, this action offers some key advantages:
- No need for Vercel account or tokens
- Works entirely within GitHub's ecosystem
- Reduces external service dependencies
- Free forever
The main technical difference lies in how caching is handled:
Vercel's Approach
- Uses a remote caching server hosted by Vercel
- Can become expensive for large monorepos with multiple apps/packages
- May have limitations based on your Vercel plan
This Action's Approach
- Simulates a local remote caching server on `localhost:41230`
- Uses GitHub Actions' built-in caching system
- GitHub automatically removes old cache entries
This solution might be better when:
- You have a large monorepo with multiple apps/packages
- You want to avoid external service dependencies
- You need more control over your caching strategy
- You want to leverage GitHub's existing infrastructure
However, if you're already using Vercel and their remote caching works well for your needs, there's no pressing need to switch. Both solutions are valid approaches to the same problem.
Easily integrate our caching action into your GitHub Actions workflow by adding the following step before you run `turbo build`:
```yaml
- name: Cache for Turbo
  uses: rharkor/caching-for-turbo@v1.8
```
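For context, a complete job using this step might look like the following sketch. The job name, Node version, and package manager commands are illustrative assumptions, not requirements of the action:

```yaml
name: CI
on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # The caching step must come before the turbo build
      - name: Cache for Turbo
        uses: rharkor/caching-for-turbo@v1.8
      - run: npx turbo build
```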
This GitHub Action facilitates:
- Server Initialization: Automatically spins up a server on `localhost:41230`.
- Environment Setup: Sets the `TURBO_API`, `TURBO_TOKEN`, and `TURBO_TEAM` environment variables required by `turbo build`.
- Efficient Caching: Leverages GitHub's cache service to significantly accelerate build times.
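Conceptually, the environment the action prepares is equivalent to the snippet below. The token and team values shown are placeholders; the action generates and injects its own:

```yaml
env:
  TURBO_API: http://localhost:41230 # local caching server started by the action
  TURBO_TOKEN: <generated-token>    # placeholder; set automatically by the action
  TURBO_TEAM: <team>                # placeholder; set automatically by the action
```

With these variables present, `turbo build` treats the local server as a remote cache, and the server in turn persists artifacts through the configured storage provider.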
Customize the caching behavior with the following optional settings (defaults provided):
```yaml
with:
  cache-prefix: turbogha_ # Custom prefix for cache keys
  provider: github # Storage provider: 'github' (default) or 's3'

  # S3 Provider Configuration (required when provider is set to 's3')
  s3-access-key-id: ${{ secrets.S3_ACCESS_KEY_ID }} # S3 access key
  s3-secret-access-key: ${{ secrets.S3_SECRET_ACCESS_KEY }} # S3 secret key
  s3-bucket: your-bucket-name # S3 bucket name
  s3-region: us-east-1 # S3 bucket region
  s3-endpoint: https://s3.amazonaws.com # S3 endpoint
  s3-prefix: turbogha/ # Optional prefix for S3 objects (default: 'turbogha/')
```
By default, this action uses GitHub's built-in cache service, which offers:
- Seamless integration with GitHub Actions
- No additional setup required
- Automatic cache pruning by GitHub
For teams requiring more control over caching infrastructure, the action supports Amazon S3 or compatible storage:
- Store cache artifacts in your own S3 bucket
- Works with any S3-compatible storage (AWS S3, MinIO, DigitalOcean Spaces, etc.)
- Greater control over retention policies and storage costs
- Useful for larger organizations with existing S3 infrastructure
Important: by default, cached files are stored indefinitely. Setting `max-size` (or another cleanup option) is strongly recommended to avoid unexpected storage costs.
Example S3 configuration:
```yaml
- name: Cache for Turbo
  uses: rharkor/caching-for-turbo@v1.8
  with:
    provider: s3
    s3-access-key-id: ${{ secrets.S3_ACCESS_KEY_ID }}
    s3-secret-access-key: ${{ secrets.S3_SECRET_ACCESS_KEY }}
    s3-bucket: my-turbo-cache-bucket
    s3-region: us-west-2
    s3-endpoint: https://s3.amazonaws.com
```
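Because the endpoint is configurable, you can point the action at any S3-compatible service instead of AWS. A self-hosted MinIO setup might look like the sketch below; the endpoint, bucket name, and secret names are hypothetical:

```yaml
- name: Cache for Turbo
  uses: rharkor/caching-for-turbo@v1.8
  with:
    provider: s3
    s3-access-key-id: ${{ secrets.MINIO_ACCESS_KEY }}     # hypothetical secret name
    s3-secret-access-key: ${{ secrets.MINIO_SECRET_KEY }} # hypothetical secret name
    s3-bucket: turbo-cache                # hypothetical bucket
    s3-region: us-east-1                  # often nominal for S3-compatible stores
    s3-endpoint: https://minio.example.com # hypothetical self-hosted endpoint
```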
To prevent unbounded growth of your cache (especially important when using S3 storage), you can configure automatic cleanup using one or more of these options:
```yaml
with:
  # Cleanup by age - remove cache entries older than the specified duration
  max-age: 1mo # e.g., 1d (1 day), 1w (1 week), 1mo (1 month)

  # Cleanup by count - keep only the specified number of most recent cache entries
  max-files: 300 # e.g., limit to 300 files

  # Cleanup by size - remove oldest entries when total size exceeds the limit
  max-size: 10gb # e.g., 100mb, 5gb, 10gb
```
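To make the `max-age` format concrete, here is a small sketch of how duration strings like `1d`, `2w`, or `1mo` could be interpreted. This is an illustration only; the action's actual parser (and its definition of a month) may differ:

```python
import re

# Assumed unit table: a month is approximated as 30 days.
UNIT_SECONDS = {
    "d": 86_400,     # day
    "w": 604_800,    # week
    "mo": 2_592_000, # month (30 days)
}

def parse_max_age(value: str) -> int:
    """Convert a duration string such as '2w' into seconds."""
    match = re.fullmatch(r"(\d+)(mo|w|d)", value.strip())
    if not match:
        raise ValueError(f"unrecognized duration: {value!r}")
    amount, unit = match.groups()
    return int(amount) * UNIT_SECONDS[unit]

print(parse_max_age("1d"))   # 86400
print(parse_max_age("2w"))   # 1209600
print(parse_max_age("1mo"))  # 2592000
```

An entry would then be eligible for cleanup once `now - entry_created > parse_max_age(max_age)`.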
When using the GitHub provider, the built-in cache has its own pruning mechanism, but these options can still be useful for more precise control.
For S3 storage, implementing these cleanup options is highly recommended to control storage costs, as S3 does not automatically remove old cache entries.
Example with cleanup configuration:
```yaml
- name: Cache for Turbo
  uses: rharkor/caching-for-turbo@v1.8
  with:
    provider: s3
    s3-access-key-id: ${{ secrets.S3_ACCESS_KEY_ID }}
    s3-secret-access-key: ${{ secrets.S3_SECRET_ACCESS_KEY }}
    s3-bucket: my-turbo-cache-bucket
    s3-region: us-west-2
    s3-endpoint: https://s3.amazonaws.com
    # Cleanup configuration
    max-age: 2w
    max-size: 5gb
```
- Start the development server:

  ```bash
  npm run dev-run
  ```

- In a separate terminal, execute the tests, then clean up:

  ```bash
  npm test -- --cache=remote:rw --no-daemon
  npm run cleanup
  ```
Licensed under the MIT License. For more details, see the LICENSE file.
This project is inspired by dtinth and has been comprehensively rewritten for enhanced robustness and reliability.