A scalable and flexible rate limiting service that provides API rate limiting capabilities as a service. This project lets you add rate limiting to your applications without managing the complexity of rate limiting infrastructure. It is built with Go and uses the `gorl` library for efficient rate limiting operations.
- Dynamic Rate Limiting Rules: Create and manage rate limiting rules per project and endpoint
- Flexible Rate Limiting Strategies: Support for various rate limiting strategies via `gorl`
- Distributed Redis Sharding:
  - Multiple Redis node support
  - Configurable sharding strategies (hash_mod, consistent_hash)
  - High availability and scalability
- RESTful API: Simple and intuitive API for managing rate limits
- Real-time Rate Limit Checking: Fast and efficient rate limit verification
- Project-based Management: Organize rate limits by projects
- Custom Key Generation: Flexible key generation for rate limiting (IP, user ID, custom keys)
The service is built using:
- Go (Golang) for the backend service
- `gorl` library for rate limiting implementation
- Redis for distributed rate limiting with sharding support
- PostgreSQL for storing configuration and rules
- Docker for containerization
```
rlaas/
├── cmd/api/                  # Application entry point
├── internal/
│   ├── database/             # Database operations
│   │   ├── database.go
│   │   └── database_test.go
│   ├── limiter/              # Rate limiting core
│   │   ├── limiter.go        # Main rate limiting logic
│   │   └── shard.go          # Redis sharding implementation
│   ├── models/               # Data models
│   │   ├── project.go        # Project entity
│   │   └── rule.go           # Rate limit rules
│   ├── server/               # HTTP server and handlers
│   │   ├── handlers/         # Request handlers
│   │   ├── routes.go         # API routes
│   │   └── server.go         # Server configuration
│   └── service/              # Business logic layer
│       ├── apikey.go         # API key management
│       ├── config.go         # Configuration
│       ├── project.go        # Project logic
│       └── rule.go           # Rule management
├── docker-compose.yml        # Docker composition
└── Makefile                  # Build and development commands
```
- Go 1.19 or higher
- Docker and Docker Compose
- Redis (for distributed rate limiting)
- PostgreSQL (for configuration storage)
`GET /` - Service health check

`GET /health` - Database health status
`POST /register` - Register a new project

```json
{ "name": "project_name", "api_key": "your_api_key" }
```
`GET /rules` - List all rules

`POST /rule/add` - Create a new rate limit rule

```json
{
  "project_id": 1,
  "endpoint": "/api/resource",
  "strategy": "fixed_window",
  "key_by": "ip",
  "limit_count": 100,
  "window_seconds": 3600
}
```
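For illustration, a rule matching the body above could be created from Go roughly as follows. This is a sketch only: the base URL `http://localhost:8080` is an assumption, since the README does not state the service's address or port.

```go
package main

import (
    "bytes"
    "fmt"
    "net/http"
)

func main() {
    // Same payload as the /rule/add example above.
    body := []byte(`{
        "project_id": 1,
        "endpoint": "/api/resource",
        "strategy": "fixed_window",
        "key_by": "ip",
        "limit_count": 100,
        "window_seconds": 3600
    }`)

    // Base URL is assumed; point this at wherever the service is running.
    resp, err := http.Post("http://localhost:8080/rule/add", "application/json", bytes.NewReader(body))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
}
```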
`PUT /rule/` - Update a rule
`DELETE /rule/` - Delete a rule
`POST /check` - Check if a request is within rate limits

```json
{ "api_key": "your_api_key", "endpoint": "/api/resource", "key": "127.0.0.1" }
```
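On the request path, an application would call `POST /check` before serving each request. Below is a minimal sketch, again assuming a local base URL; the response body is printed as-is because its exact shape is not documented here.

```go
package main

import (
    "bytes"
    "fmt"
    "io"
    "net/http"
)

func main() {
    payload := []byte(`{"api_key": "your_api_key", "endpoint": "/api/resource", "key": "127.0.0.1"}`)

    // Base URL is assumed; adjust to your deployment.
    resp, err := http.Post("http://localhost:8080/check", "application/json", bytes.NewReader(payload))
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    out, _ := io.ReadAll(resp.Body)
    fmt.Println("status:", resp.Status)
    fmt.Println("response:", string(out))
}
```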
The service supports various rate limiting configurations:
```go
type RateLimitConfig struct {
    Strategy     core.StrategyType  // Rate limiting strategy
    KeyBy        core.KeyFuncType   // Key generation method
    Limit        int                // Rate limit count
    Window       time.Duration      // Time window
    RedisCluster RedisClusterConfig // Redis configuration
}
```
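As an illustration of how the fields fit together, the sketch below maps the `/rule/add` example (fixed_window, keyed by IP, 100 requests per 3600-second window) onto such a config. The type definitions here are local stand-ins for the real `gorl` core types and the project's own types, since their exact names and values are not shown in this README.

```go
package main

import (
    "fmt"
    "time"
)

// Stand-in types for illustration only; the real ones come from gorl's core
// package and this project's limiter package.
type StrategyType string
type KeyFuncType string
type RedisClusterConfig struct{ Nodes []string }

type RateLimitConfig struct {
    Strategy     StrategyType
    KeyBy        KeyFuncType
    Limit        int
    Window       time.Duration
    RedisCluster RedisClusterConfig
}

func main() {
    cfg := RateLimitConfig{
        Strategy: "fixed_window",     // strategy from the rule
        KeyBy:    "ip",               // key generation method from the rule
        Limit:    100,                // limit_count
        Window:   3600 * time.Second, // window_seconds
        RedisCluster: RedisClusterConfig{
            Nodes: []string{"redis://localhost:6379/0"}, // assumed node address
        },
    }
    fmt.Printf("%+v\n", cfg)
}
```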
Supported rate limiting strategies (a fixed-window sketch follows these lists):

- Fixed Window
- Sliding Window
- Token Bucket
- Leaky Bucket

Supported key generation methods:

- IP Address
- User ID
- Custom Keys
- Combined Keys
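To make the first strategy concrete, here is a minimal fixed-window sketch using Redis `INCR` and `EXPIRE`, keyed by a combination of endpoint and client IP. This is illustrative only; it uses the go-redis client directly rather than `gorl` or this service's actual code.

```go
package main

import (
    "context"
    "fmt"
    "time"

    "github.com/redis/go-redis/v9"
)

// allowFixedWindow counts hits for (endpoint, ip) in the current time window
// and allows the request while the counter stays at or below the limit.
func allowFixedWindow(ctx context.Context, rdb *redis.Client, endpoint, ip string, limit int, window time.Duration) (bool, error) {
    // Bucket the current time so all hits in the same window share one key.
    bucket := time.Now().Unix() / int64(window.Seconds())
    key := fmt.Sprintf("rl:%s:%s:%d", endpoint, ip, bucket)

    count, err := rdb.Incr(ctx, key).Result()
    if err != nil {
        return false, err
    }
    if count == 1 {
        rdb.Expire(ctx, key, window) // first hit starts the window's TTL
    }
    return count <= int64(limit), nil
}

func main() {
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
    ok, err := allowFixedWindow(context.Background(), rdb, "/api/resource", "127.0.0.1", 100, time.Hour)
    fmt.Println("allowed:", ok, "err:", err)
}
```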
The service supports two sharding strategies:
- Hash Modulo: Simple distribution using a modulo operation (see the sketch after this list)
- Consistent Hashing: More balanced distribution with minimal key redistribution when nodes are added or removed
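For illustration, hash_mod node selection can be as simple as hashing the rate limit key and taking a modulo over the configured nodes, as sketched below; this is not the project's `shard.go`, just the idea. Consistent hashing would instead place nodes on a hash ring so that adding or removing a node remaps only a fraction of the keys.

```go
package main

import (
    "fmt"
    "hash/fnv"
)

// pickNodeHashMod hashes the rate limit key with FNV-1a and maps it onto one
// of the configured Redis nodes with a modulo.
func pickNodeHashMod(key string, nodes []string) string {
    h := fnv.New32a()
    h.Write([]byte(key))
    return nodes[h.Sum32()%uint32(len(nodes))]
}

func main() {
    nodes := []string{
        "redis://localhost:6379/0",
        "redis://localhost:6380/0",
        "redis://localhost:6381/0",
    }
    fmt.Println(pickNodeHashMod("rl:/api/resource:127.0.0.1", nodes))
}
```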
Configure sharding via environment variables:
```bash
SHARDING_STRATEGY=consistent_hash
REDIS_NODE_1=redis://localhost:6379/0
REDIS_NODE_2=redis://localhost:6380/0
REDIS_NODE_3=redis://localhost:6381/0
```
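How the service consumes these variables is not shown in this README; a plausible sketch is to read `SHARDING_STRATEGY` and collect `REDIS_NODE_1..N` in order until the first gap:

```go
package main

import (
    "fmt"
    "os"
)

// loadShardingConfig is an illustrative sketch, not the project's actual
// startup code: it reads the sharding strategy and numbered Redis node URLs
// from the environment.
func loadShardingConfig() (strategy string, nodes []string) {
    strategy = os.Getenv("SHARDING_STRATEGY")
    if strategy == "" {
        strategy = "hash_mod" // assumed default when unset
    }
    for i := 1; ; i++ {
        node := os.Getenv(fmt.Sprintf("REDIS_NODE_%d", i))
        if node == "" {
            break
        }
        nodes = append(nodes, node)
    }
    return strategy, nodes
}

func main() {
    strategy, nodes := loadShardingConfig()
    fmt.Println("strategy:", strategy, "nodes:", nodes)
}
```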
- Fork the repository
- Create your feature branch (`git checkout -b feature/new-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/new-feature`)
- Open a Pull Request