# Rate Limits

The Statly API enforces rate limits to ensure fair usage and platform stability.
## Limits by Plan
| Plan | Requests per Minute |
|---|---|
| Free | 50 |
| Hobby | 100 |
| Pro | 300 |
| Enterprise | 1000 |
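The per-minute ceilings above can also be enforced client-side before a request ever leaves your process. The sketch below is illustrative only; the `RateLimiter` class is a hypothetical helper, not part of any Statly SDK:

```javascript
// Minimal sliding-window limiter: allows at most `limit` calls per window.
// The class name and API are our own, not part of the Statly API.
class RateLimiter {
  constructor(limit, windowMs = 60000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true (and records the call) if a request may be sent now.
  tryAcquire(now = Date.now()) {
    // Drop timestamps that have aged out of the window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) {
      return false;
    }
    this.timestamps.push(now);
    return true;
  }
}

const limiter = new RateLimiter(300); // e.g. Pro plan: 300 requests/minute
```

Gate each outgoing request on `limiter.tryAcquire()` to avoid burning quota on calls that would be rejected anyway.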
## Rate Limit Headers

Every API response includes rate limit headers:
```
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 299
X-RateLimit-Reset: 1705320000
```

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed per window |
| `X-RateLimit-Remaining` | Requests remaining in current window |
| `X-RateLimit-Reset` | Unix timestamp when the window resets |
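These three headers can be folded into a small status object for logging or throttling decisions. A minimal sketch; the `parseRateLimit` helper name is ours, not part of any SDK:

```javascript
// Summarize the rate limit headers from a fetch Response.
// `headers` is anything with a get(name) method, e.g. response.headers.
function parseRateLimit(headers, nowMs = Date.now()) {
  const limit = Number(headers.get('X-RateLimit-Limit'));
  const remaining = Number(headers.get('X-RateLimit-Remaining'));
  const resetUnix = Number(headers.get('X-RateLimit-Reset'));
  return {
    limit,
    remaining,
    // Seconds until the window resets (clamped to zero).
    resetInSeconds: Math.max(0, resetUnix - Math.floor(nowMs / 1000)),
  };
}
```

Note that header values arrive as strings, so they must be converted before any numeric comparison.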
## Rate Limit Exceeded

When you exceed the limit, you'll receive a `429 Too Many Requests` response:
```json
{
  "error": "rate_limited",
  "message": "Rate limit exceeded. Try again in 42 seconds.",
  "retryAfter": 42
}
```

The `Retry-After` header indicates the number of seconds to wait before retrying:

```
HTTP/1.1 429 Too Many Requests
Retry-After: 42
```

## Handling Rate Limits

### Exponential Backoff

Implement exponential backoff for retries:
```javascript
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);
    if (response.status === 429) {
      // Prefer the server's Retry-After (seconds); fall back to 2^i seconds.
      // Number(null) is 0, so a missing header also triggers the fallback.
      const retryAfter = Number(response.headers.get('Retry-After')) || Math.pow(2, i);
      await sleep(retryAfter * 1000);
      continue;
    }
    return response;
  }
  throw new Error('Max retries exceeded');
}

function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```

### Check Headers Proactively

Before hitting the limit, check remaining requests:
```javascript
async function makeRequest(url, options) {
  const response = await fetch(url, options);
  // Header values are strings; convert before comparing numerically.
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  if (remaining < 10) {
    console.warn(`Low on rate limit: ${remaining} remaining`);
  }
  return response;
}
```

## Best Practices

### Batch Requests

Instead of making many small requests, use batch endpoints:
```
# Bad: 10 separate requests
GET /api/v1/monitors/1
GET /api/v1/monitors/2
...

# Good: 1 request
GET /api/v1/monitors?ids=1,2,3,4,5,6,7,8,9,10
```

### Cache Responses

Cache data that doesn't change frequently:
```javascript
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getMonitors() {
  const cached = cache.get('monitors');
  if (cached && Date.now() - cached.time < CACHE_TTL) {
    return cached.data;
  }
  const response = await fetch('/api/v1/monitors');
  const data = await response.json();
  cache.set('monitors', { data, time: Date.now() });
  return data;
}
```

### Use Webhooks

For real-time updates, use webhooks instead of polling:
```javascript
// Bad: Polling every 10 seconds
setInterval(async () => {
  const response = await fetch('/api/v1/incidents');
  const incidents = await response.json();
  // Process incidents
}, 10000);

// Good: Receive webhook notifications
app.post('/webhooks/statly', (req, res) => {
  const { event, incident } = req.body;
  // Process incident update
  res.status(200).send('OK');
});
```

## Public Endpoints

Some endpoints have separate, stricter limits:
| Endpoint | Limit |
|---|---|
| `POST /api/v1/subscribe` | 10/minute per IP |
| `GET /api/v1/status/{slug}` | 100/minute per IP |
| `GET /api/v1/widget/{slug}` | 100/minute per IP |
These limits are IP-based and don't count against your API key quota.
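For anything that polls a public endpoint, the per-IP cap translates directly into a minimum spacing between requests. A minimal sketch; `safePollIntervalMs` is a hypothetical helper name of ours:

```javascript
// Minimum delay (ms) between polls to stay under a per-minute cap,
// with headroom (0.8 = use at most 80% of the cap) to absorb other
// traffic from the same IP.
function safePollIntervalMs(requestsPerMinute, headroom = 0.8) {
  return Math.ceil(60000 / (requestsPerMinute * headroom));
}

// GET /api/v1/status/{slug} allows 100 requests/minute per IP:
const intervalMs = safePollIntervalMs(100); // 750 ms between polls
```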
## Enterprise Limits

Enterprise customers can request:
- Higher rate limits
- Dedicated rate limit pools
- Priority request queuing
Contact [email protected] for enterprise rate limit configuration.