customerio-rate-limits

@jeremylongshore/customerio-rate-limits
by jeremylongshore
1,004 · 123 forks
Updated 1/18/2026
View on GitHub

Implement Customer.io rate limiting and backoff. Use when handling high-volume API calls, implementing retry logic, or optimizing API usage. Trigger with phrases like "customer.io rate limit", "customer.io throttle", "customer.io 429", "customer.io backoff".

Installation

$ skills install @jeremylongshore/customerio-rate-limits

Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: plugins/saas-packs/customerio-pack/skills/customerio-rate-limits/SKILL.md
Branch: main
Scoped Name: @jeremylongshore/customerio-rate-limits

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

skills list

Skill Instructions


---
name: customerio-rate-limits
description: |
  Implement Customer.io rate limiting and backoff. Use when handling high-volume
  API calls, implementing retry logic, or optimizing API usage. Trigger with
  phrases like "customer.io rate limit", "customer.io throttle",
  "customer.io 429", "customer.io backoff".
allowed-tools: Read, Grep, Bash(curl:*)
version: 1.0.0
license: MIT
author: Jeremy Longshore <jeremy@intentsolutions.io>
---

Customer.io Rate Limits

Overview

Understand and implement proper rate limiting and backoff strategies for the Customer.io API.

Rate Limit Details

Track API Limits

Endpoint           Limit                  Window
Identify           100 requests/second    Per workspace
Track events       100 requests/second    Per workspace
Batch operations   100 requests/second    Per workspace
Page/screen        100 requests/second    Per workspace

App API Limits

Endpoint              Limit        Window
Transactional email   100/second   Per workspace
Transactional push    100/second   Per workspace
API queries           10/second    Per workspace
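These per-second limits translate directly into pacing intervals. As a quick sketch (the endpoint keys and helper below are illustrative, not Customer.io API identifiers):

```typescript
// Illustrative mapping from the tables above to per-request spacing.
// Keys and the helper name are hypothetical, not part of any SDK.
const API_LIMITS: Record<string, number> = {
  track: 100,          // Track API endpoints: 100 requests/second
  transactional: 100,  // Transactional email/push: 100/second
  appQueries: 10,      // App API queries: 10/second
};

// Minimum milliseconds between requests to stay under a given limit.
function minIntervalMs(endpoint: string): number {
  return 1000 / API_LIMITS[endpoint];
}

console.log(minIntervalMs('appQueries')); // 100 (ms between App API queries)
```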

Instructions

Step 1: Implement Rate Limiter

// lib/rate-limiter.ts
class RateLimiter {
  private tokens: number;
  private lastRefill: number;
  private readonly maxTokens: number;
  private readonly refillRate: number;

  constructor(maxRequestsPerSecond: number = 100) {
    this.maxTokens = maxRequestsPerSecond;
    this.tokens = maxRequestsPerSecond;
    this.refillRate = maxRequestsPerSecond;
    this.lastRefill = Date.now();
  }

  private refill(): void {
    const now = Date.now();
    const elapsed = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.maxTokens, this.tokens + elapsed * this.refillRate);
    this.lastRefill = now;
  }

  async acquire(): Promise<void> {
    this.refill();

    if (this.tokens >= 1) {
      this.tokens -= 1;
      return;
    }

    // Wait for token to become available
    const waitTime = ((1 - this.tokens) / this.refillRate) * 1000;
    await new Promise(resolve => setTimeout(resolve, waitTime));
    this.tokens = 0;
    this.lastRefill = Date.now();
  }
}

export const trackApiLimiter = new RateLimiter(100);

Step 2: Implement Exponential Backoff

// lib/backoff.ts
interface BackoffConfig {
  maxRetries: number;
  baseDelay: number;
  maxDelay: number;
  jitterFactor: number;
}

const defaultConfig: BackoffConfig = {
  maxRetries: 5,
  baseDelay: 1000,
  maxDelay: 32000,
  jitterFactor: 0.1
};

function calculateDelay(attempt: number, config: BackoffConfig): number {
  const exponentialDelay = config.baseDelay * Math.pow(2, attempt);
  const cappedDelay = Math.min(exponentialDelay, config.maxDelay);
  const jitter = cappedDelay * config.jitterFactor * Math.random();
  return cappedDelay + jitter;
}

export async function withExponentialBackoff<T>(
  operation: () => Promise<T>,
  config: BackoffConfig = defaultConfig
): Promise<T> {
  let lastError: Error | undefined;

  for (let attempt = 0; attempt <= config.maxRetries; attempt++) {
    try {
      return await operation();
    } catch (error: any) {
      lastError = error;

      // Don't retry on client errors (except 429)
      if (error.statusCode >= 400 && error.statusCode < 500 && error.statusCode !== 429) {
        throw error;
      }

      if (attempt < config.maxRetries) {
        const delay = calculateDelay(attempt, config);
        console.log(`Retry ${attempt + 1}/${config.maxRetries} after ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
      }
    }
  }

  throw lastError;
}
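To sanity-check the default config, the undithered delay schedule can be computed directly (a sketch; `baseDelays` is a helper added here for illustration, not part of the skill):

```typescript
// Deterministic part of the backoff schedule above (jitter omitted):
// baseDelay * 2^attempt, capped at maxDelay.
function baseDelays(maxRetries: number, baseDelay = 1000, maxDelay = 32000): number[] {
  return Array.from({ length: maxRetries }, (_, attempt) =>
    Math.min(baseDelay * 2 ** attempt, maxDelay)
  );
}

console.log(baseDelays(5)); // [1000, 2000, 4000, 8000, 16000]
```

With the defaults, the cap of 32000 ms is only reached on the sixth attempt, so the total worst-case wait across five retries stays near 31 seconds before jitter.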

Step 3: Create Rate-Limited Client

// lib/customerio-rate-limited.ts
import { TrackClient, RegionUS } from 'customerio-node';
import { trackApiLimiter } from './rate-limiter';
import { withExponentialBackoff } from './backoff';

export class RateLimitedCustomerIO {
  private client: TrackClient;

  constructor() {
    this.client = new TrackClient(
      process.env.CUSTOMERIO_SITE_ID!,
      process.env.CUSTOMERIO_API_KEY!,
      { region: RegionUS }
    );
  }

  async identify(userId: string, attributes: Record<string, any>) {
    await trackApiLimiter.acquire();
    return withExponentialBackoff(() =>
      this.client.identify(userId, attributes)
    );
  }

  async track(userId: string, event: string, data?: Record<string, any>) {
    await trackApiLimiter.acquire();
    return withExponentialBackoff(() =>
      this.client.track(userId, { name: event, data })
    );
  }

  // Batch operations for high volume
  async batchIdentify(users: Array<{ id: string; attributes: Record<string, any> }>) {
    const results: Array<{ id: string; success: boolean; error?: string }> = [];

    for (const user of users) {
      await trackApiLimiter.acquire();
      try {
        await withExponentialBackoff(() =>
          this.client.identify(user.id, user.attributes)
        );
        results.push({ id: user.id, success: true });
      } catch (error: any) {
        results.push({ id: user.id, success: false, error: error.message });
      }
    }

    return results;
  }
}

Step 4: Handle 429 Response Headers

// lib/rate-limit-handler.ts
interface RateLimitInfo {
  remaining: number;
  resetTime: Date;
  retryAfter?: number;
}

function parseRateLimitHeaders(headers: Headers): RateLimitInfo | null {
  const remaining = headers.get('X-RateLimit-Remaining');
  const reset = headers.get('X-RateLimit-Reset');
  const retryAfter = headers.get('Retry-After');

  if (!remaining || !reset) return null;

  return {
    remaining: parseInt(remaining, 10),
    resetTime: new Date(parseInt(reset, 10) * 1000),
    retryAfter: retryAfter ? parseInt(retryAfter, 10) : undefined
  };
}

async function handleRateLimitResponse(response: Response): Promise<void> {
  if (response.status === 429) {
    const info = parseRateLimitHeaders(response.headers);
    const waitTime = info?.retryAfter || 60;

    console.warn(`Rate limited. Waiting ${waitTime}s before retry.`);
    await new Promise(resolve => setTimeout(resolve, waitTime * 1000));
  }
}
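One caveat for the parser above: per RFC 9110, Retry-After may carry either delta-seconds or an HTTP-date. A hedged sketch that normalizes both forms (`parseRetryAfter` is a suggested helper, not shown in the skill):

```typescript
// Retry-After can be delta-seconds ("120") or an HTTP-date
// ("Wed, 21 Oct 2026 07:28:00 GMT"); normalize both to seconds.
function parseRetryAfter(value: string, fallbackSeconds = 60): number {
  const seconds = Number(value);
  if (Number.isFinite(seconds)) return seconds;
  const at = Date.parse(value);
  if (Number.isNaN(at)) return fallbackSeconds;
  return Math.max(0, Math.ceil((at - Date.now()) / 1000));
}

console.log(parseRetryAfter('120')); // 120
```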

Step 5: Queue-Based Rate Limiting

// lib/customerio-queue.ts
import PQueue from 'p-queue';
import { TrackClient, RegionUS } from 'customerio-node';

const queue = new PQueue({
  concurrency: 10,
  interval: 1000,
  intervalCap: 100 // 100 requests per second
});

export class QueuedCustomerIO {
  private client: TrackClient;

  constructor() {
    this.client = new TrackClient(
      process.env.CUSTOMERIO_SITE_ID!,
      process.env.CUSTOMERIO_API_KEY!,
      { region: RegionUS }
    );
  }

  async identify(userId: string, attributes: Record<string, any>) {
    return queue.add(() => this.client.identify(userId, attributes));
  }

  async track(userId: string, event: string, data?: Record<string, any>) {
    return queue.add(() => this.client.track(userId, { name: event, data }));
  }

  // Get queue stats
  getStats() {
    return {
      pending: queue.pending,
      size: queue.size,
      isPaused: queue.isPaused
    };
  }
}

Output

  • Token bucket rate limiter
  • Exponential backoff with jitter
  • Rate-limited Customer.io client
  • Queue-based rate limiting

Error Handling

Scenario                Action
429 received            Respect Retry-After header
Burst traffic           Use queue with concurrency limit
Sustained high volume   Implement sliding window
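The last row mentions a sliding window, which the skill does not implement. A minimal sketch (the class name is illustrative; production code would prune timestamps more efficiently than filtering on every call):

```typescript
// Sliding-window counter: allow at most `limit` calls in any
// trailing window of `windowMs` milliseconds.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(private limit: number, private windowMs: number) {}

  // Returns true and records the call if under the limit, false otherwise.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that fell out of the trailing window.
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Unlike the token bucket in Step 1, this never lets a burst exceed the limit over any trailing window, at the cost of tracking one timestamp per recent request.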

Next Steps

After implementing rate limits, proceed to customerio-security-basics for security best practices.
