jeremylongshore

linear-cost-tuning

@jeremylongshore/linear-cost-tuning
jeremylongshore
1,004
123 forks
Updated 1/18/2026
View on GitHub

Optimize Linear API usage and manage costs effectively. Use when reducing API calls, managing rate limits efficiently, or optimizing integration costs. Trigger with phrases like "linear cost", "reduce linear API calls", "linear efficiency", "linear API usage", "optimize linear costs".

Installation

$skills install @jeremylongshore/linear-cost-tuning
Claude Code
Cursor
Copilot
Codex
Antigravity

Details

Path: plugins/saas-packs/linear-pack/skills/linear-cost-tuning/SKILL.md
Branch: main
Scoped Name: @jeremylongshore/linear-cost-tuning

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

skills list

Skill Instructions


name: linear-cost-tuning
description: |
  Optimize Linear API usage and manage costs effectively. Use when reducing API calls, managing rate limits efficiently, or optimizing integration costs. Trigger with phrases like "linear cost", "reduce linear API calls", "linear efficiency", "linear API usage", "optimize linear costs".
allowed-tools: Read, Write, Edit, Grep
version: 1.0.0
license: MIT
author: Jeremy Longshore <jeremy@intentsolutions.io>

Linear Cost Tuning

Overview

Optimize Linear API usage to maximize efficiency and minimize costs.

Prerequisites

  • Working Linear integration
  • Monitoring in place
  • Understanding of usage patterns

Cost Factors

API Request Costs

Factor           | Impact             | Optimization Strategy
Request count    | Direct rate limit  | Batch operations
Query complexity | Complexity limit   | Minimal field selection
Payload size     | Bandwidth/latency  | Pagination, filtering
Webhook volume   | Processing costs   | Event filtering

Instructions

Step 1: Audit Current Usage

// lib/usage-tracker.ts
interface UsageStats {
  requests: number;
  complexity: number;
  bytesTransferred: number;
  period: { start: Date; end: Date };
}

class UsageTracker {
  private stats: UsageStats = {
    requests: 0,
    complexity: 0,
    bytesTransferred: 0,
    period: { start: new Date(), end: new Date() },
  };

  recordRequest(complexity: number, bytes: number): void {
    this.stats.requests++;
    this.stats.complexity += complexity;
    this.stats.bytesTransferred += bytes;
    this.stats.period.end = new Date();
  }

  getStats(): UsageStats {
    return { ...this.stats };
  }

  getDaily(): {
    avgRequestsPerHour: number;
    avgComplexityPerRequest: number;
    projectedMonthlyRequests: number;
  } {
    const hours =
      (this.stats.period.end.getTime() - this.stats.period.start.getTime()) /
      (1000 * 60 * 60);

    return {
      avgRequestsPerHour: this.stats.requests / Math.max(hours, 1),
      avgComplexityPerRequest: this.stats.complexity / Math.max(this.stats.requests, 1),
      projectedMonthlyRequests: (this.stats.requests / Math.max(hours, 1)) * 24 * 30,
    };
  }

  reset(): void {
    this.stats = {
      requests: 0,
      complexity: 0,
      bytesTransferred: 0,
      period: { start: new Date(), end: new Date() },
    };
  }
}

export const usageTracker = new UsageTracker();
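The projection math in getDaily() can also drive a budget alert. A minimal sketch, where projectedMonthly and overBudget are hypothetical helpers mirroring that arithmetic, with illustrative numbers:

```typescript
// Project observed traffic to a 30-day month, mirroring getDaily()'s formula.
function projectedMonthly(requests: number, hours: number): number {
  return (requests / Math.max(hours, 1)) * 24 * 30;
}

// Flag when the current pace would blow past a monthly request budget.
function overBudget(requests: number, hours: number, monthlyBudget: number): boolean {
  return projectedMonthly(requests, hours) > monthlyBudget;
}

// 120 requests over 2 hours -> 60/hour -> 43,200/month at this pace.
console.log(projectedMonthly(120, 2)); // 43200
console.log(overBudget(120, 2, 50000)); // false
```

Running a check like this on a schedule turns the tracker from passive bookkeeping into an early warning before rate limits bite.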

Step 2: Reduce Request Volume

Polling vs Webhooks:

// BAD: Polling every minute
setInterval(async () => {
  const issues = await client.issues({ first: 100 });
  await syncIssues(issues.nodes);
}, 60000);

// GOOD: Use webhooks for real-time updates
// See linear-webhooks-events skill
app.post("/webhooks/linear", async (req, res) => {
  const event = req.body;
  await handleEvent(event);
  res.sendStatus(200);
});

Conditional Fetching:

// lib/conditional-fetch.ts
// Note: despite the ETag naming, Linear's GraphQL API is served over POST,
// so this is a TTL cache rather than true HTTP conditional fetching.
interface ETagCache {
  data: any;
  etag: string;
  timestamp: Date;
}

const etagCache = new Map<string, ETagCache>();

async function fetchWithETag(key: string, fetcher: () => Promise<any>) {
  const cached = etagCache.get(key);

  // Serve from cache while it is fresh (less than 5 minutes old)
  if (cached && Date.now() - cached.timestamp.getTime() < 5 * 60 * 1000) {
    return cached.data;
  }

  const data = await fetcher();
  etagCache.set(key, {
    data,
    etag: JSON.stringify(data).slice(0, 50), // crude fingerprint, not a real ETag
    timestamp: new Date(),
  });

  return data;
}

Step 3: Optimize Query Complexity

Calculate Complexity:

// Linear complexity estimation
// - Each field costs 1
// - Each connection costs 1 + (first * child_complexity)
// - Nested connections multiply

// BAD: High complexity query (~500 complexity)
const expensiveQuery = `
  query {
    issues(first: 50) {
      nodes {
        id
        title
        assignee { name }
        labels { nodes { name } }
        comments(first: 10) {
          nodes { body user { name } }
        }
      }
    }
  }
`;

// GOOD: Low complexity query (~100 complexity)
const cheapQuery = `
  query {
    issues(first: 50) {
      nodes {
        id
        identifier
        title
        priority
      }
    }
  }
`;
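The costing rules above can be turned into a rough, self-contained estimator. This is a sketch of the stated rules, not Linear's actual accounting, so treat the numbers as relative rather than exact:

```typescript
// A selection is some scalar fields plus zero or more paginated connections.
interface Selection {
  fields: number; // count of scalar fields selected
  connections?: { first: number; child: Selection }[];
}

// Each scalar field costs 1; a connection costs 1 plus
// (first * complexity of its child selection), applied recursively.
function estimateComplexity(sel: Selection): number {
  let cost = sel.fields;
  for (const conn of sel.connections ?? []) {
    cost += 1 + conn.first * estimateComplexity(conn.child);
  }
  return cost;
}

// The "cheap" query: one connection fetching 50 issues x 4 scalar fields.
const cheapCost = estimateComplexity({
  fields: 0,
  connections: [{ first: 50, child: { fields: 4 } }],
});
console.log(cheapCost); // 201
```

Even an approximate estimator like this is useful for comparing candidate queries before sending them, since nested connections multiply and dominate the total.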

Step 4: Implement Request Coalescing

// lib/coalesce.ts
class RequestCoalescer {
  private pending = new Map<string, Promise<any>>();

  async execute<T>(key: string, fn: () => Promise<T>): Promise<T> {
    // If same request is already in flight, reuse it
    const existing = this.pending.get(key);
    if (existing) {
      return existing;
    }

    const promise = fn().finally(() => {
      this.pending.delete(key);
    });

    this.pending.set(key, promise);
    return promise;
  }
}

const coalescer = new RequestCoalescer();

// Multiple simultaneous calls reuse the same request
const [teams1, teams2, teams3] = await Promise.all([
  coalescer.execute("teams", () => client.teams()),
  coalescer.execute("teams", () => client.teams()), // Reuses first request
  coalescer.execute("teams", () => client.teams()), // Reuses first request
]);

Step 5: Webhook Event Filtering

// Only process relevant events
function shouldProcessEvent(event: any): boolean {
  // Skip events from bots
  if (event.data?.actor?.isBot) return false;

  // Only process certain issue states
  if (event.type === "Issue" && event.action === "update") {
    const importantFields = ["state", "priority", "assignee"];
    const changedFields = Object.keys(event.updatedFrom || {});

    if (!changedFields.some(f => importantFields.includes(f))) {
      return false; // Skip trivial updates
    }
  }

  // Only process issues from specific teams
  const allowedTeams = ["ENG", "PROD"];
  if (event.data?.team?.key && !allowedTeams.includes(event.data.team.key)) {
    return false;
  }

  return true;
}

Step 6: Lazy Loading Pattern

// lib/lazy-client.ts
import { LinearClient } from "@linear/sdk";

class LazyLinearClient {
  private client: LinearClient;
  private teamsCache: any[] | null = null;
  private statesCache = new Map<string, any[]>();

  constructor(apiKey: string) {
    this.client = new LinearClient({ apiKey });
  }

  async getTeams() {
    if (!this.teamsCache) {
      const teams = await this.client.teams();
      this.teamsCache = teams.nodes;
    }
    return this.teamsCache;
  }

  async getStatesForTeam(teamKey: string) {
    if (!this.statesCache.has(teamKey)) {
      const teams = await this.client.teams({
        filter: { key: { eq: teamKey } },
      });
      const states = await teams.nodes[0].states();
      this.statesCache.set(teamKey, states.nodes);
    }
    return this.statesCache.get(teamKey)!;
  }

  // Invalidate on known changes
  invalidateTeams() {
    this.teamsCache = null;
    this.statesCache.clear();
  }
}
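The per-key caching that LazyLinearClient hand-rolls generalizes to a small helper. A sketch (memoizeAsync is not part of the Linear SDK; the stub loader stands in for a real states-per-team fetch):

```typescript
// Memoize an async loader per key. The promise itself is cached, so
// concurrent callers for the same key also share one in-flight request.
function memoizeAsync<T>(fn: (key: string) => Promise<T>): (key: string) => Promise<T> {
  const cache = new Map<string, Promise<T>>();
  return (key: string) => {
    let hit = cache.get(key);
    if (!hit) {
      hit = fn(key);
      cache.set(key, hit);
    }
    return hit;
  };
}

// Stub loader standing in for a real API call like team.states().
let loads = 0;
const loadStates = memoizeAsync(async (teamKey: string) => {
  loads++;
  return [`${teamKey}-backlog`, `${teamKey}-done`];
});

const p1 = loadStates("ENG");
const p2 = loadStates("ENG"); // same cached promise; the loader ran once
console.log(p1 === p2); // true
console.log(loads); // 1
```

Caching the promise rather than the resolved value is the key design choice: it gives request coalescing and lazy loading in one mechanism.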

Cost Reduction Checklist

  • Replace polling with webhooks
  • Implement request caching
  • Use request coalescing
  • Filter webhook events
  • Minimize query complexity
  • Batch related operations
  • Use lazy loading for static data
  • Monitor and track usage
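The "batch related operations" item can be sketched with GraphQL aliases: several mutations travel in one document, so one HTTP round trip replaces N. The `issueUpdate` mutation name matches Linear's schema; the issue and state IDs here are hypothetical:

```typescript
// Build one GraphQL document that updates several issues via aliases
// (u0, u1, ...), turning N separate requests into a single round trip.
function buildBatchUpdate(issueIds: string[], stateId: string): string {
  const mutations = issueIds
    .map(
      (id, i) =>
        `u${i}: issueUpdate(id: "${id}", input: { stateId: "${stateId}" }) { success }`
    )
    .join("\n  ");
  return `mutation {\n  ${mutations}\n}`;
}

const doc = buildBatchUpdate(["issue-1", "issue-2"], "state-done");
console.log(doc.includes("u0: issueUpdate")); // true
console.log(doc.includes("u1: issueUpdate")); // true
```

Note that complexity cost still scales with the work requested; batching saves per-request overhead and rate-limit slots, not complexity.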

Monitoring Dashboard

// Example metrics to track. counter() and histogram() are stand-ins for
// your metrics library's constructors (e.g. prom-client-style helpers).
const metrics = {
  // Request metrics
  totalRequests: counter("linear_requests_total"),
  requestDuration: histogram("linear_request_duration_seconds"),
  complexityCost: histogram("linear_complexity_cost"),

  // Cache metrics
  cacheHits: counter("linear_cache_hits_total"),
  cacheMisses: counter("linear_cache_misses_total"),

  // Webhook metrics
  webhooksReceived: counter("linear_webhooks_received_total"),
  webhooksFiltered: counter("linear_webhooks_filtered_total"),
};
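From the cache counters, a hit rate worth watching can be derived. A minimal sketch (the numbers are illustrative):

```typescript
// Cache hit rate = hits / (hits + misses). A falling rate suggests the
// TTL is too short or the cache keys are too fine-grained.
function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

console.log(cacheHitRate(90, 10)); // 0.9
console.log(cacheHitRate(0, 0));   // 0 (no traffic yet)
```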


Next Steps

Learn production architecture with linear-reference-architecture.

More by jeremylongshore

View all
rabbitmq-queue-setup
1,004

Rabbitmq Queue Setup - Auto-activating skill for Backend Development. Triggers on: "rabbitmq queue setup". Part of the Backend Development skill category.

model-evaluation-suite
1,004

evaluating-machine-learning-models: This skill allows Claude to evaluate machine learning models using a comprehensive suite of metrics. It should be used when the user requests model performance analysis, validation, or testing. Claude can use this skill to assess model accuracy, precision, recall, F1-score, and other relevant metrics. Trigger this skill when the user mentions "evaluate model", "model performance", "testing metrics", "validation results", or requests a comprehensive "model evaluation".

neural-network-builder
1,004

building-neural-networks: This skill allows Claude to construct and configure neural network architectures using the neural-network-builder plugin. It should be used when the user requests the creation of a new neural network, modification of an existing one, or assistance with defining the layers, parameters, and training process. The skill is triggered by requests involving terms like "build a neural network," "define network architecture," "configure layers," or specific mentions of neural network types (e.g., "CNN," "RNN," "transformer").

oauth-callback-handler
1,004

Oauth Callback Handler - Auto-activating skill for API Integration. Triggers on: "oauth callback handler". Part of the API Integration skill category.