Test at extremes (1000x bigger/smaller, instant/year-long) to expose fundamental truths hidden at normal scales
Installation
After installing, this skill will be available to your AI coding assistant.
Verify installation:
skills list
Skill Instructions
name: Scale Game
description: Test at extremes (1000x bigger/smaller, instant/year-long) to expose fundamental truths hidden at normal scales
when_to_use: when uncertain about scalability, edge cases unclear, or validating architecture for production volumes
version: 1.1.0
Scale Game
Overview
Test your approach at extreme scales to find what breaks and what surprisingly survives.
Core principle: Extremes expose fundamental truths hidden at normal scales.
Quick Reference
| Scale Dimension | Test At Extremes | What It Reveals |
|---|---|---|
| Volume | 1 item vs 1B items | Algorithmic complexity limits |
| Speed | Instant vs 1 year | Async requirements, caching needs |
| Users | 1 user vs 1B users | Concurrency issues, resource limits |
| Duration | Milliseconds vs years | Memory leaks, state growth |
| Failure rate | Never fails vs always fails | Error handling adequacy |
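The Volume row is the easiest one to feel directly: an algorithm that looks fine at normal volume can collapse at the extreme end. A minimal Python sketch, assuming a deliberately naive `naive_dedupe` as a stand-in for the code under test:

```python
import time

def naive_dedupe(items):
    """Deliberately O(n^2): the linear `in` scan is invisible at small n."""
    out = []
    for x in items:
        if x not in out:  # scans the whole output list for every item
            out.append(x)
    return out

# Push toward the extremes. 1B is impractical to run directly, but the
# trend is already unmistakable three orders of magnitude apart.
for n in (1, 1_000, 10_000):
    data = list(range(n))
    start = time.perf_counter()
    naive_dedupe(data)
    print(f"n={n:>6,}: {time.perf_counter() - start:.4f}s")
```

Quadratic growth that costs microseconds at n=1 already costs seconds at n=10,000; at 1B items it would never finish, which is exactly the limit the Volume row is probing for.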
Process
- Pick a dimension - What could vary extremely?
- Test the minimum - What if this were 1000x smaller/faster/fewer?
- Test the maximum - What if this were 1000x bigger/slower/more?
- Note what breaks - Where do limits appear?
- Note what survives - What's fundamentally sound? (See the harness sketch below.)
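A minimal sketch of that loop as a throwaway harness; `process_batch` and the `EXTREMES` values are illustrative placeholders, not part of the skill:

```python
def process_batch(items: list[int]) -> int:
    return sum(items)  # placeholder for the real system under test

EXTREMES = {
    "empty": [],                     # 1000x smaller: does zero input break anything?
    "single": [1],                   # the minimum that still does work
    "huge": list(range(1_000_000)),  # 1000x bigger: where do limits appear?
}

for name, items in EXTREMES.items():
    try:
        result = process_batch(items)
        print(f"{name}: survived (result={result})")   # note what survives
    except Exception as exc:                           # note what breaks, and how
        print(f"{name}: broke with {type(exc).__name__}: {exc}")
```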
Examples
Example 1: Error Handling
Normal scale: "Handle errors when they occur" works fine At 1B scale: Error volume overwhelms logging, crashes system Reveals: Need to make errors impossible (type systems) or expect them (chaos engineering)
Example 2: Synchronous APIs
Normal scale: direct function calls work.
At global scale: network latency makes synchronous calls unusable.
Reveals: async/messaging becomes a survival requirement, not an optimization.
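A toy model of the same effect, assuming ~50 ms per cross-region round trip; `remote_call` is a stand-in for a real network call:

```python
import asyncio
import time

LATENCY = 0.05  # assumed ~50 ms per cross-region network hop

async def remote_call(i: int) -> int:
    await asyncio.sleep(LATENCY)  # stands in for a real round trip
    return i

async def main() -> None:
    start = time.perf_counter()
    for i in range(20):  # synchronous style: 20 hops, latency adds up linearly
        await remote_call(i)
    print(f"sequential: {time.perf_counter() - start:.2f}s")  # ~1.0s

    start = time.perf_counter()
    await asyncio.gather(*(remote_call(i) for i in range(20)))  # hops overlap
    print(f"concurrent: {time.perf_counter() - start:.2f}s")  # ~0.05s

asyncio.run(main())
```

At 20 calls the sequential version is merely slow; scale the fan-out by 1000x and it stops being a usable API at all.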
Example 3: In-Memory State
Normal duration: works fine for hours or days.
At years: memory grows unbounded, leading to an eventual crash.
Reveals: need persistence or periodic cleanup; you can't rely on memory alone.
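A minimal sketch of the "periodic cleanup" fix: cap the state so a year-long process stays bounded (the class and limit here are illustrative):

```python
from collections import OrderedDict

class BoundedCache:
    """Evicts the oldest entry once a size cap is hit, so memory
    stays flat no matter how long the process runs."""

    def __init__(self, max_items: int = 10_000):
        self._data: OrderedDict[str, str] = OrderedDict()
        self._max = max_items

    def put(self, key: str, value: str) -> None:
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self._max:
            self._data.popitem(last=False)  # evict oldest instead of leaking

    def keys(self) -> list[str]:
        return list(self._data)

cache = BoundedCache(max_items=2)
for k in ("a", "b", "c"):
    cache.put(k, k.upper())
print(cache.keys())  # ['b', 'c'] -- 'a' was evicted; growth stays bounded
```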
Red Flags You Need This
- "It works in dev" (but will it work in production?)
- No idea where limits are
- "Should scale fine" (without testing)
- Surprised by production behavior
Remember
- Extremes reveal fundamentals
- What works at one scale fails at another
- Test both directions (bigger AND smaller)
- Use insights to validate architecture early