Agent Skills
jeremylongshore

running-chaos-tests

@jeremylongshore/running-chaos-tests
jeremylongshore
1,853
249 forks
Updated 4/6/2026

Execute chaos engineering experiments to test system resilience. Use when performing specialized testing. Trigger with phrases like "run chaos tests", "test resilience", or "inject failures".

Installation

$ npx agent-skills-cli install @jeremylongshore/running-chaos-tests
Works with: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: plugins/testing/chaos-engineering-toolkit/skills/running-chaos-tests/SKILL.md
Branch: main
Scoped Name: @jeremylongshore/running-chaos-tests

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


---
name: running-chaos-tests
description: |
  Execute chaos engineering experiments to test system resilience. Use when
  performing specialized testing. Trigger with phrases like "run chaos tests",
  "test resilience", or "inject failures".
allowed-tools: Read, Write, Edit, Grep, Glob, Bash(test:chaos-*)
version: 1.0.0
author: Jeremy Longshore <jeremy@intentsolutions.io>
license: MIT
compatible-with: claude-code, codex, openclaw
tags: [testing, chaos-tests]
---

Chaos Engineering Toolkit

Overview

Execute controlled chaos engineering experiments to test system resilience, fault tolerance, and recovery capabilities. Injects failures including network latency, service crashes, resource exhaustion, and dependency outages to verify that systems degrade gracefully and recover automatically.

Prerequisites

  • Distributed system or microservice architecture deployed in a staging/test environment
  • Monitoring and alerting configured (Grafana, Datadog, CloudWatch, or Prometheus)
  • Rollback capability for the target environment (manual or automated)
  • Chaos engineering tool installed (toxiproxy, Pumba, Litmus, or Chaos Mesh)
  • Explicit approval from the team to run chaos experiments
  • Steady-state hypothesis defined (what "healthy" looks like in metrics)
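A minimal preflight gate can enforce the last two prerequisites before any failure is injected. This is an illustrative sketch, not part of any chaos tool: the tool name and environment label are parameters you would supply (`ls` stands in for a real binary such as `toxiproxy-cli` or `kubectl`).

```shell
#!/bin/bash
set -euo pipefail

# Hypothetical preflight check: the chaos tool must be installed and the
# target environment must not be production.
preflight() {
  local tool=$1 target_env=$2 ok=1
  command -v "$tool" >/dev/null 2>&1 || { echo "missing tool: $tool"; ok=0; }
  [ "$target_env" != "production" ]  || { echo "refusing to run against production"; ok=0; }
  if [ "$ok" = 1 ]; then echo "preflight: OK"; else echo "preflight: BLOCKED"; fi
}

preflight ls staging      # 'ls' stands in for toxiproxy-cli / kubectl
preflight ls production   # always refused, regardless of tooling
```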

Instructions

  1. Define the steady-state hypothesis:
    • Identify measurable indicators of normal system behavior (e.g., p99 latency < 500ms, error rate < 0.1%, all health checks pass).
    • Record baseline metrics before injecting any failures.
    • Define the blast radius -- which services and users are affected by the experiment.
  2. Design chaos experiments by category:
    • Network: Inject latency (200-2000ms), packet loss (5-50%), DNS failure, connection timeout.
    • Process: Kill a service instance, exhaust CPU or memory, fill disk.
    • Dependency: Block access to database, cache, or external API.
    • State: Corrupt data, introduce clock skew, simulate split-brain scenarios.
  3. Start with minimal impact and increase gradually:
    • Begin with read-only experiments (network latency on non-critical path).
    • Progress to service-level failures (kill one instance of a multi-instance service).
    • Only move to data-level chaos after infrastructure chaos is validated.
  4. Execute each experiment with safeguards:
    • Set a maximum experiment duration (5-15 minutes).
    • Configure automatic rollback triggers (error rate > 5% triggers abort).
    • Monitor system metrics in real-time during the experiment.
    • Have a manual kill switch ready (script to remove all injected failures immediately).
  5. Observe and record system behavior during the experiment:
    • Did circuit breakers activate? How quickly?
    • Did auto-scaling trigger? How long until new instances were healthy?
    • Did retries succeed? Were they idempotent?
    • Did fallback mechanisms engage (cached responses, degraded mode)?
    • Were alerts triggered? Did on-call receive notification?
  6. After the experiment, verify full recovery:
    • Remove all injected failures.
    • Verify steady-state hypothesis holds again within expected recovery time.
    • Check for data inconsistencies or orphaned state.
  7. Document findings and create action items for resilience improvements.
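The automatic abort trigger from step 4 reduces to a threshold check on the observed error rate. The sketch below assumes plain error/request counts as input; in practice these would come from your monitoring API. The 5% threshold mirrors the guidance above.

```shell
#!/bin/bash
set -euo pipefail

# Return 0 (abort) when errors/total exceeds the threshold percentage.
# awk handles the floating-point arithmetic that bash lacks.
should_abort() {
  local errors=$1 total=$2 threshold=${3:-5}
  awk -v e="$errors" -v t="$total" -v th="$threshold" \
    'BEGIN { exit (e / t * 100 > th) ? 0 : 1 }'
}

if should_abort 12 150; then echo "ABORT: error rate above threshold"; fi
if ! should_abort 2 150; then echo "OK: error rate within threshold"; fi
```

Wired into an experiment loop, a `should_abort` hit would invoke the kill switch and end the run early.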

Output

  • Chaos experiment definition files (YAML or JSON) with hypothesis, method, and rollback
  • Experiment execution log with timeline of injected failures and observed effects
  • System behavior report covering circuit breakers, retries, fallbacks, and alerts
  • Recovery timeline showing time-to-detection and time-to-recovery
  • Action items for resilience improvements (retry policies, circuit breaker tuning, fallback additions)
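An experiment definition file from the first bullet could look like the sketch below. The field names are illustrative only, not a schema consumed by Litmus or Chaos Mesh; the point is that hypothesis, method, abort condition, and rollback live together in one reviewable artifact, with numbers matching the steady-state guidance above.

```yaml
# Hypothetical experiment definition; the schema is for illustration.
experiment: db-latency-500ms
hypothesis:
  steady_state:
    - metric: p99_latency_ms
      must_be_below: 500
    - metric: error_rate_percent
      must_be_below: 0.1
method:
  - inject: network-latency
    target: postgres
    latency_ms: 500
    jitter_ms: 100
    duration_minutes: 10
abort_when:
  - metric: error_rate_percent
    exceeds: 5
rollback:
  - remove: network-latency
  - verify: steady_state
```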

Error Handling

| Error | Cause | Solution |
| --- | --- | --- |
| Experiment caused production outage | Blast radius larger than expected or missing safeguards | Always run in staging first; reduce scope; add automatic abort triggers; require approval |
| System did not recover after experiment | Auto-healing mechanisms not configured or too slow | Add health-check-based restarts; configure auto-scaling; implement circuit breaker patterns |
| Monitoring missed the failure | Alerting thresholds too lenient or wrong metrics monitored | Tighten alert thresholds; add specific alerts for the failure mode tested; verify alert channels |
| Chaos tool cannot access target | Network segmentation or security policies blocking the tool | Deploy chaos agent inside the target network; add security group rules for the chaos controller |
| Data corruption persists after rollback | Stateful failure injection without transaction protection | Use read-only chaos first; snapshot databases before stateful experiments; implement compensating transactions |
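The snapshot-before-stateful-experiments rule from the last row can be illustrated with a plain file standing in for a database (in practice you would use `pg_dump`, a managed-database snapshot, or a volume snapshot; the file paths here are arbitrary).

```shell
#!/bin/bash
set -euo pipefail

STATE=/tmp/chaos-demo-state
SNAPSHOT=/tmp/chaos-demo-state.snapshot

echo "balance=100" > "$STATE"
cp "$STATE" "$SNAPSHOT"            # snapshot BEFORE injecting stateful chaos

echo "balance=garbage" > "$STATE"  # simulated corruption from the experiment

cp "$SNAPSHOT" "$STATE"            # rollback: restore from the snapshot
cat "$STATE"
```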

Examples

toxiproxy network latency injection:

#!/bin/bash
set -euo pipefail

# Create a proxy for the database connection (listens on 15432, forwards to 5432)
toxiproxy-cli create postgres_proxy -l 0.0.0.0:15432 -u postgres-host:5432

# Inject 500ms of latency with 100ms jitter on the downstream connection
toxiproxy-cli toxic add postgres_proxy -t latency -a latency=500 -a jitter=100

# Run tests while latency is active
npm test -- --grep "handles slow database"

# Remove the toxic
toxiproxy-cli toxic remove postgres_proxy -n latency_downstream
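One caveat in the script above: under `set -euo pipefail`, a failing test run aborts the script before the final `toxic remove`, leaving the latency injected. An `EXIT` trap acting as the kill switch avoids that. The sketch below simulates the failing step with `exit 1` and captures output so the effect is visible; the removal command in `cleanup` is a placeholder.

```shell
#!/bin/bash
set -euo pipefail

cleanup() {
  echo "kill switch: removing injected failures"
  # placeholder for the real removal, e.g.:
  # toxiproxy-cli toxic remove postgres_proxy -n latency_downstream
}

# Run the experiment in a subshell with an EXIT trap so cleanup fires
# even when a step fails mid-experiment; `exit 1` simulates that failure.
LOG=$( (
  trap cleanup EXIT
  echo "injecting latency toxic"
  exit 1
  echo "never reached"
) || true )
echo "$LOG"
```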

Kubernetes pod kill experiment (Litmus Chaos):

apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: api-pod-kill
spec:
  appinfo:
    appns: default
    applabel: "app=api-server"
  chaosServiceAccount: litmus-admin
  experiments:
    - name: pod-delete
      spec:
        components:
          env:
            - name: TOTAL_CHAOS_DURATION
              value: "60"
            - name: CHAOS_INTERVAL
              value: "10"
            - name: FORCE
              value: "true"

Custom chaos script (process kill and verify recovery):

#!/bin/bash
set -euo pipefail
echo "=== Chaos Experiment: API server kill ==="
echo "Hypothesis: System recovers within 30 seconds"

# Record baseline
BASELINE=$(curl -s -o /dev/null -w '%{http_code}' http://app.test/health)
echo "Baseline health: $BASELINE"

# Kill one API instance
docker kill api-server-1

# Monitor recovery; curl failures (refused connections, timeouts) must not
# abort the script under `set -e`, so fall back to "000"
RECOVERED=0
for i in $(seq 1 30); do
  STATUS=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 http://app.test/health || echo "000")
  echo "T+${i}s: HTTP $STATUS"
  if [ "$STATUS" = "200" ]; then
    echo "RECOVERED at T+${i}s"
    RECOVERED=1
    break
  fi
  sleep 1
done

if [ "$RECOVERED" = "0" ]; then
  echo "FAILED: no recovery within 30s" >&2
  exit 1
fi
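The time-to-detection and time-to-recovery figures from the Output section can be derived from a timestamped event log. The two-column format below (epoch seconds plus event name) is an assumption for illustration; none of the tools above emit exactly this, so adapt the parsing to whatever your execution log records.

```shell
#!/bin/bash
set -euo pipefail

# Assumed log format: "<epoch-seconds> <event>"
cat > /tmp/chaos-timeline.log <<'EOF'
1700000100 inject
1700000104 alert_fired
1700000131 recovered
EOF

# time-to-detection = alert_fired - inject; time-to-recovery = recovered - inject
awk '
  $2 == "inject"      { t0 = $1 }
  $2 == "alert_fired" { print "time-to-detection: " ($1 - t0) "s" }
  $2 == "recovered"   { print "time-to-recovery: " ($1 - t0) "s" }
' /tmp/chaos-timeline.log
```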

Resources
