Agent Skills
binhmuc

sequential-thinking

@binhmuc/sequential-thinking
binhmuc
22
3 forks
Updated 3/31/2026
View on GitHub

Apply structured, reflective problem-solving for complex tasks requiring multi-step analysis, revision capability, and hypothesis verification. Use for complex problem decomposition, adaptive planning, analysis needing course correction, problems with unclear scope, multi-step solutions, and hypothesis-driven work.

Installation

$ npx agent-skills-cli install @binhmuc/sequential-thinking
Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: .claude/skills/sequential-thinking/SKILL.md
Branch: main
Scoped Name: @binhmuc/sequential-thinking

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


name: sequential-thinking
description: Apply structured, reflective problem-solving for complex tasks requiring multi-step analysis, revision capability, and hypothesis verification. Use for complex problem decomposition, adaptive planning, analysis needing course correction, problems with unclear scope, multi-step solutions, and hypothesis-driven work.
version: 1.0.0
license: MIT

Sequential Thinking

Structured problem-solving via manageable, reflective thought sequences with dynamic adjustment.

When to Apply

  • Complex problem decomposition
  • Adaptive planning with revision capability
  • Analysis needing course correction
  • Problems with unclear/emerging scope
  • Multi-step solutions requiring context maintenance
  • Hypothesis-driven investigation/debugging

Core Process

1. Start with Loose Estimate

Thought 1/5: [Initial analysis]

Adjust dynamically as understanding evolves.

2. Structure Each Thought

  • Build on previous context explicitly
  • Address one aspect per thought
  • State assumptions, uncertainties, realizations
  • Signal what next thought should address

3. Apply Dynamic Adjustment

  • Expand: More complexity discovered → increase total
  • Contract: Simpler than expected → decrease total
  • Revise: New insight invalidates previous → mark revision
  • Branch: Multiple approaches → explore alternatives
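The estimate-and-adjust loop above can be sketched as a small tracker. This is an illustrative sketch only — the names (`ThoughtTracker`, `addThought`, `adjustTotal`) are assumptions for this example, not the API of the optional `scripts/process-thought.js`:

```javascript
// Minimal sketch of a thought tracker with dynamic total adjustment
// and revision marking. All names here are illustrative.
class ThoughtTracker {
  constructor(estimatedTotal) {
    this.total = estimatedTotal; // loose initial estimate
    this.thoughts = [];
  }

  // Record a thought; `revises` optionally names an earlier thought number.
  addThought(text, { revises = null } = {}) {
    const number = this.thoughts.length + 1;
    // Expand automatically if the estimate is exceeded.
    if (number > this.total) this.total = number;
    this.thoughts.push({ number, text, revises });
    return `Thought ${number}/${this.total}` +
      (revises ? ` [REVISION of Thought ${revises}]` : '') + `: ${text}`;
  }

  // Contract (or expand) the running estimate explicitly,
  // never below the thoughts already recorded.
  adjustTotal(newTotal) {
    this.total = Math.max(newTotal, this.thoughts.length);
  }
}

const t = new ThoughtTracker(5);
console.log(t.addThought('Initial analysis'));
// Thought 1/5: Initial analysis
t.adjustTotal(3); // simpler than expected → contract
console.log(t.addThought('Corrected scope', { revises: 1 }));
// Thought 2/3 [REVISION of Thought 1]: Corrected scope
```

The tracker never lets the total drop below thoughts already taken, which keeps `Thought N/M` labels consistent after a contraction.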

4. Use Revision When Needed

Thought 5/8 [REVISION of Thought 2]: [Corrected understanding]
- Original: [What was stated]
- Why revised: [New insight]
- Impact: [What changes]

5. Branch for Alternatives

Thought 4/7 [BRANCH A from Thought 2]: [Approach A]
Thought 4/7 [BRANCH B from Thought 2]: [Approach B]

Compare explicitly, converge with decision rationale.
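The explicit compare-and-converge step could look like the following sketch, where `score` is a hypothetical comparison metric chosen for the problem at hand (not something this skill prescribes):

```javascript
// Sketch: compare labeled branches and converge with an explicit rationale.
// `score` is an assumed, problem-specific metric.
function converge(branches, score) {
  const best = branches.reduce((a, b) => (score(b) > score(a) ? b : a));
  return `Converging on BRANCH ${best.label}: ${best.rationale}`;
}

converge(
  [{ label: 'A', rationale: 'simpler rollout', complexity: 2 },
   { label: 'B', rationale: 'better scaling', complexity: 5 }],
  (b) => -b.complexity // here: prefer lower complexity
);
// → "Converging on BRANCH A: simpler rollout"
```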

6. Generate & Verify Hypotheses

Thought 6/9 [HYPOTHESIS]: [Proposed solution]
Thought 7/9 [VERIFICATION]: [Test results]

Iterate until hypothesis verified.
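The hypothesize-verify loop can be sketched as below; `candidates` and `check` are toy stand-ins for real hypotheses and real verification steps:

```javascript
// Sketch of the hypothesis-verification loop: propose, test,
// revise until a check passes. Inputs are illustrative.
function verifyByIteration(candidates, check) {
  const log = [];
  for (const hypothesis of candidates) {
    log.push(`[HYPOTHESIS]: ${hypothesis}`);
    if (check(hypothesis)) {
      log.push('[VERIFICATION]: confirmed');
      return { hypothesis, log };
    }
    log.push('[VERIFICATION]: failed, revising');
  }
  return { hypothesis: null, log }; // nothing verified → keep thinking
}

// Toy usage: second candidate passes the check.
const result = verifyByIteration(
  ['timeout causes the bug', 'retries causes the bug'],
  (h) => h.startsWith('retries')
);
// result.hypothesis === 'retries causes the bug'
```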

7. Complete Only When Ready

Mark final: Thought N/N [FINAL]

Complete when:

  • Solution verified
  • All critical aspects addressed
  • Confidence achieved
  • No outstanding uncertainties
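The completion gate above can be sketched as a single check; the criterion field names mirror the checklist but are assumptions for illustration, not a prescribed schema:

```javascript
// Illustrative completion gate: emit [FINAL] only when every criterion holds.
function markFinal(state) {
  const criteria = [
    state.solutionVerified,
    state.criticalAspectsAddressed,
    state.confidenceAchieved,
    state.openUncertainties === 0,
  ];
  if (!criteria.every(Boolean)) return null; // not ready → keep thinking
  return `Thought ${state.count}/${state.count} [FINAL]`;
}

markFinal({ solutionVerified: true, criticalAspectsAddressed: true,
            confidenceAchieved: true, openUncertainties: 0, count: 7 });
// → "Thought 7/7 [FINAL]"
```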

Application Modes

Explicit: Use visible thought markers when complexity warrants visible reasoning or the user requests a breakdown.

Implicit: Apply the methodology internally for routine problem-solving, where structured thinking aids accuracy without cluttering the response.

Scripts (Optional)

Optional scripts for deterministic validation/tracking:

  • scripts/process-thought.js - Validate & track thoughts with history
  • scripts/format-thought.js - Format for display (box/markdown/simple)

See README.md for usage examples. Use the scripts when validation or persistence is needed; otherwise apply the methodology directly.

References

Load when deeper understanding needed:

  • references/core-patterns.md - Revision & branching patterns
  • references/examples-api.md - API design example
  • references/examples-debug.md - Debugging example
  • references/examples-architecture.md - Architecture decision example
  • references/advanced-techniques.md - Spiral refinement, hypothesis testing, convergence
  • references/advanced-strategies.md - Uncertainty, revision cascades, meta-thinking
