Agent Skills

brainstorming

@vneseyoungster/brainstorming
vneseyoungster · 26 · 17 forks · Updated 4/6/2026

You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation.

Installation

$ npx agent-skills-cli install @vneseyoungster/brainstorming
Compatible with Claude Code, Cursor, Copilot, Codex, and Antigravity.

Details

Path: .claude/skills/brainstorming/SKILL.md
Branch: main
Scoped Name: @vneseyoungster/brainstorming

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


name: brainstorming
description: "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation."

Brainstorming Skill

Purpose

Pure collaborative dialogue skill for exploring ideas. This skill focuses ONLY on understanding and exploration - it does NOT generate specifications, tests, or implementation plans.

This skill outputs: Understanding, not artifacts.


Core Principles

1. One Question at a Time

Never overwhelm with multiple questions. Each message should contain exactly ONE question.

BAD:  "What's the purpose? Who are the users? What's the timeline?"
GOOD: "What problem are you trying to solve with this feature?"
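
The one-question rule is easy to check mechanically. A minimal sketch in Python (the function names and the simple `?`-counting heuristic are illustrative, not part of the skill):

```python
def count_questions(message: str) -> int:
    """Count questions by counting '?' characters (a rough heuristic)."""
    return message.count("?")

def violates_one_question_rule(message: str) -> bool:
    """True if a draft message asks more than one question."""
    return count_questions(message) > 1

# The BAD example above asks three questions; the GOOD example asks one.
bad = "What's the purpose? Who are the users? What's the timeline?"
good = "What problem are you trying to solve with this feature?"
```

A draft that fails this check should be split into separate turns, one question each.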

2. Multiple Choice Preferred

When possible, offer 2-4 concrete options instead of open-ended questions.

BAD:  "How should we handle authentication?"
GOOD: "For authentication, which approach fits your needs?
       A) JWT tokens (stateless, good for APIs)
       B) Session cookies (simpler, good for web apps)
       C) OAuth only (delegate to providers)"

3. Lead with Recommendation

When presenting options, lead with your recommended choice and explain why.

GOOD: "I'd recommend option A (JWT tokens) because your API will be consumed
       by mobile apps. That said, here are the alternatives..."
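
Principles 2 and 3 together suggest a small data shape for choice questions: labeled options with rationales, plus a recommended label and its justification. A hypothetical sketch (all class and field names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    label: str      # "A", "B", "C"
    summary: str    # e.g. "JWT tokens"
    rationale: str  # e.g. "stateless, good for APIs"

@dataclass
class ChoiceQuestion:
    prompt: str
    options: list[Option] = field(default_factory=list)
    recommended: str = ""  # label of the recommended option
    why: str = ""

    def render(self) -> str:
        """Format the question in the A/B/C style shown above."""
        lines = [self.prompt]
        lines += [f"  {o.label}) {o.summary} ({o.rationale})" for o in self.options]
        if self.recommended:
            lines.append(f"Recommendation: {self.recommended} - {self.why}")
        return "\n".join(lines)

q = ChoiceQuestion(
    prompt="For authentication, which approach fits your needs?",
    options=[
        Option("A", "JWT tokens", "stateless, good for APIs"),
        Option("B", "Session cookies", "simpler, good for web apps"),
        Option("C", "OAuth only", "delegate to providers"),
    ],
    recommended="A",
    why="your API will be consumed by mobile apps",
)
```

Keeping the recommendation as data, rather than burying it in prose, makes it easy to always lead with it when rendering.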

4. Incremental Validation

Present ideas in 200-300 word chunks. Validate each before moving on.

"Here's how I understand the data flow so far...
[200-300 words]
Does this match your thinking?"
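
The chunk size above can be enforced with a simple word-count splitter; a sketch (the function name and fixed limit are illustrative):

```python
def chunk_words(text: str, max_words: int = 300) -> list[str]:
    """Split text into sequential chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# A 750-word draft becomes three chunks: 300, 300, and 150 words.
chunks = chunk_words("lorem " * 750)
```

Each chunk would then be presented with its own "Does this match your thinking?" check before moving to the next.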

5. Explore Alternatives

Always propose 2-3 different approaches before settling on one.

6. YAGNI Ruthlessly

Challenge any feature that isn't essential. Ask "Do we need this for v1?"


Dialogue Flow

┌─────────────────────────────────┐
│  1. UNDERSTAND THE IDEA         │
│  - What problem are we solving? │
│  - Who is this for?             │
│  - What does success look like? │
└──────────────┬──────────────────┘
               ▼
┌─────────────────────────────────┐
│  2. EXPLORE CONSTRAINTS         │
│  - Technical limitations?       │
│  - Timeline/scope constraints?  │
│  - Integration requirements?    │
└──────────────┬──────────────────┘
               ▼
┌─────────────────────────────────┐
│  3. PROPOSE APPROACHES          │
│  - Present 2-3 options          │
│  - Explain trade-offs           │
│  - Lead with recommendation     │
└──────────────┬──────────────────┘
               ▼
┌─────────────────────────────────┐
│  4. VALIDATE UNDERSTANDING      │
│  - Summarize in sections        │
│  - Check each section           │
│  - Iterate until aligned        │
└─────────────────────────────────┘
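
The four phases above can be sketched as a tiny state machine that only leaves validation once the user confirms alignment (enum and function names are illustrative):

```python
from enum import Enum, auto

class Phase(Enum):
    UNDERSTAND = auto()
    EXPLORE_CONSTRAINTS = auto()
    PROPOSE_APPROACHES = auto()
    VALIDATE = auto()

ORDER = [Phase.UNDERSTAND, Phase.EXPLORE_CONSTRAINTS,
         Phase.PROPOSE_APPROACHES, Phase.VALIDATE]

def next_phase(current: Phase, aligned: bool = True) -> Phase:
    """Advance through the flow; iterate in VALIDATE until aligned."""
    if current is Phase.VALIDATE and not aligned:
        return Phase.VALIDATE
    i = ORDER.index(current)
    return ORDER[min(i + 1, len(ORDER) - 1)]
```

The `aligned` flag models step 4's "iterate until aligned": validation repeats until the summary matches the user's intent.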

Question Categories

Discovery Questions

Understanding the core idea:

  • "What problem does this solve?"
  • "Who will use this and how?"
  • "What does success look like?"
  • "Why now? What triggered this need?"

Constraint Questions

Understanding boundaries:

  • "What existing systems does this need to work with?"
  • "Are there performance requirements?"
  • "What's the scope for v1 vs later?"
  • "Any technical constraints I should know about?"

Clarification Questions

Drilling into specifics:

  • "When you say X, do you mean A or B?"
  • "Can you give me an example of...?"
  • "What should happen when...?"

Validation Questions

Confirming understanding:

  • "So if I understand correctly... Is that right?"
  • "Does this match what you had in mind?"
  • "Anything I'm missing?"
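
If a calling command wants to draw from these categories programmatically, they map naturally onto a lookup table; a sketch built from the questions listed above (the structure itself is illustrative):

```python
QUESTION_BANK = {
    "discovery": [
        "What problem does this solve?",
        "Who will use this and how?",
        "What does success look like?",
        "Why now? What triggered this need?",
    ],
    "constraint": [
        "What existing systems does this need to work with?",
        "Are there performance requirements?",
        "What's the scope for v1 vs later?",
        "Any technical constraints I should know about?",
    ],
    "clarification": [
        "When you say X, do you mean A or B?",
        "Can you give me an example of...?",
        "What should happen when...?",
    ],
    "validation": [
        "So if I understand correctly... Is that right?",
        "Does this match what you had in mind?",
        "Anything I'm missing?",
    ],
}

def pick_question(category: str, index: int = 0) -> str:
    """Return a single question from the chosen category (one at a time)."""
    return QUESTION_BANK[category][index]
```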

Anti-Patterns

Anti-Pattern                   | Why It's Bad          | Do This Instead
Multiple questions per message | Overwhelming, unfocused | One question only
Open-ended when options exist  | Harder to answer      | Offer concrete choices
Jumping to solutions           | Miss requirements     | Understand first
Long monologues                | Loses engagement      | 200-300 word chunks
Assuming requirements          | Builds wrong thing    | Ask, don't assume
Skipping alternatives          | Misses better options | Always explore 2-3 approaches

Output

This skill produces shared understanding, not documents.

The calling command (e.g., /research:feature) is responsible for:

  • Capturing the dialogue outcomes
  • Generating formal requirements documents
  • Creating specifications

This skill focuses purely on the conversation.


Integration Points

This skill is used by:

  • /research:feature - Feature requirements gathering
  • /research:plan - Architecture exploration
  • /start - Initial scoping

The skill provides dialogue structure; the command provides context and output handling.

More by vneseyoungster

documentation
Topic Analysis Skill: Deep analysis of abstract topics, concepts, or technologies through multi-agent research and brainstorming. Produces comprehensive documentation with Mermaid diagrams, converted to a styled standalone HTML report.

figma-analyzer
Extract design assets and metadata from Figma using the Figma REST API. Supports exporting frames/components as images, extracting node metadata, design tokens, and file structure. Use with the ai-multimodal skill for comprehensive UI research.

implementation
error-handling: Implement consistent error handling across the application. Use when adding try-catch blocks, error boundaries, or custom error classes.

ai-multimodal
Process and generate multimedia content using the Google Gemini API. Capabilities include analyzing audio files (transcription with timestamps, summarization, speech understanding, music/sound analysis up to 9.5 hours), understanding images (captioning, object detection, OCR, visual Q&A, segmentation), processing videos (scene detection, Q&A, temporal analysis, YouTube URLs, up to 6 hours), extracting from documents (PDF tables, forms, charts, diagrams, multi-page), and generating images (text-to-image, editing, composition, refinement). Use when working with audio/video files, analyzing images or screenshots, processing PDF documents, extracting structured data from media, creating images from text prompts, or implementing multimodal AI features. Supports multiple models (Gemini 2.5/2.0) with context windows up to 2M tokens.