Agent Skills

SpillwaveSolutions / using-agent-brain

@SpillwaveSolutions/using-agent-brain
94 · 17 forks · Updated 5/6/2026
View on GitHub

Expert Agent Brain skill for document search with BM25 keyword, semantic vector, hybrid, graph, and multi retrieval modes. Use when asked to "search documentation", "query domain", "find in docs", "bm25 search", "hybrid search", "semantic search", "graph search", "multi search", "find dependencies", "code relationships", "searching knowledge base", "querying indexed documents", "finding code references", "exploring codebase", "what calls this function", "find imports", "trace dependencies", "brain search", "brain query", "knowledge base search", "cache management", "clear embedding cache", "cache hit rate", or "cache status". Supports multi-instance architecture with automatic server discovery. GraphRAG mode enables relationship-aware queries for code dependencies and entity connections. Pluggable providers for embeddings (OpenAI, Cohere, Ollama) and summarization (Anthropic, OpenAI, Gemini, Grok, Ollama). Supports multiple runtimes (Claude Code, OpenCode, Gemini CLI) with shared .agent-brain/ data directory.

Installation

$ npx agent-skills-cli install @SpillwaveSolutions/using-agent-brain
Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: agent-brain-plugin/skills/using-agent-brain/SKILL.md
Branch: main
Scoped Name: @SpillwaveSolutions/using-agent-brain

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


---
name: using-agent-brain
description: |
  Expert Agent Brain skill for document search with BM25 keyword, semantic vector, hybrid, graph, and multi retrieval modes. Use when asked to "search documentation", "query domain", "find in docs", "bm25 search", "hybrid search", "semantic search", "graph search", "multi search", "find dependencies", "code relationships", "searching knowledge base", "querying indexed documents", "finding code references", "exploring codebase", "what calls this function", "find imports", "trace dependencies", "brain search", "brain query", "knowledge base search", "cache management", "clear embedding cache", "cache hit rate", or "cache status". Supports multi-instance architecture with automatic server discovery. GraphRAG mode enables relationship-aware queries for code dependencies and entity connections. Pluggable providers for embeddings (OpenAI, Cohere, Ollama) and summarization (Anthropic, OpenAI, Gemini, Grok, Ollama). Supports multiple runtimes (Claude Code, OpenCode, Gemini CLI) with shared .agent-brain/ data directory.
license: MIT
allowed-tools:
  - Bash
  - Read
metadata:
  version: 7.0.0
  category: ai-tools
  author: Spillwave
  last_validated: 2026-03-19
---

Agent Brain Expert Skill

Expert-level skill for Agent Brain document search with five modes: BM25 (keyword), Vector (semantic), Hybrid (fusion), Graph (knowledge graph), and Multi (comprehensive fusion).


Search Modes

| Mode | Speed | Best For | Example Query |
|------|-------|----------|---------------|
| bm25 | Fast (10-50ms) | Technical terms, function names, error codes | "AuthenticationError" |
| vector | Slower (800-1500ms) | Concepts, explanations, natural language | "how authentication works" |
| hybrid | Slower (1000-1800ms) | Comprehensive results combining both | "OAuth implementation guide" |
| graph | Medium (500-1200ms) | Relationships, dependencies, call chains | "what calls AuthService" |
| multi | Slowest (1500-2500ms) | Most comprehensive with entity context | "complete auth flow with dependencies" |

Mode Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| --mode | hybrid | Search mode: bm25, vector, hybrid, graph, multi |
| --threshold | 0.3 | Minimum similarity (0.0-1.0) |
| --top-k | 5 | Number of results |
| --alpha | 0.5 | Hybrid balance (0=BM25, 1=Vector) |

Mode Selection Guide

Use BM25 When

Searching for exact technical terms:

agent-brain query "recursiveCharacterTextSplitter" --mode bm25
agent-brain query "ValueError: invalid token" --mode bm25
agent-brain query "def process_payment" --mode bm25

Counter-example - Wrong mode choice:

# BM25 is wrong for conceptual queries
agent-brain query "how does error handling work" --mode bm25  # Wrong
agent-brain query "how does error handling work" --mode vector  # Correct

Use Vector When

Searching for concepts or natural language:

agent-brain query "best practices for error handling" --mode vector
agent-brain query "how to implement caching" --mode vector

Counter-example - Wrong mode choice:

# Vector is wrong for exact function names
agent-brain query "getUserById" --mode vector  # Wrong - may miss exact match
agent-brain query "getUserById" --mode bm25    # Correct - finds exact match

Use Hybrid When

Need comprehensive results (default mode):

agent-brain query "OAuth implementation" --mode hybrid --alpha 0.6
agent-brain query "database connection pooling" --mode hybrid

Alpha tuning:

  • --alpha 0.3 - More keyword weight (technical docs)
  • --alpha 0.7 - More semantic weight (conceptual docs)
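To make the alpha weighting concrete, here is an illustrative Python sketch of how a hybrid score could be fused from normalized BM25 and vector scores. This is a conceptual model of what the parameter means, not Agent Brain's actual fusion implementation:

```python
def hybrid_score(bm25_score: float, vector_score: float, alpha: float = 0.5) -> float:
    """Blend normalized BM25 and vector scores.

    alpha=0 weighs only BM25 (keyword); alpha=1 weighs only vector (semantic).
    Both inputs are assumed normalized to [0, 1].
    """
    return (1 - alpha) * bm25_score + alpha * vector_score

# A document with a strong keyword match but weak semantic similarity:
print(hybrid_score(0.9, 0.2, alpha=0.3))  # ~0.69, keyword-leaning keeps it ranked high
print(hybrid_score(0.9, 0.2, alpha=0.7))  # ~0.41, semantic-leaning demotes it
```

The same document can rank very differently under the two settings, which is why technical docs favor lower alpha and conceptual docs favor higher alpha.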

Use Graph When

Exploring relationships and dependencies:

agent-brain query "what functions call process_payment" --mode graph
agent-brain query "classes that inherit from BaseService" --mode graph --traversal-depth 3
agent-brain query "modules that import authentication" --mode graph

Prerequisite: Requires ENABLE_GRAPH_INDEX=true during server startup.

Use Multi When

Need the most comprehensive results:

agent-brain query "complete payment flow implementation" --mode multi --include-relationships

GraphRAG (Knowledge Graph)

GraphRAG enables relationship-aware retrieval by building a knowledge graph from indexed documents.

Enabling GraphRAG

export ENABLE_GRAPH_INDEX=true
agent-brain start

Graph Query Types

| Query Pattern | Example |
|---------------|---------|
| Function callers | "what calls process_payment" |
| Class inheritance | "classes extending BaseController" |
| Import dependencies | "modules importing auth" |
| Data flow | "where does user_id come from" |
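The --traversal-depth parameter bounds how many relationship hops a graph query follows. The idea can be sketched with a depth-limited traversal over a hypothetical call graph; this illustrates the concept only and is not Agent Brain's internal implementation:

```python
from collections import deque

def callers_within_depth(call_graph: dict, target: str, max_depth: int = 2) -> set:
    """Return every function that reaches `target` within max_depth call hops.

    call_graph maps each function name to the set of functions that call it.
    """
    found, frontier = set(), deque([(target, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_depth:
            continue  # stop expanding once the hop budget is spent
        for caller in call_graph.get(node, ()):
            if caller not in found:
                found.add(caller)
                frontier.append((caller, depth + 1))
    return found

# Hypothetical chain: handler -> auth -> process_payment
graph = {"process_payment": {"auth"}, "auth": {"handler"}}
print(callers_within_depth(graph, "process_payment", max_depth=1))  # {'auth'}
print(callers_within_depth(graph, "process_payment", max_depth=2))  # {'auth', 'handler'}
```

Deeper traversal surfaces indirect callers at the cost of more work, which is why the best practices below suggest starting at depth 2.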

See Graph Search Guide for detailed usage.


Indexing & Folder Management

Indexing with File Type Presets

# Index only Python files
agent-brain index ./src --include-type python

# Index Python and documentation
agent-brain index ./project --include-type python,docs

# Index all code files
agent-brain index ./repo --include-type code

# Force full re-index (bypass incremental)
agent-brain index ./docs --force

Use agent-brain types list to see all 14 available presets.

Folder Management

agent-brain folders list                    # List indexed folders with chunk counts
agent-brain folders add ./docs              # Add folder (triggers indexing)
agent-brain folders add ./src --include-type python  # Add with preset filter
agent-brain folders remove ./old-docs --yes # Remove folder and evict chunks

Incremental Indexing

Re-indexing a folder automatically detects changes:

  • Unchanged files are skipped (mtime + SHA-256 checksum)
  • Changed files have old chunks evicted and new ones created
  • Deleted files have their chunks automatically removed
  • Use --force to bypass manifest and fully re-index

Content Injection

Enrich chunk metadata during indexing with custom Python scripts or static JSON metadata.

When to Use

  • Tag chunks with project/team/category metadata
  • Classify chunks by content type
  • Add custom fields for filtered search
  • Merge folder-level metadata into all chunks

Basic Usage

# Inject via Python script
agent-brain inject ./docs --script enrich.py

# Inject via static JSON metadata
agent-brain inject ./src --folder-metadata project-meta.json

# Validate script before indexing
agent-brain inject ./docs --script enrich.py --dry-run

Injector Script Protocol

Scripts export a process_chunk(chunk: dict) -> dict function:

def process_chunk(chunk: dict) -> dict:
    chunk["project"] = "my-project"
    chunk["team"] = "backend"
    return chunk

  • Values must be scalars (str, int, float, bool)
  • Per-chunk exceptions are logged as warnings, not fatal
  • See docs/INJECTOR_PROTOCOL.md for the full specification
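A slightly fuller hypothetical injector, classifying chunks by source path. The "source" field name is an assumption for illustration; see docs/INJECTOR_PROTOCOL.md for the authoritative chunk schema:

```python
def process_chunk(chunk: dict) -> dict:
    """Tag each chunk with a content_type derived from its source file path.

    `source` is assumed to carry the originating path; adjust to whatever
    field your chunks actually expose.
    """
    source = chunk.get("source", "")
    if source.endswith((".md", ".rst")):
        chunk["content_type"] = "docs"
    elif source.endswith((".py", ".ts", ".go")):
        chunk["content_type"] = "code"
    else:
        chunk["content_type"] = "other"
    chunk["project"] = "my-project"  # static tag; values must stay scalar
    return chunk
```

Validate it with --dry-run before a full indexing pass, as shown in Basic Usage above.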

Job Queue Management

Indexing runs asynchronously via a job queue. Monitor and manage jobs:

agent-brain jobs                    # List all jobs
agent-brain jobs --watch            # Live polling every 3s
agent-brain jobs <job_id>           # Job details + eviction summary
agent-brain jobs <job_id> --cancel  # Cancel a job

Eviction Summary

When re-indexing, job details show what changed:

Eviction Summary:
  Files added:     3
  Files changed:   2
  Files deleted:   1
  Files unchanged: 42
  Chunks evicted:  15
  Chunks created:  25

This confirms incremental indexing is working efficiently.


Server Management

Quick Start

agent-brain init              # Initialize project (first time)
agent-brain start             # Start server
agent-brain index ./docs      # Index documents
agent-brain query "search"    # Search
agent-brain stop              # Stop when done

Progress Checklist:

  • /agent-brain:agent-brain-init succeeded
  • /agent-brain:agent-brain-status shows healthy
  • Document count > 0
  • Query returns results (or "no matches", which is not an error)

Lifecycle Commands

| Command | Description |
|---------|-------------|
| /agent-brain:agent-brain-init | Initialize project config |
| /agent-brain:agent-brain-start | Start with auto-port |
| /agent-brain:agent-brain-status | Show port, mode, document count |
| /agent-brain:agent-brain-list | List all running instances |
| /agent-brain:agent-brain-stop | Graceful shutdown |

Pre-Query Validation

Before querying, verify setup:

agent-brain status

Expected:

  • Status: healthy
  • Documents: > 0
  • Provider: configured

Counter-example - Querying without validation:

# Wrong - querying without checking status
agent-brain query "search term"  # May fail if server not running

# Correct - validate first
agent-brain status && agent-brain query "search term"

See Server Discovery Guide for multi-instance details.


Cache Management

The embedding cache automatically stores computed embeddings to avoid redundant API calls during reindexing. No setup is required — the cache is active by default.

When to Check Cache Status

  • After indexing — verify cache is working and hit rate is growing
  • When queries seem slow — a low or zero hit rate means embeddings are being recomputed on every reindex
  • To monitor cache growth — track disk usage over time for large indexes

agent-brain cache status

A healthy cache shows:

  • Hit rate > 80% after the first full reindex cycle
  • Growing disk entries over time as more content is indexed
  • Low misses relative to hits

When to Clear the Cache

  • After changing embedding provider or model — prevents dimension mismatches and stale cached vectors
  • Suspected cache corruption — if embeddings seem incorrect or search quality degrades unexpectedly
  • To force fresh embeddings — when you need to ensure all vectors reflect the current provider/model

# Clear with confirmation prompt
agent-brain cache clear

# Clear without prompt (use in scripts)
agent-brain cache clear --yes

Cache is Automatic

No configuration is required. Embeddings are cached on first compute and reused on subsequent reindexes of unchanged content (identified by SHA-256 hash). The cache complements the ManifestTracker — files that haven't changed on disk won't need to recompute embeddings.
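Conceptually, a content-addressed embedding cache keys each entry on the content hash together with the provider and model, so identical content hits the cache and a model change misses. Whether Agent Brain keys on the model is not specified here, which is why clearing after a provider change is the safe move; the sketch below is illustrative only:

```python
import hashlib

def cache_key(text: str, provider: str, model: str) -> str:
    """Derive a cache key from chunk content and the embedding model.

    Identical text re-indexed under the same provider/model yields the same
    key; changing the model yields a different key, so vectors of the wrong
    dimension are never served from cache.
    """
    payload = f"{provider}:{model}:".encode() + text.encode()
    return hashlib.sha256(payload).hexdigest()

k_small = cache_key("def add(a, b): return a + b", "openai", "text-embedding-3-small")
k_large = cache_key("def add(a, b): return a + b", "openai", "text-embedding-3-large")
print(k_small != k_large)  # True: a model change invalidates the cached entry
```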

See the API Reference for GET /index/cache and DELETE /index/cache endpoint details, including response schemas.


When Not to Use

This skill focuses on searching and querying. Do NOT use for:

  • Installation - Use configuring-agent-brain skill
  • API key configuration - Use configuring-agent-brain skill
  • Server setup issues - Use configuring-agent-brain skill
  • Provider configuration - Use configuring-agent-brain skill

Scope boundary: This skill assumes Agent Brain is already installed, configured, and the server is running with indexed documents.


Best Practices

  1. Mode Selection: BM25 for exact terms, Vector for concepts, Hybrid for comprehensive, Graph for relationships
  2. Threshold Tuning: Start at 0.7, lower to 0.3-0.5 for more results
  3. Server Discovery: Use runtime.json rather than assuming port 8000
  4. Resource Cleanup: Run agent-brain stop when done
  5. Source Citation: Always reference source filenames in responses
  6. Graph Queries: Use graph mode for "what calls X", "what imports Y" patterns
  7. Traversal Depth: Start with depth 2, increase to 3-4 for deeper chains
  8. File Type Presets: Use --include-type python,docs instead of manual glob patterns
  9. Incremental Indexing: Re-index without --force for efficient updates
  10. Injection Validation: Always --dry-run injector scripts before full indexing
  11. Job Monitoring: Use agent-brain jobs --watch for long-running index jobs

Reference Documentation

| Guide | Description |
|-------|-------------|
| BM25 Search | Keyword matching for technical queries |
| Vector Search | Semantic similarity for concepts |
| Hybrid Search | Combined keyword and semantic search |
| Graph Search | Knowledge graph and relationship queries |
| Server Discovery | Auto-discovery, multi-agent sharing |
| Provider Configuration | Environment variables and API keys |
| Integration Guide | Scripts, Python API, CI/CD patterns |
| API Reference | REST endpoint documentation |
| Troubleshooting | Common issues and solutions |

Limitations

  • Vector/hybrid/graph/multi modes require embedding provider configured
  • Graph mode requires additional memory (~500MB extra)
  • Supported formats: Markdown, PDF, plain text, code files (Python, JS, TS, Java, Go, Rust, C, C++)
  • Not supported: Word docs (.docx), images
  • Server requires ~500MB RAM for typical collections (~1GB with graph)
  • Ollama requires local installation and model download

More by SpillwaveSolutions

doc-serve

Advanced document search with BM25 keyword matching, semantic vector search, and hybrid retrieval. Enables precise technical queries, conceptual understanding, and intelligent result fusion. Supports local document indexing and provides comprehensive search capabilities for knowledge bases.

configuring-agent-brain

Installation and configuration skill for Agent Brain document search system. Use when asked to "install agent brain", "setup agent brain", "configure agent brain", "setting up document search", "installing agent-brain packages", "configuring API keys", "initializing project for search", "troubleshooting agent brain", "pip install agent-brain", "agent brain not working", "agent brain setup error", "configure embeddings provider", "setup ollama for agent brain", or "agent brain environment variables". Covers package installation, provider configuration, project initialization, and server management.

project-memory

Set up and maintain a structured project memory system in docs/project_notes/ that tracks bugs with solutions, architectural decisions, key project facts, and work history. Use this skill when asked to "set up project memory", "track our decisions", "log a bug fix", "update project memory", or "initialize memory system". Configures both CLAUDE.md and AGENTS.md to maintain memory awareness across different AI coding tools.
