Generates audit-grade, hallucination-free documentation from codebase features with full traceability. Use when documenting features, creating technical specs, or generating verified documentation with code references.
Installation

After installing, this skill will be available to your AI coding assistant.

Verify installation:

```
skills list
```

Skill Instructions

name: generate-verified-docs
description: Generates audit-grade, hallucination-free documentation from codebase features with full traceability. Use when documenting features, creating technical specs, or generating verified documentation with code references.
Verified Documentation Generator
Generate audit-grade documentation from a codebase feature with full traceability and zero hallucination.
Arguments
$ARGUMENTS should be:
- Feature name and description (e.g., "User Registration", "Bookmark System", "Reading Progress")
Prerequisites
- Clear feature name and description from user
- Access to the codebase root directory
Hard Rules
- ❌ No assumptions or inferred behavior
- ❌ No undocumented states or transitions
- ✅ Every claim must link to file:line
- ✅ Unknowns must be explicitly reported
- ✅ Halt if verification fails
Stage 1: Scope Resolution
Goal: Define feature boundaries and reject ambiguous scopes.
- Parse the feature description to identify expected entry points (API routes, UI components, CLI commands)
- Search codebase for matching routes, exported functions/classes, and test files
- Validate at least one concrete code location exists
Gate:
- If no code locations found → HALT with failure report
- If scope is ambiguous → Ask user to narrow scope
- If confirmed → Proceed with `entry_point_candidates` list
Output: scope_definition, entry_point_candidates, scope_confidence
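The scope-resolution gate above can be sketched in Python. The entry-point patterns and the `resolve_scope` helper are illustrative assumptions, not part of the skill; a real implementation would walk the repository root and use patterns matched to the project's stack.

```python
import re

# Hypothetical entry-point patterns; real ones depend on the framework in use.
ENTRY_POINT_PATTERNS = [
    re.compile(r"router\.(get|post|put|delete)\(['\"](?P<name>[^'\"]+)"),  # API routes
    re.compile(r"export\s+(default\s+)?function\s+(?P<name>\w+)"),         # exported functions
]

def resolve_scope(feature_terms, files):
    """Return entry-point candidates whose line matches a pattern AND a feature term.

    `files` maps path -> source text; a real run would read these from disk.
    """
    candidates = []
    for path, source in files.items():
        for lineno, line in enumerate(source.splitlines(), start=1):
            for pattern in ENTRY_POINT_PATTERNS:
                m = pattern.search(line)
                if m and any(t.lower() in line.lower() for t in feature_terms):
                    candidates.append({"file": path, "line": lineno, "match": m.group("name")})
    # Gate: halt if no concrete code location exists
    if not candidates:
        raise RuntimeError("HALT: no code locations found for feature scope")
    return candidates
```

Note that the gate is a hard failure, not a warning: without at least one verifiable `file:line`, every later stage would be built on inference, which the hard rules forbid.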
Stage 2: Codebase Discovery
Goal: Map all code artifacts related to the feature.
- Verify each entry point exists and classify as: `API|UI|JOB|EVENT|CLI`
- Trace dependencies (imports, injected dependencies, runtime-resolved modules)
- Build dependency graph (DAG)
- Classify artifacts: Controllers, Services, Repositories, Models, Utils, External
- Extract data models with types, relationships, and validation rules
Gate:
- Every entry point must be verifiable (`file_exists`, `line_exists`, `callable`)
- If no verifiable entry points → HALT
Output: entry_points, dependency_graph, artifacts_by_type, data_models,
external_integrations, discovery_gaps
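Building the dependency graph and confirming it is a DAG can be sketched as follows (a minimal Python sketch; the `(importer, imported)` pair representation is an assumption about how Stage 2 records traced imports):

```python
from collections import defaultdict

def build_dependency_graph(imports):
    """Build a module dependency graph from (importer, imported) pairs and
    verify it is a DAG via depth-first cycle detection."""
    graph = defaultdict(list)
    for src, dst in imports:
        graph[src].append(dst)

    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current path / done
    color = defaultdict(int)

    def has_cycle(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:          # back edge: cycle found
                return True
            if color[nxt] == WHITE and has_cycle(nxt):
                return True
        color[node] = BLACK
        return False

    cyclic = any(color[n] == WHITE and has_cycle(n) for n in list(graph))
    return dict(graph), cyclic
```

A cycle does not necessarily fail the gate, but flagging it here keeps Stage 3's flow tracing from looping forever.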
Stage 3: Execution Flow Tracing
Goal: Trace complete execution paths from entry to exit.
- Synchronous paths: Follow function call chains, record decision branches, track variable transformations
- Async boundaries: Identify Promise/async-await, event emitters, message queues, webhooks
- Side effects: Catalog DATABASE_WRITE, DATABASE_READ, EXTERNAL_CALL, FILE_SYSTEM, CACHE_OP, EVENT_EMIT, LOGGING
- Flow enumeration: Build all paths, tag as `HAPPY_PATH|ERROR_PATH|EDGE_CASE`
Gate:
- Every flow step must have a code reference
- If coverage below threshold → Ask user to accept partial or HALT
Output: execution_flows, async_boundaries, side_effects, flow_coverage,
untraced_branches
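The flow-enumeration step can be sketched as a path search over the recorded decision branches. The branch-map shape (`{step: [next_steps]}`) and exit tagging are assumptions for illustration; the real skill derives both from traced code:

```python
def enumerate_flows(branches, entry, exits):
    """Enumerate all execution paths from `entry` through a branch map
    {step: [next_steps]} and tag each path by its terminal step."""
    flows = []
    stack = [[entry]]
    while stack:
        path = stack.pop()
        nexts = branches.get(path[-1], [])
        if not nexts:
            # Terminal step: look up its tag, default to EDGE_CASE if untagged
            flows.append({"path": path, "tag": exits.get(path[-1], "EDGE_CASE")})
        else:
            for nxt in nexts:
                if nxt not in path:  # guard against cycles in the flow graph
                    stack.append(path + [nxt])
    return flows
```

Every step kept in a path would also carry its `file:line` reference, since the gate rejects flow steps without one.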
Stage 4: State & Transition Extraction
Goal: Extract explicit state machines; reject inferred states.
- Search for status/state enums, FSM patterns, status columns in schemas
- Map transitions: `STATE_A --[condition]--> STATE_B @ file:line`
- Verify each state value exists in code (enum, constant, or literal)
- Identify orphan states (no inbound) and terminal states (no outbound)
Gate:
- Every state must have a code reference
- Every transition must link verified states
- If undocumented states found → HALT with failure report
Output: states, transitions, state_machine, orphan_states, terminal_states
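The orphan/terminal analysis and the unverified-state gate can be sketched like this (a minimal Python sketch; the triple representation of transitions is an assumption):

```python
def analyze_state_machine(states, transitions):
    """Identify orphan states (no inbound transition) and terminal states
    (no outbound). Transitions are (from_state, condition, to_state) triples;
    `states` holds only values already verified against the code."""
    unknown = {s for t in transitions for s in (t[0], t[2])} - set(states)
    if unknown:
        raise RuntimeError(f"HALT: transitions reference unverified states: {unknown}")
    inbound = {t[2] for t in transitions}
    outbound = {t[0] for t in transitions}
    return {
        "orphan_states": sorted(set(states) - inbound),    # may include the initial state
        "terminal_states": sorted(set(states) - outbound),
    }
```

For the `UserStatus` example from the usage section, `PENDING` comes back as the (expected) orphan initial state and `SUSPENDED` as terminal.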
Stage 5: Failure & Edge Case Analysis
Goal: Identify error handling coverage and missing failure modes.
- Locate try/catch blocks, error callbacks, error boundaries, middleware handlers
- Trace exception propagation paths
- Detect edge cases: NULL_INPUT, EMPTY_ARRAY, BOUNDARY_VALUES, PERMISSION, TIMEOUT, CONFLICT
- Score risk: `LOW|MEDIUM|HIGH|CRITICAL`
No hard gate – outputs are informational.
Output: handled_failures, unhandled_failures, silent_failures, edge_cases, risk_areas
Stage 6: Diagram Generation
Goal: Produce diagrams strictly matching verified flows.
- Generate State Diagram (Mermaid `stateDiagram-v2`) from `state_machine`
- Generate Sequence Diagrams (Mermaid `sequenceDiagram`) from `execution_flows`
- Generate Dependency Diagram (Mermaid `flowchart`) from `dependency_graph`
- Cross-check every diagram element against source stage outputs
Gate:
- Remove any unverified elements
- Log warnings for removed elements
Output: state_diagram, sequence_diagrams, dependency_diagram, diagram_verification_log
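Emitting the state diagram while enforcing the "remove and log unverified elements" gate can be sketched as:

```python
def to_state_diagram(transitions, verified_states):
    """Emit Mermaid stateDiagram-v2 text from (from, condition, to) triples,
    dropping (and logging) any transition touching an unverified state."""
    lines, warnings = ["stateDiagram-v2"], []
    for src, condition, dst in transitions:
        if src in verified_states and dst in verified_states:
            lines.append(f"    {src} --> {dst}: {condition}")
        else:
            warnings.append(f"removed unverified transition {src} -> {dst}")
    return "\n".join(lines), warnings
```

Returning the warnings list separately gives Stage 6 its `diagram_verification_log` without silently losing information.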
Stage 7: Documentation Assembly
Goal: Produce structured documentation.
Generate markdown with these sections:
```markdown
# [Feature Name] Documentation
## Overview
## Entry Points
## Data Models
## Execution Flows
## State Machine
## Error Handling
## Known Gaps & Limitations
## Appendix: Code References
```
- Every claim gets a `[ref: file:line]` annotation
- Build a clickable traceability index
- Generate two views: Technical (full detail) and Non-Technical (simplified)
Output: documentation_technical, documentation_summary, traceability_index,
confidence_report
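The annotation and traceability index can be sketched as follows (the claim shape and helper names are illustrative assumptions):

```python
def annotate(text, file, line):
    """Append the claim's [ref: file:line] annotation."""
    return f"{text} [ref: {file}:{line}]"

def build_traceability_index(claims):
    """Map each code location back to the claims that cite it.

    `claims` maps claim_id -> (text, file, line); the reverse index lets
    Stage 8 verify every claim and lets readers jump from code to prose.
    """
    index = {}
    for claim_id, (text, file, line) in claims.items():
        index.setdefault(f"{file}:{line}", []).append(claim_id)
    return index
```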
Stage 8: Verification & Validation
Goal: Cross-check all claims; fail if verification cannot pass.
- Extract all factual claims from documentation
- Verify each claim against `traceability_index`:
  - File exists
  - Line content matches claim basis
- Check diagram-flow consistency
- Verify all gaps appear in Known Limitations
- Hallucination detection: Flag claims without code references or contradicting code
Gate:
- If hallucination detected → HALT immediately
- If too many unverifiable claims → HALT
- If passed → Output final documentation
Output: verification_passed, verified_claims, failed_claims, hallucination_report,
final_documentation or failure_report
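The verification gate can be sketched as a pure function over the extracted claims. The claim fields and the injected `read_line` callback are assumptions made so the sketch is testable without a real repository:

```python
def verify_claims(claims, read_line):
    """Check each claim's file:line reference; a claim with no reference at
    all is flagged as a potential hallucination and halts the run.

    `read_line(file, line)` returns the line's text, or None if the file or
    line does not exist.
    """
    verified, failed, hallucinations = [], [], []
    for claim in claims:
        ref = claim.get("ref")
        if ref is None:
            hallucinations.append(claim["id"])   # claim without a code reference
            continue
        file, _, line = ref.rpartition(":")
        content = read_line(file, int(line))
        if content is not None and claim["basis"] in content:
            verified.append(claim["id"])         # line exists and matches claim basis
        else:
            failed.append(claim["id"])
    if hallucinations:
        raise RuntimeError(f"HALT: hallucination detected in claims {hallucinations}")
    return verified, failed
```

Separating "failed" (reference exists but does not support the claim) from "hallucinated" (no reference at all) mirrors the two HALT conditions in the gate.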
Failure Behavior
When any stage encounters a blocking failure:
- STOP execution immediately
- PRESERVE all partial outputs
- Generate failure report:
- Generate failure report:

```json
{
  "stage_failed": "<stage_name>",
  "failure_type": "<failure_enum>",
  "failure_reason": "<explanation>",
  "partial_outputs": { ... },
  "recovery_suggestions": [ ... ]
}
```

- AWAIT user intervention
Example Usage
User: Generate documentation for the "User Registration" feature
Stage 1: Found POST /auth/register, /signup page
Stage 2: Traced AuthController → AuthService → UserRepository, EmailService
Stage 3: Mapped registration flow with email verification async boundary
Stage 4: Extracted UserStatus enum { PENDING, ACTIVE, SUSPENDED }
Stage 5: Identified missing timeout handling on EmailService
Stage 6: Generated state diagram and sequence diagram
Stage 7: Assembled full technical documentation
Stage 8: Verified 24/24 claims, no hallucinations → SUCCESS