notebooklm (@teng-lin/notebooklm)
1,215 stars · 98 forks · Updated 1/18/2026

Automate Google NotebookLM - create notebooks, add sources, generate podcasts/videos/quizzes, download artifacts. Activates on explicit /notebooklm or intent like "create a podcast about X"

Installation

$ skills install @teng-lin/notebooklm

Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: src/notebooklm/data/SKILL.md
Branch: main
Scoped Name: @teng-lin/notebooklm

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

skills list

Skill Instructions


name: notebooklm
description: Automate Google NotebookLM - create notebooks, add sources, generate podcasts/videos/quizzes, download artifacts. Activates on explicit /notebooklm or intent like "create a podcast about X"

NotebookLM Automation

Automate Google NotebookLM: create notebooks, add sources, chat with content, generate artifacts (podcasts, videos, quizzes), and download results.

Installation

From PyPI (Recommended):

pip install notebooklm-py

From GitHub (use latest release tag, NOT main branch):

# Get the latest release tag (using curl)
LATEST_TAG=$(curl -s https://api.github.com/repos/teng-lin/notebooklm-py/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
pip install "git+https://github.com/teng-lin/notebooklm-py@${LATEST_TAG}"

⚠️ DO NOT install from main branch (pip install git+https://github.com/teng-lin/notebooklm-py). The main branch may contain unreleased/unstable changes. Always use PyPI or a specific release tag, unless you are testing unreleased features.

After installation, install the Claude Code skill:

notebooklm skill install

Prerequisites

IMPORTANT: Before using any command, you MUST authenticate:

notebooklm login          # Opens browser for Google OAuth
notebooklm list           # Verify authentication works

If commands fail with authentication errors, re-run notebooklm login.

CI/CD, Multiple Accounts, and Parallel Agents

For automated environments, multiple accounts, or parallel agent workflows:

Variable | Purpose
NOTEBOOKLM_HOME | Custom config directory (default: ~/.notebooklm)
NOTEBOOKLM_AUTH_JSON | Inline auth JSON - no file writes needed

CI/CD setup: Set NOTEBOOKLM_AUTH_JSON from a secret containing your storage_state.json contents.

Multiple accounts: Use different NOTEBOOKLM_HOME directories per account.

Parallel agents: The CLI stores notebook context in a shared file (~/.notebooklm/context.json). Multiple concurrent agents using notebooklm use can overwrite each other's context.

Solutions for parallel workflows:

  1. Always use explicit notebook ID (recommended): Pass -n <notebook_id> (for wait/download commands) or --notebook <notebook_id> (for others) instead of relying on use
  2. Per-agent isolation: Set unique NOTEBOOKLM_HOME per agent: export NOTEBOOKLM_HOME=/tmp/agent-$ID
  3. Use full UUIDs: Avoid partial IDs in automation (they can become ambiguous)
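A minimal shell sketch of these setups; the secret variable and agent ID below are placeholders, not part of the CLI:

# CI/CD: provide auth inline from a secret holding your storage_state.json contents
export NOTEBOOKLM_AUTH_JSON="$STORAGE_STATE_JSON_SECRET"   # hypothetical secret variable

# Per-agent isolation: give each agent its own config directory
export NOTEBOOKLM_HOME="/tmp/agent-$AGENT_ID"              # $AGENT_ID is a placeholder

notebooklm status   # each agent now reads and writes only its own context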

Agent Setup Verification

Before starting workflows, verify the CLI is ready:

  1. notebooklm status → Should show "Authenticated as: email@..."
  2. notebooklm list --json → Should return valid JSON (even if the notebooks list is empty)
  3. If either fails → Run notebooklm login
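The same checks as a short script (a sketch; it assumes notebooklm status exits non-zero when not authenticated):

# Verify auth and API access before starting a workflow
if ! notebooklm status; then
  notebooklm login          # opens a browser for Google OAuth
fi
notebooklm list --json      # should print valid JSON, even with no notebooks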

When This Skill Activates

Explicit: User says "/notebooklm", "use notebooklm", or mentions the tool by name

Intent detection: Recognize requests like:

  • "Create a podcast about [topic]"
  • "Summarize these URLs/documents"
  • "Generate a quiz from my research"
  • "Turn this into an audio overview"
  • "Add these sources to NotebookLM"

Autonomy Rules

Run automatically (no confirmation):

  • notebooklm status - check context
  • notebooklm auth check - diagnose auth issues
  • notebooklm list - list notebooks
  • notebooklm source list - list sources
  • notebooklm artifact list - list artifacts
  • notebooklm language list - list supported languages
  • notebooklm language get - get current language
  • notebooklm language set - set language (global setting)
  • notebooklm artifact wait - wait for artifact completion (in subagent context)
  • notebooklm source wait - wait for source processing (in subagent context)
  • notebooklm research status - check research status
  • notebooklm research wait - wait for research (in subagent context)
  • notebooklm use <id> - set context (⚠️ SINGLE-AGENT ONLY - use the -n/--notebook flags in parallel workflows)
  • notebooklm create - create notebook
  • notebooklm ask "..." - chat queries
  • notebooklm source add - add sources

Ask before running:

  • notebooklm delete - destructive
  • notebooklm generate * - long-running, may fail
  • notebooklm download * - writes to filesystem
  • notebooklm artifact wait - long-running (when in main conversation)
  • notebooklm source wait - long-running (when in main conversation)
  • notebooklm research wait - long-running (when in main conversation)

Quick Reference

Task | Command
Authenticate | notebooklm login
Diagnose auth issues | notebooklm auth check
Diagnose auth (full) | notebooklm auth check --test
List notebooks | notebooklm list
Create notebook | notebooklm create "Title"
Set context | notebooklm use <notebook_id>
Show context | notebooklm status
Add URL source | notebooklm source add "https://..."
Add file | notebooklm source add ./file.pdf
Add YouTube | notebooklm source add "https://youtube.com/..."
List sources | notebooklm source list
Wait for source processing | notebooklm source wait <source_id>
Web research (fast) | notebooklm source add-research "query"
Web research (deep) | notebooklm source add-research "query" --mode deep --no-wait
Check research status | notebooklm research status
Wait for research | notebooklm research wait --import-all
Chat | notebooklm ask "question"
Chat (new conversation) | notebooklm ask "question" --new
Chat (specific sources) | notebooklm ask "question" -s src_id1 -s src_id2
Chat (with references) | notebooklm ask "question" --json
Get source fulltext | notebooklm source fulltext <source_id>
Get source guide | notebooklm source guide <source_id>
Generate podcast | notebooklm generate audio "instructions"
Generate podcast (JSON) | notebooklm generate audio --json
Generate podcast (specific sources) | notebooklm generate audio -s src_id1 -s src_id2
Generate video | notebooklm generate video "instructions"
Generate quiz | notebooklm generate quiz
Check artifact status | notebooklm artifact list
Wait for completion | notebooklm artifact wait <artifact_id>
Download audio | notebooklm download audio ./output.mp3
Download video | notebooklm download video ./output.mp4
Download report | notebooklm download report ./report.md
Download mind map | notebooklm download mind-map ./map.json
Download data table | notebooklm download data-table ./data.csv
Download quiz | notebooklm download quiz quiz.json
Download quiz (markdown) | notebooklm download quiz --format markdown quiz.md
Download flashcards | notebooklm download flashcards cards.json
Download flashcards (markdown) | notebooklm download flashcards --format markdown cards.md
Delete notebook | notebooklm notebook delete <id>
List languages | notebooklm language list
Get language | notebooklm language get
Set language | notebooklm language set zh_Hans

Parallel safety: Use explicit notebook IDs in parallel workflows. Commands supporting -n shorthand: artifact wait, source wait, research wait/status, download *. Download commands also support -a/--artifact. Other commands use --notebook. For chat, use --new to start fresh conversations (avoids conversation ID conflicts).

Partial IDs: Use first 6+ characters of UUIDs. Must be unique prefix (fails if ambiguous). Works for: use, delete, wait commands. For automation, prefer full UUIDs to avoid ambiguity.
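For example, a parallel-safe sequence that relies only on explicit, full IDs and never touches the shared context file (IDs are placeholders):

notebooklm generate audio "Focus on key points" --notebook <notebook_id> --json
notebooklm artifact wait <artifact_id> -n <notebook_id> --timeout 1200
notebooklm download audio ./podcast.mp3 -a <artifact_id> -n <notebook_id>
notebooklm ask "Summarize the findings" --notebook <notebook_id> --new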

Command Output Formats

Commands with --json return structured data for parsing:

Create notebook:

$ notebooklm create "Research" --json
{"id": "abc123de-...", "title": "Research"}

Add source:

$ notebooklm source add "https://example.com" --json
{"source_id": "def456...", "title": "Example", "status": "processing"}

Generate artifact:

$ notebooklm generate audio "Focus on key points" --json
{"task_id": "xyz789...", "status": "pending"}

Chat with references:

$ notebooklm ask "What is X?" --json
{"answer": "X is... [1] [2]", "conversation_id": "...", "turn_number": 1, "is_follow_up": false, "references": [{"source_id": "abc123...", "citation_number": 1, "cited_text": "Relevant passage from source..."}, {"source_id": "def456...", "citation_number": 2, "cited_text": "Another passage..."}]}

Source fulltext (get indexed content):

$ notebooklm source fulltext <source_id> --json
{"source_id": "...", "title": "...", "char_count": 12345, "content": "Full indexed text..."}

Understanding citations: The cited_text in references is often a snippet or section header, not the full quoted passage. The start_char/end_char positions reference NotebookLM's internal chunked index, not the raw fulltext. Use SourceFulltext.find_citation_context() to locate citations:

# Assumes an authenticated async client plus a notebook_id and a reference `ref`
# taken from a prior chat response (e.g. the references array of `ask --json`)
fulltext = await client.sources.get_fulltext(notebook_id, ref.source_id)
matches = fulltext.find_citation_context(ref.cited_text)  # Returns list[(context, position)]
if matches:
    context, pos = matches[0]  # First match; check len(matches) > 1 for duplicates

Extract IDs: Parse the id, source_id, or task_id field from JSON output.
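In a shell workflow the IDs can be captured with jq (assumed to be installed; any JSON parser works):

NOTEBOOK_ID=$(notebooklm create "Research" --json | jq -r '.id')
SOURCE_ID=$(notebooklm source add "https://example.com" --json | jq -r '.source_id')
TASK_ID=$(notebooklm generate audio "Focus on key points" --json | jq -r '.task_id')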

Generation Types

All generate commands support:

  • -s, --source to use specific source(s) instead of all sources
  • --language to set output language (defaults to configured language or 'en')
  • --json for machine-readable output (returns task_id and status)
Type | Command | Downloadable
Podcast | generate audio | Yes (.mp3)
Video | generate video | Yes (.mp4)
Slides | generate slide-deck | Yes (.pdf)
Infographic | generate infographic | Yes (.png)
Report | generate report | Yes (.md)
Mind Map | generate mind-map | Yes (.json)
Data Table | generate data-table | Yes (.csv)
Quiz | generate quiz | Yes (.json/.md/.html)
Flashcards | generate flashcards | Yes (.json/.md/.html)
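For instance, combining the options above (source IDs are placeholders; quiz generation may still be rate limited):

# Quiz in Japanese from two specific sources, with machine-readable output
notebooklm generate quiz -s src_id1 -s src_id2 --language ja --json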

Common Workflows

Research to Podcast (Interactive)

Time: 5-10 minutes total

  1. notebooklm create "Research: [topic]" (if this fails, check auth with notebooklm login)
  2. notebooklm source add for each URL/document — if one fails: log warning, continue with others
  3. Wait for sources: notebooklm source list --json until all status=ready — required before generation
  4. notebooklm generate audio "Focus on [specific angle]" (confirm when asked) — if rate limited: wait 5 min, retry once
  5. Note the artifact ID returned
  6. Check notebooklm artifact list later for status
  7. notebooklm download audio ./podcast.mp3 when complete (confirm when asked)
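The same flow as a rough script, assuming jq and placeholder titles/URLs; the readiness check in step 3 is shown as a single listing rather than a loop:

NB=$(notebooklm create "Research: quantum computing" --json | jq -r '.id')
notebooklm source add "https://example.com/paper" --notebook "$NB"
notebooklm source list --notebook "$NB" --json          # repeat until all sources show status=ready
notebooklm generate audio "Focus on practical applications" --notebook "$NB" --json
notebooklm artifact list --notebook "$NB"               # check later for completion
notebooklm download audio ./podcast.mp3 -n "$NB"        # add -a <artifact_id> to pin a specific artifact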

Research to Podcast (Automated with Subagent)

Time: 5-10 minutes, but continues in background

When user wants full automation (generate and download when ready):

  1. Create notebook and add sources as usual
  2. Wait for sources to be ready (use source wait or check source list --json)
  3. Run notebooklm generate audio "..." --json → parse the artifact ID (task_id field) from the output
  4. Spawn a background agent using Task tool:
    Task(
      prompt="Wait for artifact {artifact_id} in notebook {notebook_id} to complete, then download.
              Use: notebooklm artifact wait {artifact_id} -n {notebook_id} --timeout 600
              Then: notebooklm download audio ./podcast.mp3 -a {artifact_id} -n {notebook_id}",
      subagent_type="general-purpose"
    )
    
  5. Main conversation continues while agent waits

Error handling in subagent:

  • If artifact wait returns exit code 2 (timeout): Report timeout, suggest checking artifact list
  • If download fails: Check if artifact status is COMPLETED first

Benefits: Non-blocking, user can do other work, automatic download on completion

Document Analysis

Time: 1-2 minutes

  1. notebooklm create "Analysis: [project]"
  2. notebooklm source add ./doc.pdf (or URLs)
  3. notebooklm ask "Summarize the key points"
  4. notebooklm ask "What are the main arguments?"
  5. Continue chatting as needed
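When the analysis needs citations, the --json form of ask exposes them; a sketch using jq (assumed installed):

notebooklm ask "What are the main arguments?" --json | jq -r '.references[] | "\(.citation_number): \(.cited_text)"'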

Bulk Import

Time: Varies by source count

  1. notebooklm create "Collection: [name]"
  2. Add multiple sources:
    notebooklm source add "https://url1.com"
    notebooklm source add "https://url2.com"
    notebooklm source add ./local-file.pdf
    
  3. notebooklm source list to verify
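A scripted version of the bulk add, as a sketch (urls.txt is a hypothetical file with one URL per line; jq is assumed):

while read -r url; do
  notebooklm source add "$url" --json | jq -r '.source_id'
done < urls.txt
notebooklm source list --json     # verify count and status afterwards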

Source limits: max 50 sources per notebook
Supported types: PDFs, YouTube URLs, web URLs, Google Docs, text files

Bulk Import with Source Waiting (Subagent Pattern)

Time: Varies by source count

When adding multiple sources and needing to wait for processing before chat/generation:

  1. Add sources with --json to capture IDs:
    notebooklm source add "https://url1.com" --json  # → {"source_id": "abc..."}
    notebooklm source add "https://url2.com" --json  # → {"source_id": "def..."}
    
  2. Spawn a background agent to wait for all sources:
    Task(
      prompt="Wait for sources {source_ids} in notebook {notebook_id} to be ready.
              For each: notebooklm source wait {id} -n {notebook_id} --timeout 120
              Report when all ready or if any fail.",
      subagent_type="general-purpose"
    )
    
  3. Main conversation continues while agent waits
  4. Once sources are ready, proceed with chat or generation

Why wait for sources? Sources must be indexed before chat or generation. Takes 10-60 seconds per source.

Deep Web Research (Subagent Pattern)

Time: 2-5 minutes, runs in background

Deep research finds and analyzes web sources on a topic:

  1. Create notebook: notebooklm create "Research: [topic]"
  2. Start deep research (non-blocking):
    notebooklm source add-research "topic query" --mode deep --no-wait
    
  3. Spawn a background agent to wait and import:
    Task(
      prompt="Wait for research in notebook {notebook_id} to complete and import sources.
              Use: notebooklm research wait -n {notebook_id} --import-all --timeout 300
              Report how many sources were imported.",
      subagent_type="general-purpose"
    )
    
  4. Main conversation continues while agent waits
  5. When agent completes, sources are imported automatically

Alternative (blocking): For simple cases, omit --no-wait:

notebooklm source add-research "topic" --mode deep --import-all
# Blocks for up to 5 minutes

When to use each mode:

  • --mode fast: Specific topic, quick overview needed (5-10 sources, seconds)
  • --mode deep: Broad topic, comprehensive analysis needed (20+ sources, 2-5 min)

Research sources:

  • --from web: Search the web (default)
  • --from drive: Search Google Drive
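For example (query text is a placeholder, and combining --from with the other flags is an assumption about how they compose):

# Fast: quick overview from the web (default source)
notebooklm source add-research "history of RISC-V adoption"

# Deep: comprehensive sweep of Google Drive, importing everything found
notebooklm source add-research "team design docs on caching" --mode deep --from drive --import-all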

Output Style

Progress updates: Brief status for each step

  • "Creating notebook 'Research: AI'..."
  • "Adding source: https://example.com..."
  • "Starting audio generation... (task ID: abc123)"

Fire-and-forget for long operations:

  • Start generation, return artifact ID immediately
  • Do NOT poll or wait in main conversation - generation takes 5-45 minutes (see timing table)
  • User checks status manually, OR use subagent with artifact wait

JSON output: Use --json flag for machine-readable output:

notebooklm list --json
notebooklm auth check --json
notebooklm source list --json
notebooklm artifact list --json

JSON schemas (key fields):

notebooklm list --json:

{"notebooks": [{"id": "...", "title": "...", "created_at": "..."}]}

notebooklm auth check --json:

{"checks": {"storage_exists": true, "json_valid": true, "cookies_present": true, "sid_cookie": true, "token_fetch": true}, "details": {"storage_path": "...", "auth_source": "file", "cookies_found": ["SID", "HSID", "..."], "cookie_domains": [".google.com"]}}

notebooklm source list --json:

{"sources": [{"id": "...", "title": "...", "status": "ready|processing|error"}]}

notebooklm artifact list --json:

{"artifacts": [{"id": "...", "title": "...", "type": "Audio Overview", "status": "in_progress|pending|completed|unknown"}]}

Status values:

  • Sources: processing → ready (or error)
  • Artifacts: pending or in_progress → completed (or unknown)

Error Handling

On failure, offer the user a choice:

  1. Retry the operation
  2. Skip and continue with something else
  3. Investigate the error

Error decision tree:

Error | Cause | Action
Auth/cookie error | Session expired | Run notebooklm auth check, then notebooklm login
"No notebook context" | Context not set | Use -n <id> or --notebook <id> flag (parallel), or notebooklm use <id> (single-agent)
"No result found for RPC ID" | Rate limiting | Wait 5-10 min, retry
GENERATION_FAILED | Google rate limit | Wait and retry later
Download fails | Generation incomplete | Check artifact list for status
Invalid notebook/source ID | Wrong ID | Run notebooklm list to verify
RPC protocol error | Google changed APIs | May need CLI update

Exit Codes

All commands use consistent exit codes:

Code | Meaning | Action
0 | Success | Continue
1 | Error (not found, processing failed) | Check stderr, see Error Handling
2 | Timeout (wait commands only) | Extend timeout or check status manually

Examples:

  • source wait returns 1 if source not found or processing failed
  • artifact wait returns 2 if timeout reached before completion
  • generate returns 1 if rate limited (check stderr for details)
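A sketch of branching on these codes in a script (IDs are placeholders):

notebooklm artifact wait <artifact_id> -n <notebook_id> --timeout 600
case $? in
  0) notebooklm download audio ./podcast.mp3 -a <artifact_id> -n <notebook_id> ;;
  2) echo "Timed out; check 'notebooklm artifact list' and retry the wait" ;;
  *) echo "Wait failed; see stderr and the error decision tree above" ;;
esac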

Known Limitations

Rate limiting: Audio, video, quiz, flashcards, infographic, and slides generation may fail due to Google's rate limits. This is an API limitation, not a bug.

Reliable operations: These always work:

  • Notebooks (list, create, delete, rename)
  • Sources (add, list, delete)
  • Chat/queries
  • Mind-map, study-guide, FAQ, data-table generation

Unreliable operations: These may fail with rate limiting:

  • Audio (podcast) generation
  • Video generation
  • Quiz and flashcard generation
  • Infographic and slides generation

Workaround: If generation fails:

  1. Check status: notebooklm artifact list
  2. Retry after 5-10 minutes
  3. Use the NotebookLM web UI as fallback

Processing times vary significantly. Use the subagent pattern for long operations:

Operation | Typical time | Suggested timeout
Source processing | 30s - 10 min | 600s
Research (fast) | 30s - 2 min | 180s
Research (deep) | 15 - 30+ min | 1800s
Notes | instant | n/a
Mind-map | instant (sync) | n/a
Quiz, flashcards | 5 - 15 min | 900s
Report, data-table | 5 - 15 min | 900s
Audio generation | 10 - 20 min | 1200s
Video generation | 15 - 45 min | 2700s

Polling intervals: When checking status manually, poll every 15-30 seconds to avoid excessive API calls.
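A manual polling sketch that respects that interval (assumes jq; the artifact and notebook IDs are placeholders):

while true; do
  STATUS=$(notebooklm artifact list --notebook <notebook_id> --json \
    | jq -r '.artifacts[] | select(.id == "<artifact_id>") | .status')
  [ "$STATUS" = "completed" ] && break
  sleep 20    # stay within the suggested 15-30 second polling interval
done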

Language Configuration

Language setting controls the output language for generated artifacts (audio, video, etc.).

Important: Language is a GLOBAL setting that affects all notebooks in your account.

# List all 80+ supported languages with native names
notebooklm language list

# Show current language setting
notebooklm language get

# Set language for artifact generation
notebooklm language set zh_Hans  # Simplified Chinese
notebooklm language set ja       # Japanese
notebooklm language set en       # English (default)

Common language codes:

Code | Language
en | English
zh_Hans | 中文(简体) - Simplified Chinese
zh_Hant | 中文(繁體) - Traditional Chinese
ja | 日本語 - Japanese
ko | 한국어 - Korean
es | Español - Spanish
fr | Français - French
de | Deutsch - German
pt_BR | Português (Brasil)

Override per command: Use --language flag on generate commands:

notebooklm generate audio --language ja   # Japanese podcast
notebooklm generate video --language zh_Hans  # Chinese video

Offline mode: Use --local flag to skip server sync:

notebooklm language set zh_Hans --local  # Save locally only
notebooklm language get --local  # Read local config only

Troubleshooting

notebooklm --help              # Main commands
notebooklm auth check          # Diagnose auth issues
notebooklm auth check --test   # Full auth validation with network test
notebooklm notebook --help     # Notebook management
notebooklm source --help       # Source management
notebooklm research --help     # Research status/wait
notebooklm generate --help     # Content generation
notebooklm artifact --help     # Artifact management
notebooklm download --help     # Download content
notebooklm language --help     # Language settings

Diagnose auth: notebooklm auth check (shows cookie domains, storage path, validation status)
Re-authenticate: notebooklm login
Check version: notebooklm --version
Update skill: notebooklm skill install