Automate Google NotebookLM - create notebooks, add sources, generate podcasts/videos/quizzes, download artifacts. Activates on explicit /notebooklm or intent like "create a podcast about X"
Usage
After installing, this skill will be available to your AI coding assistant.
Verify installation:
skills list
Skill Instructions
name: notebooklm
description: Automate Google NotebookLM - create notebooks, add sources, generate podcasts/videos/quizzes, download artifacts. Activates on explicit /notebooklm or intent like "create a podcast about X"
NotebookLM Automation
Automate Google NotebookLM: create notebooks, add sources, chat with content, generate artifacts (podcasts, videos, quizzes), and download results.
Installation
From PyPI (Recommended):
pip install notebooklm-py
From GitHub (use latest release tag, NOT main branch):
# Get the latest release tag (using curl)
LATEST_TAG=$(curl -s https://api.github.com/repos/teng-lin/notebooklm-py/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
pip install "git+https://github.com/teng-lin/notebooklm-py@${LATEST_TAG}"
⚠️ DO NOT install from main branch (pip install git+https://github.com/teng-lin/notebooklm-py). The main branch may contain unreleased/unstable changes. Always use PyPI or a specific release tag, unless you are testing unreleased features.
After installation, install the Claude Code skill:
notebooklm skill install
Prerequisites
IMPORTANT: Before using any command, you MUST authenticate:
notebooklm login # Opens browser for Google OAuth
notebooklm list # Verify authentication works
If commands fail with authentication errors, re-run notebooklm login.
CI/CD, Multiple Accounts, and Parallel Agents
For automated environments, multiple accounts, or parallel agent workflows:
| Variable | Purpose |
|---|---|
| NOTEBOOKLM_HOME | Custom config directory (default: ~/.notebooklm) |
| NOTEBOOKLM_AUTH_JSON | Inline auth JSON - no file writes needed |
CI/CD setup: Set NOTEBOOKLM_AUTH_JSON from a secret containing your storage_state.json contents.
Multiple accounts: Use different NOTEBOOKLM_HOME directories per account.
Parallel agents: The CLI stores notebook context in a shared file (~/.notebooklm/context.json). Multiple concurrent agents using notebooklm use can overwrite each other's context.
Solutions for parallel workflows:
- Always use explicit notebook IDs (recommended): Pass -n <notebook_id> (for wait/download commands) or --notebook <notebook_id> (for others) instead of relying on use
- Per-agent isolation: Set a unique NOTEBOOKLM_HOME per agent: export NOTEBOOKLM_HOME=/tmp/agent-$ID
- Use full UUIDs: Avoid partial IDs in automation (they can become ambiguous)
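A minimal sketch of both setups; the CI secret variable (CI_SECRET_STORAGE_STATE) and AGENT_ID are illustrative names, not part of the CLI:
# CI/CD: supply auth inline from a secret holding storage_state.json (no file writes)
export NOTEBOOKLM_AUTH_JSON="$CI_SECRET_STORAGE_STATE"   # hypothetical secret name
# Parallel agents: give each agent its own config directory so contexts never collide
export NOTEBOOKLM_HOME="/tmp/agent-$AGENT_ID"            # hypothetical per-agent ID
notebooklm list --json   # quick check that auth works inside the isolated environment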
Agent Setup Verification
Before starting workflows, verify the CLI is ready:
- notebooklm status → Should show "Authenticated as: email@..."
- notebooklm list --json → Should return valid JSON (even if the notebooks list is empty)
- If either fails → Run notebooklm login
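A scripted version of this pre-flight check, as a sketch that assumes both commands exit non-zero on failure (not stated in this document):
# Re-authenticate only if either readiness probe fails (exit-code behavior assumed)
if ! notebooklm status || ! notebooklm list --json > /dev/null; then
  notebooklm login   # opens a browser for Google OAuth
fi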
When This Skill Activates
Explicit: User says "/notebooklm", "use notebooklm", or mentions the tool by name
Intent detection: Recognize requests like:
- "Create a podcast about [topic]"
- "Summarize these URLs/documents"
- "Generate a quiz from my research"
- "Turn this into an audio overview"
- "Add these sources to NotebookLM"
Autonomy Rules
Run automatically (no confirmation):
- notebooklm status - check context
- notebooklm auth check - diagnose auth issues
- notebooklm list - list notebooks
- notebooklm source list - list sources
- notebooklm artifact list - list artifacts
- notebooklm language list - list supported languages
- notebooklm language get - get current language
- notebooklm language set - set language (global setting)
- notebooklm artifact wait - wait for artifact completion (in subagent context)
- notebooklm source wait - wait for source processing (in subagent context)
- notebooklm research status - check research status
- notebooklm research wait - wait for research (in subagent context)
- notebooklm use <id> - set context (⚠️ SINGLE-AGENT ONLY - use the -n flag in parallel workflows)
- notebooklm create - create notebook
- notebooklm ask "..." - chat queries
- notebooklm source add - add sources
Ask before running:
- notebooklm delete - destructive
- notebooklm generate * - long-running, may fail
- notebooklm download * - writes to filesystem
- notebooklm artifact wait - long-running (when in main conversation)
- notebooklm source wait - long-running (when in main conversation)
- notebooklm research wait - long-running (when in main conversation)
Quick Reference
| Task | Command |
|---|---|
| Authenticate | notebooklm login |
| Diagnose auth issues | notebooklm auth check |
| Diagnose auth (full) | notebooklm auth check --test |
| List notebooks | notebooklm list |
| Create notebook | notebooklm create "Title" |
| Set context | notebooklm use <notebook_id> |
| Show context | notebooklm status |
| Add URL source | notebooklm source add "https://..." |
| Add file | notebooklm source add ./file.pdf |
| Add YouTube | notebooklm source add "https://youtube.com/..." |
| List sources | notebooklm source list |
| Wait for source processing | notebooklm source wait <source_id> |
| Web research (fast) | notebooklm source add-research "query" |
| Web research (deep) | notebooklm source add-research "query" --mode deep --no-wait |
| Check research status | notebooklm research status |
| Wait for research | notebooklm research wait --import-all |
| Chat | notebooklm ask "question" |
| Chat (new conversation) | notebooklm ask "question" --new |
| Chat (specific sources) | notebooklm ask "question" -s src_id1 -s src_id2 |
| Chat (with references) | notebooklm ask "question" --json |
| Get source fulltext | notebooklm source fulltext <source_id> |
| Get source guide | notebooklm source guide <source_id> |
| Generate podcast | notebooklm generate audio "instructions" |
| Generate podcast (JSON) | notebooklm generate audio --json |
| Generate podcast (specific sources) | notebooklm generate audio -s src_id1 -s src_id2 |
| Generate video | notebooklm generate video "instructions" |
| Generate quiz | notebooklm generate quiz |
| Check artifact status | notebooklm artifact list |
| Wait for completion | notebooklm artifact wait <artifact_id> |
| Download audio | notebooklm download audio ./output.mp3 |
| Download video | notebooklm download video ./output.mp4 |
| Download report | notebooklm download report ./report.md |
| Download mind map | notebooklm download mind-map ./map.json |
| Download data table | notebooklm download data-table ./data.csv |
| Download quiz | notebooklm download quiz quiz.json |
| Download quiz (markdown) | notebooklm download quiz --format markdown quiz.md |
| Download flashcards | notebooklm download flashcards cards.json |
| Download flashcards (markdown) | notebooklm download flashcards --format markdown cards.md |
| Delete notebook | notebooklm notebook delete <id> |
| List languages | notebooklm language list |
| Get language | notebooklm language get |
| Set language | notebooklm language set zh_Hans |
Parallel safety: Use explicit notebook IDs in parallel workflows. Commands supporting -n shorthand: artifact wait, source wait, research wait/status, download *. Download commands also support -a/--artifact. Other commands use --notebook. For chat, use --new to start fresh conversations (avoids conversation ID conflicts).
Partial IDs: Use first 6+ characters of UUIDs. Must be unique prefix (fails if ambiguous). Works for: use, delete, wait commands. For automation, prefer full UUIDs to avoid ambiguity.
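For example, a parallel-safe wait-and-download pair that never relies on the shared context file (IDs are placeholders):
# Explicit notebook and artifact IDs: safe when several agents run concurrently
notebooklm artifact wait "$ARTIFACT_ID" -n "$NOTEBOOK_ID" --timeout 1200
notebooklm download audio ./podcast.mp3 -a "$ARTIFACT_ID" -n "$NOTEBOOK_ID"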
Command Output Formats
Commands with --json return structured data for parsing:
Create notebook:
$ notebooklm create "Research" --json
{"id": "abc123de-...", "title": "Research"}
Add source:
$ notebooklm source add "https://example.com" --json
{"source_id": "def456...", "title": "Example", "status": "processing"}
Generate artifact:
$ notebooklm generate audio "Focus on key points" --json
{"task_id": "xyz789...", "status": "pending"}
Chat with references:
$ notebooklm ask "What is X?" --json
{"answer": "X is... [1] [2]", "conversation_id": "...", "turn_number": 1, "is_follow_up": false, "references": [{"source_id": "abc123...", "citation_number": 1, "cited_text": "Relevant passage from source..."}, {"source_id": "def456...", "citation_number": 2, "cited_text": "Another passage..."}]}
Source fulltext (get indexed content):
$ notebooklm source fulltext <source_id> --json
{"source_id": "...", "title": "...", "char_count": 12345, "content": "Full indexed text..."}
Understanding citations: The cited_text in references is often a snippet or section header, not the full quoted passage. The start_char/end_char positions reference NotebookLM's internal chunked index, not the raw fulltext. Use SourceFulltext.find_citation_context() to locate citations:
fulltext = await client.sources.get_fulltext(notebook_id, ref.source_id)
matches = fulltext.find_citation_context(ref.cited_text) # Returns list[(context, position)]
if matches:
    context, pos = matches[0]  # First match; check len(matches) > 1 for duplicates
Extract IDs: Parse the id, source_id, or task_id field from JSON output.
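One way to capture an ID in a shell pipeline, assuming python3 is on PATH (jq would work equally well):
# Capture the new notebook's ID from the --json output
NOTEBOOK_ID=$(notebooklm create "Research" --json | python3 -c "import json,sys; print(json.load(sys.stdin)['id'])")
echo "Created notebook: $NOTEBOOK_ID"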
Generation Types
All generate commands support:
- -s, --source to use specific source(s) instead of all sources
- --language to set output language (defaults to configured language or 'en')
- --json for machine-readable output (returns task_id and status)
| Type | Command | Downloadable |
|---|---|---|
| Podcast | generate audio | Yes (.mp3) |
| Video | generate video | Yes (.mp4) |
| Slides | generate slide-deck | Yes (.pdf) |
| Infographic | generate infographic | Yes (.png) |
| Report | generate report | Yes (.md) |
| Mind Map | generate mind-map | Yes (.json) |
| Data Table | generate data-table | Yes (.csv) |
| Quiz | generate quiz | Yes (.json/.md/.html) |
| Flashcards | generate flashcards | Yes (.json/.md/.html) |
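Combining those flags, for example (source IDs are placeholders), a Japanese-language podcast generated from two specific sources with machine-readable output:
# Generate audio from selected sources only, in Japanese, returning task_id/status as JSON
notebooklm generate audio "Focus on the methodology" -s src_abc123 -s src_def456 --language ja --json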
Common Workflows
Research to Podcast (Interactive)
Time: 5-10 minutes total
notebooklm create "Research: [topic]"— if fails: check auth withnotebooklm loginnotebooklm source addfor each URL/document — if one fails: log warning, continue with others- Wait for sources:
notebooklm source list --jsonuntil all status=READY — required before generation notebooklm generate audio "Focus on [specific angle]"(confirm when asked) — if rate limited: wait 5 min, retry once- Note the artifact ID returned
- Check
notebooklm artifact listlater for status notebooklm download audio ./podcast.mp3when complete (confirm when asked)
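Strung together as one script, the same flow might look like this sketch (topic and URLs are illustrative; the readiness loop assumes python3 and the lowercase status values shown in the JSON schemas below):
notebooklm create "Research: AI safety"
notebooklm source add "https://example.com/a" || echo "warn: source a failed, continuing"
notebooklm source add "https://example.com/b" || echo "warn: source b failed, continuing"
# Poll every 30 s until every source reports ready
until notebooklm source list --json | python3 -c "
import json, sys
d = json.load(sys.stdin)
sys.exit(0 if all(s['status'] == 'ready' for s in d['sources']) else 1)"; do
  sleep 30
done
notebooklm generate audio "Focus on open problems"   # confirm with the user before running
# Later: note the artifact ID, check notebooklm artifact list, then
# notebooklm download audio ./podcast.mp3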
Research to Podcast (Automated with Subagent)
Time: 5-10 minutes, but continues in background
When user wants full automation (generate and download when ready):
- Create notebook and add sources as usual
- Wait for sources to be ready (use source wait or check source list --json)
- Run notebooklm generate audio "..." --json → parse artifact_id from output
- Spawn a background agent using the Task tool:
  Task(
    prompt="Wait for artifact {artifact_id} in notebook {notebook_id} to complete, then download.
            Use: notebooklm artifact wait {artifact_id} -n {notebook_id} --timeout 600
            Then: notebooklm download audio ./podcast.mp3 -a {artifact_id} -n {notebook_id}",
    subagent_type="general-purpose"
  )
- Main conversation continues while agent waits
Error handling in subagent:
- If artifact wait returns exit code 2 (timeout): Report timeout, suggest checking artifact list
- If download fails: Check if artifact status is COMPLETED first
Benefits: Non-blocking, user can do other work, automatic download on completion
Document Analysis
Time: 1-2 minutes
notebooklm create "Analysis: [project]"notebooklm source add ./doc.pdf(or URLs)notebooklm ask "Summarize the key points"notebooklm ask "What are the main arguments?"- Continue chatting as needed
Bulk Import
Time: Varies by source count
notebooklm create "Collection: [name]"- Add multiple sources:
notebooklm source add "https://url1.com" notebooklm source add "https://url2.com" notebooklm source add ./local-file.pdf notebooklm source listto verify
Source limits: Max 50 sources per notebook Supported types: PDFs, YouTube URLs, web URLs, Google Docs, text files
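The same import expressed as a loop, with illustrative URLs, so one failed source does not stop the rest:
notebooklm create "Collection: papers"
for url in "https://url1.com" "https://url2.com" "https://url3.com"; do
  notebooklm source add "$url" || echo "warn: failed to add $url, continuing"
done
notebooklm source add ./local-file.pdf
notebooklm source list   # verify everything landed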
Bulk Import with Source Waiting (Subagent Pattern)
Time: Varies by source count
When adding multiple sources and needing to wait for processing before chat/generation:
- Add sources with --json to capture IDs:
  notebooklm source add "https://url1.com" --json   # → {"source_id": "abc..."}
  notebooklm source add "https://url2.com" --json   # → {"source_id": "def..."}
- Spawn a background agent to wait for all sources:
  Task(
    prompt="Wait for sources {source_ids} in notebook {notebook_id} to be ready.
            For each: notebooklm source wait {id} -n {notebook_id} --timeout 120
            Report when all ready or if any fail.",
    subagent_type="general-purpose"
  )
- Main conversation continues while agent waits
- Once sources are ready, proceed with chat or generation
Why wait for sources? Sources must be indexed before chat or generation. Takes 10-60 seconds per source.
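For a single source, the blocking form is one command (IDs are placeholders):
# Block until this source is indexed, or fail after 120 seconds
notebooklm source wait "$SOURCE_ID" -n "$NOTEBOOK_ID" --timeout 120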
Deep Web Research (Subagent Pattern)
Time: 2-5 minutes, runs in background
Deep research finds and analyzes web sources on a topic:
- Create notebook: notebooklm create "Research: [topic]"
- Start deep research (non-blocking): notebooklm source add-research "topic query" --mode deep --no-wait
- Spawn a background agent to wait and import:
  Task(
    prompt="Wait for research in notebook {notebook_id} to complete and import sources.
            Use: notebooklm research wait -n {notebook_id} --import-all --timeout 300
            Report how many sources were imported.",
    subagent_type="general-purpose"
  )
- Main conversation continues while agent waits
- When agent completes, sources are imported automatically
Alternative (blocking): For simple cases, omit --no-wait:
notebooklm source add-research "topic" --mode deep --import-all
# Blocks for up to 5 minutes
When to use each mode:
- --mode fast: Specific topic, quick overview needed (5-10 sources, seconds)
- --mode deep: Broad topic, comprehensive analysis needed (20+ sources, 2-5 min)
Research sources:
- --from web: Search the web (default)
- --from drive: Search Google Drive
Output Style
Progress updates: Brief status for each step
- "Creating notebook 'Research: AI'..."
- "Adding source: https://example.com..."
- "Starting audio generation... (task ID: abc123)"
Fire-and-forget for long operations:
- Start generation, return artifact ID immediately
- Do NOT poll or wait in main conversation - generation takes 5-45 minutes (see timing table)
- User checks status manually, OR use a subagent with artifact wait
JSON output: Use --json flag for machine-readable output:
notebooklm list --json
notebooklm auth check --json
notebooklm source list --json
notebooklm artifact list --json
JSON schemas (key fields):
notebooklm list --json:
{"notebooks": [{"id": "...", "title": "...", "created_at": "..."}]}
notebooklm auth check --json:
{"checks": {"storage_exists": true, "json_valid": true, "cookies_present": true, "sid_cookie": true, "token_fetch": true}, "details": {"storage_path": "...", "auth_source": "file", "cookies_found": ["SID", "HSID", "..."], "cookie_domains": [".google.com"]}}
notebooklm source list --json:
{"sources": [{"id": "...", "title": "...", "status": "ready|processing|error"}]}
notebooklm artifact list --json:
{"artifacts": [{"id": "...", "title": "...", "type": "Audio Overview", "status": "in_progress|pending|completed|unknown"}]}
Status values:
- Sources: processing → ready (or error)
- Artifacts: pending or in_progress → completed (or unknown)
Error Handling
On failure, offer the user a choice:
- Retry the operation
- Skip and continue with something else
- Investigate the error
Error decision tree:
| Error | Cause | Action |
|---|---|---|
| Auth/cookie error | Session expired | Run notebooklm auth check then notebooklm login |
| "No notebook context" | Context not set | Use -n <id> or --notebook <id> flag (parallel), or notebooklm use <id> (single-agent) |
| "No result found for RPC ID" | Rate limiting | Wait 5-10 min, retry |
| GENERATION_FAILED | Google rate limit | Wait and retry later |
| Download fails | Generation incomplete | Check artifact list for status |
| Invalid notebook/source ID | Wrong ID | Run notebooklm list to verify |
| RPC protocol error | Google changed APIs | May need CLI update |
Exit Codes
All commands use consistent exit codes:
| Code | Meaning | Action |
|---|---|---|
| 0 | Success | Continue |
| 1 | Error (not found, processing failed) | Check stderr, see Error Handling |
| 2 | Timeout (wait commands only) | Extend timeout or check status manually |
Examples:
- source wait returns 1 if source not found or processing failed
- artifact wait returns 2 if timeout reached before completion
- generate returns 1 if rate limited (check stderr for details)
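A sketch of acting on those exit codes from a script (the timeout value and IDs are illustrative):
# Distinguish timeout (2) from hard failure (1) after waiting on an artifact
notebooklm artifact wait "$ARTIFACT_ID" -n "$NOTEBOOK_ID" --timeout 1200
case $? in
  0) notebooklm download audio ./podcast.mp3 -a "$ARTIFACT_ID" -n "$NOTEBOOK_ID" ;;
  2) echo "Timed out; check later with: notebooklm artifact list" ;;
  *) echo "Failed; see stderr and the Error Handling table" ;;
esac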
Known Limitations
Rate limiting: Audio, video, quiz, flashcards, infographic, and slides generation may fail due to Google's rate limits. This is an API limitation, not a bug.
Reliable operations: These always work:
- Notebooks (list, create, delete, rename)
- Sources (add, list, delete)
- Chat/queries
- Mind-map, study-guide, FAQ, data-table generation
Unreliable operations: These may fail with rate limiting:
- Audio (podcast) generation
- Video generation
- Quiz and flashcard generation
- Infographic and slides generation
Workaround: If generation fails:
- Check status: notebooklm artifact list
- Use the NotebookLM web UI as fallback
Processing times vary significantly. Use the subagent pattern for long operations:
| Operation | Typical time | Suggested timeout |
|---|---|---|
| Source processing | 30s - 10 min | 600s |
| Research (fast) | 30s - 2 min | 180s |
| Research (deep) | 15 - 30+ min | 1800s |
| Notes | instant | n/a |
| Mind-map | instant (sync) | n/a |
| Quiz, flashcards | 5 - 15 min | 900s |
| Report, data-table | 5 - 15 min | 900s |
| Audio generation | 10 - 20 min | 1200s |
| Video generation | 15 - 45 min | 2700s |
Polling intervals: When checking status manually, poll every 15-30 seconds to avoid excessive API calls.
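If you do poll manually instead of using artifact wait, a loop like this sketch respects that interval (assumes python3 for JSON parsing):
# Poll every 30 seconds until no artifact is pending or in_progress
while notebooklm artifact list --json | python3 -c "
import json, sys
arts = json.load(sys.stdin)['artifacts']
sys.exit(0 if any(a['status'] in ('pending', 'in_progress') for a in arts) else 1)"; do
  sleep 30
done
echo "All artifacts finished; inspect them with: notebooklm artifact list"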
Language Configuration
Language setting controls the output language for generated artifacts (audio, video, etc.).
Important: Language is a GLOBAL setting that affects all notebooks in your account.
# List all 80+ supported languages with native names
notebooklm language list
# Show current language setting
notebooklm language get
# Set language for artifact generation
notebooklm language set zh_Hans # Simplified Chinese
notebooklm language set ja # Japanese
notebooklm language set en # English (default)
Common language codes:
| Code | Language |
|---|---|
| en | English |
| zh_Hans | 中文(简体) - Simplified Chinese |
| zh_Hant | 中文(繁體) - Traditional Chinese |
| ja | 日本語 - Japanese |
| ko | 한국어 - Korean |
| es | Español - Spanish |
| fr | Français - French |
| de | Deutsch - German |
| pt_BR | Português (Brasil) |
Override per command: Use --language flag on generate commands:
notebooklm generate audio --language ja # Japanese podcast
notebooklm generate video --language zh_Hans # Chinese video
Offline mode: Use --local flag to skip server sync:
notebooklm language set zh_Hans --local # Save locally only
notebooklm language get --local # Read local config only
Troubleshooting
notebooklm --help # Main commands
notebooklm auth check # Diagnose auth issues
notebooklm auth check --test # Full auth validation with network test
notebooklm notebook --help # Notebook management
notebooklm source --help # Source management
notebooklm research --help # Research status/wait
notebooklm generate --help # Content generation
notebooklm artifact --help # Artifact management
notebooklm download --help # Download content
notebooklm language --help # Language settings
Diagnose auth: notebooklm auth check - shows cookie domains, storage path, validation status
Re-authenticate: notebooklm login
Check version: notebooklm --version
Update skill: notebooklm skill install
