Minimal implementation of Recursive Language Models (RLM) using Gemini 2.0 Flash and a local Python REPL. Enables processing of massive contexts via the Gemini CLI.
## Installation

After installing, this skill will be available to your AI coding assistant.

Verify installation:

```shell
npx agent-skills-cli list
```

## Skill Instructions
```yaml
name: gemini-rlm-min
description: Minimal implementation of Recursive Language Models (RLM) using Gemini 2.0 Flash and a local Python REPL. Enables processing of massive contexts via the Gemini CLI.
version: 1.0.0
category: cross-model
allowed-tools:
  - Read
  - Write
  - Edit
  - Bash
triggers:
  - "gemini rlm"
  - "gemini context"
  - "large document gemini"
  - "gemini cli"
```
# Gemini RLM (Minimal)

**Purpose:** Provide a lightweight, CLI-based implementation of the Recursive Language Model architecture using Google's Gemini models. This skill processes extremely large documents by orchestrating chunking, sub-LLM processing, and synthesis entirely via a Python script and the Gemini API.
## Architecture
Based on arXiv:2512.24601 - Recursive Language Models.
| Component | Implementation | Model |
|---|---|---|
| Root LLM | `gem_rlm.py` (orchestrator) | Gemini 2.0 Flash |
| Sub-LLM | `gem_rlm.py` (chunk processor) | Gemini 2.0 Flash |
| External environment | `scripts/rlm_repl.py` | Python 3 |
## Prerequisites

- **Environment variable:** `GEMINI_API_KEY` must be set in your shell environment.

```shell
export GEMINI_API_KEY="your_api_key_here"
```
## Usage

The primary entry point is the `gem_rlm.py` script.

### Syntax

```shell
${SKILLS_ROOT}/gemini-rlm-min/gem_rlm.py --context <path_to_large_file> --query "your query" [options]
```
### Options

- `--chunk-size`: size of chunks in characters (default: 50000)
- `--overlap`: overlap between consecutive chunks in characters (default: 0)
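The interaction between these two options can be pictured with a minimal sketch (a hypothetical helper for illustration, not the skill's actual code): each chunk is `chunk_size` characters long, and consecutive chunks step forward by `chunk_size - overlap` characters, so the last `overlap` characters of one chunk reappear at the start of the next.

```python
def chunk_text(text: str, chunk_size: int = 50_000, overlap: int = 0) -> list[str]:
    """Split text into chunks of up to chunk_size characters.

    Each chunk shares its first `overlap` characters with the tail of
    the previous chunk, so information on a chunk boundary is seen twice.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

For example, `chunk_text("abcdefghij", chunk_size=4, overlap=1)` yields `["abcd", "defg", "ghij", "j"]`: each chunk repeats the last character of its predecessor.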
### Examples

Analyze a large log file:

```shell
export GEMINI_API_KEY="AIza..."
${SKILLS_ROOT}/gemini-rlm-min/gem_rlm.py --context ./large_logs.txt --query "Identify all security exceptions and their timestamps"
```

Summarize a book:

```shell
${SKILLS_ROOT}/gemini-rlm-min/gem_rlm.py --context ./mobydick.txt --query "Summarize the relationship between Ahab and Starbuck" --chunk-size 100000
```
## How It Works

1. **Initialization:** The script starts a persistent Python REPL (`rlm_repl.py`) and loads the large context file into memory.
2. **Chunking:** The context is split into manageable chunks (e.g., 50k characters) using the REPL.
3. **Sub-LLM processing:** The script iterates through each chunk, sending it to `gemini-2.0-flash-exp` with a prompt to extract relevant information.
4. **Synthesis:** The extracted findings from all chunks are aggregated and sent to the root LLM (also Gemini 2.0 Flash) to generate the final answer.
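The map-reduce loop above can be sketched in a few lines of Python. This is an illustrative outline, not the skill's actual implementation: `call_gemini` is a hypothetical placeholder for a real Gemini 2.0 Flash API call, and the prompts are simplified.

```python
def call_gemini(prompt: str) -> str:
    # Placeholder: replace with a real Gemini 2.0 Flash API call.
    raise NotImplementedError

def rlm_answer(context: str, query: str, chunk_size: int = 50_000,
               llm=call_gemini) -> str:
    # Chunking: split the context into fixed-size pieces (no overlap here).
    chunks = [context[i:i + chunk_size]
              for i in range(0, len(context), chunk_size)]
    # Sub-LLM processing (map): one extraction call per chunk.
    findings = [
        llm(f"Extract information relevant to: {query}\n\n{chunk}")
        for chunk in chunks
    ]
    # Synthesis (reduce): the root LLM answers from the aggregated notes.
    notes = "\n---\n".join(findings)
    return llm(f"Using these notes, answer: {query}\n\n{notes}")
```

Because `llm` is injected as a parameter, the orchestration can be exercised with a stub before wiring in the real API; a 100-character context with `chunk_size=40` produces three sub-LLM calls plus one synthesis call.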
## File Structure

```
gemini-rlm-min/
├── SKILL.md          # This definition file
├── gem_rlm.py        # Main CLI orchestrator
├── scripts/
│   └── rlm_repl.py   # Persistent REPL environment
└── state/            # Runtime state storage (chunks, pickle files)
```
## Integration with IRP

This skill serves as a high-speed, low-overhead alternative to the full `rlm-context-manager` when:
- Quick analysis is needed via CLI.
- The context needs to be processed entirely by Gemini models.
- Minimal dependencies are preferred (no complex agent setup required).
