Agent Skills
eyadsibai

langchain-agents

@eyadsibai/langchain-agents
Updated 3/31/2026

Use when "LangChain", "LLM chains", "ReAct agents", "tool calling", or asking about "RAG pipelines", "conversation memory", "document QA", "agent tools", "LangSmith"

Installation

$ npx agent-skills-cli install @eyadsibai/langchain-agents

Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Repository: eyadsibai/ltk
Path: plugins/ltk-core/skills/langchain-agents/SKILL.md
Branch: master
Scoped Name: @eyadsibai/langchain-agents

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


---
name: langchain-agents
description: Use when "LangChain", "LLM chains", "ReAct agents", "tool calling", or asking about "RAG pipelines", "conversation memory", "document QA", "agent tools", "LangSmith"
version: 1.0.0
---

LangChain - LLM Applications with Agents & RAG

One of the most widely used frameworks for building LLM-powered applications.

When to Use

  • Building agents with tool calling and reasoning (ReAct pattern)
  • Implementing RAG (retrieval-augmented generation) pipelines
  • Need to swap LLM providers easily (OpenAI, Anthropic, Google)
  • Creating chatbots with conversation memory
  • Rapid prototyping of LLM applications

Core Components

| Component | Purpose | Key Concept |
|---|---|---|
| Chat Models | LLM interface | Unified API across providers |
| Agents | Tool use + reasoning | ReAct pattern |
| Chains | Sequential operations | Composable pipelines |
| Memory | Conversation state | Buffer, summary, vector |
| Retrievers | Document lookup | Vector search, hybrid |
| Tools | External capabilities | Functions agents can call |
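The "composable pipelines" idea behind chains can be sketched in plain Python. This is a conceptual sketch, not LangChain's actual Runnable API: each step is a function, and a chain is their left-to-right composition.

```python
from functools import reduce

def make_chain(*steps):
    """Compose steps left-to-right: the output of one feeds the next."""
    return lambda x: reduce(lambda acc, step: step(acc), steps, x)

# Toy steps standing in for prompt formatting, an LLM call, and output parsing.
format_prompt = lambda question: f"Q: {question}\nA:"
fake_llm = lambda prompt: prompt + " 42"            # stands in for a model call
parse_output = lambda text: text.split("A:")[1].strip()

chain = make_chain(format_prompt, fake_llm, parse_output)
print(chain("What is 6 * 7?"))  # -> 42
```

In LangChain proper, the same shape appears as prompt | model | parser; the point is that each component has a single input/output contract, which is what makes them swappable.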

Agent Patterns

| Pattern | Description | Use Case |
|---|---|---|
| ReAct | Reason-Act-Observe loop | General tool use |
| Plan-and-Execute | Plan first, then execute | Complex multi-step |
| Self-Ask | Generate sub-questions | Research tasks |
| Structured Chat | JSON tool calling | API integration |
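The ReAct loop itself is simple enough to sketch with a scripted stand-in for the model. This is a toy illustration of the Reason-Act-Observe control flow, not LangChain's agent executor; in a real agent the LLM, not a canned policy, chooses the action.

```python
def calculator(expression: str) -> str:
    # Toy tool; never eval untrusted input in real code.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def scripted_llm(question: str, observations: list) -> dict:
    """Stands in for the model: decide the next action or a final answer."""
    if not observations:                       # Reason: no facts yet -> act
        return {"action": "calculator", "input": "6 * 7"}
    return {"final_answer": observations[-1]}  # Reason: enough info -> answer

def react_agent(question: str, max_steps: int = 5) -> str:
    observations = []
    for _ in range(max_steps):
        decision = scripted_llm(question, observations)
        if "final_answer" in decision:
            return decision["final_answer"]
        tool = TOOLS[decision["action"]]               # Act
        observations.append(tool(decision["input"]))   # Observe
    return "gave up"

print(react_agent("What is 6 * 7?"))  # -> 42
```

The max_steps cap matters in practice: without it, a confused model can loop on tool calls indefinitely.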

Tool Definition

| Element | Purpose |
|---|---|
| Name | How agent refers to tool |
| Description | When to use (critical for selection) |
| Parameters | Input schema |
| Return type | What agent receives back |

Key concept: Tool descriptions are critical—the LLM uses them to decide which tool to call. Be specific about when and why to use each tool.
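The four elements map naturally onto a plain dataclass. A minimal sketch (hypothetical `Tool` type, not LangChain's `@tool` decorator); note how the description is written for the model, not for humans:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str          # how the agent refers to the tool
    description: str   # what the LLM reads when choosing a tool
    parameters: dict   # input schema (name -> type)
    func: Callable     # the actual capability

def get_weather(city: str) -> str:
    return f"Sunny in {city}"   # stub implementation

weather = Tool(
    name="get_weather",
    description="Use ONLY for questions about the current weather in a specific city.",
    parameters={"city": "string"},
    func=get_weather,
)

# The model sees only name + description + parameters; the runtime calls func.
print(weather.func("Paris"))  # -> Sunny in Paris
```

A vague description like "weather tool" invites the model to call it for climate trivia; the "Use ONLY for..." phrasing narrows selection.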


RAG Pipeline Stages

| Stage | Purpose | Options |
|---|---|---|
| Load | Ingest documents | Web, PDF, GitHub, DBs |
| Split | Chunk into pieces | Recursive, semantic |
| Embed | Convert to vectors | OpenAI, Cohere, local |
| Store | Index vectors | Chroma, FAISS, Pinecone |
| Retrieve | Find relevant chunks | Similarity, MMR, hybrid |
| Generate | Create response | LLM with context |
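The six stages can be walked through end to end with toy stand-ins for each component: a bag-of-words set instead of an embedding model, a list instead of a vector store, word overlap instead of cosine similarity, and a template instead of the LLM. Only the pipeline shape is real here.

```python
docs = ["LangChain supports agents.",
        "RAG retrieves documents.",
        "Paris is in France."]                  # Load

chunks = docs                                   # Split (already small here)

def embed(text: str) -> set:                    # Embed: bag-of-words stand-in
    return set(text.lower().rstrip(".").split())

store = [(embed(c), c) for c in chunks]         # Store

def retrieve(query: str, k: int = 1) -> list:   # Retrieve: overlap as "similarity"
    q = embed(query)
    return [c for _, c in sorted(store, key=lambda e: -len(e[0] & q))][:k]

def generate(query: str) -> str:                # Generate: template as the "LLM"
    context = " ".join(retrieve(query))
    return f"Based on: {context}"

print(generate("What does RAG do?"))  # context comes from the RAG document
```

Swapping any one stage (a real embedder, a real store) leaves the others untouched, which is the practical argument for keeping the stages decoupled.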

Chunking Strategies

| Strategy | Best For | Typical Size |
|---|---|---|
| Recursive | General text | 500-1000 chars |
| Semantic | Coherent passages | Variable |
| Token-based | LLM context limits | 256-512 tokens |
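The idea behind recursive splitting is to try coarse separators (paragraphs) first, recurse into finer ones (lines, words) only for oversized pieces, then greedily merge small pieces back up toward the target size. A simplified sketch of that idea, not LangChain's RecursiveCharacterTextSplitter implementation:

```python
def recursive_split(text, chunk_size=30, separators=("\n\n", "\n", " ")):
    """Split on the coarsest separator first; recurse with finer ones."""
    if len(text) <= chunk_size:
        return [text] if text else []
    if not separators:                  # no separator left: hard character cut
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    sep, *rest = separators
    pieces = []
    for part in text.split(sep):
        if len(part) > chunk_size:
            pieces.extend(recursive_split(part, chunk_size, tuple(rest)))
        else:
            pieces.append(part)
    # Greedily merge adjacent small pieces back up toward chunk_size.
    merged, current = [], ""
    for p in pieces:
        candidate = (current + sep + p) if current else p
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            merged.append(current)
            current = p
    if current:
        merged.append(current)
    return merged

text = "First paragraph here.\n\nSecond one is much much longer than the limit."
for chunk in recursive_split(text, chunk_size=30):
    print(repr(chunk))
```

The merge step is what keeps recursion from degenerating into word-level chunks: splitting alone shrinks pieces, merging restores them to near the target size along natural boundaries.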

Retrieval Strategies

| Strategy | How It Works |
|---|---|
| Similarity | Nearest neighbors by embedding |
| MMR | Diversity + relevance balance |
| Hybrid | Keyword + semantic combined |
| Self-query | LLM generates metadata filters |
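MMR's "diversity + relevance balance" is concrete enough to implement directly: at each step pick the document maximizing lambda * sim(query, doc) - (1 - lambda) * max sim(doc, already-selected). A self-contained sketch over toy 2-D vectors:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def mmr(query, docs, k=2, lam=0.5):
    """Maximal Marginal Relevance: relevance to the query minus redundancy
    with what has already been selected."""
    selected, remaining = [], list(range(len(docs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query, docs[i])
            redundancy = max((cosine(docs[i], docs[j]) for j in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

query = [1.0, 0.1]
docs = [[1.0, 0.0], [0.98, 0.2], [0.2, 1.0]]   # docs 0 and 1 are near-duplicates
print(mmr(query, docs, k=2))  # -> [0, 2]: skips the near-duplicate doc 1
```

Plain similarity search would return the two near-duplicates; MMR trades a little relevance for coverage, which usually helps RAG answers that need more than one fact.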

Memory Types

| Type | Stores | Best For |
|---|---|---|
| Buffer | Full conversation | Short conversations |
| Window | Last N messages | Medium conversations |
| Summary | LLM-generated summary | Long conversations |
| Vector | Embedded messages | Semantic recall |
| Entity | Extracted entities | Track facts about people/things |

Key concept: Buffer memory grows unbounded. Use summary or vector for long conversations to stay within context limits.
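The window and summary strategies reduce to a trimming policy over the message list. A conceptual sketch (not LangChain's memory classes); in real use the `summarize` callback would call an LLM rather than emit a placeholder:

```python
def window_memory(messages, n=4):
    """Window: keep only the last n messages."""
    return messages[-n:]

def summary_memory(messages, n=4, summarize=None):
    """Summary: compress everything older than the last n messages."""
    summarize = summarize or (lambda ms: f"[summary of {len(ms)} messages]")
    old, recent = messages[:-n], messages[-n:]
    return ([summarize(old)] if old else []) + recent

history = [f"msg{i}" for i in range(10)]
print(window_memory(history))   # last 4 messages only; older context is lost
print(summary_memory(history))  # summary placeholder + last 4 messages
```

The trade-off is visible in the sketch: window memory is cheap but forgets, summary memory preserves gist at the cost of an extra LLM call per compression.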


Document Loaders

| Source | Loader Type |
|---|---|
| Web pages | WebBaseLoader, AsyncChromium |
| PDFs | PyPDFLoader, UnstructuredPDF |
| Code | GitHubLoader, DirectoryLoader |
| Databases | SQLDatabase, Postgres |
| APIs | Custom loaders |

Vector Stores

| Store | Type | Best For |
|---|---|---|
| Chroma | Local | Development, small datasets |
| FAISS | Local | Large local datasets |
| Pinecone | Cloud | Production, scale |
| Weaviate | Self-hosted/Cloud | Hybrid search |
| Qdrant | Self-hosted/Cloud | Filtering, metadata |
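Stripped of persistence, approximate indexing, and scale, every store in the table exposes roughly the same contract: add a (vector, payload) pair, then rank by similarity. A toy in-memory version of that contract (hypothetical class, not any store's real client API):

```python
import math

class InMemoryVectorStore:
    """Toy vector store: the add/search contract, minus everything hard."""
    def __init__(self):
        self._entries = []                      # (vector, text) pairs

    def add(self, vector, text):
        self._entries.append((vector, text))

    def similarity_search(self, query, k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb)
        ranked = sorted(self._entries, key=lambda e: -cos(e[0], query))
        return [text for _, text in ranked[:k]]

store = InMemoryVectorStore()
store.add([1.0, 0.0], "about cats")
store.add([0.0, 1.0], "about finance")
print(store.similarity_search([0.9, 0.1]))  # -> ['about cats']
```

Because the contract is uniform, swapping Chroma for Pinecone in a LangChain pipeline is mostly a configuration change rather than a rewrite.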

LangSmith Observability

| Feature | Benefit |
|---|---|
| Tracing | See every LLM call, tool use |
| Evaluation | Test prompts systematically |
| Datasets | Store test cases |
| Monitoring | Track production performance |

Key concept: Enable LangSmith tracing early—debugging agents without observability is extremely difficult.
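Enabling tracing is typically an environment-variable change rather than a code change. The variable names below are the commonly documented ones, but they have varied across LangSmith releases, so verify against the current LangSmith docs:

```shell
# Assumed variable names; check current LangSmith docs before relying on them.
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="<your-langsmith-api-key>"
export LANGCHAIN_PROJECT="my-agent-dev"   # optional: group traces by project
```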


Best Practices

| Practice | Why |
|---|---|
| Start simple | create_agent() covers most cases |
| Enable streaming | Better UX for long responses |
| Use LangSmith | Essential for debugging |
| Optimize chunk size | 500-1000 chars typically works |
| Cache embeddings | They're expensive to compute |
| Test retrieval separately | RAG quality depends on retrieval |

LangChain vs LangGraph

| Aspect | LangChain | LangGraph |
|---|---|---|
| Best for | Quick agents, RAG | Complex workflows |
| Code to start | <10 lines | ~30 lines |
| State management | Limited | Native |
| Branching logic | Basic | Advanced |
| Human-in-loop | Manual | Built-in |

Key concept: Use LangChain for straightforward agents and RAG. Use LangGraph when you need complex state machines, branching, or human checkpoints.

Resources