Apply production-ready LangChain SDK patterns for chains, agents, and memory. Use when implementing LangChain integrations, refactoring code, or establishing team coding standards for LangChain applications. Trigger with phrases like "langchain SDK patterns", "langchain best practices", "langchain code patterns", "idiomatic langchain", "langchain architecture".
## Installation
After installing, this skill will be available to your AI coding assistant.

Verify installation:

```
skills list
```

## Skill Instructions
```yaml
name: langchain-sdk-patterns
description: |
  Apply production-ready LangChain SDK patterns for chains, agents, and memory.
  Use when implementing LangChain integrations, refactoring code, or establishing
  team coding standards for LangChain applications. Trigger with phrases like
  "langchain SDK patterns", "langchain best practices", "langchain code patterns",
  "idiomatic langchain", "langchain architecture".
allowed-tools: Read, Write, Edit
version: 1.0.0
license: MIT
author: Jeremy Longshore <jeremy@intentsolutions.io>
```
# LangChain SDK Patterns

## Overview

Production-ready patterns for LangChain applications, including LCEL chains, structured output, and error handling.
## Prerequisites

- Completed `langchain-install-auth` setup
- Familiarity with async/await patterns
- Understanding of error handling best practices
## Core Patterns
### Pattern 1: Type-Safe Chain with Pydantic

```python
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI


class SentimentResult(BaseModel):
    """Structured output for sentiment analysis."""

    sentiment: str = Field(description="positive, negative, or neutral")
    confidence: float = Field(description="Confidence score 0-1")
    reasoning: str = Field(description="Brief explanation")


llm = ChatOpenAI(model="gpt-4o-mini")
structured_llm = llm.with_structured_output(SentimentResult)

prompt = ChatPromptTemplate.from_template("Analyze the sentiment of: {text}")

chain = prompt | structured_llm

# Returns a typed SentimentResult
result: SentimentResult = chain.invoke({"text": "I love LangChain!"})
print(f"Sentiment: {result.sentiment} ({result.confidence})")
```
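Without `with_structured_output`, you would prompt the model for JSON and validate it by hand, with no guarantee the expected fields exist. The stdlib-only sketch below (using a hard-coded stand-in reply, not a real model call) illustrates the manual path this pattern automates:

```python
import json

# Stand-in for a raw model reply; a real model may or may not honor the schema.
raw_reply = '{"sentiment": "positive", "confidence": 0.97, "reasoning": "Enthusiastic wording"}'

data = json.loads(raw_reply)

# Manual validation that with_structured_output + Pydantic performs for you
required = {"sentiment", "confidence", "reasoning"}
missing = required - set(data)
if missing:
    raise ValueError(f"model reply missing fields: {missing}")

sentiment = data["sentiment"]
```

With the Pydantic pattern, this bookkeeping (and type coercion of `confidence` to `float`) happens inside the chain.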
### Pattern 2: Retry with Fallback

```python
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

primary = ChatOpenAI(model="gpt-4o")
fallback = ChatAnthropic(model="claude-3-5-sonnet-20241022")

# Automatically falls back to Claude if the OpenAI call fails
robust_llm = primary.with_fallbacks([fallback])

response = robust_llm.invoke("Hello!")
```
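Conceptually, `with_fallbacks` tries each runnable in order and moves on when one raises. A plain-Python sketch of that mechanism (a simplification for illustration, not LangChain's actual implementation) makes the control flow explicit:

```python
# Simplified sketch of the fallback mechanism: try each "model" in order,
# re-raising the last error only if all of them fail.
def invoke_with_fallbacks(models, prompt):
    last_error = None
    for model in models:
        try:
            return model(prompt)
        except Exception as exc:
            last_error = exc  # remember the failure, try the next model
    raise last_error


# Stand-ins for real model clients (hypothetical, for demonstration only)
def broken_primary(prompt):
    raise TimeoutError("primary unavailable")


def working_fallback(prompt):
    return f"fallback handled: {prompt}"


response = invoke_with_fallbacks([broken_primary, working_fallback], "Hello!")
```

The real runnable also lets you restrict which exception types trigger the fallback via the `exceptions_to_handle` parameter.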
### Pattern 3: Async Batch Processing

```python
import asyncio

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Summarize: {text}")
chain = prompt | llm


async def process_batch(texts: list[str]) -> list:
    """Process multiple texts concurrently."""
    inputs = [{"text": t} for t in texts]
    results = await chain.abatch(inputs, config={"max_concurrency": 5})
    return results


# Usage
results = asyncio.run(process_batch(["text1", "text2", "text3"]))
```
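The `max_concurrency` option caps how many requests are in flight at once, which protects you from rate limits. Its effect can be sketched in pure asyncio with a semaphore (an illustration of the idea, not LangChain's internals); `fake_summarize` is a hypothetical stand-in for a model call:

```python
import asyncio


async def limited_map(fn, items, max_concurrency=5):
    """Run fn over items concurrently, but never more than
    max_concurrency at a time (roughly what abatch's option does)."""
    sem = asyncio.Semaphore(max_concurrency)

    async def run(item):
        async with sem:
            return await fn(item)

    # gather preserves input order in its results
    return await asyncio.gather(*(run(i) for i in items))


async def fake_summarize(text: str) -> str:
    await asyncio.sleep(0.01)  # stand-in for network latency
    return f"summary of {text}"


results = asyncio.run(limited_map(fake_summarize, ["a", "b", "c"]))
```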
### Pattern 4: Streaming with Callbacks

```python
from langchain_core.callbacks import StreamingStdOutCallbackHandler
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
)

# Streams tokens to stdout as they arrive; the callback handles printing,
# so the loop body only needs to consume the chunks
for chunk in llm.stream("Tell me a story"):
    pass
```
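The payoff of streaming is that the consumer sees partial output as it is produced rather than waiting for the full response. A generator-based sketch (illustrative only; real chunks are `AIMessageChunk` objects, not strings) shows the consumption pattern:

```python
# Stand-in for a streaming model: yields one chunk at a time
def fake_stream(text: str):
    for word in text.split():
        yield word + " "


received = []
for chunk in fake_stream("Once upon a time"):
    received.append(chunk)  # e.g. append to a UI incrementally

full = "".join(received).strip()
```

In a real chain you would read `chunk.content` inside the loop instead of appending raw strings.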
### Pattern 5: Caching for Cost Reduction

```python
from langchain_community.cache import SQLiteCache
from langchain_core.globals import set_llm_cache
from langchain_openai import ChatOpenAI

# Enable SQLite-backed caching for all LLM calls
set_llm_cache(SQLiteCache(database_path=".langchain_cache.db"))

llm = ChatOpenAI(model="gpt-4o-mini")

# First call hits the API
response1 = llm.invoke("What is 2+2?")

# Second identical call is served from the cache (no API cost)
response2 = llm.invoke("What is 2+2?")
```
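Under the hood, an LLM cache is a lookup keyed on the prompt (plus model configuration): a hit returns the stored response and skips the API entirely. A dict-based sketch of that idea (a simplification, not how `SQLiteCache` is actually implemented):

```python
# Conceptual LLM cache: key on (model, prompt), count real "API calls"
cache: dict[tuple[str, str], str] = {}
api_calls = {"n": 0}


def cached_invoke(model: str, prompt: str) -> str:
    key = (model, prompt)
    if key not in cache:
        api_calls["n"] += 1  # only a cache miss costs an API call
        cache[key] = f"{model} answer to: {prompt}"  # stand-in for the real call
    return cache[key]


first = cached_invoke("gpt-4o-mini", "What is 2+2?")
second = cached_invoke("gpt-4o-mini", "What is 2+2?")  # served from cache
```

Note that caching only helps for exact-match prompts; paraphrased queries still miss.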
## Output

- Type-safe chains with Pydantic models
- Robust error handling with fallbacks
- Efficient async batch processing
- Cost-effective caching strategies
## Error Handling

### Standard Error Pattern

```python
import time

from langchain_core.exceptions import OutputParserException
from openai import APIError, RateLimitError


def safe_invoke(chain, input_data, max_retries=3):
    """Invoke a chain with retry, parse-failure, and API error handling."""
    for attempt in range(max_retries):
        try:
            return chain.invoke(input_data)
        except RateLimitError:
            if attempt < max_retries - 1:
                time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...
                continue
            raise
        except OutputParserException as e:
            # Handle parsing failures by returning the raw model output
            return {"error": str(e), "raw": e.llm_output}
        except APIError as e:
            raise RuntimeError(f"API error: {e}")
```
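The retry-with-backoff core of `safe_invoke` can be exercised without the OpenAI SDK by parameterizing the retriable exception type. The helper below is an illustrative generic version (not part of LangChain), driven by a stub that fails twice before succeeding:

```python
import time


def retry_with_backoff(fn, *, retriable=(TimeoutError,), max_retries=3, base=0.01):
    """Generic form of the retry loop: exponential backoff between
    retriable failures, re-raise after the final attempt."""
    for attempt in range(max_retries):
        try:
            return fn()
        except retriable:
            if attempt < max_retries - 1:
                time.sleep(base * 2 ** attempt)  # backoff grows each attempt
                continue
            raise


# Stub that simulates two transient failures, then success
calls = {"n": 0}


def sometimes_fails():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("rate limited")
    return "ok"


result = retry_with_backoff(sometimes_fails)
```

In production, prefer the built-in `Runnable.with_retry(...)` over hand-rolled loops where it fits.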
## Resources

### Next Steps

Proceed to `langchain-core-workflow-a` for the chains and prompts workflow.