Build LangChain chains and prompts for structured LLM workflows. Use when creating prompt templates, building LCEL chains, or implementing sequential processing pipelines. Trigger with phrases like "langchain chains", "langchain prompts", "LCEL workflow", "langchain pipeline", "prompt template".
## Installation

After installing, this skill will be available to your AI coding assistant.

Verify installation:

```
skills list
```

## Skill Instructions

```yaml
name: langchain-core-workflow-a
description: |
  Build LangChain chains and prompts for structured LLM workflows. Use when creating prompt templates, building LCEL chains, or implementing sequential processing pipelines. Trigger with phrases like "langchain chains", "langchain prompts", "LCEL workflow", "langchain pipeline", "prompt template".
allowed-tools: Read, Write, Edit
version: 1.0.0
license: MIT
author: Jeremy Longshore <jeremy@intentsolutions.io>
```
# LangChain Core Workflow A: Chains & Prompts
## Overview
Build production-ready chains using LangChain Expression Language (LCEL) with prompt templates, output parsers, and composition patterns.
## Prerequisites

- Completed `langchain-install-auth` setup
- Understanding of prompt engineering basics
- Familiarity with Python type hints
## Instructions

### Step 1: Create Prompt Templates
```python
from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)

# Simple template
simple_prompt = ChatPromptTemplate.from_template(
    "Translate '{text}' to {language}"
)

# Chat-style template
chat_prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(
        "You are a {role}. Respond in {style} style."
    ),
    MessagesPlaceholder(variable_name="history", optional=True),
    HumanMessagePromptTemplate.from_template("{input}"),
])
```
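To sanity-check a template before wiring it into a chain, you can render it directly; `format_messages` is part of `ChatPromptTemplate`'s public API. The variable values below are purely illustrative:

```python
# Render the chat template without calling a model (values are illustrative)
messages = chat_prompt.format_messages(
    role="helpful translator",
    style="concise",
    input="How do I say 'good morning' in French?",
)
for message in messages:
    print(f"{message.type}: {message.content}")
```

Because the `history` placeholder was declared with `optional=True`, it can simply be omitted here.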
### Step 2: Build LCEL Chains
```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")

# Basic chain: prompt -> llm -> parser
basic_chain = simple_prompt | llm | StrOutputParser()

# Invoke the chain
result = basic_chain.invoke({
    "text": "Hello, world!",
    "language": "Spanish",
})
print(result)  # "¡Hola, mundo!"
```
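The same pipeline shape works for structured output. A minimal sketch using `JsonOutputParser`, assuming the prompt instructs the model to answer in JSON (the prompt wording here is illustrative):

```python
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate

# Prompt the model to emit JSON so the parser has something to parse
json_prompt = ChatPromptTemplate.from_template(
    "Return a JSON object with keys 'translation' and 'language' "
    "for the text '{text}' translated to {language}."
)
json_chain = json_prompt | llm | JsonOutputParser()

data = json_chain.invoke({"text": "Hello, world!", "language": "Spanish"})
print(data["translation"])  # parsed dict, not a raw string
```

Chains built this way also expose `.stream()` and `.batch()` from the same Runnable interface, with no extra wiring.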
### Step 3: Chain Composition
```python
from langchain_core.runnables import RunnableParallel

# prompt1, prompt2, prompt3 are placeholder ChatPromptTemplates defined elsewhere

# Sequential chain: feed chain1's output into chain2 as the "summary" variable
chain1 = prompt1 | llm | StrOutputParser()
chain2 = prompt2 | llm | StrOutputParser()
sequential = chain1 | (lambda x: {"summary": x}) | chain2

# Parallel execution
parallel = RunnableParallel(
    summary=prompt1 | llm | StrOutputParser(),
    keywords=prompt2 | llm | StrOutputParser(),
    sentiment=prompt3 | llm | StrOutputParser(),
)

results = parallel.invoke({"text": "Your input text"})
# Returns: {"summary": "...", "keywords": "...", "sentiment": "..."}
```
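Because `RunnableParallel` returns a plain dict, its output can flow straight into a downstream prompt whose variables match the keys. A sketch, where `report_prompt` is a hypothetical follow-up template:

```python
from langchain_core.prompts import ChatPromptTemplate

# Hypothetical follow-up prompt that consumes all three parallel outputs
report_prompt = ChatPromptTemplate.from_template(
    "Write a short report.\n"
    "Summary: {summary}\nKeywords: {keywords}\nSentiment: {sentiment}"
)

# The parallel dict feeds directly into the next prompt's variables
report_chain = parallel | report_prompt | llm | StrOutputParser()
report = report_chain.invoke({"text": "Your input text"})
```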
### Step 4: Branching Logic
```python
from langchain_core.runnables import RunnableBranch

# question_chain, command_chain, and default_chain are LCEL chains defined elsewhere

# Conditional branching: the first matching condition wins
branch = RunnableBranch(
    (lambda x: x["type"] == "question", question_chain),
    (lambda x: x["type"] == "command", command_chain),
    default_chain,  # Fallback when no condition matches
)

result = branch.invoke({"type": "question", "input": "What is AI?"})
```
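The three referenced chains can be any Runnables. A minimal sketch of stand-ins, with illustrative names and prompts, defined before the branch is constructed:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# Minimal stand-in chains so the branch above is runnable end to end
question_chain = (
    ChatPromptTemplate.from_template("Answer the question: {input}")
    | llm
    | StrOutputParser()
)
command_chain = (
    ChatPromptTemplate.from_template("Carry out this instruction: {input}")
    | llm
    | StrOutputParser()
)
default_chain = (
    ChatPromptTemplate.from_template("Respond helpfully to: {input}")
    | llm
    | StrOutputParser()
)
```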
## Output
- Reusable prompt templates with variable substitution
- Type-safe LCEL chains with clear data flow
- Composable chain patterns (sequential, parallel, branching)
- Consistent output parsing
## Examples

### Multi-Step Processing Chain
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")

# Step 1: Extract key points
extract_prompt = ChatPromptTemplate.from_template(
    "Extract 3 key points from: {text}"
)

# Step 2: Summarize
summarize_prompt = ChatPromptTemplate.from_template(
    "Create a one-sentence summary from these points: {points}"
)

# Compose the chain
chain = (
    {"points": extract_prompt | llm | StrOutputParser()}
    | summarize_prompt
    | llm
    | StrOutputParser()
)

summary = chain.invoke({"text": "Long article text here..."})
```
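To process several documents in one call, the standard Runnable `.batch()` method works on the same chain; the inputs below are placeholders:

```python
# Run the chain over multiple inputs concurrently
summaries = chain.batch([
    {"text": "First article text..."},
    {"text": "Second article text..."},
])
```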
### With Context Injection
```python
from langchain_core.runnables import RunnablePassthrough

def get_context(input_dict):
    """Fetch relevant context from a database."""
    return f"Context for: {input_dict['query']}"

# prompt is a ChatPromptTemplate that uses the {query} and {context} variables

# assign() merges the computed context into the input dict
chain = (
    RunnablePassthrough.assign(context=get_context)
    | prompt
    | llm
    | StrOutputParser()
)

result = chain.invoke({"query": "user question"})
```
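The same chain also runs asynchronously via the Runnable `ainvoke` method; a minimal sketch (in a real service the context-fetching function could itself be async):

```python
import asyncio

async def main():
    # Non-blocking invocation of the same chain
    result = await chain.ainvoke({"query": "user question"})
    print(result)

asyncio.run(main())
```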
## Error Handling
| Error | Cause | Solution |
|---|---|---|
| Missing variable | A template variable was not provided | Check that input dict keys match the template variables |
| Type error | Wrong input type | Ensure inputs match the expected schema |
| Parse error | Output doesn't match the parser's format | Use more specific prompts or add a fallback chain |
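Beyond prompt fixes, the Runnable interface provides `with_retry()` and `with_fallbacks()` for transient failures. A minimal sketch reusing the chains from Step 2 (the retry count and fallback choice are illustrative):

```python
# Retry transient failures up to 3 times, then fall back to a simpler chain
robust_chain = basic_chain.with_retry(stop_after_attempt=3).with_fallbacks(
    [simple_prompt | llm | StrOutputParser()]
)

result = robust_chain.invoke({"text": "Hello", "language": "French"})
```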
## Next Steps

Proceed to `langchain-core-workflow-b` for the agents and tools workflow.