langchain-core-workflow-b

@jeremylongshore/langchain-core-workflow-b
jeremylongshore
1,004
123 forks
Updated 1/18/2026

Build LangChain agents with tools for autonomous task execution. Use when creating AI agents, implementing tool calling, or building autonomous workflows with decision-making. Trigger with phrases like "langchain agents", "langchain tools", "tool calling", "langchain autonomous", "create agent", "function calling".

Installation

$ skills install @jeremylongshore/langchain-core-workflow-b
Compatible with Claude Code, Cursor, Copilot, Codex, and Antigravity.

Details

Path: plugins/saas-packs/langchain-pack/skills/langchain-core-workflow-b/SKILL.md
Branch: main
Scoped Name: @jeremylongshore/langchain-core-workflow-b

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

skills list

Skill Instructions


name: langchain-core-workflow-b
description: |
  Build LangChain agents with tools for autonomous task execution. Use when creating AI agents, implementing tool calling, or building autonomous workflows with decision-making. Trigger with phrases like "langchain agents", "langchain tools", "tool calling", "langchain autonomous", "create agent", "function calling".
allowed-tools: Read, Write, Edit
version: 1.0.0
license: MIT
author: Jeremy Longshore <jeremy@intentsolutions.io>

LangChain Core Workflow B: Agents & Tools

Overview

Build autonomous agents that can use tools, make decisions, and execute multi-step tasks using LangChain's agent framework.

Prerequisites

  • Completed langchain-core-workflow-a (chains)
  • Understanding of function/tool calling concepts (see the sketch after this list)
  • Familiarity with async programming
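
If the tool-calling primitive itself is new, here is a minimal sketch (not part of the original skill; it assumes an OPENAI_API_KEY is set and uses an illustrative add tool) showing how a chat model bound to tools returns structured tool calls:

from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

@tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

# The model now knows the tool's name and schema and may emit a tool call
llm_with_tools = ChatOpenAI(model="gpt-4o-mini").bind_tools([add])
msg = llm_with_tools.invoke("What is 2 + 2?")
print(msg.tool_calls)  # e.g. [{"name": "add", "args": {"a": 2, "b": 2}, ...}]

The agent framework in the steps below automates this loop: call the tool, feed the result back to the model, and repeat until it can answer.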

Instructions

Step 1: Define Tools

from langchain_core.tools import tool
from pydantic import BaseModel, Field

class SearchInput(BaseModel):
    query: str = Field(description="The search query")

@tool(args_schema=SearchInput)
def search_web(query: str) -> str:
    """Search the web for information."""
    # Implement actual search logic
    return f"Search results for: {query}"

@tool
def calculate(expression: str) -> str:
    """Evaluate a mathematical expression."""
    try:
        result = eval(expression)  # Use safer alternative in production
        return str(result)
    except Exception as e:
        return f"Error: {e}"

@tool
def get_current_time() -> str:
    """Get the current date and time."""
    from datetime import datetime
    return datetime.now().isoformat()

tools = [search_web, calculate, get_current_time]
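
The calculate tool above uses eval() only as a placeholder. One safer alternative (a sketch, not part of the original skill; the calculate_safe name is illustrative) walks the parsed AST and allows only numeric literals and basic arithmetic operators:

import ast
import operator

from langchain_core.tools import tool

# Whitelisted operators for basic arithmetic
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def _safe_eval(node):
    """Evaluate an AST restricted to numeric literals and the operators above."""
    if isinstance(node, ast.Expression):
        return _safe_eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_safe_eval(node.left), _safe_eval(node.right))
    if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_safe_eval(node.operand))
    raise ValueError("Unsupported expression")

@tool
def calculate_safe(expression: str) -> str:
    """Evaluate a basic arithmetic expression without eval()."""
    try:
        return str(_safe_eval(ast.parse(expression, mode="eval")))
    except Exception as e:
        return f"Error: {e}"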

Step 2: Create Agent with Tools

from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

llm = ChatOpenAI(model="gpt-4o-mini")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant with access to tools."),
    MessagesPlaceholder(variable_name="chat_history", optional=True),
    ("human", "{input}"),
    # Required by create_tool_calling_agent: intermediate tool calls and results land here
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

agent = create_tool_calling_agent(llm, tools, prompt)

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=10,
    handle_parsing_errors=True
)

Step 3: Run the Agent

# Simple invocation
result = agent_executor.invoke({
    "input": "What's 25 * 4 and what time is it?"
})
print(result["output"])

# With chat history
from langchain_core.messages import HumanMessage, AIMessage

history = [
    HumanMessage(content="Hi, I'm Alice"),
    AIMessage(content="Hello Alice! How can I help you?")
]

result = agent_executor.invoke({
    "input": "What's my name?",
    "chat_history": history
})
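
To also see which tools the agent called and with what arguments, AgentExecutor accepts return_intermediate_steps=True. A sketch reusing the agent and tools defined above (agent_executor_debug is an illustrative name):

agent_executor_debug = AgentExecutor(
    agent=agent,
    tools=tools,
    return_intermediate_steps=True,
)

result = agent_executor_debug.invoke({"input": "What's 25 * 4?"})
print(result["output"])

# Each intermediate step is an (AgentAction, observation) pair
for action, observation in result["intermediate_steps"]:
    print(f"{action.tool}({action.tool_input}) -> {observation}")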

Step 4: Streaming Agent Output

async def stream_agent():
    async for event in agent_executor.astream_events(
        {"input": "Search for LangChain news"},
        version="v2"
    ):
        if event["event"] == "on_chat_model_stream":
            print(event["data"]["chunk"].content, end="", flush=True)
        elif event["event"] == "on_tool_start":
            print(f"\n[Using tool: {event['name']}]")

Output

  • Typed tool definitions with Pydantic schemas
  • Configured agent executor with error handling
  • Working agent that can reason and use tools
  • Streaming output for real-time feedback

Advanced Patterns

Custom Tool with Async Support

from langchain_core.tools import StructuredTool

async def async_search(query: str) -> str:
    """Async search implementation."""
    import aiohttp
    async with aiohttp.ClientSession() as session:
        # Implement async search
        return f"Async results for: {query}"

search_tool = StructuredTool.from_function(
    # The sync fallback must accept the same argument name as the coroutine
    func=lambda query: f"Sync fallback results for: {query}",
    coroutine=async_search,
    name="search",
    description="Search the web"
)
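
Either path can then be exercised through the standard runnable interface; a quick check, assuming the tool is defined as above:

import asyncio

# Awaiting the tool uses the coroutine
print(asyncio.run(search_tool.ainvoke({"query": "LangChain releases"})))

# A plain invoke uses the sync fallback
print(search_tool.invoke({"query": "LangChain releases"}))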

Agent with Memory

from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

message_history = ChatMessageHistory()

agent_with_memory = RunnableWithMessageHistory(
    agent_executor,
    lambda session_id: message_history,
    input_messages_key="input",
    history_messages_key="chat_history"
)

result = agent_with_memory.invoke(
    {"input": "Remember, I prefer Python"},
    config={"configurable": {"session_id": "user123"}}
)
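
A follow-up call with the same session_id then sees the earlier turn:

followup = agent_with_memory.invoke(
    {"input": "Which language did I say I prefer?"},
    config={"configurable": {"session_id": "user123"}}
)
print(followup["output"])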

Error Handling

Error            Cause                      Solution
Tool Not Found   Tool name mismatch         Verify tool names in prompt
Max Iterations   Agent stuck in loop        Increase limit or improve prompts
Parse Error      Invalid tool call format   Enable handle_parsing_errors
Tool Error       Tool execution failed      Add try/except in tool functions
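
For the Parse Error case, handle_parsing_errors also accepts a custom message (or a callable) that is sent back to the model instead of raising; a sketch reconfiguring the executor from Step 2:

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    handle_parsing_errors="Your last response was not a valid tool call. Try again.",
    max_iterations=10,
)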

Resources

Next Steps

Proceed to langchain-common-errors for debugging guidance.
