langchain-hello-world
@jeremylongshore/langchain-hello-world
by jeremylongshore
1,004 stars · 123 forks
Updated 1/18/2026
View on GitHub

Create a minimal working LangChain example. Use when starting a new LangChain integration, testing your setup, or learning basic LangChain patterns with chains and prompts. Trigger with phrases like "langchain hello world", "langchain example", "langchain quick start", "simple langchain code", "first langchain app".

Installation

$ skills install @jeremylongshore/langchain-hello-world

Compatible with: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: plugins/saas-packs/langchain-pack/skills/langchain-hello-world/SKILL.md
Branch: main
Scoped Name: @jeremylongshore/langchain-hello-world

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

skills list

Skill Instructions


---
name: langchain-hello-world
description: |
  Create a minimal working LangChain example. Use when starting a new LangChain integration, testing your setup, or learning basic LangChain patterns with chains and prompts. Trigger with phrases like "langchain hello world", "langchain example", "langchain quick start", "simple langchain code", "first langchain app".
allowed-tools: Read, Write, Edit
version: 1.0.0
license: MIT
author: Jeremy Longshore <jeremy@intentsolutions.io>
---

LangChain Hello World

Overview

Minimal working example demonstrating core LangChain functionality with chains and prompts.

Prerequisites

  • Completed langchain-install-auth setup
  • Valid LLM provider API credentials configured
  • Python 3.9+ or Node.js 18+ environment ready
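Before writing any chain code, it can help to confirm the credentials are actually visible to your process. A minimal sketch, assuming the OpenAI provider used in the examples below (whose SDK reads the `OPENAI_API_KEY` environment variable; the function name here is illustrative):

```python
import os

def check_credentials(env=os.environ):
    """Return True if the OpenAI API key is visible to this process."""
    return bool(env.get("OPENAI_API_KEY"))

if __name__ == "__main__":
    if check_credentials():
        print("Credentials found; environment looks ready.")
    else:
        print("OPENAI_API_KEY is not set; export it before running the example.")
```

If this prints the failure message, fix your environment (or revisit the langchain-install-auth setup) before continuing; the later steps will only fail with a less obvious auth error.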

Instructions

Step 1: Create Entry File

Create a new file hello_langchain.py for your hello world example.

Step 2: Import and Initialize

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")

Step 3: Create Your First Chain

from langchain_core.output_parsers import StrOutputParser

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

chain = prompt | llm | StrOutputParser()

response = chain.invoke({"input": "Hello, LangChain!"})
print(response)

Output

  • Working Python file with LangChain chain
  • Successful LLM response confirming connection
  • Console output showing:
Hello! I'm your LangChain-powered assistant. How can I help you today?

Error Handling

| Error | Cause | Solution |
| --- | --- | --- |
| Import Error | SDK not installed | Run pip install langchain langchain-openai |
| Auth Error | Invalid credentials | Check that the API key environment variable is set |
| Timeout | Network issues | Increase the timeout or check connectivity |
| Rate Limit | Too many requests | Wait and retry with exponential backoff |
| Model Not Found | Invalid model name | Check available models in the provider docs |
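The rate-limit row deserves a concrete sketch. Below is a generic retry wrapper with exponential backoff and jitter; the function name and parameters are illustrative, not part of LangChain (recent langchain_core versions also expose a built-in .with_retry() on runnables, which is worth checking first):

```python
import random
import time

def invoke_with_backoff(fn, payload, retries=5, base_delay=1.0):
    """Call fn(payload), retrying failures with exponential backoff plus jitter."""
    for attempt in range(retries):
        try:
            return fn(payload)
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the last error
            # Wait base_delay * 2^attempt seconds, plus proportional random jitter.
            time.sleep(base_delay * (2 ** attempt + random.random()))

# Usage with the chain from Step 3 (chain.invoke takes a single dict argument):
# result = invoke_with_backoff(chain.invoke, {"input": "Hello, LangChain!"})
```

In production you would typically narrow the `except` clause to the provider's rate-limit exception rather than retrying every failure.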

Examples

Simple Chain (Python)

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | llm | StrOutputParser()

result = chain.invoke({"topic": "programming"})
print(result)

With Memory (Python)

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{input}")
])

chain = prompt | llm

history = []
response = chain.invoke({"input": "Hi!", "history": history})
print(response.content)

# Record the turn so the next invocation sees it as conversation context.
history.append(HumanMessage(content="Hi!"))
history.append(AIMessage(content=response.content))

response = chain.invoke({"input": "What did I just say?", "history": history})
print(response.content)

TypeScript Example

import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const llm = new ChatOpenAI({ model: "gpt-4o-mini" });
const prompt = ChatPromptTemplate.fromTemplate("Tell me about {topic}");
const chain = prompt.pipe(llm).pipe(new StringOutputParser());

const result = await chain.invoke({ topic: "LangChain" });
console.log(result);

Resources

Next Steps

Proceed to langchain-local-dev-loop for development workflow setup.
