langchain-ci-integration

@jeremylongshore/langchain-ci-integration by jeremylongshore
Updated 1/18/2026 · View on GitHub

Configure LangChain CI/CD integration with GitHub Actions and testing. Use when setting up automated testing, configuring CI pipelines, or integrating LangChain tests into your build process. Trigger with phrases like "langchain CI", "langchain GitHub Actions", "langchain automated tests", "CI langchain", "langchain pipeline".

Installation

$ skills install @jeremylongshore/langchain-ci-integration

Compatible with: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: plugins/saas-packs/langchain-pack/skills/langchain-ci-integration/SKILL.md
Branch: main
Scoped Name: @jeremylongshore/langchain-ci-integration

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

skills list

Skill Instructions


---
name: langchain-ci-integration
description: |
  Configure LangChain CI/CD integration with GitHub Actions and testing. Use when
  setting up automated testing, configuring CI pipelines, or integrating LangChain
  tests into your build process. Trigger with phrases like "langchain CI",
  "langchain GitHub Actions", "langchain automated tests", "CI langchain",
  "langchain pipeline".
allowed-tools: Read, Write, Edit, Bash(gh:*)
version: 1.0.0
license: MIT
author: Jeremy Longshore <jeremy@intentsolutions.io>
---

LangChain CI Integration

Overview

Configure comprehensive CI/CD pipelines for LangChain applications with testing, linting, and deployment automation.

Prerequisites

  • GitHub repository with Actions enabled
  • LangChain application with test suite
  • API keys for testing (stored as GitHub Secrets)

Instructions

Step 1: Create GitHub Actions Workflow

# .github/workflows/langchain-ci.yml
name: LangChain CI

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

env:
  PYTHON_VERSION: "3.11"

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          pip install ruff mypy

      - name: Lint with Ruff
        run: ruff check .

      - name: Type check with mypy
        run: mypy src/

  test-unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          pip install -e ".[dev]"

      - name: Run unit tests
        run: |
          pytest tests/unit -v --cov=src --cov-report=xml

      - name: Upload coverage
        uses: codecov/codecov-action@v4
        with:
          files: coverage.xml

  test-integration:
    runs-on: ubuntu-latest
    needs: [lint, test-unit]
    # Only run on main branch or manual trigger
    if: github.ref == 'refs/heads/main' || github.event_name == 'workflow_dispatch'
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ env.PYTHON_VERSION }}

      - name: Install dependencies
        run: |
          pip install -e ".[dev]"

      - name: Run integration tests
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          pytest tests/integration -v -m integration

Step 2: Configure Test Markers

# pyproject.toml
[tool.pytest.ini_options]
markers = [
    "unit: Unit tests (no external API calls)",
    "integration: Integration tests (requires API keys)",
    "slow: Slow tests (skip in fast mode)",
]
asyncio_mode = "auto"
testpaths = ["tests"]
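With these markers registered, a test module can tag its tests accordingly. The sketch below is illustrative (file and function names are not part of this skill):

```python
# tests/unit/test_markers_example.py -- hypothetical module showing marker usage
import pytest

@pytest.mark.unit
def test_prompt_formatting():
    """Pure-Python check: runs on every CI pass, no API keys needed."""
    template = "Summarize: {text}"
    assert template.format(text="hello") == "Summarize: hello"

@pytest.mark.integration
@pytest.mark.slow
def test_end_to_end_chain():
    """Only collected with `-m integration`; excluded by `-m "not slow"`."""
    pytest.skip("placeholder: replace with a real chain invocation")
```

Select subsets with `pytest -m unit` for fast feedback or `pytest -m "not slow"` to trim long runs.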

Step 3: Create Mock Fixtures

# tests/conftest.py
import pytest
from unittest.mock import MagicMock, AsyncMock
from langchain_core.messages import AIMessage

@pytest.fixture
def mock_llm():
    """Mock LLM for unit tests."""
    mock = MagicMock()
    mock.invoke.return_value = AIMessage(content="Mock response")
    mock.ainvoke = AsyncMock(return_value=AIMessage(content="Mock response"))
    # When piped in an LCEL chain, a bare callable is coerced to a
    # RunnableLambda and called directly (not .invoke()'d), so cover
    # that path too:
    mock.return_value = AIMessage(content="Mock response")
    return mock

@pytest.fixture
def mock_chain(mock_llm):
    """Mock chain for testing."""
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser

    prompt = ChatPromptTemplate.from_template("{input}")
    return prompt | mock_llm | StrOutputParser()

Step 4: Add Pre-commit Hooks

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.1.6
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.7.1
    hooks:
      - id: mypy
        additional_dependencies:
          - langchain-core
          - pydantic

Step 5: Add Deployment Stage

# Add to .github/workflows/langchain-ci.yml
  deploy:
    runs-on: ubuntu-latest
    needs: [test-integration]
    if: github.ref == 'refs/heads/main'
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Deploy to Cloud Run
        uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: langchain-api
          source: .
          env_vars: |
            LANGCHAIN_PROJECT=production

Output

  • GitHub Actions workflow with lint, test, deploy stages
  • pytest configuration with markers
  • Mock fixtures for unit testing
  • Pre-commit hooks for code quality

Examples

Running Tests Locally

# Run unit tests only (fast)
pytest tests/unit -v

# Run with coverage
pytest tests/unit --cov=src --cov-report=html

# Run integration tests (requires API key)
OPENAI_API_KEY=sk-... pytest tests/integration -v -m integration

# Skip slow tests
pytest tests/ -v -m "not slow"

Integration Test Example

# tests/integration/test_chain.py
import pytest
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

@pytest.mark.integration
def test_real_chain_invocation():
    """Test with real LLM (requires API key)."""
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    prompt = ChatPromptTemplate.from_template("Say exactly: {word}")
    chain = prompt | llm

    result = chain.invoke({"word": "hello"})
    assert "hello" in result.content.lower()
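Integration tests can also guard themselves when no key is configured, so local runs without credentials skip instead of erroring. A common pattern, sketched here (`requires_openai` is an illustrative name):

```python
# Hypothetical guard: skip live-LLM tests when no API key is present
import os

import pytest

requires_openai = pytest.mark.skipif(
    not os.environ.get("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY not set; skipping live-LLM test",
)

@pytest.mark.integration
@requires_openai
def test_real_llm_roundtrip():
    # Imported lazily so collection still works without the package installed.
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
    result = llm.invoke("Say exactly: hello")
    assert "hello" in result.content.lower()
```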

Error Handling

| Error            | Cause                    | Solution                                          |
|------------------|--------------------------|---------------------------------------------------|
| Secret Not Found | Missing GitHub secret    | Add OPENAI_API_KEY to repository secrets          |
| Rate Limit in CI | Too many API calls       | Use mocks for unit tests, limit integration tests |
| Timeout          | Slow tests               | Add timeout markers, parallelize tests            |
| Import Error     | Missing dev dependencies | Ensure the ".[dev]" extras are installed          |
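For the rate-limit row in particular, a small exponential-backoff wrapper can make integration tests more tolerant of transient 429s. This is a stdlib-only sketch (`with_retries` is an illustrative helper, not part of this skill):

```python
# Minimal retry helper for rate-limited CI calls (stdlib only)
import time

def with_retries(fn, *, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Usage in a test body:
#   result = with_retries(lambda: chain.invoke({"word": "hello"}))
```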

Next Steps

Proceed to langchain-deploy-integration for deployment configuration.
