Use when setting up log aggregation solutions using ELK, Loki, or Splunk. Trigger with phrases like "setup log aggregation", "deploy ELK stack", "configure Loki", or "install Splunk". Generates production-ready configurations for data ingestion, processing, storage, and visualization with proper security and scalability.
Installation
Usage
After installing, this skill will be available to your AI coding assistant.
Verify installation:
skills list
Skill Instructions
name: setting-up-log-aggregation
description: |
  Use when setting up log aggregation solutions using ELK, Loki, or Splunk. Trigger with phrases like "setup log aggregation", "deploy ELK stack", "configure Loki", or "install Splunk". Generates production-ready configurations for data ingestion, processing, storage, and visualization with proper security and scalability.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash(docker:), Bash(kubectl:)
version: 1.0.0
author: Jeremy Longshore <jeremy@intentsolutions.io>
license: MIT
Log Aggregation Setup
This skill provides automated assistance for log aggregation setup tasks.
Overview
Sets up centralized log aggregation (ELK/Loki/Splunk) including ingestion pipelines, parsing, retention policies, dashboards, and security controls.
Prerequisites
Before using this skill, ensure:
- Target infrastructure is identified (Kubernetes, Docker, VMs)
- Storage requirements are calculated based on log volume
- Network connectivity between log sources and aggregation platform
- Authentication mechanism is defined (LDAP, OAuth, basic auth)
- Resource allocation planned (CPU, memory, disk)
Instructions
- Select Platform: Choose ELK, Grafana Loki, or Splunk
- Configure Ingestion: Set up log shippers (Filebeat, Promtail, Fluentd); a minimal Filebeat sketch follows this list
- Define Storage: Configure retention policies and index lifecycle
- Set Up Processing: Create parsing rules and field extractions
- Deploy Visualization: Configure Kibana/Grafana dashboards
- Implement Security: Enable authentication, encryption, and RBAC
- Test Pipeline: Verify logs flow from sources to visualization
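For the Configure Ingestion step, a minimal Filebeat shipper configuration might look like the sketch below. The log path, the input id, and the logstash:5044 endpoint are assumptions (they match the Docker Compose example in the Output section) and should be adapted to your environment.

# {baseDir}/filebeat/filebeat.yml (illustrative path)
filebeat.inputs:
  - type: filestream
    id: app-logs                    # arbitrary id for this input
    paths:
      - /var/log/app/*.log          # assumed application log location

output.logstash:
  hosts: ["logstash:5044"]          # assumes the Logstash service from the ELK compose file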
Output
ELK Stack (Docker Compose):
# {baseDir}/elk/docker-compose.yml
version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}   # required for the elastic user when security is enabled
    volumes:
      - es-data:/usr/share/elasticsearch/data
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:8.11.0
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:8.11.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch

volumes:
  es-data:
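The compose file above mounts ./logstash.conf but does not show it. A minimal pipeline, assuming a Beats input on port 5044 and nginx/Apache-style access logs, could look like this; the grok pattern, index name, and credentials are assumptions to adjust for your log format.

# {baseDir}/elk/logstash.conf (illustrative)
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }   # adjust to your log format
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    user => "elastic"
    password => "${ELASTIC_PASSWORD}"
    index => "logs-%{+YYYY.MM.dd}"
  }
}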
Loki Configuration:
# {baseDir}/loki/loki-config.yaml
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
  chunk_idle_period: 5m
  chunk_retain_period: 30s

schema_config:
  configs:
    - from: 2024-01-01
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
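Loki only stores what a shipper sends it, so a matching Promtail configuration is usually deployed alongside it. The sketch below assumes local file tailing and the Loki endpoint from the config above; the loki:3100 hostname, positions path, and labels are assumptions.

# {baseDir}/loki/promtail-config.yaml (illustrative)
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml     # where Promtail records how far it has read

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log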
Error Handling
Out of Memory
- Error: "Elasticsearch heap space exhausted"
- Solution: Increase the JVM heap size (via jvm.options or the ES_JAVA_OPTS environment variable) or add more nodes
Connection Refused
- Error: "Cannot connect to Elasticsearch"
- Solution: Verify network connectivity and firewall rules
Index Creation Failed
- Error: "Failed to create index"
- Solution: Check disk space and index template configuration
Log Parsing Errors
- Error: "Failed to parse log line"
- Solution: Review grok patterns or JSON parsing configuration
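To confirm the pipeline end to end (the Test Pipeline step), a quick smoke test might look like the commands below. The localhost ports and the elastic credentials are assumptions based on the example configurations above.

# Elasticsearch reachable and healthy
curl -u elastic:$ELASTIC_PASSWORD http://localhost:9200/_cluster/health?pretty

# Loki ready and already receiving streams
curl http://localhost:3100/ready
curl -G http://localhost:3100/loki/api/v1/labels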
Examples
- "Deploy Loki + Promtail on Kubernetes with 14-day retention and basic auth."
- "Set up an ELK stack for app + nginx logs and create a dashboard for 5xx errors."
Resources
- ELK Stack guide: https://www.elastic.co/guide/
- Loki documentation: https://grafana.com/docs/loki/
- Example configurations in {baseDir}/log-aggregation-examples/
