Agent Skills

security-patterns

@anton-abyzov/security-patterns
anton-abyzov
117
13 forks
Updated 4/6/2026

Security Pattern Detector Skill: Real-time security pattern detector based on Anthropic's official security-guidance plugin. Use proactively when writing code to detect command injection, XSS, unsafe deserialization, and dynamic code execution risks. Identifies dangerous patterns BEFORE they're committed.

Installation

$ npx agent-skills-cli install @anton-abyzov/security-patterns
Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: plugins/specweave/skills/security-patterns/SKILL.md
Branch: develop
Scoped Name: @anton-abyzov/security-patterns

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


---
description: Real-time security pattern detector based on Anthropic's official security-guidance plugin. Use proactively when writing code to detect command injection, XSS, unsafe deserialization, and dynamic code execution risks. Identifies dangerous patterns BEFORE they're committed.
allowed-tools: Read, Grep, Glob
user-invocable: false
---

Security Pattern Detector Skill

Project Overrides

!s="security-patterns"; for d in .specweave/skill-memories .claude/skill-memories "$HOME/.claude/skill-memories"; do p="$d/$s.md"; [ -f "$p" ] && awk '/^## Learnings$/{ok=1;next}/^## /{ok=0}ok' "$p" && break; done 2>/dev/null; true

Overview

This skill provides real-time security pattern detection based on Anthropic's official security-guidance plugin. It identifies potentially dangerous coding patterns BEFORE they're committed.

Scope Boundaries

This skill is a REAL-TIME DETECTOR. It activates proactively while code is being written and detects command injection, XSS, unsafe deserialization, and dynamic code execution.

  • For comprehensive security audits → use /sw:security

Detection Categories

1. Command Injection Risks

GitHub Actions Workflow Injection

# DANGEROUS - User input directly in run command
run: echo "${{ github.event.issue.title }}"

# SAFE - Use environment variable
env:
  TITLE: ${{ github.event.issue.title }}
run: echo "$TITLE"

Node.js Child Process Execution

// DANGEROUS - Shell command with user input
exec(`ls ${userInput}`);
spawn('sh', ['-c', userInput]);

// SAFE - Array arguments, no shell
execFile('ls', [sanitizedPath]);
spawn('ls', [sanitizedPath], { shell: false });

Python OS Commands

# DANGEROUS
os.system(f"grep {user_input} file.txt")
subprocess.call(user_input, shell=True)

# SAFE
subprocess.run(['grep', sanitized_input, 'file.txt'], shell=False)

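The list-argument rule above can be checked directly. A minimal runnable sketch (the filename payload is hypothetical) showing that `shell=False` with list arguments neutralizes an injected command suffix:

```python
import subprocess

# Hypothetical attacker-controlled value with an injected command suffix
user_input = "notes.txt; echo INJECTED"

# With shell=False and list arguments, the whole string is passed to `cat`
# as a single literal filename, so `; echo INJECTED` is never interpreted.
result = subprocess.run(
    ["cat", user_input],
    capture_output=True,
    text=True,
    shell=False,
)

# `cat` simply fails to find a file named "notes.txt; echo INJECTED";
# the injected command does not run.
print(result.returncode != 0)        # True: cat reported an error
print("INJECTED" in result.stdout)   # False: no injection occurred
```

Had the same string been passed through `shell=True`, the shell would have split on `;` and executed the second command.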
2. Dynamic Code Execution

JavaScript eval-like Patterns

// DANGEROUS - All of these execute arbitrary code
eval(userInput);
new Function(userInput)();
setTimeout(userInput, 1000);  // When string passed
setInterval(userInput, 1000); // When string passed

// SAFE - Use parsed data, not code
const config = JSON.parse(configString);

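Python has the same pitfall and the same style of fix: `ast.literal_eval` parses literals without executing anything. A small illustrative sketch:

```python
import ast

# Parsing a literal: safe, returns plain data
data = ast.literal_eval("[1, 2, 3]")
print(data)  # [1, 2, 3]

# A smuggled function call is rejected instead of executed
try:
    ast.literal_eval("__import__('os').system('echo pwned')")
    rejected = False
except ValueError:
    rejected = True
print(rejected)  # True
```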
3. DOM-based XSS Risks

React dangerouslySetInnerHTML

// DANGEROUS - Renders arbitrary HTML
<div dangerouslySetInnerHTML={{ __html: userContent }} />

// SAFE - Use proper sanitization
import DOMPurify from 'dompurify';
<div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(userContent) }} />

Direct DOM Manipulation

// DANGEROUS
element.innerHTML = userInput;
document.write(userInput);

// SAFE
element.textContent = userInput;
element.innerText = userInput;

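The same escaping principle applies server-side. As an analogue (not part of the skill's DOM rules), Python's `html.escape` turns untrusted text into inert markup, much like `textContent` does in the browser:

```python
import html

# Hypothetical attacker-controlled value
user_input = '<img src=x onerror="alert(1)">'

# html.escape converts markup-significant characters to entities,
# so the browser renders text instead of executing the onerror handler
safe = html.escape(user_input)
print(safe)  # &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```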
4. Unsafe Deserialization

Python Pickle

# DANGEROUS - Pickle can execute arbitrary code
import pickle
data = pickle.loads(user_provided_bytes)

# SAFE - Use JSON for untrusted data
import json
data = json.loads(user_provided_string)

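The asymmetry is easy to demonstrate: JSON can only ever yield data, while pickle invokes `__reduce__` during loading. A harmless sketch of the mechanism (the `Sneaky` class is illustrative, not a real exploit):

```python
import json
import pickle

# JSON deserialization can only produce data: dicts, lists, strings, numbers
data = json.loads('{"role": "user", "admin": false}')
print(data["admin"])  # False

# Pickle calls __reduce__ while loading, so a crafted payload can invoke
# arbitrary callables. This benign class shows the mechanism: loading it
# calls print(), proving code runs during deserialization.
class Sneaky:
    def __reduce__(self):
        return (print, ("side effect ran during unpickling",))

pickle.loads(pickle.dumps(Sneaky()))  # prints during load - code ran!
```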
JavaScript unsafe deserialization

// DANGEROUS with untrusted input
const obj = eval('(' + jsonString + ')');

// SAFE
const obj = JSON.parse(jsonString);

5. SQL Injection

String Interpolation in Queries

// DANGEROUS
const query = `SELECT * FROM users WHERE id = ${userId}`;
db.query(`SELECT * FROM users WHERE name = '${userName}'`);

// SAFE - Parameterized queries
const query = 'SELECT * FROM users WHERE id = $1';
db.query(query, [userId]);

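The parameterized-query fix can be demonstrated end to end. A runnable sketch using Python's built-in `sqlite3`, which uses `?` placeholders rather than the `$1` style shown above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# Classic injection payload: an always-true predicate if interpolated
user_name = "' OR '1'='1"

# SAFE - parameterized: the payload is bound as a literal string value
safe_rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_name,)
).fetchall()
print(safe_rows)  # [] - no user is literally named "' OR '1'='1"

# DANGEROUS - interpolated: the payload rewrites the WHERE clause
unsafe_rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_name}'"
).fetchall()
print(unsafe_rows)  # [(1, 'alice')] - every row leaks
```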
6. Path Traversal

Unsanitized File Paths

// DANGEROUS
const filePath = `./uploads/${userFilename}`;
fs.readFile(filePath); // User could pass "../../../etc/passwd"

// SAFE
const safePath = path.join('./uploads', path.basename(userFilename));
if (!safePath.startsWith('./uploads/')) throw new Error('Invalid path');

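The same containment check can be written in Python. A sketch using `pathlib` (`is_relative_to` requires Python 3.9+; the `uploads` directory name mirrors the example above):

```python
from pathlib import Path

UPLOADS = Path("uploads").resolve()

def safe_upload_path(user_filename: str) -> Path:
    """Resolve a user-supplied filename inside UPLOADS, rejecting traversal."""
    candidate = (UPLOADS / user_filename).resolve()
    # resolve() collapses any ../ segments; anything that escaped the
    # uploads directory fails the containment check below
    if not candidate.is_relative_to(UPLOADS):
        raise ValueError("path traversal attempt")
    return candidate

print(safe_upload_path("report.txt").name)  # report.txt

try:
    safe_upload_path("../../etc/passwd")
    blocked = False
except ValueError:
    blocked = True
print(blocked)  # True
```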
Pattern Detection Rules

| Pattern | Category | Severity | Action |
|---|---|---|---|
| `eval(` | Code Execution | CRITICAL | Block |
| `new Function(` | Code Execution | CRITICAL | Block |
| `dangerouslySetInnerHTML` | XSS | HIGH | Warn |
| `innerHTML =` | XSS | HIGH | Warn |
| `document.write(` | XSS | HIGH | Warn |
| `exec(` + string concat | Command Injection | CRITICAL | Block |
| `spawn(` + `shell: true` | Command Injection | HIGH | Warn |
| `pickle.loads(` | Deserialization | CRITICAL | Warn |
| `${{ github.event` | GH Actions Injection | CRITICAL | Warn |
| Template literal in SQL | SQL Injection | CRITICAL | Block |

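As an illustration of how rules like these can be applied, here is a minimal sketch of a line-oriented scanner. The regexes are simplified stand-ins, not the skill's actual rules:

```python
import re

# Sketch of the detection table: pattern -> (category, severity, action)
RULES = [
    (re.compile(r"\beval\s*\("), ("Code Execution", "CRITICAL", "Block")),
    (re.compile(r"new\s+Function\s*\("), ("Code Execution", "CRITICAL", "Block")),
    (re.compile(r"dangerouslySetInnerHTML"), ("XSS", "HIGH", "Warn")),
    (re.compile(r"\.innerHTML\s*="), ("XSS", "HIGH", "Warn")),
    (re.compile(r"pickle\.loads\s*\("), ("Deserialization", "CRITICAL", "Warn")),
]

def scan(source: str):
    """Return (line_number, category, severity, action) for each match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, meta in RULES:
            if pattern.search(line):
                findings.append((lineno, *meta))
    return findings

sample = "const x = eval(userInput);\nel.innerHTML = data;\n"
print(scan(sample))
# [(1, 'Code Execution', 'CRITICAL', 'Block'), (2, 'XSS', 'HIGH', 'Warn')]
```

A real detector would also need to suppress matches inside comments and string literals; see the False Positive Handling notes below.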
Response Format

When detecting a pattern:

⚠️ **Security Warning**: [Pattern Category]

**File**: `path/to/file.ts:123`
**Pattern Detected**: `eval(userInput)`
**Risk**: Remote Code Execution - Attacker-controlled input can execute arbitrary JavaScript

**Recommendation**:
1. Never use eval() with user input
2. Use JSON.parse() for data parsing
3. Use safe alternatives for dynamic behavior

**Safe Alternative**:
```typescript
// Instead of eval(userInput), use:
const data = JSON.parse(userInput);
```

Integration with Code Review

This skill should be invoked:
1. During PR reviews when new code is written
2. As part of security audits
3. When flagged by the code-reviewer skill

False Positive Handling

Some patterns may be false positives:
- `dangerouslySetInnerHTML` with DOMPurify is safe
- `eval` in build tools (not user input) may be acceptable
- `exec` with hardcoded commands is lower risk

Always check the context before blocking.