performance-testing

@DaveSkender/performance-testing
DaveSkender · 1,166 stars · 265 forks · Updated 1/18/2026

Benchmark indicator performance with BenchmarkDotNet. Use for Series/Buffer/Stream benchmarks, regression detection, and optimization patterns. Target 1.5x Series for StreamHub, 1.2x for BufferList.

Installation

skills install @DaveSkender/performance-testing

Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: .github/skills/performance-testing/SKILL.md
Branch: v3
Scoped Name: @DaveSkender/performance-testing

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

skills list

Skill Instructions


---
name: performance-testing
description: Benchmark indicator performance with BenchmarkDotNet. Use for Series/Buffer/Stream benchmarks, regression detection, and optimization patterns. Target 1.5x Series for StreamHub, 1.2x for BufferList.
---

Performance testing

Running benchmarks

cd tools/performance

# Run all benchmarks (~15-20 minutes)
dotnet run -c Release

# Run specific category
dotnet run -c Release --filter *StreamIndicators*
dotnet run -c Release --filter *BufferIndicators*
dotnet run -c Release --filter *SeriesIndicators*

# Run specific indicator
dotnet run -c Release --filter *.EmaHub

Adding benchmarks

Series pattern

[Benchmark]
public void ToMyIndicator() => quotes.ToMyIndicator(14);

Stream pattern

[Benchmark]
public object MyIndicatorHub() => quoteHub.ToMyIndicatorHub(14).Results;

Buffer pattern

[Benchmark]
public MyIndicatorList MyIndicatorList() => new(14) { quotes };

Style comparison

[Benchmark]
public IReadOnlyList<MyResult> MyIndicatorSeries() => quotes.ToMyIndicator(14);

[Benchmark]
public IReadOnlyList<MyResult> MyIndicatorBuffer() => quotes.ToMyIndicatorList(14);

[Benchmark]
public IReadOnlyList<MyResult> MyIndicatorStream() => quoteHub.ToMyIndicatorHub(14).Results;
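
The one-line `[Benchmark]` methods above assume shared fixtures (`quotes`, `quoteHub`) that are initialized once per run, not per iteration. A minimal sketch of that scaffolding, assuming the v3 `QuoteHub` streaming API and a hypothetical `TestData.GetDefault()` helper that loads the standard 502-period quote set:

```csharp
using System.Collections.Generic;
using BenchmarkDotNet.Attributes;
using Skender.Stock.Indicators;

public class MyIndicatorBenchmarks
{
    // Shared fixtures: built once in GlobalSetup, reused by every benchmark
    private IReadOnlyList<Quote> quotes;
    private QuoteHub<Quote> quoteHub;

    [GlobalSetup]
    public void Setup()
    {
        quotes = TestData.GetDefault();   // hypothetical 502-period loader
        quoteHub = new QuoteHub<Quote>();
        foreach (Quote q in quotes)
        {
            quoteHub.Add(q);
        }
    }

    [Benchmark(Baseline = true)]
    public IReadOnlyList<MyResult> Series() => quotes.ToMyIndicator(14);
}
```

Marking the Series method with `Baseline = true` makes BenchmarkDotNet report the Buffer and Stream variants as ratios, which maps directly onto the ≤1.2x / ≤1.5x targets below.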

Performance targets

Note: These are optimization goals for a future v3.1+ effort. Current implementations vary; see PERFORMANCE_ANALYSIS.md for actual measured performance. Some indicator families (e.g., EMA) carry inherent framework overhead because the underlying calculation is so cheap that per-update plumbing dominates.

Style        Target vs Series   Use case
Series       Baseline           Batch processing
BufferList   ≤ 1.2x             Incremental data
StreamHub    ≤ 1.5x             Real-time feeds

Expected execution times (502 periods)

Note: These are optimization targets. Actual execution times vary by indicator complexity and current implementation.

Complexity   Time          Examples
Fast         < 30 μs       SMA, EMA, WMA, RSI
Medium       30-60 μs      MACD, Bollinger Bands, ATR
Complex      60-100 μs     HMA, ADX, Stochastic
Advanced     100-200+ μs   Ichimoku, Hurst

Regression detection

# Auto-detect baseline and results
pwsh detect-regressions.ps1

# Custom threshold (default 10%)
pwsh detect-regressions.ps1 -ThresholdPercent 15

Exit codes:

  • 0 - No regressions
  • 1 - Regressions found

Creating baselines

cp BenchmarkDotNet.Artifacts/results/Performance.*-report-full.json \
   baselines/baseline-v3.0.0.json

Required optimization patterns

  • Minimize allocations in hot paths
  • Avoid LINQ in performance-critical loops
  • Use Span<T> for zero-copy operations
  • Cache calculations when possible
  • Test with realistic data sizes (502 periods)

Prohibited patterns

  • Excessive LINQ in hot paths
  • Boxing/unboxing of value types
  • Unnecessary string allocations
  • Redundant calculations in loops
  • Poor cache locality
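
To illustrate the LINQ, allocation, and `Span<T>` guidance above, a hedged before/after sketch (the names here are illustrative, not library members):

```csharp
// Illustrative only: summing a lookback window in a hot loop.
// A LINQ version allocates an enumerator and a delegate on every call:
//   double sum = prices.Skip(end - period + 1).Take(period).Sum();

// A plain loop performs the same work with zero allocations:
static double WindowSum(double[] prices, int end, int period)
{
    double sum = 0;
    for (int i = end - period + 1; i <= end; i++)
    {
        sum += prices[i];
    }
    return sum;
}

// A Span<T> slice gives the same zero-copy view over a sub-range:
static double WindowSum(ReadOnlySpan<double> window)
{
    double sum = 0;
    foreach (double p in window)
    {
        sum += p;
    }
    return sum;
}
// usage: WindowSum(prices.AsSpan(end - period + 1, period))
```

The span overload avoids copying the window while keeping bounds checking, which addresses both the allocation and cache-locality points at once.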

See references/benchmark-patterns.md for detailed patterns.


Last updated: December 31, 2025