Chaos Engineering IoT
Installation
After installing, this skill will be available to your AI coding assistant.
Verify installation:
npx agent-skills-cli list
Skill Instructions
id: SKL-chaos-CHAOSENGINEERINGIOT
name: Chaos Engineering IoT
description: >-
  Chaos Engineering for IoT enables systematic testing of IoT system
  resilience by introducing controlled failures to identify weaknesses
  before they impact production.
version: 1.0.0
status: active
owner: '@cerebra-team'
last_updated: '2026-02-22'
category: Backend
tags:
  - api
  - backend
  - server
  - database
stack:
  - Python
  - Node.js
  - REST API
  - GraphQL
difficulty: Intermediate
Chaos Engineering IoT
Skill Profile
(Select at least one profile to enable specific modules)
- DevOps
- Backend
- Frontend
- AI-RAG
- Security Critical
Overview
Chaos Engineering for IoT enables systematic testing of IoT system resilience by introducing controlled failures to identify weaknesses before they impact production. This practice is essential for ensuring reliability of distributed IoT systems that operate across heterogeneous environments with varying network conditions, device capabilities, and failure modes.
Why This Matters
- Resilience: Identify and fix failure points proactively
- Reliability: Ensure systems recover gracefully from failures
- Confidence: Build confidence in system behavior under stress
- Cost Reduction: Prevent costly outages through proactive testing
- Customer Trust: Maintain service availability and performance
Core Concepts & Rules
1. Core Principles
- Follow established patterns and conventions
- Maintain consistency across codebase
- Document decisions and trade-offs
2. Implementation Guidelines
- Start with the simplest viable solution
- Iterate based on feedback and requirements
- Test thoroughly before deployment
Inputs / Outputs / Contracts
- Inputs:
- Fault injection configuration (type, severity, duration, targets)
- Monitoring and alerting setup
- Rollback procedures
- Entry Conditions:
- IoT infrastructure deployed and operational
- Monitoring system in place
- Rollback procedures documented
- Outputs:
- Fault injection results
- System resilience metrics
- Failure analysis reports
- Remediation recommendations
- Artifacts Required (Deliverables):
- Chaos experiment manifests
- Fault injection scripts
- Monitoring dashboards
- Recovery procedures
- Acceptance Evidence:
- Faults successfully injected and removed
- System recovers gracefully
- No data loss during experiments
- Metrics collected and analyzed
- Success Criteria:
- Fault injection success rate > 95%
- System recovery time < 5 minutes
- No data loss during experiments
- All critical failure scenarios tested
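The input contract above can be sketched as a typed configuration object. The field names mirror the bullets (type, severity, duration, targets); everything else is an illustrative assumption, not a fixed schema:

```python
from dataclasses import dataclass, field

@dataclass
class FaultInjectionConfig:
    """Illustrative fault-injection input contract (names are assumptions)."""
    fault_type: str   # e.g. "network-latency", "device-offline"
    severity: str     # e.g. "low", "medium", "high"
    duration_s: int   # how long the fault stays active
    targets: list = field(default_factory=list)  # device/service identifiers

    def validate(self) -> None:
        # Entry conditions: a chaos experiment needs a bounded duration
        # and at least one explicit target.
        if self.duration_s <= 0:
            raise ValueError("duration_s must be positive")
        if not self.targets:
            raise ValueError("at least one target is required")

cfg = FaultInjectionConfig("network-latency", "medium", 300, ["gateway-1"])
cfg.validate()  # raises on an invalid experiment definition
```

Validating the configuration up front keeps a malformed experiment from ever reaching production devices.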
Skill Composition
- Depends on: Advanced IaC for IoT
- Compatible with: Disaster Recovery for IoT, GitOps for IoT Infrastructure
- Conflicts with: None
- Related Skills: Disaster Recovery for IoT, Multi-Cloud IoT Strategy
Quick Start / Implementation Example
- Review requirements and constraints
- Set up development environment
- Implement core functionality following patterns
- Write tests for critical paths
- Run tests and fix issues
- Document any deviations or decisions
```python
# Example implementation following best practices: describe a single
# controlled fault with an explicit, reversible definition
# (function and field names are illustrative)
def make_fault(fault_type: str, target: str, duration_s: int) -> dict:
    """Return a descriptor for one fault to inject and later roll back."""
    return {"type": fault_type, "target": target, "duration_s": duration_s}
```
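The quick-start steps above can be sketched as one inject/observe/rollback loop. The callables `inject`, `rollback`, and `health_check` and the return shape are assumptions for illustration; the key point is that rollback runs in `finally`, so the fault is always removed:

```python
import time

def run_chaos_experiment(inject, rollback, health_check,
                         duration_s: float = 5.0, poll_s: float = 1.0) -> dict:
    """Run one controlled-failure experiment: inject, observe, always roll back.

    `inject`, `rollback`, and `health_check` are caller-supplied callables;
    their names and the return shape here are illustrative, not a fixed API.
    """
    observations = []
    inject()
    try:
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            observations.append(health_check())  # True when system is healthy
            time.sleep(poll_s)
    finally:
        rollback()  # guarantee the fault is removed even if checks fail
    return {
        "samples": len(observations),
        "healthy_ratio": sum(observations) / max(len(observations), 1),
    }
```

The `healthy_ratio` output feeds directly into the resilience metrics and failure analysis reports listed under Outputs.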
Assumptions / Constraints / Non-goals
- Assumptions:
- Development environment is properly configured
- Required dependencies are available
- Team has basic understanding of domain
- Constraints:
- Must follow existing codebase conventions
- Time and resource limitations
- Compatibility requirements
- Non-goals:
- This skill does not cover edge cases outside scope
- Not a replacement for formal training
Compatibility & Prerequisites
- Supported Versions:
- Python 3.8+
- Node.js 16+
- Modern browsers (Chrome, Firefox, Safari, Edge)
- Required AI Tools:
- Code editor (VS Code recommended)
- Testing framework appropriate for language
- Version control (Git)
- Dependencies:
- Language-specific package manager
- Build tools
- Testing libraries
- Environment Setup:
.env.example keys: API_KEY, DATABASE_URL (no values)
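The two keys listed for .env.example might be read like this, failing fast when one is missing rather than hardcoding values (the helper name is an assumption):

```python
import os

REQUIRED_KEYS = ("API_KEY", "DATABASE_URL")  # keys listed in .env.example

def load_settings(env=os.environ) -> dict:
    """Read required settings from the environment, failing fast on gaps."""
    missing = [k for k in REQUIRED_KEYS if not env.get(k)]
    if missing:
        raise RuntimeError(f"missing environment variables: {missing}")
    return {k: env[k] for k in REQUIRED_KEYS}
```

Failing at startup surfaces a misconfigured environment immediately instead of mid-experiment.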
Test Scenario Matrix (QA Strategy)
| Type | Focus Area | Required Scenarios / Mocks |
|---|---|---|
| Unit | Core Logic | Must cover primary logic and at least 3 edge/error cases. Target minimum 80% coverage |
| Integration | DB / API | All external API calls or database connections must be mocked during unit tests |
| E2E | User Journey | Critical user flows to test |
| Performance | Latency / Load | Benchmark requirements |
| Security | Vuln / Auth | SAST/DAST or dependency audit |
| Frontend | UX / A11y | Accessibility checklist (WCAG), Performance Budget (Lighthouse score) |
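The integration row requires external calls to be mocked in unit tests. A minimal sketch with the standard library's unittest.mock (the wrapper function and endpoint path are illustrative):

```python
from unittest import mock

def fetch_device_status(client, device_id: str) -> str:
    """Thin wrapper around an external API client (illustrative)."""
    response = client.get(f"/devices/{device_id}/status")
    return response["state"]

def test_fetch_device_status_uses_mocked_client():
    # The real HTTP client is replaced by a Mock, so no network call occurs.
    client = mock.Mock()
    client.get.return_value = {"state": "online"}
    assert fetch_device_status(client, "sensor-7") == "online"
    client.get.assert_called_once_with("/devices/sensor-7/status")
```

Asserting on the call arguments, not just the return value, also covers the contract between the wrapper and the external API.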
Technical Guardrails & Security Threat Model
1. Security & Privacy (Threat Model)
- Top Threats: Injection attacks, authentication bypass, data exposure
- Data Handling: Sanitize all user inputs to prevent Injection attacks. Never log raw PII
- Secrets Management: No hardcoded API keys. Use Env Vars/Secrets Manager
- Authorization: Validate user permissions before state changes
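The injection guardrail above comes down to never building queries from user input via string formatting. A sketch with the standard library's sqlite3 (table and function names are assumptions):

```python
import sqlite3

def find_device(conn: sqlite3.Connection, name: str):
    """Look up a device using a parameterized query, never string formatting."""
    # The ? placeholder lets the driver escape `name`, preventing SQL injection.
    cur = conn.execute("SELECT id, name FROM devices WHERE name = ?", (name,))
    return cur.fetchone()
```

A classic payload like `gw' OR '1'='1` is treated as a literal device name and matches nothing.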
2. Performance & Resources
- Execution Efficiency: Consider time complexity for algorithms
- Memory Management: Use streams/pagination for large data
- Resource Cleanup: Close DB connections/file handlers in finally blocks
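The pagination guideline can be sketched as a generator that pulls one page at a time instead of materializing the full result set. `fetch_page(offset, limit)` is a caller-supplied callable (an assumption) that returns an empty list when exhausted:

```python
def iter_rows_paginated(fetch_page, page_size: int = 100):
    """Yield rows page by page instead of loading the full result set.

    `fetch_page(offset, limit)` is a caller-supplied callable (illustrative);
    it should return an empty list when the data is exhausted.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            return  # exhausted: nothing more to yield
        yield from page
        offset += page_size
```

Because it is a generator, peak memory is bounded by one page regardless of total result size.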
3. Architecture & Scalability
- Design Pattern: Follow SOLID principles, use Dependency Injection
- Modularity: Decouple logic from UI/Frameworks
4. Observability & Reliability
- Logging Standards: Structured JSON; include trace IDs (request_id)
- Metrics: Track error_rate, latency, queue_depth
- Error Handling: Standardized error codes; no bare except clauses
- Observability Artifacts:
- Log Fields: timestamp, level, message, request_id
- Metrics: request_count, error_count, response_time
- Dashboards/Alerts: High Error Rate > 5%
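The required log fields (timestamp, level, message, request_id) might be emitted as one JSON line per event; the helper name and extra keyword handling are assumptions:

```python
import json
import time

def log_json(level: str, message: str, request_id: str, **extra) -> str:
    """Emit one structured log line with the required observability fields."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "level": level,
        "message": message,
        "request_id": request_id,
        **extra,  # additional context, e.g. fault type or target
    }
    line = json.dumps(record)
    print(line)
    return line
```

One-line JSON records are trivially parseable by log pipelines, and carrying request_id on every line makes traces joinable across services.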
Agent Directives & Error Recovery
(Requirements for how the AI agent should reason and recover when errors occur)
- Thinking Process: Analyze root cause before fixing. Do not brute-force.
- Fallback Strategy: Stop after 3 failed test attempts. Output root cause and ask for human intervention/clarification.
- Self-Review: Check against Guardrails & Anti-patterns before finalizing.
- Output Constraints: Output ONLY the modified code block. Do not explain unless asked.
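The fallback strategy above (stop after 3 failed attempts, then escalate) can be sketched as a bounded retry loop; `attempt_fix` and the return shape are illustrative assumptions:

```python
def fix_with_fallback(attempt_fix, max_attempts: int = 3):
    """Try a fix up to `max_attempts` times, then escalate to a human.

    `attempt_fix()` returns True on success; the API is illustrative.
    """
    failures = []
    for attempt in range(1, max_attempts + 1):
        try:
            if attempt_fix():
                return {"status": "fixed", "attempts": attempt}
        except Exception as exc:  # record, don't hide, the root cause
            failures.append(str(exc))
    # Fallback: stop brute-forcing and surface what was tried.
    return {"status": "needs-human", "attempts": max_attempts,
            "failures": failures}
```

Returning the collected failure messages gives the human reviewer the root-cause evidence the directive asks for.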
Definition of Done (DoD) Checklist
- Tests passed + coverage met
- Lint/Typecheck passed
- Logging/Metrics/Trace implemented
- Security checks passed
- Documentation/Changelog updated
- Accessibility/Performance requirements met (if frontend)
Anti-patterns / Pitfalls
- ⛔ Don't: Log PII, catch-all exception, N+1 queries
- ⚠️ Watch out for: swallowed exceptions that hide root causes, query counts that grow with record counts, log lines that leak identifiers
- 💡 Instead: Use proper error handling, pagination, and logging
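The N+1 anti-pattern above is fixed by batching: fetch all related records in one call instead of one query per item. `query_owners(ids)` is a caller-supplied callable returning `{id: owner}`; the shape is an assumption for illustration:

```python
def fetch_owners_batched(devices, query_owners):
    """Attach owners with one batched lookup instead of one query per device.

    `query_owners(ids)` is a caller-supplied callable returning {id: owner};
    the shape is illustrative.
    """
    owner_ids = {d["owner_id"] for d in devices}     # dedupe the keys
    owners = query_owners(list(owner_ids))           # 1 query, not len(devices)
    return [{**d, "owner": owners[d["owner_id"]]} for d in devices]
```

The same shape applies to ORM eager loading or a SQL `WHERE id IN (...)` clause: the query count stays constant as the device list grows.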
Reference Links & Examples
- Internal documentation and examples
- Official documentation and best practices
- Community resources and discussions
Versioning & Changelog
- Version: 1.0.0
- Changelog:
- 2026-02-22: Initial version with complete template structure