Claude Code: Integration Stress Test command

Integration Stress Test

Build comprehensive integration and edge case tests that intentionally try to break the specified functionality.


FOR BUG FIXES: TEST-FIRST IS MANDATORY

STOP. Before touching ANY implementation code:

  1. Write a failing test that reproduces the bug
  2. Run the test and confirm it FAILS
  3. Only THEN fix the bug
  4. Run the test again to confirm it PASSES

This is non-negotiable. If you cannot reproduce the bug in a test, you don't understand it well enough to fix it. The test proves the bug exists and proves your fix works.
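As a minimal sketch of this workflow (the module, function, and bug below are hypothetical), the reproduction test is written first and must fail against the unfixed code:

from mymodule import normalize_scores  # hypothetical function, assumed to crash on an all-zero input


def test_normalize_scores_all_zeros_returns_zeros():
    """Bug repro: an all-zero input should return zeros, not raise ZeroDivisionError."""
    assert normalize_scores([0.0, 0.0, 0.0]) == [0.0, 0.0, 0.0]

Run this test and confirm it fails before editing the implementation; after the fix, the same test must pass unchanged.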


Usage

/integration-stress-test <feature or module to test>
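For example, to stress test a hypothetical metrics aggregation module:

/integration-stress-test src/metrics/aggregator.py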

Process

  1. Analyze the Feature

    • Read the implementation code
    • Identify inputs, outputs, and dependencies
    • Understand the expected behavior and invariants
  2. Identify Edge Cases

    • Numerical boundaries (0, negative, infinity, NaN, very large, very small)
    • Empty inputs (empty lists, None, missing keys)
    • Type mismatches and unexpected formats
    • Boundary conditions (exactly at limits, off-by-one)
    • Race conditions and ordering issues (if applicable)
    • Resource exhaustion (large inputs, memory pressure)
  3. Design Tests by Category

    • Numerical Edge Cases: NaN, Inf, overflow, underflow, division by zero
    • Empty/Missing Data: empty collections, None values, missing keys
    • Boundary Conditions: min/max values, exact thresholds, off-by-one
    • Type Mismatches: wrong types, mixed types, format variations
    • Scale/Performance: large inputs, many iterations, memory usage
    • Integration: component interactions, end-to-end flows
    • Error Handling: invalid inputs should fail gracefully
    • Concurrency: thread safety, race conditions (if applicable)
  4. Write Tests That Try to Break Things

    • Each test should have a clear intent (what it's trying to break)
    • Use descriptive test names that explain the edge case
    • Include comments explaining why this edge case matters
    • Assert specific behavior, not just "doesn't crash" (see the sketch after this list)
  5. Run and Iterate

    • Run the tests
    • If a test fails, determine whether it reveals a real bug or an incorrect test expectation
    • Fix bugs found, adjust tests if expectations were wrong
    • Document what was found
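As a minimal sketch of how steps 2-4 can look in practice (compute_mean and its error contract are hypothetical), each test targets one edge case and states in its name and docstring what it is trying to break:

import pytest

from mymodule import compute_mean  # hypothetical function under test


class TestComputeMeanEdgeCases:
    """Numerical, empty-input, and boundary cases for compute_mean."""

    @pytest.mark.parametrize(
        "values",
        [
            [float("nan")],        # NaN must not silently propagate into results
            [float("inf"), 1.0],   # infinity should be rejected, not averaged
        ],
    )
    def test_non_finite_inputs_raise(self, values):
        """Non-finite inputs should fail loudly with a specific error, not return garbage."""
        with pytest.raises(ValueError):
            compute_mean(values)

    def test_empty_input_raises(self):
        """An empty list has no mean; expect ValueError, not ZeroDivisionError."""
        with pytest.raises(ValueError):
            compute_mean([])

    def test_single_element_is_identity(self):
        """Boundary: a one-element list should return that element exactly."""
        assert compute_mean([2.5]) == 2.5

Whether non-finite inputs should raise or be filtered is a design decision for the feature under test; the point is that every test asserts one specific, documented behavior.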

Test File Naming

Create test files with the pattern:

test_<module>_edge_cases.py
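For example, edge case tests for a hypothetical scheduler module would live in test_scheduler_edge_cases.py.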

Example Test Structure

"""
Edge case and stress tests for <feature>.

These tests intentionally try to break the implementation
by exploring boundary conditions, numerical edge cases, and integration scenarios.
"""

class TestNumericalEdgeCases:
    """Tests for numerical stability and edge cases."""

    def test_nan_input_handled(self):
        """NaN values should not crash, should produce defined behavior."""
        ...

    def test_infinity_input_handled(self):
        """Infinity should produce finite or well-defined output."""
        ...

class TestEmptyInputs:
    """Tests for empty and missing data."""

    def test_empty_list_returns_empty(self):
        """Empty input should return empty output, not crash."""
        ...

class TestBoundaryConditions:
    """Tests for edge values and boundaries."""

    def test_exactly_at_threshold(self):
        """Values exactly at threshold should behave correctly."""
        ...

class TestIntegration:
    """Tests for component interactions."""

    def test_full_pipeline_with_edge_data(self):
        """Complete flow should handle edge cases end-to-end."""
        ...

class TestRealWorldScenarios:
    """Tests simulating realistic edge cases."""

    def test_converged_state(self):
        """System in converged state should still function."""
        ...
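For illustration, one of these skeleton tests filled in might look like the following (rolling_average and its NaN-skipping contract are hypothetical):

import math

from mymodule import rolling_average  # hypothetical function under test


class TestNumericalEdgeCases:
    """Tests for numerical stability and edge cases."""

    def test_nan_input_handled(self):
        """NaN values should not crash, should produce defined behavior."""
        result = rolling_average([1.0, float("nan"), 3.0], window=2)
        # Assumed contract: NaN entries are skipped, never propagated into the output.
        assert all(not math.isnan(x) for x in result)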

Output

After creating and running tests, provide:

  1. Summary of test categories and count
  2. Any bugs or issues found
  3. Fixes applied
  4. Final test pass/fail status