Category: DevOps

  • Use Git History to Understand Legacy Code Evolution

    When you inherit a codebase, understanding WHY code exists is often more valuable than WHAT it does. Git history is your time machine.

    The Basic Timeline

    # Get simple commit history for a file
    git log --follow --all --pretty=format:"%h|%ai|%an|%s" -- app/Services/ReportGenerator.php

    Output:

    f4bc663|2018-05-09 17:46:43 +0800|John Smith|TASK-707 Add report tracking
    201105b|2019-03-27 10:02:08 +0800|Jane Doe|Refactor - add dependency injection

    The Full Story (Patches + Stats)

    # See actual code changes with statistics
    git log --follow --all --stat --patch -- app/Services/ReportGenerator.php

    What You Learn

    • Original author: Who to ask if you have questions
    • Creation date: How old this code is (affects modernization priority)
    • Commit message: Why it was added (often references a ticket/task)
    • Evolution pattern: Was it written once and forgotten, or actively maintained?
    • Refactoring history: What patterns were replaced (helps avoid repeating mistakes)

    Reading the Story

    The example above tells us:

    • Created in May 2018 for admin dashboard tracking
    • Refactored in March 2019 to add dependency injection
    • Nearly a year between changes suggests low-priority maintenance code
    • Two different authors = knowledge might be spread across team

    Pro Tips

    • Use --follow: Tracks files even if renamed (critical!)
    • Filter by date: --since="2020-01-01" to see recent changes only
    • Check for deletions: If no commits since 2020, might be deprecated
    • Look for patterns: Frequent refactors = evolving requirements
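The --follow tip is worth proving to yourself once. A minimal sketch in a throwaway repo (file and commit names invented) showing how history silently disappears at a rename without it:

```shell
# Demo in a throwaway repo: --follow keeps history across renames
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo '<?php class ReportGenerator {}' > Report.php
git add Report.php
git commit -qm "Add report generator"

git mv Report.php ReportGenerator.php
git commit -qm "Rename to match class name"

# Without --follow, history starts at the rename commit
without=$(git log --oneline -- ReportGenerator.php | wc -l)
# With --follow, the pre-rename commit is visible too
with_follow=$(git log --follow --oneline -- ReportGenerator.php | wc -l)
echo "without=$without with=$with_follow"
```

On a file that has been renamed even once, the difference is the entire early history of the code.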

    When to Use This

    • Before refactoring legacy code (understand why it’s weird)
    • When debugging mysterious behavior (was it always like this?)
    • During code review (does this change make sense given the history?)
    • When deciding whether to delete code (has it been touched recently?)

    Next time you see weird legacy code, don’t guess — check the git history. It often explains everything.

  • Safely Update Environment Variables Across Multiple Production Servers

    The Check-Update-Verify Pattern

    When you need to update configuration values like API credentials across multiple production servers, a systematic approach prevents costly mistakes. Here’s a three-step pattern that gives you full visibility and confidence:

    Step 1: Check Current State

    Before making changes, audit what’s currently deployed across all servers. Use SSH with grep to read environment variables:

    ssh web01 "grep '^API_KEY=' /var/www/app/.env"
    ssh web02 "grep '^API_KEY=' /var/www/app/.env"
    ssh api01 "grep '^API_KEY=' /var/www/app/.env"
    

    This reveals discrepancies immediately. You might discover that some servers already have the updated value, or that different servers are using different credentials entirely.

    Step 2: Update with Precision

    Use sed to make surgical changes without touching other configuration values:

    ssh web01 "sed -i 's/^API_KEY=.*/API_KEY=\"sk_live_abc123xyz\"/' /var/www/app/.env"
    ssh web02 "sed -i 's/^API_KEY=.*/API_KEY=\"sk_live_abc123xyz\"/' /var/www/app/.env"
    ssh api01 "sed -i 's/^API_KEY=.*/API_KEY=\"sk_live_abc123xyz\"/' /var/www/app/.env"
    

    The caret (^) anchor is crucial here—it ensures you only match lines that start with API_KEY=, preventing accidental modifications elsewhere in the file where that string might appear in comments or other contexts.

    Step 3: Verify Success

    Run the same grep command from step 1 across all servers again to confirm consistency:

    ssh web01 "grep '^API_KEY=' /var/www/app/.env"
    ssh web02 "grep '^API_KEY=' /var/www/app/.env"  
    ssh api01 "grep '^API_KEY=' /var/www/app/.env"
    

    All servers should now return identical output. If any server differs, you caught it before it causes problems.
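Step 3 can also be made machine-checkable: pipe every server's line through sort -u and count the survivors; exactly one line means all servers agree. Sketched below with local files standing in for the three hosts, since in production the inner command would be the same ssh/grep call shown above:

```shell
# Sketch: drift check. Local .env copies stand in for the three servers.
tmp=$(mktemp -d)
for server in web01 web02 api01; do
    printf 'APP_ENV=production\nAPI_KEY="sk_live_abc123xyz"\n' > "$tmp/$server.env"
done

# In production the inner command would be:
#   ssh "$server" "grep '^API_KEY=' /var/www/app/.env"
distinct=$(for server in web01 web02 api01; do
    grep '^API_KEY=' "$tmp/$server.env"
done | sort -u | wc -l)

echo "distinct values: $distinct"   # 1 means every server agrees
```

Anything greater than 1 tells you immediately which step-2 update to re-run.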

    When to Use This Pattern

    This approach shines when:

    • You don’t have configuration management tools like Ansible or Puppet set up
    • You need to make a one-off change quickly without going through a full deployment pipeline
    • You’re working in an environment where you have sudo access on the production servers
    • The number of servers is small enough that manual SSH is practical (typically under 10-15 servers)

    The pattern trades automation for visibility and control. You see exactly what’s happening at each step, which is valuable when working with sensitive configuration like API credentials or database passwords.

    Pro Tips

    Escape quotes properly: the backslashes in \"value-here\" protect the inner double quotes from the ssh command’s outer quoting, so they reach the remote shell and land in the file intact.

    Use a consistent naming pattern: If your .env file has similar variable names like API_KEY, API_KEY_SANDBOX, and PARTNER_API_KEY, the ^ anchor prevents accidentally matching the wrong one.

    Test on one server first: If you’re unsure about the sed syntax, run it on one server, verify the result, then proceed to the others.

    Consider a for loop: For many servers, wrap it in a loop:

    for server in web01 web02 api01; do
        echo "Updating $server..."
        ssh "$server" "sed -i 's/^API_KEY=.*/API_KEY=\"new-value\"/' /var/www/app/.env"
        ssh "$server" "grep '^API_KEY=' /var/www/app/.env"
    done
    

    This pattern isn’t a replacement for proper configuration management, but it’s a pragmatic technique for those moments when you need to make a quick, safe change across a handful of servers without the overhead of a full deployment pipeline.

  • Fixing 504 Gateway Timeout in Docker Development

    The Problem

    You’re running a Laravel app in Docker (nginx + PHP-FPM), and you keep hitting 504 Gateway Timeout errors on pages that work fine in production. Long-running reports, imports, and exports all fail locally but succeed on the live server.

    This is a configuration mismatch: your local Docker setup has default timeouts, but production doesn’t.

    The Root Cause

    Two layers have timeout settings that need to align:

    1. PHP execution limits: How long PHP will run before killing a script
    2. nginx FastCGI timeouts: How long nginx will wait for PHP-FPM to respond

    When either of these times out before your code finishes, you get a 504.

    Check Production Settings First

    SSH into your production server and check what’s actually running:

    php -i | grep -E 'max_execution_time|max_input_time|memory_limit'

    You’ll probably see something like:

    max_execution_time => 0
    max_input_time => -1
    memory_limit => -1

    0 and -1 mean unlimited. Production doesn’t kill long-running scripts. Your local Docker setup probably has defaults like 30s for execution time and 60s for nginx FastCGI timeout. One caveat: the PHP CLI hard-codes max_execution_time to 0, so verify that particular value through your FPM pool config or a phpinfo() page served by the web server rather than php -i on the command line.

    Fix 1: PHP Timeout Settings

    Create or update your PHP overrides file:

    ; docker/php/php-overrides.ini
    max_execution_time = 0
    max_input_time = -1
    memory_limit = -1
    upload_max_filesize = 50M
    post_max_size = 50M

    Mount this file in your docker-compose.yml:

    services:
      php:
        image: php:8.2-fpm
        volumes:
          - ./docker/php/php-overrides.ini:/usr/local/etc/php/conf.d/99-overrides.ini
          - ./:/var/www/html
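One more layer can cut requests short even with the ini settings above: PHP-FPM's own request_terminate_timeout, which kills workers independently of max_execution_time. The stock php:8.2-fpm image appears to leave it off, but it is worth checking your pool config. A pool override sketch (the file name and mount target are assumptions):

```ini
; docker/php/zz-timeouts.conf (name assumed)
; mount into /usr/local/etc/php-fpm.d/ alongside the default www.conf
[www]
; 0 disables FPM's hard per-request kill; keep it aligned with php.ini
request_terminate_timeout = 0
```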

    Fix 2: nginx FastCGI Timeouts

    Update your nginx site config:

    # docker/nginx/site.conf
    server {
        listen 80;
        root /var/www/html/public;
        index index.php;
    
        location ~ \.php$ {
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
    
            # Add these timeout settings
            fastcgi_read_timeout 300s;
            fastcgi_connect_timeout 300s;  # note: nginx caps connect timeouts at ~75s
            fastcgi_send_timeout 300s;
        }
    }

    Mount this in docker-compose.yml:

    services:
      nginx:
        image: nginx:alpine
        volumes:
          - ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
          - ./:/var/www/html
        ports:
          - "8080:80"

    Why These Numbers Matter

    Default Behavior (Broken)

    • PHP max_execution_time: 30s
    • nginx fastcgi_read_timeout: 60s
    • Your report generation: 120s

    Result: whichever limit trips first breaks the request. PHP fatals at 30s (surfacing as a 500), and even if you raise only the PHP limit, nginx stops waiting at 60s and returns the 504.

    Production Behavior (Works)

    • PHP max_execution_time: 0 (unlimited)
    • nginx fastcgi_read_timeout: 300s (5 minutes)
    • Your report generation: 120s

    Result: PHP finishes at 120s → nginx receives response → success

    Full docker-compose.yml Example

    version: '3.8'
    
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "8080:80"
        volumes:
          - ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
          - ./:/var/www/html
        depends_on:
          - php
    
      php:
        image: php:8.2-fpm
        volumes:
          - ./docker/php/php-overrides.ini:/usr/local/etc/php/conf.d/99-overrides.ini
          - ./:/var/www/html
        environment:
          - DB_HOST=mysql
          - DB_DATABASE=laravel
          - DB_USERNAME=root
          - DB_PASSWORD=secret
    
      mysql:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: secret
          MYSQL_DATABASE: laravel
        volumes:
          - mysql_data:/var/lib/mysql
    
    volumes:
      mysql_data:

    After Making Changes

    Restart your containers:

    docker-compose down
    docker-compose up -d

    Verify PHP settings took effect:

    docker-compose exec php php -i | grep max_execution_time

    You should see max_execution_time => 0.

    When to Use Unlimited vs Fixed Timeouts

    Development: Unlimited (What We Just Did)

    • Mirrors production behavior
    • Prevents false negatives (things that work in prod fail locally)
    • Easier debugging (long operations don’t timeout mid-execution)

    Production: Consider Limits

    Unlimited timeouts in production can be dangerous:

    • Runaway scripts can hang forever
    • Resource exhaustion under load
    • Harder to detect infinite loops

    If your production has unlimited timeouts and you’re seeing issues, consider:

    max_execution_time = 300  ; 5 minutes
    memory_limit = 512M        ; Generous but not unlimited

    The Takeaway

    When you get 504 errors in Docker that don’t happen in production, check timeout alignment between:

    1. PHP execution limits (php.ini or php-overrides.ini)
    2. nginx FastCGI timeouts (fastcgi_read_timeout)

    Mirror your production settings locally. Don’t debug phantom timeout issues—just make the environments match.

  • Strategic Sentry Filtering: Finding Quick Wins in Production Bugs

    When Sentry shows hundreds of unresolved issues, how do you find the bugs worth fixing without drowning in noise?

    The key is strategic filtering—exclude performance and architectural problems, surface defensive coding gaps.

    The Filtering Strategy

    Start with your baseline query:

    • is:unresolved
    • environment:production
    • statsPeriod:14d (last 14 days)

    Then exclude the noise:

    !issue.type:performance_n_plus_one_db_queries
    !issue.type:performance_slow_db_query
    !title:"External API timeout"
    

    Why exclude these?

    • N+1 queries → Need query optimization, not bug fixes
    • Slow queries → Architectural problem (indexing, caching)
    • Third-party errors → External dependency, can’t fix on your end
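Assembled into one saved search it looks like this (the issue.type tokens are the ones quoted above; the title filter is this example's own, so swap in whatever noisy titles plague your project):

```
is:unresolved
!issue.type:performance_n_plus_one_db_queries
!issue.type:performance_slow_db_query
!title:"External API timeout"
```

In the Sentry UI these terms go on a single line in the search box, while environment:production and the 14-day window are typically set via the environment and date-range selectors.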

    What You’re Left With: Quick Wins

    After filtering, you’ll surface bugs that are actually fixable:

    • Defensive coding gaps: Null checks, division by zero, type validation
    • Edge case handling: Missing data, unexpected input formats
    • Race conditions: Duplicate queue entries, concurrency issues

    Example Quick Wins

    // Before: Division by zero crash
    $average = $total / $count;
    
    // After: Defensive guard
    $average = $count > 0 ? $total / $count : 0;
    
    // Before: Type error when null
    $data = json_decode($response->body);
    
    // After: Null coalescing + decode to array, with a fallback
    $data = json_decode($response->body ?? '{}', true) ?: [];
    
    // Before: Duplicate queue entries
    ProcessReport::dispatch($reportId);
    
    // After: make the job unique so concurrent dispatches collapse into one
    // (ProcessReport implements ShouldBeUnique and returns the report id
    // from uniqueId(); requires a cache driver that supports locks)
    ProcessReport::dispatch($reportId);
    

    The Mindset Shift

    Don’t try to fix everything. Filter out architectural problems (performance, scalability, external APIs). Focus on defensive coding (validation, null safety, edge cases).

    Quick wins = small PRs with immediate impact. Fix 5 crashes in an hour instead of chasing one slow query for a week.

    Your Action Items

    1. Build your exclusion filter template (save it as a bookmark)
    2. Run it weekly to surface new defensive coding gaps
    3. Prioritize by event count + affected users
    4. Create small, focused PRs

    Bonus tip: If an issue keeps recurring after a “fix”, it’s probably architectural. Exclude it and move on.

  • Safely Updating Environment Variables Across Multiple Production Servers

    When running a Laravel application across multiple production servers (load-balanced or distributed architecture), you need to update configuration values consistently across all servers without causing downtime or configuration drift.

    The Problem

    You have 8 servers running your application:

    • 2 web servers (web1, web2)
    • 2 API servers (api1, api2)
    • 2 queue worker servers (worker1, worker2)
    • 2 admin dashboard servers (admin1, admin2)

    A third-party service changed their API endpoint from http://api.vendor.com/ to https://platform.vendor.com/, and you need to update the VENDOR_API_URL environment variable across all servers immediately.

    The Solution

    Use SSH commands combined with sed and grep to update .env files across all servers in one operation, with immediate verification.

    Step 1: Check Current Values

    for server in web1 web2 api1 api2 worker1 worker2 admin1 admin2; do
      echo "=== $server ==="
      ssh $server "grep '^VENDOR_API_URL=' /var/www/app/.env"
    done

    This gives you a baseline and confirms the current value is consistent across all servers.

    Step 2: Update All Servers

    for server in web1 web2 api1 api2 worker1 worker2 admin1 admin2; do
      ssh $server "sudo sed -i 's|^VENDOR_API_URL=\"http://api.vendor.com/\"|VENDOR_API_URL=\"https://platform.vendor.com/\"|' /var/www/app/.env && sudo grep '^VENDOR_API_URL=' /var/www/app/.env"
    done

    Key techniques:

    • Pipe delimiters in sed: s|old|new| instead of s/old/new/ avoids escaping forward slashes in URLs
    • Chain with grep: && immediately verifies each update
    • Use sudo: Handle file permission requirements
    • In-place edit: sed -i modifies the file directly
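Those techniques are easy to rehearse locally before touching a server. The sketch below runs the same substitution against a stand-in .env file; note how the ^ anchor plus the full old value leaves a similarly named variable untouched:

```shell
# Rehearsal: run the same sed against a local stand-in for .env
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
APP_ENV=production
VENDOR_API_URL="http://api.vendor.com/"
LEGACY_VENDOR_API_URL="http://api.vendor.com/legacy"
EOF

# Pipe delimiters: no need to escape the slashes in the URLs
sed -i 's|^VENDOR_API_URL="http://api.vendor.com/"|VENDOR_API_URL="https://platform.vendor.com/"|' "$envfile"

grep 'VENDOR_API_URL' "$envfile"
```

The LEGACY_ line survives unchanged because the ^ anchor forces the match to start at the beginning of the line.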

    Step 3: Independent Verification

    After all updates complete, run a fresh check to confirm consistency:

    for server in web1 web2 api1 api2 worker1 worker2 admin1 admin2; do
      echo "=== $server ==="
      ssh $server "grep '^VENDOR_API_URL=' /var/www/app/.env"
    done

    All servers should now show the new value. Document the results in your deployment notes.

    Important Considerations

    Service Restarts Required

    .env changes don’t take effect until services restart. Plan your restart strategy:

    • PHP-FPM: sudo systemctl reload php8.2-fpm
    • Queue Workers: sudo supervisorctl restart all or php artisan queue:restart
    • Laravel Octane: php artisan octane:reload

    Consider rolling restarts to avoid downtime (restart workers first, then web servers one at a time).
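The rolling order can be sketched as a loop. Here ssh is deliberately stubbed with a function that records the commands, so the sequence is checkable without real servers; drop the stub in real use (the hostnames and the php8.2-fpm service name are assumptions carried over from above):

```shell
# Stub for illustration only: record commands instead of reaching servers
calls=$(mktemp)
ssh() { echo "$*" >> "$calls"; }

# Workers first (queued jobs just wait), then user-facing servers one at a time
for server in worker1 worker2 web1 web2 api1 api2 admin1 admin2; do
    ssh "$server" "sudo systemctl reload php8.2-fpm"
    sleep 1   # give each node a moment to settle before touching the next
done

wc -l < "$calls"   # 8 commands, issued in rolling order
```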

    Config Cache

    If you’re using php artisan config:cache in production, updating .env alone won’t work. You need to rebuild the cache:

    for server in web1 web2 api1 api2 worker1 worker2 admin1 admin2; do
      ssh $server "cd /var/www/app && php artisan config:clear && php artisan config:cache"
    done

    Why Not Ansible/Chef/Puppet?

    Configuration management tools are great for infrastructure-as-code, but for urgent hotfixes or one-off changes, this SSH approach is:

    • Faster – No playbook to write/test/deploy
    • Transparent – You see exactly what changed in real-time
    • Auditable – Terminal history captures the exact commands
    • Simple – Works even if CM tools aren’t set up yet

    For routine configuration updates, absolutely use your CM tool. But when you need to update an API endpoint across 8 servers right now, this approach gets the job done safely.

    Pro Tips

    • Document before changing: Keep a record of old vs new values
    • Test on one server first: Verify the sed command works before running across all servers
    • Use SSH config: Simplify server names with ~/.ssh/config aliases
    • Consider tmux: Run parallel SSH sessions to see all updates simultaneously
    • Backup first: cp /var/www/app/.env /var/www/app/.env.backup before making changes

    Need to update environment variables across your server fleet? This pattern ensures consistency, provides immediate verification, and gives you the confidence that all servers are aligned.

  • Pause Your Asset Compilation Container Before Frontend Changes

    When working on a Laravel application with separate containers for asset compilation (Vite, Mix, or Webpack), pause the compilation container before editing frontend files, then unpause after your changes are complete.

    Why This Matters

    Hot reload watchers can:

    • Lock files mid-edit
    • Trigger partial recompilations
    • Create race conditions with your IDE’s file writes
    • Generate confusing browser cache states

    The Workflow

    # Before editing JS/CSS/Vue files
    docker pause myapp-vite

    # Make your frontend changes
    # ... edit components, styles, scripts ...

    # After all changes are saved
    docker unpause myapp-vite

    What to Tell Your Users

    Instead of checking Docker logs after unpause, give them:

    • Which page to view: “Check the /dashboard/reports page”
    • What to test: “Click the ‘Export’ button, verify the CSV downloads”
    • What should change: “The table should now be sortable”

    This keeps testing focused and avoids the “it compiled, now what?” confusion.

    When to Skip This

    If you’re only editing a single file and want live reload, keep the container running. But for multi-file refactors or component restructures, pause first.

    The two seconds spent pausing/unpausing saves minutes of debugging phantom reload issues.
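If the pause/unpause pair is easy to forget, a small wrapper function keeps them together. A sketch, with docker stubbed by a logging function so the flow is checkable without a daemon; the myapp-vite container name is carried over from above:

```shell
# Stub for illustration: records docker calls instead of invoking the daemon
calls=$(mktemp)
docker() { echo "docker $*" >> "$calls"; }

with_paused_assets() {
    docker pause myapp-vite
    "$@"                       # your edit/build command
    local rc=$?
    docker unpause myapp-vite  # always resume the watcher
    return $rc
}

with_paused_assets true   # stand-in for an editing session
```

Usage in real life would be something like with_paused_assets "$EDITOR" resources/js/Dashboard.vue.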

  • Make Shell Script Hooks Visible with Stderr Redirection

    I was setting up Git hooks to auto-sync my project workspace on exit, but I couldn’t see any output. The hook ran silently—no success messages, no errors, nothing. I had no idea if it was even working.

    The problem? Hook scripts inherit their parent’s stdout/stderr, but they often run in contexts where those streams are redirected to /dev/null. So even if you echo messages, you’ll never see them.

    The Solution: Redirect stderr to stdout, Then Format

    To make hook output visible, you need to:

    1. Redirect stderr to stdout (2>&1)
    2. Pipe through sed to add indentation
    3. Redirect back to stderr (>&2) so it appears in the terminal

    Here’s the pattern:

    #!/bin/bash
    
    # Run your command and make output visible
    my-sync-script.sh 2>&1 | sed 's/^/  /' >&2
    

    What’s Happening Here?

    • 2>&1 — Merge stderr into stdout (captures all output)
    • | sed 's/^/  /' — Add 2 spaces to the start of each line (formatting)
    • >&2 — Send the formatted output to stderr (visible in terminal)
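A self-contained way to see all three steps at once, with a toy command standing in for the real hook body:

```shell
# Toy command: writes to both stdout and stderr
emit() {
    echo "normal output"
    echo "error output" >&2
}

# Merge both streams, indent every line, push the result to stderr
emit 2>&1 | sed 's/^/  /' >&2
```

Both lines arrive on the terminal, indented, regardless of what happens to the hook's stdout.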

    Example: Auto-Sync on Git Exit

    I used this pattern in a boot script that syncs project files from Google Drive before running:

    #!/bin/bash
    # ~/.local/bin/sync-workspace.sh
    
    echo "🔄 Syncing workspace from Google Drive..."
    
    rclone sync gdrive:workspace /local/workspace \
        --fast-list \
        --transfers 8 \
        2>&1 | sed 's/^/  /' >&2
    
    echo "✅ Workspace synced"
    

    Now when the script runs, I see:

    🔄 Syncing workspace from Google Drive...
      Transferred: 1.2 MiB / 1.2 MiB, 100%
      Checks: 47 / 47, 100%
    ✅ Workspace synced
    

    When to Use This

    • Git hooks (pre-commit, post-checkout, etc.)
    • Boot scripts that run on system startup
    • Cron jobs where you want output logged
    • Any background process where visibility helps debugging

    Without this pattern, your hooks run silently. With it, you get clear feedback—success messages, error details, everything.

  • Track Code Deployment with Git Tags

    When managing releases via git tags, use git tag --contains <commit-sha> to check if specific code has been deployed yet. Combine with GitHub CLI to trace PR status: gh pr view <pr-number> --json mergeCommit gets the merge SHA, then check git tag --contains <sha> to see which releases include that change. Compare timestamps with git log <tag> -1 --format='%ci' to understand deployment timing. Useful for tracking when fixes reach production.

    # Check if PR #123 is in any release
    $ gh pr view 123 --json mergeCommit
    {
      "mergeCommit": {
        "oid": "abc123..."
      }
    }
    
    # Check which tags contain this commit
    $ git tag --contains abc123
    v2024.03.10.1
    v2024.03.11.1
    
    # Compare timestamps
    $ git log v2024.03.10.1 -1 --format="%H %ci"
    abc123 2024-03-10 15:04:11 +0800
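The --contains semantics are easy to sanity-check in a throwaway repo: a tag "contains" a commit when that commit is in the tag's ancestry, which is why every later release tag keeps listing the fix.

```shell
# Throwaway repo: one fix commit, then two release tags on top of it
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

git commit -q --allow-empty -m "Fix report export"
fix_sha=$(git rev-parse HEAD)
git tag v2024.03.10.1

git commit -q --allow-empty -m "Unrelated later work"
git tag v2024.03.11.1

git tag --contains "$fix_sha"   # both tags list the fix
```

So "which release first shipped this fix" is simply the earliest tag in that list.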
  • Automate Git Branch Management with Shell Scripts

    Managing multiple feature branches in a Laravel project can be tedious—especially when you need to rebase them all onto master before a release. Instead of manually running the same Git commands for each branch, automate it with a shell script.

    The Manual Process

    # For each branch, manually run:
    git checkout master
    git checkout origin/feature/user-dashboard
    git rebase master
    git push origin HEAD:feature/user-dashboard --force
    
    git checkout master
    git checkout origin/feature/api-endpoints
    git rebase master
    git push origin HEAD:feature/api-endpoints --force
    
    # ...repeat for every branch

    This is error-prone and wastes time. You might forget a branch, push to the wrong remote, or leave the repository in a bad state if you hit conflicts.

    The Automated Approach

    Create a rebase-branches.sh script that handles the entire workflow with proper error handling:

    #!/bin/bash
    # rebase-branches.sh
    
    set -e
    
    # Parse arguments
    TARGET_BRANCH="master"
    while [[ $# -gt 0 ]]; do
        case $1 in
            -t|--target)
                TARGET_BRANCH="$2"
                shift 2
                ;;
            *)
                BRANCHES+=("$1")
                shift
                ;;
        esac
    done
    
    # Validate repository
    REMOTE_URL=$(git remote get-url origin 2>/dev/null || echo "")
    if [[ "$REMOTE_URL" != "git@github.com:yourcompany/yourapp.git" ]]; then
        echo "Error: Must run from correct repository"
        exit 1
    fi
    
    echo "Fetching from origin..."
    git fetch origin
    
    SKIPPED=()
    SUCCESSFUL=()
    
    for branch in "${BRANCHES[@]}"; do
        echo "Processing: $branch"
        
        # Checkout target branch
        if ! git checkout "$TARGET_BRANCH" 2>/dev/null; then
            SKIPPED+=("$branch (checkout failed)")
            continue
        fi
        
        # Checkout feature branch from origin
        if ! git checkout "origin/$branch" 2>/dev/null; then
            SKIPPED+=("$branch (branch not found)")
            continue
        fi
        
        # Attempt rebase
        if git rebase "$TARGET_BRANCH" 2>/dev/null; then
            # Force push if successful
            if git push origin "HEAD:$branch" --force 2>/dev/null; then
                SUCCESSFUL+=("$branch")
            else
                SKIPPED+=("$branch (push failed)")
                git rebase --abort 2>/dev/null || true
            fi
        else
            # Abort on conflicts
            git rebase --abort 2>/dev/null || true
            SKIPPED+=("$branch (conflicts)")
        fi
    done
    
    # Return to target branch
    git checkout "$TARGET_BRANCH" 2>/dev/null || true
    
    # Report results
    echo "Successfully rebased: ${#SUCCESSFUL[@]}"
    for branch in "${SUCCESSFUL[@]}"; do
        echo "  ✓ $branch"
    done
    
    if [ ${#SKIPPED[@]} -gt 0 ]; then
        echo "Skipped (conflicts/errors): ${#SKIPPED[@]}"
        for branch in "${SKIPPED[@]}"; do
            echo "  ✗ $branch"
        done
    fi

    Usage

    # Make it executable
    chmod +x rebase-branches.sh
    
    # Rebase onto master (default)
    ./rebase-branches.sh feature/user-dashboard feature/api-endpoints feature/admin-tools
    
    # Rebase onto a different branch
    ./rebase-branches.sh -t develop feature/user-dashboard feature/api-endpoints

    What It Does

    • Validates repository: Prevents accidentally running it in the wrong project
    • Fetches from origin: Ensures you’re working with latest remote changes
    • Handles conflicts gracefully: Aborts rebase and continues to next branch instead of leaving you in a broken state
    • Reports results: Shows you exactly which branches succeeded and which had issues
    • Returns to safe state: Always checks out the target branch when done

    Why This Matters

    Automating repetitive Git workflows saves time and reduces errors. This script is idempotent—if something fails, it cleans up and moves on without leaving your repository in a broken state. Perfect for teams managing multiple long-running feature branches that need frequent rebasing onto master or develop.

    The script pattern can be adapted for other batch Git operations: merging multiple branches, deleting stale branches, or updating multiple repositories at once.
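As one example of adapting the pattern, here is a stale-branch sweep in the same defensive style: it only prints candidates by default, demonstrated in a throwaway repo with invented branch names (uncomment the delete once you trust the list):

```shell
# Throwaway repo: one branch already merged into master, one still in flight
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b master
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m init

git branch feature/shipped          # points at the master tip: fully merged
git checkout -qb feature/wip
git commit -q --allow-empty -m wip
git checkout -q master

# List local branches whose commits are all reachable from master
stale=$(git branch --merged master | grep -vE '(^\*|master)' | tr -d ' ')
echo "Would delete: $stale"
# git branch -d $stale              # uncomment when confident
```

feature/wip never appears because its extra commit is not reachable from master.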

  • Avoid Double-Reporting Errors (Log vs Throw)

    Have you ever noticed one production failure turning into two (or more) alerts?

    A frequent cause is double-reporting the same problem: your code logs an error and throws an exception, while your error tracker captures both the log entry and the unhandled exception.

    The problem pattern

    This is the classic “log and throw” antipattern:

    use Illuminate\Support\Facades\Log;
    
    try {
        $result = $service->run();
    } catch (\Throwable $e) {
        Log::error('Service failed', ['exception' => $e]);
        throw $e;
    }
    

    Depending on your monitoring setup, that can create:

    • One event for the log entry
    • Another event for the unhandled exception

    A cleaner approach

    Decide which signal is the source of truth:

    • If you’re going to rethrow: skip explicit logging and just add context where you catch it.
    • If you handle it: log it (with context) and do not rethrow.

    Example: add context, then rethrow without logging:

    try {
        $result = $service->run();
    } catch (\Throwable $e) {
        // Attach context for your exception handler / error tracker.
        // (How you do this depends on your app; keep it lightweight.)
        throw new \RuntimeException('Processing failed', 0, $e);
    }
    

    Bonus: use a correlation id

    When you do log, include a request/job correlation id so you can trace everything without multiplying alerts.

    Log::withContext([
        'correlation_id' => request()->header('X-Correlation-Id') ?? (string) \Illuminate\Support\Str::uuid(),
    ]);
    

    The end goal isn’t “less logging” — it’s one clear alert per real failure, with enough context to debug fast.