Category: DevOps

  • Make Shell Script Hooks Visible with Stderr Redirection

    I was setting up Git hooks to auto-sync my project workspace on exit, but I couldn’t see any output. The hook ran silently—no success messages, no errors, nothing. I had no idea if it was even working.

    The problem? Hook scripts inherit their parent’s stdout/stderr, but many of the tools that run them (Git included) capture or discard stdout and only pass stderr through to the terminal. So anything you echo to stdout silently disappears.

    The Solution: Redirect stderr to stdout, Then Format

    To make hook output visible, you need to:

    1. Redirect stderr to stdout (2>&1)
    2. Pipe through sed to add indentation
    3. Redirect back to stderr (>&2) so it appears in the terminal

    Here’s the pattern:

    #!/bin/bash
    
    # Run your command and make output visible
    my-sync-script.sh 2>&1 | sed 's/^/  /' >&2
    

    What’s Happening Here?

    • 2>&1 — Merge stderr into stdout (captures all output)
    • | sed 's/^/  /' — Add 2 spaces to the start of each line (formatting)
    • >&2 — Send the formatted output to stderr (visible in terminal)

    Example: Auto-Sync on Git Exit

    I used this pattern in a boot script that syncs project files from Google Drive before running:

    #!/bin/bash
    # ~/.local/bin/sync-workspace.sh
    
    echo "🔄 Syncing workspace from Google Drive..." >&2
    
    rclone sync gdrive:workspace /local/workspace \
        --fast-list \
        --transfers 8 \
        2>&1 | sed 's/^/  /' >&2
    
    echo "✅ Workspace synced" >&2
    

    Now when the script runs, I see:

    🔄 Syncing workspace from Google Drive...
      Transferred: 1.2 MiB / 1.2 MiB, 100%
      Checks: 47 / 47, 100%
    ✅ Workspace synced
    

    When to Use This

    • Git hooks (pre-commit, post-checkout, etc.)
    • Boot scripts that run on system startup
    • Cron jobs where you want output logged
    • Any background process where visibility helps debugging

    Without this pattern, your hooks run silently. With it, you get clear feedback—success messages, error details, everything.
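
    The same pattern drops straight into a Git hook file. A minimal sketch (the hook name and the sync command are placeholders; swap in your own):

```shell
#!/bin/bash
# .git/hooks/post-checkout (hypothetical example of the pattern above).
# Everything inside the braces is merged (2>&1), indented by sed, and sent
# to stderr (>&2) so Git shows it in the terminal instead of swallowing it.
{
    echo "Syncing workspace..."
    # your real command goes here, e.g. my-sync-script.sh
    echo "done"
} 2>&1 | sed 's/^/  /' >&2
```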

  • Track Code Deployment with Git Tags

    When managing releases via git tags, use git tag --contains <commit-sha> to check whether specific code has been deployed yet. Combine it with the GitHub CLI to trace a PR: gh pr view <pr-number> --json mergeCommit returns the merge SHA, and git tag --contains <sha> then shows which releases include that change. Compare timestamps with git log <tag> -1 --format='%ci' to understand deployment timing. Useful for tracking when fixes reach production.

    # Check if PR #123 is in any release
    $ gh pr view 123 --json mergeCommit
    {
      "mergeCommit": {
        "oid": "abc123..."
      }
    }
    
    # Check which tags contain this commit
    $ git tag --contains abc123
    v2024.03.10.1
    v2024.03.11.1
    
    # Compare timestamps
    $ git log v2024.03.10.1 -1 --format="%H %ci"
    abc123 2024-03-10 15:04:11 +0800
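
    The tag check wraps nicely into a small helper. A sketch (deployed_in is a hypothetical name; run it inside the repo and feed it the merge SHA, e.g. from gh pr view 123 --json mergeCommit --jq '.mergeCommit.oid'):

```shell
#!/bin/bash
# deployed_in (hypothetical helper) — list every tag that contains a commit,
# with each tag's commit date, to see which releases shipped the change and when.
deployed_in() {
    local sha="$1" tag
    for tag in $(git tag --contains "$sha"); do
        printf '%s  %s\n' "$tag" "$(git log -1 --format='%ci' "$tag")"
    done
}
# usage: deployed_in "$(gh pr view 123 --json mergeCommit --jq '.mergeCommit.oid')"
```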
  • Automate Git Branch Management with Shell Scripts

    Managing multiple feature branches in a Laravel project can be tedious—especially when you need to rebase them all onto master before a release. Instead of manually running the same Git commands for each branch, automate it with a shell script.

    The Manual Process

    # For each branch, manually run:
    git checkout master
    git checkout origin/feature/user-dashboard
    git rebase master
    git push origin HEAD:feature/user-dashboard --force
    
    git checkout master
    git checkout origin/feature/api-endpoints
    git rebase master
    git push origin HEAD:feature/api-endpoints --force
    
    # ...repeat for every branch

    This is error-prone and wastes time. You might forget a branch, push to the wrong remote, or leave the repository in a bad state if you hit conflicts.

    The Automated Approach

    Create a rebase-branches.sh script that handles the entire workflow with proper error handling:

    #!/bin/bash
    # rebase-branches.sh
    
    set -e
    
    # Parse arguments
    TARGET_BRANCH="master"
    BRANCHES=()
    while [[ $# -gt 0 ]]; do
        case $1 in
            -t|--target)
                TARGET_BRANCH="$2"
                shift 2
                ;;
            *)
                BRANCHES+=("$1")
                shift
                ;;
        esac
    done
    
    # Validate repository
    REMOTE_URL=$(git remote get-url origin 2>/dev/null || echo "")
    if [[ "$REMOTE_URL" != "git@github.com:yourcompany/yourapp.git" ]]; then
        echo "Error: Must run from correct repository"
        exit 1
    fi
    
    echo "Fetching from origin..."
    git fetch origin
    
    SKIPPED=()
    SUCCESSFUL=()
    
    for branch in "${BRANCHES[@]}"; do
        echo "Processing: $branch"
        
        # Checkout target branch
        if ! git checkout "$TARGET_BRANCH" 2>/dev/null; then
            SKIPPED+=("$branch (checkout failed)")
            continue
        fi
        
        # Checkout feature branch from origin
        if ! git checkout "origin/$branch" 2>/dev/null; then
            SKIPPED+=("$branch (branch not found)")
            continue
        fi
        
        # Attempt rebase
        if git rebase "$TARGET_BRANCH" 2>/dev/null; then
            # Force push if successful
            if git push origin "HEAD:$branch" --force 2>/dev/null; then
                SUCCESSFUL+=("$branch")
            else
                SKIPPED+=("$branch (push failed)")
                git rebase --abort 2>/dev/null || true
            fi
        else
            # Abort on conflicts
            git rebase --abort 2>/dev/null || true
            SKIPPED+=("$branch (conflicts)")
        fi
    done
    
    # Return to target branch
    git checkout "$TARGET_BRANCH" 2>/dev/null || true
    
    # Report results
    echo "Successfully rebased: ${#SUCCESSFUL[@]}"
    for branch in "${SUCCESSFUL[@]}"; do
        echo "  ✓ $branch"
    done
    
    if [ ${#SKIPPED[@]} -gt 0 ]; then
        echo "Skipped (conflicts/errors): ${#SKIPPED[@]}"
        for branch in "${SKIPPED[@]}"; do
            echo "  ✗ $branch"
        done
    fi

    Usage

    # Make it executable
    chmod +x rebase-branches.sh
    
    # Rebase onto master (default)
    ./rebase-branches.sh feature/user-dashboard feature/api-endpoints feature/admin-tools
    
    # Rebase onto a different branch
    ./rebase-branches.sh -t develop feature/user-dashboard feature/api-endpoints

    What It Does

    • Validates repository: Prevents accidentally running it in the wrong project
    • Fetches from origin: Ensures you’re working with latest remote changes
    • Handles conflicts gracefully: Aborts rebase and continues to next branch instead of leaving you in a broken state
    • Reports results: Shows you exactly which branches succeeded and which had issues
    • Returns to safe state: Always checks out the target branch when done

    Why This Matters

    Automating repetitive Git workflows saves time and reduces errors. This script is idempotent—if something fails, it cleans up and moves on without leaving your repository in a broken state. Perfect for teams managing multiple long-running feature branches that need frequent rebasing onto master or develop.

    The script pattern can be adapted for other batch Git operations: merging multiple branches, deleting stale branches, or updating multiple repositories at once.
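
    As one example of adapting it, a stale-branch report is just a different loop body. A sketch (assumes master is your mainline; it only lists candidates, it never deletes anything):

```shell
#!/bin/bash
# list_merged (hypothetical helper) — branches already merged into master,
# i.e. candidates for deletion. Review the list before running git branch -d.
list_merged() {
    git branch --merged master \
        | grep -vE '^\*|(^|[[:space:]])master$' \
        | sed 's/^[ *]*//'
}
```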

  • Avoid Double-Reporting Errors (Log vs Throw)

    Have you ever noticed one production failure turning into two (or more) alerts?

    A frequent cause is double-reporting the same problem: your code logs an error and throws an exception, while your error tracker captures both the log entry and the unhandled exception.

    The problem pattern

    This is the classic “log and throw” antipattern:

    use Illuminate\Support\Facades\Log;
    
    try {
        $result = $service->run();
    } catch (\Throwable $e) {
        Log::error('Service failed', ['exception' => $e]);
        throw $e;
    }
    

    Depending on your monitoring setup, that can create:

    • One event for the log entry
    • Another event for the unhandled exception

    A cleaner approach

    Decide which signal is the source of truth:

    • If you’re going to rethrow: skip explicit logging and just add context where you catch it.
    • If you handle it: log it (with context) and do not rethrow.

    Example: add context, then rethrow without logging:

    try {
        $result = $service->run();
    } catch (\Throwable $e) {
        // Attach context for your exception handler / error tracker.
        // (How you do this depends on your app; keep it lightweight.)
        throw new \RuntimeException('Processing failed', 0, $e);
    }
    

    Bonus: use a correlation id

    When you do log, include a request/job correlation id so you can trace everything without multiplying alerts.

    Log::withContext([
        'correlation_id' => request()->header('X-Correlation-Id') ?? (string) \Illuminate\Support\Str::uuid(),
    ]);
    

    The end goal isn’t “less logging” — it’s one clear alert per real failure, with enough context to debug fast.

  • Multi-Method Extension Verification for Debugging

    When debugging missing PHP extension errors in Docker, use multiple verification methods to confirm what’s actually installed vs what the application can see.

    Method 1: List All Loaded Modules

    docker exec my-app php -m
    

    Method 2: Runtime Check via PHP Code

    docker exec my-app php -r "echo extension_loaded('redis') ? 'YES' : 'NO';"
    

    Method 3: Check php.ini Configuration

    docker exec my-app php --ini
    # Shows which ini files are loaded
    

    Method 4: Test Actual Class/Function Availability

    docker exec my-app php -r "var_dump(class_exists('Redis'));"
    # phpredis exposes the Redis class; there is no redis_connect() function
    

    Why Multiple Methods?

    • php -m shows modules PHP knows about
    • extension_loaded() checks if the extension is active in the current context
    • Class/function checks verify the extension’s API is actually usable

    In debugging sessions, cross-reference all of these to isolate whether the issue is installation, configuration, or application-level detection.

    Pro tip: If php -m shows the extension but extension_loaded() returns false, check your php.ini configuration and verify the extension is enabled for the SAPI you’re using (CLI vs FPM).

  • Docker PHP Extensions: Build-Time vs Runtime Installation

    When working with PHP in Docker, you have two choices for installing extensions: build them into your Dockerfile (permanent) or install them at runtime via setup scripts (temporary). Build-time installation is the recommended approach.

    Build-Time (Dockerfile)

    FROM php:8.2-fpm

    # pdo_pgsql and gd need system libraries; redis ships via PECL,
    # not docker-php-ext-install
    RUN apt-get update && apt-get install -y libpq-dev libpng-dev \
        && docker-php-ext-configure gd \
        && docker-php-ext-install pdo_pgsql gd opcache \
        && pecl install redis \
        && docker-php-ext-enable redis
    

    Runtime (setup script)

    #!/bin/bash
    docker exec my-app sh -c "pecl install redis && docker-php-ext-enable redis"
    docker restart my-app
    

    The Problem with Runtime Installation

    You must run your setup script after every docker-compose up. If you forget, your application breaks with “extension not found” errors.

    Build-time installation ensures extensions are always present when the container starts. No manual intervention required.

    Real-World Impact

    If your CMS shows “missing database extension” errors after container restarts, check whether the extension is in your Dockerfile’s RUN statement or only installed via post-startup scripts.

    Rule of thumb: If it needs to exist every time the container runs, it belongs in your Dockerfile.

  • Nginx Config for Laravel + WordPress Hybrid Apps

    Running Laravel as an API backend alongside WordPress on the same server? Here’s how to configure Nginx to route /api/* requests to Laravel while serving everything else through WordPress.

    This pattern is useful when you want Laravel’s powerful API capabilities but need WordPress for content management.

    The Nginx Configuration

    server {
        listen 80;
        server_name app.example.com;
        
        # Default to WordPress
        root /var/www/wordpress;
        index index.php index.html;
    
        # Route /api/* to Laravel
        location ~ ^/api {
            root /var/www/laravel/public;
        try_files $uri $uri/ /api/index.php?$query_string;
            
            location ~ \.php$ {
                fastcgi_pass php:9000;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                fastcgi_param SCRIPT_NAME /index.php;
            }
        }
    
        # WordPress permalinks
        location / {
            try_files $uri $uri/ /index.php?$args;
        }
    
        # PHP handler for WordPress
        location ~ \.php$ {
            fastcgi_pass php:9000;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
    

    How It Works

    The key is location priority. Nginx processes locations in this order:

    1. Exact matches (=)
    2. Prefix matches with ^~
    3. Regex matches (processed in order)
    4. Prefix matches

    The ~ ^/api regex catches API routes first, switches the root to Laravel’s public directory, and passes PHP requests to Laravel’s front controller.

    Everything else falls through to the default root (WordPress) and uses WordPress’s permalink handling.

    Why This Pattern?

    You might need this when:

    • Migrating from WordPress to Laravel incrementally
    • Building a mobile app that needs clean REST APIs but wants to keep WordPress for the marketing site
    • Your team knows WordPress for content but prefers Laravel for backend logic

    The hybrid setup lets each framework do what it does best without migration pressure.

    Gotcha: Don’t forget to change the root directive inside the /api location block, and keep the try_files fallback inside the block too (/api/index.php, not /index.php; otherwise the internal redirect escapes the /api location and WordPress’s PHP handler serves the request). If you only set try_files without changing root, Nginx will look for Laravel files in the WordPress directory.

  • Match Your Timeout Chain: Nginx → PHP-FPM → PHP

    You’ve bumped max_execution_time to 300 seconds in PHP, but your app still throws 504 Gateway Timeout after about a minute. Sound familiar?

    The problem isn’t PHP. It’s the timeout chain — every layer between the browser and your code has its own timeout, and they all need to agree.

    The typical Docker stack

    Browser → Nginx (reverse proxy) → PHP-FPM → PHP script

    Each layer has a default timeout:

    Layer     Setting                      Default
    Nginx     fastcgi_read_timeout         60s
    PHP-FPM   request_terminate_timeout    0 (unlimited)
    PHP       max_execution_time           30s

    If you only change max_execution_time to 300s, PHP is happy to run that long — but Nginx kills the connection at 60 seconds because nobody told it to wait longer. You get a 504, PHP keeps running in the background (wasting resources), and your logs show no PHP errors because PHP didn’t fail.

    The fix: align the entire chain

    php.ini overrides:

    max_execution_time = 300
    max_input_time = 300
    memory_limit = 512M

    Nginx site config (inside your location ~ \.php$ block):

    location ~ \.php$ {
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    
        # Match PHP's max_execution_time
        fastcgi_read_timeout 300s;
        fastcgi_send_timeout 300s;
        fastcgi_connect_timeout 60s;
    
        # Buffering headroom for large responses
        fastcgi_buffers 8 16k;
        fastcgi_buffer_size 32k;
        fastcgi_max_temp_file_size 0;
    }

    If you have a CDN or load balancer in front (Cloudflare, AWS ALB), add that to the chain too:

    CDN (100s) → Nginx (300s) → PHP-FPM (0) → PHP (300s)
     ↑ Weakest link: the CDN cuts the request at 100s

    The debugging trick

    When you get a 504, test from the inside out:

    1. Hit PHP-FPM directly (bypass Nginx): Does the request complete?
    2. Hit Nginx locally (bypass CDN): Does it time out?
    3. Hit the public URL: Does the CDN add its own timeout?
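
    Before testing layer by layer, it can help to print what each layer is configured to allow. A small sketch (the config path, grep pattern, and defaults are assumptions for a typical Docker setup; adjust to your stack):

```shell
#!/bin/bash
# timeout-chain.sh (sketch) — print each layer's configured timeout so a
# mismatch jumps out. Falls back to sensible labels when nothing is found.

# PHP's limit, if php is on PATH
php_t=$( (php -r 'echo ini_get("max_execution_time");') 2>/dev/null || true )

# Smallest explicit fastcgi_read_timeout anywhere under /etc/nginx
nginx_t=$(grep -rhoE 'fastcgi_read_timeout[[:space:]]+[0-9]+' /etc/nginx 2>/dev/null \
    | awk '{print $2}' | sort -n | head -n1)

echo "PHP    max_execution_time:    ${php_t:-unknown (php not on PATH)}"
echo "Nginx  fastcgi_read_timeout:  ${nginx_t:-60 (default, nothing set)}"
```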

    This tells you exactly which layer is killing the request. Compare your local dev config against production — often the mismatch is the missing fastcgi_read_timeout that production has but your Docker setup doesn’t.

    Rule of thumb: every layer’s timeout should be ≥ the layer below it. If PHP allows 300s, Nginx must allow at least 300s. If Nginx allows 300s, the CDN must allow at least 300s. One weak link and the whole chain breaks.

  • The Git Staging Trap: When Your Commit References Code That Doesn’t Exist Yet

    You make a change in Client.php that calls a new method getRemainingTtl(). You stage Client.php, write a clean commit message, hit commit. Everything looks fine.

    Except getRemainingTtl() lives in AuthSession.php — and you forgot to stage that file.

    The Problem

    When you selectively stage files with git add, Git doesn’t check whether your staged code actually works together. It just commits whatever’s in the staging area. If File A calls a method defined in File B, and you only stage File A, your commit is broken — even though your working directory is fine.

    git add src/Client.php
    git commit -m "Add cache TTL awareness to client"
    # AuthSession.php with getRemainingTtl() is NOT in this commit

    The result is a commit where Client.php references a method that doesn’t exist yet. If someone checks out this specific commit — or if CI runs against it — it breaks.

    Why It Happens

    Selective staging is a good practice. Small, focused commits make history readable. But the trap is that your working directory always has both files, so you never notice the gap. Your editor doesn’t complain. Your local tests pass. Everything works — until it doesn’t.

    The Fix: Review the Diff Before Committing

    Always check what you’re actually committing:

    # See exactly what's staged
    git diff --cached
    
    # Or see the file list
    git diff --cached --name-only

    When you see Client.php calling $this->session->getRemainingTtl(), ask yourself: “Is the file that defines this method also staged?”

    A Better Habit

    Before committing, scan the staged diff for:

    • New method calls — is the definition staged too?
    • New imports/use statements — is the imported class staged?
    • New interface implementations — is the interface file staged?
    • Constructor changes — are the new dependencies staged?

    If you catch it before pushing, it’s a 5-second fix: git add AuthSession.php && git commit --amend. If you catch it after CI fails, it’s a new commit plus an embarrassing red build.

    Selective staging is powerful, but Git won’t hold your hand. Review the diff, not just the file list.
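
    If you want Git to enforce the habit, a pre-commit hook can surface the gap automatically. A sketch (a heuristic only: it flags any tracked file with unstaged edits, which is noisy if you stage selectively on purpose):

```shell
#!/bin/bash
# .git/hooks/pre-commit (sketch): warn when tracked files still carry unstaged
# edits, so a half-staged change never slips through unnoticed.
check_fully_staged() {
    local unstaged
    unstaged=$(git diff --name-only)    # tracked files modified but not staged
    if [ -n "$unstaged" ]; then
        echo "Unstaged changes in tracked files:" >&2
        echo "$unstaged" | sed 's/^/  /' >&2
        return 1
    fi
}
# In the hook body: check_fully_staged || exit 1
# Bypass when selective staging is intentional: git commit --no-verify
```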

  • Feature Branch Subdomains: Every PR Gets Its Own URL

    Staging environments are great until you have three developers all waiting to test on the same one. Feature branch subdomains solve this: every branch gets its own isolated URL like feature-auth-refactor.staging.example.com.

    How It Works

    The setup has three parts:

    1. Wildcard DNS — Point *.staging.example.com to your staging server
    2. Wildcard SSL — One certificate covers all subdomains
    3. Dynamic Nginx config — Route each subdomain to the right container

    The DNS

    Add a single wildcard A record:

    *.staging.example.com  A  203.0.113.50

    Every subdomain now resolves to your staging server. No DNS changes needed per branch.

    The SSL Certificate

    Use Let’s Encrypt with DNS validation for wildcard certs:

    certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d "*.staging.example.com" \
      -d "staging.example.com"

    The Nginx Config

    Extract the subdomain and proxy to the matching container:

    server {
        listen 443 ssl;
        server_name ~^(?<branch>.+)\.staging\.example\.com$;
    
        ssl_certificate     /etc/letsencrypt/live/staging.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/staging.example.com/privkey.pem;
    
        location / {
            resolver 127.0.0.11 valid=10s;
            proxy_pass http://$branch:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    The regex capture (?<branch>.+) extracts the subdomain. If your CI names Docker containers after the branch slug, Nginx routes directly to them.

    The CI Pipeline

    In your CI config, deploy each branch as a named container:

    deploy_review:
      stage: deploy
      script:
        - docker compose -p "$CI_COMMIT_REF_SLUG" up -d
      environment:
        name: review/$CI_COMMIT_REF_NAME
        url: https://$CI_COMMIT_REF_SLUG.staging.example.com
        on_stop: stop_review
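
    The on_stop: stop_review line references a cleanup job that tears the environment down when the branch is deleted or the MR closes. A minimal sketch (job shape follows GitLab CI conventions; adjust names to your pipeline):

```yaml
stop_review:
  stage: deploy
  script:
    - docker compose -p "$CI_COMMIT_REF_SLUG" down --remove-orphans
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
```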

    Why This Beats Shared Staging

    With shared staging, you get merge conflicts, “don’t deploy, I’m testing” Slack messages, and broken environments that block everyone. With per-branch subdomains, each developer (and each PR reviewer) gets their own isolated environment. QA can test three features simultaneously. No coordination needed.

    The wildcard DNS + wildcard SSL + dynamic Nginx combo means zero manual setup per branch. Push a branch, CI deploys it, URL works automatically.