Category: DevOps

  • Multi-Method Extension Verification for Debugging

    When debugging missing PHP extension errors in Docker, use multiple verification methods to confirm what’s actually installed vs what the application can see.

    Method 1: List All Loaded Modules

    docker exec my-app php -m
    

    Method 2: Runtime Check via PHP Code

    docker exec my-app php -r "echo extension_loaded('redis') ? 'YES' : 'NO';"
    

    Method 3: Check php.ini Configuration

    docker exec my-app php --ini
    # Shows which ini files are loaded
    

    Method 4: Test Actual API Availability

    docker exec my-app php -r "var_dump(class_exists('Redis'));"
    # phpredis exposes a Redis class, not redis_* functions
    

    Why Multiple Methods?

    • php -m shows modules PHP knows about
    • extension_loaded() checks if the extension is active in the current context
    • Function checks verify the extension’s API is actually usable

    In debugging sessions, cross-reference all four methods to isolate whether the issue is installation, configuration, or application-level detection.

    Pro tip: If php -m shows the extension but extension_loaded() returns false, check your php.ini configuration and verify the extension is enabled for the SAPI you’re using (CLI vs FPM).
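These spot checks can be folded into a single guard script. A sketch that compares a required-extension list against `php -m`-style output; the module list is inlined here for illustration, where in a real container you would capture it with `docker exec my-app php -m`:

```shell
# Required extensions for this app (example list)
required="pdo_pgsql redis opcache"

# Stand-in for: modules=$(docker exec my-app php -m)
modules="Core
pdo_pgsql
opcache"

for ext in $required; do
    # grep -i: extension names are case-insensitive; -x: whole-line match
    if ! printf '%s\n' "$modules" | grep -qix "$ext"; then
        echo "MISSING: $ext"
    fi
done
```

Wire the real `docker exec` output in and the loop becomes a CI gate: any MISSING line means the image was built without a required extension.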

  • Docker PHP Extensions: Build-Time vs Runtime Installation

    When working with PHP in Docker, you have two choices for installing extensions: build them into your Dockerfile (permanent) or install them at runtime via setup scripts (temporary). Build-time installation is the recommended approach.

    Build-Time (Dockerfile)

    FROM php:8.2-fpm
    
    # System libraries needed by the extensions below
    RUN apt-get update && apt-get install -y libpq-dev libpng-dev \
        && rm -rf /var/lib/apt/lists/*

    # Extensions bundled with PHP
    RUN docker-php-ext-install pdo_pgsql gd opcache

    # redis is a PECL extension, not bundled with PHP
    RUN pecl install redis && docker-php-ext-enable redis
    

    Runtime (setup script)

    #!/bin/bash
    docker exec my-app pecl install redis
    docker exec my-app docker-php-ext-enable redis
    docker restart my-app
    

    The Problem with Runtime Installation

    You must run your setup script after every docker-compose up. If you forget, your application breaks with “extension not found” errors.

    Build-time installation ensures extensions are always present when the container starts. No manual intervention required.

    Real-World Impact

    If your CMS shows “missing database extension” errors after container restarts, check whether the extension is in your Dockerfile’s RUN statement or only installed via post-startup scripts.

    Rule of thumb: If it needs to exist every time the container runs, it belongs in your Dockerfile.
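A complementary guard: fail the container's health state when a required extension is missing, so a bad image is caught at startup rather than at the first request. A Dockerfile sketch (the redis check is an example):

```dockerfile
# Mark the container unhealthy if the redis extension is not loaded
HEALTHCHECK --interval=30s --timeout=5s \
    CMD php -r "exit(extension_loaded('redis') ? 0 : 1);"
```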

  • Nginx Config for Laravel + WordPress Hybrid Apps

    Running Laravel as an API backend alongside WordPress on the same server? Here’s how to configure Nginx to route /api/* requests to Laravel while serving everything else through WordPress.

    This pattern is useful when you want Laravel’s powerful API capabilities but need WordPress for content management.

    The Nginx Configuration

    server {
        listen 80;
        server_name app.example.com;
        
        # Default to WordPress
        root /var/www/wordpress;
        index index.php index.html;
    
        # Route /api/* to Laravel
        location ~ ^/api {
            root /var/www/laravel/public;
        # Fall back to a URI that still begins with /api, so the nested
        # PHP block below (not the WordPress handler) picks it up
        try_files $uri $uri/ /api/index.php?$query_string;
            
            location ~ \.php$ {
                fastcgi_pass php:9000;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                fastcgi_param SCRIPT_NAME /index.php;
            }
        }
    
        # WordPress permalinks
        location / {
            try_files $uri $uri/ /index.php?$args;
        }
    
        # PHP handler for WordPress
        location ~ \.php$ {
            fastcgi_pass php:9000;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }
    

    How It Works

    The key is location priority. Nginx processes locations in this order:

    1. Exact matches (=)
    2. Prefix matches with ^~ (longest wins and stops the search)
    3. Regex matches (~ and ~*), tested in order of appearance; first match wins
    4. The longest remaining prefix match

    The ~ ^/api regex catches API routes first, switches the root to Laravel’s public directory, and passes PHP requests to Laravel’s front controller.

    Everything else falls through to the default root (WordPress) and uses WordPress’s permalink handling.

    Why This Pattern?

    You might need this when:

    • Migrating from WordPress to Laravel incrementally
    • Building a mobile app that needs clean REST APIs but wants to keep WordPress for the marketing site
    • Your team knows WordPress for content but prefers Laravel for backend logic

    The hybrid setup lets each framework do what it does best without migration pressure.

    Gotcha: Don’t forget to change the root directive inside the /api location block. If you only set try_files without changing root, Nginx will look for Laravel files in the WordPress directory. Also keep the try_files fallback under /api (e.g. /api/index.php) so the internal redirect re-enters the API location instead of falling through to the WordPress PHP handler.
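If the regex-ordering subtleties feel fragile, a `^~` prefix match is an alternative sketch (same upstream and paths as above); `^~` beats any later regex location, so the routing no longer depends on where the block sits in the file:

```nginx
location ^~ /api {
    root /var/www/laravel/public;
    try_files $uri $uri/ /api/index.php?$query_string;

    location ~ \.php$ {
        fastcgi_pass php:9000;
        include fastcgi_params;
        # Always hand the request to Laravel's front controller
        fastcgi_param SCRIPT_FILENAME $document_root/index.php;
        fastcgi_param SCRIPT_NAME /index.php;
    }
}
```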

  • Match Your Timeout Chain: Nginx → PHP-FPM → PHP

    You’ve bumped max_execution_time to 300 seconds in PHP, but your app still throws 504 Gateway Timeout after about a minute. Sound familiar?

    The problem isn’t PHP. It’s the timeout chain — every layer between the browser and your code has its own timeout, and they all need to agree.

    The typical Docker stack

    Browser → Nginx (reverse proxy) → PHP-FPM → PHP script

    Each layer has a default timeout:

    Layer      Setting                      Default
    Nginx      fastcgi_read_timeout         60s
    PHP-FPM    request_terminate_timeout    0 (unlimited)
    PHP        max_execution_time           30s

    If you only change max_execution_time to 300s, PHP is happy to run that long — but Nginx kills the connection at 60 seconds because nobody told it to wait longer. You get a 504, PHP keeps running in the background (wasting resources), and your logs show no PHP errors because PHP didn’t fail.

    The fix: align the entire chain

    php.ini overrides:

    max_execution_time = 300
    max_input_time = 300
    memory_limit = 512M

    Nginx site config (inside your location ~ \.php$ block):

    location ~ \.php$ {
        fastcgi_pass php:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    
        # Match PHP's max_execution_time
        fastcgi_read_timeout 300s;
        fastcgi_send_timeout 300s;
        fastcgi_connect_timeout 60s;
    
        # Prevent 504 on large responses
        fastcgi_buffers 8 16k;
        fastcgi_buffer_size 32k;
        fastcgi_max_temp_file_size 0;
    }
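The table above also lists request_terminate_timeout, but the fix so far only touches php.ini and Nginx. If your FPM pool sets a hard kill timer, align it too; a pool-config sketch (the path varies by distro and image):

```ini
; e.g. /usr/local/etc/php-fpm.d/www.conf in the official php-fpm images
; 0 = unlimited. If you set a value, keep it >= max_execution_time,
; otherwise FPM kills the worker before PHP's own limit ever fires.
request_terminate_timeout = 300s
```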

    If you have a CDN or load balancer in front (Cloudflare, AWS ALB), add that to the chain too:

    CDN (100s) → Nginx (300s) → PHP-FPM (0) → PHP (300s)
     ↑ The smallest timeout in the chain (here the CDN’s 100s) is the actual limit

    The debugging trick

    When you get a 504, test from the inside out:

    1. Hit PHP-FPM directly (bypass Nginx): Does the request complete?
    2. Hit Nginx locally (bypass CDN): Does it time out?
    3. Hit the public URL: Does the CDN add its own timeout?

    This tells you exactly which layer is killing the request. Compare your local dev config against production — often the mismatch is the missing fastcgi_read_timeout that production has but your Docker setup doesn’t.

    Rule of thumb: every layer’s timeout should be ≥ the layer below it. If PHP allows 300s, Nginx must allow at least 300s. If Nginx allows 300s, the CDN must allow at least 300s. One weak link and the whole chain breaks.

  • The Git Staging Trap: When Your Commit References Code That Doesn’t Exist Yet

    You make a change in Client.php that calls a new method getRemainingTtl(). You stage Client.php, write a clean commit message, hit commit. Everything looks fine.

    Except getRemainingTtl() lives in AuthSession.php — and you forgot to stage that file.

    The Problem

    When you selectively stage files with git add, Git doesn’t check whether your staged code actually works together. It just commits whatever’s in the staging area. If File A calls a method defined in File B, and you only stage File A, your commit is broken — even though your working directory is fine.

    git add src/Client.php
    git commit -m "Add cache TTL awareness to client"
    # AuthSession.php with getRemainingTtl() is NOT in this commit

    The result is a commit where Client.php references a method that doesn’t exist yet. If someone checks out this specific commit, or if CI runs against it, it breaks.

    Why It Happens

    Selective staging is a good practice. Small, focused commits make history readable. But the trap is that your working directory always has both files, so you never notice the gap. Your editor doesn’t complain. Your local tests pass. Everything works — until it doesn’t.
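The gap is easy to reproduce in a throwaway repo (file and method names taken from the example above):

```shell
# Build a scratch repo where only one of two related files is staged
repo=$(mktemp -d)
cd "$repo" && git init -q
git config user.email demo@example.com
git config user.name demo

echo 'calls getRemainingTtl()'   > Client.php
echo 'defines getRemainingTtl()' > AuthSession.php

git add Client.php                 # AuthSession.php forgotten
git commit -qm "Add cache TTL awareness to client"

git ls-tree --name-only HEAD       # the commit contains only Client.php
```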

    The Fix: Review the Diff Before Committing

    Always check what you’re actually committing:

    # See exactly what's staged
    git diff --cached
    
    # Or see the file list
    git diff --cached --name-only

    When you see Client.php calling $this->session->getRemainingTtl(), ask yourself: “Is the file that defines this method also staged?”

    A Better Habit

    Before committing, scan the staged diff for:

    • New method calls — is the definition staged too?
    • New imports/use statements — is the imported class staged?
    • New interface implementations — is the interface file staged?
    • Constructor changes — are the new dependencies staged?

    If you catch it before pushing, it’s a 5-second fix: git add AuthSession.php && git commit --amend. If you catch it after CI fails, it’s a new commit plus an embarrassing red build.

    Selective staging is powerful, but Git won’t hold your hand. Review the diff, not just the file list.

  • Feature Branch Subdomains: Every PR Gets Its Own URL

    Staging environments are great until you have three developers all waiting to test on the same one. Feature branch subdomains solve this: every branch gets its own isolated URL like feature-auth-refactor.staging.example.com.

    How It Works

    The setup has three parts:

    1. Wildcard DNS — Point *.staging.example.com to your staging server
    2. Wildcard SSL — One certificate covers all subdomains
    3. Dynamic Nginx config — Route each subdomain to the right container

    The DNS

    Add a single wildcard A record:

    *.staging.example.com  A  203.0.113.50

    Every subdomain now resolves to your staging server. No DNS changes needed per branch.

    The SSL Certificate

    Use Let’s Encrypt with DNS validation for wildcard certs:

    certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d "*.staging.example.com" \
      -d "staging.example.com"

    The Nginx Config

    Extract the subdomain and proxy to the matching container:

    server {
        listen 443 ssl;
        server_name ~^(?<branch>.+)\.staging\.example\.com$;
    
        ssl_certificate     /etc/letsencrypt/live/staging.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/staging.example.com/privkey.pem;
    
        location / {
            resolver 127.0.0.11 valid=10s;
            proxy_pass http://$branch:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    The regex capture (?<branch>.+) extracts the subdomain. If your CI names Docker containers after the branch slug, Nginx routes directly to them.

    The CI Pipeline

    In your CI config, deploy each branch as a named container:

    deploy_review:
      stage: deploy
      script:
        - docker compose -p "$CI_COMMIT_REF_SLUG" up -d
      environment:
        name: review/$CI_COMMIT_REF_NAME
        url: https://$CI_COMMIT_REF_SLUG.staging.example.com
        on_stop: stop_review
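The on_stop: stop_review reference above needs a matching job, or GitLab will flag the environment config as invalid. A minimal sketch (job and variable names follow the snippet above):

```yaml
stop_review:
  stage: deploy
  when: manual
  variables:
    GIT_STRATEGY: none        # the branch may already be deleted
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  script:
    - docker compose -p "$CI_COMMIT_REF_SLUG" down --volumes
```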

    Why This Beats Shared Staging

    With shared staging, you get merge conflicts, “don’t deploy, I’m testing” Slack messages, and broken environments that block everyone. With per-branch subdomains, each developer (and each PR reviewer) gets their own isolated environment. QA can test three features simultaneously. No coordination needed.

    The wildcard DNS + wildcard SSL + dynamic Nginx combo means zero manual setup per branch. Push a branch, CI deploys it, URL works automatically.

  • Make Shell Scripts Username-Portable with sed

    You write a shell script that works perfectly on your machine. You share it with the team. It breaks immediately because your username is hardcoded in every path.

    #!/bin/bash
    source /home/jake/.config/app/settings.sh
    cp /home/jake/templates/nginx.conf /etc/nginx/sites-available/

    Classic. The fix isn’t “use variables from the start” (though you should). The fix for right now is a one-liner that makes any script portable after the fact.

    The sed One-Liner

    sed -i "s|/home/jake|/home/$(whoami)|g" setup.sh

    That’s it. Every instance of /home/jake becomes /home/<current_user>. The | delimiter avoids escaping the forward slashes in paths (using / as a delimiter with paths containing / is a nightmare).

    Make It Part of Your Install

    If you distribute scripts that reference paths, add a self-patching step at the top:

    #!/bin/bash
    SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
    CURRENT_USER="$(whoami)"
    
    # Patch all config files for the current user
    for f in "$SCRIPT_DIR"/configs/*.conf; do
        sed -i "s|/home/[a-zA-Z0-9_-]*/|/home/$CURRENT_USER/|g" "$f"
    done

    The regex /home/[a-zA-Z0-9_-]*/ matches any username in a home path, not just one specific name. Way more robust than hardcoding the original username.
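A self-contained way to try the pattern before pointing it at real configs (uses a throwaway temp file):

```shell
tmp=$(mktemp)
printf 'source /home/jake/.config/app/settings.sh\n' > "$tmp"

# Rewrite any /home/<user>/ prefix to the current user's home
sed -i "s|/home/[a-zA-Z0-9_-]*/|/home/$(whoami)/|g" "$tmp"

cat "$tmp"   # the path now points at the current user's home
rm -f "$tmp"
```

Note that GNU sed is assumed here; BSD/macOS sed needs `sed -i ''` for in-place edits.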

    The Better Long-Term Fix

    Obviously, the real solution is to never hardcode paths in the first place:

    #!/bin/bash
    HOME_DIR="${HOME:-/home/$(whoami)}"
    CONFIG_DIR="${XDG_CONFIG_HOME:-$HOME_DIR/.config}"
    
    source "$CONFIG_DIR/app/settings.sh"

    Use $HOME, $USER, and $XDG_CONFIG_HOME from the start. But when you’re retrofitting an existing script or inheriting someone else’s work, sed with a regex pattern gets you portable in seconds.

  • PR Descriptions: Describe the Final State, Not the Journey

    Stop writing PR descriptions that read like diary entries.

    The Problem

    Most PR descriptions describe the journey. “First I tried X, then I realized Y, then I refactored Z, and finally I settled on W.” That’s useful for a blog post. It’s terrible for a code review.

    The reviewer doesn’t need your autobiography. They need to understand what the code does right now and why.

    What to Write Instead

    A good PR description answers three questions:

    1. What does this change? “Replaces the CSV export with a streaming download that handles 100K+ rows without timing out.”
    2. Why? “Users with large datasets were hitting the 30s gateway timeout.”
    3. Anything non-obvious? “The chunked response means we can’t set Content-Length upfront, so download progress bars won’t show a percentage.”

    That’s it. Three short sections. The reviewer knows exactly what to look for.

    But What About the Investigation?

    That’s what commit history is for. Your commits capture the evolution: “try batch approach,” “switch to streaming,” “fix memory leak in chunk callback.” Anyone who wants the full story can read the log.

    The PR description is the summary. The commits are the chapters. Don’t put the chapters in the summary.

    The Litmus Test

    Read your PR description six months from now. Will you understand the change in 30 seconds? If you have to re-read your own journey narrative to figure out what the code actually does, you wrote the wrong thing.

  • Docker Build-Time vs Runtime: The Post-Install Hook Pattern

    Here’s a pattern I use in nearly every Docker project: create scripts at build time, execute them at runtime.

    The Problem

    Some things can’t happen during docker build. Maybe you need environment variables that only exist at runtime. Maybe you need to run migrations against a database that isn’t available yet. Maybe you need to generate config files from templates.

    The instinct is to shove everything into the entrypoint script. But then your entrypoint becomes a 200-line monster that’s impossible to debug.

    The Pattern

    Split it into two phases:

    # Dockerfile (build time): COPY or CREATE the scripts
    COPY docker/post-install/*.sh /docker-entrypoint.d/
    RUN chmod +x /docker-entrypoint.d/*.sh

    #!/bin/bash
    # entrypoint.sh (runtime): execute the hooks in order
    for f in /docker-entrypoint.d/*.sh; do
        echo "Running post-install hook: $f"
        bash "$f"
    done

    exec "$@"

    Why This Works

    Each hook is a single-purpose script. 01-generate-config.sh renders templates from env vars. 02-run-migrations.sh handles database setup. 03-create-cache-dirs.sh ensures directories exist with correct permissions.

    You can test each hook independently. You can add new ones without touching the entrypoint. And if one fails, the error message tells you exactly which hook broke.
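The runtime half of the pattern can be exercised without Docker at all; a self-contained sketch using a throwaway hook directory in place of /docker-entrypoint.d/:

```shell
# Create two numbered demo hooks in a temp directory
hookdir=$(mktemp -d)
printf 'echo "configured"\n' > "$hookdir/01-generate-config.sh"
printf 'echo "migrated"\n'   > "$hookdir/02-run-migrations.sh"

# Same loop the entrypoint runs; the glob sorts 01 before 02
for f in "$hookdir"/*.sh; do
    echo "Running post-install hook: $f"
    bash "$f"
done
rm -rf "$hookdir"
```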

    The Key Insight

    Build time is for things that are static — installing packages, copying files, compiling assets. Runtime is for things that depend on the environment — config generation, service discovery, data setup.

    The hook directory pattern bridges the two. Your Dockerfile prepares the hooks. Your entrypoint runs them. Clean separation, easy debugging.

    If you’ve used the official Nginx or PostgreSQL Docker images, you’ve already seen this pattern — they use /docker-entrypoint-initdb.d/ for the exact same reason.

  • Ollama num_ctx: Why Setting It Higher Than the Model Supports Backfires

    When running local LLMs with Ollama, you can set num_ctx to control the context window size. But there’s a ceiling you might not expect.

    The Gotcha

    Every model has an architectural limit baked into its training. Setting num_ctx higher than that limit doesn’t give you more context — it gives you garbage output or silent truncation:

    # This model was trained with 8K context
    ollama run llama3
    >>> /set parameter num_ctx 32768
    # Result: degraded output beyond 8K, not extended context

    The num_ctx parameter allocates memory for the KV cache, but the model’s positional embeddings only know how to handle positions it saw during training.

    How to Check the Real Limit

    # Check the model card; look for the "context length" line
    ollama show llama3
    
    # The Modelfile lists num_ctx only if it was set explicitly
    ollama show llama3 --modelfile | grep -i num_ctx

    The model card or GGUF metadata will tell you the trained context length. That’s your actual ceiling.

    What About YaRN and RoPE Scaling?

    Some models support extended context through YaRN (Yet another RoPE extensioN) or other RoPE scaling methods. These are baked into the model weights during fine-tuning — you can’t just enable them with a flag.

    If a model advertises 128K context, it was trained or fine-tuned with RoPE scaling to handle that. If it advertises 8K, setting num_ctx=128000 won’t magically give you 128K.

    The Rule

    Match num_ctx to what the model actually supports. Going lower saves memory. Going higher wastes memory and produces worse output. Check the model card, not your wishful thinking.
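To pin the value per model rather than per session, a Modelfile sketch (assumes an 8K-trained base; the names and the limit are illustrative):

```text
FROM llama3
# Match the trained context length from the model card
PARAMETER num_ctx 8192
```

Build it once with `ollama create` and every run of the derived model gets the right window by default.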