Category: DevOps

  • Run the Command, Then Build What It Needs

    Run the Command, Then Build What It Needs

    When integrating with a complex third-party API, don’t try to architect everything upfront. Start by running the integration point and let the errors guide you.

    The Anti-Pattern

    You read the API docs (if they exist). You design your data models. You write adapters, mappers, and DTOs. Then you finally make your first API call and… nothing works as documented.

    The Better Way

    Run the command first. Let it fail. Each error tells you exactly what to build next:

    # Step 1: Try the API call
    php artisan integration:sync
    
    # Error: "Class ApiClient not found"
    # → Build the client
    
    # Error: "Missing authentication"  
    # → Add the auth flow
    
    # Error: "Cannot map response to DTO"
    # → Build the DTO from the actual response

    Why This Works

    Errors are free documentation. Each one tells you the next thing to build — nothing more. You avoid over-engineering, and every line of code you write solves an actual problem.

    Takeaway

    Stop planning. Start running. Let errors drive your implementation order. You’ll ship faster and build only what you actually need.

  • Google Drive Mounted ≠ Google Drive Synced

    Google Drive Mounted ≠ Google Drive Synced

    Google Drive is mounted. The folder exists. You open a file. It’s empty or throws an error. Welcome to the cloud storage race condition.

    Mounted ≠ Synced

    Cloud storage clients (Google Drive, OneDrive, Dropbox) mount folders immediately on startup. But syncing the actual files? That happens in the background. The folder exists, but the files are stubs waiting to download.

    If your startup script tries to read those files before sync completes, you’ll get random failures.

    The Fix: Wait for Sync

    Check if files are actually synced before using them:

    #!/bin/bash
    FILE="/path/to/cloud-storage/config.json"
    
    while [ ! -s "$FILE" ]; do
        echo "Waiting for $FILE to sync..."
        sleep 2
    done
    
    echo "File synced, proceeding..."

    The -s test checks that the file exists and has content, so a 0-byte placeholder stub won't pass.
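
    One caution: if the client never finishes syncing, that loop spins forever. A variant of the same script with a timeout (60 seconds here is an arbitrary budget, adjust to your startup needs):

    #!/bin/bash
    FILE="/path/to/cloud-storage/config.json"
    TIMEOUT=60   # give up after this many seconds
    ELAPSED=0
    
    while [ ! -s "$FILE" ]; do
        if [ "$ELAPSED" -ge "$TIMEOUT" ]; then
            echo "Timed out waiting for $FILE to sync" >&2
            exit 1
        fi
        echo "Waiting for $FILE to sync..."
        sleep 2
        ELAPSED=$((ELAPSED + 2))
    done
    
    echo "File synced, proceeding..."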

    Takeaway

    Cloud storage mount operations are instant lies. Always verify files are actually synced before using them in startup scripts. Check for file size, not just existence.

  • Why WSL boot Command Doesn’t Work When systemd=true

    Why WSL boot Command Doesn’t Work When systemd=true

    Enabled systemd=true in WSL2 and suddenly your boot commands stopped working? Yeah, that’s by design. The boot command in wsl.conf doesn’t play nice with systemd.

    Why It Breaks

    When you enable systemd in WSL2, systemd becomes PID 1. The traditional boot command runs before systemd starts, but systemd then remounts filesystems and resets the environment as it boots. Your boot script did run; systemd just wiped out its effects.

    The Solution: Use systemd Services

    Stop fighting systemd. Use it instead. Create a proper systemd service:

    # /etc/systemd/system/my-startup.service
    [Unit]
    Description=My WSL Startup Script
    After=network.target
    
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/my-startup.sh
    RemainAfterExit=yes
    
    [Install]
    WantedBy=multi-user.target

    Enable it: sudo systemctl enable my-startup.service
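
    The unit assumes /usr/local/bin/my-startup.sh exists and is executable (chmod +x). A minimal sketch of what that script might look like, with a log line so you can confirm it ran (the log path is just an example):

    #!/bin/bash
    # /usr/local/bin/my-startup.sh
    # Whatever your old wsl.conf boot command did goes here.
    set -euo pipefail
    echo "WSL startup ran at $(date)" >> /var/log/my-startup.log

    After restarting the WSL instance, check it with systemctl status my-startup.service.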

    Takeaway

    When systemd=true, forget the boot command exists. Create systemd services like you would on any Linux system. It’s more work upfront, but it actually works.

  • Git Rebase: Drop Commits But Keep the Files

    Git Rebase: Drop Commits But Keep the Files

    Ever need to clean up your Git history but keep the files locally? Here’s a trick I used recently.

    I had a commit that added some temp markdown docs — useful for my WIP but not ready for the branch history. I wanted those files to stay local (untracked) while removing the commit entirely.

    The solution: interactive rebase with the drop command:

    git rebase -i <commit-before-the-one-you-want-to-drop>
    # In the editor, change 'pick' to 'drop' for that commit
    # Save and exit
    

    The commit disappears from history. One catch: the rebase also removes those files from your working tree, so restore them from the dropped commit (grab its hash from git reflog, or from git log before you rebase); they come back as untracked files:

    git show <dropped-commit-hash>:path/to/file > path/to/file
    

    Be careful: this rewrites history. Use git push --force-with-lease when ready. Always stash current work first as a safety net.
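
    Putting it together, a sketch of the full sequence (docs/wip/notes.md stands in for whatever temp file you want to keep):

    # Note the hash of the commit you're about to drop
    git log --oneline
    
    # Drop it from history
    git rebase -i <commit-before-the-one-you-want-to-drop>
    
    # Bring the file back as an untracked local copy
    git show <dropped-commit-hash>:docs/wip/notes.md > docs/wip/notes.md
    
    # Publish the rewritten branch
    git push --force-with-lease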

  • Debug Nginx Proxies Layer by Layer

    Debug Nginx Proxies Layer by Layer

    When your Nginx reverse proxy isn’t working, test in this exact order:

    Step 1 — Test from the server itself:

    curl http://localhost/api

    Step 2 — Test via server IP:

    curl -k https://SERVER_IP/api

    Step 3 — Test via domain:

    curl https://yourdomain.com/api

    Why this order matters:

    • Step 1 fails → Nginx config is broken
    • Step 2 fails → SSL/TLS setup issue on the server (-k skips certificate validation, so this isolates the HTTPS listener itself)
    • Step 3 fails → DNS, CDN, or firewall issue

    I recently debugged a proxy where Step 1 returned perfect responses but Step 3 returned just “OK” (2 bytes). That immediately told me the problem wasn’t Nginx — it was the CDN caching a broken response from an earlier deployment.

    Each layer adds complexity. Test each layer separately. When something breaks, you instantly know which layer to investigate instead of guessing.
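
    If you run these checks often, a small script keeps the order honest (SERVER_IP and DOMAIN below are placeholders, swap in your own):

    #!/bin/bash
    # Test each proxy layer in order and stop at the first failure.
    set -u
    SERVER_IP="203.0.113.10"   # placeholder
    DOMAIN="yourdomain.com"    # placeholder
    
    echo "Layer 1: Nginx on localhost"
    curl -fsS "http://localhost/api" > /dev/null || { echo "FAIL: Nginx config"; exit 1; }
    
    echo "Layer 2: TLS on the server IP"
    curl -fsSk "https://$SERVER_IP/api" > /dev/null || { echo "FAIL: TLS setup"; exit 1; }
    
    echo "Layer 3: DNS / CDN / firewall"
    curl -fsS "https://$DOMAIN/api" > /dev/null || { echo "FAIL: DNS, CDN, or firewall"; exit 1; }
    
    echo "All layers OK"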

  • Enable proxy_ssl_server_name for Nginx HTTPS Backends

    Enable proxy_ssl_server_name for Nginx HTTPS Backends

    Proxying to an HTTPS backend in Nginx? Don’t forget this one directive:

    location / {
        proxy_pass https://backend.example.com;
        proxy_ssl_server_name on;  # This one
    }

    Why it matters: modern web servers host multiple SSL sites on one IP using SNI (Server Name Indication). When your Nginx proxy connects to https://backend.example.com, it needs to tell the backend “I want the cert for backend.example.com.”

    Without proxy_ssl_server_name on, Nginx doesn’t send SNI during the TLS handshake (SNI is a TLS extension, not an HTTP header). The backend can’t tell which SSL cert to serve, and you get connection failures or wrong-cert errors.
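
    You can see the difference from the command line with openssl s_client (backend.example.com is the example host from above; -noservername needs OpenSSL 1.1.1+):

    # With SNI: ask explicitly for backend.example.com's certificate
    openssl s_client -connect backend.example.com:443 -servername backend.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject
    
    # Without SNI: you'll often get the server's default certificate instead
    openssl s_client -connect backend.example.com:443 -noservername </dev/null 2>/dev/null | openssl x509 -noout -subject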

    When you need it:

    • Proxying to Cloudflare-backed sites
    • Proxying to shared hosting
    • Any backend with multiple domains on one IP

    Think of it like calling a company with multiple departments — you need to tell the receptionist which one you want, not just dial the main number.

  • Docker /etc/hosts Only Accepts IPs, Not Hostnames

    Docker /etc/hosts Only Accepts IPs, Not Hostnames

    TIL you can’t do this in a Docker container:

    echo "proxy.example.com www.api.com" >> /etc/hosts

    I wanted to redirect www.api.com to proxy.example.com inside a container without hardcoding the proxy’s IP address.

    The problem: /etc/hosts format is strict — IP_ADDRESS hostname [hostname...]. You can’t use hostnames on the left side. Only IP addresses.

    What works:

    143.198.210.158 www.api.com

    What doesn’t:

    proxy.example.com www.api.com  # NOPE

    The workaround: use Docker’s extra_hosts in docker-compose.yaml:

    extra_hosts:
      - "www.api.com:143.198.210.158"

    Or resolve the hostname before writing to /etc/hosts:

    IP=$(getent hosts proxy.example.com | awk '{print $1}')
    echo "$IP www.api.com" >> /etc/hosts
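
    One caveat with that snippet: the lookup happens once, when the line runs. If you want it to resolve fresh on every container start, a sketch of doing it in an entrypoint (entrypoint.sh is a hypothetical name):

    #!/bin/sh
    # entrypoint.sh: resolve the proxy at startup, map www.api.com to it,
    # then hand off to the container's real command.
    set -e
    IP=$(getent hosts proxy.example.com | awk '{print $1}')
    echo "$IP www.api.com" >> /etc/hosts
    exec "$@"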

    /etc/hosts is old-school. It predates DNS. It doesn’t do hostname resolution — it IS the resolution.

  • Branch From the Right Base When Stacking PRs

    Branch From the Right Base When Stacking PRs

    When you have a feature that spans multiple PRs — say, PR #1 builds the core, PR #2 adds an enhancement on top — you need to stack them properly.

    The mistake I see developers make: they branch PR #2 from master instead of from PR #1’s branch.

    # ❌ Wrong — branches from master, will conflict with PR #1
    git checkout master
    git checkout -b feature/enhancement
    
    # ✅ Right — branches from PR #1, builds on top of it
    git checkout feature/core-feature
    git checkout -b feature/enhancement

    And the PR target matters too:

    • PR #1 targets master (or main)
    • PR #2 targets feature/core-feature (PR #1’s branch), not master

    Once PR #1 is merged, you update PR #2’s target to master. GitHub and GitLab both handle this cleanly — the diff will shrink to just PR #2’s changes.
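
    If PR #1 was squash-merged (so its original commits never land verbatim on master), you’ll also want to rebase PR #2 onto the fresh master; a sketch:

    git fetch origin
    git checkout feature/enhancement
    # Replay only the enhancement commits on top of the updated master
    git rebase --onto origin/master feature/core-feature
    git push --force-with-lease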

    When to combine instead: If the PRs are small enough and touch the same files, sometimes it’s simpler to combine them into one branch. We do this when features are tightly coupled and reviewing them separately would lose context.

    # Combining two feature branches into one
    git checkout master
    git checkout -b feature/combined
    git merge feature/part-1 --no-ff
    git merge feature/part-2 --no-ff

    The goal is always the same: make the reviewer’s job easy. Small, focused PRs with clear lineage beat a single massive PR every time.