Blog

  • Git Rebase: Drop Commits But Keep the Files

    Ever need to clean up your Git history but keep the files locally? Here’s a trick I used recently.

    I had a commit that added some temp markdown docs — useful for my WIP but not ready for the branch history. I wanted those files to stay local (untracked) while removing the commit entirely.

    The solution: interactive rebase with the drop command:

    git rebase -i <commit-before-the-one-you-want-to-drop>
    # In the editor, change 'pick' to 'drop' for that commit
    # Save and exit
    

    The commit disappears from history, and the files it added are usually removed from the working tree along with it. They’re not lost, though: recover them from the dropped commit (git reflog will show its hash), and the restored copies come back as untracked files:

    git show <dropped-commit-hash>:path/to/file > path/to/file
    

    Be careful: this rewrites history. Use git push --force-with-lease when ready. Always stash current work first as a safety net.
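
    Putting it together, the safe sequence looks roughly like this (the hashes and paths are the same placeholders as above):

    git stash                                                      # park anything in flight before rewriting history
    git rebase -i <commit-before-the-one-you-want-to-drop>         # mark the commit as 'drop' in the editor
    git reflog                                                     # find the dropped commit's hash
    git show <dropped-commit-hash>:path/to/file > path/to/file     # restore the file, now untracked
    git stash pop
    git push --force-with-lease                                    # publish the rewritten branch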

  • When Old Code Fixes Are Actually Bugs

    Sometimes the best fix is deleting old fixes.

    While consolidating API client code, I found this “normalization” logic in production:

    $normalized = str_replace(
        ['SOAP-ENV:', 'xmlns:SOAP-ENV', 'xmlns:ns1'],
        ['soapenv:', 'xmlns:soapenv', 'xmlns'],
        $request
    );

    It looked intentional. Maybe the API was picky about namespace prefixes? Nope. It was breaking everything.

    Removing the normalization fixed server errors that had been happening silently. The API worked fine with standard SOAP envelopes — the str_replace was stripping namespace declarations and causing “undeclared prefix” errors on the server side.
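
    Here’s a minimal reproduction with a made-up envelope (the urn:example namespace and GetOrder operation are invented) showing why it broke:

    $request =
        '<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="urn:example">' .
        '<SOAP-ENV:Body><ns1:GetOrder/></SOAP-ENV:Body></SOAP-ENV:Envelope>';

    $normalized = str_replace(
        ['SOAP-ENV:', 'xmlns:SOAP-ENV', 'xmlns:ns1'],
        ['soapenv:', 'xmlns:soapenv', 'xmlns'],
        $request
    );

    // Result: <soapenv:Envelope xmlns:soapenv="..." xmlns="urn:example">
    //             <soapenv:Body><ns1:GetOrder/></soapenv:Body></soapenv:Envelope>
    // The ns1: prefix is still used on GetOrder, but its declaration was renamed away,
    // so the server rejects the envelope with an "undeclared prefix" error.
    echo $normalized;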

    This kind of code survives because: (1) it was probably copy-pasted from a StackOverflow answer, (2) maybe it worked once on a different API, (3) nobody questioned it because it was already there.

    When refactoring, actually test the old code path independently. Sometimes those “necessary” workarounds are just ancient bugs in disguise.

  • Debug Nginx Proxies Layer by Layer

    When your Nginx reverse proxy isn’t working, test in this exact order:

    Step 1 — Test from the server itself:

    curl http://localhost/api

    Step 2 — Test via server IP:

    curl -k https://SERVER_IP/api

    Step 3 — Test via domain:

    curl https://yourdomain.com/api

    Why this order matters:

    • Step 1 fails → Nginx config is broken
    • Step 2 fails → SSL/TLS configuration issue (the -k already skips cert validation)
    • Step 3 fails → DNS, CDN, or firewall issue

    I recently debugged a proxy where Step 1 returned perfect responses but Step 3 returned just “OK” (2 bytes). That immediately told me the problem wasn’t Nginx — it was the CDN caching a broken response from an earlier deployment.
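
    When the layers disagree like that, comparing status codes, response sizes, and cache headers usually narrows it down (header names vary by CDN, so adjust the grep pattern):

    # what the origin serves vs. what arrives through the CDN
    curl -s -o /dev/null -w 'localhost: %{http_code}, %{size_download} bytes\n' http://localhost/api
    curl -s -o /dev/null -w 'domain:    %{http_code}, %{size_download} bytes\n' https://yourdomain.com/api

    # look for caching evidence on the CDN-served response
    curl -sI https://yourdomain.com/api | grep -iE 'cache|age|cf-'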

    Each layer adds complexity. Test each layer separately. When something breaks, you instantly know which layer to investigate instead of guessing.

  • SoapClient Timeouts Don’t Work the Way You Think

    You’d think adding a timeout to PHP’s SoapClient would be straightforward. Maybe a constructor option like 'timeout' => 30?

    Nope. Welcome to PHP SOAP hell.

    The only way to set a request timeout on native SoapClient is ini_set():

    ini_set('default_socket_timeout', 30);
    $client = new SoapClient($wsdl, $options);

    The connection_timeout constructor option? That’s only for the initial TCP handshake, not the actual SOAP call. It won’t save you from slow API responses.

    But there’s a cleaner approach — extend SoapClient and override __doRequest() to use Guzzle:

    use GuzzleHttp\Client as GuzzleClient;

    class GuzzleSoapClient extends SoapClient
    {
        private GuzzleClient $guzzle;

        public function __construct($wsdl, array $options = [])
        {
            parent::__construct($wsdl, $options);
            $this->guzzle = new GuzzleClient([
                // request timeout in seconds; 'timeout' is our own option, not a native SoapClient one
                'timeout' => $options['timeout'] ?? 30,
            ]);
        }

        public function __doRequest(
            $request, $location, $action, $version, $oneWay = 0
        ): ?string {
            $response = $this->guzzle->post($location, [
                'body' => $request,
                'headers' => [
                    'Content-Type' => 'text/xml; charset=utf-8', // SOAP 1.1
                    'SOAPAction'   => $action,
                ],
            ]);

            // one-way calls don't expect a response body
            return $oneWay ? null : (string) $response->getBody();
        }
    }

    Drop-in replacement. Actual timeout support. No ini_set() hacks.
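
    Usage is the same as plain SoapClient, plus the timeout key the wrapper consumes (the WSDL URL and GetOrder operation below are placeholders):

    $client = new GuzzleSoapClient('https://api.example.com/service?wsdl', [
        'timeout' => 15,     // consumed by the wrapper, passed to Guzzle
        'trace'   => true,   // regular SoapClient options still apply
    ]);

    $result = $client->__soapCall('GetOrder', [['orderId' => 123]]);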

  • Enable proxy_ssl_server_name for Nginx HTTPS Backends

    Proxying to an HTTPS backend in Nginx? Don’t forget this one directive:

    location / {
        proxy_pass https://backend.example.com;
        proxy_ssl_server_name on;  # This one
    }

    Why it matters: modern web servers host multiple SSL sites on one IP using SNI (Server Name Indication). When your Nginx proxy connects to https://backend.example.com, it needs to tell the backend “I want the cert for backend.example.com.”

    Without proxy_ssl_server_name on, Nginx doesn’t send SNI during the TLS handshake. The backend can’t tell which SSL cert to serve, and you get connection failures or wrong-cert errors.

    When you need it:

    • Proxying to Cloudflare-backed sites
    • Proxying to shared hosting
    • Any backend with multiple domains on one IP
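
    In those setups, a fuller location block might look something like this (the backend hostname is illustrative; proxy_ssl_name and the Host header are optional extras, not replacements for proxy_ssl_server_name):

    location / {
        proxy_pass https://backend.example.com;
        proxy_ssl_server_name on;                    # send SNI during the TLS handshake
        proxy_ssl_name backend.example.com;          # pin the SNI / verification name explicitly
        proxy_set_header Host backend.example.com;   # shared hosts usually route by Host header too
    }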

    Think of it like calling a company with multiple departments — you need to tell the receptionist which one you want, not just dial the main number.

  • Docker /etc/hosts Only Accepts IPs, Not Hostnames

    TIL you can’t do this in a Docker container:

    echo "proxy.example.com www.api.com" >> /etc/hosts

    I wanted to redirect www.api.com to proxy.example.com inside a container without hardcoding the proxy’s IP address.

    The problem: /etc/hosts format is strict — IP_ADDRESS hostname [hostname...]. You can’t use hostnames on the left side. Only IP addresses.

    What works:

    143.198.210.158 www.api.com

    What doesn’t:

    proxy.example.com www.api.com  # NOPE

    The workaround: use Docker’s extra_hosts in docker-compose.yaml:

    extra_hosts:
      - "www.api.com:143.198.210.158"

    Or resolve the hostname before writing to /etc/hosts:

    IP=$(getent hosts proxy.example.com | awk '{print $1}')
    echo "$IP www.api.com" >> /etc/hosts

    /etc/hosts is old-school. It predates DNS. It doesn’t do hostname resolution — it IS the resolution.

  • Use Tinker --execute for Automation

    Ever tried to automate Laravel Tinker commands in a Docker container or CI pipeline? If you’ve used stdin piping with heredocs, you probably discovered it just… doesn’t work reliably.

    I spent way too long debugging why docker exec php artisan tinker with piped PHP commands would run silently and produce zero output. Turns out, Tinker’s interactive mode doesn’t play nice with stdin automation.

    The solution that actually works? The --execute flag.

    # ❌ This doesn't work reliably:
    docker exec my-app bash -c "php artisan tinker <<'EOF'
    \$user = App\Models\User::first();
    echo \$user->email;
    EOF"
    
    # ✅ This does:
    docker exec my-app php artisan tinker --execute="
    \$user = App\Models\User::first();
    echo \$user->email;
    "
    
    # Or for multi-line commands in bash scripts:
    docker exec my-app php artisan tinker --execute="$(cat <<'EOF'
    $user = App\Models\User::first();
    $user->email = '[email protected]';
    $user->save();
    echo 'Updated: ' . $user->email;
    EOF
    )"

    The --execute flag runs Tinker in non-interactive mode. It evaluates the PHP code, prints the output, and exits cleanly. No TTY required, no stdin gymnastics, no mystery silent failures.

    This is a lifesaver for:

    • CI/CD pipelines — seed data, run health checks, warm caches
    • Docker automation — scripts that need to interact with Laravel in containers
    • Cron jobs — quick data fixes without writing full artisan commands

    Pro tip: for complex multi-statement logic, you can still pass a full PHP script to --execute. Just wrap it in quotes and keep newlines intact with $(cat <<'EOF' ... EOF).

    Stop fighting with heredocs. Use --execute and move on with your life.

  • Use Named Parameters for Boolean Flags

    Quick quiz: what does processOrder($order, true, false, true, false, false) do?

    You have no idea. Neither do I. And neither will you in three months when you come back to debug this code.

    I used to think this was just “how PHP worked” — positional parameters, take it or leave it. Then PHP 8.0 dropped named parameters, and suddenly that cryptic boolean soup became self-documenting code.

    Here’s the before and after:

    // Before: positional boolean hell
    function processOrder(
        Order $order,
        bool $validateStock = true,
        bool $sendEmail = false,
        bool $applyDiscount = true,
        bool $updateInventory = false,
        bool $logActivity = false
    ) {
        // ...
    }
    
    // What does this even mean?
    processOrder($order, true, false, true, false, false);
    
    // After: PHP 8 named parameters
    processOrder(
        order: $order,
        validateStock: true,
        applyDiscount: true
    );
    
    // Or when you need to flip a flag deep in the parameter list:
    processOrder(
        order: $order,
        sendEmail: true,
        logActivity: true
    );

    The beauty of named parameters is that you skip the defaults you don’t need. No more passing null, null, null, true just to reach the parameter you actually want to change.
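
    For comparison, flipping logActivity positionally means restating every default that comes before it:

    // Positional: spell out all five flags just to flip the last one
    processOrder($order, true, false, true, false, true);

    // Named: only the flag you care about
    processOrder(order: $order, logActivity: true);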

    This isn’t just about readability (though that’s huge). It’s about maintenance. When you add a new optional parameter, existing calls don’t break. When you reorder parameters (carefully!), named calls stay stable.

    Rule of thumb: if you have more than two boolean parameters, or any boolean parameter after the first argument, use named parameters at the call site. Your code reviewers will love you.

    PHP 8.0 has been out since late 2020. If you’re not using named parameters yet, you’re missing out on one of the best DX improvements PHP has ever shipped.

  • Don’t Mix Cached and Fresh Data in the Same Transaction

    Ever debug a bug where your financial reports showed negative margins, only to discover you were mixing cached and fresh data in the same transaction?

    I hit this in an API endpoint that calculated order profitability. The method fetched product costs fresh from the database (good!), but the companion method that grabbed retail prices was still using Redis cache (bad!). When prices changed, we’d calculate margins using old cached prices and new costs. Hello, mystery losses.

    The fix wasn’t just “disable cache everywhere” — Redis cache is there for a reason. The real issue was inconsistent cache behavior within the same transaction.

    Here’s the pattern that saved us:

    use App\Models\Order;
    use App\Models\Product;
    use Illuminate\Support\Facades\Cache;   // facade import; model namespaces assumed (App\Models)

    class OrderCalculator
    {
        public function calculateMargin(Order $order, bool $useCache = true): float
        {
            $cost = $this->getCost($order->product_id, useCache: $useCache);
            $price = $this->getPrice($order->product_id, useCache: $useCache);
            
            return $price - $cost;
        }
        
        private function getCost(int $productId, bool $useCache = true): float
        {
            if (!$useCache) {
                return Product::find($productId)->cost;
            }
            
            return Cache::remember("product.{$productId}.cost", 3600, function() use ($productId) {
                return Product::find($productId)->cost;
            });
        }
        
        private function getPrice(int $productId, bool $useCache = true): float
        {
            if (!$useCache) {
                return Product::find($productId)->price;
            }
            
            return Cache::remember("product.{$productId}.price", 3600, function() use ($productId) {
                return Product::find($productId)->price;
            });
        }
    }
    
    // In write operations (order creation, updates):
    $margin = $calculator->calculateMargin($order, useCache: false);
    
    // In read operations (reports, dashboards):
    $margin = $calculator->calculateMargin($order, useCache: true);

    The key insight: make cache behavior explicit and consistent. When you’re writing data or making financial calculations, all related lookups should use the same freshness guarantee. Add a useCache parameter (default true for reads) and disable it for writes.

    Your future self debugging production at 2am will thank you.

  • Nullable Typed Properties: The PHP Gotcha That Bites During API Deserialization

    Here’s a PHP gotcha that’s bitten me more than once when working with typed properties and external data sources like API responses or deserialized objects.

    In PHP 7.4+, when you declare a typed property like this:

    class ApiResponse
    {
        public ResponseData $data;
    }

    You might assume that $data defaults to null if it’s never assigned. It doesn’t. It’s in an uninitialized state — which is different from null. Try to access it and you’ll get:

    Error: Typed property ApiResponse::$data must not be accessed before initialization

    This is especially common when deserializing API responses. If the external service returns a malformed payload missing expected fields, your deserializer creates the object but never sets the property. PHP then explodes when you try to read it.
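
    A minimal repro (ResponseData is sketched here just so the snippet runs on its own):

    class ResponseData
    {
        public array $items = [];
    }

    class ApiResponse
    {
        public ResponseData $data;          // non-nullable, no default: starts uninitialized
    }

    $response = new ApiResponse();          // what a deserializer yields for a malformed payload
    var_dump(isset($response->data));       // bool(false); isset() is safe on uninitialized properties
    $items = $response->data->items;        // throws Error: must not be accessed before initialization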

    The fix is straightforward — make the property nullable and give it a default:

    class ApiResponse
    {
        public ?ResponseData $data = null;
    }

    Two things changed: the ? prefix makes the type nullable, and = null provides an explicit default. Both are required — even a ?Type property without = null stays uninitialized.

    Then add a null-safe check where you access it:

    if (!$response->data?->items) {
        // Handle missing data gracefully
        return [];
    }

    The ?-> operator (PHP 8.0+) short-circuits to null instead of throwing an error. Clean and defensive.

    The takeaway: Any typed property that might not get initialized — especially in DTOs, API response objects, or anything populated by external data — should be nullable with an explicit = null default. Don’t assume your data sources will always send complete payloads.