Author: Daryle De Silva

  • SoapClient Timeouts Don’t Work the Way You Think


    You’d think adding a timeout to PHP’s SoapClient would be straightforward. Maybe a constructor option like 'timeout' => 30?

    Nope. Welcome to PHP SOAP hell.

    The only way to set a request timeout on native SoapClient is ini_set():

    // Note: default_socket_timeout is process-wide — it affects every
    // stream read in the process, not just this SOAP client
    ini_set('default_socket_timeout', 30);
    $client = new SoapClient($wsdl, $options);

    The connection_timeout constructor option? That’s only for the initial TCP handshake, not the actual SOAP call. It won’t save you from slow API responses.

    But there’s a cleaner approach — extend SoapClient and override __doRequest() to use Guzzle:

    use GuzzleHttp\Client as GuzzleClient;

    class GuzzleSoapClient extends SoapClient
    {
        private GuzzleClient $guzzle;

        public function __construct($wsdl, array $options = [])
        {
            parent::__construct($wsdl, $options);

            // 'timeout' is our own custom option here — native SoapClient ignores it
            $this->guzzle = new GuzzleClient([
                'timeout' => $options['timeout'] ?? 30,
            ]);
        }

        public function __doRequest(
            $request, $location, $action, $version, $oneWay = 0
        ): ?string {
            $response = $this->guzzle->post($location, [
                'body' => $request,
                'headers' => [
                    // SOAP 1.1 and 1.2 expect different content types
                    'Content-Type' => $version === SOAP_1_2
                        ? 'application/soap+xml; charset=utf-8'
                        : 'text/xml; charset=utf-8',
                    'SOAPAction' => $action,
                ],
            ]);

            // One-way calls expect no response body
            return $oneWay ? null : (string) $response->getBody();
        }
    }

    Drop-in replacement. Actual timeout support. No ini_set() hacks.

  • Enable proxy_ssl_server_name for Nginx HTTPS Backends


    Proxying to an HTTPS backend in Nginx? Don’t forget this one directive:

    location / {
        proxy_pass https://backend.example.com;
        proxy_ssl_server_name on;  # This one
    }

    Why it matters: modern web servers host multiple SSL sites on one IP using SNI (Server Name Indication). When your Nginx proxy connects to https://backend.example.com, it needs to tell the backend “I want the cert for backend.example.com.”

    Without proxy_ssl_server_name on, Nginx doesn’t send SNI during the TLS handshake (SNI is a TLS extension, not an HTTP header). The backend can’t tell which certificate to serve, and you get handshake failures or wrong-certificate errors.
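
    For context, here’s what that location block often grows into — a sketch, with the backend hostname assumed; the extra directives are optional hardening, not requirements:

    location / {
        proxy_pass https://backend.example.com;

        # Send SNI so the backend can pick the right certificate
        proxy_ssl_server_name on;

        # Optional: actually verify the backend's certificate
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;

        # Pass the original Host header through to the backend
        proxy_set_header Host $host;
    }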

    When you need it:

    • Proxying to Cloudflare-backed sites
    • Proxying to shared hosting
    • Any backend with multiple domains on one IP

    Think of it like calling a company with multiple departments — you need to tell the receptionist which one you want, not just dial the main number.

  • Docker /etc/hosts Only Accepts IPs, Not Hostnames


    TIL you can’t do this in a Docker container:

    echo "proxy.example.com www.api.com" >> /etc/hosts

    I wanted to redirect www.api.com to proxy.example.com inside a container without hardcoding the proxy’s IP address.

    The problem: /etc/hosts format is strict — IP_ADDRESS hostname [hostname...]. You can’t use hostnames on the left side. Only IP addresses.

    What works:

    143.198.210.158 www.api.com

    What doesn’t:

    proxy.example.com www.api.com  # NOPE

    The workaround: use Docker’s extra_hosts in docker-compose.yaml:

    extra_hosts:
      - "www.api.com:143.198.210.158"

    Or resolve the hostname before writing to /etc/hosts:

    IP=$(getent hosts proxy.example.com | awk 'NR==1 {print $1}')
    echo "$IP www.api.com" >> /etc/hosts
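
    If you want the script to fail loudly instead of silently appending a broken entry when resolution fails, a small guard helps. A sketch — is_ipv4 is a helper name I’m making up here, and the regex deliberately stays simple (it won’t reject out-of-range octets like 999.1.1.1):

    ```shell
    # Guard: only dotted-quad IPv4 addresses belong on the left side of /etc/hosts
    is_ipv4() {
        echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
    }

    # Typical use before appending:
    #   IP=$(getent hosts proxy.example.com | awk 'NR==1 {print $1}')
    #   is_ipv4 "$IP" && echo "$IP www.api.com" >> /etc/hosts

    is_ipv4 "143.198.210.158" && echo "valid"
    is_ipv4 "proxy.example.com" || echo "rejected"
    ```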

    /etc/hosts is old-school. It predates DNS. It doesn’t do hostname resolution — it IS the resolution.

  • Use Tinker --execute for Automation


    Ever tried to automate Laravel Tinker commands in a Docker container or CI pipeline? If you’ve used stdin piping with heredocs, you probably discovered it just… doesn’t work reliably.

    I spent way too long debugging why docker exec php artisan tinker with piped PHP commands would run silently and produce zero output. Turns out, Tinker’s interactive mode doesn’t play nice with stdin automation.

    The solution that actually works? The --execute flag.

    # ❌ This doesn't work reliably:
    docker exec my-app bash -c "php artisan tinker <<'EOF'
    \$user = App\Models\User::first();
    echo \$user->email;
    EOF"

    # ✅ This does:
    docker exec my-app php artisan tinker --execute="
    \$user = App\Models\User::first();
    echo \$user->email;
    "

    # Or for multi-line commands in bash scripts:
    docker exec my-app php artisan tinker --execute="$(cat <<'EOF'
    $user = App\Models\User::first();
    $user->email = '[email protected]';
    $user->save();
    echo 'Updated: ' . $user->email;
    EOF
    )"

    The --execute flag runs Tinker in non-interactive mode. It evaluates the PHP code, prints the output, and exits cleanly. No TTY required, no stdin gymnastics, no mystery silent failures.

    This is a lifesaver for:

    • CI/CD pipelines — seed data, run health checks, warm caches
    • Docker automation — scripts that need to interact with Laravel in containers
    • Cron jobs — quick data fixes without writing full artisan commands

    Pro tip: for complex multi-statement logic, you can still pass a full PHP script to --execute. Just wrap it in quotes and keep newlines intact with $(cat <<'EOF' ... EOF).

    Stop fighting with heredocs. Use --execute and move on with your life.

  • Use Named Parameters for Boolean Flags


    Quick quiz: what does processOrder($order, true, false, true, false, false) do?

    You have no idea. Neither do I. And neither will you in three months when you come back to debug this code.

    I used to think this was just “how PHP worked” — positional parameters, take it or leave it. Then PHP 8.0 dropped named parameters, and suddenly that cryptic boolean soup became self-documenting code.

    Here’s the before and after:

    // Before: positional boolean hell
    function processOrder(
        Order $order,
        bool $validateStock = true,
        bool $sendEmail = false,
        bool $applyDiscount = true,
        bool $updateInventory = false,
        bool $logActivity = false
    ) {
        // ...
    }
    
    // What does this even mean?
    processOrder($order, true, false, true, false, false);
    
    // After: PHP 8 named parameters
    processOrder(
        order: $order,
        validateStock: true,
        applyDiscount: true
    );
    
    // Or when you need to flip a flag deep in the parameter list:
    processOrder(
        order: $order,
        sendEmail: true,
        logActivity: true
    );

    The beauty of named parameters is that you skip the defaults you don’t need. No more passing null, null, null, true just to reach the parameter you actually want to change.

    This isn’t just about readability (though that’s huge). It’s about maintenance. When you add a new optional parameter, existing calls don’t break. When you reorder parameters (carefully!), named calls stay stable.

    Rule of thumb: if you have more than two boolean parameters, or any boolean parameter after the first argument, use named parameters at the call site. Your code reviewers will love you.

    PHP 8.0 shipped back in late 2020. If you’re not using named parameters yet, you’re missing out on one of the best DX improvements PHP has ever shipped.

  • Don’t Mix Cached and Fresh Data in the Same Transaction


    Ever debug a bug where your financial reports showed negative margins, only to discover you were mixing cached and fresh data in the same transaction?

    I hit this in an API endpoint that calculated order profitability. The method fetched product costs fresh from the database (good!), but the companion method that grabbed retail prices was still using Redis cache (bad!). When prices changed, we’d calculate margins using old cached prices and new costs. Hello, mystery losses.

    The fix wasn’t just “disable cache everywhere” — Redis cache is there for a reason. The real issue was inconsistent cache behavior within the same transaction.

    Here’s the pattern that saved us:

    class OrderCalculator
    {
        public function calculateMargin(Order $order, bool $useCache = true): float
        {
            $cost = $this->getCost($order->product_id, useCache: $useCache);
            $price = $this->getPrice($order->product_id, useCache: $useCache);
            
            return $price - $cost;
        }
        
        private function getCost(int $productId, bool $useCache = true): float
        {
            if (!$useCache) {
                return Product::find($productId)->cost;
            }
            
            return Cache::remember("product.{$productId}.cost", 3600, function() use ($productId) {
                return Product::find($productId)->cost;
            });
        }
        
        private function getPrice(int $productId, bool $useCache = true): float
        {
            if (!$useCache) {
                return Product::find($productId)->price;
            }
            
            return Cache::remember("product.{$productId}.price", 3600, function() use ($productId) {
                return Product::find($productId)->price;
            });
        }
    }
    
    // In write operations (order creation, updates):
    $margin = $calculator->calculateMargin($order, useCache: false);
    
    // In read operations (reports, dashboards):
    $margin = $calculator->calculateMargin($order, useCache: true);

    The key insight: make cache behavior explicit and consistent. When you’re writing data or making financial calculations, all related lookups should use the same freshness guarantee. Add a useCache parameter (default true for reads) and disable it for writes.

    Your future self debugging production at 2am will thank you.

  • Nullable Typed Properties: The PHP Gotcha That Bites During API Deserialization


    Here’s a PHP gotcha that’s bitten me more than once when working with typed properties and external data sources like API responses or deserialized objects.

    In PHP 7.4+, when you declare a typed property like this:

    class ApiResponse
    {
        public ResponseData $data;
    }

    You might assume that $data defaults to null if it’s never assigned. It doesn’t. It’s in an uninitialized state — which is different from null. Try to access it and you’ll get:

    Error: Typed property ApiResponse::$data must not be accessed before initialization

    This is especially common when deserializing API responses. If the external service returns a malformed payload missing expected fields, your deserializer creates the object but never sets the property. PHP then explodes when you try to read it.

    The fix is straightforward — make the property nullable and give it a default:

    class ApiResponse
    {
        public ?ResponseData $data = null;
    }

    Two things changed: the ? prefix makes the type nullable, and = null provides an explicit default. Both are required — even a ?Type property without = null stays uninitialized.

    Then add a null-safe check where you access it:

    if (!$response->data?->items) {
        // Handle missing data gracefully
        return [];
    }

    The ?-> operator (PHP 8.0+) short-circuits to null instead of throwing an error. Clean and defensive.

    The takeaway: Any typed property that might not get initialized — especially in DTOs, API response objects, or anything populated by external data — should be nullable with an explicit = null default. Don’t assume your data sources will always send complete payloads.

  • Stop Error Tracking Sprawl: Keep Exception Messages Static


    If you use Sentry (or any error tracking tool) with a Laravel app, you’ve probably noticed this problem: one error creates dozens of separate entries instead of grouping into one.

    The culprit is almost always dynamic data in the exception message.

    The Problem

    // ❌ This creates a NEW error entry for every different order ID
    throw new \RuntimeException(
        "Failed to sync pricing for order {$order->id}: API returned error {$response->error_code}"
    );
    

    Sentry groups issues by their stack trace and exception message. When the message changes with every occurrence — because it includes an ID, a timestamp, or an API reference code — each one becomes its own entry.

    Instead of seeing “Failed to sync pricing — thousands of occurrences” neatly grouped, you get a wall of individual entries. Good luck triaging that.

    The Fix

    Keep exception messages static. Pass the dynamic parts as context instead:

    // ✅ Static message — Sentry groups these together
    // (SyncFailedException is an app-defined exception whose constructor
    // accepts a context array and exposes it to the error tracker)
    throw new SyncFailedException(
        'Failed to sync pricing: API returned error',
        previous: $exception,
        context: [
            'order_id' => $order->id,
            'error_code' => $response->error_code,
            'error_ref' => $response->error_reference,
        ]
    );
    

    The dynamic data still gets captured — you can see it in Sentry’s event detail view. But since the message is identical across occurrences, they all group under one entry.

    Using Sentry’s Context API

    If you can’t change the exception class, use Sentry’s scope to attach context before the exception is captured:

    use Sentry\State\Scope;
    
    \Sentry\configureScope(function (Scope $scope) use ($order, $response) {
        $scope->setContext('sync_details', [
            'order_id' => $order->id,
            'error_code' => $response->error_code,
        ]);
    });
    
    throw new \RuntimeException('Pricing sync failed');
    

    Or even simpler in Laravel, use report() with context in your exception handler:

    try {
        $this->syncPricing($order);
    } catch (ApiException $e) {
        // setContext() is an app-defined fluent setter here; Laravel's
        // handler picks the data up via the exception's context() method
        report($e->setContext([
            'order_id' => $order->id,
        ]));
    }
    

    The Rule of Thumb

    If you’re interpolating a variable into an exception message, ask yourself: will this string be different for each occurrence?

    If yes — pull it out. Static messages, dynamic context. Your Sentry dashboard (and your on-call engineer at 3 AM) will thank you.

  • Storage::build() — Laravel’s Hidden Gem for Temp File Operations


    Here’s a pattern I reach for any time I need temporary file operations in an artisan command or queue job: Storage::build().

    Most Laravel developers know about configuring disks in config/filesystems.php. But what happens when you need a quick, disposable filesystem rooted in the system temp directory? You don’t want to pollute your configured disks with throwaway stuff.

    The Pattern

    use Illuminate\Support\Facades\Storage;
    
    $storage = Storage::build([
        'driver' => 'local',
        'root' => sys_get_temp_dir(),
    ]);
    
    // Now use it like any other disk
    $storage->makeDirectory('my-export');
    $storage->put('my-export/report.csv', $csvContent);
    
    // Clean up when done
    $storage->deleteDirectory('my-export');
    

    This creates a filesystem instance on the fly, pointed at /tmp (or whatever your OS temp directory is). No config file changes. No new disk to maintain.

    Why This Beats the Alternatives

    You might be tempted to just use raw PHP file functions:

    // Don't do this
    file_put_contents('/tmp/my-export/report.csv', $csvContent);
    

    But then you lose all the niceties of Laravel’s filesystem API — directory creation, deletion, visibility management, and consistent error handling.

    You could also define a temp disk in your config:

    // config/filesystems.php
    'temp' => [
        'driver' => 'local',
        'root' => sys_get_temp_dir(),
    ],
    

    But that’s permanent config for a temporary need. Storage::build() keeps it scoped to the code that actually needs it.

    A Real-World Use Case

    This shines in artisan commands that generate files, process them, and clean up:

    use Illuminate\Console\Command;
    use Illuminate\Contracts\Filesystem\Filesystem;
    use Illuminate\Support\Facades\Storage;

    class GenerateReportCommand extends Command
    {
        protected $signature = 'report:generate {userId}';

        private Filesystem $storage;

        public function __construct()
        {
            parent::__construct();
            $this->storage = Storage::build([
                'driver' => 'local',
                'root' => sys_get_temp_dir(),
            ]);
        }

        public function handle()
        {
            $dir = 'report-' . $this->argument('userId');
            $this->storage->makeDirectory($dir);

            try {
                // Generate CSV
                $this->storage->put("$dir/data.csv", $this->buildCsv());

                // Maybe encrypt it (encrypt() is an app-specific helper
                // that returns the path of the encrypted file)
                $encrypted = $this->encrypt("$dir/data.csv");

                // Upload somewhere permanent
                Storage::disk('s3')->put(
                    "reports/$dir.csv.gpg",
                    $this->storage->get($encrypted)
                );
            } finally {
                // Always clean up, even if something above throws
                $this->storage->deleteDirectory($dir);
            }
        }
    }
    

    The try/finally ensures temp files get cleaned up even if something throws. The command’s temp filesystem stays completely isolated from the rest of your app.

    Storage::build() has been available since Laravel 9. If you’re still using raw file_put_contents() calls for temp files, give this a try.

  • Use insertOrIgnore() to Handle Race Conditions Gracefully


    If you’re logging records to a database table that has a unique constraint — say, tracking processed job IDs or idempotency keys — you’ve probably hit this at some point:

    SQLSTATE[23000]: Integrity constraint violation: 1062
    Duplicate entry 'abc-123' for key 'jobs_uuid_unique'
    

    This typically happens when two processes try to insert the same record at roughly the same time. Classic race condition. And it’s especially common in queue workers, where multiple workers might process the same failed job simultaneously.

    The blunt solution is to wrap it in a try/catch:

    try {
        DB::table('processed_jobs')->insert([
            'uuid' => $job->uuid(),
            'failed_at' => now(),
        ]);
    } catch (\Illuminate\Database\QueryException $e) {
        // Ignore duplicate
    }
    

    But there’s a cleaner way. Laravel’s query builder has insertOrIgnore() — available since Laravel 5.8:

    DB::table('processed_jobs')->insertOrIgnore([
        'uuid' => $job->uuid(),
        'failed_at' => now(),
        'payload' => $job->payload(),
    ]);
    

    Under the hood, this generates INSERT IGNORE INTO ... on MySQL, which silently skips the insert if it would violate a unique constraint. No exception thrown, no try/catch needed.
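
    For the curious, the SQL this compiles to varies by driver — roughly the following, as a sketch (verify against your own query log):

    -- MySQL
    INSERT IGNORE INTO processed_jobs (uuid, failed_at) VALUES (?, ?);

    -- PostgreSQL
    INSERT INTO processed_jobs (uuid, failed_at) VALUES (?, ?) ON CONFLICT DO NOTHING;

    -- SQLite
    INSERT OR IGNORE INTO processed_jobs (uuid, failed_at) VALUES (?, ?);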

    A few things to keep in mind:

    • insertOrIgnore() will suppress all duplicate key errors, not just the one you’re expecting — so make sure your unique constraints are intentional
    • On MySQL, it also bypasses strict mode checks, which means other data integrity issues might be silently swallowed
    • If you need to update the existing record on conflict, use upsert() instead

    Here’s the decision framework I use:

    • “Skip if exists”insertOrIgnore()
    • “Update if exists”upsert()
    • “Fail if exists” → Regular insert() (and let the exception surface)

    If you’re writing any kind of idempotent operation — event processing, job tracking, audit logging — insertOrIgnore() is the simplest way to handle the race condition without cluttering your code with try/catch blocks.