Author: Daryle De Silva

  • Catch ConnectException Before RequestException in Guzzle

    When you catch Guzzle exceptions, you’re probably writing something like this:

    use GuzzleHttp\Exception\RequestException;
    
    try {
        $response = $client->request('POST', '/api/orders');
    } catch (RequestException $e) {
        // Handle error
        Log::error('API call failed: ' . $e->getMessage());
    }
    

    This works, but it treats every failure the same way. A connection timeout is fundamentally different from a 400 Bad Request — and your error handling should reflect that.

    Guzzle has a clear exception hierarchy (shown here as it looked in Guzzle 6):

    • TransferException (base class)
      • RequestException (the request failed — with or without a response)
        • ConnectException (couldn’t connect at all — timeouts, DNS failures, refused connections)

    The key insight: in Guzzle 6, ConnectException extends RequestException, so if you catch RequestException first, you’ll catch both. Catch ConnectException first and you can handle connection issues separately. In Guzzle 7 the hierarchy changed — ConnectException extends TransferException directly, so a lone RequestException catch misses connection failures entirely, and the dedicated catch block becomes mandatory rather than optional. Either way, the handler looks like this:

    use GuzzleHttp\Exception\ConnectException;
    use GuzzleHttp\Exception\RequestException;
    
    try {
        $response = $client->request('POST', '/api/orders', [
            'json' => $payload,
        ]);
    } catch (ConnectException $e) {
        // The server is unreachable — retry later, don't log the payload
        throw new ServiceTemporarilyUnavailableException(
            'Could not reach external API: ' . $e->getMessage(),
            $e->getCode(),
            $e
        );
    } catch (RequestException $e) {
        // We got a response, but it was an error (4xx, 5xx)
        $statusCode = $e->getResponse()?->getStatusCode();
        Log::error("API returned {$statusCode}", [
            'body' => $e->getResponse()?->getBody()->getContents(),
        ]);
        throw $e;
    }
    

    Why does this matter? Because the recovery strategy is different:

    • Connection failures are transient — retry with backoff, mark the service as temporarily unavailable, or delete the job from the queue so it doesn’t burn through retry attempts
    • HTTP errors (4xx/5xx) might be permanent — a 422 means your payload is wrong, retrying won’t help

    In queue jobs, this distinction is especially powerful. You can catch ConnectException and throw a domain-specific exception that your job’s failed() method handles differently — maybe releasing the job back to the queue with a delay instead of marking it as permanently failed.
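    The ordering rule itself is plain PHP: catch blocks are tried top to bottom, and the first matching class wins. A toy hierarchy (these classes are stand-ins mirroring the Guzzle 6 layout, not Guzzle’s real ones) shows why the most specific exception must come first:

```php
<?php
// Toy stand-ins mirroring the Guzzle 6 hierarchy (not the real classes):
// ConnectEx extends RequestEx, the way ConnectException extended
// RequestException in Guzzle 6.
class TransferEx extends RuntimeException {}
class RequestEx extends TransferEx {}
class ConnectEx extends RequestEx {}

function classify(Throwable $e): string
{
    try {
        throw $e;
    } catch (ConnectEx $e) {   // most specific first — order matters
        return 'connection failure';
    } catch (RequestEx $e) {
        return 'http error';
    }
}

echo classify(new ConnectEx()), "\n"; // connection failure
echo classify(new RequestEx()), "\n"; // http error
```

    Swap the two catch blocks and every ConnectEx would be swallowed by the RequestEx handler, since a ConnectEx is a RequestEx.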

    Next time you’re writing a try/catch around an HTTP call, ask yourself: “Am I handling connection failures differently from response errors?” If not, add that ConnectException catch block. Your future debugging self will thank you.

  • Guzzle’s Default Timeout Is Infinite — And That’s a Problem

    Here’s something that might surprise you: Guzzle’s default timeout is zero — which means wait forever.

    That’s right. If you create an HTTP client without setting a timeout, your application will happily sit there for minutes (or longer) waiting for a response that may never come. In a web request, your users see a spinner. In a queue job, you burn through worker capacity while the job just… hangs.

    I ran into this recently when debugging why queue workers kept dying. The root cause? An external API was occasionally slow to respond, and the HTTP client had no timeout configured. The job would hang until the queue worker’s own timeout killed it — by which point it had already wasted 5+ minutes of processing capacity.

    The fix was embarrassingly simple:

    use GuzzleHttp\Client;
    
    $client = new Client([
        'base_uri' => 'https://api.example.com/',
        'timeout' => 30,
        'connect_timeout' => 10,
    ]);
    

    Two options to know:

    • timeout — Total seconds to wait for a response (including transfer time)
    • connect_timeout — Seconds to wait just for the TCP connection to establish

    If you’re configuring HTTP clients in a Laravel service provider (which you should be), set these defaults at the client level:

    $this->app->bind(ApiClient::class, function ($app) {
        $http = new Client([
            'base_uri' => config('services.vendor.url'),
            'timeout' => 30,
            'connect_timeout' => 10,
        ]);
    
        return new ApiClient($http);
    });
    

    A quick audit tip: search your codebase for new Client( or wherever you instantiate Guzzle clients. If you don’t see timeout in the options array, you’ve got a ticking time bomb. Especially in queue jobs, where a hanging HTTP request can cascade into MaxAttemptsExceededException and fill your failed jobs table.

    Rule of thumb: 30 seconds for most API calls, 10 seconds for connection timeout. Adjust based on what you know about the external service, but never leave it at zero.

  • Use concat() Instead of merge() When Combining Collections With Numeric Keys

    This one has bitten me and probably every Laravel developer at least once. You’re combining two collections and items silently disappear. No error, no warning — just missing data.

    The Trap

    Laravel’s merge() and concat() look interchangeable at first glance. They’re not — and the difference depends on which collection class you’re holding.

    On a base Support\Collection, merge() behaves like PHP’s array_merge(): string keys overwrite, numeric keys are renumbered and appended. But on an Eloquent Collection — what you get back from a query — merge() matches models by primary key, and any model in the second collection with the same key silently replaces its counterpart in the first. concat() appends every item regardless of keys.

    // Both Eloquent collections contain models whose IDs overlap —
    // say, jobs loaded from two different tables (model names illustrative)
    $dbJobs = ScheduledJob::all();
    // => [id 1 => 'Send Weekly Report', id 2 => 'Clean Temp Files']
    
    $configJobs = ImportedJob::all();
    // => [id 1 => 'Sync Inventory', id 2 => 'Generate Invoices']
    
    // ❌ merge() - matches models by primary key, you get 2 items instead of 4
    $all = $dbJobs->merge($configJobs);
    // Result: Sync Inventory, Generate Invoices (DB jobs silently lost!)
    
    // ✅ concat() - appends all items, you get all 4
    $all = $dbJobs->concat($configJobs);
    // Result: Send Weekly Report, Clean Temp Files, Sync Inventory, Generate Invoices

    Why This Is Sneaky

    This bug is particularly nasty because it usually passes your tests. Test fixtures tend to use small data sets with non-overlapping IDs. It’s only in production — when two data sources happen to produce models with the same primary keys — that items start vanishing.

    And since there’s no error or exception, you won’t know anything is wrong until someone notices the missing data downstream.

    The Rule of Thumb

    If you’re combining two collections and you want all items from both, use concat(). Full stop.

    Only use merge() when you specifically want the overwrite behavior — for example, merging config arrays where later values should replace earlier ones (the same way PHP’s array_merge() works with string keys).

    Quick Reference

    • merge() — on a base Collection, like array_merge(): string keys overwrite, numeric keys append; on an Eloquent Collection, models with matching primary keys are silently replaced
    • concat() — just appends everything, keys don’t matter
    • push() — appends a single item to the end of the collection
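    It’s worth knowing what native PHP does with each key type, since the base collection methods are built on these functions. You can check the rules with plain arrays — no framework required:

```php
<?php
// Numeric keys: array_merge() renumbers and appends — nothing is lost.
$dbJobs     = [0 => 'Send Weekly Report', 1 => 'Clean Temp Files'];
$configJobs = [0 => 'Sync Inventory', 1 => 'Generate Invoices'];
var_dump(count(array_merge($dbJobs, $configJobs))); // int(4)

// String keys: later values overwrite earlier ones.
$defaults = ['driver' => 'smtp', 'port' => 25];
$override = ['port' => 587];
var_dump(array_merge($defaults, $override)['port']); // int(587)

// array_replace() is the native function that overwrites numeric keys —
// here the first two jobs really would be lost.
var_dump(count(array_replace($dbJobs, $configJobs))); // int(2)
```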

    One-line fix, saves you hours of debugging “where did my data go?”

  • Override getAttribute() for Backward-Compatible Schema Migrations

    Schema migrations are easy to plan and hard to deploy — especially when a column rename or consolidation touches dozens of call sites across the codebase. Here’s a trick I’ve used to make incremental migrations way less painful.

    The Scenario

    Imagine you have a model with numbered columns: price_1, price_2, all the way through price_60. A past design decision that seemed reasonable at the time but now just wastes space. You want to consolidate everything down to price_1.

    The data migration is straightforward — copy the canonical value into price_1 for every row. But the scary part is all the existing code that reads $record->price_5 or $record->price_23 scattered across the app.

    The Trick: Override getAttribute()

    Instead of finding and updating every reference before deploying, override Eloquent’s getAttribute() to silently redirect old column access to the new consolidated column:

    class PricingRecord extends Model
    {
        public function getAttribute($key)
        {
            $consolidatedPrefixes = [
                'price_',
                'cost_',
                'fee_',
            ];
    
            foreach ($consolidatedPrefixes as $prefix) {
                if (preg_match('/^' . preg_quote($prefix, '/') . '\d+/', $key)
                    && $key !== $prefix . '1') {
                    // Redirect any tier N access to tier 1
                    return parent::getAttribute($prefix . '1');
                }
            }
    
            return parent::getAttribute($key);
        }
    }
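    The redirect logic is easy to sanity-check outside Eloquent. Here it is extracted into a plain function (the helper name is mine, not part of the model):

```php
<?php
// Same prefix-matching logic as the getAttribute() override, standalone.
function redirectKey(string $key): string
{
    foreach (['price_', 'cost_', 'fee_'] as $prefix) {
        if (preg_match('/^' . preg_quote($prefix, '/') . '\d+/', $key)
            && $key !== $prefix . '1') {
            return $prefix . '1'; // any tier N maps to tier 1
        }
    }

    return $key; // everything else passes through untouched
}

echo redirectKey('price_23'), "\n"; // price_1
echo redirectKey('price_1'), "\n";  // price_1 (already canonical)
echo redirectKey('name'), "\n";     // name
```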

    The Deployment Sequence

    1. Run the data migration — consolidate all values into the _1 columns
    2. Deploy the getAttribute() override — old code reading $record->price_5 transparently gets price_1
    3. Clean up call sites at your own pace — update references one PR at a time, no rush
    4. Remove the override once all references point to _1
    5. Drop the old columns in a final migration

    No big-bang refactor. No merge conflicts from a 40-file PR. No “please don’t deploy anything until my branch lands.”

    One Caveat

    This adds a regex check on every attribute access for that model. For a model you load a few times per request, it’s negligible. But if you’re hydrating thousands of records in a loop, profile it first. The regex is cheap, but “cheap × 10,000” can add up.

    For most real-world use cases though, this buys you a safe, incremental migration path with zero downtime and zero broken features.

  • Use prepareForValidation() to Normalize Input Before Rules Run

    Here’s one of those Laravel features that’s been around forever but still doesn’t get the love it deserves: prepareForValidation().

    If you’ve ever written gnarly conditional validation rules just because the incoming form data was messy, this one’s for you.

    The Problem

    Say you have a pricing form where users can pick different plan tiers. The form sends data keyed by the tier number they selected. But your validation rules really only care about the final, normalized value — always stored against tier 1.

    You could write a pile of conditional rules to handle every possible tier. Or you could just normalize the data before validation even runs.

    The Fix

    Laravel’s FormRequest class has a prepareForValidation() method that fires right before your rules() are evaluated. Use it to reshape incoming data so your rules only deal with a clean, predictable structure:

    class UpdatePricingRequest extends FormRequest
    {
        protected function prepareForValidation(): void
        {
            $prices = (array) $this->input('prices', []);
    
            foreach ((array) $this->input('tiers', []) as $tierName => $tierData) {
                $selectedTier = $tierData['selected_tier'] ?? 1;
    
                if ($selectedTier > 1) {
                    // Copy the selected tier's value to tier 1. Note that
                    // merge() doesn't expand dot-notation keys, so we rebuild
                    // the nested array and merge the top-level key instead.
                    $prices[$tierName][1] = $prices[$tierName][$selectedTier] ?? null;
                }
            }
    
            $this->merge(['prices' => $prices]);
        }
    
        public function rules(): array
        {
            return [
                'prices.*.1' => 'required|numeric|min:0',
            ];
        }
    }

    Why This Works

    Your validation rules only need to care about one shape: prices.*.1. The real-world messiness — users picking tier 3 on one plan and tier 7 on another — gets handled upstream, before rules even see it.

    This keeps your rules dead simple and your normalization logic in one obvious place. No more spreading data transformation across rules, controllers, or middleware.

    When to Reach For It

    Any time your incoming data doesn’t match the shape your rules expect. Common cases:

    • Trimming or lowercasing string fields
    • Remapping aliased keys from different API consumers
    • Setting defaults for optional fields
    • Normalizing nested data like the tier example above
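    The first three bullets are just plain data transforms. Pulled out of the request class, they’re trivial to verify (the field names here are hypothetical):

```php
<?php
// The kind of normalization you'd run inside prepareForValidation(),
// extracted as a plain function for illustration.
function normalize(array $input): array
{
    // Trim and lowercase a string field
    $input['email'] = strtolower(trim((string) ($input['email'] ?? '')));

    // Remap an aliased key that some API consumers still send
    if (isset($input['zip']) && ! isset($input['postcode'])) {
        $input['postcode'] = $input['zip'];
    }

    // Default for an optional field
    $input += ['country' => 'GB'];

    return $input;
}

$clean = normalize(['email' => '  Dev@Example.COM ', 'zip' => 'SW1A 1AA']);
echo $clean['email'], "\n";    // dev@example.com
echo $clean['postcode'], "\n"; // SW1A 1AA
echo $clean['country'], "\n";  // GB
```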

    The official docs cover it briefly, but once you start using it, you’ll wonder how you ever managed without it.

  • Branch From the Right Base When Stacking PRs

    When you have a feature that spans multiple PRs — say, PR #1 builds the core, PR #2 adds an enhancement on top — you need to stack them properly.

    The mistake I see developers make: they branch PR #2 from master instead of from PR #1’s branch.

    # ❌ Wrong — branches from master, will conflict with PR #1
    git checkout master
    git checkout -b feature/enhancement
    
    # ✅ Right — branches from PR #1, builds on top of it
    git checkout feature/core-feature
    git checkout -b feature/enhancement

    And the PR target matters too:

    • PR #1 targets master (or main)
    • PR #2 targets feature/core-feature (PR #1’s branch), not master

    Once PR #1 is merged, you update PR #2’s target to master. GitHub and GitLab both handle this cleanly — the diff will shrink to just PR #2’s changes.
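    One wrinkle worth knowing: if PR #1 lands as a squash merge, PR #2’s branch still carries the original commits, and you typically replay just your own commits onto the new base with git rebase --onto. A throwaway sandbox (temp repo, fake identity; assumes git ≥ 2.28 for init -b) demonstrates the full sequence:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email dev@example.com
git config user.name "Dev"

echo base > app.txt && git add app.txt && git commit -qm "base"

# PR #1: core feature
git checkout -qb feature/core-feature
echo core >> app.txt && git commit -qam "core feature"

# PR #2: stacked on top of PR #1
git checkout -qb feature/enhancement
echo extra >> app.txt && git commit -qam "enhancement"

# PR #1 lands on master as a squash merge
git checkout -q master
git merge --squash -q feature/core-feature
git commit -qm "core feature (squashed)"

# Replay only PR #2's commits onto the new master
git checkout -q feature/enhancement
git rebase -q --onto master feature/core-feature

git log --format=%s   # enhancement / core feature (squashed) / base
```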

    When to combine instead: If the PRs are small enough and touch the same files, sometimes it’s simpler to combine them into one branch. We do this when features are tightly coupled and reviewing them separately would lose context.

    # Combining two feature branches into one
    git checkout master
    git checkout -b feature/combined
    git merge feature/part-1 --no-ff
    git merge feature/part-2 --no-ff

    The goal is always the same: make the reviewer’s job easy. Small, focused PRs with clear lineage beat a single massive PR every time.

  • Make Your Artisan Commands Idempotent

    We recently had to bulk-process a batch of records — update statuses, add notes, trigger some side effects. The kind of thing you write a one-off artisan command for.

    The first instinct is to just loop and execute. But what happens when the command fails halfway through? Or the queue worker restarts? You need to run it again, and now half your records get double-processed.

    The fix: Make every step check if it’s already been done.

    public function handle()
    {
        $orders = Order::whereIn('reference', $this->references)->get();
    
        foreach ($orders as $order) {
            // Step 1: Low-risk to repeat (worst case: a duplicate note)
            $order->addNote('Bulk processed on ' . now()->toDateString());
    
            // Step 2: Skip if already done
            if ($order->status === OrderStatus::CANCELLED) {
                $this->info("Skipping {$order->reference} — already cancelled");
                continue;
            }
    
            // Step 3: Do the actual work
            $order->cancel();
            $this->info("Cancelled {$order->reference}");
        }
    }

    Key patterns:

    • Check before acting — If the record is already in the target state, skip it
    • Log what you skip — So you can verify the second run did nothing harmful
    • Add a --dry-run flag — Always. Test the logic before committing to production changes

    $this->addOption('dry-run', null, InputOption::VALUE_NONE, 'Run without making changes');
    
    // In your handle method:
    if ($this->option('dry-run')) {
        $this->info("[DRY RUN] Would cancel {$order->reference}");
        continue;
    }

    The rule is simple: if you can’t safely run it twice, it’s not ready for production.
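    The property you’re after has a crisp definition: running the operation twice leaves the data exactly as running it once. A toy version (plain arrays standing in for Eloquent models) makes that directly testable:

```php
<?php
// "Check before acting" in miniature: cancelling is a no-op the second time.
function cancelOrder(array $order): array
{
    if ($order['status'] === 'cancelled') {
        return $order; // already in the target state — skip
    }

    $order['status']  = 'cancelled';
    $order['notes'][] = 'Cancelled by bulk command';

    return $order;
}

$once  = cancelOrder(['status' => 'open', 'notes' => []]);
$twice = cancelOrder($once);

var_dump($once === $twice); // bool(true) — safe to run again
```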

  • Use toRawSql() to See What Eloquent Actually Runs

    When you’re debugging a complex Eloquent query, toSql() gives you the query with ? placeholders. Not super helpful when you need to paste it into your database client.

    Since Laravel 10.15, you can use toRawSql() instead:

    // Before — placeholders, not useful for debugging
    $query->toSql();
    // SELECT * FROM users WHERE status = ? AND role = ?
    
    // After — actual values inline
    $query->toRawSql();
    // SELECT * FROM users WHERE status = 'active' AND role = 'admin'

    Works on both the Eloquent builder and the base query builder:

    // Eloquent builder
    User::where('status', 'active')->toRawSql();
    
    // Base query builder (useful for complex joins)
    User::where('status', 'active')->toBase()->toRawSql();

    I use ->toBase()->toRawSql() constantly when comparing old vs new query approaches during refactors. Paste both into your DB client, compare the execution plans, and you know exactly what changed.

    Bonus: If you’re on an older Laravel version, the query log approach still works:

    DB::enableQueryLog();
    // ...run your query...
    dd(DB::getQueryLog());

    But toRawSql() is cleaner when you just need one query.