Blog

  • Stop Error Tracking Sprawl: Keep Exception Messages Static

    If you use Sentry (or any error tracking tool) with a Laravel app, you’ve probably noticed this problem: one error creates dozens of separate entries instead of grouping into one.

    The culprit is almost always dynamic data in the exception message.

    The Problem

    // ❌ This creates a NEW error entry for every different order ID
    throw new \RuntimeException(
        "Failed to sync pricing for order {$order->id}: API returned error {$response->error_code}"
    );
    

    Sentry groups issues by their stack trace and exception message. When the message changes with every occurrence — because it includes an ID, a timestamp, or an API reference code — each one becomes its own entry.

    Instead of seeing “Failed to sync pricing — thousands of occurrences” neatly grouped, you get a wall of individual entries. Good luck triaging that.

    The Fix

    Keep exception messages static. Pass the dynamic parts as context instead:

    // ✅ Static message — Sentry groups these together
    throw new SyncFailedException(
        'Failed to sync pricing: API returned error',
        previous: $exception,
        context: [
            'order_id' => $order->id,
            'error_code' => $response->error_code,
            'error_ref' => $response->error_reference,
        ]
    );
    

    The dynamic data still gets captured — you can see it in Sentry’s event detail view. But since the message is identical across occurrences, they all group under one entry.
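
    SyncFailedException isn’t a framework class — it’s one you define yourself. A minimal sketch of what it might look like, using Laravel’s convention that whatever an exception’s context() method returns gets merged into its log entry:

    use RuntimeException;
    use Throwable;
    
    class SyncFailedException extends RuntimeException
    {
        public function __construct(
            string $message,
            ?Throwable $previous = null,
            private array $context = [],
        ) {
            parent::__construct($message, 0, $previous);
        }
    
        // Laravel's exception handler merges this into the exception's log context
        public function context(): array
        {
            return $this->context;
        }
    }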

    Using Sentry’s Context API

    If you can’t change the exception class, use Sentry’s scope to attach context before the exception is captured:

    use Sentry\State\Scope;
    
    \Sentry\configureScope(function (Scope $scope) use ($order, $response) {
        $scope->setContext('sync_details', [
            'order_id' => $order->id,
            'error_code' => $response->error_code,
        ]);
    });
    
    throw new \RuntimeException('Pricing sync failed');
    

    Or, even simpler in Laravel, attach the context to the exception itself and report() it. Here ApiException is assumed to be a custom exception with a fluent setContext() helper feeding the same kind of context() method as the sketch above:

    try {
        $this->syncPricing($order);
    } catch (ApiException $e) {
        report($e->setContext([
            'order_id' => $order->id,
        ]));
    }
    

    The Rule of Thumb

    If you’re interpolating a variable into an exception message, ask yourself: will this string be different for each occurrence?

    If yes — pull it out. Static messages, dynamic context. Your Sentry dashboard (and your on-call engineer at 3 AM) will thank you.

  • Storage::build() — Laravel’s Hidden Gem for Temp File Operations

    Here’s a pattern I reach for any time I need temporary file operations in an artisan command or queue job: Storage::build().

    Most Laravel developers know about configuring disks in config/filesystems.php. But what happens when you need a quick, disposable filesystem rooted in the system temp directory? You don’t want to pollute your configured disks with throwaway stuff.

    The Pattern

    use Illuminate\Support\Facades\Storage;
    
    $storage = Storage::build([
        'driver' => 'local',
        'root' => sys_get_temp_dir(),
    ]);
    
    // Now use it like any other disk
    $storage->makeDirectory('my-export');
    $storage->put('my-export/report.csv', $csvContent);
    
    // Clean up when done
    $storage->deleteDirectory('my-export');
    

    This creates a filesystem instance on the fly, pointed at /tmp (or whatever your OS temp directory is). No config file changes. No new disk to maintain.

    Why This Beats the Alternatives

    You might be tempted to just use raw PHP file functions:

    // Don't do this
    file_put_contents('/tmp/my-export/report.csv', $csvContent);
    

    But then you lose all the niceties of Laravel’s filesystem API — directory creation, deletion, visibility management, and consistent error handling.

    You could also define a temp disk in your config:

    // config/filesystems.php
    'temp' => [
        'driver' => 'local',
        'root' => sys_get_temp_dir(),
    ],
    

    But that’s permanent config for a temporary need. Storage::build() keeps it scoped to the code that actually needs it.

    A Real-World Use Case

    This shines in artisan commands that generate files, process them, and clean up:

    use Illuminate\Console\Command;
    use Illuminate\Contracts\Filesystem\Filesystem;
    use Illuminate\Support\Facades\Storage;
    
    class GenerateReportCommand extends Command
    {
        protected $signature = 'report:generate {userId}';
    
        private Filesystem $storage;
    
        public function __construct()
        {
            parent::__construct();
    
            $this->storage = Storage::build([
                'driver' => 'local',
                'root' => sys_get_temp_dir(),
            ]);
        }
    
        public function handle()
        {
            $dir = 'report-' . $this->argument('userId');
            $this->storage->makeDirectory($dir);
    
            try {
                // Generate the CSV (buildCsv() is an app-specific helper, not shown)
                $this->storage->put("$dir/data.csv", $this->buildCsv());
    
                // Maybe encrypt it — encrypt() returns the path of the encrypted file
                $encrypted = $this->encrypt("$dir/data.csv");
    
                // Upload somewhere permanent
                Storage::disk('s3')->put(
                    "reports/$dir.csv.gpg",
                    $this->storage->get($encrypted)
                );
            } finally {
                // Always clean up, even if something above throws
                $this->storage->deleteDirectory($dir);
            }
        }
    }
    
    

    The try/finally ensures temp files get cleaned up even if something throws. The command’s temp filesystem stays completely isolated from the rest of your app.

    Storage::build() has been available since Laravel 8.57, where the docs introduced it as “on-demand disks”. If you’re still using raw file_put_contents() calls for temp files, give this a try.

  • Use insertOrIgnore() to Handle Race Conditions Gracefully

    If you’re logging records to a database table that has a unique constraint — say, tracking processed job IDs or idempotency keys — you’ve probably hit this at some point:

    SQLSTATE[23000]: Integrity constraint violation: 1062
    Duplicate entry 'abc-123' for key 'jobs_uuid_unique'
    

    This typically happens when two processes try to insert the same record at roughly the same time. Classic race condition. And it’s especially common in queue workers, where multiple workers might process the same failed job simultaneously.

    The blunt solution is to wrap it in a try/catch:

    try {
        DB::table('processed_jobs')->insert([
            'uuid' => $job->uuid(),
            'failed_at' => now(),
        ]);
    } catch (\Illuminate\Database\QueryException $e) {
        // Ignore duplicate
    }
    

    But there’s a cleaner way. Laravel’s query builder has insertOrIgnore() — available since Laravel 5.8:

    DB::table('processed_jobs')->insertOrIgnore([
        'uuid' => $job->uuid(),
        'failed_at' => now(),
        'payload' => $job->payload(),
    ]);
    

    Under the hood, this generates INSERT IGNORE INTO ... on MySQL (Postgres gets INSERT ... ON CONFLICT DO NOTHING, SQLite gets INSERT OR IGNORE), which silently skips the insert if it would violate a unique constraint. No exception thrown, no try/catch needed.

    A few things to keep in mind:

    • insertOrIgnore() will suppress all duplicate key errors, not just the one you’re expecting — so make sure your unique constraints are intentional
    • On MySQL, it also bypasses strict mode checks, which means other data integrity issues might be silently swallowed
    • If you need to update the existing record on conflict, use upsert() instead

    Here’s the decision framework I use:

    • “Skip if exists” → insertOrIgnore()
    • “Update if exists” → upsert()
    • “Fail if exists” → Regular insert() (and let the exception surface)
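
    For the “update if exists” case, upsert() takes the rows to insert, the column(s) that identify a duplicate, and the columns to update on conflict. A quick sketch against the same processed_jobs table:

    use Illuminate\Support\Facades\DB;
    
    DB::table('processed_jobs')->upsert(
        [
            ['uuid' => $job->uuid(), 'failed_at' => now(), 'payload' => $job->payload()],
        ],
        ['uuid'],      // unique-by column(s) — must be backed by a unique index
        ['failed_at']  // columns to update when the row already exists
    );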

    If you’re writing any kind of idempotent operation — event processing, job tracking, audit logging — insertOrIgnore() is the simplest way to handle the race condition without cluttering your code with try/catch blocks.

  • Catch ConnectException Before RequestException in Guzzle

    When you catch Guzzle exceptions, you’re probably writing something like this:

    use GuzzleHttp\Exception\RequestException;
    
    try {
        $response = $client->request('POST', '/api/orders');
    } catch (RequestException $e) {
        // Handle error
        Log::error('API call failed: ' . $e->getMessage());
    }
    

    This works, but it treats every failure the same way. A connection timeout is fundamentally different from a 400 Bad Request — and your error handling should reflect that.

    Guzzle has a clear exception hierarchy (shown here as of Guzzle 7):

    • TransferException (base class)
      • ConnectException (couldn’t connect at all — timeouts, DNS failures, refused connections)
      • RequestException (a request was sent, but it failed — including 4xx/5xx error responses)

    The catch order matters. In Guzzle 6, ConnectException extends RequestException, so catching RequestException first swallows both. In Guzzle 7, ConnectException no longer extends RequestException at all, so a lone catch (RequestException $e) block misses connection failures entirely. Either way, the safe pattern is the same — give ConnectException its own catch block, listed first:

    use GuzzleHttp\Exception\ConnectException;
    use GuzzleHttp\Exception\RequestException;
    
    try {
        $response = $client->request('POST', '/api/orders', [
            'json' => $payload,
        ]);
    } catch (ConnectException $e) {
        // The server is unreachable — retry later, don't log the payload
        throw new ServiceTemporarilyUnavailableException(
            'Could not reach external API: ' . $e->getMessage(),
            $e->getCode(),
            $e
        );
    } catch (RequestException $e) {
        // We got a response, but it was an error (4xx, 5xx)
        $statusCode = $e->getResponse()?->getStatusCode();
        Log::error("API returned {$statusCode}", [
            'body' => $e->getResponse()?->getBody()->getContents(),
        ]);
        throw $e;
    }
    

    Why does this matter? Because the recovery strategy is different:

    • Connection failures are transient — retry with backoff, mark the service as temporarily unavailable, or delete the job from the queue so it doesn’t burn through retry attempts
    • HTTP errors (4xx/5xx) might be permanent — a 422 means your payload is wrong, retrying won’t help

    In queue jobs, this distinction is especially powerful. Catch ConnectException in handle() and release the job back onto the queue with a delay (or wrap it in a domain-specific exception your retry logic recognizes), instead of letting a transient network blip push the job toward its failed() method and the failed jobs table.
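
    A minimal sketch of that idea — PushOrder is a hypothetical job, and the endpoint is illustrative:

    use GuzzleHttp\Client;
    use GuzzleHttp\Exception\ConnectException;
    use Illuminate\Bus\Queueable;
    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Queue\InteractsWithQueue;
    
    class PushOrder implements ShouldQueue
    {
        use InteractsWithQueue, Queueable;
    
        public function __construct(public int $orderId)
        {
        }
    
        public function handle(Client $http): void
        {
            try {
                $http->request('POST', 'https://api.example.com/orders', [
                    'json' => ['order_id' => $this->orderId],
                ]);
            } catch (ConnectException $e) {
                // The API is unreachable — a transient failure. Put the job back
                // on the queue with a delay instead of failing it outright.
                $this->release(300);
            }
        }
    }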

    Next time you’re writing a try/catch around an HTTP call, ask yourself: “Am I handling connection failures differently from response errors?” If not, add that ConnectException catch block. Your future debugging self will thank you.

  • Guzzle’s Default Timeout Is Infinite — And That’s a Problem

    Here’s something that might surprise you: Guzzle’s default timeout is zero — which means wait forever.

    That’s right. If you create an HTTP client without setting a timeout, your application will happily sit there for minutes (or longer) waiting for a response that may never come. In a web request, your users see a spinner. In a queue job, you burn through worker capacity while the job just… hangs.

    I ran into this recently when debugging why queue workers kept dying. The root cause? An external API was occasionally slow to respond, and the HTTP client had no timeout configured. The job would hang until the queue worker’s own timeout killed it — by which point it had already wasted 5+ minutes of processing capacity.

    The fix was embarrassingly simple:

    use GuzzleHttp\Client;
    
    $client = new Client([
        'base_uri' => 'https://api.example.com/',
        'timeout' => 30,
        'connect_timeout' => 10,
    ]);
    

    Two options to know:

    • timeout — Total seconds to wait for a response (including transfer time)
    • connect_timeout — Seconds to wait just for the TCP connection to establish
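
    Both can also be overridden per request when you know a particular endpoint is slow (the path here is just an example):

    // Per-request override for a known-slow endpoint
    $response = $client->request('GET', '/reports/yearly', [
        'timeout' => 120,
    ]);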

    If you’re configuring HTTP clients in a Laravel service provider (which you should be), set these defaults at the client level:

    $this->app->bind(ApiClient::class, function ($app) {
        $http = new Client([
            'base_uri' => config('services.vendor.url'),
            'timeout' => 30,
            'connect_timeout' => 10,
        ]);
    
        return new ApiClient($http);
    });
    

    A quick audit tip: search your codebase for new Client( or wherever you instantiate Guzzle clients. If you don’t see timeout in the options array, you’ve got a ticking time bomb. Especially in queue jobs, where a hanging HTTP request can cascade into MaxAttemptsExceededException and fill your failed jobs table.

    Rule of thumb: 30 seconds for most API calls, 10 seconds for connection timeout. Adjust based on what you know about the external service, but never leave it at zero.

  • Use concat() Instead of merge() When Combining Collections With Numeric Keys

    This one has bitten me and probably every Laravel developer at least once. You’re combining two collections and items silently disappear. No error, no warning — just missing data.

    The Trap

    Laravel’s merge() and concat() look interchangeable at first glance. They’re not. The difference is all about how they handle keys.

    concat() appends items regardless of keys. merge() is key-aware: on a base Support\Collection it behaves like PHP’s array_merge() — string keys overwrite, numeric keys are appended and renumbered — but Eloquent’s Collection::merge() keys every model by its primary key, so models whose keys collide silently replace one another.

    // Two Eloquent collections from different tables — their primary keys overlap
    $dbJobs = ScheduledJob::all();      // ids 1 and 2
    $legacyJobs = LegacyJob::all();     // different records, also ids 1 and 2
    
    // ❌ merge() - Eloquent keys by primary key, ids 1 and 2 collide: 2 models instead of 4
    $all = $dbJobs->merge($legacyJobs);
    // Result: the ScheduledJob records are silently replaced by the LegacyJob ones
    
    // ✅ concat() - appends all items, you get all 4
    $all = $dbJobs->concat($legacyJobs);
    // Result: both ScheduledJob records and both LegacyJob records

    Why This Is Sneaky

    This bug is particularly nasty because it can easily pass your tests. Whether anything collides depends entirely on the data: fixtures that happen to use distinct keys sail through, and it’s only when two data sources produce records with the same keys that items start vanishing.

    And since there’s no error or exception, you won’t know anything is wrong until someone notices the missing data downstream.

    The Rule of Thumb

    If you’re combining two collections and you want all items from both, use concat(). Full stop.

    Only use merge() when you specifically want the overwrite behavior — for example, merging config arrays where later values should replace earlier ones (the same way PHP’s array_merge() works with string keys).
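
    A tiny example of that intentional use, with string keys:

    $defaults = collect(['timeout' => 30, 'retries' => 3]);
    $overrides = collect(['timeout' => 10]);
    
    $settings = $defaults->merge($overrides);
    // ['timeout' => 10, 'retries' => 3] — the string key overwrites, as intended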

    Quick Reference

    • merge() — string keys overwrite (like array_merge()); numeric keys append on base collections, but Eloquent collections key models by primary key and overwrite on collision
    • concat() — like array_push(): just appends everything, keys don’t matter
    • push() — adds a single item

    One-line fix, saves you hours of debugging “where did my data go?”

  • Override getAttribute() for Backward-Compatible Schema Migrations

    Schema migrations are easy to plan and hard to deploy — especially when a column rename or consolidation touches dozens of call sites across the codebase. Here’s a trick I’ve used to make incremental migrations way less painful.

    The Scenario

    Imagine you have a model with numbered columns: price_1, price_2, all the way through price_60. A past design decision that seemed reasonable at the time but now just wastes space. You want to consolidate everything down to price_1.

    The data migration is straightforward — copy the canonical value into price_1 for every row. But the scary part is all the existing code that reads $record->price_5 or $record->price_23 scattered across the app.

    The Trick: Override getAttribute()

    Instead of finding and updating every reference before deploying, override Eloquent’s getAttribute() to silently redirect old column access to the new consolidated column:

    class PricingRecord extends Model
    {
        public function getAttribute($key)
        {
            $consolidatedPrefixes = [
                'price_',
                'cost_',
                'fee_',
            ];
    
            foreach ($consolidatedPrefixes as $prefix) {
                if (preg_match('/^' . preg_quote($prefix) . '\d+/', $key)
                    && $key !== $prefix . '1') {
                    // Redirect any tier N access to tier 1
                    return parent::getAttribute($prefix . '1');
                }
            }
    
            return parent::getAttribute($key);
        }
    }

    The Deployment Sequence

    1. Run the data migration — consolidate all values into the _1 columns
    2. Deploy the getAttribute() override — old code reading $record->price_5 transparently gets price_1
    3. Clean up call sites at your own pace — update references one PR at a time, no rush
    4. Remove the override once all references point to _1
    5. Drop the old columns in a final migration
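
    For step 1, the data copy can be a simple chunked update. A sketch — pricing_records and selected_tier are hypothetical stand-ins for your table and for however the canonical tier is determined:

    use Illuminate\Support\Facades\DB;
    
    DB::table('pricing_records')->chunkById(500, function ($records) {
        foreach ($records as $record) {
            $tier = $record->selected_tier;
    
            DB::table('pricing_records')
                ->where('id', $record->id)
                ->update(['price_1' => $record->{"price_{$tier}"}]);
        }
    });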

    No big-bang refactor. No merge conflicts from a 40-file PR. No “please don’t deploy anything until my branch lands.”

    One Caveat

    This adds a regex check on every attribute access for that model. For a model you load a few times per request, it’s negligible. But if you’re hydrating thousands of records in a loop, profile it first. The regex is cheap, but “cheap × 10,000” can add up.

    For most real-world use cases though, this buys you a safe, incremental migration path with zero downtime and zero broken features.

  • Use prepareForValidation() to Normalize Input Before Rules Run

    Here’s one of those Laravel features that’s been around forever but still doesn’t get the love it deserves: prepareForValidation().

    If you’ve ever written gnarly conditional validation rules just because the incoming form data was messy, this one’s for you.

    The Problem

    Say you have a pricing form where users can pick different plan tiers. The form sends data keyed by the tier number they selected. But your validation rules really only care about the final, normalized value — always stored against tier 1.

    You could write a pile of conditional rules to handle every possible tier. Or you could just normalize the data before validation even runs.

    The Fix

    Laravel’s FormRequest class has a prepareForValidation() method that fires right before your rules() are evaluated. Use it to reshape incoming data so your rules only deal with a clean, predictable structure:

    class UpdatePricingRequest extends FormRequest
    {
        protected function prepareForValidation(): void
        {
            $tiers = (array) $this->get('tiers');
            foreach ($tiers as $tierName => $tierData) {
                $selectedTier = $tierData['selected_tier'] ?? 1;
                if ($selectedTier > 1) {
                    // Copy the selected tier's value to tier 1
                    $this->merge([
                        "prices.{$tierName}.1" => $this->input("prices.{$tierName}.{$selectedTier}"),
                    ]);
                }
            }
        }
    
        public function rules(): array
        {
            return [
                'prices.*.1' => 'required|numeric|min:0',
            ];
        }
    }

    Why This Works

    Your validation rules only need to care about one shape: prices.*.1. The real-world messiness — users picking tier 3 on one plan and tier 7 on another — gets handled upstream, before rules even see it.

    This keeps your rules dead simple and your normalization logic in one obvious place. No more spreading data transformation across rules, controllers, or middleware.

    When to Reach For It

    Any time your incoming data doesn’t match the shape your rules expect. Common cases:

    • Trimming or lowercasing string fields
    • Remapping aliased keys from different API consumers
    • Setting defaults for optional fields
    • Normalizing nested data like the tier example above
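
    For the simpler cases it’s just a few lines — a sketch with hypothetical email and country fields:

    protected function prepareForValidation(): void
    {
        $this->merge([
            'email' => strtolower(trim((string) $this->input('email'))),
            'country' => $this->input('country', 'US'), // default for an optional field
        ]);
    }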

    The official docs cover it briefly, but once you start using it, you’ll wonder how you ever managed without it.

  • Branch From the Right Base When Stacking PRs

    When you have a feature that spans multiple PRs — say, PR #1 builds the core, PR #2 adds an enhancement on top — you need to stack them properly.

    The mistake I see developers make: they branch PR #2 from master instead of from PR #1’s branch.

    # ❌ Wrong — branches from master, will conflict with PR #1
    git checkout master
    git checkout -b feature/enhancement
    
    # ✅ Right — branches from PR #1, builds on top of it
    git checkout feature/core-feature
    git checkout -b feature/enhancement

    And the PR target matters too:

    • PR #1 targets master (or main)
    • PR #2 targets feature/core-feature (PR #1’s branch), not master

    Once PR #1 is merged, you update PR #2’s target to master. GitHub and GitLab both handle this cleanly — the diff shrinks to just PR #2’s changes. (If PR #1 was squash-merged, rebase PR #2 onto master first so the already-merged commits drop out of the diff.)
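
    A sketch of that cleanup, using the branch names from above:

    # After PR #1 has been merged into master
    git checkout feature/enhancement
    git rebase --onto master feature/core-feature
    git push --force-with-lease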

    When to combine instead: If the PRs are small enough and touch the same files, sometimes it’s simpler to combine them into one branch. We do this when features are tightly coupled and reviewing them separately would lose context.

    # Combining two feature branches into one
    git checkout master
    git checkout -b feature/combined
    git merge feature/part-1 --no-ff
    git merge feature/part-2 --no-ff

    The goal is always the same: make the reviewer’s job easy. Small, focused PRs with clear lineage beat a single massive PR every time.

  • Make Your Artisan Commands Idempotent

    We recently had to bulk-process a batch of records — update statuses, add notes, trigger some side effects. The kind of thing you write a one-off artisan command for.

    The first instinct is to just loop and execute. But what happens when the command fails halfway through? Or the queue worker restarts? You need to run it again, and now half your records get double-processed.

    The fix: Make every step check if it’s already been done.

    public function handle()
    {
        $orders = Order::whereIn('reference', $this->references)->get();
    
        foreach ($orders as $order) {
            // Step 1: Always safe to repeat
            $order->addNote('Bulk processed on ' . now()->toDateString());
    
            // Step 2: Skip if already done
            if ($order->status === OrderStatus::CANCELLED) {
                $this->info("Skipping {$order->reference} — already cancelled");
                continue;
            }
    
            // Step 3: Do the actual work
            $order->cancel();
            $this->info("Cancelled {$order->reference}");
        }
    }

    Key patterns:

    • Check before acting — If the record is already in the target state, skip it
    • Log what you skip — So you can verify the second run did nothing harmful
    • Add a --dry-run flag — Always. Test the logic before committing to production changes:

    // Register the option (InputOption is Symfony\Component\Console\Input\InputOption) —
    // or simply add {--dry-run} to the command's $signature
    $this->addOption('dry-run', null, InputOption::VALUE_NONE, 'Run without making changes');
    
    // In your handle method:
    if ($this->option('dry-run')) {
        $this->info("[DRY RUN] Would cancel {$order->reference}");
        continue;
    }

    The rule is simple: if you can’t safely run it twice, it’s not ready for production.