Category: Laravel

  • Laravel Queue Jobs: Stop Using $this in Closures

    Laravel Queue Jobs: Stop Using $this in Closures

    You dispatch a Laravel job. It runs fine locally. You push to production, and suddenly: CallbackNotFound errors everywhere.

    The culprit? A closure inside your job that references $this->someService.

    The Problem

    Here’s what breaks:

    class ProcessOrder implements ShouldQueue
    {
        use SerializesModels;
        
        public function __construct(
            private PaymentService $paymentService
        ) {}
        
        public function handle()
        {
            $orders = Order::pending()->get();
            
            // This closure references $this
            $orders->each(function ($order) {
                // 💥 BOOM after serialization
                $this->paymentService->charge($order);
            });
        }
    }

    When Laravel serializes the job for the queue, everything the job holds goes along for the ride, including that constructor-injected service. PHP can't serialize closures, so if the service (or anything it references) carries one, serialization blows up; and even when it survives, the worker gets a stale, reconstructed copy rather than the container's instance. Either way, your job dies silently or throws cryptic errors.
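    The underlying PHP limitation is easy to see without Laravel at all: serializing any object whose property graph contains a closure fails outright (the class name here is invented for illustration):

```php
<?php

// A stand-in for a queued job whose property graph contains a closure.
class JobLike
{
    public $callback;
}

$job = new JobLike();
$job->callback = function () {
    return 'charge';
};

try {
    serialize($job);
    $result = 'serialized fine';
} catch (Throwable $e) {
    $result = $e->getMessage(); // "Serialization of 'Closure' is not allowed"
}

echo $result, PHP_EOL;
```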

    The Fix

    Stop referencing $this inside closures. Use app() to resolve the service fresh from the container:

    class ProcessOrder implements ShouldQueue
    {
        use SerializesModels;
        
        public function handle()
        {
            $orders = Order::pending()->get();
            
            // ✅ Resolve fresh from container
            $orders->each(function ($order) {
                app(PaymentService::class)->charge($order);
            });
        }
    }

    Now the closure is self-contained. Every time it runs, it pulls the service from the container. No serialization issues, no broken callbacks.

    Why It Works

    Queue jobs get serialized (the job object goes through PHP's serialize() and is wrapped in a JSON payload) and stored (database, Redis, SQS, etc.). When the worker picks up the job:

    1. Laravel deserializes the job class
    2. Runs your handle() method
    3. Executes any closures inside

    But PHP can't serialize closures at all, and every object the job holds gets serialized along with it. Using app() defers service resolution until the closure actually runs, after deserialization, so the closure carries no object state of its own.

    Bonus: Same Rule for Collection Transforms

    This isn't just a queue problem. The same rule applies to any closure that might itself be serialized (queued closures, cached LazyCollections, deferred callbacks):

    // ❌ Breaks if serialized
    $cached = $items->map(fn($item) => $this->transformer->format($item));
    
    // ✅ Safe
    $cached = $items->map(fn($item) => app(Transformer::class)->format($item));

    The Takeaway

    Never use $this-> inside closures in queue jobs. Resolve services from the container with app() or pass them as closure parameters instead.
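    The parameter-passing alternative works in plain PHP too: capture the dependency explicitly so the closure never touches $this (the Formatter class is a made-up stand-in for something like PaymentService):

```php
<?php

// A made-up service standing in for a real dependency.
class Formatter
{
    public function format(string $value): string
    {
        return strtoupper($value);
    }
}

$formatter = new Formatter();

// The arrow function captures $formatter by value; no $this involved.
$result = array_map(
    fn (string $item): string => $formatter->format($item),
    ['a', 'b']
);

print_r($result); // ['A', 'B']
```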

    Your queue workers will thank you.

  • HTTP Response throw() Makes Your Error Handling Unreachable

    HTTP Response throw() Makes Your Error Handling Unreachable

    Found dead code in an API client today. Classic mistake with Laravel’s HTTP client.

    The code looked reasonable at first glance:

    $response = Http::post($url, $data)->throw();
    
    if ($response->failed()) {
        throw $response->serverError() 
            ? new ServerException() 
            : $response->toException();
    }
    

    Spot the problem? The throw() method already throws an exception if the request fails. That if ($response->failed()) block will never execute — it’s unreachable.

    Laravel’s throw() is basically:

    if ($this->failed()) {
        throw $this->toException();
    }
    return $this;
    

    So you either use throw() for automatic exception handling, OR check failed() manually. Not both.
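    You can watch the unreachability happen with a toy stand-in (this is not Laravel's actual class, just the same control flow as throw() followed by a manual failed() check):

```php
<?php

// Minimal emulation of the throw()/failed() interplay.
class FakeResponse
{
    public function __construct(private bool $failed) {}

    public function failed(): bool
    {
        return $this->failed;
    }

    public function throw(): static
    {
        if ($this->failed()) {
            throw new RuntimeException('request failed');
        }

        return $this;
    }
}

$reachedManualCheck = false;

try {
    $response = (new FakeResponse(failed: true))->throw();

    // Dead code: throw() has already thrown by the time we get here.
    if ($response->failed()) {
        $reachedManualCheck = true;
    }
} catch (RuntimeException $e) {
    // The exception wins every time the request fails.
}

var_dump($reachedManualCheck); // bool(false)
```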

    The fix: if you need custom exception logic, don’t use throw():

    $response = Http::post($url, $data);
    
    if ($response->failed()) {
        throw $response->serverError() 
            ? new CustomServerException() 
            : new CustomClientException();
    }
    

    Small mistake, but good reminder to understand what framework helpers actually do under the hood.

  • Laravel Collections: flip() for Instant Lookups

    Laravel Collections: flip() for Instant Lookups

    Here’s a Laravel Collection performance trick: use flip() to turn expensive searches into instant lookups.

    I was looping through a date range, checking if each date existed in a collection. Using contains() works, but it’s O(n) per check: 100 dates checked against 50 items is up to 5,000 comparisons in the worst case.

    // Slow: O(n) per check
    $available = collect(['2026-01-25', '2026-01-26', '2026-01-27']);
    if ($available->contains($date)) { ... }
    

    The fix — flip() converts values to keys, making lookups O(1):

    // Fast: O(1) per check
    $available = collect(['2026-01-25', '2026-01-26'])
        ->flip(); // Now: ['2026-01-25' => 0, '2026-01-26' => 1]
    
    if ($available->has($date)) { ... }
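    Collection::flip() is a thin wrapper around PHP's array_flip(), so the trade-off is visible in plain PHP:

```php
<?php

$available = ['2026-01-25', '2026-01-26', '2026-01-27'];

// O(n): scans the array on every call
$slow = in_array('2026-01-26', $available, true);

// O(1): values become keys, lookups hit the hash table directly
$lookup = array_flip($available); // ['2026-01-25' => 0, '2026-01-26' => 1, ...]
$fast = isset($lookup['2026-01-26']);
$missing = isset($lookup['2026-02-01']);

var_dump($slow, $fast, $missing); // true, true, false
```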
    

    For small datasets, the difference is negligible. But checking hundreds or thousands of items? This tiny change saves significant processing time. Costs nothing to implement.

  • Use Tinker --execute for Automation

    Use Tinker --execute for Automation

    Ever tried to automate Laravel Tinker commands in a Docker container or CI pipeline? If you’ve used stdin piping with heredocs, you probably discovered it just… doesn’t work reliably.

    I spent way too long debugging why docker exec php artisan tinker with piped PHP commands would run silently and produce zero output. Turns out, Tinker’s interactive mode doesn’t play nice with stdin automation.

    The solution that actually works? The --execute flag.

    # ❌ This doesn't work reliably:
    docker exec my-app bash -c "php artisan tinker <<'EOF'
    \$user = App\Models\User::first();
    echo \$user->email;
    EOF"
    
    # ✅ This does:
    docker exec my-app php artisan tinker --execute="
    \$user = App\Models\User::first();
    echo \$user->email;
    "
    
    # Or for multi-line commands in bash scripts:
    docker exec my-app php artisan tinker --execute="$(cat <<'EOF'
    $user = App\Models\User::first();
    $user->email = '[email protected]';
    $user->save();
    echo 'Updated: ' . $user->email;
    EOF
    )"

    The --execute flag runs Tinker in non-interactive mode. It evaluates the PHP code, prints the output, and exits cleanly. No TTY required, no stdin gymnastics, no mystery silent failures.

    This is a lifesaver for:

    • CI/CD pipelines — seed data, run health checks, warm caches
    • Docker automation — scripts that need to interact with Laravel in containers
    • Cron jobs — quick data fixes without writing full artisan commands

    Pro tip: for complex multi-statement logic, you can still pass a full PHP script to --execute. Just wrap it in quotes and keep newlines intact with $(cat <<'EOF' ... EOF).

    Stop fighting with heredocs. Use --execute and move on with your life.

  • Don’t Mix Cached and Fresh Data in the Same Transaction

    Don’t Mix Cached and Fresh Data in the Same Transaction

    Ever debugged a bug where your financial reports showed negative margins, only to discover you were mixing cached and fresh data in the same transaction?

    I hit this in an API endpoint that calculated order profitability. The method fetched product costs fresh from the database (good!), but the companion method that grabbed retail prices was still using Redis cache (bad!). When prices changed, we’d calculate margins using old cached prices and new costs. Hello, mystery losses.

    The fix wasn’t just “disable cache everywhere” — Redis cache is there for a reason. The real issue was inconsistent cache behavior within the same transaction.

    Here’s the pattern that saved us:

    class OrderCalculator
    {
        public function calculateMargin(Order $order, bool $useCache = true): float
        {
            $cost = $this->getCost($order->product_id, useCache: $useCache);
            $price = $this->getPrice($order->product_id, useCache: $useCache);
            
            return $price - $cost;
        }
        
        private function getCost(int $productId, bool $useCache = true): float
        {
            if (!$useCache) {
                return Product::find($productId)->cost;
            }
            
            return Cache::remember("product.{$productId}.cost", 3600, function() use ($productId) {
                return Product::find($productId)->cost;
            });
        }
        
        private function getPrice(int $productId, bool $useCache = true): float
        {
            if (!$useCache) {
                return Product::find($productId)->price;
            }
            
            return Cache::remember("product.{$productId}.price", 3600, function() use ($productId) {
                return Product::find($productId)->price;
            });
        }
    }
    
    // In write operations (order creation, updates):
    $margin = $calculator->calculateMargin($order, useCache: false);
    
    // In read operations (reports, dashboards):
    $margin = $calculator->calculateMargin($order, useCache: true);

    The key insight: make cache behavior explicit and consistent. When you’re writing data or making financial calculations, all related lookups should use the same freshness guarantee. Add a useCache parameter (default true for reads) and disable it for writes.
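    The failure mode itself needs no framework to demonstrate; one stale value mixed with one fresh value is enough (the numbers here are invented):

```php
<?php

// Cached before a price increase; the database has moved on.
$cachedPrice = 90.0;

// Fresh values after the update.
$freshPrice = 120.0;
$freshCost  = 100.0;

$staleMargin = $cachedPrice - $freshCost; // mixed freshness: -10.0
$freshMargin = $freshPrice - $freshCost;  // consistent reads: 20.0

var_dump($staleMargin < 0); // bool(true) — the "mystery loss"
var_dump($freshMargin);     // float(20)
```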

    Your future self debugging production at 2am will thank you.

  • Storage::build() — Laravel’s Hidden Gem for Temp File Operations

    Storage::build() — Laravel’s Hidden Gem for Temp File Operations

    Here’s a pattern I reach for any time I need temporary file operations in an artisan command or queue job: Storage::build().

    Most Laravel developers know about configuring disks in config/filesystems.php. But what happens when you need a quick, disposable filesystem rooted in the system temp directory? You don’t want to pollute your configured disks with throwaway stuff.

    The Pattern

    use Illuminate\Support\Facades\Storage;
    
    $storage = Storage::build([
        'driver' => 'local',
        'root' => sys_get_temp_dir(),
    ]);
    
    // Now use it like any other disk
    $storage->makeDirectory('my-export');
    $storage->put('my-export/report.csv', $csvContent);
    
    // Clean up when done
    $storage->deleteDirectory('my-export');
    

    This creates a filesystem instance on the fly, pointed at /tmp (or whatever your OS temp directory is). No config file changes. No new disk to maintain.

    Why This Beats the Alternatives

    You might be tempted to just use raw PHP file functions:

    // Don't do this
    file_put_contents('/tmp/my-export/report.csv', $csvContent);
    

    But then you lose all the niceties of Laravel’s filesystem API — directory creation, deletion, visibility management, and consistent error handling.

    You could also define a temp disk in your config:

    // config/filesystems.php
    'temp' => [
        'driver' => 'local',
        'root' => sys_get_temp_dir(),
    ],
    

    But that’s permanent config for a temporary need. Storage::build() keeps it scoped to the code that actually needs it.

    A Real-World Use Case

    This shines in artisan commands that generate files, process them, and clean up:

    use Illuminate\Console\Command;
    use Illuminate\Contracts\Filesystem\Filesystem;
    use Illuminate\Support\Facades\Storage;
    
    class GenerateReportCommand extends Command
    {
        protected $signature = 'report:generate {userId}';
    
        private Filesystem $storage;
    
        public function __construct()
        {
            parent::__construct();
            $this->storage = Storage::build([
                'driver' => 'local',
                'root' => sys_get_temp_dir(),
            ]);
        }
    
        public function handle()
        {
            $dir = 'report-' . $this->argument('userId');
            $this->storage->makeDirectory($dir);
    
            try {
                // Generate CSV
                $this->storage->put("$dir/data.csv", $this->buildCsv());
    
                // Maybe encrypt it
                $encrypted = $this->encrypt("$dir/data.csv");
    
                // Upload somewhere permanent
                Storage::disk('s3')->put("reports/$dir.csv.gpg", 
                    $this->storage->get($encrypted)
                );
            } finally {
                // Always clean up
                $this->storage->deleteDirectory($dir);
            }
        }
    }
    

    The try/finally ensures temp files get cleaned up even if something throws. The command’s temp filesystem stays completely isolated from the rest of your app.

    Storage::build() (the docs call these "on-demand disks") has been available since Laravel 8. If you’re still using raw file_put_contents() calls for temp files, give this a try.

  • Use insertOrIgnore() to Handle Race Conditions Gracefully

    Use insertOrIgnore() to Handle Race Conditions Gracefully

    If you’re logging records to a database table that has a unique constraint — say, tracking processed job IDs or idempotency keys — you’ve probably hit this at some point:

    SQLSTATE[23000]: Integrity constraint violation: 1062
    Duplicate entry 'abc-123' for key 'jobs_uuid_unique'
    

    This typically happens when two processes try to insert the same record at roughly the same time. Classic race condition. And it’s especially common in queue workers, where multiple workers might process the same failed job simultaneously.

    The blunt solution is to wrap it in a try/catch:

    try {
        DB::table('processed_jobs')->insert([
            'uuid' => $job->uuid(),
            'failed_at' => now(),
        ]);
    } catch (\Illuminate\Database\QueryException $e) {
        // Ignore duplicate
    }
    

    But there’s a cleaner way. Laravel’s query builder has insertOrIgnore() — available since Laravel 5.8:

    DB::table('processed_jobs')->insertOrIgnore([
        'uuid' => $job->uuid(),
        'failed_at' => now(),
        'payload' => $job->payload(),
    ]);
    

    Under the hood, this generates INSERT IGNORE INTO ... on MySQL, which silently skips the insert if it would violate a unique constraint. No exception thrown, no try/catch needed.
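    You can watch the mechanism work with a throwaway SQLite database. SQLite spells the clause INSERT OR IGNORE rather than MySQL's INSERT IGNORE, and this sketch assumes the pdo_sqlite extension is available:

```php
<?php

$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE processed_jobs (uuid TEXT PRIMARY KEY)');

$stmt = $pdo->prepare('INSERT OR IGNORE INTO processed_jobs (uuid) VALUES (?)');
$stmt->execute(['abc-123']);
$stmt->execute(['abc-123']); // duplicate: silently skipped, no exception

$count = (int) $pdo->query('SELECT COUNT(*) FROM processed_jobs')->fetchColumn();
echo $count, PHP_EOL; // 1
```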

    A few things to keep in mind:

    • insertOrIgnore() will suppress all duplicate key errors, not just the one you’re expecting — so make sure your unique constraints are intentional
    • On MySQL, it also bypasses strict mode checks, which means other data integrity issues might be silently swallowed
    • If you need to update the existing record on conflict, use upsert() instead

    Here’s the decision framework I use:

    • “Skip if exists”insertOrIgnore()
    • “Update if exists”upsert()
    • “Fail if exists” → Regular insert() (and let the exception surface)

    If you’re writing any kind of idempotent operation — event processing, job tracking, audit logging — insertOrIgnore() is the simplest way to handle the race condition without cluttering your code with try/catch blocks.

  • Use concat() Instead of merge() When Combining Collections With Numeric Keys

    Use concat() Instead of merge() When Combining Collections With Numeric Keys

    This one has bitten me and probably every Laravel developer at least once. You’re combining two collections and items silently disappear. No error, no warning — just missing data.

    The Trap

    Laravel’s merge() and concat() look interchangeable at first glance. They’re not, and the difference depends on which collection class you’re holding.

    On a plain Support\Collection, merge() behaves like PHP’s array_merge(): matching string keys overwrite, while numeric keys are renumbered and appended. But on an Eloquent\Collection (what every query’s get() returns), merge() deduplicates by the models’ primary keys: any model whose key already exists in the first collection is silently replaced. concat() appends every item regardless of keys, on both collection classes.

    $dbJobs     = ScheduledJob::where('source', 'db')->get(); // ids 1, 2, 3
    $urgentJobs = ScheduledJob::where('urgent', true)->get(); // ids 3, 4
    
    // ❌ merge() - Eloquent dedupes by primary key: 4 models instead of 5
    $all = $dbJobs->merge($urgentJobs);
    // Result: ids 1, 2, 3, 4 (the urgent copy of id 3 replaced the db one)
    
    // ✅ concat() - appends all items, you get all 5
    $all = $dbJobs->concat($urgentJobs);
    // Result: ids 1, 2, 3, 3, 4 (every model from both queries)

    Why This Is Sneaky

    This bug is particularly nasty because it usually passes your tests. Test fixtures tend to have unique keys or small data sets that don’t overlap. It’s only in production, when two data sources happen to produce overlapping keys, that items start vanishing.

    And since there’s no error or exception, you won’t know anything is wrong until someone notices the missing data downstream.

    The Rule of Thumb

    If you’re combining two collections and you want all items from both, use concat(). Full stop.

    Only use merge() when you specifically want the overwrite behavior — for example, merging config arrays where later values should replace earlier ones (the same way PHP’s array_merge() works with string keys).

    Quick Reference

    • merge() — like array_merge() on base collections (string keys overwrite, numeric keys are renumbered and appended); on Eloquent collections it dedupes by the models’ primary keys
    • concat() — like array_push(): just appends everything, keys don’t matter
    • push() — adds a single item
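    For comparison, here is the native PHP baseline that base collections mirror, plus the operator that actually does drop data:

```php
<?php

$a = [0 => 'x', 1 => 'y'];
$b = [0 => 'z'];

// array_merge renumbers numeric keys and appends: nothing is lost.
$merged = array_merge($a, $b);
var_dump($merged === ['x', 'y', 'z']); // bool(true)

// The union operator is the one that discards: left side wins on key clashes.
$union = $a + $b;
var_dump($union === ['x', 'y']); // bool(true) — 'z' is gone
```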

    One-line fix, saves you hours of debugging “where did my data go?”

  • Override getAttribute() for Backward-Compatible Schema Migrations

    Override getAttribute() for Backward-Compatible Schema Migrations

    Schema migrations are easy to plan and hard to deploy — especially when a column rename or consolidation touches dozens of call sites across the codebase. Here’s a trick I’ve used to make incremental migrations way less painful.

    The Scenario

    Imagine you have a model with numbered columns: price_1, price_2, all the way through price_60. A past design decision that seemed reasonable at the time but now just wastes space. You want to consolidate everything down to price_1.

    The data migration is straightforward — copy the canonical value into price_1 for every row. But the scary part is all the existing code that reads $record->price_5 or $record->price_23 scattered across the app.

    The Trick: Override getAttribute()

    Instead of finding and updating every reference before deploying, override Eloquent’s getAttribute() to silently redirect old column access to the new consolidated column:

    class PricingRecord extends Model
    {
        public function getAttribute($key)
        {
            $consolidatedPrefixes = [
                'price_',
                'cost_',
                'fee_',
            ];
    
            foreach ($consolidatedPrefixes as $prefix) {
                if (preg_match('/^' . preg_quote($prefix, '/') . '\d+/', $key)
                    && $key !== $prefix . '1') {
                    // Redirect any tier N access to tier 1
                    return parent::getAttribute($prefix . '1');
                }
            }
    
            return parent::getAttribute($key);
        }
    }
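    The redirect rule is easy to sanity-check outside Eloquent. Here it is as a standalone function (a hypothetical helper, with the end of the key anchored so keys like price_1_old are left alone):

```php
<?php

function redirectConsolidatedKey(string $key): string
{
    $consolidatedPrefixes = ['price_', 'cost_', 'fee_'];

    foreach ($consolidatedPrefixes as $prefix) {
        // \d+$ anchors the match so only pure "prefix + number" keys qualify.
        if (preg_match('/^' . preg_quote($prefix, '/') . '\d+$/', $key)
            && $key !== $prefix . '1') {
            return $prefix . '1';
        }
    }

    return $key;
}

var_dump(redirectConsolidatedKey('price_23')); // "price_1"
var_dump(redirectConsolidatedKey('price_1'));  // "price_1"
var_dump(redirectConsolidatedKey('name'));     // "name"
```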

    The Deployment Sequence

    1. Run the data migration — consolidate all values into the _1 columns
    2. Deploy the getAttribute() override — old code reading $record->price_5 transparently gets price_1
    3. Clean up call sites at your own pace — update references one PR at a time, no rush
    4. Remove the override once all references point to _1
    5. Drop the old columns in a final migration

    No big-bang refactor. No merge conflicts from a 40-file PR. No “please don’t deploy anything until my branch lands.”

    One Caveat

    This adds a regex check on every attribute access for that model. For a model you load a few times per request, it’s negligible. But if you’re hydrating thousands of records in a loop, profile it first. The regex is cheap, but “cheap × 10,000” can add up.

    For most real-world use cases though, this buys you a safe, incremental migration path with zero downtime and zero broken features.

  • Use prepareForValidation() to Normalize Input Before Rules Run

    Use prepareForValidation() to Normalize Input Before Rules Run

    Here’s one of those Laravel features that’s been around forever but still doesn’t get the love it deserves: prepareForValidation().

    If you’ve ever written gnarly conditional validation rules just because the incoming form data was messy, this one’s for you.

    The Problem

    Say you have a pricing form where users can pick different plan tiers. The form sends data keyed by the tier number they selected. But your validation rules really only care about the final, normalized value — always stored against tier 1.

    You could write a pile of conditional rules to handle every possible tier. Or you could just normalize the data before validation even runs.

    The Fix

    Laravel’s FormRequest class has a prepareForValidation() method that fires right before your rules() are evaluated. Use it to reshape incoming data so your rules only deal with a clean, predictable structure:

    class UpdatePricingRequest extends FormRequest
    {
        protected function prepareForValidation(): void
        {
            $prices = (array) $this->input('prices', []);

            foreach ((array) $this->input('tiers', []) as $tierName => $tierData) {
                $selectedTier = $tierData['selected_tier'] ?? 1;

                if ($selectedTier > 1) {
                    // Copy the selected tier's value into the canonical slot 1
                    $prices[$tierName][1] = $prices[$tierName][$selectedTier] ?? null;
                }
            }

            // Merge the whole nested array back; merging a literal
            // "prices.x.1" string key would not create a nested value
            $this->merge(['prices' => $prices]);
        }

        public function rules(): array
        {
            return [
                'prices.*.1' => 'required|numeric|min:0',
            ];
        }
    }
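    The reshaping step itself is plain array surgery. Pulled out as a standalone function (names invented), it looks like this:

```php
<?php

function normalizeTierPrices(array $prices, array $tiers): array
{
    foreach ($tiers as $tierName => $tierData) {
        $selected = $tierData['selected_tier'] ?? 1;

        if ($selected > 1) {
            // Copy the selected tier's value into the canonical slot 1.
            $prices[$tierName][1] = $prices[$tierName][$selected] ?? null;
        }
    }

    return $prices;
}

$prices = ['basic' => [1 => null, 3 => 19.99]];
$tiers  = ['basic' => ['selected_tier' => 3]];

$normalized = normalizeTierPrices($prices, $tiers);
var_dump($normalized['basic'][1]); // float(19.99)
```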

    Why This Works

    Your validation rules only need to care about one shape: prices.*.1. The real-world messiness — users picking tier 3 on one plan and tier 7 on another — gets handled upstream, before rules even see it.

    This keeps your rules dead simple and your normalization logic in one obvious place. No more spreading data transformation across rules, controllers, or middleware.

    When to Reach For It

    Any time your incoming data doesn’t match the shape your rules expect. Common cases:

    • Trimming or lowercasing string fields
    • Remapping aliased keys from different API consumers
    • Setting defaults for optional fields
    • Normalizing nested data like the tier example above

    The official docs cover it briefly, but once you start using it, you’ll wonder how you ever managed without it.