Category: Laravel

  • When Queued Event Listeners Silently Die: The ShouldQueue Trap

    You dispatch an event from inside a queued job. The event has a listener that implements ShouldQueue. Your job completes successfully, but the listener never executes. No exception. No failed job entry. No log. It just… doesn’t run.

    This is one of Laravel’s most frustrating silent failures.

    The Setup

    You have a workflow: when a user account is deactivated, trigger a data export automatically. The architecture looks clean:

    // In your DeactivationHandler (queued job)
    class HandleAccountDeactivation implements ShouldQueue
    {
        public function handle(): void
        {
            // Revoke access tokens for the account
            $this->revokeAccessTokens($this->account);
    
            // Dispatch event for downstream processing
            event(new AccountDeactivated($this->account));
        }
    }
    
    // The listener
    class TriggerDataExport implements ShouldQueue
    {
        public function handle(AccountDeactivated $event): void
        {
            // This never runs!
            $this->exportService->generate($event->account);
        }
    }

    Why It Fails Silently

    When you dispatch an event from within a queued job and the listener also implements ShouldQueue, the listener is pushed onto the queue as a brand-new job. Here's the catch: if the dispatching job's database transaction hasn't committed yet (or the queue connection hiccups during the nested dispatch), the listener job can fail before your listener code ever runs. That failure happens at the queue infrastructure level, not in your application code.

    A try-catch around event() won’t help. The event dispatch itself succeeds — it pushes a message onto the queue. The failure happens later, when the queue worker tries to process the listener job.
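
    If the listener must stay queued, you can at least make failures visible. Queued listeners may define a failed() method, which Laravel calls when the listener's queue job fails. A sketch (the Log call and message are illustrative; note this only fires for failures once the listener job actually runs, not for the infrastructure-level deaths described above):

```php
class TriggerDataExport implements ShouldQueue
{
    public function handle(AccountDeactivated $event): void
    {
        $this->exportService->generate($event->account);
    }

    // Called by Laravel when this queued listener's job fails
    public function failed(AccountDeactivated $event, \Throwable $e): void
    {
        Log::error('Data export listener failed', [
            'account_id' => $event->account->id,
            'error' => $e->getMessage(),
        ]);
    }
}
```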

    The Fix: Make Critical Listeners Synchronous

    For listeners that are part of a critical workflow — where silent failure is unacceptable — remove ShouldQueue:

    // Make it synchronous — runs in the same process as the dispatcher
    class TriggerDataExport // No ShouldQueue
    {
        public function handle(AccountDeactivated $event): void
        {
            try {
                $this->exportService->generate($event->account);
            } catch (\Throwable $e) {
                // Now you CAN catch failures
                $event->account->addNote(
                    "Automatic data export failed: {$e->getMessage()}"
                );
                $event->account->flagForReview('compliance');
            }
        }
    }

    Alternative: Direct Method Calls for Critical Paths

    If the listener exists solely because of one dispatcher, skip events entirely for the critical path:

    class HandleAccountDeactivation implements ShouldQueue
    {
        public function handle(DataExportService $exportService): void
        {
            $this->revokeAccessTokens($this->account);
    
            // Direct call instead of event dispatch
            try {
                $exportService->generateComplianceExport($this->account);
            } catch (\Throwable $e) {
                $this->account->addNote("Automatic data export failed: {$e->getMessage()}");
                $this->account->flagForReview('compliance');
            }
        }
    }

    When Events Are Still Right

    Events shine when:

    • Multiple independent listeners react to the same event
    • The listener’s failure doesn’t affect the main workflow
    • You genuinely need decoupling (different bounded contexts)

    But when a queued job dispatches an event to a queued listener for a single critical operation? That’s a fragile chain with a silent failure mode. Make it synchronous or call the service directly.

    The rule of thumb: if the listener failing means the workflow is broken, don’t put a queue boundary between them.

  • Circuit Breakers: Stop Hammering Dead APIs From Your Queue Workers

    Your queue workers are burning through jobs at full speed, retrying a third-party API endpoint that’s been down for three hours. Every retry fails. Every failure generates a Sentry alert. You’re 55,000 errors deep, your queue is backed up, and the external service doesn’t care how many times you knock.

    This is what happens when you don’t have a circuit breaker.

    The Pattern

    A circuit breaker sits between your application and an unreliable external service. It tracks failures and, after a threshold, stops sending requests entirely for a cooldown period. The metaphor comes from electrical engineering — when there’s too much current, the breaker trips to prevent damage.

    Three states:

    • Closed — everything works normally, requests flow through
    • Open — too many failures, all requests short-circuit immediately (return error without calling the API)
    • Half-Open — after cooldown, let one request through to test if the service recovered

    A Simple Implementation

    class CircuitBreaker
    {
        public function __construct(
            private string $service,
            private int $threshold = 5,
            private int $cooldownSeconds = 300,
        ) {}
    
        public function isAvailable(): bool
        {
            $failures = Cache::get("circuit:{$this->service}:failures", 0);
            $openedAt = Cache::get("circuit:{$this->service}:opened_at");
    
            if ($failures < $this->threshold) {
                return true; // Closed: requests flow normally
            }
    
            // Half-open: the cooldown has elapsed (or the opened_at marker
            // expired from the cache), so let a trial request through.
            // Without the null check, an expired opened_at key would leave
            // the circuit permanently open, because the failure counter
            // never expires on its own. Note this simplified version has no
            // locking, so several workers may probe at once.
            if (! $openedAt || $openedAt->diffInSeconds(now()) > $this->cooldownSeconds) {
                return true;
            }
    
            return false; // Open: reject immediately
        }
    
        public function recordFailure(): void
        {
            $failures = Cache::increment("circuit:{$this->service}:failures");
    
            if ($failures >= $this->threshold) {
                Cache::put("circuit:{$this->service}:opened_at", now(), $this->cooldownSeconds * 2);
            }
        }
    
        public function recordSuccess(): void
        {
            Cache::forget("circuit:{$this->service}:failures");
            Cache::forget("circuit:{$this->service}:opened_at");
        }
    }

    Using It in a Queue Job

    class FetchWeatherDataJob implements ShouldQueue
    {
        public function handle(WeatherApiClient $client): void
        {
            $breaker = new CircuitBreaker('weather-api', threshold: 5, cooldownSeconds: 300);
    
            if (! $breaker->isAvailable()) {
                // Release back to queue for later
                $this->release(60);
                return;
            }
    
            try {
                $response = $client->getConditions($this->stationId);
                $breaker->recordSuccess();
                $this->storeWeatherData($response);
            } catch (ApiException $e) {
                $breaker->recordFailure();
                throw $e; // Let Laravel's retry handle it
            }
        }
    }

    Pair It With Exponential Backoff

    Circuit breakers prevent hammering. Exponential backoff spaces out retries. Use both:

    class FetchWeatherDataJob implements ShouldQueue
    {
        public int $tries = 5;
    
        public function backoff(): array
        {
            return [30, 60, 120, 300, 600]; // 30s, 1m, 2m, 5m, 10m
        }
    }

    When You Need This

    If your application integrates with external APIs that can go down — email verification services, geocoding providers, analytics feeds — you need circuit breakers. The symptoms that tell you it’s time:

    • Thousands of identical errors in your error tracker from one endpoint
    • Queue workers stuck retrying failed jobs instead of processing good ones
    • Your application slowing down because every request waits for a timeout
    • Rate limit responses (HTTP 429) from the external service

    Without a circuit breaker, a flaky external service doesn’t just affect itself — it takes your entire queue infrastructure down with it. Five minutes of setup saves hours of firefighting.

  • Eloquent Relationship Caching: Why attach() Leaves Your Model Stale

    You call attach() on a relationship, then immediately check that relationship in the next line. It returns empty. The data is in the database, but your model doesn’t know about it.

    The Problem

    Eloquent caches loaded relationships in memory. Once you access a relationship, Laravel stores the result on the model instance. Subsequent accesses return the cached version — even if the underlying data has changed.

    // Load the relationship (caches in memory)
    $article->assignedCategory;  // null
    
    // Update the pivot table
    $newCategory->articles()->attach($article);
    
    // This still returns null! Cached.
    $article->assignedCategory;  // null (stale)
    

    The attach() call writes to the database, but the model’s in-memory relationship cache still holds the old value.

    The Fix: refresh()

    Call refresh() on the model to reload it and clear all cached relationships:

    $newCategory->articles()->attach($article);
    
    // Reload the model from the database
    $article->refresh();
    
    // Now it returns the fresh data
    $article->assignedCategory;  // Category { name: 'Technology' }
    

    refresh() re-fetches the model’s attributes and clears the relationship cache, so the next access hits the database.

    refresh() vs load()

    You might think load() would work:

    // This re-queries the relationship
    $article->load('assignedCategory');
    

    It does work for this specific relationship, but refresh() is more thorough. It reloads everything — attributes and all eager-loaded relationships. Use load() when you want to reload a specific relationship. Use refresh() when the model’s state might be stale across multiple attributes.
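
    A third option on recent Laravel versions is unsetRelation(), which simply drops the cached value so the next access lazy-loads it fresh, without running a query immediately:

```php
$newCategory->articles()->attach($article);

// Forget only the cached relationship; no query runs yet
$article->unsetRelation('assignedCategory');

// This access now lazy-loads fresh data from the database
$article->assignedCategory;
```

    This is cheaper than refresh() when you only care about one relationship and don't need the model's attributes reloaded.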

    When This Bites You

    This typically surfaces in multi-step workflows where the same model passes through several operations:

    // Step 1: Assign to initial category
    $defaultCategory->articles()->attach($article);
    
    // Step 2: Process the article
    $result = $pipeline->run($article);
    
    // Step 3: On failure, reassign to a different category
    if (!$result->success) {
        $defaultCategory->articles()->detach($article);
        $reviewCategory->articles()->attach($article);
    
        $article->refresh();  // Critical! Without this, downstream code sees stale category.
    
        // Step 4: Log the transition
        $transition = new CategoryReassigned($article, $reviewCategory, $defaultCategory);
        $logger->record($transition);
    }
    

    Without the refresh(), any code that checks $article->assignedCategory after step 3 will still see the old category (or null). Event handlers, logging, validation — all get stale data.

    The Pattern

    Any time you modify a model’s relationships via attach(), detach(), sync(), or toggle(), and then need to read that relationship in the same request:

    // Write
    $model->relationship()->attach($relatedId);
    
    // Refresh
    $model->refresh();
    
    // Read (now safe)
    $model->relationship;
    

    This is different from updating model attributes directly, where save() keeps the in-memory state in sync. Pivot table operations bypass the model’s state management entirely — they go straight to the database without telling the model.

    Small habit. Prevents a class of bugs that are genuinely confusing to debug because the database looks correct but the code behaves like the data doesn’t exist.

  • afterCommit(): Dispatch Queue Jobs Only After the Transaction Commits

    You dispatch a queue job right after saving a model. The job fires, tries to look up the record… and it’s not there. The transaction hasn’t committed yet.

    This is one of those bugs that works fine locally (where your database has near-zero latency) but bites you in production with read replicas or under load.

    The Problem

    Consider this common pattern:

    DB::transaction(function () use ($report) {
        $report->status = 'generating';
        $report->save();
    
        GenerateReport::dispatch($report);
    });
    

    The job gets dispatched inside the transaction. Depending on your queue driver, the job might start executing before the transaction commits. The job tries to find the report with status = 'generating', but the database still shows the old state.

    The Fix: afterCommit()

    Laravel provides afterCommit() on dispatchable jobs to ensure the job only hits the queue after the wrapping transaction commits:

    DB::transaction(function () use ($report) {
        $report->status = 'generating';
        $report->save();
    
        GenerateReport::dispatch($report)->afterCommit();
    });
    

    Now the job waits until the transaction successfully commits before being pushed to the queue. If the transaction rolls back, the job never dispatches at all.

    Setting It Globally

    If a specific job should always wait for the commit, so you don't have to remember afterCommit() at every dispatch site, add the property to the job class:

    class GenerateReport implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    
        public $afterCommit = true;
    
        // ...
    }
    

    Or set it in your queue.php config for the entire connection:

    // config/queue.php
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'default',
        'after_commit' => true,  // All jobs wait for commit
    ],
    

    When You Actually Need This

    This pattern is critical when:

    • You use read replicas — the replica might not have the committed data yet
    • Your job uses SerializesModels — it re-fetches the model from the database when deserializing
    • You dispatch jobs that depend on the data you just saved in the same transaction
    • You’re dispatching webhook notifications — the external system might call back before your DB commits

    The Gotcha

    If there’s no wrapping transaction, afterCommit() dispatches immediately — same as without it. It only delays when there’s an active transaction to wait for.

    This is a good thing. It means you can set $afterCommit = true on all your jobs without worrying about jobs that are dispatched outside transactions.
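
    To make the behavior concrete, here is the same job dispatched in both contexts (a sketch reusing the GenerateReport job from above):

```php
// No open transaction: afterCommit() is a no-op and the job
// is pushed to the queue immediately.
GenerateReport::dispatch($report)->afterCommit();

// Inside a transaction: the job is held until the commit succeeds,
// and never dispatched if the transaction rolls back.
DB::transaction(function () use ($report) {
    GenerateReport::dispatch($report)->afterCommit();
});
```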

    One of those small changes that prevents a whole class of race condition bugs in production.

  • Read Replica Lag Breaks Laravel Queue Jobs Before handle() Runs

    You dispatch a queue job right after saving a model. The job picks it up in milliseconds. And then — ModelNotFoundException.

    The model definitely exists. You just created it. You can query it manually. But the queue worker says otherwise.

    The Culprit: Read Replicas

    If your database uses read replicas (and most production setups do), there’s a lag between the primary and the replicas. Usually milliseconds, sometimes longer under load.

    Laravel’s SerializesModels trait only stores the model’s ID when the job is serialized. When the worker deserializes it, it runs a fresh query — against the read replica. If the replica hasn’t caught up yet, the model doesn’t exist from the worker’s perspective.

    The cruel part: this happens before your handle() method runs, so any try/catch or fallback logic inside handle() never gets a chance to fire. The job dies during deserialization.

    The Fix: afterCommit()

    Laravel has a built-in solution. Add afterCommit to your job:

    class GenerateInvoice implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    
        public $afterCommit = true;
    
        public function handle(): void
        {
            // This now runs after the transaction commits
            // and the replica has had time to sync
        }
    }

    Or dispatch with the method:

    GenerateInvoice::dispatch($invoice)->afterCommit();

    Other Options

    If afterCommit isn’t enough (replicas can still lag after commit), you have two more tools:

    // Option 1: Add a small delay
    GenerateInvoice::dispatch($invoice)->delay(now()->addSeconds(5));
    
    // Option 2: Skip missing models instead of failing
    public $deleteWhenMissingModels = true;

    Option 2 is a silent skip — the job just disappears. Use it only when the job is truly optional (like sending a notification for a model that might get deleted).

    The Lesson

    If you’re dispatching queue jobs immediately after writes and seeing phantom ModelNotFoundException errors, check your database topology. Read replicas + SerializesModels + fast workers = a race condition that only shows up under load. afterCommit() is the cleanest fix.

  • Translation Placeholders: Enable Word Order Flexibility for Localizers

    When defining translation strings in Laravel, always use named placeholders (like :count or :percent) instead of positional ones or manual string concatenation.

    
    // Bad
    __('You have ' . $count . ' notifications');
    
    // Good
    __('You have :count notifications', ['count' => $count]);
    

    Different languages have varying word orders; named placeholders allow translators to move variables within the sentence without breaking the logic or requiring code changes for each locale.
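
    Since __('You have :count notifications') uses the English string itself as the key, the translation lives in a JSON language file. A Japanese translation (in a hypothetical lang/ja.json) moves the placeholder into the middle of the sentence with no PHP changes:

```json
{
    "You have :count notifications": "通知が:count件あります"
}
```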

  • Frontend/Backend Parity: Treat Your UI Logic as the Specification

    When duplicating complex UI validation or calculation logic on the backend, treat the frontend component as the source of truth. Copy the logic line-by-line into your PHP services, and include comments with the original file and line numbers.

    
    // Replicating logic from ProductForm.vue:142
    if ($data['price'] > 0) {
        // ...
    }
    

    This makes future parity checks easier and ensures your backend handles edge cases exactly as the user experience defines them. It also serves as great documentation for why certain backend checks exist.

  • Stop Passing App::getLocale() to Laravel’s __() Helper

    I’ve seen this pattern in multiple Laravel codebases — a translation helper that manually fetches the locale before passing it to the translation function:

    // Don't do this
    $locale = App::getLocale();
    $label = __('orders.status_label', [], $locale);

    That third parameter is unnecessary. The __() helper already calls App::getLocale() internally when no locale is provided.

    How __() Actually Works

    Under the hood, __() delegates to the Translator’s get() method:

    // Illuminate\Translation\Translator::get()
    public function get($key, array $replace = [], $locale = null, $fallback = true)
    {
        $locale = $locale ?: $this->locale;
        // ...
    }

    When $locale is null (the default), it uses $this->locale — which is the application locale set by App::setLocale(). It’s the same value App::getLocale() returns.

    So the clean version is just:

    // Do this instead
    $label = __('orders.status_label');

    When You DO Need the Locale Parameter

    The third parameter exists for a reason — when you need a translation in a specific locale that differs from the current one:

    // Sending an email in the user's preferred language
    $subject = __('emails.welcome_subject', [], $user->preferred_locale);
    
    // Generating a PDF in a specific language regardless of current request
    $label = __('invoice.total', [], 'ja');

    These are legitimate uses. The anti-pattern is fetching the current locale just to pass it right back.

    The Compound Version

    This gets worse when the manual locale fetch spreads across a method:

    // This entire method is doing unnecessary work
    public function getLabels()
    {
        $locale = App::getLocale();
        
        return [
            'name'    => __('fields.name', [], $locale),
            'email'   => __('fields.email', [], $locale),
            'phone'   => __('fields.phone', [], $locale),
            'address' => __('fields.address', [], $locale),
        ];
    }

    Every single $locale parameter is redundant. This should be:

    public function getLabels()
    {
        return [
            'name'    => __('fields.name'),
            'email'   => __('fields.email'),
            'phone'   => __('fields.phone'),
            'address' => __('fields.address'),
        ];
    }

    Same output, less noise, fewer places to introduce bugs. The framework already handles locale resolution — let it do its job.

  • The Hidden public/hot File in Laravel Mix HMR

    You run npm run hot and Laravel Mix starts a webpack dev server with Hot Module Replacement. Your browser picks up edits to Vue components or CSS instantly, without a full reload. Magic. But have you ever wondered how Laravel knows to serve assets from the dev server instead of the compiled files in public/?

    The answer is a tiny file you’ve probably never noticed: public/hot.

    What public/hot Does

    When you run npm run hot, Laravel Mix creates a file at public/hot. It contains the dev server URL — typically http://localhost:8080.

    Laravel’s mix() helper checks for this file on every request:

    // Simplified version of what mix() does internally
    if (file_exists(public_path('hot'))) {
        $devServerUrl = rtrim(file_get_contents(public_path('hot')));
        return $devServerUrl . $path;
    }
    
    // No hot file? Serve from mix-manifest.json as normal
    return $manifest[$path];

    So when public/hot exists, mix('js/app.js') returns http://localhost:8080/js/app.js instead of /js/app.js?id=abc123.

    When This Bites You

    The classic gotcha: you kill the dev server with Ctrl+C, but the public/hot file doesn’t get cleaned up. Now your app is trying to load assets from a server that doesn’t exist.

    # Symptoms: blank page, console full of ERR_CONNECTION_REFUSED
    # Fix:
    rm public/hot

    Add it to your troubleshooting checklist. If assets suddenly stop loading after you were running HMR, check if public/hot is still hanging around.

    Why This Matters for Teams

    The public/hot file should always be in your .gitignore. If someone accidentally commits it, everyone else’s app will try to load assets from localhost:8080 — which won’t be running on their machines.

    # .gitignore
    public/hot

    Most Laravel projects already have this, but if you bootstrapped your project a long time ago or generated your .gitignore manually, double-check.

    It’s a tiny file with a simple job, but understanding it saves you 20 minutes of confused debugging when HMR stops working or your assets vanish after killing the dev server.

  • Per-Step Try/Catch: Don’t Let One Bad Record Kill Your Entire Batch

    Last week I had an Artisan command that processed about 2,000 records. The first version used a transaction wrapper — if any single record failed, the whole batch rolled back. Clean, right?

    Except when record #1,847 hit an edge case, all 1,846 successful records got nuked. That’s not clean. That’s a landmine.

    The Fix: Per-Step Try/Catch

    Instead of wrapping the entire loop in one big try/catch, wrap each iteration individually:

    $records->each(function ($record) {
        try {
            $this->processRecord($record);
            $this->info("✅ Processed #{$record->id}");
        } catch (\Throwable $e) {
            $this->error("❌ Failed #{$record->id}: {$e->getMessage()}");
            Log::error("Batch process failed", [
                'record_id' => $record->id,
                'error' => $e->getMessage(),
            ]);
        }
    });

    Why This Matters

    The all-or-nothing approach feels safer because it’s “atomic.” But for batch operations where each record is independent, it’s actually worse. One bad record shouldn’t hold 1,999 good ones hostage.

    The status symbols (✅/❌) aren’t just cute either. When you’re watching a command chug through thousands of records, that visual feedback tells you instantly if something’s going sideways without reading log files.
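
    A small extension of the loop above keeps running totals, so the command ends with a summary instead of forcing you to scroll back through thousands of lines (a sketch; processRecord() is the same hypothetical method as above):

```php
$ok = 0;
$failed = 0;

$records->each(function ($record) use (&$ok, &$failed) {
    try {
        $this->processRecord($record);
        $ok++;
    } catch (\Throwable $e) {
        $failed++;
        $this->error("❌ Failed #{$record->id}: {$e->getMessage()}");
        Log::error('Batch process failed', [
            'record_id' => $record->id,
            'error' => $e->getMessage(),
        ]);
    }
});

$this->info("Done: {$ok} succeeded, {$failed} failed.");
```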

    When to Use Which

    Use transactions (all-or-nothing) when records depend on each other. Think: transferring money between accounts, or creating a parent record with its children.

    Use per-step try/catch when each record is independent. Think: sending notification emails, syncing external data, or migrating legacy records.

    The pattern is simple but I’ve seen teams default to transactions for everything. Sometimes the safest thing is to let the failures fail and keep the successes.