Category: Laravel

  • When Denormalized Data Creates Filter-Display Mismatches

    I was debugging a filter that seemed to work perfectly—until users started reporting missing results. The filter UI said “Show items with USD currency,” but items with USD weren’t appearing.

    The problem? The filter was checking one field (batch_items.currency), but the UI was displaying a different field (batches.cached_currencies)—a denormalized summary column that aggregated currencies from all batch items.

    Here’s what was happening:

    // Filter checked individual batch items
    $query->whereHas('batchItems', function ($q) use ($currency) {
        $q->where('currency', $currency);
    });
    
    // But the UI displayed the denormalized summary
    $batch->cached_currencies; // ["USD", "EUR", "GBP"]
    

    When all of a batch’s items were sold out or inactive (and therefore excluded by the relationship’s constraints), the whereHas() check matched nothing for that batch, even though cached_currencies still showed “USD” because the cached column hadn’t been refreshed.

    The Fix

    Match the filter logic to what users actually see. If you’re displaying denormalized data, either:

    1. Filter against the same denormalized field:
    // Filter matches what's displayed
    $query->whereJsonContains('cached_currencies', $currency);
    
    2. Or ensure the denormalized field stays in sync:
    // Observer on BatchItem: refresh the parent batch's cache whenever an item changes
    class BatchItemObserver
    {
        public function saved(BatchItem $batchItem)
        {
            $batchItem->batch->update([
                'cached_currencies' => $batchItem->batch->batchItems()
                    ->pluck('currency')
                    ->unique()
                    ->values(),
            ]);
        }
    }
    
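    Observers only fire once they’re registered. A minimal registration sketch (assuming the class names above) in a service provider:

    // app/Providers/AppServiceProvider.php
    public function boot(): void
    {
        // Hypothetical names from the example above
        BatchItem::observe(BatchItemObserver::class);
    }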

    The Lesson

    When users see one thing but your filter checks another, you’ll get confusing bugs. Always verify: Does my filter logic match what’s displayed in the UI?

    If you’re caching/denormalizing data for performance, make sure filters query the same cached field—or keep it strictly in sync.

  • Role-Based Historical Data Migrations with Progress Tracking

    Fixing historical data issues in production is nerve-wracking. You need to update thousands of records, but a single wrong JOIN can cascade the update to user data you shouldn’t touch.

    Here’s a pattern I use: role-based filtering combined with progress tracking to make data migrations safer and more observable.

    The Problem: Broad Updates Are Dangerous

    Say you need to fix a file naming pattern in a generated_reports table. The naive approach:

    public function up(): void
    {
        DB::table('generated_reports')
            ->where('file_name', 'LIKE', 'Old_Format_%')
            ->update([
                'file_name' => DB::raw("REPLACE(file_name, 'Old_Format_', 'New_Format_')")
            ]);
    }
    

    But what if this table has reports generated by customers, internal admins, AND automated systems? You only want to fix admin reports, but this query hits everything.

    The Pattern: Role-Based Scoping

    Use JOINs to filter by user roles, then add progress tracking:

    use Symfony\Component\Console\Output\ConsoleOutput;
    
    public function up(): void
    {
        $output = new ConsoleOutput();
        
        // Step 1: Find affected records with role filtering
        $affected = DB::table('generated_reports')
            ->join('users', 'generated_reports.user_id', '=', 'users.id')
            ->join('role_user', 'users.id', '=', 'role_user.user_id')
            ->join('roles', 'role_user.role_id', '=', 'roles.id')
            ->whereIn('roles.slug', ['admin', 'finance', 'manager'])
            ->where('generated_reports.file_name', 'LIKE', 'Old_Format_%')
            ->where('generated_reports.created_at', '>=', '2024-01-01')
            ->select('generated_reports.id')
            ->distinct()  // Prevent duplicate IDs from multiple role assignments
            ->get();
        
        $output->writeln("Found {$affected->count()} records to update");
        
        // Step 2: Update in chunks with progress indicator
        $affected->chunk(100)->each(function ($chunk) use ($output) {
            DB::table('generated_reports')
                ->whereIn('id', $chunk->pluck('id'))
                ->update([
                    'file_name' => DB::raw("REPLACE(file_name, 'Old_Format_', 'New_Format_')")
                ]);
            
            $output->write('.');  // Progress indicator
        });
        
        $output->writeln("\nMigration complete: {$affected->count()} records updated");
    }
    

    Why This Matters

    This pattern:

    • Scopes updates safely: JOINs to roles table ensure you only touch records created by specific user types
    • Prevents accidental cascade: If the role filter is misconfigured, the update matches nothing instead of touching the wrong records
    • Visible progress: ConsoleOutput lets you watch long-running migrations in real-time
    • Handles duplicates: distinct() prevents row multiplication from many-to-many role relationships
    • Testable in dev: Run php artisan migrate --pretend on staging to review the generated queries before applying any changes

    Key Components Explained

    1. Role-Based JOIN Chain

    ->join('users', 'generated_reports.user_id', '=', 'users.id')
    ->join('role_user', 'users.id', '=', 'role_user.user_id')
    ->join('roles', 'role_user.role_id', '=', 'roles.id')
    ->whereIn('roles.slug', ['admin', 'finance', 'manager'])
    

    This filters to records created by users with specific roles. If your app uses a different permission system (Spatie, custom), adjust accordingly.
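
    For example, with Spatie’s laravel-permission package the pivot table is model_has_roles. A sketch of the equivalent join chain, assuming Spatie’s default table and column names:

    ->join('model_has_roles', function ($join) {
        // model_type guards against other role-bearing models sharing the pivot
        $join->on('users.id', '=', 'model_has_roles.model_id')
            ->where('model_has_roles.model_type', \App\Models\User::class);
    })
    ->join('roles', 'model_has_roles.role_id', '=', 'roles.id')
    ->whereIn('roles.name', ['admin', 'finance', 'manager'])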

    2. ConsoleOutput for Progress

    use Symfony\Component\Console\Output\ConsoleOutput;
    
    $output = new ConsoleOutput();
    $output->writeln('<info>Found X records</info>');  // <info> renders green in the terminal
    $output->write('.');  // Progress dots
    

    Works in migrations because they run via Artisan. You see progress live in your terminal.
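
    If you prefer a real progress bar over dots, Symfony’s ProgressBar works with the same output object. A sketch reusing the $affected collection from the migration above:

    use Symfony\Component\Console\Helper\ProgressBar;

    $bar = new ProgressBar($output, $affected->count());
    $bar->start();

    $affected->chunk(100)->each(function ($chunk) use ($bar) {
        // ... run the chunked update here ...
        $bar->advance($chunk->count());
    });

    $bar->finish();
    $output->writeln('');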

    3. distinct() to Avoid Duplicates

    When a user has multiple roles, JOIN creates duplicate rows. distinct() collapses them:

    ->select('generated_reports.id')
    ->distinct()
    

    Without this, the same ID appears multiple times in the whereIn() list, inflating the counts and chunk sizes (harmless but wasteful).

    When to Use This

    • Historical data fixes that should only affect specific user types (admins, internal users, etc.)
    • Migrations on large tables where you need to see progress
    • When UPDATE scope is safety-critical (don’t want to accidentally touch customer data)
    • When you need to generate an affected-IDs report before applying changes

    Testing Before Running

    Always verify scope on staging first:

    # See the query without executing
    php artisan migrate --pretend
    
    # Or add a dry-run to your migration:
    if (app()->environment('local')) {
        $output->writeln("DRY RUN - would update: " . $affected->pluck('id')->implode(', '));
        return;
    }
    

    Remember: Data migrations in production are one-way. Role-based filtering gives you an extra safety net to ensure you’re only touching the records you intend to fix.

  • Bypassing Global Query Scopes for Admin/Backend Features

    Global query scopes are great for filtering production data — hiding soft-deleted records, filtering by status, etc. But when you’re building admin tools, those same scopes become obstacles.

    Here’s the problem: you’re building a translation editor for your e-commerce dashboard. Your Product model has a global scope that hides discontinued products. But translators need to update ALL products, including discontinued ones, because those translations might be needed for historical orders or future reactivation.

    The Pattern: Explicit Scope-Bypassing Methods

    Instead of fighting with scopes everywhere, create dedicated methods that explicitly bypass them. Here the same idea is shown on an Order model whose items are hidden by global scopes:

    // In your Order model
    public function all_items()
    {
        return $this->hasMany(OrderItem::class)
            ->withoutGlobalScopes()
            ->get();
    }
    
    // Usage in admin controllers
    $order = Order::find($id);
    $allItems = $order->all_items(); // Gets even soft-deleted items
    

    Compare this to the default relationship:

    // Normal relationship - respects global scopes
    $order->items; // Hides discontinued/soft-deleted items
    
    // Admin relationship - explicit about bypassing
    $order->all_items(); // Shows everything
    

    Why This Matters

    This pattern:

    • Makes intent explicit: all_items() signals “I know what I’m doing, show me everything”
    • Isolates scope-bypassing: Only admin code uses these methods; production code uses normal relationships
    • Prevents bugs: No accidental scope bypassing in customer-facing features
    • Self-documenting: Method name explains why scopes are bypassed

    When to Use This

    • Admin dashboards that need full data visibility
    • Translation/content management interfaces
    • Data export tools
    • Debugging utilities
    • Bulk operations that shouldn’t skip “hidden” records

    Alternative Approaches

    You could use withoutGlobalScopes() inline everywhere:

    $order->items()->withoutGlobalScopes()->get();
    

    But this is:

    • Verbose and repetitive
    • Easy to forget in some places
    • Harder to grep for “where are we bypassing scopes?”

    A named method is cleaner, easier to test, and more maintainable.
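
    “Easier to test” can look like this minimal feature-test sketch (the factories and relationship names are assumptions, not from the original):

    public function test_all_items_includes_soft_deleted_order_items(): void
    {
        $order = Order::factory()->create();
        OrderItem::factory()->for($order)->create();
        OrderItem::factory()->for($order)->create()->delete(); // soft delete one item

        $this->assertCount(1, $order->items);        // normal relation hides it
        $this->assertCount(2, $order->all_items());  // explicit method shows everything
    }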

    Bonus: Selective Scope Bypassing

    You can also bypass specific scopes while keeping others:

    // Requires: use Illuminate\Database\Eloquent\SoftDeletingScope;
    public function published_items_including_deleted()
    {
        return $this->hasMany(OrderItem::class)
            ->withoutGlobalScope(SoftDeletingScope::class)
            ->get();
    }
    

    This keeps your “published” scope active but removes soft delete filtering — useful when you want partial bypassing.

    Remember: Global scopes exist to protect production users from seeing the wrong data. When you need admin superpowers, make it explicit with dedicated methods. Your future self (and your code reviewers) will thank you.

  • Laravel UX: Silent Success, Loud Failures

    Good UX in Laravel isn’t about confirming every action. It’s about staying silent when things go as expected and being loud only when something breaks.

    The Anti-Pattern

    You’ve seen this: user clicks “Delete”, and a success toast pops up: “Success! Item deleted!” But… the user just clicked delete. They expected it to work. Why confirm the obvious?

    public function delete(Request $request, $id)
    {
        $item = Item::findOrFail($id);
        $item->delete();
    
        // Unnecessary confirmation
        session()->flash('success', 'Item deleted successfully!');
    
        return back();
    }

    This creates notification fatigue. Users start ignoring all flash messages because most are just noise.

    The Better Approach

    Only show messages when something unexpected happens:

    public function delete(Request $request, $id)
    {
        $item = Item::findOrFail($id);
    
        try {
            $item->delete();
            // Silent success - user clicked delete, it deleted
            return back();
        } catch (\Exception $e) {
            // Loud failure - unexpected! User needs to know
            session()->flash('error', 'Could not delete item. Please try again.');
            report($e);
            return back();
        }
    }

    When to Show Success Messages

    Success confirmations are useful in these cases:

    • Long-running operations — “Export complete! Download ready.”
    • Background processing — “We’ll email you when import finishes.”
    • Non-obvious outcomes — “Payment scheduled for next Monday.”
    • Multi-step processes — “Step 2 of 3 complete.”

    But for immediate, synchronous actions where the UI already shows the result? Stay silent.

    Real-World Impact

    In one dashboard, removing unnecessary success confirmations for routine actions reduced notification spam by ~70%. Users reported the interface felt “less noisy” and they actually started noticing error messages when they appeared.

    Key Takeaway

    Expected outcomes should be silent. Unexpected outcomes need alerts. Save your flash messages for exceptions and warnings—that’s when users actually need them.

  • Laravel API Integration: Stop Retrying Permanent Failures

    When integrating with external APIs in Laravel, catching generic exceptions and retrying blindly wastes queue resources. Here’s a better approach: parse error responses and throw domain-specific exceptions.

    The Problem

    Your queue job calls an API. Sometimes it returns 500 Internal Server Error (temporary). Sometimes it returns 400 Bad Request - Invalid Resource ID (permanent config error). If you catch both with a generic exception, the queue retries forever even for permanent failures.

    try {
        $data = $this->apiClient->fetchResource($resourceId);
    } catch (ApiException $exception) {
        // Generic catch = queue retries uselessly for config errors
        throw $exception;
    }

    The Solution

    Parse the API error response and throw appropriate exceptions:

    try {
        $data = $this->apiClient->fetchResource($resourceId);
    } catch (ApiException $exception) {
    $errorCode = json_decode(
        (string) $exception->getPrevious()?->getResponse()?->getBody(),
        true
    )['error'] ?? null;
    
        if ($errorCode === 'INTERNAL_SERVER_ERROR') {
            // Temporary failure - queue should retry
            throw new TemporarilyUnavailableException(
                'Cannot fetch data',
                500,
                $exception
            );
        }
    
        if ($errorCode === 'INVALID_RESOURCE_ID') {
            // Permanent config error - stop retrying immediately
            throw new InvalidMappingException(
                $this->getResourceMapping($resourceId),
                previous: $exception
            );
        }
    
        // Unknown error - rethrow original
        throw $exception;
    }

    Why This Works

    Different exception types let your queue orchestrator handle them appropriately:

    • TemporarilyUnavailableException — Queue retries with backoff (API might recover)
    • InvalidMappingException — Queue deletes job immediately (config needs manual fix)
    • Generic exceptions — Default retry behavior
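
    On the consuming side, a minimal sketch of how a queued job might turn those exceptions into queue decisions (the job class, the ApiClient type hint, and the delay are illustrative assumptions):

    class FetchResourceJob implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable;

        public function __construct(private string $resourceId) {}

        public function handle(ApiClient $apiClient): void
        {
            try {
                $apiClient->fetchResource($this->resourceId);
            } catch (InvalidMappingException $e) {
                report($e);
                $this->fail($e);    // permanent: mark the job failed, no more retries
            } catch (TemporarilyUnavailableException $e) {
                $this->release(60); // temporary: back onto the queue with a delay
            }
        }
    }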

    In one real-world case, this pattern stopped ~20,900 useless retry attempts for permanent mapping errors, freeing queue resources immediately.

    Key Takeaway

    Don’t treat all API failures the same. Parse error responses, throw specific exceptions, and let your queue orchestrator make smart retry decisions.

  • Building Dual Vue Version Compatible Components in Legacy Laravel Apps

    When maintaining legacy Laravel dashboards that use Vue 1.x but planning migration to Vue 2.x, you can build forward-compatible components using template literals instead of single-file components.

    The Problem

    Legacy Vue 1.x build setups generally can’t compile modern single-file components (<template>, <script>, and <style> blocks), and components written against the Vue 2.x API break in a Vue 1.x runtime.

    The Solution

    Export components as JavaScript objects with template literals:

    // Compatible with both Vue 1.x and 2.x
    const DataTableComponent = {
      data() {
        return {
          items: [],
          loading: false
        }
      },
      template: `
        <div class="data-table">
          <div v-if="loading">Loading...</div>
          <table v-else>
            <tr v-for="item in items">
              <td>{{ item.name }}</td>
            </tr>
          </table>
        </div>
      `,
      mounted() {
        this.fetchData()
      },
      methods: {
        fetchData() {
          // API call logic
        }
      }
    }
    
    // Export for global registration
    if (typeof window !== 'undefined') {
      window.DataTableComponent = DataTableComponent
    }

    Integration Pattern

    1. Compile in webpack/mix: resources/assets/js/components.js
    2. Export to window in old dashboard
    3. Register in legacy Vue 1.x app: Vue.component('data-table', window.DataTableComponent)
    4. Use in Blade: <data-table :config="config"></data-table> (see the Blade sketch after this list)
    5. Same component works when you upgrade to Vue 2.x later
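
    For step 4, a minimal Blade sketch of passing server-side data into the component (the view path and variable name are assumptions):

    {{-- resources/views/admin/dashboard.blade.php (sketch) --}}
    <div id="app">
        {{-- @json serializes the PHP value into the Vue prop --}}
        <data-table :config='@json($tableConfig)'></data-table>
    </div>

    <script src="{{ mix('js/components.js') }}"></script>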

    Why This Works

    • Template literals are standard ES6 JavaScript: Both Vue versions understand them
    • Plain component object definitions: Vue 1.x and 2.x both support this format
    • Window exports bypass module incompatibilities: No need for different build setups

    Benefits

    You maintain one codebase instead of duplicating components across Vue versions during multi-month migrations. When you finally upgrade to Vue 2.x, these components generally keep working with few or no changes.

    Real-World Example

    const ReportBuilderComponent = {
      props: ['initialFilters'],
      data() {
        return {
          filters: this.initialFilters || {},
          results: [],
          loading: false
        }
      },
      template: `
        <div class="report-builder">
          <div class="filters">
            <input v-model="filters.startDate" type="date">
            <input v-model="filters.endDate" type="date">
            <button @click="runReport">Generate</button>
          </div>
          <div v-if="loading" class="loading">Loading...</div>
          <table v-else class="results">
            <tr v-for="row in results">
              <td>{{ row.metric }}</td>
              <td>{{ row.value }}</td>
            </tr>
          </table>
        </div>
      `,
      methods: {
        async runReport() {
          this.loading = true
          try {
            const response = await fetch('/api/reports', {
              method: 'POST',
              headers: { 'Content-Type': 'application/json' },
              body: JSON.stringify(this.filters)
            })
            this.results = await response.json()
          } finally {
            // Always reset the loading flag, even if the request fails
            this.loading = false
          }
        }
      }
    }
    
    if (typeof window !== 'undefined') {
      window.ReportBuilderComponent = ReportBuilderComponent
    }

    This pattern has saved countless hours during gradual Vue upgrades in legacy Laravel applications.

  • Handling Zero vs Null in API Responses: The != null Pattern

    A common bug in Laravel APIs: numeric fields that can be zero display as empty or dash in the frontend because of JavaScript’s falsy value handling. The fix is simple but often overlooked.

    The Problem

    You have an API that returns configuration with numeric values:

    // Backend returns: {"discount_rate": 0}

    Frontend code using truthy checks:

    // ❌ Displays "—" for zero
    {{ config.discount_rate ? config.discount_rate + '%' : '—' }}
    // Result: "—" (wrong!)

    Why It Fails

    In JavaScript, 0 is falsy, so the ternary takes the fallback branch. But 0 might be a perfectly valid value (like “0% discount”).

    The Fix

    // ✅ Explicit null check
    {{ config.discount_rate != null ? config.discount_rate + '%' : '—' }}
    // Result: "0%" (correct!)

    Backend Considerations

    Make sure your Laravel API distinguishes between null (not set) and zero (explicitly set to 0):

    // ❌ Bad: Converts 0 to null
    public function index()
    {
        return [
            'discount_rate' => $model->discount_rate ?: null
        ];
    }
    
    // ✅ Good: Preserves zero, only null when actually null
    public function index()
    {
        return [
            'discount_rate' => $model->discount_rate
        ];
    }

    Common Scenarios

    • Percentages: 0% is valid (no discount/markup)
    • Counts: 0 items is different from unknown
    • Ratings: 0 stars might be valid vs unrated
    • Prices: $0.00 (free) vs null (price not set)

    Pattern for Multiple Fields

    // Vue/React pattern
    const formatValue = (value) => {
        return value != null ? `${value}%` : '—';
    };
    
    <template>
        <div>Base Rate: {{ formatValue(config.base_rate_markup) }}</div>
        <div>Extra Rate: {{ formatValue(config.extra_rate_markup) }}</div>
    </template>

    Testing Strategy

    Always test these edge cases:

    1. Field is null (not set) → shows fallback
    2. Field is 0 → shows “0” or “0%”
    3. Field is positive number → shows number
    4. Field is negative number (if valid) → shows number
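
    A minimal feature-test sketch covering cases 1 and 2 (the endpoint, model, and factory names are assumptions):

    public function test_zero_discount_rate_is_returned_as_zero(): void
    {
        $config = Config::factory()->create(['discount_rate' => 0]);

        $this->getJson("/api/configs/{$config->id}")
            ->assertOk()
            ->assertJsonPath('discount_rate', 0); // zero must survive, not become null
    }

    public function test_unset_discount_rate_is_returned_as_null(): void
    {
        $config = Config::factory()->create(['discount_rate' => null]);

        $this->getJson("/api/configs/{$config->id}")
            ->assertOk()
            ->assertJsonPath('discount_rate', null);
    }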

    Database Schema Consideration

    In migrations, be explicit about nullability:

    // If 0 and null have different meanings
    $table->decimal('discount_rate', 5, 2)->nullable();
    // null = not configured, 0 = explicitly set to zero
    
    // If 0 is the default ("no discount")
    $table->decimal('discount_rate', 5, 2)->default(0);
    // Always has a value, never null

    The Rule of Thumb

    • Use value != null when zero is meaningful
    • Use value || default when zero should fallback (rare for numbers)
    • Document the distinction in your API responses

    This small change prevents countless “the UI shows dash instead of zero” bug reports.

  • Enum-Based Validation and Type Casting Pattern

    When working with dynamic configuration or polymorphic data structures, PHP 8.1+ backed enums can do more than just define constants—they can encapsulate validation rules and type casting logic, creating a self-documenting, single-source-of-truth for field definitions.

    The Pattern

    enum ConfigType: string
    {
        case DISCOUNT_RATE = 'discount_rate';
        case MAX_QUANTITY = 'max_quantity';
        case ENABLED = 'enabled';
        
        // Define the expected PHP type for each config
        public function castType(): string
        {
            return match($this) {
                self::DISCOUNT_RATE => 'float',
                self::MAX_QUANTITY => 'int',
                self::ENABLED => 'bool',
            };
        }
        
        // Generate validation rules based on type
        public function validationRules(): array
        {
            return match($this->castType()) {
                'float' => ['numeric', 'min:0', 'max:100'],
                'int' => ['integer', 'min:0'],
                'bool' => ['boolean'],
                default => ['string'],
            };
        }
        
        // Get all enum cases with their cast types
        public static function casts(): array
        {
            return collect(self::cases())
                ->mapWithKeys(fn($case) => [
                    $case->value => $case->castType()
                ])
                ->toArray();
        }
    }

    Using in Models

    class Config extends Model
    {
        protected $casts = [
            'type' => ConfigType::class,
        ];
        
        // Cast the stored string value to the correct type
        public function getValue(): mixed
        {
            return match($this->type->castType()) {
                'float' => (float) $this->value,
                'int' => (int) $this->value,
                'bool' => filter_var($this->value, FILTER_VALIDATE_BOOLEAN),
                default => $this->value,
            };
        }
    }

    Using in Controllers

    public function update(Request $request, $model)
    {
        // Generate validation rules dynamically from enum
        $rules = collect(ConfigType::cases())
            ->mapWithKeys(fn($type) => [
                $type->value => [
                    ...$type->validationRules(),
                    'nullable'
                ]
            ])
            ->toArray();
        
        $validated = $request->validate($rules);
        
        // Save each config with proper type casting
        foreach (ConfigType::cases() as $type) {
            if (array_key_exists($type->value, $validated)) {
                $model->setConfig($type, $validated[$type->value]);
            }
        }
    }
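
    setConfig() isn’t defined in the snippet above. A minimal sketch, assuming configs are stored one row per type in a related table with type and value columns:

    // On the parent model; assumes a configs() hasMany relation (hypothetical)
    public function setConfig(ConfigType $type, mixed $value): void
    {
        $this->configs()->updateOrCreate(
            ['type' => $type->value],
            ['value' => $value === null ? null : (string) $value]
        );
    }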

    Why This Works

    • Single Source of Truth: Type definitions, validation, and casting logic live together
    • Self-Documenting: New developers can read the enum to understand all available configuration fields
    • Type-Safe: PHP 8.1+ backed enums with IDE autocomplete
    • Maintainable: Adding a new config type is just one new case
    • Testable: Easy to unit test each enum method independently

    This pattern shines when building admin panels, user preferences, or any polymorphic configuration system where each field has different validation and type requirements.

  • Laravel Webhooks: Complete Side Effects Before Firing

    The Problem: Webhooks Firing with Incomplete Data

    You’ve built a bulk operation tool that processes multiple records—replacing files, updating statuses, generating documents. At the end of the process, you fire a webhook to notify downstream systems. The webhook delivers successfully… but the data is incomplete.

    The tool thinks the job is done. Downstream systems think they have everything. But some operations didn’t actually finish before the webhook fired.

    What Happened

    This pattern emerges when your bulk operation tool handles mixed workflows:

    • Some items are simple file attachments (just attach and done)
    • Other items require generation steps (QR codes, PDFs, consolidated documents)

    The tool was originally built to handle only file attachments. It attaches all the files, then fires the webhook—“from its point of view the flow is complete.”

    But when you start using it for mixed workflows, the generation step gets skipped. The webhook fires too early, and downstream systems receive incomplete records.

    Example: Mixed Document Attachments

    Let’s say you’re building a bulk order processor that handles digital fulfillment:

    // BulkOrderProcessorController.php
    public function process(Request $request)
    {
        $orders = Order::whereIn('uuid', $request->order_ids)->get();
        
        foreach ($orders as $order) {
            // Attach pre-existing PDF files
            foreach ($order->getPreExistingDocuments() as $doc) {
                $order->attachments()->attach($doc->id);
            }
        }
        
        // Send webhook to downstream system
        $this->dispatchWebhook($orders);
        
        return response()->json(['status' => 'complete']);
    }

    This works great when all documents are pre-existing files. But what if some orders need generated documents (e.g., a consolidated invoice PDF or QR codes for ticket verification)?

    The webhook fires after attaching files, but before generating the missing documents. Downstream systems see “Order complete” but some documents are missing.

    The Solution: Complete All Side Effects Before Firing Webhooks

    The fix is simple: don’t send webhooks until every side effect is complete.

    // BulkOrderProcessorController.php
    public function process(Request $request)
    {
        $orders = Order::whereIn('uuid', $request->order_ids)->get();
        
        foreach ($orders as $order) {
            // Step 1: Attach pre-existing files
            foreach ($order->getPreExistingDocuments() as $doc) {
                $order->attachments()->attach($doc->id);
            }
            
            // Step 2: Generate missing documents BEFORE webhook
            app(DocumentGenerator::class)->generateFromOrder($order);
        }
        
        // Step 3: NOW send webhook (all data ready)
        $this->dispatchWebhook($orders);
        
        return response()->json(['status' => 'complete']);
    }

    Key change: Call DocumentGenerator for each order before firing the webhook. This ensures every order has all its attachments—both pre-existing files and generated documents.

    When to Watch Out for This

    This pattern is especially common when:

    • You repurpose a tool for a broader use case than it was originally built for
    • Your workflow has mixed requirements (some simple, some complex)
    • The tool was built iteratively—“it worked fine until we added feature X”
    • Webhooks notify external systems (they can’t easily retry or detect missing data)

    Real-World Indicators

    You might be hitting this issue if:

    • Downstream systems complain about missing data even though your tool reports success
    • Re-running the bulk operation “fixes” the problem (because the second run actually generates the missing items)
    • The tool’s definition of “complete” doesn’t match downstream expectations

    Key Takeaways

    1. Webhooks are promises—don’t send them until you can fulfill every part of the promise
    2. Mixed workflows need mixed handling—check for items that need generation, not just attachment
    3. Test with real-world data—edge cases reveal assumptions about what “complete” means
    4. Idempotency helps—if downstream systems can safely receive duplicate webhooks, recovery is easier
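
    One way to get that idempotency (a sketch with an assumed payload shape and config key, not from the original) is a deterministic key that downstream systems can de-duplicate on:

    protected function dispatchWebhook(Collection $orders): void
    {
        $uuids = $orders->pluck('uuid')->sort()->values();

        // Same order set => same key, so repeated deliveries can be ignored downstream
        Http::post(config('services.downstream.webhook_url'), [
            'event' => 'bulk_orders.processed',
            'order_ids' => $uuids->all(),
            'idempotency_key' => hash('sha256', $uuids->implode('|')),
        ]);
    }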

    A webhook saying “job complete” should mean actually complete—not “complete according to the original scope.”

  • Laravel Queue Jobs: Handling External API Timeouts

    The Problem: MaxAttemptsExceededException on External API Calls

    You’ve queued a job that syncs data from an external API. The job processes multiple records in a loop, making HTTP requests for each one. Everything works fine in development, but in production you start seeing this error:

    Illuminate\Queue\MaxAttemptsExceededException: 
    App\Jobs\DataSyncJob has been attempted too many times or run too long. 
    The job may have previously timed out.

    The job hasn’t actually failed—it’s just slow. Laravel thinks it’s taking too long and kills it before it finishes.

    Why This Happens

    When a queue job makes multiple external API calls (especially in a loop over date ranges or collections), three timeout layers can conflict:

    1. HTTP client timeout (default: no limit in Guzzle)
    2. Job timeout (default: 60 seconds in Laravel)
    3. Job retry logic (default: 3 attempts)

    If the external API is slow or your loop has many iterations, the job times out before completing. Laravel marks it as “max attempts exceeded” even though the real issue is timing, not failure.

    The Solution: Set Explicit Timeouts at Every Level

    1. Configure the Job Timeout

    Add a timeout property to your queue job to tell Laravel how long it can run:

    class DataSyncJob implements ShouldQueue
    {
        public int $timeout = 300; // 5 minutes
        public int $tries = 3;
        public int $maxExceptions = 1;
        
        public function __construct(
            protected Report $report,
            protected string $startDate,
            protected string $endDate,
        ) {}
        
        public function handle(ExternalApiService $api): void
        {
            $period = CarbonPeriod::create($this->startDate, $this->endDate);
            
            foreach ($period as $date) {
                $api->fetchData($date->toDateString());
            }
        }
    }
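
    One related setting the job class can’t show: the queue connection’s retry_after in config/queue.php should be comfortably larger than your longest job timeout. Otherwise a second worker may pick the job up while the first attempt is still running, which is exactly how MaxAttemptsExceededException tends to appear. A sketch of that relationship:

    // config/queue.php (sketch): retry_after must exceed the longest job $timeout
    'connections' => [
        'redis' => [
            'driver' => 'redis',
            'connection' => 'default',
            'queue' => env('REDIS_QUEUE', 'default'),
            'retry_after' => 360, // > the 300-second $timeout on DataSyncJob
            'block_for' => null,
        ],
    ],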

    2. Set HTTP Client Timeouts

    Configure Guzzle (or whatever HTTP client you use) with explicit connect and request timeouts:

    // In your API client class
    protected function makeRequest(string $method, string $url, array $data = [])
    {
        try {
            return $this->http->request($method, $url, [
                'json' => $data,
                'timeout' => 30,        // Total request timeout: 30 seconds
                'connect_timeout' => 5, // Connection timeout: 5 seconds
            ]);
        } catch (RequestException $e) {
            // Handle timeouts gracefully
            throw new ApiException("External API request failed", 0, $e);
        }
    }

    3. Add Redis Rate Limiting (Bonus)

    If your job processes many items and the external API has rate limits, use Laravel’s Redis throttle to avoid hammering their servers:

    use Illuminate\Redis\RedisManager;
    
    public function handle(ExternalApiService $api, RedisManager $redis): void
    {
        $codes = $this->report->getCodes();
        
        foreach ($codes as $code) {
            $redis
                ->throttle('api_sync_' . $api->getName())
                ->allow(10)      // 10 requests
                ->every(60)      // per 60 seconds
                ->then(function () use ($api, $code) {
                    try {
                        $api->fetchData($code);
                    } catch (\Exception $e) {
                        // Log but don't fail the entire job
                        logger()->error("API sync failed for code {$code}", [
                            'exception' => $e->getMessage()
                        ]);
                    }
                }, function () {
                    // Rate limit hit—release job back to queue
                    $this->release($this->attempts());
                });
        }
    }

    When to Use This Pattern

    This approach works well when:

    • Your job makes multiple external API calls (loops, date ranges, batches)
    • The external API can be slow or unreliable
    • You need graceful degradation—one failed request shouldn’t kill the entire job
    • The external API has rate limits

    Key Takeaways

    1. Always set explicit timeout properties on long-running queue jobs
    2. Configure HTTP client timeouts (timeout and connect_timeout)
    3. Use Redis throttling for rate-limited APIs
    4. Catch exceptions inside loops so one failure doesn’t kill the entire job
    5. Monitor Sentry/logs for timeout patterns—they reveal slow external dependencies

    The MaxAttemptsExceededException isn’t always a failure—sometimes it’s just a sign your timeouts need tuning.