Category: Laravel

  • How to Use Laravel AI Provider Tools with the agent() Helper

    The Question

    How do you enable Laravel AI’s provider tools (like WebSearch, WebFetch, or FileSearch) when using the agent() helper? They’re not regular tools—they’re native provider capabilities—but the API surface looks identical.

    The Answer

    Provider tools pass through the same tools parameter as regular tools. Laravel AI’s gateway layer automatically detects and separates them under the hood.

    Basic Usage

    use Laravel\Ai\Providers\Tools\WebSearch;
    use function Laravel\Ai\agent;
    
    $response = agent(
        instructions: 'You are a research assistant.',
        tools: [new WebSearch],
    )->prompt('What are the best practices for API rate limiting?');
    

    With Configuration Options

    Provider tools support fluent configuration:

    $response = agent(
        instructions: 'You are a research assistant.',
        tools: [
            (new WebSearch)
                ->max(5)                                      // limit number of searches
                ->allow(['stackoverflow.com', 'laravel.com']) // restrict to specific domains
                ->location(city: 'London', country: 'UK'),    // refine by geographic context
        ],
    )->prompt('Find recent discussions on queue optimization.');
    

    Mixing Provider Tools with Regular Tools

    You can combine them seamlessly:

    $response = agent(
        instructions: 'You are a research assistant.',
        tools: [
            new WebSearch,         // provider tool (handled natively)
            new DatabaseQuery,     // your custom Tool implementation
        ],
    )->prompt('Research this topic and query our internal database for related records.');
    

    Available Provider Tools

    • WebSearch — Search the web (Anthropic, OpenAI, Gemini)
    • WebFetch — Fetch and extract content from URLs (Anthropic, Gemini)
    • FileSearch — Semantic search over file stores (OpenAI, Gemini)
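
    The other provider tools pass through the same way. A minimal sketch, assuming WebFetch lives alongside WebSearch in the Laravel\Ai\Providers\Tools namespace shown above (the prompt URL is illustrative):

    use Laravel\Ai\Providers\Tools\WebFetch;
    use function Laravel\Ai\agent;
    
    // Same pass-through pattern as WebSearch: the gateway layer routes
    // WebFetch to the provider's native capability (url_context on
    // Gemini, web_fetch on Anthropic per the matrix below).
    $response = agent(
        instructions: 'You summarize web pages.',
        tools: [new WebFetch],
    )->prompt('Summarize https://laravel.com/docs/queues');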

    Provider Support Matrix

    Tool         Anthropic             OpenAI        Gemini
    WebSearch    web_search_20250305   web_search    google_search
    WebFetch     web_fetch_20250910    —             url_context
    FileSearch   —                     file_search   fileSearch

    Note: If your configured provider doesn’t support a given tool, Laravel AI throws a RuntimeException when the request is dispatched.

    Why This Matters

    Provider tools unlock powerful native capabilities without the overhead of implementing custom tool handlers. When your agent needs to search the web, fetch external content, or query semantic file stores, you can hand that responsibility directly to the provider’s optimized implementation.

    The unified API keeps your code simple: whether it’s a native provider tool or your own custom tool, they all live in the same tools array.

  • When Filter Logic Doesn’t Match Display Logic

    Ever get a bug report that sounds impossible? “I filtered by SGD, but I’m still seeing USD records in the table.”

    You double-check the filter. It works. You check the data. It’s correct. But the user is right — something’s broken.

    Here’s what happened: the filter queries one field, but the UI displays a different field.

    The Bug Pattern

    In our case, we had a pricing dashboard with a currency filter. The filter queried pricing_configs.currency, but the table’s Currency column displayed pricing_batches.default_currency.

    Different fields. Different data sources. Completely different results.

    A record might have pricing_configs.currency = 'SGD' but pricing_batches.default_currency = 'USD'. When you filter by SGD, the filter includes it (because pricing_configs.currency matches). But the Currency column shows USD (because that’s what pricing_batches.default_currency contains).

    From the user’s perspective: “I filtered by SGD. Why am I seeing USD?”

    The Fix

    Make the filter query the exact same field that’s displayed in the UI.

    If your Currency column shows project.budget_currency, your filter must query project.budget_currency. Not client.default_currency. Not invoice.currency. The exact same field.

    // ❌ BAD: Filter and display query different sources
    $query->where('client.default_currency', $request->currency);
    // But the table shows: $record->project->budget_currency
    
    // ✅ GOOD: Filter matches display logic
    $query->whereHas('project', function ($q) use ($request) {
        $q->where('budget_currency', $request->currency);
    });
    // And table shows: $record->project->budget_currency
    

    Why This Happens

    Usually because the table evolved over time:

    • Original implementation showed client.default_currency
    • Later, someone changed the display to show project.budget_currency (better UX)
    • But nobody updated the filter logic to match

    Or because the data model is complex — multiple currency fields across relationships, and different parts of the code made different assumptions about which one to use.

    How to Avoid It

    1. Document what each column displays.

    Don’t just write “Currency” in the table header. Document it:

    // In your table config or component
    'columns' => [
        'currency' => [
            'label' => 'Currency',
            'source' => 'project.budget_currency', // ← This!
        ],
    ]
    

    2. Test with mismatched data.

    Create test records where client.default_currency = 'SGD' but project.budget_currency = 'USD'. Filter by SGD. What shows up in the Currency column? If you see USD, your filter doesn’t match your display.
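
    A sketch of that test, assuming model factories and a /records endpoint with a currency filter (all names are illustrative):

    public function test_currency_filter_matches_displayed_column(): void
    {
        $client  = Client::factory()->create(['default_currency' => 'SGD']);
        $project = Project::factory()->for($client)->create(['budget_currency' => 'USD']);
    
        // Filtering by SGD must exclude this record, because the table
        // displays project.budget_currency (USD), not client.default_currency.
        $this->getJson('/records?currency=SGD')
            ->assertJsonMissing(['budget_currency' => 'USD']);
    }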

    3. Make an intentional decision.

    If you have multiple currency fields, pick one as the canonical “display currency” for this view. Then use that everywhere — filters, sorting, exports, everything.

    Bonus: Complex Fields

    What if the displayed field comes from a JSON column or computed value?

    // Displayed value is extracted from JSON
    'currency' => $record->task_config['payment']['currency']
    
    // Filter must extract the same way
    $query->whereJsonContains('task_config->payment->currency', $request->currency);
    

    Or better yet, extract it consistently in your model:

    // Model accessor
    public function getDisplayCurrencyAttribute()
    {
        return $this->task_config['payment']['currency'] ?? 'USD';
    }
    
    // Now both filter and display rely on the same extraction logic
    $query->whereRaw("JSON_UNQUOTE(JSON_EXTRACT(task_config, '$.payment.currency')) = ?", [$request->currency]);
    // Display: $record->display_currency
    

    The Takeaway

    When users report “the filter doesn’t work,” don’t just check if the filter query is valid. Check if it matches what’s actually displayed in the table.

    Because a working filter that queries the wrong field is worse than a broken one — it silently returns the wrong results.

  • Watch Out: Queued Events Can Fail if the Record Gets Deleted

    The Problem: Race Condition Between Job Queuing and Processing

    You fire off a queued event with an Eloquent model, everything looks good, but hours later your queue worker crashes with a ModelNotFoundException. What happened?

    When you pass an Eloquent model to a queued event or listener, Laravel doesn’t serialize the entire model object. Instead, it just stores the model’s class name and its ID. When the queue worker picks up the job later, it tries to refetch that model using firstOrFail().

    The problem? If the record gets deleted between queuing and processing, your job fatally crashes.

    Example: The Classic Gotcha

    // In your controller or service
    event(new OrderUpdated($order, $changes));
    
    // Meanwhile, in OrderUpdated event class...
    class OrderUpdated
    {
        use SerializesModels;
        
        public $order;    // This gets serialized as: Order::class, id: 123
        public $changes;  // plain array, serialized as-is
        
        public function __construct(Order $order, array $changes)
        {
            $this->order = $order;
            $this->changes = $changes;
        }
    }
    
    // Later, when the queue worker processes this job...
    // Laravel does: Order::findOrFail(123)
    // If order #123 was deleted: ModelNotFoundException!

    This is especially common in high-traffic applications where records get created and deleted quickly — cancelled transactions, temporary data, race conditions between user actions and background cleanup jobs.

    Solution 1: Pass Only What You Need

    Instead of serializing the entire model, extract just the data you’ll actually need:

    class OrderUpdated
    {
        public $orderId;
        public $changes;
        public $customerEmail;
        
        public function __construct(Order $order, array $changes)
        {
            // Extract primitives, not objects
            $this->orderId = $order->id;
            $this->customerEmail = $order->customer->email;
            $this->changes = $changes;
        }
    }

    Now if the order gets deleted, your listener can handle it gracefully — maybe just log “order no longer exists” instead of crashing.

    Solution 2: Discard the Job When the Model Is Missing

    If you do serialize the model and the record has been deleted, Laravel throws a ModelNotFoundException while restoring the job, so your listener never runs at all. For queued listeners, set the $deleteWhenMissingModels property and Laravel will quietly discard the job instead of failing it:

    class SendOrderUpdateEmail implements ShouldQueue
    {
        // Delete the job silently when the serialized model
        // no longer exists, instead of throwing.
        public $deleteWhenMissingModels = true;

        public function handle(OrderUpdated $event)
        {
            Mail::to($event->order->customer)->send(new OrderChanged($event->order));
        }
    }

    Solution 3: Use Custom Restoration Logic

    For more control, override the restoreModel() method that the SerializesModels trait uses when rebuilding the event. Its default implementation calls firstOrFail(); swapping in find() lets a deleted record restore as null, which your listener can then check for:

    use Illuminate\Queue\SerializesModels;
    
    class OrderUpdated
    {
        use SerializesModels;
        
        public $order;
        
        // Called by the trait for each serialized model property.
        // $value is a ModelIdentifier holding the class, id, and connection.
        protected function restoreModel($value)
        {
            $model = $value->class::on($value->connection)->find($value->id);
            
            if ($model === null) {
                logger()->warning('Order model could not be restored for queued event');
            }
            
            return $model;
        }
    }

    When to Worry About This

    This pattern matters most when:

    • Records are short-lived — temporary carts, pending transactions, OTPs
    • Users can delete things — if delete actions fire cleanup jobs that might race with notification jobs
    • You have cascading deletes — parent record deletion triggers child deletions while jobs are in flight
    • Queue delays are significant — if your queue is backed up, more time = more opportunity for deletions

    Takeaway

    Queued events with Eloquent models are convenient, but they assume the record still exists when the job runs. For critical paths, consider passing primitives instead of models, or add defensive checks in your listeners. Your queue workers will thank you.

  • Building Resilient API Fallback Chains in Laravel

    External APIs fail. Networks timeout. Services go down for maintenance. Your app needs to handle this gracefully instead of showing error pages to users.

    One approach: build fallback chains. Try the fastest/best method first, then degrade to alternatives when things break.

    The Pattern

    use Illuminate\Support\Facades\Http;
    use Illuminate\Support\Facades\Log;
    
    class ContentFetcher
    {
        public function fetchArticle(string $slug): ?array
        {
            // Strategy 1: Try the clean REST API first
            $response = Http::timeout(5)->get("https://api.news.com/articles", [
                'slug' => $slug,
                'format' => 'json',
            ]);
    
            if ($response->successful() && !empty($response->json())) {
                return $this->normalizeApiResponse($response->json());
            }
    
            // Strategy 2: Fallback to HTML scraping
            Log::warning("API unavailable for {$slug}, using HTML fallback");
    
            $response = Http::timeout(10)->get("https://news.com/articles/{$slug}");
    
            if ($response->failed()) {
                Log::error("All fetch strategies failed for {$slug}");
                return null;
            }
    
            // Validate we got HTML, not an error page
            if (!str_contains($response->header('Content-Type'), 'text/html')) {
                return null;
            }
    
            return $this->parseHtml($response->body());
        }
    
        private function normalizeApiResponse(array $data): array
        {
            return [
                'title' => $data['title'],
                'body' => $data['content'],
                'author' => $data['author']['name'],
                'published_at' => $data['published'],
            ];
        }
    
        private function parseHtml(string $html): ?array
        {
            // Your HTML parsing logic here:
            // DOMDocument, Symfony DomCrawler, or Laravel AI for extraction.
            return null; // stub until a parser is wired in
        }
    }

    Why This Works

    1. Speed first: APIs are faster and cleaner than HTML parsing. Try that first.
    2. Graceful degradation: If the API is down, fall back to a slower but reliable method.
    3. User experience: Users get data either way—they don’t see errors.
    4. Observability: Log fallback usage so you can monitor API reliability.

    Validation at Each Step

    Don’t assume a 200 status code means success. Validate the response:

    // ❌ WRONG: Assumes 200 = valid data
    if ($response->successful()) {
        return $response->json();
    }
    
    // ✅ RIGHT: Check the data structure
    if ($response->successful()) {
        $data = $response->json();
        
        // Verify required fields exist
        if (empty($data['id']) || empty($data['title'])) {
            Log::warning('API returned malformed data', ['response' => $data]);
            return $this->tryFallback();
        }
        
        return $data;
    }

    Adding Timeouts

    Use aggressive timeouts for fallback strategies. If the API is slow, you want to fail fast and move to the next option:

    use Illuminate\Http\Client\ConnectionException;
    
    // Primary: 5 second timeout (should be fast).
    // A connection timeout throws rather than returning a response,
    // so catch it and treat it like a failure.
    try {
        $response = Http::timeout(5)->get($apiUrl);
    } catch (ConnectionException $e) {
        $response = null;
    }
    
    if ($response === null || $response->failed()) {
        // Fallback: 10 second timeout (HTML parsing takes longer)
        $response = Http::timeout(10)->get($htmlUrl);
    }

    Caching Across Strategies

    Cache the result after choosing a strategy, so future requests skip the fallback entirely:

    use Illuminate\Support\Facades\Cache;
    
    public function fetchArticle(string $slug): ?array
    {
        return Cache::remember("article:{$slug}", 3600, function () use ($slug) {
            // Try API first
            $data = $this->tryApi($slug);
            
            if ($data !== null) {
                return $data;
            }
            
            // Fallback to HTML
            return $this->tryHtml($slug);
        });
    }

    Now subsequent requests use the cached result regardless of which strategy succeeded.

    Multiple Fallback Levels

    You can chain more than two strategies:

    public function fetchData(string $id): ?array
    {
        // Level 1: Fast API
        $data = $this->tryFastApi($id);
        if ($data) return $data;
    
        // Level 2: Slower API with more features
        $data = $this->trySlowApi($id);
        if ($data) return $data;
    
        // Level 3: HTML scraping
        $data = $this->tryHtmlScrape($id);
        if ($data) return $data;
    
        // Level 4: Stale cached data (last resort)
        return Cache::get("article:{$id}:stale");
    }

    When NOT to Use This Pattern

    Fallback chains add complexity. Don’t use them if:

    • The API is reliable (99.9%+ uptime)
    • There’s no reasonable fallback (you need that specific API’s data)
    • Fallback data quality is too degraded to be useful

    For critical integrations, consider a different approach: retry with exponential backoff, or queue the request for later processing.
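
    For the retry-with-backoff alternative, Laravel’s HTTP client has support built in. A sketch (the endpoint URL is illustrative):

    use Illuminate\Support\Facades\Http;
    
    // Retry up to 3 times; the closure computes the delay before each
    // attempt, giving linear backoff of 1s, 2s, 3s between tries.
    $response = Http::retry(3, function (int $attempt, $exception) {
        return $attempt * 1000; // milliseconds
    })->get('https://api.news.com/articles');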

    Monitoring Fallback Usage

    Track how often fallbacks trigger:

    if ($this->tryApi($slug) === null) {
        Metrics::increment('api.fallback.triggered', ['service' => 'news-api']);
        return $this->tryHtml($slug);
    }

    If fallbacks fire frequently, it’s a signal to investigate the primary API’s reliability or adjust timeouts.

    The Bottom Line

    Fallback chains make your app resilient. Instead of failing when an API hiccups, gracefully degrade to alternative data sources. Users stay happy, and you get telemetry on when primary services are unreliable.

  • Chain Artisan Commands Silently with callSilently()

    When building Artisan commands that orchestrate other commands—batch processors, deployment scripts, or data pipelines—you often don’t want child command output cluttering your carefully formatted progress bars.

    Laravel gives you callSilently() for exactly this:

    use Illuminate\Console\Command;
    
    class ProcessQueueCommand extends Command
    {
        protected $signature = 'queue:process-all {--batch=}';
    
        public function handle(): int
        {
            $jobs = $this->option('batch') 
                ? Job::where('batch_id', $this->option('batch'))->get()
                : Job::pending()->get();
    
            $this->info("Processing {$jobs->count()} jobs...");
            $this->output->progressStart($jobs->count());
    
            $failed = 0;
    
            foreach ($jobs as $job) {
                // Call another command without its output
                $exitCode = $this->callSilently('job:process', [
                    'id' => $job->id,
                    '--force' => true,
                ]);
    
                if ($exitCode !== 0) {
                    $failed++;
                    $this->warn("Job {$job->id} failed");
                }
    
                $this->output->progressAdvance();
            }
    
            $this->output->progressFinish();
            $this->info("Done. {$failed} failures.");
    
            return $failed > 0 ? 1 : 0;
        }
    }

    call() vs callSilently()

    Both methods invoke another Artisan command programmatically, but they differ in output handling:

    • call(): Passes through all output from the child command to your terminal. Good when you want the user to see what’s happening.
    • callSilently(): Suppresses all child output completely. Exit codes still work for error handling.

    // User sees everything
    $this->call('db:seed', ['--class' => 'UserSeeder']);
    
    // Silent execution, only exit code returned
    $exitCode = $this->callSilently('db:seed', ['--class' => 'UserSeeder']);
    
    if ($exitCode !== 0) {
        $this->error('Seeding failed');
    }

    When to Use Each

    Use call() when:

    • You’re delegating to a command that should show its own progress
    • Debugging—you want to see what the child is doing
    • The child command has important user-facing messages

    Use callSilently() when:

    • Building batch processors or orchestrators
    • Your parent command has its own progress UI
    • Child command output would duplicate information or clutter the terminal
    • You only care about success/failure (exit code)

    Combining with Progress Bars

    Progress bars + callSilently() = clean batch operations:

    $bar = $this->output->createProgressBar($items->count());
    $bar->setFormat('Processing: %current%/%max% [%bar%] %percent:3s%% %message%');
    
    foreach ($items as $item) {
        $bar->setMessage("Processing {$item->name}...");
        
        $exitCode = $this->callSilently('item:sync', ['id' => $item->id]);
        
        if ($exitCode === 0) {
            $bar->setMessage("✓ {$item->name}");
        } else {
            $bar->setMessage("✗ {$item->name}");
        }
        
        $bar->advance();
    }
    
    $bar->finish();
    $this->newLine();

    Testing Commands That Use callSilently()

    In tests, both methods work the same way. You can assert on exit codes:

    public function test_batch_processor_handles_failures()
    {
        // Simulate a failing job
        Job::factory()->create(['id' => 999, 'status' => 'broken']);
    
        $this->artisan('queue:process-all')
            ->expectsOutput('Processing 1 jobs...')
            ->expectsOutput('Done. 1 failures.')
            ->assertExitCode(1);
    }

    Alternative: Output Buffering

    If you need to capture child output for logging (not display), use output buffering instead:

    use Symfony\Component\Console\Output\BufferedOutput;
    
    $buffer = new BufferedOutput();
    
    $exitCode = $this->call('some:command', [], $buffer);
    
    $output = $buffer->fetch(); // Get all output as string
    Log::debug('Command output', ['output' => $output]);

    This is useful for debugging or auditing what child commands did, without showing it to users.

    The Bottom Line

    callSilently() keeps orchestrator commands clean. Use it when your parent command owns the UI—the user doesn’t need to see every detail of what child processes are doing, just the overall progress and results.

  • Cache External API Calls to Save Money and Time

    External API calls are expensive—in money, time, and rate limits. Whether you’re hitting a weather service, geocoding API, or AI model, repeated requests for the same data waste resources.

    Laravel’s Cache::remember() is your friend here. Wrap API calls to cache results automatically:

    use Illuminate\Support\Facades\Cache;
    use Illuminate\Support\Facades\Http;
    
    class WeatherService
    {
        private const CACHE_TTL = 3600; // 1 hour
    
        public function getWeather(string $city): ?array
        {
            $cacheKey = "weather_{$city}";
    
            return Cache::remember($cacheKey, self::CACHE_TTL, function () use ($city) {
                $response = Http::get("https://api.weather.com/v1/current", [
                    'city' => $city,
                    'apiKey' => config('services.weather.key'),
                ]);
    
                if ($response->failed()) {
                    return null;
                }
    
                return $response->json();
            });
        }
    }

    Why This Matters

    • Cost savings: Paid APIs (especially AI/LLM services) charge per request. Caching can reduce costs by 90%+.
    • Speed: Cached responses return instantly instead of waiting for network round-trips.
    • Reliability: If the API goes down, cached data keeps your app running.
    • Rate limits: Stay under API quotas without complex request tracking.

    Choosing the Right Cache Driver

    Laravel supports multiple cache backends. Pick based on your needs:

    • Redis: Fast, shared across servers, but ephemeral (data lost on restart).
    • File: Survives deployments, great for expensive AI API results that should persist.
    • Database: When you need queryable cached data or longer retention.

    For AI API responses that are expensive to regenerate, file cache is ideal:

    // In config/cache.php, add a dedicated store
    'stores' => [
        'ai_responses' => [
            'driver' => 'file',
            'path' => storage_path('cache/ai'),
        ],
    ],
    
    // Use it explicitly
    Cache::store('ai_responses')->remember($key, 86400 * 7, function () {
        return $this->callExpensiveAI();
    });

    Cache Key Best Practices

    Make keys descriptive and collision-resistant:

    // ❌ Too generic
    $key = "data_{$id}";
    
    // ✅ Namespaced and specific
    $key = "weather:current:{$city}:" . date('Y-m-d-H');
    
    // ✅ For complex parameters, hash them
    $key = "report:" . md5(json_encode($filters));

    Handling Failures

    When the API fails, you have options:

    return Cache::remember($key, $ttl, function () use ($key, $url) {
        $response = Http::get($url);
    
        if ($response->failed()) {
            // Pick ONE of these strategies:
    
            // Option 1: Cache the failure briefly (under a separate key,
            // so remember() doesn't overwrite the short TTL) to avoid
            // hammering a down API:
            // Cache::put($key . ':failed', true, 60); return null;
    
            // Option 2: Throw and let fallback logic take over
            // (nothing gets cached):
            // throw new ApiUnavailableException();
    
            // Option 3: Serve stale data if you keep a long-lived copy
            return Cache::get($key . ':stale');
        }
    
        return $response->json();
    });

    The right choice depends on your app—some can tolerate null, others need exceptions to trigger fallback logic.

    When NOT to Cache

    Don’t cache if the data:

    • Changes constantly (real-time stock prices)
    • Is user-specific and high-cardinality (unique per user ID)
    • Is already fast (sub-10ms database queries)

    Otherwise, cache liberally. Your API bill will thank you.

  • Debugging Database-Backed Laravel Scheduled Tasks

    Laravel’s task scheduler is powerful, but when tasks are dynamically loaded from a database instead of hardcoded in app/Console/Kernel.php, finding their actual execution frequency becomes less obvious.

    If your Kernel.php looks like this:

    protected function schedule(Schedule $schedule)
    {
        foreach (ScheduledTask::active()->get() as $task) {
            $schedule->command($task->getCommand())
                ->cron($task->getCronExpression());
        }
    }
    

    You won’t see individual task schedules in the code – they’re in the database.

    Finding the Schedule

    Query the database table directly to see what’s actually scheduled:

    SELECT 
        name,
        command,
        cron_expression,
        is_active,
        last_run_at,
        next_run_at
    FROM scheduled_tasks 
    WHERE command LIKE '%import:data%'
    ORDER BY command;
    

    This reveals the actual cron expressions, active status, and execution history – everything hidden behind the dynamic loader.
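
    Recent Laravel versions can also do this from the CLI: schedule:list evaluates schedule() at runtime, so dynamically registered tasks appear with their resolved cron expressions.

    php artisan schedule:list
    
    # Narrow long output to the command you're debugging:
    php artisan schedule:list | grep import:data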

    Why This Pattern Exists

    Database-backed schedules let you:

    • Manage schedules without deployments – Change frequencies through admin panels or migrations
    • Store execution metadata – Track runs, failures, and overlapping prevention
    • Enable/disable tasks dynamically – No code changes required

    The tradeoff: schedules are less discoverable. When debugging “why isn’t this running?”, remember to check the database, not just the code.

    Common Gotchas

    Schema variations: Column names vary by implementation. Your table might use cron instead of cron_expression, or active instead of is_active. Run DESCRIBE scheduled_tasks first.

    Multiple environments: Staging and production databases may have different schedules. Always verify against the environment you’re debugging.

    Overlapping prevention: Tasks with withoutOverlapping() won’t run if the previous execution hasn’t finished. Check started_at and finished_at timestamps.
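
    If your dynamic loader applies withoutOverlapping(), the scheduler’s before/after hooks are a convenient place to record those timestamps. A sketch against the ScheduledTask model from the Kernel example above:

    foreach (ScheduledTask::active()->get() as $task) {
        $schedule->command($task->getCommand())
            ->cron($task->getCronExpression())
            ->withoutOverlapping()
            ->before(fn () => $task->update(['started_at' => now()]))
            ->after(fn () => $task->update(['finished_at' => now()]));
    }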

    Next time a scheduled task behaves mysteriously, skip the code and query the database first – that’s where the truth lives.

  • Map API Error Codes to Non-Retriable Exceptions

    When you call an external API, not all failures are equal.

    Some errors are transient (timeouts, rate limits). Retrying makes sense. Others are permanent for a given resource (disabled/expired/missing). Retrying just burns worker time and creates noise.

    Step 1: extract a machine-readable error code

    If the API returns a JSON error payload, parse it once and normalize it.

    use GuzzleHttp\Exception\RequestException;
    
    function extractRemoteErrorCode(RequestException $e): ?string
    {
        $body = (string) optional($e->getResponse())->getBody();
    
        $data = json_decode($body, true);
        if (!is_array($data)) {
            return null;
        }
    
        return $data['error_code'] ?? $data['error'] ?? null;
    }
    

    Step 2: map codes to a “don’t retry” exception

    Create a dedicated exception type that your job runner can treat as non-retriable.

    final class PermanentRemoteFailure extends \RuntimeException {}
    
    try {
        $client->get('/v1/resource/' . $resourceId);
    } catch (RequestException $e) {
        $code = extractRemoteErrorCode($e);
    
        $permanentCodes = [
            'RESOURCE_DISABLED',
            'RESOURCE_EXPIRED',
            'RESOURCE_NOT_FOUND',
        ];
    
        if (in_array($code, $permanentCodes, true)) {
            throw new PermanentRemoteFailure('Remote resource is not usable', 0, $e);
        }
    
        // Unknown/transient: let the queue retry policy handle it.
        throw $e;
    }
    

    Step 3: teach your job what to do next

    The point isn’t just to stop retries — it’s to move the system forward.

    public function handle()
    {
        try {
            $this->syncOne($this->remoteId);
        } catch (PermanentRemoteFailure $e) {
            $this->markAsInactive($this->remoteId);
            return; // stop here; no retry
        }
    }
    

    Why this is worth doing

    • Fewer wasted retries
    • Cleaner alerts
    • More predictable queue behavior
    • A single place to expand your error taxonomy as you learn

    If you’re already catching exceptions, you’re 80% there — the rest is classifying them so your system reacts appropriately.

  • Store Files by ID, Not by Slug

    One of the easiest ways to create accidental “data migrations” is to put user-facing strings in places you later treat as stable identifiers.

    A common example: storing uploaded files under a directory that includes a slug or title. It feels tidy… until someone renames the record and suddenly your storage path no longer matches reality.

    The rule of thumb

    Use an immutable identifier for storage paths. Keep human-readable names as metadata.

    That usually means:

    • A stable key (ULID/UUID/integer id) for file paths and URLs
    • A separate column for the original filename / display name
    • A separate column for the current “pretty” label (which can change)

    A simple Laravel approach

    Create a model that owns the upload, and give it a stable public identifier.

    // database/migrations/xxxx_xx_xx_create_documents_table.php
    Schema::create('documents', function ($table) {
        $table->id();
        $table->ulid('public_id')->unique();
    
        $table->string('display_name');      // can change
        $table->string('original_name');     // what the user uploaded
        $table->string('storage_path');      // immutable once set
    
        $table->timestamps();
    });
    

    On upload, generate the storage path from the immutable identifier (not the title):

    use Illuminate\Http\Request;
    use Illuminate\Support\Facades\Storage;
    use Illuminate\Support\Str;
    
    public function store(Request $request)
    {
        $file = $request->file('file');
    
        $publicId = (string) Str::ulid();
        $originalName = $file->getClientOriginalName();
    
        $path = "documents/{$publicId}/" . Str::random(16) . "-" . $originalName;
    
        Storage::disk('public')->put($path, file_get_contents($file->getRealPath()));
    
        $document = Document::create([
            'public_id' => $publicId,
            'display_name' => pathinfo($originalName, PATHINFO_FILENAME),
            'original_name' => $originalName,
            'storage_path' => $path,
        ]);
    
        return response()->json([
            'id' => $document->public_id,
        ]);
    }
    

    Why this pays off

    • Renames are trivial: update display_name, nothing else.
    • Downloads don’t break when labels change.
    • You can safely rebuild “pretty URLs” later without moving files.

    If you want tidy URLs, you can still add a slug — just don’t make your storage system depend on it.
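
    For example, a download route can resolve the record by its stable public_id and serve from the immutable storage_path, while presenting the current display name to the user. A sketch using the Document model above:

    use Illuminate\Support\Facades\Storage;
    
    public function download(string $publicId)
    {
        $document = Document::where('public_id', $publicId)->firstOrFail();
    
        // The stored file never moves; only the user-facing filename
        // is derived from the (renameable) display_name.
        $extension = pathinfo($document->original_name, PATHINFO_EXTENSION);
    
        return Storage::disk('public')->download(
            $document->storage_path,
            $document->display_name . ($extension ? ".{$extension}" : '')
        );
    }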

  • Reusing Single-Record Components for Bulk Operations

    You have a TaskEditor component that works perfectly for editing one task. Now you need bulk editing. Should you build a separate BulkTaskEditor component from scratch?

    No. Make your single-record component bulk-capable by designing for optional prepopulation.

    The Pattern

    Your single-record editor passes all the defaults explicitly:

    <!-- SingleTaskEdit.vue -->
    <StatusUpdateForm
        :defaultStatus="task.status"
        :defaultDueDate="task.due_date"
        :defaultAssignee="task.assignee_id"
        :tasks="[task]"
    />

    For bulk editing, just pass the common values (or nothing if they differ):

    <!-- BulkTaskEdit.vue -->
    <StatusUpdateForm
        :defaultStatus="commonStatus"
        :defaultDueDate="commonDueDate"
        :defaultAssignee="commonAssignee"
        :tasks="selectedTasks"
    />

    The StatusUpdateForm component doesn’t know (or care) if it’s handling one task or fifty. It just uses whatever defaults you pass.

    How to Make This Work

    Design your child component with optional props that have sensible defaults:

    // StatusUpdateForm.vue
    export default {
        props: {
            tasks: {
                type: Array,
                required: true
            },
            defaultStatus: {
                type: String,
                default: null  // Null = user must choose
            },
            defaultDueDate: {
                type: String,
                default: null
            },
            defaultAssignee: {
                type: Number,
                default: null
            }
        }
    }

    When a default is null, the form field starts empty. When it has a value, the field is prepopulated.

    The Parent’s Job

    The parent component (single or bulk) decides what to pass:

    // BulkTaskEdit.vue
    computed: {
        commonStatus() {
            const statuses = this.selectedTasks.map(t => t.status);
            const unique = [...new Set(statuses)];
            return unique.length === 1 ? unique[0] : null;
        }
    }

    If all selected tasks have status: "pending", pass "pending". Otherwise, pass null.

    Why This Beats Separate Components

    • One source of truth: Bug fixes apply to both single and bulk editing
    • Consistent validation: Same rules, same error messages
    • Less code: No duplicated form logic
    • Easier testing: Test the component once with different prop combinations

    Key Insight

    The child component should be data-agnostic. It doesn’t care where the defaults come from or how many records it’s editing. It just needs:

    1. Optional default values (or null)
    2. An array of records to update

    The parent decides what defaults to pass based on whether it’s single or bulk editing.

    Rule of thumb: If your component takes a single record as input, refactor it to accept an array of records plus optional defaults. You’ve just made it bulk-capable with zero extra UI code.
