Author: Daryle De Silva

  • Breaking CLI Commands into Chunked Web APIs to Avoid Timeouts

    Long-running CLI commands that work perfectly in terminal contexts often fail spectacularly in web environments. The culprit? Execution time limits and connection timeouts.

    Consider a typical bulk operation command:

    // CLI command - works but unusable in web context
    public function handle()
    {
        DB::beginTransaction();
        
        $items = Item::whereIn('id', $this->option('items'))->get();
        
        foreach ($items as $item) {
            $item->update(['status' => 'maintenance']);
            $this->processRelatedRecords($item);
        }
        
        DB::commit();
        $this->info('Done!');
    }
    

    This works great in Artisan but dies immediately when exposed via web UI – browser timeouts, server limits, no progress feedback.

    The Progressive Web API Pattern

    Break the monolithic operation into two separate endpoints:

    Step 1: Initialization Endpoint

    Validates input and creates tracking record:

    public function executeStart(Request $request): JsonResponse
    {
        $validated = $request->validate([
            'item_ids' => 'required|array',
            'item_ids.*' => 'exists:items,id',
        ]);
        
        $items = Item::whereIn('id', $validated['item_ids'])->get();
        $snapshot = $this->buildInitialSnapshot($items);
        
        $revision = Revision::create([
            'user_id' => auth()->id(),
            'key' => 'batch_maintenance',
            'old_value' => $snapshot,
            'new_value' => $snapshot,
        ]);
        
        return response()->json([
            'status' => 'success',
            'data' => [
                'revision_id' => $revision->id,
                'items' => $items->map(fn($i) => [
                    'id' => $i->id,
                    'title' => $i->title,
                ]),
            ],
        ]);
    }
    

    Step 2: Per-Item Execution Endpoint

    Processes ONE item per request:

    public function executeItem(Request $request): JsonResponse
    {
        $validated = $request->validate([
            'revision_id' => 'required|exists:revisions,id',
            'item_id' => 'required|exists:items,id',
        ]);
        
        $revision = Revision::findOrFail($validated['revision_id']);
        $item = Item::findOrFail($validated['item_id']);
        
        try {
            $item->update(['status' => 'maintenance']);
            $this->processRelatedRecords($item);
            
            $this->updateRevisionResult($revision, $item->id, 'success');
            
            return response()->json([
                'status' => 'success',
                'data' => ['item_id' => $item->id, 'result' => 'success'],
            ]);
        } catch (\Exception $e) {
            $this->updateRevisionResult($revision, $item->id, 'failed', $e->getMessage());
            
            return response()->json([
                'status' => 'error',
                'data' => ['item_id' => $item->id, 'result' => 'failed', 'error' => $e->getMessage()],
            ], 422);
        }
    }
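
    Both branches call an updateRevisionResult() helper that isn't shown above. A minimal sketch of what it might do (the results key inside new_value is an assumption; adapt it to however your Revision model stores state):

    protected function updateRevisionResult(Revision $revision, int $itemId, string $result, ?string $error = null): void
    {
        // Append this item's outcome to the revision's running result set (assumed shape)
        $results = $revision->new_value['results'] ?? [];
        $results[$itemId] = ['result' => $result, 'error' => $error];

        $revision->update([
            'new_value' => array_merge($revision->new_value, ['results' => $results]),
        ]);
    }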
    

    Why This Works

    Each request is fast. No timeout issues – every API call completes in milliseconds.

    Real-time progress. Frontend shows “Processing 5 of 20…” as each request completes.

    Partial failures don’t lose everything. If item #15 fails, items 1-14 are already committed.

    User stays in control. Can pause/resume, see exactly what succeeded vs failed.
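
    The client drives the whole flow: call the initialization endpoint once, then loop over the returned items one request at a time. A browser frontend does this with fetch(); the same orchestration is sketched here with Laravel's Http client (the URLs are placeholders for your routes):

    use Illuminate\Support\Facades\Http;

    $start = Http::post('https://app.example.test/api/maintenance/start', [
        'item_ids' => $ids,
    ])->json('data');

    $total = count($start['items']);

    foreach ($start['items'] as $index => $item) {
        $response = Http::post('https://app.example.test/api/maintenance/item', [
            'revision_id' => $start['revision_id'],
            'item_id' => $item['id'],
        ]);

        // Progress after every request; a failed item doesn't stop the loop
        echo sprintf('Processing %d of %d: %s', $index + 1, $total,
            $response->successful() ? 'ok' : 'failed') . PHP_EOL;
    }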

    The Trade-Off

    You lose transactional atomicity – it’s no longer “all or nothing”. But in practice, this is acceptable for most bulk operations where seeing incremental progress and recovering from partial failures matters more than database transaction boundaries.

    For operations that truly must be atomic, keep them CLI-only. For everything else, this pattern transforms unusable monolithic commands into production-ready progressive workflows.

  • Verify Before Removing Defensive Fallback Code


    Here’s a cautionary tale about premature optimization: removing defensive fallback code before verifying the root cause is actually fixed.

    The Setup

    We had an inventory reporting command that queried product availability across date ranges. The implementation used a hybrid caching strategy:

    // Original: Hybrid caching approach
    if ($this->hasKnownCacheIssues($date)) {
        // Known edge cases: Call external API directly
        return $this->fetchFromExternalSource($date);
    }
    
    // Standard cases: Query cached database
    return $this->fetchFromCachedData($date);
    

    This hybrid approach existed because certain date ranges had known caching bugs where the cached database returned incorrect inventory counts.

    The ‘Fix’

    A colleague claimed they fixed the root cause in another PR. Based on that claim, we removed the API fallback entirely to “simplify” the code:

    // After 'fix': Removed API fallback
    // "Root cause fixed in PR #XXXX"
    return $this->fetchFromCachedData($date);  // Now ALL dates use cache
    

    Cleaner code, single responsibility, no more conditional logic. What could go wrong?

    The Reality

    Production run: the command returned 0 inventory for most dates and marked edge-case dates as unavailable. When we queried the external API directly, it returned actual inventory numbers (237, 189, 142, etc.).

    The cache was still broken. The “root cause fix” either didn’t work or addressed a different issue entirely.

    The Lesson

    When you have defensive fallback code for known edge cases:

    1. Don’t remove it just because someone says “I fixed it” — verify the fix works for YOUR specific use case
    2. Test the edge cases explicitly — if the fallback existed for specific dates/conditions, test those exact conditions
    3. Keep the fallback until proven unnecessary — a few extra lines of “defensive” code beats broken production data
    4. Document why fallbacks exist — future-you needs to know this wasn’t just being paranoid

    Hybrid approaches often exist for good reasons. Before removing them, verify the underlying issue is ACTUALLY fixed, not just theoretically fixed.
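
    One concrete way to verify: before deleting the fallback, pin the known-bad conditions with a test that compares both code paths (the dates and service wiring here are illustrative):

    public function test_edge_case_dates_match_external_source(): void
    {
        // The exact dates the fallback existed for - keep these in the suite permanently
        foreach (['2024-12-24', '2024-12-31'] as $date) {
            $cached = $this->service->fetchFromCachedData($date);
            $external = $this->service->fetchFromExternalSource($date);

            $this->assertSame($external, $cached, "Cache still wrong for {$date}");
        }
    }

    Only when this passes is the conditional safe to delete.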

    The Pattern

    This applies beyond caching:

    • Retry logic for flaky APIs
    • Fallback payment gateways
    • Null checks for optional relationships
    • Manual overrides for automated processes

    If defensive code exists, there’s probably a war story behind it. Find out what it is before you delete it.

  • Debugging Collection Pipelines with Tinker

    When debugging complex collection operations, it’s tempting to scatter dd() or dump() calls throughout your code. But there’s a faster way: break down the pipeline line-by-line in Tinker.

    Imagine you have a transformer method that filters and intersects date ranges:

    public function transform(DateRange $dateRange, array $validCategories)
    {
        $range = $dateRange->toCollection()
            ->filter(fn($date) => $date->isWeekday())
            ->intersect($this->getValidRange($validCategories))
            ->values();
        
        return $range->map(fn($date) => [
            'date' => $date->format('Y-m-d'),
            'available' => true,
        ]);
    }

    The method works fine in some cases but returns empty arrays in others. Instead of adding debugging statements and re-running the entire request, open Tinker and execute each transformation step:

    // $this isn't available in Tinker - resolve the class that owns the pipeline first
    $transformer = app(DateTransformer::class); // i.e. whatever class defines transform()
    $dateRange = DateRange::make('2026-03-01', '2026-03-31');
    $validCategories = ['standard', 'premium'];
    
    // Step 1: Base collection
    $step1 = $dateRange->toCollection();
    dump($step1->count()); // 31 dates
    
    // Step 2: After filter
    $step2 = $step1->filter(fn($date) => $date->isWeekday());
    dump($step2->count()); // 22 weekdays
    
    // Step 3: After intersect
    $validRange = $transformer->getValidRange($validCategories);
    $step3 = $step2->intersect($validRange);
    dump($step3->count()); // 0 - AHA! The intersect returns nothing
    
    // Now test with different categories
    $step3 = $step2->intersect($transformer->getValidRange(['standard']));
    dump($step3->count()); // 10 - works with single category

    This reveals the issue: getValidRange() returns incompatible data when multiple categories are passed. The fix: ensure getValidRange() returns a flat collection of dates, not a nested structure.
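
    That fix might look like this: if getValidRange() builds one sub-range per category, flatten before returning so intersect() compares dates to dates rather than dates to nested arrays (rangeFor() is a stand-in for however each category's dates are actually loaded):

    use Illuminate\Support\Collection;

    public function getValidRange(array $categories): Collection
    {
        return collect($categories)
            // flatMap flattens one level: N per-category ranges -> one flat list of dates
            ->flatMap(fn ($category) => $this->rangeFor($category))
            ->values();
    }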

    Why This Approach Works

    Breaking down collection pipelines in Tinker is faster than traditional debugging because:

    1. No need to rebuild application state — You’re working directly with live data or easily reproducible objects
    2. Instant feedback on each transformation — See exactly where the pipeline breaks
    3. Easy to test different inputs — Try various categories, date ranges, or edge cases without modifying code
    4. Clear visibility — Each step’s output is isolated and inspectable

    Bonus: Include Tinker Steps in PR Descriptions

    When writing pull request descriptions for complex bug fixes, include the Tinker reproduction steps. Reviewers can reproduce your findings without deploying code, making reviews faster and more focused:

    Bug: DateRange filtering returns empty when multiple categories are provided.

    Root cause: getValidRange() returns nested arrays instead of flat collection.

    Reproduction in Tinker:

    $step2->intersect($this->getValidRange(['standard', 'premium']))->count(); // 0
    $step2->intersect($this->getValidRange(['standard']))->count(); // 10

    This pattern works for any multi-step transformation: API response parsing, query builders with complex scopes, or deeply nested data structures. The key is isolating each step so you can see exactly where expectations diverge from reality.

  • Register Custom Corcel Models for WordPress Post Types

    By default, Corcel uses a generic Post model for all WordPress content. But if you’re working with custom post types (recipes, portfolios, reviews, etc.), you’ll want dedicated models with proper type-hinting and scopes.

    Register your custom models in config/corcel.php:

    'post_types' => [
        'recipe' => App\Models\Recipe::class,
        'review' => App\Models\Review::class,
        'portfolio' => App\Models\Portfolio::class,
    ],
    

    Then create your model:

    namespace App\Models;
    
    use Corcel\Model\Post;
    
    class Recipe extends Post
    {
        protected $postType = 'recipe';
        
        // Now you can add recipe-specific methods
        public function scopeVegetarian($query)
        {
            return $query->hasMeta('dietary_type', 'vegetarian');
        }
    }
    

    Now queries return your custom model instead of generic Posts:

    // Returns App\Models\Recipe instances
    $recipes = Recipe::vegetarian()->get();
    
    // Auto-sets post_type when creating
    $recipe = Recipe::create([
        'post_title' => 'Pasta Carbonara',
        'post_status' => 'publish',
    ]);
    

    No more checking post_type manually—your models are now type-specific.

  • Store ACF Field Keys When Using Corcel

    If you’re using Corcel to manage WordPress content from Laravel, you might notice ACF fields breaking when you create posts programmatically. The fix: ACF stores two meta entries per field—one for the value, one for the field key.

    When creating a post with ACF fields:

    $article = Article::create([
        'post_title' => 'Getting Started',
        'post_status' => 'publish',
    ]);
    
    // Save the field value
    $article->saveMeta('author_name', 'John Smith');
    
    // Also save the ACF field key reference
    $article->saveMeta('_author_name', 'field_abc123def');
    

    The _author_name meta stores the ACF field key (field_*), which ACF uses to link the value to its field configuration.

    Finding your field keys: Export your ACF field group to PHP and look for the key property on each field definition.

    Skip this and ACF’s UI won’t recognize your fields—they’ll appear as raw meta instead of proper ACF inputs.
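
    A tiny helper keeps the two writes together so the field key is never forgotten (the helper name is ours, not part of Corcel):

    public function saveAcfField($post, string $name, $value, string $fieldKey): void
    {
        $post->saveMeta($name, $value);          // the value ACF displays
        $post->saveMeta("_{$name}", $fieldKey);  // the field_* key ACF resolves it with
    }

    // Usage:
    $this->saveAcfField($article, 'author_name', 'John Smith', 'field_abc123def');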

  • Refactor Complex Inline Logic into Dedicated Classes

    When agent() calls become complex with large schemas, extract them into dedicated Agent classes. This improves readability, enables reuse, and makes testing easier.

    ❌ Before: Inline Complexity

    public function processData(string $url): ?array
    {
        return agent(
            instructions: 'Extract structured data from URL...',
            tools: [new WebFetch, new WebSearch],
            schema: fn (JsonSchema $schema) => [
                'name' => $schema->string()->required(),
                'items' => $schema->array()->items(
                    $schema->object([
                        'title' => $schema->string()->required(),
                        'value' => $schema->integer()->min(0)->nullable(),
                    ])
                ),
            ]
        )->prompt("Extract from: {$url}");
    }

    ✅ After: Dedicated Agent Class

    // app/Ai/Agents/DataExtractor.php
    class DataExtractor implements Agent, HasStructuredOutput
    {
        public function instructions(): string
        {
            return 'Extract structured data from URL...';
        }
    
        public function tools(): iterable
        {
            return [new WebFetch, new WebSearch];
        }
    
        public function schema(JsonSchema $schema): array
        {
            return [
                'name' => $schema->string()->required(),
                'items' => $schema->array()->items(
                    $schema->object([
                        'title' => $schema->string()->required(),
                        'value' => $schema->integer()->min(0)->nullable(),
                    ])
                ),
            ];
        }
    }
    
    // Usage:
    public function processData(string $url): ?array
    {
        return DataExtractor::prompt("Extract from: {$url}");
    }

    Why This Matters

    Laravel AI SDK’s agent() helper is great for quick prototypes, but production code benefits from structure. Dedicated Agent classes:

    • Improve readability — separate concerns into focused files
    • Enable reuse — use the same agent across multiple controllers
    • Make testing easier — mock or test agents in isolation
    • Follow Laravel conventions — single responsibility principle

    If your agent() call spans more than ~10 lines, it’s time to extract a class.

  • Using wasRecentlyCreated to Conditionally Save Metadata

    When using firstOrCreate() in seeders or migrations, check wasRecentlyCreated before saving metadata. This prevents overwriting existing relationships when re-running seeders.

    ❌ Before: Always Overwrites

    public function run(): void
    {
        foreach ($data as $item) {
            $record = Model::firstOrCreate(['name' => $item['name']]);
            $record->saveMeta('parent_id', $parentId); // Always overwrites!
        }
    }

    ✅ After: Conditional Save

    public function run(): void
    {
        foreach ($data as $item) {
            $record = Model::firstOrCreate(['name' => $item['name']]);
            if ($record->wasRecentlyCreated) {
                $record->saveMeta('parent_id', $parentId); // Only on create
            }
        }
    }

    Why This Matters

    Seeders should be idempotent — safe to run multiple times without side effects. The wasRecentlyCreated property tells you whether firstOrCreate() created a new record or found an existing one.

    This is especially important when:

    • Re-running seeders in development
    • Deploying database changes that include seed data
    • Populating initial relationships without overwriting user modifications

    The pattern ensures you populate data for new records while leaving existing ones untouched.

  • Check Cache Before Making Expensive External API Calls

    When integrating external APIs, check your cache before making expensive API calls. Don’t fetch all data upfront only to discover you had it cached.

    ❌ Before: Wasteful API Calls

    public function handle(): int
    {
        $username = $this->argument('username');
    
        // Fetch all data first, then check cache
        $profile = $this->fetchUserProfile($username);
        $comments = $this->fetchComments($username);
        $posts = $this->fetchPosts($username);
    
        // Now check if we had cached results
        $cached = $this->checkCache($username);
        
        if ($cached) {
            return $cached; // Oops, already fetched unnecessary data!
        }
    
        // Process the data we fetched
        return $this->processData($profile, $comments, $posts);
    }

    ✅ After: Cache-First Approach

    public function handle(): int
    {
        $username = $this->argument('username');
    
        // Fetch lightweight data first
        $profile = $this->fetchUserProfile($username);
    
        // Check cache BEFORE expensive calls
        $cached = $this->checkCache($username);
        
        if ($cached) {
            $this->line('✓ Using cached data');
            return $cached;
        }
    
        // Only fetch expensive data if cache miss
        $this->line('✗ Cache miss - fetching data...');
        $comments = $this->fetchComments($username);
        $posts = $this->fetchPosts($username);
    
        return $this->processData($profile, $comments, $posts);
    }

    Why This Matters

    This pattern reduces unnecessary API requests, saves rate limits, and improves performance. In the example above, we fetch comments and posts only when there’s a cache miss.

    The key insight: lightweight checks first, expensive operations last. Your future self (and your API quota) will thank you.
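
    Laravel's Cache::remember() bakes this cache-first shape into a single call: it returns the cached value on a hit and runs the closure (the expensive fetches) only on a miss:

    use Illuminate\Support\Facades\Cache;

    $data = Cache::remember("user-data:{$username}", now()->addHour(), function () use ($username) {
        // Only executed on a cache miss
        return [
            'comments' => $this->fetchComments($username),
            'posts' => $this->fetchPosts($username),
        ];
    });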

  • Extend Third-Party Models While Keeping Laravel Auth

    When you need to extend a third-party Eloquent model instead of Laravel’s default base classes, you might lose Laravel’s built-in authentication functionality. Here’s how to preserve it.

    The Problem

    Say you’re integrating a package that provides its own Eloquent models — maybe you’re connecting to an external database or using a library like Corcel to query WordPress data directly. You want your User model to extend the package’s base model, but Laravel’s authentication expects specific contracts and traits.

    The Solution: Manual Implementation

    Laravel’s Authenticatable class (the default base for User models) is just a convenient wrapper. Under the hood, it implements three contracts and uses four traits. You can apply these yourself:

    use Vendor\Package\Model as ThirdPartyModel;
    use Illuminate\Auth\Authenticatable;
    use Illuminate\Auth\MustVerifyEmail;
    use Illuminate\Auth\Passwords\CanResetPassword;
    use Illuminate\Contracts\Auth\Access\Authorizable as AuthorizableContract;
    use Illuminate\Contracts\Auth\Authenticatable as AuthenticatableContract;
    use Illuminate\Contracts\Auth\CanResetPassword as CanResetPasswordContract;
    use Illuminate\Database\Eloquent\Factories\HasFactory;
    use Illuminate\Foundation\Auth\Access\Authorizable;
    use Illuminate\Notifications\Notifiable;
    
    class User extends ThirdPartyModel implements
        AuthenticatableContract,
        AuthorizableContract,
        CanResetPasswordContract
    {
        use Authenticatable, Authorizable, CanResetPassword, MustVerifyEmail;
        use HasFactory, Notifiable; // Your existing traits
    }

    What Each Part Does

    • Authenticatable: Login/logout, remember tokens, password verification
    • Authorizable: Gate and policy authorization
    • CanResetPassword: Password reset emails and tokens
    • MustVerifyEmail: Email verification (optional)

    Check for Conflicts First

    Before applying all four traits, check if the third-party model already implements some of them. For example, if the package model already has Authenticatable, you’d skip that trait to avoid conflicts.

    You can inspect the package’s base model source or test by adding traits one at a time and watching for “trait method conflict” errors.
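
    If a method does collide, PHP's insteadof / as syntax lets you resolve it explicitly instead of dropping a whole trait (PackageAuthTrait below is a hypothetical trait supplied by the package):

    class User extends ThirdPartyModel implements AuthenticatableContract
    {
        use Authenticatable, PackageAuthTrait {
            // Prefer Laravel's implementation where both traits define the method
            Authenticatable::getAuthIdentifier insteadof PackageAuthTrait;
            // Keep the package version reachable under another name if needed
            PackageAuthTrait::getAuthIdentifier as packageAuthIdentifier;
        }
    }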

    Why This Works

    Laravel’s auth system doesn’t care about inheritance — it only checks for contract implementation. As long as your model implements AuthenticatableContract and the required methods (via traits or manual implementation), Auth::user(), middleware, and guards will work normally.

    This pattern lets you integrate any Eloquent-compatible package while keeping Laravel’s authentication intact.

  • Explicit Formatting Constraints for Laravel AI SDK Structured Output

    The Problem: Inconsistent AI Output

    When using Laravel AI SDK’s structured output feature, you might notice the AI produces different formatting styles across multiple runs—even with the same prompt. One run might return markdown with **bold** and bullet lists, while another returns plain text.

    Why This Happens

    Large language models interpret prompts probabilistically. Without explicit constraints, the AI makes formatting decisions based on context and training data patterns. This leads to output variance that’s hard to predict.

    The Solution: Explicit Format Constraints

    Add formatting instructions directly in your schema field descriptions:

    use Illuminate\Contracts\JsonSchema\JsonSchema;
    use Laravel\Ai\Contracts\HasStructuredOutput;
    
    class ContentGenerator implements HasStructuredOutput
    {
        public function schema(JsonSchema $schema): array
        {
            return [
                'content' => $schema->string()
                    ->description('
                        Write the content naturally.
    
                        Format: One paragraph per section.
                        Plain text only - no markdown or formatting.
                        Use newlines for structure.
    
                        Express times in 24-hour format (0600, 1400, 2000).
                    ')
                    ->required(),
            ];
        }
    }
    

    Key Constraints to Specify

    Format type:

    • “Plain text only - no markdown”
    • “Use markdown formatting”
    • “Return as HTML”

    Structure:

    • “Use newlines for structure”
    • “One item per line”
    • “Separate sections with double newlines”

    Consistency rules:

    • “Express times in HHMM format”
    • “Use sentence case for headings”
    • “No bullet points or numbered lists”

    Real-World Example

    In a trail data extraction system, the schema description evolved from generic guidance to explicit constraints:

    // Before (inconsistent)
    'schedule' => $schema->string()
        ->description('Extract the itinerary from the source.')
        ->required(),
    
    // After (consistent)
    'schedule' => $schema->string()
        ->description('
            Extract the itinerary from the source.
    
            Format: List each day with activities one per line.
            Show time (HHMM) and description.
            Plain text only - no markdown or formatting.
            Use newlines for structure.
        ')
        ->required(),
    

    The Result

    After adding explicit formatting constraints:

    • Output became consistent across multiple runs
    • No more surprise markdown in plain-text fields
    • Easier to parse and display in templates

    When to Use This

    Always specify format constraints when:

    • Output will be displayed directly to users
    • You’re parsing the output programmatically
    • Consistency matters more than creativity
    • Multiple AI runs need identical formatting

    You can be more lenient when:

    • The AI is generating creative content
    • Formatting flexibility is desired
    • You’re post-processing the output anyway

    Bottom Line

    Don’t assume the AI will infer your formatting preferences. Explicit beats implicit—especially when dealing with probabilistic systems. Two sentences in your schema description can save hours of debugging inconsistent output.