Author: Daryle De Silva

  • Safely Update Environment Variables Across Multiple Production Servers

    The Check-Update-Verify Pattern

    When you need to update configuration values like API credentials across multiple production servers, a systematic approach prevents costly mistakes. Here’s a three-step pattern that gives you full visibility and confidence:

    Step 1: Check Current State

    Before making changes, audit what’s currently deployed across all servers. Use SSH with grep to read environment variables:

    ssh web01 "grep '^API_KEY=' /var/www/app/.env"
    ssh web02 "grep '^API_KEY=' /var/www/app/.env"
    ssh api01 "grep '^API_KEY=' /var/www/app/.env"
    

    This reveals discrepancies immediately. You might discover that some servers already have the updated value, or that different servers are using different credentials entirely.

    Step 2: Update with Precision

    Use sed to make surgical changes without touching other configuration values:

    ssh web01 "sed -i 's/^API_KEY=.*/API_KEY=\"sk_live_abc123xyz\"/' /var/www/app/.env"
    ssh web02 "sed -i 's/^API_KEY=.*/API_KEY=\"sk_live_abc123xyz\"/' /var/www/app/.env"
    ssh api01 "sed -i 's/^API_KEY=.*/API_KEY=\"sk_live_abc123xyz\"/' /var/www/app/.env"
    

    The caret (^) anchor is crucial here—it ensures you only match lines that start with API_KEY=, preventing accidental modifications elsewhere in the file where that string might appear in comments or other contexts.
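    The anchor's effect is easy to confirm locally before touching a real server. A quick sketch with a throwaway file and made-up values:

    ```shell
    # Throwaway .env with the string API_KEY in three different contexts
    cat > /tmp/demo.env <<'EOF'
    # Comment mentioning API_KEY=old
    PARTNER_API_KEY="other"
    API_KEY="old"
    EOF

    # The ^ anchor limits the substitution to the real assignment line
    sed -i 's/^API_KEY=.*/API_KEY="new"/' /tmp/demo.env
    grep 'API_KEY' /tmp/demo.env
    ```

    Only the last line changes; the comment and PARTNER_API_KEY survive untouched.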

    Step 3: Verify Success

    Run the same grep command from step 1 across all servers again to confirm consistency:

    ssh web01 "grep '^API_KEY=' /var/www/app/.env"
    ssh web02 "grep '^API_KEY=' /var/www/app/.env"  
    ssh api01 "grep '^API_KEY=' /var/www/app/.env"
    

    All servers should now return identical output. If any server differs, you caught it before it causes problems.
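    If you want the comparison done for you, pipe each server's line through sort -u and count the distinct values; one distinct line means every server agrees. Sketched here with local files standing in for the servers' .env files:

    ```shell
    # Local stand-ins for the web01/web02/api01 .env files
    for f in /tmp/web01.env /tmp/web02.env /tmp/api01.env; do
        echo 'API_KEY="sk_live_abc123xyz"' > "$f"
    done

    # In production this grep would be the output of the three ssh commands
    distinct=$(grep -h '^API_KEY=' /tmp/web01.env /tmp/web02.env /tmp/api01.env | sort -u | wc -l)
    echo "$distinct"   # 1 means all servers match
    ```

    Any value greater than 1 tells you at least one server is out of sync before it causes problems.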

    When to Use This Pattern

    This approach shines when:

    • You don’t have configuration management tools like Ansible or Puppet set up
    • You need to make a one-off change quickly without going through a full deployment pipeline
    • You’re working in an environment where you have SSH access and sufficient permissions on the production servers
    • The number of servers is small enough that manual SSH is practical (typically under 10-15 servers)

    The pattern trades automation for visibility and control. You see exactly what’s happening at each step, which is valuable when working with sensitive configuration like API credentials or database passwords.

    Pro Tips

    Escape quotes properly: When the value contains quotes, escape them in the sed command: \"value-here\". If the value contains forward slashes, switch sed’s delimiter instead of escaping each one: s|^API_KEY=.*|API_KEY=\"new\"|

    Use a consistent naming pattern: If your .env file has similar variable names like API_KEY, API_KEY_SANDBOX, and PARTNER_API_KEY, the ^ anchor prevents accidentally matching the wrong one.

    Test on one server first: If you’re unsure about the sed syntax, run it on one server, verify the result, then proceed to the others.

    Consider a for loop: For many servers, wrap it in a loop:

    for server in web01 web02 api01; do
        echo "Updating $server..."
        ssh "$server" "sed -i 's/^API_KEY=.*/API_KEY=\"new-value\"/' /var/www/app/.env"
        ssh "$server" "grep '^API_KEY=' /var/www/app/.env"
    done
    

    This pattern isn’t a replacement for proper configuration management, but it’s a pragmatic technique for those moments when you need to make a quick, safe change across a handful of servers without the overhead of a full deployment pipeline.

  • Building Inline Edit UI with Vue.js in Laravel Blade

    Inline editing is a great UX pattern – users can edit data right where they see it, without navigating to a separate form. Here’s how to implement it cleanly with Vue.js in a Laravel Blade template.

    The Pattern

    The key is tracking three states: viewing, editing, and saving. Here’s the basic structure:

    new Vue({
        el: '#product-details',
        data: {
            editing: false,
            saving: false,
            form: {
                // @json safely escapes quotes and special characters
                name: @json($product->name),
                price: @json($product->price),
                description: @json($product->description)
            },
            original: {}
        },
        methods: {
            startEditing() {
                // Store original values for cancel
                this.original = { ...this.form };
                this.editing = true;
            },
            
            cancelEditing() {
                // Restore original values
                this.form = { ...this.original };
                this.editing = false;
            },
            
            async save() {
                this.saving = true;
                
                try {
                    const response = await axios.put(
                        '/api/products/{{ $product->id }}',
                        this.form
                    );
                    
                    // Update local state with server response
                    // This keeps UI in sync without page reload
                    this.form = response.data.product;
                    this.editing = false;
                } catch (error) {
                    alert('Save failed: ' + (error.response?.data?.message ?? error.message));
                } finally {
                    this.saving = false;
                }
            }
        }
    });
    

    The Blade Template

    <div id="product-details">
        <!-- View mode -->
        <div v-if="!editing">
            <strong>Name:</strong> @{{ form.name }}<br>
            <strong>Price:</strong> $@{{ form.price }}<br>
            <strong>Description:</strong> @{{ form.description }}
            
            <button @click="startEditing">Edit</button>
        </div>
        
        <!-- Edit mode -->
        <div v-else>
            <input v-model="form.name" placeholder="Name"><br>
            <input v-model="form.price" type="number" placeholder="Price"><br>
            <textarea v-model="form.description" placeholder="Description"></textarea>
            
            <button @click="save" :disabled="saving">
                @{{ saving ? 'Saving...' : 'Save' }}
            </button>
            <button @click="cancelEditing" :disabled="saving">Cancel</button>
        </div>
    </div>
    

    Key Points

    1. Preserve Original Values: When entering edit mode, clone the form data so cancel can restore it
    2. Update After Save: After successful save, update this.form with the server response. This keeps the UI in sync without refreshing the page
    3. Disable During Save: Disable buttons while saving is true to prevent duplicate submissions
    4. Handle Empty States: Show appropriate messaging when fields are empty (both in view and edit mode)
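
    Points 1–3 boil down to a few lines of state juggling. Here is the same logic stripped of Vue and Blade (names are illustrative), runnable anywhere:

    ```javascript
    // Minimal model of the view/edit/save state machine
    const state = { editing: false, saving: false, form: { name: 'Old' }, original: {} };

    function startEditing() {
      state.original = { ...state.form };   // shallow clone for cancel
      state.editing = true;
    }

    function cancelEditing() {
      state.form = { ...state.original };   // restore the snapshot
      state.editing = false;
    }

    function applyServerResponse(product) {
      state.form = product;                 // sync local state with the server
      state.editing = false;
    }

    startEditing();
    state.form.name = 'Edited but abandoned';
    cancelEditing();
    console.log(state.form.name); // 'Old'
    ```

    Note the shallow clone is enough while the form holds flat scalar fields; nested objects would need a deep clone.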

    The Controller

    public function update(Request $request, Product $product)
    {
        $validated = $request->validate([
            'name' => 'required|string|max:255',
            'price' => 'required|numeric|min:0',
            'description' => 'nullable|string'
        ]);
        
        $product->update($validated);
        
        // Return the updated model
        return response()->json([
            'product' => $product->fresh()
        ]);
    }
    

    Why This Works

    This pattern gives you:

    • Instant feedback (no page reload)
    • Clean cancel behavior (reverts to original values)
    • Server validation (with error handling)
    • Optimistic UI updates (form shows saved data immediately)

    The secret sauce is updating this.form with the server response after save. This ensures your Vue state matches what’s actually in the database, without needing to reload the entire page.

  • Including Polymorphic Parent Models in Laravel API Responses

    When building APIs with polymorphic relationships, you might run into a common pitfall: your API returns the morph pivot data (commentable_type and commentable_id) but not the actual parent model itself.

    The Problem

    Let’s say you have a Comment model that can belong to different types of posts (Articles, Videos, etc.) using a polymorphic relationship:

    // Comment model
    public function post()
    {
        // Name the morph so Eloquent uses the commentable_* columns
        return $this->morphTo('commentable');
    }
    

    When you try to include the parent in your API response using ?include=post, you might only get the morph pivot columns instead of the actual post data.

    The Solution

    The key is ensuring your polymorphic relationship is properly defined and that the actual parent model is loaded, not just the pivot data. Here’s how:

    // In your CommentController
    public function index(Request $request)
    {
        $query = Comment::query();
        
        // Support includes via query parameter
        if ($request->has('include')) {
            $includes = explode(',', $request->input('include'));
            
            foreach ($includes as $include) {
                if ($include === 'post') {
                    // Load the actual polymorphic parent
                    $query->with('post');
                }
            }
        }
        
        // Support field selection for included relations
        if ($request->has('fields')) {
            foreach ($request->input('fields', []) as $type => $fields) {
                // Apply sparse fieldsets for each included type
                if ($type === 'articles') {
                    $query->with('post:id,title,slug,published_at');
                }
            }
        }
        
        return $query->get();
    }
    

    Testing Your Endpoint

    Test with various combinations to ensure it works:

    # Basic include
    GET /api/comments?include=post
    
    # Include with field selection
    GET /api/comments?include=post&fields[articles]=title,slug
    
    # Multiple includes
    GET /api/comments?include=author,post&fields[articles]=title
    

    Why This Matters

    When the polymorphic parent isn’t properly loaded, your frontend gets incomplete data. Instead of the article title and content, you’d only see article_id and article_type – forcing additional API calls to fetch the actual data.

    By loading the full parent model, your API becomes more efficient and easier to consume. This follows JSON:API conventions for sparse fieldsets and relationship includes.

  • Extract Nested Closures into Private Methods for Maintainability

    Laravel's collection fluent interface is powerful, but deeply nested closures quickly become unreadable. Here's when and how to refactor.

    ## The Problem: Triple-Nested Closures

    ```php
    return $dates->map(function ($date) use ($variants, $allPricing, $defaults) {
        return new DatePrice(
            $variants->map(function ($variant, $key) use ($date, $allPricing, $defaults) {
                $match = $allPricing->filter(function ($pricing) use ($date, $variant) {
                    return $pricing->date === $date && $pricing->variant === $variant;
                })->first();

                return $match?->price ?? $defaults[$key];
            }),
            $date
        );
    });
    ```

    This is doing too much in one statement:

    1. Filtering pricing by date and variant
    2. Falling back to default if not found
    3. Mapping across all variants
    4. Creating the final structure

    ## The Solution: Extract Inner Logic

    Pull the innermost logic into a private method:

    ```php
    private function findPriceForVariantOnDate(
        Collection $allPricing,
        array $variant,
        string $date,
        ?Price $defaultPrice
    ): ?Price {
        return $allPricing
            ->filter(fn ($p) => $p->date === $date && $p->matchesVariant($variant))
            ->first()?->price ?? $defaultPrice;
    }
    ```

    Now the main logic becomes clear:

    ```php
    // Restructure: one DatePrice per date, containing prices for all variants
    return $dates->map(function ($date) use ($variants, $allPricing, $defaults) {
        return new DatePrice(
            $variants->mapWithKeys(function ($variant, $key) use ($date, $allPricing, $defaults) {
                $price = $this->findPriceForVariantOnDate(
                    $allPricing,
                    $variant,
                    $date,
                    $defaults[$key]
                );

                return [$key => $price];
            }),
            $date
        );
    });
    ```

    ## When to Extract

    **Extract when:**

    - 3+ levels of nesting
    - Inner logic is reused elsewhere
    - The closure has complex conditionals
    - You need to unit test the inner logic separately

    **Keep inline when:**

    - 1-2 levels of nesting with simple operations
    - The closure is very short (1-2 lines)
    - Extraction would require passing 5+ parameters

    ## Naming Matters

    The extracted method name should read like documentation:

    - `findPriceForVariantOnDate()` – immediately clear what it does
    - `matchesVariant()` – better than `in_array($variant, $pricing->variants)`
    - `getDefaultPrices()` – better than `$fallbacks`

    Future developers (including you) will thank you.

  • Refactoring for Multi-Variant Pricing Support in Laravel

    When evolving a Laravel application to support multiple pricing variants (like product tiers, regions, or customer types), here's a clean refactoring pattern that maintains backward compatibility while enabling complex pricing structures.

    ## The Challenge

    You start with single-variant pricing:

    ```php
    public function fetchPricing($inventory)
    {
        $config = $inventory->api_config_standard;
        $response = $this->client->getPricing($config['product_id']);

        return $response->prices;
    }
    ```

    Now you need to support premium, standard, and budget variants – each with different pricing.

    ## The Solution: Reference Extraction Pattern

    Create a helper method that extracts all configured variants:

    ```php
    private function getVariants($inventory): Collection
    {
        return collect($inventory->getAllApiConfigs())
            ->mapWithKeys(fn ($config) => [$this->getVariantKey($config) => $config]);
    }

    private function getVariantKey($config): string
    {
        return $config['tier'] . '_' . $config['region'];
    }
    ```

    Then refactor your pricing method to loop through all variants:

    ```php
    public function fetchAdvancedPricing($inventory): Collection
    {
        $allPricing = collect();
        $variants = $this->getVariants($inventory);

        foreach ($variants as $variantKey => $config) {
            $response = $this->client->getPricing($config['product_id']);

            foreach ($response->dates as $date) {
                $allPricing->push(new DatePrice(
                    prices: collect([new VariantPrice($config, $date->price)]),
                    date: $date->value
                ));
            }
        }

        // Group dates with the same pricing across all variants
        return $allPricing->groupBy(fn ($dp) => $dp->date)
            ->map(fn ($group) => new DatePrice(
                prices: $group->flatMap(fn ($dp) => $dp->prices),
                date: $group->first()->date
            ));
    }
    ```

    ## Default Pricing Fallback

    Some dates might not have pricing for certain variants. Create a fallback:

    ```php
    private function getDefaultPrices($variants, $allPricing): Collection
    {
        $prices = $allPricing->flatMap(fn ($dp) => $dp->prices);

        return $variants->map(function ($config) use ($prices) {
            return $prices
                ->where('variant_key', $this->getVariantKey($config))
                ->sortBy('price')
                ->first() ?? new VariantPrice($config, null);
        });
    }
    ```

    ## Benefits

    - **Backward compatible:** Single variant still works (a collection of one)
    - **Flexible:** Add new variants without changing core logic
    - **Clean:** Separation of concerns (extraction, fetching, grouping)

    This pattern works for any multi-dimensional pricing: age groups, membership tiers, regional pricing, seasonal rates, etc.

  • Running Multiple Laravel Mix Hot Reload Servers Simultaneously

    When working on Laravel applications with multiple frontends—think public website, admin dashboard, and client portal—developers often hit a frustrating wall: you can only run one npm run hot process at a time. Switch to working on the admin panel? Restart the entire Node container. Back to the main site? Restart again.

    The culprit? Port conflicts. Every webpack-dev-server instance tries to bind to port 8080 by default, and Docker won’t let two containers claim the same port.

    The Solution: HOT_PORT Environment Variable

    Laravel Mix already supports this use case through the HOT_PORT environment variable. Both your main webpack.mix.js and any project-specific configs check for it:

    // webpack.mix.js
    // startsWith() matches both "hot" and custom scripts like "hot:admin"
    if ((process.env.npm_lifecycle_event || '').startsWith('hot')) {
      mix.webpackConfig({
        devServer: {
          host: '0.0.0.0',
          port: process.env.HOT_PORT || 8080
        }
      });
    }

    This means you can run multiple hot reload servers simultaneously, each on its own port, by defining custom npm scripts:

    // package.json
    {
      "scripts": {
        "hot": "cross-env NODE_ENV=development webpack-dev-server --inline --hot ...",
        "hot:admin": "cross-env HOT_PORT=8081 project=admin NODE_ENV=development webpack-dev-server --inline --hot ..."
      }
    }

    Docker Setup

    Expose both ports in your docker-compose.yml:

    services:
      node:
        ports:
          - 8080:8080  # Default hot reload
          - 8081:8081  # Admin panel hot reload

    Running Multiple Servers

    Now you can develop on multiple frontends simultaneously:

    # Terminal 1: Default site (port 8080)
    npm run hot
    
    # Terminal 2: Admin panel (port 8081)
    npm run hot:admin

    Each webpack-dev-server watches its own files and serves its own HMR updates. Changes to your public site won’t trigger admin panel rebuilds, and vice versa.

    When This Matters

    This approach shines in monolithic Laravel apps with distinct user interfaces:

    • Customer portal (Vue.js, Tailwind)
    • Admin dashboard (different Vue components, Bootstrap)
    • Partner interface (unique styling, separate Vuex stores)

    Instead of context-switching between npm processes and waiting for container restarts, you keep all your dev servers running. Jump between codebases without losing hot reload state.

    Bonus: Conditional Configs

    Laravel Mix supports loading different webpack configs based on process.env.project:

    // webpack.mix.js
    const project = process.env.project;
    
    if (project) {
      require(`${__dirname}/webpack.mix.${project}.js`);
      return;
    }
    
    // Default config follows...

    This lets you maintain clean, project-specific build configs (e.g., webpack.mix.admin.js, webpack.mix.customer.js) while still leveraging the shared HOT_PORT pattern.
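
    The env-switch idea generalizes beyond Mix. A framework-free sketch (project names, ports, and entry paths are hypothetical):

    ```javascript
    // Resolve a build config from process.env.project, falling back to a default
    const configs = {
      admin:    { port: 8081, entry: 'resources/admin/app.js' },
      customer: { port: 8082, entry: 'resources/customer/app.js' },
    };

    function resolveConfig(project) {
      return configs[project] ?? { port: 8080, entry: 'resources/app.js' };
    }

    console.log(resolveConfig('admin').port);   // 8081
    console.log(resolveConfig(undefined).port); // 8080
    ```

    The fallback branch is what keeps plain `npm run hot` working untouched while each named project gets its own settings.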

    Key Takeaway

    Laravel Mix’s HOT_PORT support isn’t just a convenience—it’s a workflow multiplier for multi-frontend monoliths. No more container restarts. No more choosing which part of your app gets hot reload. Just parallel dev servers doing what they do best.

  • Debugging Eloquent Relationships with toRawSql() Before Writing Migrations

    You just defined a new Eloquent relationship. Now you need to write the migration. What columns should you create? What should they be named? What foreign keys do you need?

    Stop guessing. Use toRawSql() to see exactly what Eloquent expects before you write a single line of migration code.

    The Problem: Eloquent’s Naming Conventions

    Laravel has conventions for relationship columns:

    • hasMany expects parent_id in the child table
    • morphToMany expects taggable_id and taggable_type
    • hasManyThrough expects specific columns in both intermediate and target tables

    If you get any of these wrong, your queries fail at runtime. The fix: inspect the raw SQL before you migrate.

    Using toRawSql() in Tinker

    Define your relationship in the model:

    // app/Models/Order.php
    public function items()
    {
        return $this->hasMany(Item::class);
    }

    Before writing the migration, open php artisan tinker and run:

    Order::has('items')->toRawSql();

    Output:

    select * from `orders` where exists (
        select * from `items`
        where `orders`.`id` = `items`.`order_id`
    )

    Boom. Eloquent expects an order_id column in the items table. Now you know what to create:

    Schema::create('items', function (Blueprint $table) {
        $table->id();
        $table->foreignId('order_id')->constrained();
        $table->string('name');
        $table->timestamps();
    });

    Example: Polymorphic Relationships

    Let’s say you’re building a tagging system. Products can have tags:

    // app/Models/Product.php
    public function tags()
    {
        return $this->morphToMany(Tag::class, 'taggable');
    }

    What columns does this need? Check Tinker:

    Product::has('tags')->toRawSql();

    Output:

    select * from `products` where exists (
        select * from `tags`
        inner join `taggables` on `tags`.`id` = `taggables`.`tag_id`
        where `products`.`id` = `taggables`.`taggable_id`
        and `taggables`.`taggable_type` = 'App\\Models\\Product'
    )

    Now you know:

    • Pivot table must be named taggables
    • It needs tag_id, taggable_id, and taggable_type columns
    • The taggable_type column will store the full model class name

    Write the migration:

    Schema::create('taggables', function (Blueprint $table) {
        $table->id();
        $table->foreignId('tag_id')->constrained()->cascadeOnDelete();
        $table->morphs('taggable'); // Creates taggable_id and taggable_type
        $table->timestamps();
        
        $table->unique(['tag_id', 'taggable_id', 'taggable_type']);
    });

    Debugging Existing Relationships

    If a relationship query is failing, toRawSql() shows you what Eloquent is actually looking for:

    // This query is failing. Why?
    $orders = Order::with('customer')->get();
    
    // Check what the relationship query expects
    Order::has('customer')->toRawSql();

    Output shows it’s looking for customer_id in the orders table, but your column is named user_id. Fix it in the relationship definition:

    public function customer()
    {
        return $this->belongsTo(User::class, 'user_id');
    }

    Advanced: Nested Relationships

    For complex queries spanning nested relationships, toRawSql() reveals the entire join chain:

    Order::has('items.product')->toRawSql();

    This shows you all the foreign keys Eloquent expects across the entire relationship chain. Invaluable when working with hasManyThrough or deeply nested relationships.

    Pro Tips

    1. Use toRawSql() before migrate: Define relationships first, inspect SQL, then write migrations. Catches naming mismatches immediately.
    2. Works with any query builder method: whereHas, with, has, withCount—if it generates SQL, toRawSql() shows it.
    3. Copy-paste into a database client: Run the raw SQL directly to test if the schema matches your expectations.
    4. Laravel 10+ only: toRawSql() shipped in Laravel 10. On earlier versions, use toSql() (shows SQL with ? placeholders) together with getBindings().

    Bonus: Checking Index Usage

    toRawSql() also helps when you suspect a query is missing an index:

    // Check if the query uses the right index
    Product::where('status', 'active')->orderBy('created_at', 'desc')->toRawSql();

    Paste the SQL into EXPLAIN to see if your indexes are being used.

    Stop Guessing, Start Inspecting

    Eloquent’s conventions are predictable, but when you’re dealing with polymorphic relationships, custom foreign keys, or deep nesting, toRawSql() removes all guesswork. Define the relationship, inspect the SQL, write the migration. No runtime surprises.

  • Polymorphic Bridge Pattern: Connecting Laravel Models to External Systems

    When integrating Laravel with an external system—a headless CMS, a legacy database, an analytics platform—you often need multiple Laravel models to link to records in that external system. The naive approach is adding a cms_post_id column to each model. But that creates tight coupling and makes future changes painful.

    The better pattern: use a polymorphic pivot table as a bridge layer between your domain models and the external system.

    The Problem

    You have Product, Category, and Tag models. Each can link to posts in an external CMS. The obvious solution:

    // products table
    cms_post_id (bigint)
    
    // categories table
    cms_post_id (bigint)
    
    // tags table
    cms_post_id (bigint)

    Now every model needs CMS-specific logic. Testing requires the CMS. Migrating to a different CMS means updating every model. You’ve tightly coupled your domain to infrastructure.

    The Solution: Polymorphic Bridge

    Create an intermediate pivot table that maps any model to external records:

    // Migration: create_cms_links_table.php
    Schema::create('cms_links', function (Blueprint $table) {
        $table->id();
        $table->morphs('linkable'); // linkable_id, linkable_type
        $table->unsignedBigInteger('cms_post_id');
        $table->timestamps();
        
        $table->unique(['linkable_id', 'linkable_type', 'cms_post_id']);
    });

    Now your models use morphToMany relationships to connect through the bridge:

    // app/Models/Product.php
    public function cmsPosts()
    {
        return $this->morphToMany(
            CmsPost::class,
            'linkable',
            'cms_links',
            'linkable_id',
            'cms_post_id'
        );
    }
    
    // app/Models/Category.php
    public function cmsPosts()
    {
        return $this->morphToMany(
            CmsPost::class,
            'linkable',
            'cms_links',
            'linkable_id',
            'cms_post_id'
        );
    }

    Use it like any relationship:

    $product->cmsPosts()->attach($cmsPostId);
    $product->cmsPosts; // Collection of CmsPost models

    Enforce Clean Type Names with Morph Maps

    By default, Laravel stores the full class name in linkable_type: App\Models\Product. If you ever refactor namespaces or rename models, those database values break.

    Fix this with Relation::enforceMorphMap() in a service provider:

    // app/Providers/AppServiceProvider.php
    use Illuminate\Database\Eloquent\Relations\Relation;
    
    public function boot()
    {
        Relation::enforceMorphMap([
            'product' => Product::class,
            'category' => Category::class,
            'tag' => Tag::class,
        ]);
    }

    Now linkable_type stores product instead of App\Models\Product. Your database is decoupled from your code structure.

    Converting to One-to-One Relationships

    If a model should only link to one CMS post, use one() to convert the many-to-many into a one-to-one:

    public function cmsPost()
    {
        return $this->morphToMany(
            CmsPost::class,
            'linkable',
            'cms_links',
            'linkable_id',
            'cms_post_id'
        )->one();
    }

    Now $product->cmsPost returns a single CmsPost instance (or null), not a collection.

    Why This Pattern Wins

    • Clean domain models: No CMS-specific columns polluting your core tables.
    • Flexible: Need to link a new model? Just add the relationship—no migration to add columns.
    • Swappable: Migrating from WordPress to Contentful? Update the CmsPost model and the bridge table, your domain models stay untouched.
    • Testable: Mock the relationship, no need for the external CMS in tests.

    Real-World Example: Analytics Platform

    Suppose you’re tracking user actions in an external analytics platform. Each User, Order, and Event can have an analytics profile ID.

    // Migration
    Schema::create('analytics_links', function (Blueprint $table) {
        $table->id();
        $table->morphs('trackable');
        $table->string('analytics_profile_id');
        $table->timestamps();
        
        $table->unique(['trackable_id', 'trackable_type']);
    });
    
    // Models
    class User extends Model
    {
        public function analyticsProfile()
        {
            return $this->morphToMany(
                AnalyticsProfile::class,
                'trackable',
                'analytics_links',
                'trackable_id',
                'analytics_profile_id'
            )->one();
        }
    }
    
    // Usage
    $user->analyticsProfile()->attach($profileId);
    $profileId = $user->analyticsProfile->id;

    Your core models stay focused on your business logic. The analytics integration is isolated to the bridge table and the AnalyticsProfile model.

    When Not to Use This

    If the foreign ID is part of your domain—like user_id on an Order—put it directly in the table. This pattern is for external systems that your domain doesn’t inherently care about.

    But when you’re integrating with CMSs, analytics platforms, search engines, or any third-party system where multiple models need to link out, the polymorphic bridge pattern keeps your domain clean and your code flexible.

  • Draft-Then-Publish Pattern for Transactional Safety with External Systems

    When your Laravel app integrates with external systems—third-party APIs, CMS platforms via CLI, payment gateways—standard database transactions can’t protect you. If your Laravel transaction succeeds but the external call fails (or vice versa), you’re left with inconsistent state across systems.

    The solution: create external records in a draft or pending state, complete all Laravel operations within a transaction, and only promote the external resource to published/active after the transaction commits.

    The Pattern

    public function handle(): void {
        $createdExternalIds = [];
        
        foreach ($this->items as $item) {
            try {
                // Step 1: Create in external system as DRAFT
                $externalId = $this->createExternalResource($item, status: 'draft');
                $createdExternalIds[] = $externalId;
                
                // Step 2: Laravel DB transaction
                DB::transaction(function () use ($item, $externalId) {
                    $record = Record::create([
                        'external_id' => $externalId,
                        'title' => $item['title'],
                        'status' => 'pending'
                    ]);
                    
                    $record->tags()->sync($item['tags']);
                    // ... other database operations
                });
                
                // Step 3: Success! Publish the external resource
                $this->publishExternalResource($externalId);
                
            } catch (\Throwable $e) {
                report($e);
                $this->error(sprintf('%s: %s', $item['title'], $e->getMessage()));
                continue; // Keep processing other items
            }
        }
    }

    Why This Works

    If the Laravel transaction fails for any reason—validation, constraint violation, or application error—the external record remains in draft state. You can clean it up later, retry the import, or manually investigate. Nothing went live that shouldn’t have.

    If the external API call succeeds but Laravel fails, you still have a draft record in the external system. Your database stays clean.

    Only when both sides succeed does the external record become visible to users.

    When to Use This

    • Content management systems: Create posts/pages as drafts, link them in Laravel, then publish.
    • Third-party APIs: If the API supports draft/pending states (Stripe payment intents, Shopify draft orders, etc.).
    • Batch imports: Processing hundreds or thousands of records where you want partial success rather than all-or-nothing.

    Implementation Tips

    1. Abstract the external calls into methods

    private function createExternalResource(array $data, string $status): string
    {
        // throw() raises an exception on any 4xx/5xx response, so a failed
        // create can't silently hand back a null id
        $response = Http::post('https://api.example.com/resources', [
            'title' => $data['title'],
            'status' => $status,
        ])->throw();
        
        return $response->json('id');
    }
    
    private function publishExternalResource(string $id): void
    {
        Http::patch("https://api.example.com/resources/{$id}", [
            'status' => 'published',
        ])->throw();
    }

    2. Use continue on errors to process remaining items

    In batch operations, don’t let one failure kill the entire import. Log it, report it, move on.

    3. Consider cleanup jobs for orphaned drafts

    If your process crashes halfway through, you might have draft records in the external system with no corresponding Laravel record. Schedule a daily cleanup job that checks for drafts older than 24 hours and deletes them.
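
    A minimal sketch of such a job, assuming a hypothetical `$this->cms` client with `listDrafts()` and `deletePost()` methods (none of these names come from the original; adapt them to your API):

    ```php
    // A sketch only: listDrafts() and deletePost() are hypothetical
    // stand-ins for whatever your external API actually exposes.
    class PruneOrphanedDrafts extends Command
    {
        protected $signature = 'cms:prune-drafts';

        public function handle(): void
        {
            // External drafts older than 24 hours
            foreach ($this->cms->listDrafts(olderThanHours: 24) as $draft) {
                // Keep any draft a Laravel record still points at
                if (Record::where('external_id', $draft['id'])->exists()) {
                    continue;
                }

                $this->cms->deletePost($draft['id']);
                $this->info("Pruned orphaned draft {$draft['id']}");
            }
        }
    }
    ```

    Register it in your console kernel with `$schedule->command('cms:prune-drafts')->daily();`.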

    Real-World Example: CMS Integration

    Let’s say you’re building a product catalog in Laravel that syncs to a headless CMS for public display. Each Laravel Product needs a corresponding CMS post.

    // Step 1: Create the CMS post as a draft (external side first)
    $cmsPostId = $this->cms->createPost([
        'title' => $productData['name'],
        'status' => 'draft',
        'content' => $productData['description']
    ]);
    
    // Step 2: Create the Laravel product inside a transaction
    DB::transaction(function () use ($productData, $cmsPostId) {
        $product = Product::create([
            'name' => $productData['name'],
            'sku' => $productData['sku'],
            'cms_post_id' => $cmsPostId,
        ]);
        
        $product->categories()->sync($productData['category_ids']);
    });
    
    // Step 3: Both sides succeeded, publish the CMS post
    $this->cms->publishPost($cmsPostId);

    If anything inside the transaction fails—duplicate SKU, missing category, whatever—the CMS post stays as a draft. Your public site never shows broken data.

    When You Can’t Use Drafts

    Not every external system supports draft states. In those cases:

    • Do the Laravel work first, then make the external call after the transaction commits.
    • Use queued jobs for the external call—if it fails, retry logic kicks in.
    • Store the external state in a pending column in Laravel, update it to synced after success.
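
    The first two options fit together in a few lines. A minimal sketch, assuming a hypothetical `SyncToExternalApi` queued job (a name not from the original):

    ```php
    // Create the row as pending inside the transaction, then let Laravel
    // dispatch the sync job only once the commit actually lands.
    DB::transaction(function () use ($item) {
        $record = Record::create([
            'title'  => $item['title'],
            'status' => 'pending', // the job flips this to 'synced' on success
        ]);

        // afterCommit() defers the dispatch until the transaction commits;
        // on rollback the job is never queued at all.
        SyncToExternalApi::dispatch($record)->afterCommit();
    });
    ```

    With retries configured on the job, a flaky external API simply retries later; the database row stays honestly marked pending until the sync succeeds.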

    But if the external system does support drafts or pending states, use them. It’s the cleanest way to maintain consistency across both systems.

  • Fixing 504 Gateway Timeout in Docker Development

    The Problem

    You’re running a Laravel app in Docker (nginx + PHP-FPM), and you keep hitting 504 Gateway Timeout errors on pages that work fine in production. Long-running reports, imports, and exports all fail locally but succeed on the live server.

    This is a configuration mismatch: your local Docker setup has default timeouts, but production doesn’t.

    The Root Cause

    Two layers have timeout settings that need to align:

    1. PHP execution limits: How long PHP will run before killing a script
    2. nginx FastCGI timeouts: How long nginx will wait for PHP-FPM to respond

    A 504 specifically means nginx gave up waiting for PHP-FPM; if PHP’s own limit fires first, you get a fatal error instead. Either layer can cut a long request short, so both need fixing.

    Check Production Settings First

    SSH into your production server and check what’s actually running:

    php -i | grep -E 'max_execution_time|max_input_time|memory_limit'

    You’ll probably see something like:

    max_execution_time => 0
    max_input_time => -1
    memory_limit => -1

    0 and -1 mean unlimited. Production doesn’t kill long-running scripts. Your local Docker setup probably has defaults like 30s for execution time and 60s for nginx FastCGI timeout.

    Fix 1: PHP Timeout Settings

    Create or update your PHP overrides file:

    ; docker/php/php-overrides.ini
    max_execution_time = 0
    max_input_time = -1
    memory_limit = -1
    upload_max_filesize = 50M
    post_max_size = 50M

    Mount this file in your docker-compose.yml:

    services:
      php:
        image: php:8.2-fpm
        volumes:
          - ./docker/php/php-overrides.ini:/usr/local/etc/php/conf.d/99-overrides.ini
          - ./:/var/www/html

    Fix 2: nginx FastCGI Timeouts

    Update your nginx site config:

    # docker/nginx/site.conf
    server {
        listen 80;
        root /var/www/html/public;
        index index.php;
    
        location ~ \.php$ {
            fastcgi_pass php:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
    
            # Add these timeout settings
            fastcgi_read_timeout 300s;
            fastcgi_send_timeout 300s;
            # nginx caps connect timeouts at 75s; larger values are ignored
            fastcgi_connect_timeout 75s;
        }
    }

    Mount this in docker-compose.yml:

    services:
      nginx:
        image: nginx:alpine
        volumes:
          - ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
          - ./:/var/www/html
        ports:
          - "8080:80"

    Why These Numbers Matter

    Default Behavior (Broken)

    • PHP max_execution_time: 30s
    • nginx fastcgi_read_timeout: 60s
    • Your report generation: 120s

    Result: PHP’s 30s limit counts only CPU time on Linux, so the I/O-heavy report sails past it → nginx stops waiting at 60s → 504 error at 60s

    Production Behavior (Works)

    • PHP max_execution_time: 0 (unlimited)
    • nginx fastcgi_read_timeout: 300s (5 minutes)
    • Your report generation: 120s

    Result: PHP finishes at 120s → nginx receives response → success

    Full docker-compose.yml Example

    version: '3.8'
    
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "8080:80"
        volumes:
          - ./docker/nginx/site.conf:/etc/nginx/conf.d/default.conf
          - ./:/var/www/html
        depends_on:
          - php
    
      php:
        image: php:8.2-fpm
        volumes:
          - ./docker/php/php-overrides.ini:/usr/local/etc/php/conf.d/99-overrides.ini
          - ./:/var/www/html
        environment:
          - DB_HOST=mysql
          - DB_DATABASE=laravel
          - DB_USERNAME=root
          - DB_PASSWORD=secret
    
      mysql:
        image: mysql:8.0
        environment:
          MYSQL_ROOT_PASSWORD: secret
          MYSQL_DATABASE: laravel
        volumes:
          - mysql_data:/var/lib/mysql
    
    volumes:
      mysql_data:

    After Making Changes

    Restart your containers:

    docker-compose down
    docker-compose up -d

    Verify PHP settings took effect:

    docker-compose exec php php -i | grep max_execution_time

    You should see max_execution_time => 0.
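
    It’s worth verifying the nginx side the same way. Assuming the service names from the compose file above:

    ```shell
    # Confirm the mounted site config is the one nginx actually loaded
    docker-compose exec nginx grep fastcgi_read_timeout /etc/nginx/conf.d/default.conf

    # Ask nginx to validate the config it is running with
    docker-compose exec nginx nginx -t
    ```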

    When to Use Unlimited vs Fixed Timeouts

    Development: Unlimited (What We Just Did)

    • Mirrors production behavior
    • Prevents false negatives (things that work in prod fail locally)
    • Easier debugging (long operations don’t timeout mid-execution)

    Production: Consider Limits

    Unlimited timeouts in production can be dangerous:

    • Runaway scripts can hang forever
    • Resource exhaustion under load
    • Harder to detect infinite loops

    If your production has unlimited timeouts and you’re seeing issues, consider:

    max_execution_time = 300  ; 5 minutes
    memory_limit = 512M        ; Generous but not unlimited
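
    If you do cap PHP in production, keep nginx aligned the same way as before: give fastcgi_read_timeout a value at or just above PHP’s limit, so PHP (with its clean fatal error) terminates the request rather than nginx with an opaque 504:

    ```nginx
    # Slightly above PHP's 300s limit, so PHP times out first
    fastcgi_read_timeout 310s;
    ```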

    The Takeaway

    When you get 504 errors in Docker that don’t happen in production, check timeout alignment between:

    1. PHP execution limits (php.ini or php-overrides.ini)
    2. nginx FastCGI timeouts (fastcgi_read_timeout)

    Mirror your production settings locally. Don’t debug phantom timeout issues—just make the environments match.