Blog

  • Extract Cookie Domain from URL — Don’t Hardcode It

    Sending cookies to an API? Don’t hardcode the domain. Extract it from the URL instead.

    The Problem

    // ❌ Hardcoded domain — breaks when URL changes
    $cookieJar->setCookie(new SetCookie([
        'Name' => 'session',
        'Value' => $token,
        'Domain' => 'api.example.com',
    ]));

    Hardcoded domains break the moment someone changes the base URL in config, or you switch between staging and production environments.

    The Fix

    // ✅ Extract domain dynamically
    use GuzzleHttp\Cookie\SetCookie;

    $baseUrl = config('services.api.base_url');
    $domain = parse_url($baseUrl, PHP_URL_HOST);
    
    $cookieJar->setCookie(new SetCookie([
        'Name' => 'session',
        'Value' => $token,
        'Domain' => $domain,
    ]));

    parse_url() with PHP_URL_HOST gives you just the hostname — no protocol, no path, no port. Clean and environment-agnostic.
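
    To see it shrug off the messy cases, here’s a quick sketch (the URLs are made up) with a port, a path, and a query string in the mix:

    $urls = [
        'https://api.example.com:8443/v2/orders?page=1',
        'http://staging.example.com/',
    ];

    foreach ($urls as $url) {
        var_dump(parse_url($url, PHP_URL_HOST));
    }
    // string(15) "api.example.com"
    // string(19) "staging.example.com"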

    Takeaway

    Any time you need a domain, host, or path from a URL — use parse_url(). It handles edge cases (ports, trailing slashes, query strings) that string manipulation misses.

  • UUID v1 for Sessions, UUID v4 for Requests

    Not all UUIDs are created equal. When you need to replicate how a browser or external system generates identifiers, the version matters.

    UUID v1 vs v4

    UUID v4 is random — great for request IDs where uniqueness is all you need:

    use Ramsey\Uuid\Uuid;
    
    // Each request gets a unique random ID
    $requestId = Uuid::uuid4()->toString();
    // e.g., "a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d"

    UUID v1 is time-based — it embeds its creation timestamp, which is useful for session IDs where you want the identifier itself to tell you when it was created:

    // Session ID that encodes when it was created
    $sessionId = Uuid::uuid1()->toString();
    // e.g., "6ba7b810-9dad-11d1-80b4-00c04fd430c8"

    When to Use Which

    • v4 (random): Request IDs, correlation IDs, idempotency keys — anything where uniqueness matters but order doesn’t
    • v1 (time-based): Session IDs, event IDs, audit logs — anything where you want to recover the creation time from the ID itself or match a system that issues time-based identifiers
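
    That temporal angle is easy to see in code. A minimal sketch, assuming ramsey/uuid 4.x, where time-based UUIDs expose getDateTime():

    use Ramsey\Uuid\Uuid;

    // v1 embeds its creation timestamp, so you can read it back later
    $sessionId = Uuid::uuid1();
    echo $sessionId->getDateTime()->format('Y-m-d H:i:s');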

    Takeaway

    Match the UUID version to the lifecycle. Random for one-off requests, time-based for persistent sessions. It’s a small detail that makes debugging much easier when you’re tracing requests through logs.

  • Don’t Hardcode Cache TTL — Use What the API Tells You

    Working with an API that returns authentication tokens? Don’t hardcode the cache TTL. The API already tells you when the token expires — use it.

    The Common Mistake

    // ❌ Hardcoded — what if the API changes expiry?
    Cache::put('api_token', $token, 3600);

    Hardcoding means your cache could expire before the token does (wasting API calls) or after it does (causing auth failures).

    The Fix

    // ✅ Dynamic — uses what the API tells you
    $response = Http::post('https://api.example.com/auth', [
        'client_id' => config('services.api.client_id'),
        'client_secret' => config('services.api.client_secret'),
    ]);
    
    $data = $response->json();
    $token = $data['access_token'];
    $expiresIn = $data['expires_in']; // seconds
    
    // Cache with a small buffer (expire 60s early)
    Cache::put('api_token', $token, $expiresIn - 60);

    The expires_in field is there for a reason. Subtract a small buffer (30-60 seconds) to avoid edge cases where your cache and the token expire at the same instant.
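
    Wrapped into a small helper, the whole flow looks something like this (a sketch only: the endpoint and config keys are placeholders, and the max() call keeps the TTL positive if expires_in is ever tiny):

    use Illuminate\Support\Facades\Cache;
    use Illuminate\Support\Facades\Http;

    function getApiToken(): string
    {
        // Reuse the cached token while it's still valid
        if ($token = Cache::get('api_token')) {
            return $token;
        }

        $data = Http::post('https://api.example.com/auth', [
            'client_id' => config('services.api.client_id'),
            'client_secret' => config('services.api.client_secret'),
        ])->json();

        // Apply the 60-second buffer, but never cache with a zero or negative TTL
        $ttl = max($data['expires_in'] - 60, 1);
        Cache::put('api_token', $data['access_token'], $ttl);

        return $data['access_token'];
    }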

    Takeaway

    Let the API dictate your cache duration. It’s one less magic number in your codebase, and it automatically adapts if the provider changes their token lifetime.

  • Run the Command, Then Build What It Needs

    When integrating with a complex third-party API, don’t try to architect everything upfront. Start by running the integration point and let the errors guide you.

    The Anti-Pattern

    You read the API docs (if they exist). You design your data models. You write adapters, mappers, and DTOs. Then you finally make your first API call and… nothing works as documented.

    The Better Way

    Run the command first. Let it fail. Each error tells you exactly what to build next:

    # Step 1: Try the API call
    php artisan integration:sync
    
    # Error: "Class ApiClient not found"
    # → Build the client
    
    # Error: "Missing authentication"  
    # → Add the auth flow
    
    # Error: "Cannot map response to DTO"
    # → Build the DTO from the actual response

    Why This Works

    Errors are free documentation. Each one tells you the next thing to build — nothing more. You avoid over-engineering, and every line of code you write solves an actual problem.

    Takeaway

    Stop planning. Start running. Let errors drive your implementation order. You’ll ship faster and build only what you actually need.

  • Tinker for Quick Regex Validation Before Committing

    Testing regex patterns before committing them? Don’t fire up a whole test suite. Use tinker --execute for instant validation.

    The Fast Way

    Laravel’s tinker has an --execute flag that runs code and exits. Perfect for one-liner regex tests:

    php artisan tinker --execute="var_dump(preg_match('/^(?=.*cat)(?=.*dog)/', 'cat and dog'))"

    Output: int(1) (match found)

    Try another:

    php artisan tinker --execute="var_dump(preg_match('/^(?=.*cat)(?=.*dog)/', 'only cat here'))"

    Output: int(0) (no match)
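
    The same trick works for capture groups: dump $matches to see exactly what the pattern grabbed (escape the dollar signs so your shell doesn’t swallow them):

    php artisan tinker --execute="preg_match('/(\d{4})-(\d{2})/', 'invoice-2024-06', \$m); var_dump(\$m);"

    Output: the full $matches array, here "2024-06", "2024", and "06".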

    Why It’s Better

    No need to:

    • Write a test file
    • Create a route
    • Open an interactive REPL session
    • Fire up PHPUnit

    Just run, check output, adjust pattern, run again. Fast feedback loop.

    Takeaway

    Use tinker --execute for quick regex (and other code) validation. It runs in your app context with all your dependencies loaded. Way faster than writing throwaway test files.

  • Regex Lookaheads: Check Multiple Words Without Verbose Permutations

    Need to validate that a string contains multiple words in any order? Don’t write 6 permutations of the same regex. Use positive lookaheads instead.

    The Problem

    You want to check if a string contains “cat” AND “dog” AND “bird” in any order. The naive approach is a mess of permutations — 6 for 3 words, 24 for 4 words. No thanks.

    The Fix: Chained Positive Lookaheads

    $pattern = '/^(?=.*cat)(?=.*dog)(?=.*bird)/';
    $text = 'I saw a bird, then a cat, then a dog';
    
    if (preg_match($pattern, $text)) {
        echo 'All three words found!';
    }

    Each (?=.*word) is a separate assertion. All must pass. Order doesn’t matter. Add a fourth word? Add one more lookahead. Clean and scalable.
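
    If the word list comes from config or user input, you can build the pattern on the fly. A quick sketch (preg_quote() keeps special characters from breaking the regex):

    $words = ['cat', 'dog', 'bird'];

    // One (?=.*word) assertion per required word, order irrelevant
    $pattern = '/^' . implode('', array_map(
        fn ($word) => '(?=.*' . preg_quote($word, '/') . ')',
        $words
    )) . '/i';

    var_dump(preg_match($pattern, 'I saw a bird, then a cat, then a dog')); // int(1)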

    Takeaway

    Positive lookaheads let you check multiple conditions without caring about order. Use (?=.*word) for each required word. Works in PHP, JavaScript, Python — basically everywhere.

  • Google Drive Mounted ≠ Google Drive Synced

    Google Drive is mounted. The folder exists. You open a file. It’s empty or throws an error. Welcome to the cloud storage race condition.

    Mounted ≠ Synced

    Cloud storage clients (Google Drive, OneDrive, Dropbox) mount folders immediately on startup. But syncing the actual files? That happens in the background. The folder exists, but the files are stubs waiting to download.

    If your startup script tries to read those files before sync completes, you’ll get random failures.

    The Fix: Wait for Sync

    Check if files are actually synced before using them:

    #!/bin/bash
    FILE="/path/to/cloud-storage/config.json"
    
    while [ ! -s "$FILE" ]; do
        echo "Waiting for $FILE to sync..."
        sleep 2
    done
    
    echo "File synced, proceeding..."

    The -s flag checks if the file exists and has content (not a 0-byte stub).
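
    If you’d rather not wait forever, bound the loop. The same check with a simple timeout (the path is a placeholder):

    #!/bin/bash
    FILE="/path/to/cloud-storage/config.json"
    TIMEOUT=60
    ELAPSED=0

    while [ ! -s "$FILE" ] && [ "$ELAPSED" -lt "$TIMEOUT" ]; do
        echo "Waiting for $FILE to sync... (${ELAPSED}s)"
        sleep 2
        ELAPSED=$((ELAPSED + 2))
    done

    if [ ! -s "$FILE" ]; then
        echo "Gave up waiting for $FILE" >&2
        exit 1
    fi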

    Takeaway

    Cloud storage mount operations are instant lies. Always verify files are actually synced before using them in startup scripts. Check for file size, not just existence.

  • Why WSL boot Command Doesn’t Work When systemd=true

    Enabled systemd=true in WSL2 and suddenly your boot commands stopped working? Yeah, that’s by design. The boot command in wsl.conf doesn’t play nice with systemd.

    Why It Breaks

    When you enable systemd in WSL2, it becomes PID 1. The traditional boot command runs before systemd starts, but systemd remounts everything and resets the environment. Your boot script ran, but systemd wiped it out.
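
    For reference, this is the kind of wsl.conf setup that quietly stops doing what you expect once systemd is on (the script path is a placeholder):

    # /etc/wsl.conf
    [boot]
    systemd=true
    command=/usr/local/bin/my-startup.sh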

    The Solution: Use systemd Services

    Stop fighting systemd. Use it instead. Create a proper systemd service:

    # /etc/systemd/system/my-startup.service
    [Unit]
    Description=My WSL Startup Script
    After=network.target
    
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/my-startup.sh
    RemainAfterExit=yes
    
    [Install]
    WantedBy=multi-user.target

    Enable it: sudo systemctl enable my-startup.service
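
    After the next restart (wsl --shutdown from Windows, then reopen your distro), confirm it actually ran:

    systemctl status my-startup.service
    journalctl -u my-startup.service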

    Takeaway

    When systemd=true, forget the boot command exists. Create systemd services like you would on any Linux system. It’s more work upfront, but it actually works.

  • WSL2 Boots Before Windows Services (And How to Fix It)

    Here’s a fun edge case: WSL2 can boot faster than Windows services start. Your boot script tries to mount a network drive? Fails. Connect to a VPN? Not running yet. Access a Windows service? Still loading.

    The Race Condition

    WSL2 starts almost instantly when Windows boots. Windows services? They take their sweet time. If your WSL boot script depends on them, you’ll get random failures on startup.

    The Fix: Retry Loop

    Instead of hoping the service is ready, add a retry loop with exponential backoff:

    #!/bin/bash
    MAX_RETRIES=5
    RETRY=0

    # mount_network_drive stands in for whatever mount command or function you actually use
    while [ $RETRY -lt $MAX_RETRIES ]; do
        if mount_network_drive; then
            echo "Mounted successfully"
            break
        fi
        RETRY=$((RETRY + 1))
        sleep $((2 ** RETRY))  # 2, 4, 8, 16, 32 seconds
    done

    Takeaway

    WSL2 boot scripts can’t assume Windows is fully ready. Add retries with backoff for anything that depends on Windows services. Your future self will thank you when it works on the first boot after a restart.

  • Laravel Events: Stop Calling Services Directly

    You’re building a workflow. When an order is cancelled, you need to process a refund. The straightforward approach: call the refund service directly.

    But that creates coupling you’ll regret later.

    The Tight-Coupling Trap

    Here’s what most developers write first:

    class CancelOrderService
    {
        public function __construct(
            private RefundService $refundService
        ) {}
        
        public function cancel(Order $order): void
        {
            $order->update(['status' => 'cancelled']);
            
            // Directly call refund logic
            if ($order->payment_status === 'paid') {
                $this->refundService->process($order);
            }
        }
    }

    This works. But now:

    • CancelOrderService knows about refunds
    • Adding more post-cancellation logic (notifications, inventory updates, analytics) means editing this class every time
    • Testing cancellation requires mocking refund logic

    You’ve tightly coupled two separate concerns.

    Event-Driven Decoupling

    Instead, dispatch an event and let listeners handle side effects:

    class CancelOrderService
    {
        public function cancel(Order $order): void
        {
            $order->update(['status' => 'cancelled']);
            
            // Broadcast what happened, don't dictate what happens next
            event(new OrderCancelled($order));
        }
    }

    Now the cancellation service doesn’t know or care what happens after. It announces the cancellation and moves on.
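
    The event itself is just a small data carrier. A minimal sketch of what OrderCancelled might look like (SerializesModels matters once listeners are queued):

    use Illuminate\Foundation\Events\Dispatchable;
    use Illuminate\Queue\SerializesModels;

    class OrderCancelled
    {
        use Dispatchable, SerializesModels;

        public function __construct(
            public Order $order
        ) {}
    }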

    Listeners Do the Work

    Register listeners to handle post-cancellation tasks:

    // app/Providers/EventServiceProvider.php
    
    protected $listen = [
        OrderCancelled::class => [
            ProcessAutomaticRefund::class,
            NotifyCustomer::class,
            UpdateInventory::class,
            LogAnalytics::class,
        ],
    ];

    Each listener is independent:

    class ProcessAutomaticRefund
    {
        public function __construct(
            private RefundService $refundService
        ) {}
        
        public function handle(OrderCancelled $event): void
        {
            $order = $event->order;
            
            if ($order->payment_status === 'paid') {
                $this->refundService->process($order);
            }
        }
    }

    Now you can:

    • Add listeners without touching the cancellation logic
    • Remove listeners when features are deprecated
    • Test independently — cancel logic doesn’t know refund logic exists (see the test sketch after this list)
    • Queue listeners individually for better performance
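
    On the testing point, here’s what that independence buys you. A sketch assuming an Order model factory and Laravel’s built-in Event fake:

    use Illuminate\Support\Facades\Event;

    public function test_cancelling_an_order_dispatches_the_event(): void
    {
        Event::fake([OrderCancelled::class]);

        $order = Order::factory()->create();

        app(CancelOrderService::class)->cancel($order);

        // The refund, email, and inventory listeners never run here
        Event::assertDispatched(OrderCancelled::class);
    }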

    When to Use Events

    Use events when:

    • Multiple things need to happen: One action triggers several side effects
    • Logic might expand: You anticipate adding more post-action tasks
    • Cross-domain concerns: Orders triggering inventory, notifications, analytics
    • Async processing: Some tasks can be queued, others need to run immediately

    Skip events when:

    • Single responsibility: Only one thing ever happens
    • Tight integration required: Caller needs return values or transaction control
    • Simple workflows: Over-engineering a two-step process

    Bonus: Queue Listeners Selectively

    Make slow listeners async:

    use Illuminate\Contracts\Queue\ShouldQueue;
    use Illuminate\Support\Facades\Mail;

    class NotifyCustomer implements ShouldQueue
    {
        public function handle(OrderCancelled $event): void
        {
            // Runs in background, doesn't slow down cancellation
            Mail::to($event->order->customer)->send(new OrderCancelledEmail());
        }
    }

    Now critical listeners (refund) run immediately, while optional ones (email) queue in the background.

    The Takeaway

    Stop calling side effects directly. Dispatch events instead. Your code becomes more modular, testable, and flexible. When requirements change (they always do), you add a listener instead of refactoring core logic.

    Events aren’t overkill — they’re how you build systems that scale without breaking.