Blog

  • Constructor Injection Over Property Setting in Laravel Service Providers

    I was reviewing a service provider that registered a class by newing it up in a bind() closure and then setting public properties on it. It worked, but it was fragile — properties could be overwritten later, and you couldn’t make them readonly.

    The Before

    // AppServiceProvider.php
    $this->app->bind(NotificationPlugin::class, function ($app) {
        $plugin = new NotificationPlugin();
        $plugin->apiKey = config('services.notify.key');
        $plugin->endpoint = config('services.notify.url');
        $plugin->timeout = 30;
        return $plugin;
    });

    This pattern has a few problems:

    • Properties are mutable — anything can overwrite $plugin->apiKey later
    • No way to use PHP 8.1’s readonly keyword
    • If you forget to set a property, you get a confusing null error at use time instead of an immediate error at construction
    • Hard to test — you need to set up each property individually in tests

    The After

    class NotificationPlugin
    {
        public function __construct(
            public readonly string $apiKey,
            public readonly string $endpoint,
            public readonly int $timeout = 30,
        ) {}
    }
    
    // AppServiceProvider.php
    $this->app->bind(NotificationPlugin::class, function ($app) {
        return new NotificationPlugin(
            apiKey: config('services.notify.key'),
            endpoint: config('services.notify.url'),
            timeout: 30,
        );
    });

    What You Get

    Immutability. Once constructed, the object can’t be modified. readonly enforces this at the language level.

    Fail fast. If you forget a required parameter, PHP throws a TypeError at construction time — not some random null error 200 lines later.

    Easy testing. Just new NotificationPlugin('test-key', 'http://localhost', 5). No setup ceremony.

    Named arguments make it readable. PHP 8’s named parameters mean the service provider binding reads like a config file.

    The Rule

    If you’re setting properties on an object after construction in a service provider, refactor to constructor injection. It’s more explicit, more testable, and lets you use readonly. Your future self will thank you when debugging a “how did this property change?” mystery.

  • Handling Delayed API Responses with Laravel Jobs and Callbacks

    Some third-party APIs don’t give you an instant answer. You send a request, they return "status": "processing", and you’re expected to poll until the result is ready. Payment gateways do this a lot — especially for bank transfers and manual review flows.

    Here’s the pattern that’s worked well for handling this in Laravel.

    The Problem

    Your controller sends a request to an external API. Instead of a final result, you get:

    {
        "transaction_id": "txn_abc123",
        "status": "processing",
        "estimated_completion": "30s"
    }

    You can’t block the HTTP request for 30 seconds. But you also can’t just ignore it — your workflow depends on the result.

    The Solution: Dispatch a Polling Job

    // In your service
    public function initiatePayment(Order $order): void
    {
        $response = Http::post('https://api.provider.com/charge', [
            'amount' => $order->total,
            'reference' => $order->reference,
        ]);
    
        if ($response->json('status') === 'processing') {
            PollPaymentStatus::dispatch(
                transactionId: $response->json('transaction_id'),
                orderId: $order->id,
                attempts: 0,
            )->delay(now()->addSeconds(10));
        }
    }

    The Polling Job

    class PollPaymentStatus implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable;
    
        private const MAX_ATTEMPTS = 10;
        private const POLL_INTERVAL = 15; // seconds
    
        public function __construct(
            private readonly string $transactionId,
            private readonly int $orderId,
            private readonly int $attempts,
        ) {}
    
        public function handle(): void
        {
            $response = Http::get(
                "https://api.provider.com/status/{$this->transactionId}"
            );
    
            $status = $response->json('status');
    
            if ($status === 'completed') {
                $this->onSuccess($response->json());
                return;
            }
    
            if ($status === 'failed') {
                $this->onFailure($response->json('error'));
                return;
            }
    
            // Still processing — re-dispatch with backoff
            if ($this->attempts >= self::MAX_ATTEMPTS) {
                $this->onTimeout();
                return;
            }
    
            self::dispatch(
                $this->transactionId,
                $this->orderId,
                $this->attempts + 1,
            )->delay(now()->addSeconds(self::POLL_INTERVAL));
        }
    
        private function onSuccess(array $data): void
        {
            // findOrFail: abort loudly if the order vanished between polls
            $order = Order::findOrFail($this->orderId);
            $order->markAsPaid($data['reference']);
            // Continue your workflow...
        }
    
        private function onFailure(string $error): void
        {
            Log::error("Payment failed: {$error}", [
                'transaction_id' => $this->transactionId,
            ]);
        }
    
        private function onTimeout(): void
        {
            Log::warning('Payment polling timed out after ' . self::MAX_ATTEMPTS . ' attempts', [
                'transaction_id' => $this->transactionId,
            ]);
        }
    }

    Why This Works

    The job re-dispatches itself with a delay, creating a non-blocking polling loop. Your queue worker handles the timing. Your controller returns immediately. And you get clean callback methods (onSuccess, onFailure, onTimeout) for each outcome.

    The key insight: the job IS the polling loop. Each dispatch is one iteration. The delay between dispatches is your poll interval. And the max-attempts cap gives you a clean exit.

  • Interface Naming: Follow Your Parent Verb Pattern

    Yesterday I was refactoring some code that had a messy inheritance hierarchy. A base class had a method called allowsRefund(), and a child interface was named SupportsPartialRefund.

    Read that out loud: “This class allows refund, and supports partial refund.” Two different verbs for the same concept. It’s subtle, but it makes the codebase harder to scan.

    The Fix

    Rename the interface to match the parent’s verb:

    // ❌ Mixed verbs
    class PaymentGateway
    {
        public function allowsRefund(): bool { ... }
    }
    
    interface SupportsPartialRefund
    {
        public function getPartialRefundLimit(): Money;
    }
    
    // ✅ Consistent verb pattern
    class PaymentGateway
    {
        public function allowsRefund(): bool { ... }
    }
    
    interface AllowsPartialRefund
    {
        public function getPartialRefundLimit(): Money;
    }

    Why This Matters

    When you’re scanning a class that implements multiple interfaces, consistent naming lets you instantly understand the hierarchy:

    class StripeGateway extends PaymentGateway
        implements AllowsPartialRefund, AllowsRecurringCharge
    {
        // The "Allows" prefix immediately tells you
        // these extend the parent's capability pattern
    }

    If one used Supports and another used Allows, you’d waste mental energy wondering if there’s a meaningful difference. (There isn’t.)

    The Rule

    When naming an interface that extends a parent class’s concept, use the same verb the parent uses. If the parent says allows, the interface says Allows. If the parent says supports, the interface says Supports. Don’t mix.

    Small naming consistency compounds across a large codebase.

  • Ollama num_ctx: Why Setting It Higher Than the Model Supports Backfires

    When running local LLMs with Ollama, you can set num_ctx to control the context window size. But there’s a ceiling you might not expect.

    The Gotcha

    Every model has an architectural limit baked into its training. Setting num_ctx higher than that limit doesn’t give you more context — it gives you garbage output or silent truncation:

    # This model was trained with 8K context
    ollama run llama3
    >>> /set parameter num_ctx 32768
    # Result: degraded output beyond 8K, not extended context

    The num_ctx parameter allocates memory for the KV cache, but the model’s positional embeddings only know how to handle positions it saw during training.
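
    If you want a model pinned to a sane context size, a Modelfile is the place to set it. A minimal sketch — the model name, tag, and the 8K figure are illustrative, not prescriptive:

```shell
# Hypothetical Modelfile pinning num_ctx to a trained 8K limit
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER num_ctx 8192
EOF

# Then build a separately tagged variant rather than overwriting the base:
# ollama create llama3-8k -f Modelfile
```

    Baking the limit into a tagged variant means you can't accidentally over-allocate it later from the CLI or API.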

    How to Check the Real Limit

    # Check the model card (the trained context length is listed in the model info)
    ollama show llama3
    
    # Or inspect the Modelfile for an explicit num_ctx override
    ollama show llama3 --modelfile | grep num_ctx

    The model card or GGUF metadata will tell you the trained context length. That’s your actual ceiling.

    What About YaRN and RoPE Scaling?

    Some models support extended context through YaRN (Yet another RoPE extensioN) or other RoPE scaling methods. These are baked into the model weights during fine-tuning — you can’t just enable them with a flag.

    If a model advertises 128K context, it was trained or fine-tuned with RoPE scaling to handle that. If it advertises 8K, setting num_ctx=128000 won’t magically give you 128K.

    The Rule

    Match num_ctx to what the model actually supports. Going lower saves memory. Going higher wastes memory and produces worse output. Check the model card, not your wishful thinking.

  • Check If the Binary Exists Before Installing It in Docker

    When you’re setting up a self-hosted service in Docker, you might reach for apt-get install ffmpeg in your Dockerfile. But many Docker images already ship with it — and installing a second copy just adds build time and image bloat.

    The Pattern: Check Before Installing

    Before adding any system dependency to your Dockerfile, check if the base image already includes it:

    # Inside a running container
    which ffmpeg
    ffmpeg -version
    
    # Or in Dockerfile (group with parentheses — without them, shell precedence
    # means the install runs even when ffmpeg already exists)
    RUN which ffmpeg || (apt-get update && apt-get install -y ffmpeg)

    Many application images (Nextcloud, Jellyfin, various media servers) bundle ffmpeg because they need it for thumbnail generation or transcoding. Installing it again is wasteful at best and can cause version conflicts at worst.

    The Broader Lesson

    This applies to any binary dependency:

    • ImageMagick — often pre-installed in PHP images
    • curl/wget — present in most base images
    • ffprobe — ships alongside ffmpeg
    • ghostscript — common in document processing images

    The habit: which <binary> first, apt-get install second. Your Docker builds will be faster and your images smaller.
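
    A small audit loop makes the habit a one-command check. The binary list here is just an example set; swap in whatever your service needs. Note `command -v` instead of `which` — it's POSIX-specified, while `which` itself may be missing from minimal images:

```shell
# Report which of a candidate list of binaries the image already ships
for bin in ffmpeg ffprobe convert curl wget gs; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "present: $bin"
  else
    echo "missing: $bin"
  fi
done
```

    Run it once inside the base image before writing a single apt-get line.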

    Conditional Install in Dockerfile

    RUN if ! which ffmpeg > /dev/null 2>&1; then \
          apt-get update && apt-get install -y --no-install-recommends ffmpeg \
          && rm -rf /var/lib/apt/lists/*; \
        fi

    One line of defense against unnecessary bloat. Check before you install.

  • Supervisord Running as Root in Docker Is Actually Fine

    You see supervisord running as root inside your Docker container and your security instinct screams. But hold on — this is actually the correct pattern.

    The Misconception

    Running processes as root in containers is generally bad practice. But supervisord is a process manager — it needs root to do its job properly:

    • It spawns and manages child processes
    • It needs to set the user directive on each child
    • It handles signal forwarding, restarts, and logging

    The Key Insight

    Supervisord runs as root, but your actual application processes don’t have to. Each program block can specify its own user:

    [program:app]
    command=/usr/bin/php artisan serve
    user=www-data
    autostart=true
    autorestart=true
    
    [program:worker]
    command=/usr/bin/php artisan queue:work
    user=www-data
    autostart=true
    autorestart=true

    The parent (supervisord) runs privileged so it can manage the children. The children run unprivileged. This is the same model as systemd, init, or any other process manager on Linux.

    When to Worry

    If your supervisord config has programs running without a user directive, they inherit root. That’s the actual security risk — not supervisord itself. Always explicitly set user= on every program block.

    The pattern is simple: privileged parent, unprivileged children. Don’t fight it — just make sure the children are locked down.

  • Let the Codebase Vote: grep for Dominant Patterns

    When you join a large codebase and need to figure out the “right” way to do something, don’t guess. Don’t check the docs. Let the codebase vote.

    The Scenario

    You’re working in a Laravel app and need to get the current locale. Quick, which one do you use?

    // Option A
    App::getLocale()
    
    // Option B
    app()->getLocale()
    
    // Option C
    config('app.locale')

    They all work. But in a codebase with 200+ files touching locales, consistency matters more than personal preference.

    grep Is Your Democracy

    grep -r "App::getLocale" --include="*.php" | wc -l
    # 96
    
    grep -r "app()->getLocale" --include="*.php" | wc -l
    # 19
    
    grep -r "config('app.locale')" --include="*.php" | wc -l
    # 3

    The vote is 96-19-3. App::getLocale() wins by a landslide. That’s what you use. Discussion over.

    Why This Works

    The dominant pattern in a mature codebase exists for a reason. Maybe it was a conscious decision. Maybe it evolved naturally. Either way, it represents what the team actually does, not what someone thinks they should do.

    Following the majority means:

    • Your code looks like the rest of the codebase
    • grep and find-replace operations work consistently
    • New developers see one pattern, not three
    • Code reviews go faster because there’s nothing to debate
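
    The tally is easy to wrap in a tiny helper so the whole vote is one command. The function name vote is my own invention; adjust the --include glob to your stack:

```shell
# Hypothetical helper: count matches for each candidate pattern, highest first
vote() {
  for pattern in "$@"; do
    count=$(grep -r "$pattern" --include="*.php" . 2>/dev/null | wc -l)
    printf '%6d  %s\n' "$count" "$pattern"
  done | sort -rn
}

# Usage:
# vote "App::getLocale" "app()->getLocale" "config('app.locale')"
```

    The top line of the output is your answer.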

    More Examples

    This technique works for any “multiple valid approaches” question:

    # String helpers: str() vs Str:: vs helper
    grep -r "Str::" --include="*.php" | wc -l
    grep -r "str_" --include="*.php" | wc -l
    
    # Config access: config() vs Config::
    grep -r "config(" --include="*.php" | wc -l
    grep -r "Config::" --include="*.php" | wc -l
    
    # Route definitions: Route::get vs Route::resource
    grep -r "Route::get" routes/ | wc -l
    grep -r "Route::resource" routes/ | wc -l

    When to Override the Vote

    The only time you should go against the majority is when the dominant pattern is actively harmful — deprecated functions, security issues, or patterns that cause real bugs. In those cases, file a tech debt ticket and migrate everything at once. Don’t create a third pattern.

  • GNU Parallel for Real-Time Log Prefixing in Docker

    Running multiple background processes in a Docker container and trying to figure out which one is logging what? If you’re piping through sed for prefixes, stop. There’s a one-liner that handles this properly.

    The Problem

    You have a container running two webpack watchers (or any two long-running processes). The logs are interleaved and you can’t tell which output came from where:

    npm run hot &
    npm run watch:admin &
    wait

    Every line looks the same in docker logs. When something breaks, good luck figuring out which process errored.

    The sed Approach (Don’t Do This)

    First instinct is usually piping through sed:

    npm run hot 2>&1 | sed 's/^/[HOT] /' &
    npm run watch:admin 2>&1 | sed 's/^/[ADMIN] /' &
    wait

    This looks clean but has a nasty buffering problem. When stdout is a pipe instead of a terminal, most programs (sed included) switch from line buffering to block buffering, typically in chunks of 4 KB or more. You won't see lines in real time: nothing for minutes, then a wall of prefixed text all at once. Not useful for watching builds.
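
    For completeness: coreutils stdbuf can force line buffering onto the sed stage, which patches the symptom if you're stuck with this setup — though the Parallel approach below is cleaner:

```shell
# stdbuf -oL makes sed flush each line instead of buffering in blocks
printf 'compiled ok\nrebuilding...\n' | stdbuf -oL sed 's/^/[HOT] /'
```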

    GNU Parallel to the Rescue

    GNU Parallel has two flags that solve this perfectly:

    parallel --tag --line-buffer ::: "npm run hot" "npm run watch:admin"

    --tag prefixes every output line with the command that produced it. --line-buffer flushes output line-by-line instead of waiting for the process to finish. Together, you get real-time prefixed output with zero buffering issues:

    npm run hot        webpack compiled successfully in 2847 ms
    npm run watch:admin  webpack compiled successfully in 1923 ms
    npm run hot        webpack compiled successfully in 412 ms

    In Docker

    Your Dockerfile needs GNU Parallel installed (apt-get install parallel or apk add parallel), then your compose command becomes:

    command:
      - /bin/bash
      - -c
      - |
        npm install
        parallel --tag --line-buffer ::: "npm run hot" "npm run watch:admin"

    No background processes, no wait, no buffering hacks. Parallel manages both processes; add --halt now,fail=1 if you want it to kill the survivor and exit as soon as either watcher dies.

    Why --line-buffer Matters

    Without --line-buffer, GNU Parallel groups output by job — it waits until a job finishes before showing its output. That’s fine for batch processing but terrible for long-running watchers. The --line-buffer flag trades a tiny bit of CPU for real-time line-by-line output with proper prefixing. For dev tooling, that tradeoff is always worth it.

  • Ollama Model Tags: Don’t Overwrite Your Base Models

    Here’s a trap that’ll bite you exactly once with Ollama: if you run ollama create using the same name as an existing model, it overwrites the original. No warning, no confirmation, just gone.

    The Scenario

    Say you pulled mistral:7b-instruct and want to customize it with a new system prompt or different parameters. You write a Modelfile:

    FROM mistral:7b-instruct
    SYSTEM "You are a code reviewer..."
    PARAMETER temperature 0.3

    Then you run:

    ollama create mistral:7b-instruct -f Modelfile

    Congratulations, you just replaced your base model. The original mistral:7b-instruct is now your customized version. Want the vanilla one back? Time to re-pull it.

    The Fix

    Always use a distinct tag name for your customizations:

    # Copy the base first (shares blobs, no extra disk)
    ollama cp mistral:7b-instruct mistral:7b-instruct-base
    
    # Create your custom version with a NEW name
    ollama create mistral:7b-code-reviewer -f Modelfile

    The ollama cp command shares the underlying blobs with the original, so it doesn’t double your disk usage. It’s basically free insurance.

    Naming Convention That Works

    I’ve settled on this pattern: base-model:size-purpose

    ollama list
    # mistral:7b-instruct          4.4 GB  (original)
    # mistral:7b-code-reviewer     4.4 GB  (custom)
    # mistral:7b-instruct-base     4.4 GB  (safety copy)
    # qwen2.5-coder:7b             4.7 GB  (original)

    The sizes look alarming but remember: copies share blobs. Actual disk usage is much lower than the sum suggests.

    Why This Matters

    When you’re iterating on Modelfile configs (tweaking temperature, system prompts, context length), you’ll run ollama create dozens of times. One slip with the wrong name and you’re re-downloading 4+ GB. Use distinct tags from the start and you’ll never have that problem.

  • Laravel Mix Parallel Builds Break mix-manifest.json

    Laravel Mix creates a mix-manifest.json file that maps your asset filenames to their versioned URLs. It’s the bridge between mix('build/js/app.js') in Blade and the actual hashed filename on disk. And if you run multiple Mix builds in parallel, they’ll destroy each other.

    The Problem

    Imagine you have a monolith with multiple frontends — a public marketing site, a documentation site, and an internal reporting tool. Each has its own webpack config:

    {
      "scripts": {
        "dev": "npm run dev:marketing & npm run dev:docs & npm run dev:reporting",
        "dev:marketing": "mix --mix-config=webpack.mix.marketing.js",
        "dev:docs": "mix --mix-config=webpack.mix.docs.js",
        "dev:reporting": "mix --mix-config=webpack.mix.reporting.js"
      }
    }

    Run npm run dev and all three compile simultaneously. Each one writes its own mix-manifest.json to public/. The last one to finish wins — the other two manifests are gone.

    Result: mix('build/js/marketing.js') throws “The Mix manifest does not exist.” for whichever build finished first.

    The Hot File Problem

    It gets worse with hot module replacement. npm run hot creates a public/hot file containing the webpack-dev-server URL. If you run two hot reloaders simultaneously, they fight over the same file — each overwriting the other’s URL.

    Laravel’s mix() helper reads public/hot to decide whether to proxy assets through webpack-dev-server. With two builds writing to the same file, only one frontend gets HMR. The other loads stale compiled assets — or nothing at all.

    The Fix: Sequential Builds or Merge Manifest

    Option 1: Build sequentially (simple, slower)

    {
      "scripts": {
        "dev": "npm run dev:marketing && npm run dev:docs && npm run dev:reporting"
      }
    }

    Use && instead of &. Each build runs after the previous one finishes, so the builds stop racing. Depending on your Mix version, a later build may still overwrite earlier manifest entries rather than append to them; if entries go missing, pair this with the merge plugin from Option 2. Downside: 3x slower.

    Option 2: laravel-mix-merge-manifest (parallel-safe)

    npm install laravel-mix-merge-manifest --save-dev

    Add to each webpack config:

    const mix = require('laravel-mix');
    require('laravel-mix-merge-manifest');
    
    mix.js('resources/js/marketing/app.js', 'public/build/js/marketing')
       .mergeManifest();

    Now each build merges its entries into the existing manifest instead of overwriting. Parallel builds work correctly.

    Option 3: Separate containers (best for hot reload)

    For HMR, run each dev server in its own container on a different port. Each gets its own hot file context. Configure each frontend to hit its specific dev server port. More infrastructure, but zero conflicts.

    The Lesson

    When multiple processes write to the same file, the last writer wins. This isn’t a Laravel Mix bug — it’s a fundamental concurrency problem. Any time you parallelize build steps that share output files, check whether they’ll clobber each other.
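
    The race is easy to reproduce in plain shell, no Mix required. The sleeps just make the finishing order deterministic; the JSON snippets are stand-ins for manifest entries:

```shell
# Two parallel writers to one file: whoever finishes last wins
manifest=$(mktemp)
( sleep 0.2; echo '{"js/marketing.js": "/js/marketing.js?id=abc"}' > "$manifest" ) &
( sleep 0.1; echo '{"js/docs.js": "/js/docs.js?id=def"}' > "$manifest" ) &
wait
cat "$manifest"   # expected: only the marketing entry, since it writes last
```

    Swap the sleeps for real build durations and you have exactly the mix-manifest.json failure mode.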