Blog

  • Type Hint Your Closures: Let PHP Catch Data Structure Bugs For You

    Here’s a debugging scenario: you’re iterating a collection with a typed closure, and PHP throws a TypeError at runtime. Annoying? Yes. But also incredibly useful — the type hint just caught a data structure bug that would have silently corrupted your output.

    The Bug

    Imagine a collection that’s supposed to contain model objects. You write a clean, typed closure:

    $tasks = $project->tasks; // Should be Collection of Task objects
    
    $formatted = $tasks->map(fn(Task $task) => [
        'title' => $task->title,
        'status' => $task->status_code,
        'assignee' => $task->assigned_to,
    ]);

    This works perfectly — until the day a refactor changes how tasks are loaded and some entries come back as raw arrays from a join query instead of hydrated Eloquent models:

    TypeError: App\Services\ReportService::App\Services\{closure}():
    Argument #1 ($task) must be of type App\Models\Task,
    array given

    Without the Type Hint

    If you’d written fn($task) => instead, there’s no error. The closure happily processes the array, but $task->title triggers an “Attempt to read property on array” warning and evaluates to null (older PHP versions emitted only a notice). Your output has missing data, and you might not notice until a user reports a broken export.
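The difference is easy to demonstrate without a framework. In this sketch the anonymous class stands in for a hydrated Task model, and the `object` type hint stands in for the Task hint (both are illustrative, not the original code):

```php
// One hydrated "model" and one raw array that slipped in from a join query.
$task = new class { public string $title = 'Write docs'; };
$rows = [$task, ['title' => 'Fix bug']];

// Untyped closure: the array row quietly becomes null. No error, just a
// hole in the output. (?? mirrors the "silently returns null" behaviour.)
$loose = array_map(fn($t) => $t->title ?? null, $rows);
// $loose === ['Write docs', null]

// Typed closure (`object` here, since Task is hypothetical): the bad row
// is rejected the moment it reaches the closure.
try {
    array_map(fn(object $t) => $t->title, $rows);
} catch (TypeError $e) {
    // "... Argument #1 ($t) must be of type object, array given"
}
```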

    Why This Matters

    The type hint acts as a runtime assertion. It doesn’t just document what you expect — it enforces it at the exact point where wrong data enters your logic. The error message tells you precisely what went wrong: you expected a Task object but got an array.

    This is especially valuable with Laravel collections, where data can come from multiple sources:

    // Eloquent relationship — returns Task objects ✅
    $project->tasks->map(fn(Task $t) => $t->title);
    
    // Raw query result — returns stdClass or array ❌
    DB::table('tasks')->where('project_id', $id)->get()
        ->map(fn(Task $t) => $t->title); // TypeError caught immediately
    
    // Cached data — might be arrays after serialization ❌
    Cache::get("project.{$id}.tasks")
        ->map(fn(Task $t) => $t->title); // TypeError caught immediately

    The Practice

    Type hint your closure parameters in collection operations. It costs nothing in happy-path performance and saves hours of debugging when data structures change unexpectedly:

    // Instead of this:
    $items->map(fn($item) => $item->name);
    
    // Do this:
    $items->map(fn(Product $item) => $item->name);
    $items->filter(fn(Invoice $inv) => $inv->isPaid());
    $items->each(fn(User $user) => $user->notify(new WelcomeNotification));

    It’s not about being pedantic with types. It’s about turning silent data corruption into loud, immediate, debuggable errors. Let PHP’s type system do the work your unit tests might miss.

  • Readonly Classes Can’t Use Traits in PHP 8.2 — Here’s the Fix

    PHP 8.2’s readonly class modifier is great for DTOs and value objects. But the moment you try to use a trait that declares non-readonly properties, PHP throws a fatal error at compile time.

    The Scenario

    You’re building a queued event listener as a clean readonly class. You need InteractsWithQueue for retry control — release(), attempts(), delete(). Seems straightforward:

    readonly class SendWelcomeEmail implements ShouldQueue
    {
        use InteractsWithQueue; // 💥 Fatal error
    
        public function __construct(
            public string $email,
            public string $name
        ) {}
    
        public function handle(Mailer $mailer)
        {
            if ($this->attempts() > 3) {
                $this->delete();
                return;
            }
            // Send the email...
        }
    }

    This explodes with:

    Fatal error: Readonly class SendWelcomeEmail cannot use trait
    with a non-readonly property InteractsWithQueue::$job

    Why It Happens

    A readonly class enforces that every property must be readonly. The InteractsWithQueue trait declares a $job property that Laravel’s queue system needs to write to at runtime. Since trait properties are merged into the using class, PHP rejects the non-readonly $job property at compile time.

    This isn’t just InteractsWithQueue — any trait with mutable properties will trigger the same error. Common Laravel culprits include Queueable, InteractsWithQueue, and any trait that maintains internal state.

    The Fix

    Drop readonly from the class declaration and mark individual properties as readonly instead:

    class SendWelcomeEmail implements ShouldQueue
    {
        use InteractsWithQueue;
    
        public function __construct(
            public readonly string $email,  // readonly per-property
            public readonly string $name    // readonly per-property
        ) {}
    
        public function handle(Mailer $mailer)
        {
            if ($this->attempts() > 3) {
                $this->delete();
                return;
            }
            // Send the email...
        }
    }

    You get the same immutability guarantee on your data properties while allowing the trait’s mutable properties to exist alongside them.
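The mix of per-property readonly data and mutable trait state can be shown without Laravel. In this sketch, HasMutableState is a made-up stand-in for a trait like InteractsWithQueue:

```php
// Stand-in for a framework trait that needs a writable property.
trait HasMutableState {
    public ?string $job = null;
}

// Not a readonly class, so the trait is allowed; the data property
// still gets per-property immutability.
final class WelcomeEmailListener {
    use HasMutableState;

    public function __construct(
        public readonly string $email,
    ) {}
}

$listener = new WelcomeEmailListener('user@example.com');
$listener->job = 'queue-payload-1';        // fine: trait property is mutable
// $listener->email = 'other@example.com'; // Error: Cannot modify readonly property
```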

    The Rule

    Use readonly class for pure data objects that don’t use traits with mutable state — DTOs, value objects, API response models. The moment you need framework traits that maintain internal state (queue interaction, event dispatching, etc.), switch to per-property readonly declarations.

    It’s a small syntactic difference with a big compatibility impact. Check your traits before reaching for readonly class.

  • The SerializesModels Trap: Why Your Laravel Job Retries Never Run

    If you’ve ever set $tries and $backoff on a Laravel queue job and wondered why they’re completely ignored when a model goes missing, you’ve hit the SerializesModels trap.

    The Problem

    When a job uses the SerializesModels trait, Laravel stores just the model’s ID in the serialized payload. When the job gets picked up by a worker, Laravel calls firstOrFail() to restore the model before your handle() method ever runs.

    If that model was deleted between dispatch and execution, firstOrFail() throws a ModelNotFoundException. This exception happens during deserialization — outside the retry/backoff lifecycle entirely. Your carefully configured retry logic never gets a chance to run.

    class ProcessOrder implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    
        public $tries = 5;
        public $backoff = [10, 30, 60];
    
        public function __construct(
            public Order $order  // Serialized as just the ID
        ) {}
    
        public function handle()
        {
            // This never executes if the Order was deleted.
            // $tries and $backoff are completely bypassed.
        }
    }

    Why It Happens

    The model restoration happens in the trait’s restoreModel() method, which uses firstOrFail(). It runs inside PHP’s __unserialize() / unserialize() pipeline — way before Laravel’s retry middleware kicks in. The job fails immediately with zero retries.

    The Fix: A Nullable Models Trait

    Create a custom trait that returns null instead of throwing when a model is missing:

    use Illuminate\Database\Eloquent\ModelNotFoundException;

    trait SerializesNullableModels
    {
        use SerializesModels {
            SerializesModels::__serialize as parentSerialize;
            SerializesModels::__unserialize as parentUnserialize;
        }

        public function __unserialize(array $values): void
        {
            try {
                $this->parentUnserialize($values);
            } catch (ModelNotFoundException $e) {
                // Sketch: null out whatever the failed restore left
                // uninitialized. Model properties must be declared nullable
                // (e.g. public ?Order $order) for this to work.
                foreach ((new \ReflectionObject($this))->getProperties() as $property) {
                    if (! $property->isStatic() && ! $property->isInitialized($this)) {
                        $property->setValue($this, null);
                    }
                }
            }
        }
    }

    Then in your handle() method (with the model property declared nullable, e.g. public ?Order $order):

    public function handle()
    {
        if ($this->order === null) {
            // Model was deleted — release for retry or handle gracefully
            $this->release(30);
            return;
        }
    
        // Normal processing...
    }

    The Takeaway

    SerializesModels is convenient, but it creates a blind spot in your retry logic. If there’s any chance your model might be deleted between dispatch and execution — webhook jobs, async processing after user actions, anything with eventual consistency — either use the nullable trait pattern or pass the ID manually and look it up yourself in handle().

    Your $tries config only works when the exception happens inside handle(). Everything before that is a different world.

  • When Queued Event Listeners Silently Die: The ShouldQueue Trap

    You dispatch an event from inside a queued job. The event has a listener that implements ShouldQueue. Your job completes successfully, but the listener never executes. No exception. No failed job entry. No log. It just… doesn’t run.

    This is one of Laravel’s most frustrating silent failures.

    The Setup

    You have a workflow: when a user account is deactivated, trigger a data export automatically. The architecture looks clean:

    // In your DeactivationHandler (queued job)
    class HandleAccountDeactivation implements ShouldQueue
    {
        public function handle(): void
        {
            // Revoke access tokens for the account
            $this->revokeAccessTokens($this->account);
    
            // Dispatch event for downstream processing
            event(new AccountDeactivated($this->account));
        }
    }
    
    // The listener
    class TriggerDataExport implements ShouldQueue
    {
        public function handle(AccountDeactivated $event): void
        {
            // This never runs!
            $this->exportService->generate($event->account);
        }
    }

    Why It Fails Silently

    When you dispatch an event from within a queued job, and the listener also implements ShouldQueue, the listener gets pushed onto the queue as a new job. But here’s the catch: if the dispatching job’s database transaction hasn’t committed yet (or if the queue connection has issues during nested dispatching), the listener job can fail before it even starts — and this failure happens at the queue infrastructure level, not in your application code.

    A try-catch around event() won’t help. The event dispatch itself succeeds — it pushes a message onto the queue. The failure happens later, when the queue worker tries to process the listener job.

    The Fix: Make Critical Listeners Synchronous

    For listeners that are part of a critical workflow — where silent failure is unacceptable — remove ShouldQueue:

    // Make it synchronous — runs in the same process as the dispatcher
    class TriggerDataExport // No ShouldQueue
    {
        public function handle(AccountDeactivated $event): void
        {
            try {
                $this->exportService->generate($event->account);
            } catch (\Throwable $e) {
                // Now you CAN catch failures
                $event->account->addNote(
                    "Automatic data export failed: {$e->getMessage()}"
                );
                $event->account->flagForReview('compliance');
            }
        }
    }

    Alternative: Direct Method Calls for Critical Paths

    If the listener exists solely because of one dispatcher, skip events entirely for the critical path:

    class HandleAccountDeactivation implements ShouldQueue
    {
        public function handle(DataExportService $exportService): void
        {
            $this->revokeAccessTokens($this->account);
    
            // Direct call instead of event dispatch
            try {
                $exportService->generateComplianceExport($this->account);
            } catch (\Throwable $e) {
                $this->account->addNote("Automatic data export failed: {$e->getMessage()}");
                $this->account->flagForReview('compliance');
            }
        }
    }

    When Events Are Still Right

    Events shine when:

    • Multiple independent listeners react to the same event
    • The listener’s failure doesn’t affect the main workflow
    • You genuinely need decoupling (different bounded contexts)

    But when a queued job dispatches an event to a queued listener for a single critical operation? That’s a fragile chain with a silent failure mode. Make it synchronous or call the service directly.

    The rule of thumb: if the listener failing means the workflow is broken, don’t put a queue boundary between them.

  • Circuit Breakers: Stop Hammering Dead APIs From Your Queue Workers

    Your queue workers are burning through jobs at full speed, retrying a third-party API endpoint that’s been down for three hours. Every retry fails. Every failure generates a Sentry alert. You’re 55,000 errors deep, your queue is backed up, and the external service doesn’t care how many times you knock.

    This is what happens when you don’t have a circuit breaker.

    The Pattern

    A circuit breaker sits between your application and an unreliable external service. It tracks failures and, after a threshold, stops sending requests entirely for a cooldown period. The metaphor comes from electrical engineering — when there’s too much current, the breaker trips to prevent damage.

    Three states:

    • Closed — everything works normally, requests flow through
    • Open — too many failures, all requests short-circuit immediately (return error without calling the API)
    • Half-Open — after cooldown, let one request through to test if the service recovered

    A Simple Implementation

    class CircuitBreaker
    {
        public function __construct(
            private string $service,
            private int $threshold = 5,
            private int $cooldownSeconds = 300,
        ) {}
    
        public function isAvailable(): bool
        {
            $failures = Cache::get("circuit:{$this->service}:failures", 0);
            $openedAt = Cache::get("circuit:{$this->service}:opened_at");
    
            if ($failures < $this->threshold) {
                return true; // Closed state
            }
    
        if ($openedAt && $openedAt->diffInSeconds(now()) > $this->cooldownSeconds) {
            return true; // Half-open: let a trial request through
        }
    
            return false; // Open: reject immediately
        }
    
        public function recordFailure(): void
        {
            $failures = Cache::increment("circuit:{$this->service}:failures");
    
            if ($failures >= $this->threshold) {
                Cache::put("circuit:{$this->service}:opened_at", now(), $this->cooldownSeconds * 2);
            }
        }
    
        public function recordSuccess(): void
        {
            Cache::forget("circuit:{$this->service}:failures");
            Cache::forget("circuit:{$this->service}:opened_at");
        }
    }

    Using It in a Queue Job

    class FetchWeatherDataJob implements ShouldQueue
    {
        public function handle(WeatherApiClient $client): void
        {
            $breaker = new CircuitBreaker('weather-api', threshold: 5, cooldownSeconds: 300);
    
            if (! $breaker->isAvailable()) {
                // Release back to queue for later
                $this->release(60);
                return;
            }
    
            try {
                $response = $client->getConditions($this->stationId);
                $breaker->recordSuccess();
                $this->storeWeatherData($response);
            } catch (ApiException $e) {
                $breaker->recordFailure();
                throw $e; // Let Laravel's retry handle it
            }
        }
    }

    Pair It With Exponential Backoff

    Circuit breakers prevent hammering. Exponential backoff spaces out retries. Use both:

    class FetchWeatherDataJob implements ShouldQueue
    {
        public int $tries = 5;
    
        public function backoff(): array
        {
            return [30, 60, 120, 300, 600]; // 30s, 1m, 2m, 5m, 10m
        }
    }

    When You Need This

    If your application integrates with external APIs that can go down — email verification services, geocoding providers, analytics feeds — you need circuit breakers. The symptoms that tell you it’s time:

    • Thousands of identical errors in your error tracker from one endpoint
    • Queue workers stuck retrying failed jobs instead of processing good ones
    • Your application slowing down because every request waits for a timeout
    • Rate limit responses (HTTP 429) from the external service

    Without a circuit breaker, a flaky external service doesn’t just affect itself — it takes your entire queue infrastructure down with it. Five minutes of setup saves hours of firefighting.

  • The Git Staging Trap: When Your Commit References Code That Doesn’t Exist Yet

    You make a change in Client.php that calls a new method getRemainingTtl(). You stage Client.php, write a clean commit message, hit commit. Everything looks fine.

    Except getRemainingTtl() lives in AuthSession.php — and you forgot to stage that file.

    The Problem

    When you selectively stage files with git add, Git doesn’t check whether your staged code actually works together. It just commits whatever’s in the staging area. If File A calls a method defined in File B, and you only stage File A, your commit is broken — even though your working directory is fine.

    git add src/Client.php
    git commit -m "Add cache TTL awareness to client"
    # AuthSession.php with getRemainingTtl() is NOT in this commit

    This produces a commit in which Client.php references a method that doesn’t exist yet. If someone checks out this specific commit — or if CI runs against it — it breaks.

    Why It Happens

    Selective staging is a good practice. Small, focused commits make history readable. But the trap is that your working directory always has both files, so you never notice the gap. Your editor doesn’t complain. Your local tests pass. Everything works — until it doesn’t.

    The Fix: Review the Diff Before Committing

    Always check what you’re actually committing:

    # See exactly what's staged
    git diff --cached
    
    # Or see the file list
    git diff --cached --name-only

    When you see Client.php calling $this->session->getRemainingTtl(), ask yourself: “Is the file that defines this method also staged?”

    A Better Habit

    Before committing, scan the staged diff for:

    • New method calls — is the definition staged too?
    • New imports/use statements — is the imported class staged?
    • New interface implementations — is the interface file staged?
    • Constructor changes — are the new dependencies staged?
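You can also make the check mechanical: git stash with --keep-index parks everything that is not staged, so whatever test or lint you run afterwards sees exactly the commit-to-be. The sketch below reproduces the scenario in a throwaway repo (file names mirror the example above):

```shell
# Demo in a throwaway repo: stage one change, "forget" another file,
# then inspect exactly what would be committed.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email dev@example.com && git config user.name dev

echo 'client code' > Client.php
git add Client.php && git commit -qm "initial"

echo 'calls getRemainingTtl()' >> Client.php          # staged
git add Client.php
echo 'defines getRemainingTtl()' > AuthSession.php    # forgotten, unstaged

# Park everything that is NOT staged, including untracked files...
git stash push -q --keep-index --include-untracked -m "unstaged work"

# ...so the working tree now matches the commit-to-be. Run your test
# suite or linter here; it will see the broken staged snapshot.
ls    # AuthSession.php is missing: this commit would be broken

git stash pop -q    # restore the unstaged work
```

Note that after the pop, your previously staged changes may show as unstaged again; re-add them before committing.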

    If you catch it before pushing, it’s a 5-second fix: git add AuthSession.php && git commit --amend. If you catch it after CI fails, it’s a new commit plus an embarrassing red build.

    Selective staging is powerful, but Git won’t hold your hand. Review the diff, not just the file list.

  • Eloquent Relationship Caching: Why attach() Leaves Your Model Stale

    You call attach() on a relationship, then immediately check that relationship in the next line. It returns empty. The data is in the database, but your model doesn’t know about it.

    The Problem

    Eloquent caches loaded relationships in memory. Once you access a relationship, Laravel stores the result on the model instance. Subsequent accesses return the cached version — even if the underlying data has changed.

    // Load the relationship (caches in memory)
    $article->assignedCategory;  // null
    
    // Update the pivot table
    $newCategory->articles()->attach($article);
    
    // This still returns null! Cached.
    $article->assignedCategory;  // null (stale)
    

    The attach() call writes to the database, but the model’s in-memory relationship cache still holds the old value.
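The mechanism is just memoization, which you can reproduce in miniature. Record below is a made-up stand-in for a model: the first access runs the query, later accesses reuse the cached value, and only refresh() clears the cache:

```php
class Record
{
    /** @var array<string, mixed> */
    private array $relationCache = [];

    public function __construct(private \Closure $query) {}

    public function getRelation(string $name): mixed
    {
        // Like Eloquent, a relation counts as loaded even when its value
        // is null, hence array_key_exists rather than isset().
        if (! array_key_exists($name, $this->relationCache)) {
            $this->relationCache[$name] = ($this->query)($name);
        }
        return $this->relationCache[$name];
    }

    public function refresh(): void
    {
        $this->relationCache = []; // next access hits the "database" again
    }
}

// Simulated table; ArrayObject so the closure sees later writes.
$db = new \ArrayObject(['assignedCategory' => null]);
$article = new Record(fn(string $name) => $db[$name] ?? null);

$article->getRelation('assignedCategory');  // null, and now cached
$db['assignedCategory'] = 'Technology';     // attach() writes to the DB...
$article->getRelation('assignedCategory');  // ...but this is still null (stale)
$article->refresh();
$article->getRelation('assignedCategory');  // 'Technology'
```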

    The Fix: refresh()

    Call refresh() on the model to reload it and clear all cached relationships:

    $newCategory->articles()->attach($article);
    
    // Reload the model from the database
    $article->refresh();
    
    // Now it returns the fresh data
    $article->assignedCategory;  // Category { name: 'Technology' }
    

    refresh() re-fetches the model’s attributes and clears the relationship cache, so the next access hits the database.

    refresh() vs load()

    You might think load() would work:

    // This re-queries the relationship
    $article->load('assignedCategory');
    

    It does work for this specific relationship, but refresh() is more thorough. It reloads everything — attributes and all eager-loaded relationships. Use load() when you want to reload a specific relationship. Use refresh() when the model’s state might be stale across multiple attributes.

    When This Bites You

    This typically surfaces in multi-step workflows where the same model passes through several operations:

    // Step 1: Assign to initial category
    $defaultCategory->articles()->attach($article);
    
    // Step 2: Process the article
    $result = $pipeline->run($article);
    
    // Step 3: On failure, reassign to a different category
    if (!$result->success) {
        $defaultCategory->articles()->detach($article);
        $reviewCategory->articles()->attach($article);
    
        $article->refresh();  // Critical! Without this, downstream code sees stale category.
    
        // Step 4: Log the transition
        $transition = new CategoryReassigned($article, $reviewCategory, $defaultCategory);
        $logger->record($transition);
    }
    

    Without the refresh(), any code that checks $article->assignedCategory after step 3 will still see the old category (or null). Event handlers, logging, validation — all get stale data.

    The Pattern

    Any time you modify a model’s relationships via attach(), detach(), sync(), or toggle(), and then need to read that relationship in the same request:

    // Write
    $model->relationship()->attach($relatedId);
    
    // Refresh
    $model->refresh();
    
    // Read (now safe)
    $model->relationship;
    

    This is different from updating model attributes directly, where save() keeps the in-memory state in sync. Pivot table operations bypass the model’s state management entirely — they go straight to the database without telling the model.

    Small habit. Prevents a class of bugs that are genuinely confusing to debug because the database looks correct but the code behaves like the data doesn’t exist.

  • The $array = vs $array[] Gotcha: A One-Character PHP Bug

    This one-character bug caused 300 errors over two weeks and survived three separate pull requests without anyone catching it.

    // The bug
    foreach ($items as $item) {
        $notifications = new Notification($item['title'], $item['channel']);
    }
    
    // The fix
    foreach ($items as $item) {
        $notifications[] = new Notification($item['title'], $item['channel']);
    }
    

    See it? $notifications = vs $notifications[] =. One character: [].

    What Happened

    A notification import loop was building a collection of Notification objects to pass to a service. Each iteration was supposed to append to an array. Instead, it was overwriting the variable every time.

    Result: only the last Notification object survived. The service downstream expected a collection, got a single object, and threw a TypeError.
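The failure mode is easy to reproduce in isolation (simplified here to strings instead of Notification objects):

```php
$items = [
    ['title' => 'Build passed'],
    ['title' => 'Deploy done'],
    ['title' => 'Alert cleared'],
];

$overwritten = null;
foreach ($items as $item) {
    $overwritten = $item['title'];   // plain assignment: previous value lost
}
// $overwritten === 'Alert cleared' (only the last iteration survives)

$appended = [];
foreach ($items as $item) {
    $appended[] = $item['title'];    // append: every iteration survives
}
// $appended === ['Build passed', 'Deploy done', 'Alert cleared']
```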

    Why It Survived Three PRs

    Here’s the interesting part. This bug was introduced in the initial implementation and persisted through two subsequent refactors:

    1. PR #1 — Initial feature implementation. The bug shipped with it.
    2. PR #2 — Added dynamic ID logic. Touched nearby code but didn’t notice the assignment.
    3. PR #3 — Added a nested foreach around the existing loop. Reviewers focused on the new outer logic and missed the inner loop body.

    Each PR added complexity around the bug without ever looking at the bug. The nested loop actually made it harder to spot because there was more code to review.

    How It Was Caught

    A type-hinted closure caught it:

    $collection->map(function (Notification $notification) {
        // TypeError: expected Notification, got array
    });
    

    PHP’s type hint acted as a runtime validator. Without it, the code would have silently produced wrong data instead of throwing an error.

    The Debugging Workflow

    Once the error surfaced, git blame told the full story:

    # Find who last touched the line
    git blame path/to/Handler.php -L 366,366
    
    # Check the original PR
    git show abc123
    
    # Trace backwards through each change
    git log --follow -p -- path/to/Handler.php
    

    This revealed the bug was there from day one. Not a regression — an original sin.

    The Reproduce-Before-Fix Rule

    Before applying the fix, run the failing code to confirm you can reproduce the error. Then apply the fix and run it again. Two runs:

    1. Without fix: Error reproduced. Good, you’re testing the right thing.
    2. With fix: No error. Fix confirmed.

    If you can’t reproduce the bug, you can’t be sure your fix actually addresses it.

    Lessons

    • Type hints are free runtime validators. They catch data structure bugs that unit tests might miss.
    • Code review has blind spots when nested loops add visual complexity. Reviewers naturally focus on new code.
    • git blame is archaeology. Don’t just find who to blame — trace the full history to understand why the bug persisted.
    • Always reproduce before fixing. Two runs: one to confirm the bug, one to confirm the fix.

  • afterCommit(): Dispatch Queue Jobs Only After the Transaction Commits

    You dispatch a queue job right after saving a model. The job fires, tries to look up the record… and it’s not there. The transaction hasn’t committed yet.

    This is one of those bugs that works fine locally (where your database has near-zero latency) but bites you in production with read replicas or under load.

    The Problem

    Consider this common pattern:

    DB::transaction(function () use ($report) {
        $report->status = 'generating';
        $report->save();
    
        GenerateReport::dispatch($report);
    });
    

    The job gets dispatched inside the transaction. Depending on your queue driver, the job might start executing before the transaction commits. The job tries to find the report with status = 'generating', but the database still shows the old state.

    The Fix: afterCommit()

    Laravel provides afterCommit() on dispatchable jobs to ensure the job only hits the queue after the wrapping transaction commits:

    DB::transaction(function () use ($report) {
        $report->status = 'generating';
        $report->save();
    
        GenerateReport::dispatch($report)->afterCommit();
    });
    

    Now the job waits until the transaction successfully commits before being pushed to the queue. If the transaction rolls back, the job never dispatches at all.
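The underlying idea is simple enough to sketch without Laravel: side effects registered during a transaction are held back until commit succeeds, and dropped on rollback. TinyTransactionManager below is a made-up miniature, not Laravel's implementation:

```php
class TinyTransactionManager
{
    /** @var callable[] */
    private array $deferred = [];

    public function afterCommit(callable $sideEffect): void
    {
        $this->deferred[] = $sideEffect;
    }

    public function transaction(callable $work): void
    {
        try {
            $work($this);
        } catch (\Throwable $e) {
            $this->deferred = [];   // rollback: deferred jobs never run
            throw $e;
        }
        // Commit succeeded: now it is safe to "dispatch".
        foreach ($this->deferred as $sideEffect) {
            $sideEffect();
        }
        $this->deferred = [];
    }
}

$dispatched = [];
$tx = new TinyTransactionManager();
$tx->transaction(function ($tx) use (&$dispatched) {
    // ...save the report here...
    $tx->afterCommit(function () use (&$dispatched) {
        $dispatched[] = 'GenerateReport';   // runs only after "commit"
    });
});
// $dispatched === ['GenerateReport']
```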

    Setting It Globally

    If you want a particular job to always behave this way, add the property to the job class:

    class GenerateReport implements ShouldQueue
    {
        use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
    
        public $afterCommit = true;
    
        // ...
    }
    

    Or set it in your queue.php config for the entire connection:

    // config/queue.php
    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'default',
        'after_commit' => true,  // All jobs wait for commit
    ],
    

    When You Actually Need This

    This pattern is critical when:

    • You use read replicas — the replica might not have the committed data yet
    • Your job uses SerializesModels — it re-fetches the model from the database when deserializing
    • You dispatch jobs that depend on the data you just saved in the same transaction
    • You’re dispatching webhook notifications — the external system might call back before your DB commits

    The Gotcha

    If there’s no wrapping transaction, afterCommit() dispatches immediately — same as without it. It only delays when there’s an active transaction to wait for.

    This is a good thing. It means you can set $afterCommit = true on all your jobs without worrying about jobs that are dispatched outside transactions.

    One of those small changes that prevents a whole class of race condition bugs in production.

  • Feature Branch Subdomains: Every PR Gets Its Own URL

    Staging environments are great until you have three developers all waiting to test on the same one. Feature branch subdomains solve this: every branch gets its own isolated URL like feature-auth-refactor.staging.example.com.

    How It Works

    The setup has three parts:

    1. Wildcard DNS — Point *.staging.example.com to your staging server
    2. Wildcard SSL — One certificate covers all subdomains
    3. Dynamic Nginx config — Route each subdomain to the right container

    The DNS

    Add a single wildcard A record:

    *.staging.example.com  A  203.0.113.50

    Every subdomain now resolves to your staging server. No DNS changes needed per branch.

    The SSL Certificate

    Use Let’s Encrypt with DNS validation for wildcard certs:

    certbot certonly \
      --dns-cloudflare \
      --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
      -d "*.staging.example.com" \
      -d "staging.example.com"

    The Nginx Config

    Extract the subdomain and proxy to the matching container:

    server {
        listen 443 ssl;
        server_name ~^(?<branch>.+)\.staging\.example\.com$;
    
        ssl_certificate     /etc/letsencrypt/live/staging.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/staging.example.com/privkey.pem;
    
        location / {
            resolver 127.0.0.11 valid=10s;
            proxy_pass http://$branch:80;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

    The regex capture (?<branch>.+) extracts the subdomain. If your CI names Docker containers after the branch slug, Nginx routes directly to them.

    The CI Pipeline

    In your CI config, deploy each branch as a named container:

    deploy_review:
      stage: deploy
      script:
        - docker compose -p "$CI_COMMIT_REF_SLUG" up -d
      environment:
        name: review/$CI_COMMIT_REF_NAME
        url: https://$CI_COMMIT_REF_SLUG.staging.example.com
        on_stop: stop_review
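The on_stop: stop_review reference needs a matching teardown job. A sketch (the compose invocation mirrors the deploy step above; adjust flags to your setup):

```yaml
stop_review:
  stage: deploy
  script:
    - docker compose -p "$CI_COMMIT_REF_SLUG" down --remove-orphans
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
```

With action: stop wired up, GitLab can run this job when the environment is stopped manually or when the branch is deleted, so stale review containers don't pile up on the staging host.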

    Why This Beats Shared Staging

    With shared staging, you get merge conflicts, “don’t deploy, I’m testing” Slack messages, and broken environments that block everyone. With per-branch subdomains, each developer (and each PR reviewer) gets their own isolated environment. QA can test three features simultaneously. No coordination needed.

    The wildcard DNS + wildcard SSL + dynamic Nginx combo means zero manual setup per branch. Push a branch, CI deploys it, URL works automatically.