Author: Daryle De Silva

  • Stop Swallowing Exceptions in PHP

    You inherit a codebase. Every API call wrapped in try-catch. Every exception swallowed. Every error returns an empty collection.

    And nobody knows when things break.

    The Anti-Pattern

    Here’s what I found in a legacy integration:

    public function getProducts(): Collection
    {
        try {
            $response = Http::get($this->apiUrl . '/products');
            return collect($response->json('data'));
        } catch (Exception $e) {
            // "Handle" the error by pretending it didn't happen
            return collect([]);
        }
    }

    What happens when the API is down? Nothing. The method returns empty. The caller thinks there are no products. The error disappears into the void.

    Weeks later, you’re debugging why users see blank pages. The real error? A 500 from the API three weeks ago that nobody noticed because the logs were clean.

    Let It Fail

    The fix is brutal: delete the try-catch.

    public function getProducts(): Collection
    {
        $response = Http::get($this->apiUrl . '/products');
        return collect($response->json('data'));
    }

    Now when the API fails, the exception bubbles up. Your error tracking (Sentry, Bugsnag, whatever) catches it. You get alerted. You fix it.

    But What About Graceful Degradation?

    If you genuinely want to fail gracefully, make it explicit. One caveat: Laravel's HTTP client doesn't throw on 4xx/5xx responses unless you call throw() on the response, and connection failures or timeouts raise ConnectionException rather than RequestException:

    public function getProducts(): Collection
    {
        try {
            $response = Http::timeout(5)
                ->get($this->apiUrl . '/products')
                ->throw();
            return collect($response->json('data'));
        } catch (ConnectionException|RequestException $e) {
            // Log it so you know it happened
            Log::warning('Product API failed', [
                'error' => $e->getMessage(),
                'url' => $this->apiUrl
            ]);
            
            // Return cached/stale data as fallback
            return Cache::get('products.fallback', collect([]));
        }
    }

    Now you’re:

    • Logging the failure (visibility)
    • Using a fallback strategy (resilience)
    • Not silently lying to callers (honesty)

    When to Catch

    Catch exceptions when you have a recovery strategy:

    • Retry logic: API hiccup? Try again.
    • Fallback data: Cache, defaults, partial results.
    • User-facing context: Transform technical errors into user messages.

    If you’re just catching to return empty, you’re hiding problems.
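To make "retry logic" concrete, here's a minimal plain-PHP retry helper. It's a sketch; in a Laravel app you'd typically reach for Http::retry() or the retry() helper instead:

```php
<?php

// Minimal retry helper (plain-PHP sketch). Retries the callable up to
// $attempts times; once retries are exhausted, the exception bubbles up.
function retryTimes(int $attempts, callable $fn, int $sleepMs = 0): mixed
{
    for ($try = 1; ; $try++) {
        try {
            return $fn();
        } catch (Exception $e) {
            if ($try >= $attempts) {
                throw $e; // no recovery left: let it bubble
            }
            usleep($sleepMs * 1000);
        }
    }
}

// Usage: a call that fails twice, then succeeds on the third attempt.
$calls = 0;
$result = retryTimes(3, function () use (&$calls) {
    if (++$calls < 3) {
        throw new RuntimeException('API hiccup');
    }
    return 'ok';
});

echo "$result after $calls calls\n"; // prints: ok after 3 calls
```

The point: the catch exists because there's a plausible recovery (try again). When recovery fails, the exception still escapes.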

    Laravel’s Global Exception Handler

    Laravel already has a system for this. Exceptions bubble to App\Exceptions\Handler, where you can:

    // app/Exceptions/Handler.php
    public function register()
    {
        $this->reportable(function (RequestException $e) {
            // Log to Sentry/Slack/etc
        });
        
        $this->renderable(function (RequestException $e) {
            return response()->json([
                'error' => 'External service unavailable'
            ], 503);
        });
    }

    Now all API failures follow the same pattern. No more scattered try-catch blocks pretending everything’s fine.

    The Takeaway

    Empty catch blocks are lies. If you can’t handle the error meaningfully, let it bubble. Your future self — the one debugging at 2 AM — will thank you.

  • Laravel Queue Jobs: Stop Using $this in Closures

    You dispatch a Laravel job. It runs fine locally. You push to production, and suddenly: CallbackNotFound errors everywhere.

    The culprit? A closure inside your job that references $this->someService.

    The Problem

    Here’s what breaks:

    class ProcessOrder implements ShouldQueue
    {
        use SerializesModels;
        
        public function __construct(
            private PaymentService $paymentService
        ) {}
        
        public function handle()
        {
            $orders = Order::pending()->get();
            
            // This closure references $this
            $orders->each(function ($order) {
                // 💥 BOOM after serialization
                $this->paymentService->charge($order);
            });
        }
    }

    When Laravel serializes the job for the queue, everything the job object carries has to survive the round trip. Constructor-injected services, and closures that capture $this and drag that state along with them, either fail to serialize at all (PHP can't serialize closures) or come back broken on the worker, and your job dies silently or throws cryptic errors.

    The Fix

    Stop referencing $this inside closures. Use app() to resolve the service fresh from the container:

    class ProcessOrder implements ShouldQueue
    {
        use SerializesModels;
        
        public function handle()
        {
            $orders = Order::pending()->get();
            
            // ✅ Resolve fresh from container
            $orders->each(function ($order) {
                app(PaymentService::class)->charge($order);
            });
        }
    }

    Now the closure is self-contained. Every time it runs, it pulls the service from the container. No serialization issues, no broken callbacks.

    Why It Works

    Queue jobs get serialized with PHP's serialize() and stored in the queue payload (database, Redis, SQS, etc.). When the worker picks up the job:

    1. Laravel deserializes the job class
    2. Runs your handle() method
    3. Executes any closures inside

    But PHP can't serialize closures, and object references captured inside them don't survive the trip. Using app() defers service resolution until the closure actually runs: on the worker, after deserialization.
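You can see the underlying limitation in plain PHP, no Laravel required:

```php
<?php

// PHP refuses to serialize closures outright. Any job state that drags a
// closure (or an object holding one) into serialize() fails like this:
$job = new stdClass();
$job->callback = function () {
    return 'charge';
};

try {
    serialize($job);
} catch (Throwable $e) {
    echo $e->getMessage() . "\n"; // prints: Serialization of 'Closure' is not allowed
}
```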

    Bonus: Same Rule for Collection Transforms

    This isn’t just a queue problem. Any time you serialize data with closures (caching collections, API responses, etc.), the same rule applies:

    // ❌ Breaks if serialized
    $cached = $items->map(fn($item) => $this->transformer->format($item));
    
    // ✅ Safe
    $cached = $items->map(fn($item) => app(Transformer::class)->format($item));

    The Takeaway

    Never use $this-> inside closures in queue jobs. Resolve services from the container with app() or pass them as closure parameters instead.

    Your queue workers will thank you.

  • Before/After API Testing: Compare Bytes, Not Just Objects

    When refactoring code that talks to external APIs, how do you know you didn’t break something subtle?

    Compare both the parsed response AND the raw wire format:

    // Test old implementation
    $oldClient = new OldApiClient($config);
    $oldParsed = $oldClient->fetchData($params);
    $oldRaw = $oldClient->getLastRawResponse();
    
    // Test new implementation  
    $newClient = new NewApiClient($config);
    $newParsed = $newClient->fetchData($params);
    $newRaw = $newClient->getLastRawResponse();
    
    // Compare both
    assert($oldParsed == $newParsed);  // Functional behavior
    assert($oldRaw === $newRaw);       // Wire-level compatibility

    Why both? Because:

    • Parsed objects verify functional correctness
    • Raw responses catch encoding issues, header differences, whitespace handling
    • Some APIs are picky about request formatting even if the parsed result looks identical

    This approach caught one case where new code was adding HTTP headers the old code didn't send, and another where namespace handling differed slightly. Both "worked", but matching the exact wire format makes deployment safer.

    When refactoring integrations, test the bytes on the wire, not just the objects in memory.
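A tiny plain-PHP illustration of why both comparisons matter: two responses can parse identically while still differing on the wire.

```php
<?php

// Two JSON payloads: same parsed data, different bytes (extra spaces).
$oldRaw = '{"id":1,"name":"Widget"}';
$newRaw = '{"id": 1, "name": "Widget"}';

$parsedEqual = json_decode($oldRaw, true) === json_decode($newRaw, true);
$rawEqual    = $oldRaw === $newRaw;

var_dump($parsedEqual); // bool(true): functional behavior matches
var_dump($rawEqual);    // bool(false): wire format differs
```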

  • Keep DTOs Thin — Move Logic to Services

    When building API integrations, it’s tempting to put helper methods on your DTOs. I learned this creates problems.

    Started with “fat” DTOs:

    class Product {
        public function getUniqueVariants() { ... }
        public function isVariantAvailable($type) { ... }
        public function getDisplayType() { ... }
    }
    

    Seemed convenient — call $product->getUniqueVariants() anywhere. But issues emerged:

    1. Serialization problems: DTOs with methods are harder to cache/serialize
    2. Testing complexity: need to mock entire DTO structures just to test one method
    3. Coupling: business logic tied to the external API’s data structure

    The fix — move all logic to a service class:

    class ApiClient {
        public function getUniqueVariants(Product $product) { ... }
        public function isVariantAvailable(Product $product, string $type) { ... }
    }
    

    Now DTOs are pure data containers — just public properties or readonly classes. Business logic lives in services where it belongs.

    Feels less convenient at first (passing DTOs as arguments), but far more maintainable. DTOs stay simple and serializable. Services become testable in isolation.
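A minimal sketch of the split (hypothetical Product/ProductService names; assumes PHP 8.1+ for readonly properties):

```php
<?php

// Pure data container: readonly public properties, no behavior.
final class Product
{
    public function __construct(
        public readonly string $sku,
        /** @var string[] */
        public readonly array $variants,
    ) {}
}

// Business logic lives in a service, testable without mocking DTO internals.
final class ProductService
{
    /** @return string[] */
    public function getUniqueVariants(Product $product): array
    {
        return array_values(array_unique($product->variants));
    }
}

$product = new Product('SKU-1', ['red', 'blue', 'red']);
$unique  = (new ProductService())->getUniqueVariants($product);

print_r($unique); // red, blue
```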

  • Auto-Generate DTOs from JSON API Responses

    Working with third-party APIs means writing a lot of DTOs. So I built a quick code generator that turns JSON fixtures into typed PHP classes.

    The workflow:

    1. Capture fixtures: add HTTP middleware to dump API responses to JSON files
    2. Analyze schema: scan fixtures, merge similar structures, infer types
    3. Generate DTOs: output properly typed classes with serializer annotations

    php artisan generate:dtos \
      --output=app/Services/SDK/ThirdParty \
      --namespace="App\\Services\\SDK\\ThirdParty"
    

    The generator handles:

    • Type inference from JSON values (string/int/float/bool/null)
    • Nullability detection: if a field is missing in any fixture, it’s nullable
    • Nested objects: creates separate classes for complex types
    • Array types: uses Str::singular() to name item classes (products → Product)

    This saved hours of manual DTO writing. The key insight: API responses are the source of truth. Let the actual data structure define your types, not the other way around.
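The type-inference step can be sketched in a few lines (hypothetical inferType helper, not the actual generator):

```php
<?php

// Map a decoded JSON value to a PHP type name (sketch of step 2).
function inferType(mixed $value): string
{
    return match (true) {
        is_int($value)    => 'int',
        is_float($value)  => 'float',
        is_bool($value)   => 'bool',
        is_string($value) => 'string',
        // A JSON list stays an array; a JSON object becomes a nested DTO.
        is_array($value)  => array_is_list($value) ? 'array' : 'object',
        default           => 'mixed', // null: flag the field nullable instead
    };
}

$fixture = json_decode('{"id": 7, "price": 9.99, "name": "Widget", "active": true}', true);

foreach ($fixture as $field => $value) {
    echo "$field => " . inferType($value) . "\n";
}
// id => int, price => float, name => string, active => bool
```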

  • HTTP Response throw() Makes Your Error Handling Unreachable

    Found dead code in an API client today. Classic mistake with Laravel’s HTTP client.

    The code looked reasonable at first glance:

    $response = Http::post($url, $data)->throw();
    
    if ($response->failed()) {
        throw $response->serverError() 
            ? new ServerException() 
            : $response->toException();
    }
    

    Spot the problem? The throw() method already throws an exception if the request fails. That if ($response->failed()) block will never execute — it’s unreachable.

    Laravel’s throw() is basically:

    if ($this->failed()) {
        throw $this->toException();
    }
    return $this;
    

    So you either use throw() for automatic exception handling, OR check failed() manually. Not both.

    The fix: if you need custom exception logic, don’t use throw():

    $response = Http::post($url, $data);
    
    if ($response->failed()) {
        throw $response->serverError() 
            ? new CustomServerException() 
            : new CustomClientException();
    }
    

    Small mistake, but good reminder to understand what framework helpers actually do under the hood.

  • Laravel Collections: flip() for Instant Lookups

    Here’s a Laravel Collection performance trick: use flip() to turn expensive searches into instant lookups.

    I was looping through a date range, checking if each date existed in a collection. Using contains() works, but it’s O(n) for each check — if you have 100 dates to check against 50 items, that’s up to 5,000 comparisons.

    // Slow: O(n) per check
    $available = collect(['2026-01-25', '2026-01-26', '2026-01-27']);
    if ($available->contains($date)) { ... }
    

    The fix — flip() converts values to keys, making lookups O(1):

    // Fast: O(1) per check
    $available = collect(['2026-01-25', '2026-01-26'])
        ->flip(); // Now: ['2026-01-25' => 0, '2026-01-26' => 1]
    
    if ($available->has($date)) { ... }
    

    For small datasets, the difference is negligible. But checking hundreds or thousands of items? This tiny change saves significant processing time. Costs nothing to implement.
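The same trick works in plain PHP with array_flip() plus isset(), which is roughly what Collection::flip() and has() do under the hood:

```php
<?php

// Values become keys, so membership checks are hash lookups, not scans.
$available = ['2026-01-25', '2026-01-26', '2026-01-27'];
$lookup = array_flip($available); // ['2026-01-25' => 0, '2026-01-26' => 1, ...]

var_dump(isset($lookup['2026-01-26'])); // bool(true)
var_dump(isset($lookup['2026-02-01'])); // bool(false)
```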

  • Git Rebase: Drop Commits But Keep the Files

    Ever need to clean up your Git history but keep the files locally? Here’s a trick I used recently.

    I had a commit that added some temp markdown docs — useful for my WIP but not ready for the branch history. I wanted those files to stay local (untracked) while removing the commit entirely.

    The solution: interactive rebase with the drop command:

    git rebase -i <commit-before-the-one-you-want-to-drop>
    # In the editor, change 'pick' to 'drop' for that commit
    # Save and exit
    

    The commit disappears from history, and so do its files from your working tree, because the rebase checks out the rewritten history. To keep them around as untracked files, restore them from the dropped commit (find its hash with git reflog):

    git show <dropped-commit-hash>:path/to/file > path/to/file
    

    Be careful: this rewrites history. Use git push --force-with-lease when ready. Always stash current work first as a safety net.
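Here is the whole flow, end to end, in a throwaway repo. GIT_SEQUENCE_EDITOR scripts the "pick to drop" edit so the rebase runs non-interactively (GNU sed assumed):

```shell
set -e

# Throwaway repo with two commits: a base file, then some temp docs.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email you@example.com && git config user.name you
echo base > base.txt && git add . && git commit -qm "base"
echo wip > notes.md && git add . && git commit -qm "temp docs"

# Remember the commit we're about to drop (also findable later via git reflog).
dropped=$(git rev-parse HEAD)

# Non-interactive equivalent of changing 'pick' to 'drop' in the editor.
GIT_SEQUENCE_EDITOR="sed -i 1s/^pick/drop/" git rebase -i HEAD~1

# The rebase removed notes.md from the working tree; restore it as untracked.
git show "$dropped:notes.md" > notes.md

git status --porcelain notes.md   # prints: ?? notes.md
```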

  • When Old Code Fixes Are Actually Bugs

    Sometimes the best fix is deleting old fixes.

    While consolidating API client code, I found this “normalization” logic in production:

    $normalized = str_replace(
        ['SOAP-ENV:', 'xmlns:SOAP-ENV', 'xmlns:ns1'],
        ['soapenv:', 'xmlns:soapenv', 'xmlns'],
        $request
    );

    It looked intentional. Maybe the API was picky about namespace prefixes? Nope. It was breaking everything.

    Removing the normalization fixed server errors that had been happening silently. The API worked fine with standard SOAP envelopes — the str_replace was stripping namespace declarations and causing “undeclared prefix” errors on the server side.
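Here's a self-contained repro of that failure mode (hypothetical payload; the real one was larger):

```php
<?php

// A well-formed SOAP request: ns1 is declared, then used.
$request = '<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">'
    . '<SOAP-ENV:Body><ns1:GetProducts xmlns:ns1="http://example.com/api"/></SOAP-ENV:Body>'
    . '</SOAP-ENV:Envelope>';

$normalized = str_replace(
    ['SOAP-ENV:', 'xmlns:SOAP-ENV', 'xmlns:ns1'],
    ['soapenv:', 'xmlns:soapenv', 'xmlns'],
    $request
);

// The "normalization" rewrote the declaration (xmlns:ns1 became xmlns) but
// left the ns1: prefix in use, so the server sees an undeclared prefix.
var_dump(str_contains($normalized, 'xmlns:ns1')); // bool(false): declaration gone
var_dump(str_contains($normalized, '<ns1:'));     // bool(true): prefix still used
```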

    This kind of code survives because: (1) it was probably copy-pasted from a StackOverflow answer, (2) maybe it worked once on a different API, (3) nobody questioned it because it was already there.

    When refactoring, actually test the old code path independently. Sometimes those “necessary” workarounds are just ancient bugs in disguise.

  • Debug Nginx Proxies Layer by Layer

    When your Nginx reverse proxy isn’t working, test in this exact order:

    Step 1 — Test from the server itself:

    curl http://localhost/api

    Step 2 — Test via server IP:

    curl -k https://SERVER_IP/api

    Step 3 — Test via domain:

    curl https://yourdomain.com/api

    Why this order matters:

    • Step 1 fails → Nginx config is broken
    • Step 2 fails → TLS listener or server-block issue (-k skips certificate validation, so this isolates the HTTPS setup itself)
    • Step 3 fails → DNS, CDN, certificate, or firewall issue

    I recently debugged a proxy where Step 1 returned perfect responses but Step 3 returned just “OK” (2 bytes). That immediately told me the problem wasn’t Nginx — it was the CDN caching a broken response from an earlier deployment.

    Each layer adds complexity. Test each layer separately. When something breaks, you instantly know which layer to investigate instead of guessing.