Category: Laravel

  • Model Audit Trails with Laravel Revisionable

    When building business-critical applications, implementing model audit trails is essential for debugging and compliance. The venturecraft/revisionable package (or similar) automatically tracks all changes to Eloquent models.

    Why It Matters

    In production incidents, being able to answer “what was the previous value?” or “who changed this?” can save hours of investigation. Audit trails reveal the complete state transition history of a record, making it trivial to identify mistakes and determine the correct rollback state.

    Key Methods

    • $model->revisions()->get() – Get all revisions
    • $revision->getDiff() – See what changed
    • $revision->executor – Who made the change
    • $revision->created_at – When it happened

    Implementation

    // In your model
    use Illuminate\Database\Eloquent\Model;
    use Venturecraft\Revisionable\RevisionableTrait;
    
    class Order extends Model
    {
        use RevisionableTrait;
        
        // Also record a revision when the model is first created
        protected $revisionCreationsEnabled = true;
        
        // Skip noisy columns that change on every save
        protected $dontKeepRevisionOf = ['updated_at'];
    }
    
    // Query revision history
    $order = Order::find(123);
    $history = $order->revisions()->get()->map(fn($r) => [
        'timestamp' => $r->created_at->toDateTimeString(),
        'user' => $r->executor?->email,
        'changes' => $r->getDiff()
    ]);
    
    // Example output:
    // [
    //   'timestamp' => '2026-01-30 14:37:57',
    //   'user' => '[email protected]',
    //   'changes' => [
    //     'status' => ['old' => 'pending', 'new' => 'completed']
    //   ]
    // ]
    

    Pro Tip

    The package records the authenticated user’s ID against each revision automatically; no extra code is needed. The $revisionCreationsEnabled property shown above is what enables revisions for model creation as well, not just updates.

  • Pre-Flight API Validation: Stop Making Calls You Know Will Fail

    Making unnecessary API calls wastes rate limits, adds latency, and pollutes your error logs. When you already have the data needed to avoid an invalid request, use it. Here’s how pre-flight validation saves time and money.

    The Anti-Pattern

    Consider a sync job that fetches pricing data for products:

    public function syncProductPricing($productId, $startDate, $endDate)
    {
        $pricing = $this->cache->remember(
            "pricing:{$productId}",
            3600,
            fn() => $this->apiClient->getProductPricing($productId, $startDate, $endDate)
        );
        
        $this->updatePricing($pricing);
    }
    

    This looks clean, but what if some products are “bundles” that don’t have individual pricing? The API will return a 422 error: “this is a bundle product, use the child product IDs instead”.

    Every sync cycle, you make the invalid API call, catch the error, fail the job, and retry. Meanwhile, you’re burning rate limits on requests you know will fail.

    The Fix: Validate Before You Call

    Check your database first. You likely already have the information needed to avoid the invalid request:

    public function syncProductPricing($productId, $startDate, $endDate)
    {
        // Check if this is a bundle product (they don't have individual pricing)
        $product = Product::find($productId);
        
        if ($product->type === 'bundle') {
            Log::debug("Skipping bundle product {$productId}, syncing children instead");
            
            foreach ($product->childProducts as $child) {
                $this->syncProductPricing($child->id, $startDate, $endDate);
            }
            
            return;
        }
        
        // Safe to call API now
        $pricing = $this->cache->remember(
            "pricing:{$productId}",
            3600,
            fn() => $this->apiClient->getProductPricing($productId, $startDate, $endDate)
        );
        
        $this->updatePricing($pricing);
    }
    

    When to Validate Up Front

    Apply this pattern when:

    • The API documents specific constraints or limitations
    • You have the validation data locally (database, config, etc.)
    • The check is cheaper than making the API call
    • The error is predictable and repeatable

    The Benefits

    This defensive approach delivers multiple wins:

    • Reduced API costs – Avoid calls you know will fail
    • Better performance – No network round-trip for validation errors
    • Preserved rate limits – Don’t waste quota on invalid requests
    • Cleaner logs – Only real errors appear in monitoring
    • Faster debugging – When errors DO occur, they’re genuine issues

    Third-party API calls are expensive resources. Treat them like database queries: fetch only what you need, cache when possible, and validate constraints before making the request. Your error logs (and your rate limit quota) will thank you.

  • Graceful API Error Handling in Laravel Queue Jobs

    Queue jobs that fail repeatedly on predictable API errors waste server resources and pollute your error tracking. Here’s how to handle known validation constraints gracefully instead of letting jobs retry forever.

    The Problem

    Imagine a background sync job that queries a third-party API. The job runs every hour, but some requests hit a validation constraint that the API rejects:

    try {
        $data = $this->apiClient->getProductAvailability($productId, $startDate, $endDate);
    } catch (ApiException $e) {
        throw new SupplierException(
            sprintf('Error connecting to API: %s', $e->getMessage())
        );
    }
    

    When the API returns a 422 error for a constraint like “this product is a bundle”, the job fails, logs to Sentry, and retries. If you have thousands of products and a fraction are bundles, you’ll generate hundreds of error events per day for a completely predictable scenario.

    One real-world case generated 1,434 Sentry errors over 10 days before someone investigated.

    The Solution: Graceful Error Handling

    Instead of treating all API errors the same way, detect known validation constraints and handle them gracefully:

    try {
        $data = $this->apiClient->getProductAvailability($productId, $startDate, $endDate);
    } catch (ApiException $e) {
        // Check if this is a known validation constraint
        if ($e->getStatusCode() === 422 && str_contains($e->getMessage(), 'product is a bundle')) {
            Log::info("Skipping bundle product {$productId} - requires child product sync");
            
            // Optional: fetch child products and queue them instead
            $children = $this->apiClient->getBundleChildren($productId);
            foreach ($children as $childId) {
                SyncProductAvailability::dispatch($childId, $startDate, $endDate);
            }
            
            return; // Exit gracefully, don't retry
        }
        
        // For unexpected errors, fail loudly
        throw new SupplierException(
            sprintf('Error connecting to API: %s', $e->getMessage())
        );
    }
    

    Why This Matters

    Queue jobs that fail on predictable errors have real costs:

    • Wasted compute – Retry logic consumes server resources for errors that will never resolve
    • Error noise – Real issues get buried in thousands of false-positive alerts
    • Rate limits – Repeated invalid requests can exhaust API quotas
    • Debugging friction – When real errors occur, they’re hidden in the noise

    This is especially critical for high-volume background jobs where a single unhandled edge case can generate thousands of errors per day. By handling known constraints gracefully, you reduce noise, save retry costs, and can implement smart fallback logic like queuing child products.

    When debugging queue job failures, check Sentry event counts. If you see the same error repeating hundreds of times, it’s likely hitting a predictable constraint that should be handled explicitly instead of retried.

  • Inject Event Dispatcher for Testability in Laravel Services

    When dispatching events in Laravel services, inject the Dispatcher instead of using the event() helper function. This makes your code more testable and explicit about its dependencies.

    The Problem with event()

    Most Laravel code uses the event() helper to dispatch events:

    class OrderProcessor
    {
        public function process($order)
        {
            // ... processing logic ...
            
            event(new OrderCompleted($order));
        }
    }

    This works, but it hides a dependency: the helper reaches into the global container, so a plain unit test can’t swap out the event system without booting the framework, and nothing in the class signature tells readers that this class dispatches events.

    The Solution: Dependency Injection

    Instead, inject the Illuminate\Contracts\Events\Dispatcher contract:

    use Illuminate\Contracts\Events\Dispatcher;
    
    class OrderProcessor
    {
        public function __construct(
            private Dispatcher $dispatcher
        ) {}
        
        public function process($order)
        {
            // ... processing logic ...
            
            $this->dispatcher->dispatch(
                new OrderCompleted($order)
            );
        }
    }

    Why This Matters

    Testing becomes trivial:

    public function test_processing_dispatches_event()
    {
        $dispatcher = Mockery::mock(Dispatcher::class);
        $dispatcher->shouldReceive('dispatch')
            ->once()
            ->with(Mockery::type(OrderCompleted::class));
        
        $processor = new OrderProcessor($dispatcher);
        $processor->process($order);
    }

    Dependencies are explicit: Anyone reading the constructor knows immediately that this class dispatches events. No hidden global dependencies.

    Framework agnostic: You could swap the event system by implementing the same interface. The event() helper ties you to Laravel’s globals.
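
    Laravel’s built-in fake is a lighter option when the service resolves its dispatcher from the container (as it does when the constructor typehints the Illuminate\Contracts\Events\Dispatcher contract). A sketch, reusing the OrderProcessor and OrderCompleted classes from above:

    ```php
    use Illuminate\Support\Facades\Event;
    
    public function test_processing_dispatches_event(): void
    {
        // Swaps the container's dispatcher binding for a fake
        Event::fake();
        
        // $order prepared by your test setup
        app(OrderProcessor::class)->process($order);
        
        Event::assertDispatched(OrderCompleted::class);
    }
    ```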

    When to Use Which

    Use event() in:

    • Controllers (already framework-coupled)
    • Blade views (convenience matters more)
    • Quick scripts and seeders

    Use injected Dispatcher in:

    • Services that need testing
    • Domain logic classes
    • Anywhere you want explicit dependencies

    The pattern extends to other Laravel facades too: inject Illuminate\Contracts\Cache\Repository instead of using Cache::get(), inject Illuminate\Contracts\Queue\Queue instead of Queue::push(), and so on.
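
    For example, injecting the cache contract follows the same shape (PricingService and the cache key here are illustrative, not part of the examples above):

    ```php
    use Illuminate\Contracts\Cache\Repository as Cache;
    
    class PricingService
    {
        public function __construct(
            private Cache $cache
        ) {}
        
        public function price(int $productId): ?float
        {
            // The contract can be mocked in tests, unlike the Cache facade
            return $this->cache->get("price:{$productId}");
        }
    }
    ```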

    Your tests will thank you.

  • Combining Multiple Data Sources with Union Queries in Laravel

    The Problem

    Sometimes you need to merge data from different tables or queries into a single result set. Maybe you’re combining default configuration settings with user-specific overrides, or pulling together related records from separate tables. Writing separate queries and merging results in PHP is messy and inefficient.

    The Pattern

    Laravel’s union() method lets you combine multiple queries into one result set at the database level:

    use Illuminate\Support\Facades\DB;
    
    $defaultSettings = DB::table('system_settings')
        ->select('id', 'category', 'label')
        ->where('active', true);
    
    $userSettings = DB::table('user_preferences')
        ->select('id', 'category', 'name as label')
        ->where('user_id', $userId);
    
    $combined = $defaultSettings
        ->union($userSettings)
        ->get();
    

    Both queries must return the same number of columns with compatible data types. DB::table() queries are already plain query-builder instances; if you start from Eloquent queries, call toBase() on them first to convert them before the union.

    Real-World Example

    Building a dropdown of available templates: show built-in system templates plus user-created custom templates.

    $systemTemplates = DB::table('system_templates')
        ->select('id', 'name')
        ->selectRaw('? as source', ['System'])
        ->where('enabled', true);
    
    $userTemplates = DB::table('custom_templates')
        ->select('id', 'name')
        ->selectRaw('? as source', ['Custom'])
        ->where('created_by', auth()->id());
    
    $allTemplates = $systemTemplates
        ->union($userTemplates)
        ->orderBy('name')
        ->get();
    

    When to Use It

    • Merging defaults + overrides: System configs + user-specific settings
    • Multi-table aggregation: Combining similar data from legacy + current tables
    • Fallback chains: Primary source + backup source in one query

    Watch Out For

    • Column count must match: Use selectRaw('? as placeholder', [null]) for missing columns
    • Type compatibility: MySQL will coerce types, but keep them consistent
    • Performance: Union deduplicates by default (like SQL UNION). Use unionAll() if you want duplicates and better performance
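
    For instance, reusing the settings queries from The Pattern above, duplicates can be kept (and the dedup pass skipped) with unionAll():

    ```php
    // unionAll() maps to SQL UNION ALL: no implicit DISTINCT pass,
    // which is cheaper on large result sets
    $combined = $defaultSettings
        ->unionAll($userSettings)
        ->get();
    ```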

    Quick Tip

    If you need to apply filters or pagination to the unified result, wrap it in a subquery:

    $union = $defaultSettings->union($userSettings);
    
    $results = DB::table(DB::raw("({$union->toSql()}) as combined"))
        ->mergeBindings($union)
        ->where('category', 'notifications')
        ->paginate(20);
    

    Union queries keep your data merging at the database level where it belongs—fast, efficient, and clean. (On Laravel 5.6+, DB::query()->fromSub($union, 'combined') builds the same wrapped query and merges the bindings for you.)

  • Timeline-Based Query Scoping for Data Corruption Fixes

    When a bug corrupts data, you need to fix only the affected records—not the entire table. Here’s how to use timestamp-based scoping to target exactly the right rows.

    The Problem

    A code change introduced a bug that incorrectly populated a field. The bug ran in production for 5 days before being caught. You need to fix the corrupted data without touching records created before or after that window.

    The Pattern

    Use created_at or audit timestamps to scope your data migration to the exact corruption window:

    use Illuminate\Database\Migrations\Migration;
    use Illuminate\Support\Facades\DB;
    
    class FixCorruptedCategoryAssignments extends Migration
    {
        public function up()
        {
            // Bug deployed: Feb 20, 2026 at 14:30 UTC
            // Bug fixed: Feb 25, 2026 at 09:15 UTC
            
            $bugStartTime = '2026-02-20 14:30:00';
            $bugEndTime = '2026-02-25 09:15:00';
            
            // Only fix records created during the corruption window
            DB::table('products')
                ->whereBetween('created_at', [$bugStartTime, $bugEndTime])
                ->where('category_id', 0)  // The corrupted value
                ->update([
                    'category_id' => DB::raw('(
                        SELECT categories.id 
                        FROM categories 
                        WHERE categories.slug = products.category_slug
                        LIMIT 1
                    )')
                ]);
        }
    }
    

    How to Find Your Timeline

    Reconstruct the bug timeline from these sources:

    1. Deployment logs — When was the bad code deployed?
    2. Git history — When was the buggy commit merged?
    3. Error monitoring — When did related errors start appearing?
    4. User reports — When did people first complain?
    5. Fix deployment — When was the patch deployed?

    Your scope window is: bug deployed → bug fixed.

    Why This Matters

    Scoping by timestamp prevents two disasters:

    • Over-fixing: Applying “fixes” to records that were never broken
    • Under-fixing: Missing corrupted records because your scope was too narrow

    For example, if you only fix records with category_id = 0 without the timestamp scope, you might accidentally “fix” legitimate records that intentionally have no category (like drafts or templates).

    Testing Your Scope

    Before running the migration:

    // Count affected records
    $count = DB::table('products')
        ->whereBetween('created_at', [$bugStartTime, $bugEndTime])
        ->where('category_id', 0)
        ->count();
    
    echo "Will fix {$count} records\n";
    

    Compare this count against your expectations. If the bug affected ~100 orders per day for 5 days, you’d expect ~500 records. If you see 10,000, your scope is wrong.

    Edge Cases

    Bug Affected Updates, Not Inserts

    If the bug corrupted existing records (not new ones), use updated_at instead:

    DB::table('products')
        ->whereBetween('updated_at', [$bugStartTime, $bugEndTime])
        ->where('category_id', 0)
        ->update(['category_id' => /* fix logic */]);
    

    Multiple Deployment Windows

    If the bug was deployed, rolled back, then re-deployed, use multiple scopes:

    $windows = [
        ['2026-02-20 14:30:00', '2026-02-20 18:00:00'],  // First deployment
        ['2026-02-24 10:00:00', '2026-02-25 09:15:00'],  // Second deployment
    ];
    
    foreach ($windows as [$start, $end]) {
        DB::table('products')
            ->whereBetween('created_at', [$start, $end])
            ->where('category_id', 0)
            ->update(['category_id' => /* fix logic */]);
    }
    

    Verification

    After running the migration, verify the fix:

    // Should return 0
    $remaining = DB::table('products')
        ->whereBetween('created_at', [$bugStartTime, $bugEndTime])
        ->where('category_id', 0)
        ->count();
    
    if ($remaining > 0) {
        echo "WARNING: {$remaining} records still corrupted\n";
    }
    

    Timeline-based scoping turns a risky bulk update into a surgical fix. Analyze first, scope precisely, verify after.


  • Maintaining Forked Laravel Packages: Release Workflow

    Sometimes you need to fork a Laravel package to fix bugs or add features the maintainer won’t merge. Here’s a clean workflow for maintaining your fork while keeping your app’s composer dependencies sane.

    The Scenario

    You’re using a third-party admin panel package, but it has bugs. The upstream maintainer is inactive. You need to fix it yourself and use the fixes in your app.

    Fork Workflow

    # 1. Fork the package on GitHub
    # github.com/original-author/admin-package → github.com/your-org/admin-package
    
    # 2. Create a branch for your fixes
    git checkout -b fix/column-filters
    # ... make your changes ...
    git push origin fix/column-filters
    
    # 3. Create PR in YOUR fork (not upstream)
    # This gives you a place to review and discuss changes internally
    
    # 4. After PR approved, merge to your fork's master
    git checkout master
    git merge fix/column-filters
    git push origin master
    
    # 5. Tag a release
    git tag v2.1.5
    git push origin v2.1.5
    

    Composer Configuration

    In your app’s composer.json, point to your fork:

    {
        "repositories": [
            {
                "type": "vcs",
                "url": "https://github.com/your-org/admin-package"
            }
        ],
        "require": {
            "original-author/admin-package": "^2.1"
        }
    }
    

    When you run composer update, Composer resolves the package from your fork: repositories declared in composer.json take precedence over Packagist, and since your fork keeps the original package name in its own composer.json (don’t rename it), the new v2.1.5 tag satisfies the constraint.

    Version Constraints

    Before tagging, check if your app’s constraint will auto-update:

    • "^2.1" will pick up v2.1.5 automatically ✅
    • "~2.1.0" will pick up v2.1.5 automatically ✅
    • "2.1.0" exact version won’t auto-update ❌

    If using exact versions, you’ll need to manually bump the constraint in composer.json.

    Testing Before Release

    Before tagging, test your changes in staging:

    1. Update composer to use your fork’s master branch (pre-tag)
    2. Test the specific admin pages that use the package
    3. Verify filters, columns, and any features you touched
    4. Only then tag the release and update production
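
    For step 1, Composer can require the fork’s unreleased branch directly; an inline alias (the 2.1.5 here is illustrative) keeps ^2.1-style constraints elsewhere in the dependency tree satisfied:

    ```json
    {
        "require": {
            "original-author/admin-package": "dev-master as 2.1.5"
        }
    }
    ```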

    Documentation

    In your PR description, document:

    • What bug you fixed
    • Which pages/features to test in staging
    • Any breaking changes

    This helps your team review and deploy confidently, especially when the original package documentation is lacking.

    When to Fork vs. Patch

    Fork when:

    • The upstream is abandoned
    • Your fixes are app-specific and won’t be accepted upstream
    • You need long-term control over the package

    Use Composer patches (cweagans/composer-patches) when:

    • The fix is small and temporary
    • You expect upstream to merge it soon
    • You don’t want to maintain a full fork


  • When Denormalized Data Creates Filter-Display Mismatches

    I was debugging a filter that seemed to work perfectly—until users started reporting missing results. The filter UI said “Show items with USD currency,” but items with USD weren’t appearing.

    The problem? The filter was checking one field (batch_items.currency), but the UI was displaying a different field (batches.cached_currencies)—a denormalized summary column that aggregated currencies from all batch items.

    Here’s what was happening:

    // Filter checked individual batch items
    $query->whereHas('batchItems', function ($q) use ($currency) {
        $q->where('currency', $currency);
    });
    
    // But the UI displayed the denormalized summary
    $batch->cached_currencies; // ["USD", "EUR", "GBP"]
    

    When all batch items were sold out or inactive, the whereHas() check would return nothing—even though cached_currencies still showed “USD” because it hadn’t been refreshed.

    The Fix

    Match the filter logic to what users actually see. If you’re displaying denormalized data, either:

    1. Filter against the same denormalized field:
    // Filter matches what's displayed
    $query->whereJsonContains('cached_currencies', $currency);
    
    2. Or ensure the denormalized field stays in sync:
    // Observe the item model, so changes to items refresh the parent's cache;
    // updateQuietly() avoids re-firing model events in a loop
    class BatchItemObserver
    {
        public function saved(BatchItem $item): void
        {
            $item->batch->updateQuietly([
                'cached_currencies' => $item->batch->batchItems()
                    ->distinct()
                    ->pluck('currency'),
            ]);
        }
    }
    

    The Lesson

    When users see one thing but your filter checks another, you’ll get confusing bugs. Always verify: Does my filter logic match what’s displayed in the UI?

    If you’re caching/denormalizing data for performance, make sure filters query the same cached field—or keep it strictly in sync.

  • Role-Based Historical Data Migrations with Progress Tracking

    Fixing historical data issues in production is nerve-wracking. You need to update thousands of records, but a single wrong JOIN can cascade the update to user data you shouldn’t touch.

    Here’s a pattern I use: role-based filtering combined with progress tracking to make data migrations safer and more observable.

    The Problem: Broad Updates Are Dangerous

    Say you need to fix a file naming pattern in a generated_reports table. The naive approach:

    public function up(): void
    {
        DB::table('generated_reports')
            ->where('file_name', 'LIKE', 'Old_Format_%')
            ->update([
                'file_name' => DB::raw("REPLACE(file_name, 'Old_Format_', 'New_Format_')")
            ]);
    }
    

    But what if this table has reports generated by customers, internal admins, AND automated systems? You only want to fix admin reports, but this query hits everything.

    The Pattern: Role-Based Scoping

    Use JOINs to filter by user roles, then add progress tracking:

    use Symfony\Component\Console\Output\ConsoleOutput;
    
    public function up(): void
    {
        $output = new ConsoleOutput();
        
        // Step 1: Find affected records with role filtering
        $affected = DB::table('generated_reports')
            ->join('users', 'generated_reports.user_id', '=', 'users.id')
            ->join('role_user', 'users.id', '=', 'role_user.user_id')
            ->join('roles', 'role_user.role_id', '=', 'roles.id')
            ->whereIn('roles.slug', ['admin', 'finance', 'manager'])
            ->where('generated_reports.file_name', 'LIKE', 'Old_Format_%')
            ->where('generated_reports.created_at', '>=', '2024-01-01')
            ->select('generated_reports.id')
            ->distinct()  // Prevent duplicate IDs from multiple role assignments
            ->get();
        
        $output->writeln("Found {$affected->count()} records to update");
        
        // Step 2: Update in chunks with progress indicator
        $affected->chunk(100)->each(function ($chunk) use ($output) {
            DB::table('generated_reports')
                ->whereIn('id', $chunk->pluck('id'))
                ->update([
                    'file_name' => DB::raw("REPLACE(file_name, 'Old_Format_', 'New_Format_')")
                ]);
            
            $output->write('.');  // Progress indicator
        });
        
        $output->writeln("\nMigration complete: {$affected->count()} records updated");
    }
    

    Why This Matters

    This pattern:

    • Scopes updates safely: JOINs to roles table ensure you only touch records created by specific user types
    • Prevents accidental cascade: If roles are misconfigured, the join matches nothing instead of updating the wrong records
    • Visible progress: ConsoleOutput lets you watch long-running migrations in real-time
    • Handles duplicates: distinct() prevents row multiplication from many-to-many role relationships
    • Testable in dev: Run with --pretend on staging to inspect the generated SQL before anything is executed

    Key Components Explained

    1. Role-Based JOIN Chain

    ->join('users', 'generated_reports.user_id', '=', 'users.id')
    ->join('role_user', 'users.id', '=', 'role_user.user_id')
    ->join('roles', 'role_user.role_id', '=', 'roles.id')
    ->whereIn('roles.slug', ['admin', 'finance', 'manager'])
    

    This filters to records created by users with specific roles. If your app uses a different permission system (Spatie, custom), adjust accordingly.

    2. ConsoleOutput for Progress

    use Symfony\Component\Console\Output\ConsoleOutput;
    
    $output = new ConsoleOutput();
    $output->writeln("Found X records");  // Status line
    $output->write('.');  // Progress dots
    

    Works in migrations because they run via Artisan. You see progress live in your terminal.

    3. distinct() to Avoid Duplicates

    When a user has multiple roles, JOIN creates duplicate rows. distinct() collapses them:

    ->select('generated_reports.id')
    ->distinct()
    

    Without this, the same report gets updated multiple times (harmless but wasteful).

    When to Use This

    • Historical data fixes that should only affect specific user types (admins, internal users, etc.)
    • Migrations on large tables where you need to see progress
    • When UPDATE scope is safety-critical (don’t want to accidentally touch customer data)
    • When you need to generate an affected-IDs report before applying changes

    Testing Before Running

    Always verify scope on staging first:

    # See the query without executing
    php artisan migrate --pretend
    
    # Or add a dry-run to your migration:
    if (app()->environment('local')) {
        $output->writeln("DRY RUN - would update: " . $affected->pluck('id')->implode(', '));
        return;
    }
    

    Remember: Data migrations in production are one-way. Role-based filtering gives you an extra safety net to ensure you’re only touching the records you intend to fix.

  • Bypassing Global Query Scopes for Admin/Backend Features

    Global query scopes are great for filtering production data — hiding soft-deleted records, filtering by status, etc. But when you’re building admin tools, those same scopes become obstacles.

    Here’s the problem: you’re building a translation editor for your e-commerce dashboard. Your Product model has a global scope that hides discontinued products. But translators need to update ALL products, including discontinued ones, because those translations might be needed for historical orders or future reactivation.

    The Pattern: Explicit Scope-Bypassing Methods

    Instead of fighting with scopes everywhere, create dedicated methods that explicitly bypass them:

    // In your Order model
    public function all_items()
    {
        return $this->hasMany(OrderItem::class)
            ->withoutGlobalScopes()
            ->get();
    }
    
    // Usage in admin controllers
    $order = Order::find($id);
    $allItems = $order->all_items(); // Gets even soft-deleted items
    

    Compare this to the default relationship:

    // Normal relationship - respects global scopes
    $order->items; // Hides discontinued/soft-deleted items
    
    // Admin relationship - explicit about bypassing
    $order->all_items(); // Shows everything
    

    Why This Matters

    This pattern:

    • Makes intent explicit: all_items() signals “I know what I’m doing, show me everything”
    • Isolates scope-bypassing: Only admin code uses these methods; production code uses normal relationships
    • Prevents bugs: No accidental scope bypassing in customer-facing features
    • Self-documenting: Method name explains why scopes are bypassed

    When to Use This

    • Admin dashboards that need full data visibility
    • Translation/content management interfaces
    • Data export tools
    • Debugging utilities
    • Bulk operations that shouldn’t skip “hidden” records

    Alternative Approaches

    You could use withoutGlobalScopes() inline everywhere:

    $order->items()->withoutGlobalScopes()->get();
    

    But this is:

    • Verbose and repetitive
    • Easy to forget in some places
    • Harder to grep for “where are we bypassing scopes?”

    A named method is cleaner, easier to test, and more maintainable.

    Bonus: Selective Scope Bypassing

    You can also bypass specific scopes while keeping others:

    use Illuminate\Database\Eloquent\SoftDeletingScope;
    
    public function published_items_including_deleted()
    {
        return $this->hasMany(OrderItem::class)
            ->withoutGlobalScope(SoftDeletingScope::class)
            ->get();
    }
    

    This keeps your “published” scope active but removes soft delete filtering — useful when you want partial bypassing.

    Remember: Global scopes exist to protect production users from seeing the wrong data. When you need admin superpowers, make it explicit with dedicated methods. Your future self (and your code reviewers) will thank you.