Queue jobs that fail repeatedly on predictable API errors waste server resources and pollute your error tracking. Here’s how to handle known validation constraints gracefully instead of letting jobs retry until they exhaust their attempts.
The Problem
Imagine a background sync job that queries a third-party API. The job runs every hour, but some requests hit a validation constraint that the API rejects:
try {
    $data = $this->apiClient->getProductAvailability($productId, $startDate, $endDate);
} catch (ApiException $e) {
    throw new SupplierException(
        sprintf('Error connecting to API: %s', $e->getMessage())
    );
}
When the API returns a 422 error for a constraint like “this product is a bundle”, the job fails, logs to Sentry, and retries. If you have thousands of products and a fraction are bundles, you’ll generate hundreds of error events per day for a completely predictable scenario.
One real-world case generated 1,434 Sentry errors over 10 days before someone investigated.
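For context, the surrounding job might look something like this. This is a minimal sketch, not the original code: the class name, constructor signature, and `$tries` value are hypothetical, but they show why retries multiply the error count.

```php
<?php

use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;

// Hypothetical job wrapping the API call shown above.
class SyncProductAvailability implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue;

    // Laravel re-runs a failed job up to $tries times. Every attempt
    // against a bundle product fails identically and logs to Sentry,
    // so one bad product produces $tries error events per run.
    public $tries = 3;

    public function __construct(
        private int $productId,
        private string $startDate,
        private string $endDate,
    ) {}

    public function handle(ApiClient $apiClient): void
    {
        try {
            $data = $apiClient->getProductAvailability(
                $this->productId, $this->startDate, $this->endDate
            );
        } catch (ApiException $e) {
            throw new SupplierException(
                sprintf('Error connecting to API: %s', $e->getMessage())
            );
        }

        // ... persist $data ...
    }
}
```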
The Solution: Graceful Error Handling
Instead of treating all API errors the same way, detect known validation constraints and handle them gracefully:
try {
    $data = $this->apiClient->getProductAvailability($productId, $startDate, $endDate);
} catch (ApiException $e) {
    // Check if this is a known validation constraint
    if ($e->getStatusCode() === 422 && str_contains($e->getMessage(), 'product is a bundle')) {
        Log::info("Skipping bundle product {$productId} - requires child product sync");

        // Optional: fetch child products and queue them instead
        $children = $this->apiClient->getBundleChildren($productId);
        foreach ($children as $childId) {
            SyncProductAvailability::dispatch($childId, $startDate, $endDate);
        }

        return; // Exit gracefully, don't retry
    }

    // For unexpected errors, fail loudly
    throw new SupplierException(
        sprintf('Error connecting to API: %s', $e->getMessage())
    );
}
Why This Matters
Queue jobs that fail on predictable errors have real costs:
- Wasted compute – Retry logic consumes server resources for errors that will never resolve
- Error noise – Real issues get buried in thousands of false-positive alerts
- Rate limits – Repeated invalid requests can exhaust API quotas
- Debugging friction – When real errors occur, they’re hidden in the noise
This is especially critical for high-volume background jobs where a single unhandled edge case can generate thousands of errors per day. By handling known constraints gracefully, you reduce noise, save retry costs, and can implement smart fallback logic like queuing child products.
When debugging queue job failures, check Sentry event counts. If you see the same error repeating hundreds of times, it’s likely hitting a predictable constraint that should be handled explicitly instead of retried.