When you call an external API, not all failures are equal.
Some errors are transient (timeouts, rate limits), and retrying makes sense. Others are permanent for a given resource (disabled, expired, missing); retrying those just burns worker time and creates noise.
Step 1: extract a machine-readable error code
If the API returns a JSON error payload, parse it once and normalize it.
use GuzzleHttp\Exception\RequestException;

function extractRemoteErrorCode(RequestException $e): ?string
{
    // The response may be absent, so guard with optional().
    $body = (string) optional($e->getResponse())->getBody();
    $data = json_decode($body, true);

    if (!is_array($data)) {
        return null; // empty or non-JSON body: nothing to classify
    }

    // APIs name the field differently; normalize to a single value.
    return $data['error_code'] ?? $data['error'] ?? null;
}
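The decode-and-normalize step is worth sanity-checking on its own. Here is a standalone version of the same logic, without Guzzle; the sample payloads are made up for illustration:

```php
<?php
// Same normalization as extractRemoteErrorCode(), minus the HTTP layer.
function normalizeErrorCode(?string $body): ?string
{
    $data = json_decode((string) $body, true);

    if (!is_array($data)) {
        return null;
    }

    return $data['error_code'] ?? $data['error'] ?? null;
}

var_dump(normalizeErrorCode('{"error_code":"RESOURCE_DISABLED"}')); // "RESOURCE_DISABLED"
var_dump(normalizeErrorCode('{"error":"RESOURCE_EXPIRED"}'));       // falls back to "error"
var_dump(normalizeErrorCode('<html>502 Bad Gateway</html>'));       // null: not JSON
```

The HTML case matters in practice: gateways and load balancers often return non-JSON bodies, and returning `null` keeps those on the retry path.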
Step 2: map codes to a “don’t retry” exception
Create a dedicated exception type that your job runner can treat as non-retriable.
final class PermanentRemoteFailure extends \RuntimeException {}

try {
    $client->get('/v1/resource/' . $resourceId);
} catch (RequestException $e) {
    $code = extractRemoteErrorCode($e);

    $permanentCodes = [
        'RESOURCE_DISABLED',
        'RESOURCE_EXPIRED',
        'RESOURCE_NOT_FOUND',
    ];

    if (in_array($code, $permanentCodes, true)) {
        // Wrap the original exception so context survives for logging.
        throw new PermanentRemoteFailure('Remote resource is not usable', 0, $e);
    }

    // Unknown/transient: let the queue retry policy handle it.
    throw $e;
}
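If several call sites need the same decision, the inline array is worth pulling into one classifier function, which also gives you the "single place to expand your error taxonomy" mentioned below. A minimal sketch (the function name and code list are illustrative):

```php
<?php
// One place to answer "is this error code permanent?".
// Extend the list as you observe real failures from the API.
function isPermanentErrorCode(?string $code): bool
{
    static $permanentCodes = [
        'RESOURCE_DISABLED'  => true,
        'RESOURCE_EXPIRED'   => true,
        'RESOURCE_NOT_FOUND' => true,
    ];

    // null means "we could not classify it" and defaults to retriable.
    return $code !== null && isset($permanentCodes[$code]);
}

var_dump(isPermanentErrorCode('RESOURCE_EXPIRED')); // bool(true)
var_dump(isPermanentErrorCode('RATE_LIMITED'));     // bool(false): transient, retry
var_dump(isPermanentErrorCode(null));               // bool(false): unknown, retry
```

Defaulting unknown codes to "retriable" is the conservative choice: a misclassified transient error only costs a few retries, while a misclassified permanent one silently drops work.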
Step 3: teach your job what to do next
The point isn’t just to stop retries — it’s to move the system forward.
public function handle()
{
    try {
        $this->syncOne($this->remoteId);
    } catch (PermanentRemoteFailure $e) {
        $this->markAsInactive($this->remoteId);
        return; // stop here; no retry
    }
}
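The key behavior here is that catching `PermanentRemoteFailure` and returning normally means the queue never counts a failed attempt, while any other exception propagates and consumes a retry. A toy stand-in for the worker's retry loop (not Laravel's actual implementation) makes that control flow visible:

```php
<?php
final class PermanentRemoteFailure extends \RuntimeException {}
final class TransientRemoteFailure extends \RuntimeException {}

// Toy retry loop: permanent failures stop immediately,
// transient ones consume attempts until the budget runs out.
function runWithRetries(callable $job, int $maxAttempts): string
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            $job();
            return 'succeeded';
        } catch (PermanentRemoteFailure $e) {
            return 'marked inactive'; // no further attempts
        } catch (\RuntimeException $e) {
            // transient: fall through and try again
        }
    }
    return 'exhausted retries';
}

echo runWithRetries(fn () => throw new PermanentRemoteFailure('disabled'), 5);
// prints "marked inactive" after exactly one attempt
```

In a real Laravel job you get the loop for free from the worker; the job only needs the `try`/`catch` shown above, plus whatever retry budget (`$tries`, backoff) you configure for the transient path.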
Why this is worth doing
- Fewer wasted retries
- Cleaner alerts
- More predictable queue behavior
- A single place to expand your error taxonomy as you learn
If you’re already catching exceptions, you’re 80% there — the rest is classifying them so your system reacts appropriately.