The Problem: MaxAttemptsExceededException on External API Calls
You’ve queued a job that syncs data from an external API. The job processes multiple records in a loop, making HTTP requests for each one. Everything works fine in development, but in production you start seeing this error:
Illuminate\Queue\MaxAttemptsExceededException:
App\Jobs\DataSyncJob has been attempted too many times or run too long.
The job may have previously timed out.
The job hasn’t actually failed—it’s just slow. Laravel thinks it’s taking too long and kills it before it finishes.
Why This Happens
When a queue job makes multiple external API calls (especially in a loop over date ranges or collections), three timeout layers can conflict:
- HTTP client timeout (default: no limit in Guzzle)
- Job timeout (default: 60 seconds in Laravel)
- Job retry logic (default: 3 attempts)
If the external API is slow or your loop has many iterations, the job times out before completing. Laravel marks it as “max attempts exceeded” even though the real issue is timing, not failure.
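To see how quickly the default budget runs out, consider some hypothetical numbers for a date-range sync:

```php
<?php

// Hypothetical numbers: a 30-day date range at roughly 5 seconds per API call
$iterations = 30;
$secondsPerRequest = 5;

$worstCaseRuntime = $iterations * $secondsPerRequest; // 150 seconds

// Laravel's default job timeout is 60 seconds, so the worker
// kills the job partway through the loop on every attempt.
var_dump($worstCaseRuntime > 60); // bool(true)
```

After three such attempts, the worker gives up and throws MaxAttemptsExceededException, even though every individual request succeeded.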
The Solution: Set Explicit Timeouts at Every Level
1. Configure the Job Timeout
Add a timeout property to your queue job to tell Laravel how long it can run:
use Carbon\CarbonPeriod;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class DataSyncJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $timeout = 300;     // Allow up to 5 minutes before the worker kills the job
    public int $tries = 3;         // Retry up to 3 times
    public int $maxExceptions = 1; // But fail immediately on an unhandled exception

    public function __construct(
        protected Report $report,
        protected string $startDate,
        protected string $endDate,
    ) {}

    public function handle(ExternalApiService $api): void
    {
        $period = CarbonPeriod::create($this->startDate, $this->endDate);

        foreach ($period as $date) {
            $api->fetchData($date->toDateString());
        }
    }
}
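A related tactic is keeping each job's workload small enough to fit comfortably inside its timeout. A hypothetical usage sketch, dispatching one job per month instead of one giant range:

```php
// Hypothetical usage: smaller date ranges mean each run stays
// well inside the 300-second timeout.
DataSyncJob::dispatch($report, '2024-01-01', '2024-01-31');
DataSyncJob::dispatch($report, '2024-02-01', '2024-02-29');
```

Smaller jobs also mean a retry repeats less work when something does go wrong.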
2. Set HTTP Client Timeouts
Configure Guzzle (or whatever HTTP client you use) with explicit connect and request timeouts:
// In your API client class
use GuzzleHttp\Exception\RequestException;

protected function makeRequest(string $method, string $url, array $data = [])
{
    try {
        return $this->http->request($method, $url, [
            'json' => $data,
            'timeout' => 30,        // Total request timeout: 30 seconds
            'connect_timeout' => 5, // Connection timeout: 5 seconds
        ]);
    } catch (RequestException $e) {
        // Surface timeouts as a domain exception instead of a raw Guzzle error
        throw new ApiException("External API request failed", 0, $e);
    }
}
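The same defaults can also be set once when the client is constructed, so every request inherits them; a sketch assuming the client is stored on $this->http:

```php
use GuzzleHttp\Client;

// Defaults applied to every request made through this client;
// per-request options still override them.
$this->http = new Client([
    'timeout' => 30,
    'connect_timeout' => 5,
]);
```

This keeps individual call sites from silently falling back to Guzzle's no-limit default when someone forgets the options array.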
3. Add Redis Rate Limiting (Bonus)
If your job processes many items and the external API has rate limits, use Laravel’s Redis throttle to avoid hammering their servers:
use Illuminate\Redis\RedisManager;

public function handle(ExternalApiService $api, RedisManager $redis): void
{
    $codes = $this->report->getCodes();

    foreach ($codes as $code) {
        $redis
            ->throttle('api_sync_' . $api->getName())
            ->allow(10) // 10 requests...
            ->every(60) // ...per 60 seconds
            ->then(function () use ($api, $code) {
                try {
                    $api->fetchData($code);
                } catch (\Exception $e) {
                    // Log but don't fail the entire job
                    logger()->error("API sync failed for code {$code}", [
                        'exception' => $e->getMessage(),
                    ]);
                }
            }, function () {
                // Rate limit hit: release the job back to the queue,
                // delayed by the attempt count in seconds
                $this->release($this->attempts());
            });
    }
}
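Laravel's throttle builder can also wait briefly for a slot via block() instead of releasing the job straight away; a minimal sketch against the same limiter:

```php
$redis->throttle('api_sync_' . $api->getName())
    ->block(5)  // wait up to 5 seconds for a slot before giving up
    ->allow(10)
    ->every(60)
    ->then(
        fn () => $api->fetchData($code),
        fn () => $this->release(30) // still saturated after 5 seconds: retry later
    );
```

Blocking trades a little worker time for fewer release/retry round trips, which can be worthwhile when slots free up frequently.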
When to Use This Pattern
This approach works well when:
- Your job makes multiple external API calls (loops, date ranges, batches)
- The external API can be slow or unreliable
- You need graceful degradation—one failed request shouldn’t kill the entire job
- The external API has rate limits
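One related setting worth checking: the queue connection's retry_after in config/queue.php should be larger than your longest job timeout, or the queue may hand the job to a second worker while the first is still running. A sketch using the 300-second timeout from the job above:

```php
// config/queue.php — retry_after must exceed the longest job timeout
'redis' => [
    'driver' => 'redis',
    // ...
    'retry_after' => 310, // > the 300-second $timeout above
],
```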
Key Takeaways
- Always set explicit timeout properties on long-running queue jobs
- Configure HTTP client timeouts (timeout and connect_timeout)
- Use Redis throttling for rate-limited APIs
- Catch exceptions inside loops so one failure doesn't kill the entire job
- Monitor Sentry/logs for timeout patterns; they reveal slow external dependencies
The MaxAttemptsExceededException isn’t always a failure—sometimes it’s just a sign your timeouts need tuning.