Long-Running Job Timeout - Background Tasks Interrupted
Long-running operations (file processing, email sending, bulk updates) time out before they finish, so tasks never complete.
Operations that take longer than 30 seconds fail on serverless deployments, where the platform kills the request handler at its timeout limit.
Error Messages You Might See
Exact wording varies by platform, but typical examples include:
- "Task timed out after 30.00 seconds" (AWS Lambda)
- 504 Gateway Timeout / FUNCTION_INVOCATION_TIMEOUT (Vercel)
- "upstream request timeout" from a proxy or load balancer
Common Causes
- Running long task synchronously in API route
- Serverless timeout shorter than task duration
- No background job queue (Bull, RabbitMQ)
- Task killed before cleanup code runs
- No retry mechanism for failed jobs
How to Fix It
Use a background job queue: Bull, RabbitMQ, or a cloud service (Google Cloud Tasks, AWS SQS)
Pattern: API returns immediately with job ID, frontend polls for status
Implement retry logic with exponential backoff
Add a timeout to each job: if it hasn't completed in 5 minutes, mark it failed and retry
Process jobs outside request context: cron job, worker service, or queue consumer
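The queue-and-poll pattern above can be sketched in a few lines. This is a minimal illustration using only the Python standard library: the `jobs` dict and in-process thread are stand-ins for a real job store and queue consumer (Redis plus Bull, SQS plus a worker service, etc.), and `handle_request` stands in for your API route.

```python
import threading
import time
import uuid

jobs = {}  # stand-in for a persistent job store (Redis, a database, ...)

def enqueue(task, *args):
    """Record the job, run it outside the request, and return its ID."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "result": None}

    def worker():
        jobs[job_id]["status"] = "running"
        try:
            jobs[job_id]["result"] = task(*args)
            jobs[job_id]["status"] = "complete"
        except Exception as exc:
            jobs[job_id]["status"] = "failed"
            jobs[job_id]["result"] = str(exc)

    threading.Thread(target=worker, daemon=True).start()
    return job_id

def handle_request():
    """API route: respond immediately with a job ID instead of blocking."""
    job_id = enqueue(lambda: sum(range(1000)))  # placeholder long task
    return {"job_id": job_id}  # the frontend polls for status using this ID

response = handle_request()
time.sleep(0.1)  # demo only: give the worker a moment to finish
print(jobs[response["job_id"]]["status"])
```

The key point is that `handle_request` returns as soon as the job is recorded, so the HTTP response is always fast regardless of how long the task itself takes.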
Frequently Asked Questions
How do I handle long tasks?
Queue the job and return immediately with a job ID. The frontend then polls /api/job/:id for status
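The client side of that answer is a poll loop with its own deadline, so the frontend never waits forever. A minimal sketch (`get_status` is a stand-in for an HTTP GET against the job-status endpoint, and `_sleep` is injectable only so the loop can be tested without real waiting):

```python
import time

def poll_until_done(get_status, interval=2.0, deadline=300.0, _sleep=time.sleep):
    """Poll a job-status function until it finishes or the deadline passes."""
    waited = 0.0
    while waited < deadline:
        status = get_status()  # stand-in for GET /api/job/:id
        if status in ("complete", "failed"):
            return status
        _sleep(interval)  # wait between polls to avoid hammering the API
        waited += interval
    return "timeout"

# Demo with a fake endpoint that completes on the third poll.
responses = iter(["pending", "running", "complete"])
print(poll_until_done(lambda: next(responses), _sleep=lambda s: None))  # → complete
```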
What's a good job timeout?
Set it longer than the expected duration: if a job usually takes 2 minutes, use a 5-minute timeout. This prevents healthy jobs from being killed prematurely
Should I retry failed jobs?
Yes. Retry with exponential backoff (e.g. 1 minute, then 5, then 30), capped at 3-5 retries
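That retry schedule can be sketched as follows. This is an illustration, not a library API: real queues such as Bull expose backoff as a job option, but the logic underneath looks like this (`_sleep` is injectable so the schedule can be tested without real one-minute waits):

```python
import time

def run_with_retries(job, delays=(60, 300, 1800), _sleep=time.sleep):
    """Run a job, retrying on failure after 1, 5, then 30 minutes."""
    last_exc = None
    for delay in (0,) + tuple(delays):  # first attempt has no delay
        if delay:
            _sleep(delay)  # back off before each retry
        try:
            return job()
        except Exception as exc:
            last_exc = exc
    raise last_exc  # all retries exhausted: surface the failure

# Demo: a job that fails twice, then succeeds on the third attempt.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("transient failure")
    return "done"

print(run_with_retries(flaky, _sleep=lambda s: None))  # → done
```

Increasing delays give transient problems (a rate limit, a brief outage) time to clear, while the retry cap keeps a permanently broken job from looping forever.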