Inefficient Algorithm Causing Request Timeouts
An endpoint that processes data times out as the dataset grows. The algorithm works correctly for small inputs but degrades sharply at scale, often quadratically or worse: response time jumps from 100ms to 10+ seconds as data volume increases.
The algorithm is correct but has poor time complexity that wasn't apparent at scale.
Error Messages You Might See
Exact messages vary by stack, but typically include:
- `504 Gateway Timeout` or `502 Bad Gateway` from a reverse proxy or load balancer
- Client-side errors such as `ETIMEDOUT` or "Request timed out"
- Worker timeout warnings in server logs (e.g. a WSGI/app-server "worker timeout" kill)
Common Causes
- Nested loops creating O(n²) or O(n³) complexity (e.g. scanning the whole list for each item)
- Inefficient search: linear search where binary search should be used
- Unnecessary array copying inside a loop, creating O(n²) work and memory churn
- Recursive algorithms without memoization, recalculating the same values repeatedly
- Sorting inside a loop instead of once before it
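The first cause above can be sketched with a hypothetical duplicate-check: the nested-loop version rescans the list for every item, while a set reduces each membership check to O(1) on average.

```python
def has_duplicates_quadratic(items):
    # O(n^2): for each item, scan the rest of the list.
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b:
                return True
    return False

def has_duplicates_linear(items):
    # O(n): a set gives O(1) average-case membership checks.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```

Both functions return the same answers; only the growth rate differs. At 10,000 items the quadratic version does on the order of 50 million comparisons, the linear one about 10,000 set lookups.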
How to Fix It
Profile the slow endpoint with a realistic dataset size, and look for nested loops and recursive calls. Then:
- Use appropriate data structures: a hash set for O(1) lookups, a sorted array for binary search
- Avoid creating new objects in tight loops
- Memoize or cache expensive calculations
- Consider pagination: process in batches instead of all at once
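The batching suggestion can be sketched as a small helper (names are hypothetical): it processes records in fixed-size chunks, which keeps each unit of work short and memory bounded.

```python
def process_in_batches(records, handle, batch_size=500):
    # Apply `handle` (any function taking a list and returning a list)
    # to fixed-size slices of `records` instead of the whole dataset at once.
    results = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        results.extend(handle(batch))
    return results
```

In a web context the same idea usually appears as pagination: the endpoint accepts an offset/cursor and a limit, and the client makes several small requests instead of one huge one.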
Frequently Asked Questions
How do I identify O(n²) problems?
Double the input size and measure again. If runtime increases by about 4x, it's likely O(n²); about 2x suggests O(n); no change suggests O(1).
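The doubling test can be automated with a rough timing helper (a sketch; real measurements should repeat runs and use larger inputs to reduce noise):

```python
import time

def doubling_ratio(fn, make_input, n):
    # Time fn on inputs of size n and 2n and return the ratio.
    # ~4 suggests O(n^2), ~2 suggests O(n), ~1 suggests O(1).
    def timed(size):
        data = make_input(size)
        start = time.perf_counter()
        fn(data)
        return time.perf_counter() - start
    return timed(2 * n) / timed(n)
```

Because wall-clock timings are noisy, treat the ratio as a hint, not a proof; profiling the endpoint itself remains the authoritative check.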
When should binary search be used?
When searching a sorted array or list: O(log n) per lookup instead of O(n). If the data is unsorted, sort it once (O(n log n)), then binary search.
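In Python, the standard-library `bisect` module provides the binary search primitive, so a sorted-membership check can look like this:

```python
import bisect

def contains_sorted(sorted_items, target):
    # Binary search via bisect_left: O(log n) per lookup.
    # Requires sorted_items to already be in ascending order.
    i = bisect.bisect_left(sorted_items, target)
    return i < len(sorted_items) and sorted_items[i] == target
```

Note that sorting pays off only when you search the same data many times; for a single lookup, a linear scan is simpler and just as fast overall.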
How do I optimize recursive algorithms?
Add memoization (cache results). For Fibonacci, instead of recalculating fib(5) many times, cache it. Or switch to an iterative approach.
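Both options for the Fibonacci example can be sketched with the standard library's `functools.lru_cache`:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each fib(k) is computed once and cached; without the cache
    # the naive recursion takes O(2^n) time.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_iter(n):
    # Iterative alternative: O(n) time, O(1) space, no recursion-depth limit.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The same pattern applies to any pure recursive function whose subproblems repeat: cache the results, or restructure the computation bottom-up.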