Resource Exhaustion

DoS • Denial of Service • Memory Exhaustion • CPU Exhaustion

Resource Exhaustion at a glance

What it is: Vulnerabilities that allow attackers to exhaust server resources (memory, CPU, disk, connections) through malicious requests, causing service degradation or complete denial of service.
Why it happens: Resource exhaustion occurs when systems lack limits on input size, memory, execution time, or connection pools, allowing excessive processing, deep nesting, or unmonitored resource use to degrade performance.
How to fix: Set strict limits on memory, CPU, and connections, enforce request timeouts, validate and cap input sizes, and manage resources through bounded pools.

Overview

Resource exhaustion attacks exploit insufficient resource limits to consume all available memory, CPU, disk space, network connections, or file handles, causing the application to crash or become unresponsive. These attacks can be triggered with surprisingly small numbers of malicious requests.

Common attack vectors include unbounded file uploads consuming disk space, large JSON/XML payloads consuming memory, recursive operations exhausting stack space, inefficient algorithms with poor time complexity, connection pool exhaustion from slow requests, and decompression bombs (zip bombs). Without proper resource limits, a single malicious request can crash an entire service.
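Decompression bombs in particular can be screened before extraction. The following is a minimal sketch using Python's standard zipfile module; the size cap and ratio threshold are illustrative assumptions, and because the sizes declared in archive headers can be forged, production code should also enforce a running byte limit while actually extracting.

# Sketch: reject archives whose declared contents are too large or whose
# compression ratio looks like a zip bomb. The limits below are illustrative.
import zipfile

MAX_TOTAL_UNCOMPRESSED = 100 * 1024 * 1024  # 100 MB cap on declared extracted size
MAX_RATIO = 100                             # reject suspicious compression ratios

def check_archive(path):
    with zipfile.ZipFile(path) as archive:
        infos = archive.infolist()
        total_uncompressed = sum(info.file_size for info in infos)
        total_compressed = sum(info.compress_size for info in infos)

        if total_uncompressed > MAX_TOTAL_UNCOMPRESSED:
            raise ValueError('Archive expands beyond the allowed size')
        if total_compressed and total_uncompressed / total_compressed > MAX_RATIO:
            raise ValueError('Suspicious compression ratio (possible zip bomb)')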

sequenceDiagram
    participant Attacker
    participant App
    participant Memory
    Attacker->>App: POST /api/data (10GB JSON payload)
    App->>Memory: Allocate buffer for entire payload
    Memory->>Memory: Allocate 10GB RAM
    Memory-->>App: Out of memory
    App->>App: Crash
    Attacker->>Attacker: Service down
    Note over App: Missing: Payload size limits<br/>Missing: Streaming parser<br/>Missing: Memory limits
A potential flow for a Resource Exhaustion exploit

Where it occurs

Resource exhaustion occurs in systems without proper limits on input size, execution time, or resource allocation, allowing users to trigger excessive processing or memory use.

Impact

Successful resource exhaustion causes service degradation or a complete outage for all users: the application crashes or becomes unresponsive, processes must be restarted, and resource spikes drive up infrastructure costs. Because limits are missing, a single request or a small burst of requests can be enough to take the service down.

Prevention

Apply limits at every layer:

• Set strict limits on file upload sizes.
• Enforce maximum request payload sizes at every layer (web server, application, framework).
• Use memory-efficient streaming for large files.
• Set timeouts on all I/O operations and long-running tasks.
• Limit recursion depth and nesting in data structures.
• Use resource pools with maximum sizes (connection pools, thread pools); a minimal sketch of a bounded pool with per-task timeouts follows this list.
• Implement circuit breakers to prevent cascading failures.
• Monitor resource usage (memory, CPU, connections) and alert on anomalies.
• Use efficient algorithms with good time complexity.
• Validate input sizes before processing.
• Use pagination for large result sets.
• Implement graceful degradation under load.
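The sketch below illustrates two of these controls using only the standard library: a worker pool with a hard cap on concurrency and a per-task deadline. The pool size, timeout value, and helper name are assumptions made for the example, not prescriptions.

# Minimal sketch (not a drop-in implementation): bound concurrency with a
# fixed-size pool and abandon tasks that exceed a deadline.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

MAX_WORKERS = 8            # illustrative pool size
TASK_TIMEOUT_SECONDS = 5   # illustrative per-task deadline

executor = ThreadPoolExecutor(max_workers=MAX_WORKERS)

def run_bounded(task, *args):
    """Run a task on the bounded pool and give up if it takes too long."""
    future = executor.submit(task, *args)
    try:
        return future.result(timeout=TASK_TIMEOUT_SECONDS)
    except FutureTimeout:
        future.cancel()  # best effort; an already-running task cannot be interrupted
        raise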

Examples


Database query without pagination causes memory exhaustion

API endpoint fetches entire database table into memory without limits, enabling DoS attacks.

Vulnerable
Python • Django — Bad
# Vulnerable: No pagination or limits
from django.http import JsonResponse
from .models import User, Log

def export_users(request):
    # VULNERABLE: Loads ALL users into memory!
    users = User.objects.all()
    
    # Serialize all users at once
    data = [{
        'id': user.id,
        'email': user.email,
        'created_at': str(user.created_at),
        'profile': user.profile_data  # Could be large JSON
    } for user in users]
    
    # Returns massive JSON response
    return JsonResponse({'users': data})

# Another vulnerable pattern
def search_logs(request):
    query = request.GET.get('q', '')
    
    # No limit on results
    logs = Log.objects.filter(message__contains=query)
    
    return JsonResponse({
        'results': list(logs.values())
    })

The vulnerable code calls User.objects.all() which loads every user record from the database into Python's memory. With millions of users, this can consume gigabytes of RAM.

The list comprehension then materializes all users into a JSON-serializable list, doubling memory usage before the response is generated.

There's no pagination, no maximum result limit, and no streaming: the entire dataset must fit in memory.

An attacker can repeatedly call this endpoint to exhaust server memory, causing the Python process to be killed or the entire server to swap and become unresponsive.

Secure
Python • Django — Good
# Secure: Enforced pagination and limits
from django.http import JsonResponse
from django.core.paginator import Paginator
from .models import User, Log

# Constants for limits
MAX_PAGE_SIZE = 100
DEFAULT_PAGE_SIZE = 20

def export_users(request):
    # Parse pagination parameters
    page = int(request.GET.get('page', 1))
    page_size = min(
        int(request.GET.get('page_size', DEFAULT_PAGE_SIZE)),
        MAX_PAGE_SIZE  # Never exceed max
    )
    
    # Use queryset slicing for efficiency
    users = User.objects.all()
    paginator = Paginator(users, page_size)
    
    if page > paginator.num_pages:
        return JsonResponse({'error': 'Page not found'}, status=404)
    
    page_obj = paginator.get_page(page)
    
    # Only serialize current page
    data = [{
        'id': user.id,
        'email': user.email,
        'created_at': str(user.created_at),
        'profile': user.profile_data
    } for user in page_obj]
    
    return JsonResponse({
        'users': data,
        'page': page,
        'total_pages': paginator.num_pages,
        'total_count': paginator.count,
        'has_next': page_obj.has_next(),
        'has_previous': page_obj.has_previous()
    })

def search_logs(request):
    query = request.GET.get('q', '')
    page = int(request.GET.get('page', 1))
    page_size = min(
        int(request.GET.get('page_size', DEFAULT_PAGE_SIZE)),
        MAX_PAGE_SIZE
    )
    
    # Limit query length to prevent ReDoS
    if len(query) > 500:
        return JsonResponse({'error': 'Query too long'}, status=400)
    
    # Apply limits at database level
    logs = Log.objects.filter(message__contains=query)
    
    # Count efficiently without loading data
    total_count = logs.count()
    
    # Enforce maximum total results
    if total_count > 10000:
        return JsonResponse({
            'error': 'Too many results. Please refine your query.',
            'total_count': total_count
        }, status=400)
    
    # Paginate results
    paginator = Paginator(logs, page_size)
    page_obj = paginator.get_page(page)
    
    return JsonResponse({
        'results': list(page_obj.object_list.values()),
        'page': page,
        'total_pages': min(paginator.num_pages, 100),  # Cap max pages
        'total_count': total_count
    })

# For large exports, use streaming or background jobs
from django.http import StreamingHttpResponse
import csv

def export_users_csv(request):
    def iter_users():
        # Stream results in chunks
        queryset = User.objects.all().iterator(chunk_size=1000)
        
        for user in queryset:
            yield f"{user.id},{user.email},{user.created_at}\n"
    
    response = StreamingHttpResponse(iter_users(), content_type='text/csv')
    response['Content-Disposition'] = 'attachment; filename="users.csv"'
    return response

The secure implementation enforces pagination with a maximum page size of 100 records. Users cannot request more than this limit per request.

Django's Paginator translates to efficient SQL with LIMIT and OFFSET, so only the requested page is loaded from the database into memory.

The search_logs function includes additional protections: it caps the query string at 500 characters (limiting ReDoS-style abuse) and rejects searches matching more than 10,000 total results, preventing massive dataset transfers.

For legitimate large exports, the export_users_csv function uses queryset.iterator(chunk_size=1000) which streams results in batches, keeping memory usage constant regardless of dataset size.

These protections ensure that memory usage is bounded and predictable, even with large tables and malicious input.
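As a small illustration of the LIMIT/OFFSET point above, slicing a Django queryset (which Paginator does internally) compiles to a bounded SQL query. The session below is hypothetical; the exact SQL text depends on the table name and database backend.

# Hypothetical shell session: queryset slicing fetches only one page's rows
from .models import User  # same model as in the example above

page, page_size = 3, 20
queryset = User.objects.all()[(page - 1) * page_size : page * page_size]
print(queryset.query)
# Roughly: SELECT ... FROM "app_user" LIMIT 20 OFFSET 40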

Engineer Checklist

  • Set maximum file upload sizes (e.g., 10MB)

  • Limit request payload sizes at web server and app level

  • Use streaming for large file processing

  • Implement timeouts on all I/O operations

  • Set maximum recursion depth for nested data (see the sketch after this checklist)

  • Use connection pools with max size limits

  • Limit concurrent requests per user/IP

  • Implement circuit breakers for external dependencies

  • Monitor memory, CPU, and connection usage

  • Use efficient algorithms (avoid O(n²) on user input)

  • Validate input sizes before processing

  • Implement pagination for large result sets

  • Set maximum database query result sizes

  • Use resource limits (cgroups, ulimit)

  • Test with large, malicious inputs
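
To make the recursion-depth item above concrete, here is a minimal sketch that walks an already-parsed JSON structure and rejects anything nested beyond a cap; the limit of 32 is an illustrative assumption.

# Sketch: reject deeply nested payloads after parsing, before further processing.
MAX_DEPTH = 32  # illustrative cap

def check_depth(value, depth=0):
    if depth > MAX_DEPTH:
        raise ValueError('Payload nested too deeply')
    if isinstance(value, dict):
        for child in value.values():
            check_depth(child, depth + 1)
    elif isinstance(value, (list, tuple)):
        for child in value:
            check_depth(child, depth + 1)

# Usage: data = json.loads(body)  # the body itself should already be size-capped
#        check_depth(data)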

End-to-End Example

An API endpoint accepts JSON payloads without size limits, allowing attackers to send massive payloads that exhaust server memory.

Vulnerable
PYTHON
# Vulnerable: No size limits
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/api/upload', methods=['POST'])
def upload():
    # Loads the entire JSON body into memory at once!
    data = request.get_json()

    # Process unbounded data with no cap on item count
    for item in data['items']:
        process_item(item)  # application-specific handler (defined elsewhere)

    return jsonify({'status': 'success'})
Secure
PYTHON
# Secure: Size limits and streaming
from flask import Flask, request, jsonify
import ijson  # incremental JSON parser (third-party package)

app = Flask(__name__)

# Set maximum content length (10MB); Flask rejects larger bodies with 413
app.config['MAX_CONTENT_LENGTH'] = 10 * 1024 * 1024

@app.route('/api/upload', methods=['POST'])
def upload():
    # Check content length before processing
    if request.content_length and request.content_length > app.config['MAX_CONTENT_LENGTH']:
        return jsonify({'error': 'Payload too large'}), 413

    try:
        # Stream-parse the JSON so the full payload is never held in memory
        stream = request.stream
        parser = ijson.items(stream, 'items.item')

        processed = 0
        max_items = 1000

        for item in parser:
            if processed >= max_items:
                return jsonify({'error': 'Too many items'}), 400

            # Process with a per-item timeout; Timeout is assumed to be an
            # application-provided context manager that raises TimeoutError
            with Timeout(5):  # 5 second timeout per item
                process_item(item)  # application-specific handler

            processed += 1

        return jsonify({'status': 'success', 'processed': processed})

    except TimeoutError:
        return jsonify({'error': 'Processing timeout'}), 408
    except MemoryError:
        log_alert('Memory exhaustion attempt')  # application-specific alerting helper
        return jsonify({'error': 'Request too large'}), 413

Discovery

Send progressively larger payloads or deeply nested data structures and observe memory usage and response behavior.
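
A minimal probing sketch for the first step below, intended only for systems you are authorized to test: it sends progressively larger payloads and records status codes and latency. The endpoint is the placeholder used in the steps that follow, and the field name is an assumption.

# Sketch: probe for missing size limits by doubling the payload each round.
import time
import requests

TARGET = 'https://api.example.com/process'  # placeholder endpoint

for exponent in range(10, 24):  # roughly 1 KB up to 8 MB
    payload = {'size_probe': 'A' * (2 ** exponent)}
    started = time.monotonic()
    response = requests.post(TARGET, json=payload, timeout=30)
    elapsed = time.monotonic() - started
    print(f'size=2^{exponent} status={response.status_code} seconds={elapsed:.2f}')
    # A well-limited service should return 413/400 long before the largest
    # sizes; steadily growing latency with 200s suggests missing limits.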

  1. Test resource-intensive operations

    http

    Action

    Submit operations that consume CPU/memory

    Request

    POST https://api.example.com/process
    Body:
    "{'size': 999999999}"

    Response

    Status: 200
    Body:
    {
      "note": "Large payloads processed without limits"
    }

    Artifacts

    memory_consumption no_size_limits
  2. Test concurrent connection limits

    http

    Action

    Open many simultaneous connections

    Request

    GET https://api.example.com/stream

    Response

    Status: 200
    Body:
    {
      "note": "Unlimited connections accepted"
    }

    Artifacts

    connection_exhaustion no_limits
  3. Test file upload size limits

    http

    Action

    Upload extremely large files

    Request

    POST https://api.example.com/upload
    Body:
    "10GB file"

    Response

    Status: 200
    Body:
    {
      "note": "Massive files accepted and processed"
    }

    Artifacts

    disk_exhaustion no_upload_limits

Exploit steps

Attacker sends massive payloads, deeply nested JSON/XML, or zip bombs to exhaust server memory, CPU, or disk space.

  1. Memory exhaustion attack

    Consume all available memory

    http

    Action

    Send payloads that allocate massive amounts of memory

    Request

    POST https://api.example.com/parse
    Body:
    "{'data': 'A' * 10000000}"

    Response

    Status: 200
    Body:
    {
      "note": "Application crashes from out-of-memory error"
    }

    Artifacts

    oom_crash service_outage
  2. Disk space exhaustion

    Fill disk with uploads

    http

    Action

    Upload files until disk is full

    Request

    POST https://api.example.com/upload

    Response

    Status: 200
    Body:
    {
      "note": "Disk full, application cannot write logs or data"
    }

    Artifacts

    disk_full application_failure
  3. CPU exhaustion via regex

    ReDoS attack

    http

    Action

    Trigger expensive regex operations

    Request

    POST https://api.example.com/validate
    Body:
    "input=AAAAAAAAAAAAAAAAAAAAAAAAA!"

    Response

    Status: 200
    Body:
    {
      "note": "CPU pegged at 100%, service unresponsive"
    }

    Artifacts

    cpu_exhaustion redos_attack

Specific Impact

Complete service outage affecting all users, application crashes requiring restarts, and infrastructure costs from resource spikes.

Fix

Set maximum content length limits. Use streaming parsers for large payloads instead of loading into memory. Limit the number of items processed. Implement timeouts on processing. Handle resource errors gracefully and log security events.
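
For the regex-driven CPU exhaustion shown in the exploit steps, the same principle applies to pattern matching: cap input length before the regex engine runs and avoid backtracking-prone constructs such as nested quantifiers. A minimal sketch with an illustrative limit and pattern:

# Sketch: bound input length first, then match with a pattern that has no
# nested quantifiers, so matching time stays roughly linear in input size.
import re

MAX_INPUT_LENGTH = 500  # illustrative cap, mirrors the query-length check above
SAFE_PATTERN = re.compile(r'^[A-Za-z0-9 _.-]{1,500}$')  # no (a+)+-style nesting

def validate(user_input):
    if len(user_input) > MAX_INPUT_LENGTH:
        return False  # reject before the regex engine ever runs
    return SAFE_PATTERN.fullmatch(user_input) is not None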

Detect This Vulnerability in Your Code

Sourcery automatically identifies resource exhaustion vulnerabilities and many other security issues in your codebase.

Scan Your Code for Free