Timing-Based Elicitation
Different code paths take measurably different amounts of time. When the server follows one path for existing resources (database lookup, authorization check, schema validation) and a shorter path for non-existing ones (early 404 return), the latency differential is a side-channel that persists even when all other response signals are normalized. This is the oracle of last resort — it works when every other signal has been collapsed.
Latency Distribution Analysis
(Applies to: All methods) — Non-destructive when using GET/HEAD.
Mechanism: The server's response time varies depending on how deeply the request penetrates the processing pipeline before being rejected. A request for a non-existent resource may be rejected at the routing layer (~3ms), while a request for an existing resource triggers a database lookup, loads the resource, evaluates authorization, and returns the response (~45ms) — even if both return identical 404 responses.
Isolated Variable: Nothing about the request changes between probes. The only variable is the target resource ID. The measurement is server response latency, collected over N>=30 samples per target.
Oracle Signal: Statistically significant latency difference, measured via non-parametric hypothesis testing.
Baseline Calibration — Route-Level vs Application-Level 404
-- Route-level rejection (URL pattern doesn't match any route) --
GET /api/zzz/999 HTTP/1.1 → 404 in ~3ms
GET /api/zzz/999 HTTP/1.1 → 404 in ~2ms
GET /api/zzz/999 HTTP/1.1 → 404 in ~4ms
-- Application-level rejection (valid route, no matching record) --
GET /api/users/999 HTTP/1.1 → 404 in ~45ms
GET /api/users/999 HTTP/1.1 → 404 in ~43ms
GET /api/users/999 HTTP/1.1 → 404 in ~47ms
Mann-Whitney U: p < 0.001, effect size = 0.97
The ~40ms differential — both returning identical 404 — confirms /api/users/:id is a valid route that triggers database access. This is itself a route-enumeration oracle.
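The analysis step above can be sketched in a few lines. This assumes scipy is available; `latency_oracle` is a hypothetical helper, and the collection of the timed samples themselves (30+ GETs per target) is left out:

```python
from scipy.stats import mannwhitneyu

def latency_oracle(slow_ms, fast_ms):
    """Compare two latency sample sets (e.g. application-level vs route-level
    404s). Returns (p_value, effect_size), where effect size is the
    common-language effect size: the probability that a random sample from
    slow_ms exceeds one from fast_ms (1.0 = perfectly separated)."""
    u, p = mannwhitneyu(slow_ms, fast_ms, alternative="two-sided")
    # U counts pairwise wins for slow_ms, so U / (n1 * n2) is the CLES.
    effect = u / (len(slow_ms) * len(fast_ms))
    return p, effect
```

Feeding it the route-level vs application-level samples from the calibration above should reproduce a near-1.0 effect size with a vanishing p-value.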
GET — Existence Timing (Authorized vs Unauthorized Lookup)
GET /api/docs/secret-123 HTTP/1.1 → 403 in ~52ms (found → authz denied)
GET /api/docs/secret-123 HTTP/1.1 → 403 in ~49ms
GET /api/docs/secret-123 HTTP/1.1 → 403 in ~55ms
GET /api/docs/nonexist-999 HTTP/1.1 → 404 in ~12ms (not found → early return)
GET /api/docs/nonexist-999 HTTP/1.1 → 404 in ~11ms
GET /api/docs/nonexist-999 HTTP/1.1 → 404 in ~13ms
Mann-Whitney U: p < 0.001, effect size = 0.99
~50ms for existing resources vs ~12ms for nonexistent reveals the server performs resource lookup before authorization. The timing signal is independent of the response content.
POST — Password/Hash Computation Timing ⚠️ Destructive
POST /api/auth/login
{"email": "alice@example.com", "password": "wrong"}
→ 401 in ~320ms (existing user → bcrypt hash computed and compared)
POST /api/auth/login
{"email": "nobody@example.com", "password": "wrong"}
→ 401 in ~4ms (no user → early return, no hash computation)
Even with identical 401 responses, the ~316ms differential reveals that alice@example.com is a registered user. The hash computation time is the oracle.
💡 Statistical methodology: Use Mann-Whitney U test (non-parametric, no normality assumption) or Kolmogorov-Smirnov test. Collect N>=30 samples per target to account for network jitter. Report p-value and effect size — a statistically significant but tiny effect size may not be practically exploitable.
💡 Constant-time defenses: Some servers add artificial delays to normalize response times. Test by measuring the variance of response times: constant-time implementations have low variance across all targets, while fake-delay implementations often have higher variance for the padded path.
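A quick variance check along these lines can be sketched as follows (the 3x ratio threshold is an illustrative heuristic, not a calibrated value):

```python
import statistics

def padding_suspect(samples_a, samples_b, ratio=3.0):
    """Flag a fake-delay (random padding) defense: a genuinely constant-time
    server shows similar, low variance on both paths, while a randomly
    padded path often shows markedly higher spread."""
    sa = statistics.stdev(samples_a)
    sb = statistics.stdev(samples_b)
    hi, lo = max(sa, sb), min(sa, sb)
    return hi / max(lo, 1e-9) >= ratio  # guard against zero variance
```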
💡 Network jitter cancellation: Send a control request (to a known-existing resource) immediately before or after the target request. Compute the differential between control and target latencies rather than using raw latencies. This cancels out shared network conditions.
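The control/target differential can be tested directly with a one-sample signed-rank test on the paired differences (assuming scipy; `jitter_cancelled_signal` is a hypothetical helper):

```python
import statistics
from scipy.stats import wilcoxon

def jitter_cancelled_signal(pairs):
    """pairs: list of (control_ms, target_ms) latencies measured back-to-back
    so each pair shares the same network conditions. Returns
    (median_differential_ms, p_value) for the hypothesis that the target
    path costs the same as the control path."""
    diffs = [target - control for control, target in pairs]
    p = wilcoxon(diffs).pvalue  # tests median differential == 0
    return statistics.median(diffs), p
```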
Mitigation: Implement constant-time request processing: always perform the full pipeline (resource lookup, auth check, validation) regardless of whether the resource exists. For password endpoints, always compute the hash even when the user doesn't exist (hash a dummy value). Alternatively, add calibrated random delays — but this is fragile against statistical analysis with large sample sizes.
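A minimal sketch of the dummy-hash pattern for the login case, using PBKDF2 from the standard library (the store and function names are illustrative; production code would use a dedicated password library such as bcrypt or argon2):

```python
import hashlib
import hmac
import os

# Hypothetical in-memory store: email -> (salt, PBKDF2 hash).
_USERS = {}
_ITERATIONS = 100_000

# Dummy credential computed once, so unknown users cost one full hash too.
_DUMMY_SALT = os.urandom(16)
_DUMMY_HASH = hashlib.pbkdf2_hmac("sha256", b"dummy", _DUMMY_SALT, _ITERATIONS)

def register(email, password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, _ITERATIONS)
    _USERS[email] = (salt, digest)

def verify_login(email, password):
    # Same work on both paths: one lookup, one hash, one constant-time compare.
    salt, stored = _USERS.get(email, (_DUMMY_SALT, _DUMMY_HASH))
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, _ITERATIONS)
    ok = hmac.compare_digest(candidate, stored)
    return ok and email in _USERS  # unknown users always fail, after hashing
```

The key property is that the expensive hash runs whether or not the user exists, so the ~316ms differential from the example above collapses.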
HTTP/2 Timeless Timing Attack
(Applies to: All methods) — Non-destructive when using GET/HEAD.
Mechanism: HTTP/2 multiplexing allows sending multiple requests over a single TCP connection in a single flight. By sending a request for a known-existing resource and a request for the target resource simultaneously in the same HTTP/2 connection, network jitter is eliminated entirely — both requests traverse the same network path at the same time. The only variable is server-side processing time.
This technique, described in "Timeless Timing Attacks" (Van Goethem et al., USENIX Security 2020), fundamentally changes timing attacks from a noisy network-dependent signal to a precise server-side measurement.
Isolated Variable: Two requests are sent in the same HTTP/2 frame burst: one to a known-existing control resource, one to the target. The relative order of responses reveals which took longer to process.
Oracle Signal: Response ordering. If the target resource exists (longer processing path), its response arrives after the control. If it doesn't exist (fast 404), its response arrives before the control.
HTTP/2 Multiplexed Probe — Existing Target
-- Single TCP connection, single TLS handshake --
Stream 1: GET /api/users/control-known-existing
Stream 3: GET /api/users/target-1001
-- Sent in same HEADERS frame burst --
Response order: Stream 1 (control) arrives first, Stream 3 (target) arrives second.
Both return 404 (server normalizes status codes).
Interpretation: target-1001 took LONGER to process → exists
HTTP/2 Multiplexed Probe — Non-Existing Target
Stream 1: GET /api/users/control-known-existing
Stream 3: GET /api/users/target-9999
Response order: Stream 3 (target) arrives first, Stream 1 (control) arrives second.
Both return 404.
Interpretation: target-9999 took LESS time to process → does not exist
💡 Why this defeats network-level defenses: Traditional timing attacks suffer from network jitter (variable latency per packet). HTTP/2 multiplexing eliminates this: both requests share the same TCP segments, TLS records, and network path. The only variable left is server-side processing time.
💡 Amplification via repeated trials: Run N>=50 trials per target. Count the number of times the target response arrived before vs after the control. A consistent pattern (>75% of trials) confirms the existence signal.
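The trial-counting logic from this tip can be sketched with a binomial test (assuming scipy; `ordering_verdict` is a hypothetical helper operating on already-collected per-trial orderings):

```python
from scipy.stats import binomtest

def ordering_verdict(target_after_control, threshold=0.75):
    """Aggregate repeated same-flight HTTP/2 trials into an existence verdict.

    target_after_control: list of bools, one per trial (True when the target
    stream's response arrived after the control stream's)."""
    n = len(target_after_control)
    k = sum(target_after_control)
    # Probability of a split at least this lopsided if there were no signal.
    p = binomtest(k, n, p=0.5).pvalue
    rate = k / n
    if p < 0.01 and rate >= threshold:
        return "exists", rate, p
    if p < 0.01 and rate <= 1 - threshold:
        return "does not exist", rate, p
    return "inconclusive", rate, p
```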
💡 HTTP/2 PRIORITY manipulation: Some HTTP/2 servers respect stream priority hints. Setting the target stream to higher priority than the control ensures the server processes them in the intended order, making the response-ordering signal cleaner.
Mitigation: Constant-time processing is the only effective defense against timeless timing. Network-level delays, rate limiting, and response padding do not help because the attacker is measuring relative processing time on the same connection. Ensure the server follows the same code path for existing and non-existing resources — including identical database query patterns (e.g., always query, return dummy result for non-existing).
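A minimal sketch of that uniform pipeline, with a hypothetical `get_user` handler and dummy record (SQLite stands in for the real database; names and schema are illustrative):

```python
# Hypothetical dummy record: keeps the post-lookup work identical on both paths.
DUMMY_ROW = (0, "dummy", "nobody")

def authorize(requester, record):
    # Placeholder policy: owners may read their own record.
    return requester == record[2]

def get_user(db, user_id, requester):
    """Uniform pipeline: always query, always authorize, one rejection shape."""
    row = db.execute(
        "SELECT id, name, owner FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    record = row if row is not None else DUMMY_ROW
    allowed = authorize(requester, record)  # evaluated even for missing rows
    if row is None or not allowed:
        return (404, None)  # same status whether missing or denied
    return (200, record)
```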
Connection-State Timing
(Applies to: All methods) — Non-destructive.
Mechanism: Some servers exhibit connection-level behavioral differences based on the resource being accessed. These include:
- Keep-Alive behavior: The server may set different Keep-Alive: timeout=N values or close the connection sooner for existing vs non-existing resources.
- TLS session resumption: The server may offer or refuse TLS session tickets differently based on the application-level result.
- Connection reuse patterns: The server's backend connection pool may route existing-resource requests to different backend servers than non-existing ones, creating measurable TCP-level differences.
Isolated Variable: The HTTP request is identical. The measurement is at the transport layer: TCP connection duration, Keep-Alive header values, TLS handshake characteristics, or connection reuse behavior.
Oracle Signal: Transport-layer behavioral differences correlated with resource existence.
Keep-Alive Header Differential
GET /api/users/1001 HTTP/1.1
Host: target.com
Connection: keep-alive
HTTP/1.1 404 Not Found
Connection: keep-alive
Keep-Alive: timeout=30
---
GET /api/users/9999 HTTP/1.1
Host: target.com
Connection: keep-alive
HTTP/1.1 404 Not Found
Connection: keep-alive
Keep-Alive: timeout=5
Both return 404. But the Keep-Alive: timeout=30 vs timeout=5 differential reveals that resource 1001 was processed by a different backend or code path than 9999. This is a transport-layer existence oracle that persists even when all application-layer signals are normalized.
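Captured Keep-Alive values can be bucketed mechanically; `keepalive_timeout` and `keepalive_oracle` below are hypothetical helpers operating on header values already collected from the probes:

```python
import re

def keepalive_timeout(header_value):
    """Extract the timeout from a Keep-Alive value, e.g. 'timeout=30, max=100'."""
    match = re.search(r"timeout=(\d+)", header_value or "")
    return int(match.group(1)) if match else None

def keepalive_oracle(observations):
    """Group probed resource IDs by their Keep-Alive timeout.

    observations: dict mapping resource ID -> Keep-Alive header value from
    otherwise-identical 404 responses. A split into two timeout buckets
    suggests two distinct backend paths."""
    buckets = {}
    for rid, header in observations.items():
        buckets.setdefault(keepalive_timeout(header), []).append(rid)
    return buckets
```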
💡 Backend routing as the root cause: The most common source of connection-state differentials is a reverse proxy that routes requests to different backend pools based on the application's response. Existing-resource responses may be routed through a caching layer (longer keep-alive), while 404 responses bypass it (shorter keep-alive).
💡 TLS session ticket as an oracle: If the server issues TLS session tickets only for responses from the main application backend (not the 404 handler), the presence or absence of a NewSessionTicket message in the TLS handshake becomes an oracle.
Mitigation: Ensure uniform connection-level behavior regardless of the request's routing path. Configure identical Keep-Alive timeouts, TLS session policies, and connection reuse behavior across all backend pools. If the infrastructure routes requests to different backends based on URL pattern matching, ensure the transport-layer configuration is homogeneous. This is a rare signal in practice — most servers do not exhibit connection-level differentials — but it's worth auditing in high-security contexts where all application-layer oracles have been collapsed.