The Test Results page provides comprehensive analysis of all load test executions. View aggregate statistics, drill into individual tests, compare results, and set performance baselines.

Key Metrics

Latency Percentiles

Response time distribution: P50, P90, P95, and P99 show the response time that a given percentage of requests completes within.

Error Rate

Percentage of failed requests. Low error rates indicate stable performance under load.

Throughput

Requests per second your system handles at the configured load level.

Success Rate

Percentage of requests returning expected status codes (typically 2xx/3xx).
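
To make these definitions concrete, here is a minimal sketch that derives all four metrics from a list of per-request records. The record shape and field names are assumptions for illustration, not the platform's export format.

```typescript
// Minimal sketch: deriving the key metrics from raw request records.
// The RequestRecord shape is assumed for illustration only.
interface RequestRecord {
  status: number;      // HTTP status code
  durationMs: number;  // end-to-end response time in milliseconds
}

// Nearest-rank percentile over an ascending-sorted array.
function percentile(sortedMs: number[], p: number): number {
  const idx = Math.ceil((p / 100) * sortedMs.length) - 1;
  return sortedMs[Math.max(0, idx)];
}

function summarize(records: RequestRecord[], testDurationSec: number) {
  const sorted = records.map(r => r.durationMs).sort((a, b) => a - b);
  const failed = records.filter(r => r.status >= 400).length;
  const expected = records.filter(r => r.status >= 200 && r.status < 400).length;
  return {
    p50: percentile(sorted, 50),
    p90: percentile(sorted, 90),
    p95: percentile(sorted, 95),
    p99: percentile(sorted, 99),
    errorRate: (failed / records.length) * 100,       // % of failed requests
    successRate: (expected / records.length) * 100,   // % returning 2xx/3xx
    throughput: records.length / testDurationSec,     // requests per second
  };
}
```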

Results List

The main Test Results page displays all test executions in a searchable, filterable table.

Table Columns

Column | Description
Test ID | Unique identifier (click to view details)
Test Name | Name from the associated test plan
Location | Geographic origin (country flag)
Provider | Proxy provider used for the test
VUs | Number of virtual users
Duration | Total test duration
Requests | Total requests made
Created | Test execution timestamp
The list can be narrowed with the search box and filters:
  • Search: Filter by test ID, name, provider, or country
  • Provider Filter: Show only tests from specific providers
  • Country Filter: Show only tests from specific locations
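
If you script against an exported results list rather than the UI, a simple client-side filter over result rows might look like the sketch below. The row shape and field names mirror the table columns above but are assumptions for illustration.

```typescript
// Hypothetical row shape mirroring the results table columns.
interface ResultRow {
  testId: string;
  testName: string;
  provider: string;
  country: string;
}

// Case-insensitive search plus optional provider/country filters,
// mirroring the Search, Provider Filter, and Country Filter controls.
function filterResults(
  rows: ResultRow[],
  search = '',
  provider?: string,
  country?: string,
): ResultRow[] {
  const q = search.toLowerCase();
  return rows.filter(r =>
    (!q || [r.testId, r.testName, r.provider, r.country]
      .some(v => v.toLowerCase().includes(q))) &&
    (!provider || r.provider === provider) &&
    (!country || r.country === country),
  );
}
```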

How to View Test Details

1. Find the Test: Use search or filters to locate the test in the results list.
2. Click the Test ID: Click the test ID link to open the detailed results page.
3. Explore Tabs: Use the tabs to view different aspects of the test results.

Test Detail Page

The detail page provides comprehensive analysis with six tabs.

Header Information

The header displays:
  • Status: SUCCESS, FAILED, RUNNING, or PENDING
  • Multi-Step Badge: Shown for multi-step scenario tests
  • Baseline Badge: Shown if this test is set as the performance baseline
  • Quick Stats: Total requests, errors, success rate

Quick Stats Grid

Eight cards showing key performance indicators:
Metric | Description
Requests | Total number of HTTP requests
Errors | Number of failed requests
Error Rate | Percentage of failures
Avg Latency | Mean response time
P50 Latency | Median response time
P90 Latency | 90th percentile response time
P95 Latency | 95th percentile response time
P99 Latency | 99th percentile response time

Summary Tab

The default view showing:
  • Test Info Cards: Detailed test configuration (URL, method, VUs, duration, executor, thresholds)
  • Latency Chart: Response time distribution over time
  • Status Pie Chart: Distribution of HTTP status codes
  • Outliers Chart: Visualization of slow requests

Steps Tab (Multi-Step Only)

For multi-step tests, this tab shows:
  • Results for each step in the scenario
  • Per-step metrics (latency, success rate)
  • Extracted variables and assertion results
  • Step execution order and timing
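
For orientation, the sketch below shows what a two-step scenario looks like as a K6-style script in TypeScript: step 1 extracts a token from its response, step 2 reuses it, and checks record the assertion results. The URLs, payload, and field names are placeholders, and this is not the platform's scenario format.

```typescript
// Illustrative K6-style multi-step scenario (placeholder URLs and fields).
import http from 'k6/http';
import { check } from 'k6';

export default function () {
  // Step 1: log in and assert on the response.
  const login = http.post(
    'https://example.com/api/login',
    JSON.stringify({ user: 'demo', pass: 'demo' }),
    { headers: { 'Content-Type': 'application/json' } },
  );
  check(login, { 'step 1: status is 200': (r) => r.status === 200 });

  // Extracted variable: token taken from the step 1 response body.
  const token = String(login.json('token'));

  // Step 2: authenticated request that reuses the extracted token.
  const orders = http.get('https://example.com/api/orders', {
    headers: { Authorization: `Bearer ${token}` },
  });
  check(orders, { 'step 2: status is 200': (r) => r.status === 200 });
}
```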

Metrics Tab

Detailed timing metrics from K6:
Metric | Description
Iteration Duration | Total time per test iteration
HTTP Request Duration | End-to-end request time
Connecting Time | TCP connection establishment
TLS Handshake | SSL/TLS negotiation time
Sending Time | Time to send request body
Waiting Time | Time to first byte (TTFB)
Receiving Time | Time to receive response
Blocked Time | Time waiting for connection
Each metric shows MIN, AVG, and MAX values.
Export: Download metrics as CSV or JSON for external analysis.
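
If you post-process an exported metrics file, a short script can recompute MIN/AVG/MAX per metric. The JSON layout assumed here (an array of { metric, value } samples) is illustrative; adjust the parsing to the actual export structure.

```typescript
// Sketch: recompute MIN/AVG/MAX per metric from an exported JSON file.
// The sample layout ({ metric, value } objects) is an assumption.
import { readFileSync } from 'fs';

interface Sample { metric: string; value: number; }

const samples: Sample[] = JSON.parse(readFileSync('metrics-export.json', 'utf8'));

// Group sample values by metric name.
const byMetric = new Map<string, number[]>();
for (const s of samples) {
  const values = byMetric.get(s.metric) ?? [];
  values.push(s.value);
  byMetric.set(s.metric, values);
}

// Print min / avg / max for each metric.
byMetric.forEach((values, metric) => {
  const min = Math.min(...values);
  const max = Math.max(...values);
  const avg = values.reduce((sum, v) => sum + v, 0) / values.length;
  console.log(`${metric}: min=${min} avg=${avg.toFixed(1)} max=${max}`);
});
```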

Requests Tab

Paginated list of individual requests with:
  • Request index and timestamp
  • HTTP status code
  • Response time
  • Request/response details (expandable)
Use this tab to investigate specific slow or failed requests.
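
When reviewing an exported request list offline, a filter like the one below surfaces the rows worth expanding: failed requests and the slowest ones. The record shape and the 1000 ms cutoff are assumptions for illustration.

```typescript
// Sketch: surface failed and unusually slow requests from an exported list.
interface RequestEntry {
  index: number;
  timestamp: string;
  status: number;
  durationMs: number;
}

function findSuspects(entries: RequestEntry[], slowCutoffMs = 1000) {
  const failed = entries.filter(e => e.status >= 400);
  const slow = entries
    .filter(e => e.durationMs >= slowCutoffMs)
    .sort((a, b) => b.durationMs - a.durationMs); // slowest first
  return { failed, slow };
}
```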

Stats Tab

Large format cards displaying all key metrics with copy-to-clipboard functionality. Useful for reporting and sharing specific values.

Terminal Tab

Raw K6 output log showing:
  • Test execution progress
  • Real-time metrics during the run
  • Error messages and warnings
  • Final summary statistics

How to Compare Test Results

Compare two test executions to identify performance changes.
1. Open Test Details: Navigate to the test you want to compare from.
2. Click Compare: Click the Compare button in the header.
3. Select Comparison Test: Choose another test to compare against. Tests from the same plan are shown first.
4. Review Comparison: The comparison shows metric differences, improvements, and degradations.
Comparison highlights improvements in green and degradations in red, making it easy to spot performance changes between releases.
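
The same comparison can be scripted against two exported summaries. The sketch below computes percentage deltas and labels each metric as an improvement or degradation, assuming lower values are better (true for latencies and error rate); metric names are illustrative.

```typescript
// Sketch: diff two test summaries and label each change.
// Assumes lower is better for every metric included (latencies, error rate).
type Summary = Record<string, number>; // e.g. { p95: 420, p99: 910, errorRate: 0.4 }

function compare(baseline: Summary, candidate: Summary) {
  return Object.keys(baseline).map(metric => {
    const before = baseline[metric];
    const after = candidate[metric];
    const deltaPct = before === 0 ? 0 : ((after - before) / before) * 100;
    return {
      metric,
      before,
      after,
      deltaPct: Number(deltaPct.toFixed(1)),
      verdict: deltaPct <= 0 ? 'improvement' : 'degradation',
    };
  });
}
```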

How to Set a Baseline

Set a test as the performance baseline for regression detection.
1. Open a Successful Test: Navigate to a completed test with good performance.
2. Click Set as Baseline: Click the Set as Baseline button in the header.
3. Confirm: The test is now the baseline for its test plan. Future tests are compared against it.
Set baselines after confirming acceptable performance. Future tests automatically show regression alerts if metrics exceed defined thresholds.

Understanding Alerts

Regression Alert

Shown when a test’s metrics exceed the baseline thresholds:
  • Warning: Metrics approaching threshold limits
  • Critical: Metrics significantly exceeding limits
Review the specific metrics that triggered the alert and investigate root causes.
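
One way to picture the warning/critical split is to look at how close a metric sits to its baseline threshold. The 10% margins below are illustrative numbers, not the platform's actual rules.

```typescript
// Sketch: classify a metric against a baseline threshold.
// The 10% margins are assumptions for illustration only.
type Severity = 'ok' | 'warning' | 'critical';

function classify(value: number, threshold: number): Severity {
  if (value >= threshold * 1.1) return 'critical'; // significantly exceeding the limit
  if (value >= threshold * 0.9) return 'warning';  // approaching or just over the limit
  return 'ok';
}

// e.g. classify(480, 500) -> 'warning'; classify(700, 500) -> 'critical'
```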

Threshold Results

If thresholds were defined in the test plan, results show:
  • Passed: Threshold conditions met
  • Failed: Threshold conditions not met
  • Individual threshold evaluation with actual vs expected values
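
If the test plan's thresholds are passed through to K6, they would resemble standard K6 threshold expressions like the ones below. The metric names are K6's built-ins; the limits are examples only, not required or default values.

```typescript
// Illustrative K6 options block with thresholds (example limits only).
// p(95)<500  -> 95th percentile latency must stay under 500 ms
// rate<0.01  -> fewer than 1% of requests may fail
export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500', 'p(99)<1000'],
    http_req_failed: ['rate<0.01'],
    checks: ['rate>0.99'],
  },
};
```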

Error Alert

Shown when a test fails with details about:
  • Error type and message
  • Potential causes
  • Troubleshooting suggestions

Latency Percentiles Explained

Percentile | Meaning | Use Case
P50 | Median: 50% of requests are faster | Typical user experience
P90 | 90% of requests are faster | Most users’ experience
P95 | 95% of requests are faster | SLA target (common)
P99 | 99% of requests are faster | Worst case for most users
Don’t rely only on averages. A low average can hide a long tail of slow requests. Always check P95 and P99 for real user experience.
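
A tiny made-up sample shows why: with 98 fast requests and 2 very slow ones, the average still looks healthy while P99 exposes the tail.

```typescript
// 98 requests at 100 ms and 2 at 5000 ms: the average hides the tail.
const latenciesMs: number[] = [...Array(98).fill(100), 5000, 5000];

const avg = latenciesMs.reduce((s, v) => s + v, 0) / latenciesMs.length; // 198 ms
const sorted = [...latenciesMs].sort((a, b) => a - b);
const p99 = sorted[Math.ceil(0.99 * sorted.length) - 1];                 // 5000 ms

console.log({ avg, p99 }); // avg = 198, p99 = 5000
```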

Troubleshooting

Test Failed

  • Check the Terminal tab for error messages
  • Review the Error Alert for specific failure details
  • Common causes: target unreachable, SSL errors, timeout

High Error Rate

  • Check the Requests tab for failing status codes
  • Review the Status Pie Chart for error distribution
  • Common causes: rate limiting, authentication issues, server errors

High Latency

  • Check the Outliers Chart for specific slow requests
  • Review the Latency Chart for patterns over time
  • Consider network distance (test location vs server)

Results Not Showing

  • Test may still be processing results
  • Refresh the page after a few seconds
  • Very short tests may have limited metric data

Can’t Set a Baseline

  • Only successful tests can be baselines
  • The test must belong to a test plan
  • You need write permission for Load Testing

FAQ

How long are test results retained?

Test results are stored permanently. There is no automatic cleanup or retention limit.

Can individual test results be deleted?

No. Individual test results cannot be deleted, which preserves audit trail integrity for performance history.

What is the difference between P95 and P99?

P95 means 95% of requests were faster than this value. P99 is more stringent: 99% were faster. P99 catches edge cases that P95 might miss.

Why is my average latency low but P99 high?

This indicates a “long tail”: most requests are fast, but some are very slow. Investigate outliers in the Requests tab.

How do I share test results?

Copy the URL from the detail page; it includes the test ID and can be shared directly. Or export metrics as CSV/JSON.

When does a regression alert trigger?

When metrics exceed the baseline’s thresholds. Default thresholds compare latency, error rate, and success rate.