Key Metrics
Latency Percentiles
Response time distribution: P50, P90, P95, and P99 show how quickly requests complete at each percentile.
Error Rate
Percentage of failed requests. Low error rates indicate stable performance under load.
Throughput
Requests per second your system handles at the configured load level.
Success Rate
Percentage of requests returning expected status codes (typically 2xx/3xx).
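As a rough illustration, these metrics are simple arithmetic over the request log. The `RequestRecord` shape and `keyMetrics` helper below are hypothetical sketches, not part of this product:

```ts
// Hypothetical sketch: deriving error rate, success rate, and throughput
// from an array of completed requests (assumes at least one request).
interface RequestRecord {
  status: number; // HTTP status code
}

function keyMetrics(requests: RequestRecord[], testDurationSec: number) {
  const failed = requests.filter((r) => r.status >= 400).length;
  const ok = requests.filter((r) => r.status >= 200 && r.status < 400).length;
  return {
    errorRate: (failed / requests.length) * 100,   // % of failed requests
    successRate: (ok / requests.length) * 100,     // % returning 2xx/3xx
    throughput: requests.length / testDurationSec, // requests per second
  };
}
```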
Results List
The main Test Results page displays all test executions in a searchable, filterable table.
Table Columns
| Column | Description |
|---|---|
| Test ID | Unique identifier (click to view details) |
| Test Name | Name from the associated test plan |
| Location | Geographic origin (country flag) |
| Provider | Proxy provider used for the test |
| VUs | Number of virtual users |
| Duration | Total test duration |
| Requests | Total requests made |
| Created | Test execution timestamp |
Filtering and Search
- Search: Filter by test ID, name, provider, or country
- Provider Filter: Show only tests from specific providers
- Country Filter: Show only tests from specific locations
How to View Test Details
Test Detail Page
The detail page provides comprehensive analysis with six tabs.
Header Information
The header displays:
- Status: SUCCESS, FAILED, RUNNING, or PENDING
- Multi-Step Badge: Shown for multi-step scenario tests
- Baseline Badge: Shown if this test is set as the performance baseline
- Quick Stats: Total requests, errors, success rate
Quick Stats Grid
Eight cards showing key performance indicators:
| Metric | Description |
|---|---|
| Requests | Total number of HTTP requests |
| Errors | Number of failed requests |
| Error Rate | Percentage of failures |
| Avg Latency | Mean response time |
| P50 Latency | Median response time |
| P90 Latency | 90th percentile response time |
| P95 Latency | 95th percentile response time |
| P99 Latency | 99th percentile response time |
Summary Tab
The default view, showing:
- Test Info Cards: Detailed test configuration (URL, method, VUs, duration, executor, thresholds)
- Latency Chart: Response time distribution over time
- Status Pie Chart: Distribution of HTTP status codes
- Outliers Chart: Visualization of slow requests
Steps Tab (Multi-Step Only)
For multi-step tests, this tab shows (see the script sketch after this list):
- Results for each step in the scenario
- Per-step metrics (latency, success rate)
- Extracted variables and assertion results
- Step execution order and timing
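For reference, a multi-step scenario in a K6 script chains requests inside the default function, extracting values from one response for use in the next. A minimal sketch with hypothetical endpoints and field names:

```ts
// Hypothetical two-step K6 scenario: log in, extract a token, reuse it.
import http from 'k6/http';
import { check } from 'k6';

export const options = { vus: 5, duration: '1m' };

export default function () {
  // Step 1: authenticate and extract a variable from the JSON response.
  const login = http.post(
    'https://api.example.com/login', // hypothetical endpoint
    JSON.stringify({ user: 'demo', password: 'demo' }),
    { headers: { 'Content-Type': 'application/json' } },
  );
  check(login, { 'login succeeded': (r) => r.status === 200 });
  const token = login.json('token'); // extracted variable for the next step

  // Step 2: use the extracted token in an authenticated request.
  const profile = http.get('https://api.example.com/profile', {
    headers: { Authorization: `Bearer ${token}` },
  });
  check(profile, { 'profile returned': (r) => r.status === 200 });
}
```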
Metrics Tab
Detailed timing metrics from K6:
| Metric | Description |
|---|---|
| Iteration Duration | Total time per test iteration |
| HTTP Request Duration | End-to-end request time |
| Connecting Time | TCP connection establishment |
| TLS Handshake | SSL/TLS negotiation time |
| Sending Time | Time to send request body |
| Waiting Time | Time to first byte (TTFB) |
| Receiving Time | Time to receive response |
| Blocked Time | Time waiting for connection |
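These rows correspond to K6's built-in `http_req_*` metrics. In K6's timing model, `http_req_duration` covers only the sending, waiting, and receiving phases; blocked, connecting, and TLS handshake time are reported separately. A sketch of that breakdown:

```ts
// K6's timing model: http_req_duration = sending + waiting + receiving.
// Blocked, connecting, and TLS handshake are tracked as separate metrics.
interface Timings {
  blockedMs: number;      // waiting for an available connection
  connectingMs: number;   // TCP connection establishment
  tlsHandshakeMs: number; // SSL/TLS negotiation
  sendingMs: number;      // writing the request body
  waitingMs: number;      // time to first byte (TTFB)
  receivingMs: number;    // reading the response
}

const httpReqDurationMs = (t: Timings) =>
  t.sendingMs + t.waitingMs + t.receivingMs;
```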
Requests Tab
Paginated list of individual requests with:
- Request index and timestamp
- HTTP status code
- Response time
- Request/response details (expandable)
Stats Tab
Large format cards displaying all key metrics with copy-to-clipboard functionality. Useful for reporting and sharing specific values.
Terminal Tab
Raw K6 output log showing:
- Test execution progress
- Real-time metrics during the run
- Error messages and warnings
- Final summary statistics
How to Compare Test Results
Compare two test executions to identify performance changes.
Select Comparison Test
Choose another test to compare against. Tests from the same plan are shown first.
Comparison highlights improvements in green and degradations in red, making it easy to spot performance changes between releases.
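Conceptually, the comparison is a per-metric percentage delta against the selected test. A hypothetical sketch of that logic, assuming lower-is-better metrics such as latency and error rate:

```ts
// Hypothetical comparison sketch. For lower-is-better metrics a positive
// delta is a degradation (red) and a negative delta an improvement (green);
// a real implementation would invert this for metrics like success rate.
type Metrics = Record<string, number>; // e.g. { p95: 420, errorRate: 0.5 }

function compare(current: Metrics, baseline: Metrics) {
  return Object.keys(current).map((name) => {
    const deltaPct = ((current[name] - baseline[name]) / baseline[name]) * 100;
    return { name, deltaPct, degraded: deltaPct > 0 };
  });
}
```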
How to Set a Baseline
Set a test as the performance baseline for regression detection.
Understanding Alerts
Regression Alert
Shown when a test’s metrics exceed the baseline thresholds (sketch after this list):
- Warning: Metrics approaching threshold limits
- Critical: Metrics significantly exceeding limits
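The exact tiering is product-defined; the sketch below is only an illustration, with assumed 0.9× and 1.25× multipliers:

```ts
// Illustrative only: the 0.9 and 1.25 multipliers are assumptions,
// not the product's actual tiering.
function classify(value: number, limit: number): 'ok' | 'warning' | 'critical' {
  if (value >= limit * 1.25) return 'critical'; // significantly exceeding the limit
  if (value >= limit * 0.9) return 'warning';   // approaching the limit
  return 'ok';
}
```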
Threshold Results
If thresholds were defined in the test plan, results show (example after this list):
- Passed: Threshold conditions met
- Failed: Threshold conditions not met
- Individual threshold evaluation with actual vs expected values
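Since tests execute on K6, thresholds are presumably expressed in K6's standard `thresholds` syntax. A minimal runnable example with illustrative limits (`test.k6.io` is K6's public demo target):

```ts
// Each threshold expression is evaluated to Passed or Failed after the run.
// The limits here are examples only; your test plan defines the real ones.
import http from 'k6/http';

export const options = {
  thresholds: {
    http_req_duration: ['p(95)<500', 'p(99)<1000'], // latency limits in ms
    http_req_failed: ['rate<0.01'],                 // error rate below 1%
  },
};

export default function () {
  http.get('https://test.k6.io');
}
```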
Error Alert
Shown when a test fails with details about:
- Error type and message
- Potential causes
- Troubleshooting suggestions
Latency Percentiles Explained
| Percentile | Meaning | Use Case |
|---|---|---|
| P50 | Median - 50% of requests faster | Typical user experience |
| P90 | 90% of requests faster | Most users’ experience |
| P95 | 95% of requests faster | SLA target (common) |
| P99 | 99% of requests faster | Worst case for most users |
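For concreteness, a sketch of the nearest-rank method, one common way to compute these values (this tool's exact interpolation may differ):

```ts
// Nearest-rank percentile: sort ascending, take the value at rank ceil(p% * n).
function percentile(latenciesMs: number[], p: number): number {
  const sorted = [...latenciesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// percentile([100, 120, 130, 400, 900], 50) === 130 (median)
// percentile([100, 120, 130, 400, 900], 99) === 900
```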
Troubleshooting
Test shows FAILED status
- Check the Terminal tab for error messages
- Review the Error Alert for specific failure details
- Common causes: target unreachable, SSL errors, timeout
High error rate
- Check the Requests tab for failing status codes
- Review Status Pie Chart for error distribution
- Common causes: rate limiting, authentication issues, server errors
Unexpected latency spikes
- Check the Outliers Chart for specific slow requests
- Review the Latency Chart for patterns over time
- Consider network distance (test location vs server)
Missing metrics
- Test may still be processing results
- Refresh the page after a few seconds
- Very short tests may have limited metric data
Cannot set as baseline
- Only successful tests can be baselines
- The test must belong to a test plan
- You need write permission for Load Testing
FAQ
How long are results stored?
Test results are stored permanently. There is no automatic cleanup or retention limit.
Can I delete test results?
Individual test results cannot be deleted. This ensures audit trail integrity for performance history.
What's the difference between P95 and P99?
P95 means 95% of requests were faster than this value. P99 is more stringent - 99% faster. P99 catches edge cases that P95 might miss.
Why is my average latency low but P99 high?
This indicates a “long tail” - most requests are fast, but some are very slow. Investigate outliers in the Requests tab.
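For example, with illustrative numbers: 99 requests at 100 ms and one at 5000 ms average out to 149 ms, yet P99 is 5000 ms.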
How do I share results with my team?
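Use the copy-to-clipboard cards on the Stats tab, which are designed for reporting and sharing specific values.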
What triggers a regression alert?
When metrics exceed the baseline’s thresholds. Default thresholds compare latency, error rate, and success rate.