docs/performance/motion.md (3 additions & 1 deletion)
@@ -18,7 +18,9 @@ In addition to testing the efficiency of your app rendering, Sauce Labs also pro
### What You'll Need
- Google Chrome (no older than 3 versions from latest)
-- Test configuration must have performance enabled. See [Set Performance Capabilities](/performance/transitions#set-performance-capabilities) for instructions.
+- Test configuration must have performance enabled.
+See [Set Performance Capabilities](/performance/transitions/#setting-performance-capabilities)
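
For reference, the capability requirement called out in that bullet is typically satisfied through `sauce:options` in the test configuration. A minimal WebdriverIO sketch, with placeholder platform and test names you would adjust for your own project, might look like this:

```js
// Sketch of a WebdriverIO config with Sauce Labs performance capturing enabled.
// The platform, browser version, and test name below are placeholders.
exports.config = {
  capabilities: [{
    browserName: 'chrome',
    platformName: 'Windows 10',
    browserVersion: 'latest',
    'sauce:options': {
      extendedDebugging: true,   // enables the custom sauce:* commands
      capturePerformance: true,  // turns on performance metric collection
      name: 'performance-smoke-test',
    },
  }],
};
```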
-The custom `sauce:performance` command measures the performance output against a baseline of previously accepted performance values. If no baseline has been set, the Performance test will create one by measuring performance output 10 times to get an aggregate baseline. The command returns `pass` when the current results are within the baseline allowances or `fail` when the results fall outside the baseline. A fail result gives you the option to handle [regressions](#handle-regressions).
+The custom `sauce:performance` command measures the performance output against
+a baseline of previously accepted performance values. If no baseline has been
+set, the Performance test will create one by measuring performance output 10
+times to get an aggregate baseline. The command returns `pass` when the current
+results are within the baseline allowances or `fail` when the results fall
+outside the baseline. A fail result gives you the option to handle
+[regressions](#handling-regressions).
:::caution
Enabling performance capturing can add up to 60 seconds per URL change in a test. We, therefore, advise separating your performance tests from your functional tests. See our [Performance Requirements and Recommendations](https://docs.saucelabs.com/performance/about/#sauce-performance-requirements-and-recommendations) for more advice on optimizing your performance test results.
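
For context, the paragraph above describes a command that is issued from the test script itself. In WebdriverIO it is sent through `browser.execute`; the sketch below assumes the `name`/`metrics` options and a `result` field on the response, and uses a hypothetical page URL, so treat it as illustrative rather than canonical:

```js
const assert = require('assert');

describe('checkout page', () => {
  it('stays within the performance baseline', async () => {
    await browser.url('https://example.com/checkout'); // hypothetical page under test

    // Ask Sauce Labs to evaluate the captured metrics against the stored baseline.
    const performance = await browser.execute('sauce:performance', {
      name: 'checkout page load',       // identifies which baseline to compare against
      metrics: ['speedIndex', 'load'],  // omit to evaluate all supported metrics
    });

    // 'pass' means every requested metric is within the baseline allowances.
    assert.equal(performance.result, 'pass');
  });
});
```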
@@ -268,7 +274,14 @@ performanceLogs[metric] < value`metric ${metric} is over the performance budget`
## Handling Regressions
-When one or more metric evaluations fail because the result falls outside the established baseline, it is considered a regression and the tester has an option to either troubleshoot and resolve the source of the regression to get the test back into the baseline range or [update the baseline](/performance/analyze#reset-baselines-for-a-failed-test) with the new performance values. If new baselines are accepted, the command will measure performance against those new values until another regression is detected, when you will again have the option to troubleshoot or update the baselines.
+When one or more metric evaluations fail because the result falls outside the
+established baseline, it is considered a regression and the tester has an option
+to either troubleshoot and resolve the source of the regression to get the test
+back into the baseline range or [update the baseline](/performance/analyze/#resetting-baselines-for-a-failed-test)
+with the new performance values. If new baselines are accepted, the command will
+measure performance against those new values until another regression is
+detected, when you will again have the option to troubleshoot or update the
+baselines.
Since the command can be called throughout the test script, create tests that check for performance regressions across core business flows and screens. For example, evaluate pages that load following a successful login event or require multiple steps to trigger.
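
As an illustration of that advice, a test could retrieve the captured metrics right after a login flow and assert a budget for each one, in the style of the `performanceLogs[metric] < value` check referenced in the hunk header above. The budget values and URL below are hypothetical:

```js
const assert = require('assert');

// Hypothetical per-metric budgets, in milliseconds.
const budgets = { load: 5000, firstContentfulPaint: 1000, speedIndex: 1500 };

it('meets the performance budget after login', async () => {
  await browser.url('https://example.com/login'); // hypothetical app URL
  // ...perform the login steps for your app here...

  // Retrieve the metrics Sauce Labs captured for the most recent page load.
  const performanceLogs = await browser.execute('sauce:log', { type: 'sauce:performance' });

  for (const [metric, value] of Object.entries(budgets)) {
    assert.ok(performanceLogs[metric] < value, `metric ${metric} is over the performance budget`);
  }
});
```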