benchmark/opperf/README.md
17 additions & 1 deletion
@@ -129,7 +129,7 @@ Output for the above benchmark run, on a CPU machine, would look something like
```
-## Usecase 3.1 - Run benchmarks for group of operators with same input
+## Usecase 4 - Run benchmarks for group of operators with same input
For example, say you want to run benchmarks for the `nd.add` and `nd.sub` operators in MXNet with the same set of inputs. You just run the following Python script (a sketch of it appears after this hunk).
```
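The body of that script falls outside this diff hunk. As a rough sketch only, not the file's exact contents, assuming a `run_performance_test` helper in `benchmark/opperf/utils/benchmark_utils.py` with the keyword arguments shown, it could look like this:

```
# Hedged sketch: benchmark nd.add and nd.sub with one shared input set.
# Shapes, dtype, warmup and run counts are illustrative values, not the
# README's exact ones.
import mxnet as mx
from mxnet import nd

from benchmark.opperf.utils.benchmark_utils import run_performance_test

results = run_performance_test(
    [nd.add, nd.sub],   # nd.sub may be exposed as nd.subtract in some MXNet versions
    run_backward=True,
    dtype='float32',
    ctx=mx.cpu(),
    inputs=[{"lhs": (1024, 1024), "rhs": (1024, 1024)}],
    warmup=10,
    runs=25,
)
print(results)
```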
@@ -173,6 +173,22 @@ This utility queries MXNet operator registry to fetch all operators registered w
However, fully automated tests are enabled only for simpler operators, such as broadcast and element_wise operators. For readability, and to give users more control, complex operators such as convolution (2D, 3D), pooling, and recurrent operators are not fully automated but are expressed as default rules.
See `utils/op_registry_utils.py` for more details.
+## Use Python timer
+Optionally, you can use the Python `time` package as the profiler engine to calibrate the runtime of each operator.
+To use the Python timer for all operators, use the argument --profiler 'python':
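The example command that follows this line in the full README is not part of this excerpt. A minimal, hedged sketch of such an invocation, assuming the driver script lives at `benchmark/opperf/opperf.py` and omitting any other flags:

```
python benchmark/opperf/opperf.py --profiler 'python'
```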