Commit f9eb9b8

Author: Rohit Kumar Srivastava

Revert "Updating profiler tutorial to include new custom operator profiling (apache#15403)"

This reverts commit d49445f.

1 parent: 3583c55

File tree

1 file changed: +0, -75 lines

docs/tutorials/python/profiler.md

Lines changed: 0 additions & 75 deletions

@@ -208,81 +208,6 @@ Let's zoom in to check the time taken by operators

The above picture visualizes the sequence in which the operators were executed and the time taken by each operator.

### Profiling Custom Operators

Should the existing NDArray operators fail to meet all of your model's needs, MXNet supports [Custom Operators](https://mxnet.incubator.apache.org/versions/master/tutorials/gluon/customop.html) that you can define in Python. In a custom operator's `forward()` and `backward()` methods, there are two kinds of code: "pure Python" code (NumPy operators included) and "sub-operators" (NDArray operators called within `forward()` and `backward()`). MXNet can profile the execution time of both kinds without additional setup: the profiler breaks a single custom operator call into one pure Python event plus a sub-operator event for each NDArray operator it invokes. Conveniently, all of these events carry the name of the custom operator you called as a prefix.

Let's try profiling custom operators with the following code example:

```python
import mxnet as mx
from mxnet import nd
from mxnet import profiler

class MyAddOne(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        self.assign(out_data[0], req[0], in_data[0] + 1)

    def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
        self.assign(in_grad[0], req[0], out_grad[0])

@mx.operator.register('MyAddOne')
class CustomAddOneProp(mx.operator.CustomOpProp):
    def __init__(self):
        super(CustomAddOneProp, self).__init__(need_top_grad=True)

    def list_arguments(self):
        return ['data']

    def list_outputs(self):
        return ['output']

    def infer_shape(self, in_shape):
        return [in_shape[0]], [in_shape[0]], []

    def create_operator(self, ctx, shapes, dtypes):
        return MyAddOne()

inp = mx.nd.zeros(shape=(500, 500))

profiler.set_config(profile_all=True, continuous_dump=True)
profiler.set_state('run')

w = nd.Custom(inp, op_type="MyAddOne")

mx.nd.waitall()

profiler.set_state('stop')
profiler.dump()
```
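The dump written by `profiler.dump()` is in the Chrome trace-event format, and, as described above, every event belonging to the custom operator carries its name as a prefix. As a rough, stdlib-only sketch of what that naming convention lets you do, the snippet below filters a trace for `MyAddOne` events. The sample event list is hypothetical and heavily abbreviated; a real dump contains many more events and fields.

```python
import json

# Hypothetical excerpt of a profiler dump in Chrome trace-event format;
# a real dump from profiler.dump() is much larger.
sample_trace = json.dumps({
    "traceEvents": [
        {"name": "MyAddOne::pure_python", "ph": "B", "ts": 100},
        {"name": "MyAddOne::pure_python", "ph": "E", "ts": 250},
        {"name": "MyAddOne::_plus_scalar", "ph": "B", "ts": 120},
        {"name": "MyAddOne::_plus_scalar", "ph": "E", "ts": 180},
        {"name": "broadcast_add", "ph": "B", "ts": 300},
    ]
})

events = json.loads(sample_trace)["traceEvents"]
# Keep only events whose names carry the custom operator's prefix
custom_events = [e["name"] for e in events if e["name"].startswith("MyAddOne")]
print(sorted(set(custom_events)))
```

The same filtering idea applies to any custom operator name, since the prefix is simply the name passed to `mx.operator.register`.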

Here, we have created a custom operator called `MyAddOne`; within its `forward()` function, we simply add one to the input. We can visualize the dump file in `chrome://tracing/`:

![Custom Operator Profiling Screenshot](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_output_custom_operator_chrome.png)

As the screenshot shows, all custom-operator-related events fall into the **Custom Operator** domain, where we can easily see the execution time of each segment of `MyAddOne`. We can tell that `MyAddOne::pure_python` is executed first, that `CopyCPU2CPU` and `_plus_scalar` are two "sub-operators" of `MyAddOne`, and the sequence in which they are executed.

Please note that to see the information described above, you need to set `profile_imperative` to `True` even when you use custom operators in [symbolic mode](https://mxnet.incubator.apache.org/versions/master/tutorials/basic/symbol.html) (refer to the code snippet below, which is the symbolic-mode equivalent of the code example above). The reason is that, within custom operators, pure Python code and sub-operators are still called imperatively.

```python
# Set profile_all to True
profiler.set_config(profile_all=True, aggregate_stats=True, continuous_dump=True)
# OR, explicitly set profile_symbolic and profile_imperative to True
profiler.set_config(profile_symbolic=True, profile_imperative=True,
                    aggregate_stats=True, continuous_dump=True)

profiler.set_state('run')
# Use symbolic mode
a = mx.symbol.Variable('a')
b = mx.symbol.Custom(data=a, op_type='MyAddOne')
c = b.bind(mx.cpu(), {'a': inp})
y = c.forward()
mx.nd.waitall()
profiler.set_state('stop')
profiler.dump()
```
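In either mode, the resulting trace is in the Chrome trace-event format. Assuming the dump records begin/end phase pairs (`"ph": "B"` / `"ph": "E"` with microsecond `"ts"` timestamps, as that format defines), per-event durations can be recovered with the stdlib alone. This is a sketch over a made-up event list, not MXNet API code:

```python
from collections import defaultdict

# Hypothetical begin/end events as they might appear in a trace dump;
# "ts" is a timestamp in microseconds.
events = [
    {"name": "MyAddOne::pure_python", "ph": "B", "ts": 100},
    {"name": "MyAddOne::_plus_scalar", "ph": "B", "ts": 120},
    {"name": "MyAddOne::_plus_scalar", "ph": "E", "ts": 180},
    {"name": "MyAddOne::pure_python", "ph": "E", "ts": 250},
]

starts = defaultdict(list)
durations = {}
for e in events:
    if e["ph"] == "B":      # push begin timestamp (handles nested calls)
        starts[e["name"]].append(e["ts"])
    elif e["ph"] == "E":    # pair with the most recent begin of the same name
        durations[e["name"]] = e["ts"] - starts[e["name"]].pop()

print(durations)  # {'MyAddOne::_plus_scalar': 60, 'MyAddOne::pure_python': 150}
```

Note how the sub-operator's interval nests inside the pure Python interval, matching the sequencing visible in `chrome://tracing/`.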

## Advanced: Using NVIDIA Profiling Tools

MXNet's Profiler is the recommended starting point for profiling MXNet code, but NVIDIA also provides tools for low-level profiling of CUDA code: [NVProf](https://devblogs.nvidia.com/cuda-pro-tip-nvprof-your-handy-universal-gpu-profiler/), [Visual Profiler](https://developer.nvidia.com/nvidia-visual-profiler) and [Nsight Compute](https://developer.nvidia.com/nsight-compute). These tools can profile any executable, so they work on Python scripts running MXNet, and you can use them in conjunction with the MXNet Profiler to see high-level information from MXNet alongside low-level CUDA kernel information.
