This repository was archived by the owner on Nov 17, 2023. It is now read-only.
General support for Float16 and other DTypes #2302
Closed
From what I can tell, the following operators currently don't support anything other than real_t (e.g. Float32). I am going to work on fixing the ones that are important for my research, and I would welcome any help. I feel that comprehensive support for other data types is important for MXNet.
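For context, adding DType support to an operator generally means templating its kernels on a `DType` parameter and dispatching on the runtime dtype instead of hard-coding `real_t`. Below is a minimal C++ sketch of that pattern, assuming mshadow's `MSHADOW_REAL_TYPE_SWITCH` macro; `ExampleOp` and `CreateExampleOp` are hypothetical names for illustration, not operators from this list.

```cpp
// Minimal sketch (not code from this repo): the usual shape of DType support.
#include <mshadow/tensor.h>  // mshadow::Tensor, dtype enums, type-switch macros

// 1) Kernels are templated on DType instead of hard-coding real_t, so the
//    same code compiles for float32, float64, and float16.
template <typename xpu, typename DType>
class ExampleOp {
 public:
  void Forward(const mshadow::Tensor<xpu, 2, DType> &data,
               mshadow::Tensor<xpu, 2, DType> out) {
    out = data + data;  // placeholder kernel body, written in terms of DType
  }
};

// 2) A factory switches on the runtime dtype and instantiates the matching
//    template. MSHADOW_REAL_TYPE_SWITCH expands to a switch over
//    kFloat32 / kFloat64 / kFloat16, binding the concrete type to DType.
template <typename xpu>
void *CreateExampleOp(int dtype) {
  void *op = nullptr;
  MSHADOW_REAL_TYPE_SWITCH(dtype, DType, {
    op = new ExampleOp<xpu, DType>();
  });
  return op;
}
```

In the actual operators, an `InferType` override on the operator property typically propagates the first input's dtype to the remaining inputs and outputs, and a factory like the one above then instantiates the matching template.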
Up for grabs
- crop
- slice_channel
- softmax_activation
- matrix_op
- l2_normalization
- make_loss
- identity_attach_KL_sparse_reg
- broadcast_reduce
- embedding
- smooth_l1_unary (depending on a resolution to dmlc/mshadow#125)
- leaky_relu ([RFC][DTypes] pooling and LeakyReLU #2280)
- regression_output (DType regression #3018)
- lrn
- batch_norm
Done
- roi_pooling (OP ROIPooling CPU fix and DType support #3011)
- deconvolution (Enable DTypes in deconvolution #2322)
- dropout
- pooling
- reshape (DTypes for Concat, UpSampling, Reshape, BlockGrad, SwapAxis and ElementWiseSum #2380)
- swapaxis (DTypes for Concat, UpSampling, Reshape, BlockGrad, SwapAxis and ElementWiseSum #2380)
- elementwise_sum (DTypes for Concat, UpSampling, Reshape, BlockGrad, SwapAxis and ElementWiseSum #2380)
- upsampling (DTypes for Concat, UpSampling, Reshape, BlockGrad, SwapAxis and ElementWiseSum #2380)
- concat (DTypes for Concat, UpSampling, Reshape, BlockGrad, SwapAxis and ElementWiseSum #2380)
- block_grad (DTypes for Concat, UpSampling, Reshape, BlockGrad, SwapAxis and ElementWiseSum #2380)