@@ -82,6 +82,28 @@ The following table lists pre-trained models trained on Kinetics400.
 | i3d_resnet101_v1_kinetics400 [4]_ | ImageNet | 1 | 32 (64/2) | 74.8 | c5721407 | `shell script <https://raw.githubusercontent.com/dmlc/web-data/master/gluoncv/logs/action_recognition/kinetics400/i3d_resnet101_v1_kinetics400.sh>`_ | `log <https://raw.githubusercontent.com/dmlc/web-data/master/gluoncv/logs/action_recognition/kinetics400/i3d_resnet101_v1_kinetics400.log>`_ |
 +---------------------------------------------+------------------+--------------+----------------+-----------+-----------+----------------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------+
 
+Something-Something-V2 Dataset
+------------------------------
+
+The following table lists pre-trained models trained on Something-Something-V2.
+
+.. note::
+
+    Our pre-trained models reproduce results from "Temporal Segment Networks (TSN)" [2]_ and "Inflated 3D Networks (I3D)" [3]_. Please check the reference papers for further information.
+
+
+.. table::
+    :widths: 40 8 8 8 10 8 8 10
+
+    +-------------------------------+------------+----------+-------------+-------+----------+------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
+    | Name                          | Pretrained | Segments | Clip Length | Top-1 | Hashtag  | Train Command                                                                                                                                              | Train Log                                                                                                                                          |
+    +===============================+============+==========+=============+=======+==========+============================================================================================================================================================+====================================================================================================================================================+
+    | resnet50_v1b_sthsthv2 [2]_    | ImageNet   | 8        | 1           | 35.5  | 80ee0c6b | `shell script <https://raw.githubusercontent.com/dmlc/web-data/master/gluoncv/logs/action_recognition/somethingsomethingv2/resnet50_v1b_sthsthv2_tsn.sh>`_ | `log <https://raw.githubusercontent.com/dmlc/web-data/master/gluoncv/logs/action_recognition/somethingsomethingv2/resnet50_v1b_sthsthv2_tsn.log>`_ |
+    +-------------------------------+------------+----------+-------------+-------+----------+------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
+    | i3d_resnet50_v1_sthsthv2 [3]_ | ImageNet   | 1        | 16 (32/2)   | 50.6  | 01961e4c | `shell script <https://raw.githubusercontent.com/dmlc/web-data/master/gluoncv/logs/action_recognition/somethingsomethingv2/i3d_resnet50_v1_sthsthv2.sh>`_  | `log <https://raw.githubusercontent.com/dmlc/web-data/master/gluoncv/logs/action_recognition/somethingsomethingv2/i3d_resnet50_v1_sthsthv2.log>`_  |
+    +-------------------------------+------------+----------+-------------+-------+----------+------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------+
+
+
 .. [1] Limin Wang, Yuanjun Xiong, Zhe Wang and Yu Qiao. \
     "Towards Good Practices for Very Deep Two-Stream ConvNets." \
     arXiv preprint arXiv:1507.02159, 2015.
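The new table's entries can be compared programmatically before picking a checkpoint. A minimal sketch, assuming only the numbers in the table above — the `STHSTHV2_MODELS` dict and the `best_by_top1` helper are illustrative, not part of GluonCV, though the dict keys are the real model-zoo names:

```python
# Entries copied from the Something-Something-V2 table
# (illustrative dict; keys are the actual GluonCV model-zoo names).
STHSTHV2_MODELS = {
    "resnet50_v1b_sthsthv2":    {"segments": 8, "clip_length": 1,  "top1": 35.5},
    "i3d_resnet50_v1_sthsthv2": {"segments": 1, "clip_length": 16, "top1": 50.6},
}

def best_by_top1(models):
    """Return (name, top-1 accuracy) of the most accurate model."""
    name = max(models, key=lambda n: models[n]["top1"])
    return name, models[name]["top1"]

name, top1 = best_by_top1(STHSTHV2_MODELS)
print(name, top1)  # i3d_resnet50_v1_sthsthv2 50.6

# The chosen name would then typically be passed to
# gluoncv.model_zoo.get_model(name, pretrained=True) to fetch the weights
# (not executed here, as it downloads the checkpoint).
```

Note the accuracy/cost trade-off the table implies: the I3D model is far stronger on this dataset (50.6 vs. 35.5 top-1) but processes 16-frame clips rather than single frames.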