README.md — 7 additions & 1 deletion
@@ -40,7 +40,6 @@ The code provided is compatible with [nuScenes](https://www.nuscenes.org/lidar-s
 
 [PV-RCNN finetuned on KITTI](https://github.com/valeoai/SLidR/releases/download/v1.0/pvrcnn_slidr.pt)
 
-
 ## Reproducing the results
 
 ### Pre-computing the superpixels (required)
@@ -131,6 +130,13 @@ SLidR |81.9 |51.6 |68.5 |**
 
 *As reimplemented in [ONCE](https://arxiv.org/abs/2106.11037)
 
+
 ## Visualizations
+
+For visualization, you need a pre-training checkpoint containing both the 2D & 3D models. We provide the raw [SR-UNet & ResNet50 pre-trained on nuScenes](https://github.com/valeoai/SLidR/releases/download/v1.1/minkunet_slidr_1gpu_raw.pt).
+
+The image part of the pre-trained weights is identical, for almost all layers, to that of [MoCov2](https://github.com/facebookresearch/moco) (He et al.).
+
+The [visualization code](utils/visualization.ipynb) allows you to assess the similarities between points and pixels, as shown in the article.
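
The added README text says the visualization checkpoint bundles both the 2D (ResNet50) and 3D (SR-UNet) networks in a single file. As a minimal sketch of how such a combined checkpoint could be pulled apart before loading each network separately — note the top-level key names `model_points` and `model_images` are assumptions for illustration, not verified against the actual release file:

```python
# Hypothetical sketch: split a combined 2D+3D checkpoint into two state dicts.
# In practice the dict would come from torch.load("minkunet_slidr_1gpu_raw.pt",
# map_location="cpu"); the key names "model_points"/"model_images" are assumed.

def split_checkpoint(checkpoint):
    """Return (3D state dict, 2D state dict) from a combined checkpoint."""
    points_sd = checkpoint.get("model_points", {})  # SR-UNet (3D) weights
    images_sd = checkpoint.get("model_images", {})  # ResNet50 (2D) weights
    return points_sd, images_sd

if __name__ == "__main__":
    # Stand-in dict instead of the real .pt file, for demonstration only.
    dummy = {
        "model_points": {"conv1.weight": [0.1]},
        "model_images": {"layer1.0.conv1.weight": [0.2]},
    }
    points, images = split_checkpoint(dummy)
    print(sorted(points), sorted(images))
```

Inspect the real file's top-level keys first (e.g. `print(checkpoint.keys())`) before relying on any particular layout.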