forked from facebookresearch/vissl
Byol(190) #2
Open
pranavsinghps1 wants to merge 23 commits into iseessel:byol from pranavsinghps1:BYOL(190)
Conversation
Summary: Pull Request resolved: facebookresearch#376 Regnet128Gf configuration for 6 additional linear evaluations Reviewed By: prigoyal Differential Revision: D29915382 fbshipit-source-id: 636125438db2ef62ced5daaea94add72ef571fea
…rch#174) Summary: This PR introduces a script to automatically download Kinetics 700 and format it in the `disk_folder` and `disk_filelist` formats. Pull Request resolved: fairinternal/ssl_scaling#174 Reviewed By: prigoyal Differential Revision: D29917908 Pulled By: QuentinDuval fbshipit-source-id: 66d244bfc6ed219ae12ad333705c9687ee08b47a
Summary: Pull Request resolved: facebookresearch#382 The warm-up `dist.all_reduce()` call was happening before setting the CUDA device, which meant all workers were using device 0. This resulted in crashes / hangs as mentioned in https://fb.workplace.com/groups/1309000715937050/permalink/1621428588027593/ Reviewed By: prigoyal Differential Revision: D30005438 fbshipit-source-id: 48087d117262dad9ee3e858f05c0f9c7206496bf
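A minimal sketch of the ordering this fix enforces, assuming a per-process `local_rank`; this is illustrative, not the exact VISSL initialization code:
```python
import torch
import torch.distributed as dist

def init_distributed(local_rank: int):
    # Bind this process to its own GPU *before* any collective call;
    # otherwise every rank allocates its warm-up tensor on device 0.
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    # The warm-up all_reduce now runs on the correct device for each rank.
    warmup = torch.ones(1, device=f"cuda:{local_rank}")
    dist.all_reduce(warmup)
```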
Summary: Pull Request resolved: facebookresearch#383 Reviewed By: QuentinDuval Differential Revision: D30012758 Pulled By: prigoyal fbshipit-source-id: c737dfbb3e7e59fc925d5615efdfdd3e9eef791c
…ures (facebookresearch#175) Summary: Correctly rely on config.MODEL.FEATURE_EVAL_SETTINGS.SHOULD_FLATTEN_FEATS to decide whether or not to flatten the features. In addition: add options at feature-loading time to decide whether to flatten, and add unit tests to ensure the right behaviour. Pull Request resolved: fairinternal/ssl_scaling#175 Reviewed By: iseessel Differential Revision: D30069587 Pulled By: QuentinDuval fbshipit-source-id: 044389c46c5c1e658141c599545dc72c5c50dff2
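A hedged sketch of the kind of conditional flattening this flag controls; the helper name and shapes below are assumptions, not the VISSL implementation:
```python
import torch

def maybe_flatten_feats(feats: torch.Tensor, should_flatten: bool) -> torch.Tensor:
    # When e.g. cfg.MODEL.FEATURE_EVAL_SETTINGS.SHOULD_FLATTEN_FEATS is True,
    # collapse everything after the batch dimension: (N, C, H, W) -> (N, C*H*W).
    if should_flatten:
        return torch.flatten(feats, start_dim=1)
    # Otherwise keep the spatial layout of the extracted features.
    return feats
```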
Summary: Add configurations for Regnet256 on linear evaluation benchmarks Reviewed By: iseessel Differential Revision: D30070086 fbshipit-source-id: fc20ba889443c495b64088bf88b3dfe52e97ed8a
…search#387) Summary: Pull Request resolved: facebookresearch#387 The slices were not created in the right place because of os.path.abspath Reviewed By: iseessel Differential Revision: D30109789 fbshipit-source-id: c332fbf5f5c52241a537bd1188e3268a2f5cb966
Summary: [enhancement] FSDP with activation checkpointing now allows specifying blocks without activations (useful for linear evaluation) and completes incomplete configurations for stage_checkpoints Pull Request resolved: fairinternal/ssl_scaling#176 Reviewed By: prigoyal Differential Revision: D30143386 Pulled By: QuentinDuval fbshipit-source-id: 6fa85059d36d0bfa44ea7c07ac92994985674943
Summary: This fixes the problem that the Barlow Twins model needs to save a function in the checkpoint. Pull Request resolved: facebookresearch#388 Reviewed By: iseessel Differential Revision: D30158877 Pulled By: prigoyal fbshipit-source-id: 537d0686422148447a4a42e14b448eb6e592eec9
Summary: Minor typo appearing on https://vissl.ai/ Pull Request resolved: facebookresearch#389 Reviewed By: iseessel Differential Revision: D30158860 Pulled By: prigoyal fbshipit-source-id: 0effb12f494de3067c49b19b142b30cc7e9312ff
Summary: Pull Request resolved: facebookresearch#380 Various Instance Retrieval improvements:
1. Add support for Manifold.
2. Clean up noisy logs and add helpful logging.
3. Add DEBUG_MODE support for the Revisited Datasets.
4. Add the ability to save results/logs/features.
5. Fix ROI crop bug.
6. Fix typo in benchmark_workflow.py causing benchmarks to fail.
7. Add a bunch of json configs to track and group multiple experiments.
Reviewed By: prigoyal Differential Revision: D29995282 fbshipit-source-id: 2382963f39c6c61aa417b690a39754d4b30b3fe2
…ch#379) Summary: Pull Request resolved: facebookresearch#379
1. Fix the GeM post-processing logic. Before this change, the code assumed that each non-preprocessed feature tensor has the same shape:
```
if cfg.IMG_RETRIEVAL.FEATS_PROCESSING_TYPE == "gem":
    gem_out_fname = f"{out_dir}/{train_dataset_name}_GeM.npy"
    train_features = torch.tensor(np.concatenate(train_features))
```
This is not the case, since ROxford/RParis images do not have a standard size, so the resx layers produce different heights and widths (but the same number of channels). GeM pooling transforms a feature map of any spatial shape into a vector of shape `(num_channels)`. The change performs GeM pooling on each individual image, as opposed to all the images at once. This should be fine because both GeM and L2 normalization are performed per-image.
2. Transform before cropping to the bounding box (as opposed to after cropping). The experiments show that this yields much better results. This is also what the deepcluster implementation uses: https://github.com/facebookresearch/deepcluster/blob/master/eval_retrieval.py#L44
```
Oxford: 61.57 / 41.74 / 14.33 vs. 69.65 / 48.51 / 16.41
Paris: 83.7 / 66.87 / 44.81 vs. 87.9 / 70.57 / 47.39
```
f288434289 f288438150 Reviewed By: prigoyal Differential Revision: D29993204 fbshipit-source-id: 052a77c97a53f9dd6a969d44622cee0b25901498
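A rough sketch of per-image GeM pooling as described above; the function names and the power parameter `p` are illustrative assumptions, not the code from this change:
```python
import torch

def gem_pool_single(feat: torch.Tensor, p: float = 3.0, eps: float = 1e-6) -> torch.Tensor:
    # feat: (C, H, W) for a single image; H and W may differ between images,
    # which is why pooling is applied per image rather than on a stacked batch.
    return feat.clamp(min=eps).pow(p).mean(dim=(1, 2)).pow(1.0 / p)  # -> (C,)

def gem_pool_per_image(features):
    # Each feature map can have a different spatial size, but every pooled
    # vector has shape (C,), so the results stack into a single (N, C) matrix.
    return torch.stack([gem_pool_single(f) for f in features])
```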
Summary: Pull Request resolved: facebookresearch#378 Revisited Oxford and Paris provide bounding boxes for the landmark queries, which they suggest using in the evaluation. Weirdly enough, the bounding boxes actually degrade performance in my experiments. Hence, this adds an option to make the bounding boxes optional. Reviewed By: prigoyal Differential Revision: D29993208 fbshipit-source-id: cd1a00ae19d3faf61b520e00b9d05f28f60207b8
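A small sketch of an optional query-bounding-box crop as described above; the flag and the crop convention are assumptions for illustration only:
```python
from PIL import Image

def load_query_image(path: str, bbox=None, use_bbox: bool = False) -> Image.Image:
    # bbox is (left, upper, right, lower) as provided by ROxford/RParis;
    # the crop is skipped entirely when the optional flag is off.
    img = Image.open(path).convert("RGB")
    if use_bbox and bbox is not None:
        img = img.crop(bbox)
    return img
```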
) Summary: Pull Request resolved: facebookresearch#381
1. Rename SHOULD_TRAIN_PCA_OR_WHITENING to TRAIN_PCA_WHITENING.
2. Make L2 normalization optional.
3. Fix cfg access bugs.
4. Add some more experiments.
Reviewed By: prigoyal Differential Revision: D30002757 fbshipit-source-id: 3ec5be799a1d9bf2fa75c736fce9b2552db7966c
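A minimal sketch of making L2 normalization optional, as in item 2 above; the flag name here is hypothetical rather than the exact config key introduced by this diff:
```python
import torch
import torch.nn.functional as F

def postprocess_features(feats: torch.Tensor, apply_l2_norm: bool) -> torch.Tensor:
    # feats: (N, D) retrieval descriptors; L2-normalize each row only when
    # the (hypothetical) APPLY_L2_NORM-style option is enabled.
    if apply_l2_norm:
        return F.normalize(feats, p=2, dim=1)
    return feats
```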
Wrt 9ff5847
iseessel reviewed Aug 11, 2021
configs/config/benchmark/linear_image_classification/imagenet1k/byol_transfer_in1k_linear.yaml (Outdated)
iseessel reviewed Aug 11, 2021
configs/config/benchmark/linear_image_classification/imagenet1k/byol_transfer_in1k_linear.yaml (Outdated)
iseessel reviewed Aug 11, 2021
@register_loss("byol_loss")
class BYOLLoss(ClassyLoss):
    """
    This is the loss proposed in BYOL
I would write a bit more information about how the loss is created.
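For context on this request, a hedged sketch of the regression objective from the BYOL paper (normalized MSE between the online prediction and the target projection, which equals 2 - 2 * cosine similarity); this reflects the published formulation, not necessarily the exact code in this PR:
```python
import torch
import torch.nn.functional as F

def byol_regression_loss(online_pred: torch.Tensor, target_proj: torch.Tensor) -> torch.Tensor:
    # L2-normalize both vectors, then apply the normalized MSE from the paper:
    # ||q_hat - z_hat||^2 = 2 - 2 * <q_hat, z_hat>.
    q = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj, dim=-1)
    return (2 - 2 * (q * z).sum(dim=-1)).mean()
```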
iseessel reviewed Aug 11, 2021
loss
"""

# Split data
I would write a nice comment explaining what's happening here.
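Again for context, a hedged sketch of what a "split data" step in a two-view loss commonly does; the layout (both augmented views concatenated along the batch dimension) is an assumption about this code, not a confirmed detail:
```python
import torch

def split_views(batch_scores: torch.Tensor):
    # Assume the forward pass concatenated the two augmentations of each image
    # along dim 0, so batch_scores has shape (2 * N, D); torch.chunk recovers
    # the per-view tensors, each of shape (N, D).
    view1, view2 = torch.chunk(batch_scores, 2, dim=0)
    return view1, view2
```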
Still Todo: