rules for pyramid_level_inputs? #2413


Open
innat opened this issue Apr 10, 2024 · 1 comment

@innat
Contributor

innat commented Apr 10, 2024

Current Behavior:

While trying to get P1, P2, and P3 features from EfficientNet, I ran the following and got the output below:

from keras_cv.models import EfficientNetV1B0Backbone

backbone = EfficientNetV1B0Backbone(input_shape=(512, 512, 3))

# Print which layer each pyramid level maps to, along with its output tensor.
for k, v in backbone.pyramid_level_inputs.items():
    print(k, v, backbone.get_layer(name=v).output)

P1 block1a_project_activation <KerasTensor shape=(None, 256, 256, 16) >
P2 block2b_add <KerasTensor shape=(None, 128, 128, 24) >
P3 block3b_add <KerasTensor shape=(None, 64, 64, 40) >
P4 block5c_add <KerasTensor shape=(None, 32, 32, 112) >
P5 top_activation <KerasTensor shape=(None, 16, 16, 1280) >

Expected Behavior:

For P5, I think it should come from block7a_project_activation, before the top activation. That would give (None, 16, 16, 320) instead of (None, 16, 16, 1280), which can be used in an EfficientDet model.

Also, for P3 and P4 I have no strong opinion, but wouldn't it be better to pick the tensors from the project_activation layers as well?
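
To make this concrete, here is a minimal sketch of the kind of override I have in mind, reusing the snippet above. Only the block7a_project_activation name is taken from the output I posted; any P3/P4 overrides would need the exact layer names confirmed via backbone.summary().

import keras
from keras_cv.models import EfficientNetV1B0Backbone

backbone = EfficientNetV1B0Backbone(input_shape=(512, 512, 3))

# Copy the default mapping and point P5 at the projection output instead of
# top_activation. P3/P4 could be switched to their project_activation layers
# the same way once the exact names are confirmed with backbone.summary().
levels = dict(backbone.pyramid_level_inputs)
levels["P5"] = "block7a_project_activation"

# Build a feature extractor that returns one tensor per pyramid level.
feature_extractor = keras.Model(
    inputs=backbone.input,
    outputs={k: backbone.get_layer(name=v).output for k, v in levels.items()},
)

With this override, the P5 output has shape (None, 16, 16, 320), which is what I would want to feed into an EfficientDet-style head.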

@sachinprasadhs
Collaborator

Hi, could you please try this in the Keras Hub package? For all the pyramid-level types of models, we now have a FeaturePyramid task for these backbones.
If you still have any questions, please open a new issue in the keras-hub repo.
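
If it helps, usage is roughly along these lines; the preset string below is a placeholder, and the pyramid_outputs attribute should be checked against the keras-hub documentation for your installed version.

import keras
import keras_hub

# Placeholder preset name; pick an actual EfficientNet preset from the
# keras-hub documentation.
backbone = keras_hub.models.Backbone.from_preset("efficientnet_b0_ra_imagenet")

# Feature-pyramid style backbones expose per-level feature tensors, which can
# be wrapped in a Model and used as a multi-scale feature extractor.
extractor = keras.Model(backbone.inputs, backbone.pyramid_outputs)
for level, tensor in backbone.pyramid_outputs.items():
    print(level, tensor.shape)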
