I ran the implemented code on 5-datasets.
The result was acc@1 75.5033 | forgetting 14.9538,
which is much lower than the result reported in the paper (acc@1 88.08 | forgetting 2.21).
Is there anything I missed?
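For reference, a minimal sketch of the average-forgetting metric being compared here, assuming the standard definition where `acc[i][j]` is the top-1 accuracy on task `j` measured after training on task `i`. This is only an assumption about how the reported number is produced; please verify against the repo's evaluation code.

```python
# Sketch of the commonly used average-forgetting metric (assumed definition,
# not copied from this repository's evaluation code).
def average_forgetting(acc):
    """acc[i][j] = top-1 accuracy on task j after finishing training on task i (0-indexed)."""
    T = len(acc)
    forgetting = []
    for j in range(T - 1):  # the last task has not been forgotten yet
        best_earlier = max(acc[i][j] for i in range(j, T - 1))
        forgetting.append(best_earlier - acc[T - 1][j])
    return sum(forgetting) / len(forgetting)

# Toy example with 3 tasks:
acc = [
    [90.0,  0.0,  0.0],
    [80.0, 85.0,  0.0],
    [75.0, 82.0, 88.0],
]
print(average_forgetting(acc))  # ((90-75) + (85-82)) / 2 = 9.0
```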
I haven't implemented or experimented with this myself, because the official code does not release a DualPrompt config for 5-datasets.
Did you use L2P's 5-datasets config as is?
In my experience, the dataset order (which is fixed in the official code) and use_prompt_mask have a significant impact on performance on 5-datasets.
Please check those settings and leave a comment so we can discuss it together.
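For reference, here is a minimal sketch of the two settings mentioned above. The dataset names (the usual 5-datasets composition: CIFAR-10, MNIST, Fashion-MNIST, SVHN, notMNIST), their order, and the argument names are illustrative assumptions only; take the exact fixed order and flag names from the official code and this repo's config rather than from this sketch.

```python
# Illustrative sketch (not the repository's actual code) of the two settings to verify
# when reproducing the 5-datasets numbers: the task order and use_prompt_mask.
import argparse
import random


def build_5datasets_order(seed=None, shuffle=False):
    """Return the task order for the 5-datasets benchmark.

    The official code reportedly fixes the order, so shuffling it per seed
    (as is common for Split CIFAR-100) can change results noticeably.
    """
    # Example order only -- copy the exact order from the official config.
    order = ["CIFAR10", "MNIST", "FashionMNIST", "SVHN", "NotMNIST"]
    if shuffle:
        random.Random(seed).shuffle(order)  # avoid this when reproducing the paper
    return order


parser = argparse.ArgumentParser()
# Hypothetical flags mirroring the settings discussed above.
parser.add_argument("--use_prompt_mask", action="store_true",
                    help="mask out prompts of unseen tasks during training")
parser.add_argument("--shuffle_task_order", action="store_true",
                    help="shuffle the 5-datasets task order (keep off for reproduction)")
args = parser.parse_args([])  # empty list so the sketch runs without CLI input

print(build_5datasets_order(shuffle=args.shuffle_task_order))
print("use_prompt_mask:", args.use_prompt_mask)
```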