When I train an RL model (policy gradient + reward shaping) on the NELL-995 dataset, I run the following command:

./experiment-rs.sh configs/nell-995-rs.sh --train 0 --test

I then get the following error:

experiments.py: error: argument --distmult_state_dict_path: expected one argument

What is the role of distmult_state_dict_path, and what input does it need?
@swxhha Sorry for the delayed response.

You first need to train a ConvE model. As described in the README note: "To train the RL models using reward shaping, make sure (1) you have pre-trained the embedding-based ConvE model and (2) set the file path pointer conve_state_dict_path to the pre-trained embedding-based model correctly in the configs/<dataset>-rs.sh or configs/<dataset>.sh files."
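The argparse error itself ("expected one argument") typically means the flag is being passed with an empty value, i.e. the corresponding variable in the config script was never filled in. A minimal sketch of what the relevant lines in configs/nell-995-rs.sh might look like, assuming checkpoints saved under a model/ directory; the file names below are illustrative placeholders, not the actual checkpoint names:

```shell
# configs/nell-995-rs.sh (sketch; paths are illustrative assumptions)
# Point these at the checkpoints produced by pre-training the embedding models.
# If a variable is left empty, its flag is passed to experiments.py with no
# value, and argparse fails with "expected one argument".
distmult_state_dict_path="model/<your-pretrained-distmult-checkpoint>.tar"
conve_state_dict_path="model/<your-pretrained-conve-checkpoint>.tar"
```

So distmult_state_dict_path plays the same role for a pre-trained DistMult model that conve_state_dict_path plays for ConvE: it tells the reward-shaping run where to load the embedding-based model's saved state dict from.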