Commit 9c1867d

update total_val_output.json reference
Signed-off-by: Emmanuel Ferdman <[email protected]>
1 parent 4fea6b8 commit 9c1867d

File tree

1 file changed: +3 -3 lines changed

mmmu/README.md

+3-3
@@ -23,7 +23,7 @@ Then run eval_only with:
 python main_eval_only.py --output_path ./example_outputs/llava1.5_13b/total_val_output.json
 ```
 
-Please refer to [example output](https://github.com/MMMU-Benchmark/MMMU/blob/main/eval/example_outputs/llava1.5_13b/total_val_output.json) for a detailed prediction file form.
+Please refer to [example output](https://github.com/MMMU-Benchmark/MMMU/blob/main/mmmu/example_outputs/llava1.5_13b/total_val_output.json) for a detailed prediction file form.
 
 
 ## Parse and Evaluation
@@ -76,7 +76,7 @@ Each `output.json`` has a list of dict containing instances for evaluation ().
 ```
 python main_parse_and_eval.py --path ./example_outputs/llava1.5_13b --subject ALL # all subject
 
-# OR you can sepecify one subject for the evaluation
+# OR you can specify one subject for the evaluation
 
 python main_parse_and_eval.py --path ./example_outputs/llava1.5_13b --subject elec # short name for Electronics. use --help for all short names
@@ -108,7 +108,7 @@ python print_results.py --path ./example_outputs/llava1.5_13b
 ##### Run Llava
 In case if you want to reproduce the results of some models, please go check `run_llava.py` as an example.
 
-By seeting up the env for llava via following steps:
+By setting up the env for llava via following steps:
 
 Step 1:
 ```
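The second hunk's context describes each `output.json` as a list of dicts of evaluation instances. A minimal sanity check for such a prediction file could look like the sketch below; note that the field names `id` and `response` are illustrative assumptions, not taken from this commit.

```python
import io
import json

def load_predictions(fp):
    """Parse a prediction file and confirm it is a JSON list of dicts,
    the shape the README describes for each output.json."""
    data = json.load(fp)
    if not isinstance(data, list) or not all(isinstance(d, dict) for d in data):
        raise ValueError("prediction file must be a JSON list of dicts")
    return data

# Synthetic stand-in for total_val_output.json; field names are assumptions.
sample = io.StringIO('[{"id": "validation_0", "response": "A"}]')
preds = load_predictions(sample)
print(len(preds))  # 1
```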
