fix deadlock when dump samples with filter #2052


Merged: 1 commit merged into OpenNMT:master on Apr 29, 2021

Conversation

Zenglinxiao
Contributor

Currently, when dumping samples after applying transforms in onmt_build_vocab, we may hit a deadlock if the filtertoolong transform is in the transform pipe.
The multiprocessing dump uses num_threads queues, each corresponding to one shard of the original corpus, and the shards are expected to be of equal size. The filtertoolong transform breaks this equal-size assumption, since each shard may have a different number of examples removed, which also breaks the ordering when writing.
To fix it, we pass a placeholder even when an example has been filtered, which preserves the equal-size assumption.
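For illustration, here is a minimal sketch of the deadlock and the placeholder fix. The names (worker, writer, FILTERED) are hypothetical and this is not the actual OpenNMT-py code: each worker fills one queue per corpus shard, and the writer pops one item from every queue in lockstep to preserve ordering.

```python
from multiprocessing import Process, Queue

FILTERED = None  # placeholder kept in the queue for filtered examples

def worker(shard, queue, max_len=50):
    """Apply a filtertoolong-style transform to one shard."""
    for example in shard:
        if len(example.split()) > max_len:
            # Without this put(), this queue falls behind the others and the
            # writer blocks forever waiting for the missing item: deadlock.
            queue.put(FILTERED)
        else:
            queue.put(example)

def writer(queues, n_examples, out_path):
    """Pop one item from each queue in lockstep; skip placeholders."""
    with open(out_path, "w") as out:
        for _ in range(n_examples):
            for q in queues:
                item = q.get()  # blocks if a queue is short an item
                if item is not FILTERED:
                    out.write(item + "\n")

if __name__ == "__main__":
    shards = [
        ["short sentence", "another short one"],
        ["short again", " ".join(["tok"] * 100)],  # this one gets filtered
    ]
    queues = [Queue() for _ in shards]
    procs = [Process(target=worker, args=(s, q)) for s, q in zip(shards, queues)]
    for p in procs:
        p.start()
    writer(queues, n_examples=2, out_path="samples.txt")
    for p in procs:
        p.join()
```

With the placeholder, every queue receives exactly one item per source example, so the writer's lockstep reads never block, and filtered examples are simply skipped when writing.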

@francoishernandez francoishernandez merged commit b1a4615 into OpenNMT:master Apr 29, 2021
@Zenglinxiao Zenglinxiao deleted the fix_dump_lock branch April 29, 2021 08:13