
fix(dot/peerset): fix sending on closed channel race condition when dropping peer #2573

Merged · 5 commits · Jun 10, 2022
11 changes: 7 additions & 4 deletions dot/peerset/peerset.go
@@ -690,10 +690,12 @@ func (ps *PeerSet) disconnect(setIdx int, reason DropReason, peers ...peer.ID) e
 		return fmt.Errorf("cannot disconnect: %w", err)
 	}

-	ps.resultMsgCh <- Message{
-		Status: Drop,
-		setID:  uint64(setIdx),
-		PeerID: pid,
+	if ps.resultMsgCh != nil {
+		ps.resultMsgCh <- Message{
+			Status: Drop,
+			setID:  uint64(setIdx),
+			PeerID: pid,
+		}
Contributor:
Shouldn't the producer close the channel? So somewhere here 🤔

Contributor Author:

I think there are problems with the architecture of peerset.Handler that we should address at some point.

  • Right now this channel is returned by Handler.Messages(), but it's not a true pipeline: if Handler.Messages() were called again and consumed from a second goroutine, the messages would be split among all readers, with each message delivered to only one of them.
  • IMO the producer is peerset.PeerSet, so the producer is actually the one closing the channel, just on another goroutine.

 	}

 	// TODO: figure out the condition of connection refuse.
@@ -765,6 +767,7 @@ func (ps *PeerSet) periodicallyAllocateSlots(ctx context.Context) {
 	defer func() {
 		ticker.Stop()
 		close(ps.resultMsgCh)
+		ps.resultMsgCh = nil
 	}()

 	for {