don't cache more than 1024 entries, to avoid DoS attacks #94
Conversation
@grandcat Friendly ping. Could I get a review of this PR?
```go
// DoS protection: don't cache more than maxSentEntries entries
if len(sentEntries) >= maxSentEntries {
	for key := range sentEntries {
		delete(sentEntries, key)
	}
}
```
I have read through this PR multiple times, but I'm still not sure this simple protection is sufficient.
Granted, it will prevent memory from filling up and (potentially) crashing the host.
On the other hand, it deletes the full cache, discarding the valid entries along with the fake ones injected by the attacker.
Given that the attacker, Eve, knows this mechanism, she could simply abuse it and fill the cache with fake entries only, all pointing to her machine.
The scenario then shifts from a DoS attack to something like a Sybil attack, i.e. the attacked host's view of the network is fully controlled by the attacker.
Some thoughts on improving this, with the downside of increased complexity:
- Track the source IPvX address and limit the number of entries per source.
- Some kind of rate limiting.
- Involving the TTL values.
What do you think about that?
I doubt there's much one can do to prevent these DoS attacks against an Eve who knows the defenses. Faking a source IP address is trivial, even more so on a local network, where routers never get the chance to run plausibility checks on ingress and egress packets. Eve can also craft TTL values to exploit whatever TTL-based cleanup logic we come up with, so that wouldn't work either.
All we're defending against here is an overflow of the map we use for deduplication. If I understand correctly, the worst thing Eve can achieve (compared to what is possible without this PR) is flooding the map and causing us to throw it away. We'd then hand duplicate entries to the application. How an application defends against bogus (and duplicate) mDNS announcements will very much depend on the application, but I'd assume that every application needs some logic for that anyway.
That's probably a valid point as well.
Given that this mostly moves responsibility to the client, the motivation for this mitigation is okay.
Force-pushed from 76c62ce to 39d1d5d (Compare)
The size of the `sentEntries` map needs to be limited, otherwise an attacker can consume unbounded state. The solution implemented here is very simple: the cache is deleted when it overflows. Given Go's lack of standard-library data structures, any more sophisticated solution would require keeping track of a lot more state.