ipfs stats bw accuracy or purpose #1040
There's significant TCP + IP framing, though 1MB may be too much... We unfortunately don't control that, so our current accounting doesn't cover it. We could try to estimate it, but it wouldn't be right... maybe we should just be polling the way some of these other tools do it, so we get full stats from the kernel.
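For illustration, here is a minimal sketch of what kernel-side polling could look like on Linux: it reads the per-interface byte counters from `/proc/net/dev`, the same counters tools like bwm-ng report. The interface name and poll interval are assumptions, not anything go-ipfs does today.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

// readCounters returns the cumulative rx/tx byte counts for one interface,
// as reported by the kernel in /proc/net/dev (rx bytes is the first numeric
// column after the interface name, tx bytes is the ninth).
func readCounters(iface string) (rx, tx uint64, err error) {
	f, err := os.Open("/proc/net/dev")
	if err != nil {
		return 0, 0, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, iface+":") {
			continue
		}
		fields := strings.Fields(strings.TrimPrefix(line, iface+":"))
		if len(fields) < 9 {
			return 0, 0, fmt.Errorf("unexpected /proc/net/dev format")
		}
		rx, _ = strconv.ParseUint(fields[0], 10, 64)
		tx, _ = strconv.ParseUint(fields[8], 10, 64)
		return rx, tx, nil
	}
	return 0, 0, fmt.Errorf("interface %s not found", iface)
}

func main() {
	const iface = "eth0" // assumption: the interface carrying ipfs traffic
	prevRx, prevTx, _ := readCounters(iface)
	for range time.Tick(time.Second) {
		rx, tx, err := readCounters(iface)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		// Delta between polls gives bytes per second, framing included.
		fmt.Printf("rx %d B/s, tx %d B/s\n", rx-prevRx, tx-prevTx)
		prevRx, prevTx = rx, tx
	}
}
```

The obvious caveat is that this counts everything on the interface, not just ipfs traffic, so it only gives an upper bound on what the daemon is actually using.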
@rubiojr Yeah, I couldn't figure out what was causing the discrepancy. We are counting all the bytes going over our peer connections.
@whyrusleeping @jbenet What I was trying to say is that the person downloading from me was getting ~450 kilobytes/s (3.6 megabits/s) while the interface reported 5.3 Mb/s. The peer was downloading via wget from an external ipfs gateway while I was streaming the file from my laptop in real time, so I guess the overhead seen on the interface could be explained by that (I don't know enough about ipfs to say, unfortunately). Maybe I was measuring it wrong, but it should be easy to double-check.
I think our bw reporting is definitely off, given IP + TCP framing (I believe we do catch the streaming + encryption (HMAC) overheads, but still). What you suggest, though, implies to me there's more missing. It would be nice to account precisely for how much on some sample test runs (cat a file of a given size between only two nodes, disconnected from the rest of the network). I think we should explore the kernel-based monitoring I mentioned above for totals. We could rename what we currently call "total" into "protocol total", or something.
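As a rough sanity check on how much IP + TCP framing alone could account for, assuming IPv4 and TCP without options on a full 1500-byte Ethernet MTU and ignoring ACKs and retransmits:

```go
package main

import "fmt"

func main() {
	const (
		mtu       = 1500.0 // bytes per IP packet on a typical Ethernet link
		ipHeader  = 20.0   // minimal IPv4 header
		tcpHeader = 20.0   // minimal TCP header
	)
	payload := mtu - ipHeader - tcpHeader
	// Extra wire bytes per byte of payload actually delivered.
	overhead := (mtu - payload) / payload
	fmt.Printf("framing overhead ≈ %.1f%%\n", overhead*100) // ≈ 2.7%
}
```

Roughly 2.7% of framing overhead is nowhere near the gap reported above, which supports the idea that something else is missing from the accounting.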
Do OS/kernel APIs allow for filtering by process?
Perhaps using pcap, like nethogs? That will require root though (or maybe CAP_NET_RAW + CAP_NET_ADMIN). Or just use something like…
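A rough sketch of the pcap approach, assuming gopacket is an acceptable dependency and that the daemon's traffic can be isolated by port (4001 here is an assumption) rather than strictly by process; nethogs-style per-process attribution would additionally require mapping connections to PIDs via /proc:

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"

	"github.com/google/gopacket"
	"github.com/google/gopacket/pcap"
)

func main() {
	// Needs root or CAP_NET_RAW. Interface name is an assumption.
	handle, err := pcap.OpenLive("eth0", 1600, true, pcap.BlockForever)
	if err != nil {
		panic(err)
	}
	defer handle.Close()

	// Only count traffic on the daemon's swarm port (4001 is an assumption).
	if err := handle.SetBPFFilter("tcp port 4001"); err != nil {
		panic(err)
	}

	var total uint64
	go func() {
		for range time.Tick(time.Second) {
			fmt.Printf("captured %d bytes so far\n", atomic.LoadUint64(&total))
		}
	}()

	src := gopacket.NewPacketSource(handle, handle.LinkType())
	for packet := range src.Packets() {
		// Metadata().Length is the on-the-wire packet size, so Ethernet,
		// IP and TCP framing are all included, unlike ipfs's own counters.
		atomic.AddUint64(&total, uint64(packet.Metadata().Length))
	}
}
```

This measures everything the kernel actually puts on the wire for those connections, at the cost of elevated privileges and a capture dependency.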
At the time this issue was filed, the bandwidth stats collection system miscalculated bandwidth by a factor of 5(?). We still have overhead, but that issue has since been fixed.
I was streaming a large file and I realized that `ipfs stats bw -poll` output doesn't match the raw interface stats (via `bwm-ng`) or what the other peer is getting.

Here's what `bwm-ng` is reporting (eth0 is the interesting interface):

[bwm-ng output]

Here's what `ipfs stats bw` is reporting:

[ipfs stats bw output]

The person I'm streaming the file to is reporting ~450 KiB/s too.

Should `ipfs stats bw` account for that?