
ipfs stats bw accuracy or purpose #1040


Closed
rubiojr opened this issue Apr 8, 2015 · 8 comments
Labels
kind/bug A bug in existing code (including security flaws)

Comments

@rubiojr

rubiojr commented Apr 8, 2015

I was streaming a large file and I realized that ipfs stats bw -poll output doesn't match what raw iface stats (via bwm-ng) and the other peer is getting.

Here's what bwm-ng is reporting (eth0 is the interesting interface):

[screenshot: bwm-ng output, 2015-04-08 2:12 pm]

Here's what ipfs stats bw is reporting:

[screenshot: ipfs stats bw output, 2015-04-08 2:12 pm]

The person I'm streaming the file to is reporting ~450 KiB/s too.

Should ipfs stats bw account for that?

@jbenet
Member

jbenet commented Apr 8, 2015

There's significant TCP + IP framing overhead, though 1 MB may be too much... We unfortunately don't control that layer, so our current accounting doesn't cover it. We could try to estimate it, but it wouldn't be exact... maybe we should just poll the way some of these other tools do, so we get full stats from the kernel.

cc @whyrusleeping

@whyrusleeping
Member

@rubiojr Yeah, I couldn't figure out what was causing the discrepancy. We are counting all bytes going over our peers' net.Conns... But one thing to note is that ipfs stats bw outputs in bytes per second, whereas most network tools output in bits per second. 56 KB/s * 8 = 448, which looks about right for what you're reporting.
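
For illustration, here is a minimal sketch (hypothetical names, not the actual go-ipfs code) of the kind of accounting described above: wrap each peer's net.Conn and tally whatever passes through Read/Write. Anything the kernel adds below this layer (TCP/IP headers, retransmits) never reaches these counters, which is why framing overhead is invisible to ipfs stats bw:

```go
// Hypothetical sketch, not the real go-ipfs bandwidth counter.
package bwmeter

import (
	"net"
	"sync/atomic"
)

// meteredConn counts application-level bytes crossing a net.Conn.
type meteredConn struct {
	net.Conn
	bytesIn  *int64 // total bytes read
	bytesOut *int64 // total bytes written
}

func (c *meteredConn) Read(p []byte) (int, error) {
	n, err := c.Conn.Read(p)
	atomic.AddInt64(c.bytesIn, int64(n))
	return n, err
}

func (c *meteredConn) Write(p []byte) (int, error) {
	n, err := c.Conn.Write(p)
	atomic.AddInt64(c.bytesOut, int64(n))
	return n, err
}
```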

@rubiojr
Author

rubiojr commented Apr 8, 2015

@whyrusleeping @jbenet What I was trying to say is that the person downloading from me was getting ~450 kilobytes/s (3.6 megabits/s) while ipfs stats bw was reporting 56 KB/s (448 Kbit/s, as you said). Right? It seems to me that the numbers don't add up (or it's just measuring protocol data transfer and not the bulk file transfer).

The interface reports 5.3 Mbit/s, but the peer was downloading via wget from an external ipfs gateway while I was streaming the file from my laptop in real time, so I guess the extra traffic seen on the interface could be explained that way (I don't know enough about ipfs internals to say, unfortunately).

Maybe I was measuring it wrong, but it should be easy to double-check.

@jbenet
Member

jbenet commented Apr 9, 2015

i think our bw reporting is definitely off -- given IP + TCP framing. (i believe we do catch the stream-muxing + encryption (HMAC) overheads, but still.) What you suggest, though, implies to me there's more missing. Would be nice to account for precisely how much on some sample test runs (cat a file of a given size between only two nodes, disconnected from the rest of the network).

I think we should explore the kernel-based monitoring I mentioned above for totals. We could rename what we currently call total to protocol total, or something.
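
For reference, a rough sketch of that kernel-side polling idea, assuming Linux: read the per-interface byte counters the kernel exposes in /proc/net/dev (the same source tools like bwm-ng draw from). Note this gives whole-interface totals, not per-process ones, which is the limitation raised in the next comment. All names below are hypothetical, not go-ipfs APIs:

```go
// Hypothetical sketch: poll interface byte counters from /proc/net/dev (Linux).
package ifstats

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// InterfaceBytes returns total rx/tx bytes for one interface, e.g. "eth0".
// In /proc/net/dev, after "iface:", field 0 is rx bytes and field 8 is tx bytes.
func InterfaceBytes(iface string) (rx, tx uint64, err error) {
	f, err := os.Open("/proc/net/dev")
	if err != nil {
		return 0, 0, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, iface+":") {
			continue
		}
		fields := strings.Fields(strings.TrimPrefix(line, iface+":"))
		if len(fields) < 9 {
			return 0, 0, fmt.Errorf("unexpected /proc/net/dev format")
		}
		rx, _ = strconv.ParseUint(fields[0], 10, 64)
		tx, _ = strconv.ParseUint(fields[8], 10, 64)
		return rx, tx, nil
	}
	return 0, 0, fmt.Errorf("interface %q not found", iface)
}
```

Polling this at a fixed interval and differencing successive readings would give a wire-level rate to compare against the in-process "protocol total".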

@cryptix
Contributor

cryptix commented Apr 9, 2015

I think we should explore the kernel based monitoring i mentioned above for totals. We could rename what we currently call total into protocol total, or something.

Do OS/kernel APIs allow filtering by process?
Otherwise you'd just be bundling an iftop-like tool without much gain.

@rubiojr
Author

rubiojr commented Apr 9, 2015

Perhaps use pcap, like nethogs does? That would require root though (or maybe CAP_NET_RAW + CAP_NET_ADMIN). Or just use something like nethogs :)
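
A hedged sketch of what the pcap route could look like, using the third-party github.com/google/gopacket library (my assumption, not something go-ipfs ships): capture packets on the daemon's swarm port and count their on-the-wire length, headers included. As noted, this needs root or CAP_NET_RAW, and the interface and port below (eth0, 4001) are assumptions to adjust:

```go
// Hypothetical sketch: wire-level byte rate for traffic on the ipfs swarm port.
package main

import (
	"fmt"
	"sync/atomic"
	"time"

	"github.com/google/gopacket"
	"github.com/google/gopacket/pcap"
)

func main() {
	handle, err := pcap.OpenLive("eth0", 65535, false, pcap.BlockForever)
	if err != nil {
		panic(err)
	}
	defer handle.Close()

	// 4001 is the default ipfs swarm port; adjust as needed.
	if err := handle.SetBPFFilter("tcp port 4001"); err != nil {
		panic(err)
	}

	var total int64
	go func() {
		for range time.Tick(time.Second) {
			fmt.Printf("%d bytes/s on the wire\n", atomic.SwapInt64(&total, 0))
		}
	}()

	src := gopacket.NewPacketSource(handle, handle.LinkType())
	for pkt := range src.Packets() {
		// Length is the original packet length, including link/IP/TCP headers.
		atomic.AddInt64(&total, int64(pkt.Metadata().Length))
	}
}
```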

@Kubuxu
Member

Kubuxu commented Feb 4, 2016

I used nethogs and it is off by a factor of 8 or more.
It isn't just one measurement; I tried many.

I am sure that nethogs shows kilobytes per second.

(Also, 10/10 KB/s is quite a lot of bandwidth for a DHT janitor.)

@em-ly added the kind/bug label (A bug in existing code (including security flaws)) on Aug 25, 2016
@Stebalien
Member

At the time this issue was filed, the bandwidth stats collection system miscalculated bandwidth by a factor of 5(?). We still have uncounted overhead, but that miscalculation has since been fixed.
