Programmatically determining maximum transfer rate

I have a problem that requires me to calculate the maximum upload and download bandwidth available, then limit my program's usage to a percentage of it. However, I can't think of a good way to find those maximums.

At the moment, the only solution I can come up with is transferring a few megabytes between the client and server and measuring how long the transfer takes. This solution is very undesirable, however, because with 100,000 clients it could add far too much to our server's bandwidth usage (which is already too high).

Does anyone have any solutions to this problem?

Note that I'm mostly interested in limiting the data rate up to the point where traffic leaves the ISP's network; I think that's where a bottleneck that would degrade other programs' communication is most likely to occur. Correct me if I'm wrong, though.

EDIT: After further investigation, I don't think this is possible; there are too many variables involved to accurately measure the maximum transfer rate when leaving the ISP's network. Leaving the question open, in case someone comes up with an accurate solution, though.


If you can restrict the code to Windows Vista or newer (not likely, but who knows?) you can use SetPerTcpConnectionEStats and GetPerTcpConnectionEStats along with TCP_ESTATS_BANDWIDTH_RW_v0 to have Windows estimate the bandwidth for a connection, and later retrieve that estimate. Then, based on that estimate, you can throttle the bandwidth you use.
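
For what it's worth, here's a rough C++ outline of how those two calls fit together, assuming an already-connected IPv4 SOCKET. The helper names (RowFromSocket, EnableBandwidthEstimation, QueryBandwidthEstimate) are mine rather than part of the API, enabling collection typically requires administrator privileges, and I haven't tested this beyond the documented signatures:

    // Rough outline (untested): enable Windows' built-in bandwidth estimation on an
    // existing, connected IPv4 TCP socket, then read the estimate back later.
    // Requires Vista or newer; link with iphlpapi.lib and ws2_32.lib.
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <iphlpapi.h>
    #include <tcpestats.h>
    #pragma comment(lib, "iphlpapi.lib")
    #pragma comment(lib, "ws2_32.lib")

    // Build the MIB_TCPROW that identifies this connection. getsockname/getpeername
    // already return addresses and ports in network byte order, which is what the
    // MIB row expects.
    static bool RowFromSocket(SOCKET s, MIB_TCPROW* row)
    {
        sockaddr_in local = {}, remote = {};
        int len = sizeof(local);
        if (getsockname(s, (sockaddr*)&local, &len) != 0) return false;
        len = sizeof(remote);
        if (getpeername(s, (sockaddr*)&remote, &len) != 0) return false;

        row->dwState      = MIB_TCP_STATE_ESTAB;
        row->dwLocalAddr  = local.sin_addr.s_addr;
        row->dwLocalPort  = local.sin_port;
        row->dwRemoteAddr = remote.sin_addr.s_addr;
        row->dwRemotePort = remote.sin_port;
        return true;
    }

    // Ask Windows to start estimating bandwidth for the connection.
    // Enabling collection typically requires administrator privileges.
    static bool EnableBandwidthEstimation(MIB_TCPROW* row)
    {
        TCP_ESTATS_BANDWIDTH_RW_v0 rw = {};
        rw.EnableCollectionOutbound = TcpBoolOptEnabled;
        rw.EnableCollectionInbound  = TcpBoolOptEnabled;
        return SetPerTcpConnectionEStats(row, TcpConnectionEstatsBandwidth,
                                         (PUCHAR)&rw, 0, sizeof(rw), 0) == NO_ERROR;
    }

    // After some traffic has flowed, read the current estimates (bits per second).
    static bool QueryBandwidthEstimate(MIB_TCPROW* row, ULONG64* outBps, ULONG64* inBps)
    {
        TCP_ESTATS_BANDWIDTH_ROD_v0 rod = {};
        if (GetPerTcpConnectionEStats(row, TcpConnectionEstatsBandwidth,
                                      NULL, 0, 0,   // read/write block not needed here
                                      NULL, 0, 0,   // no static (ROS) block for this type
                                      (PUCHAR)&rod, 0, sizeof(rod)) != NO_ERROR)
            return false;
        *outBps = rod.OutboundBandwidth;
        *inBps  = rod.InboundBandwidth;
        return true;
    }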

So what would happen is that you'd start running the application about as you do now, collect statistics for a while, then impose throttling based on what you measure during that initial time period.

This has the advantage that it avoids sending extra data only to collect bandwidth information -- it simply collects statistics on the data you're sending anyway. It has the disadvantage (which I suspect is nearly unavoidable) that it still uses something approaching full bandwidth until you get an estimate of the bandwidth that's available (and, as mentioned above, this was added in Windows Vista, so it's not even close to universally available yet).


If you have Windows devices on both ends of the connections, you could use the Background Intelligent Transfer Service (BITS) to move the information and cop out of the entire bandwidth question. The (nearly) always installed component is described at http://msdn.microsoft.com/en-us/library/aa362708(VS.85).aspx.
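
For reference, a bare-bones sketch of queuing a transfer through the BITS COM API; the job name, URL, and local path are placeholders, and error handling is minimal:

    // Bare-bones sketch: queue a download through BITS and let the service worry
    // about being friendly to the link. Job name, URL, and local path are placeholders.
    #include <windows.h>
    #include <bits.h>
    #pragma comment(lib, "ole32.lib")

    int main()
    {
        if (FAILED(CoInitializeEx(NULL, COINIT_APARTMENTTHREADED))) return 1;

        IBackgroundCopyManager* mgr = NULL;
        if (FAILED(CoCreateInstance(__uuidof(BackgroundCopyManager), NULL,
                                    CLSCTX_LOCAL_SERVER,
                                    __uuidof(IBackgroundCopyManager), (void**)&mgr)))
            return 1;

        GUID jobId;
        IBackgroundCopyJob* job = NULL;
        if (SUCCEEDED(mgr->CreateJob(L"ExampleTransfer", BG_JOB_TYPE_DOWNLOAD,
                                     &jobId, &job)))
        {
            // BITS pulls the file in the background and backs off when the link is busy.
            job->AddFile(L"http://server.example.com/update.bin", L"C:\\Temp\\update.bin");
            job->Resume();          // the job keeps running even if this process exits
            job->Release();
        }

        mgr->Release();
        CoUninitialize();
        return 0;
    }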

You don't say whether bandwidth friendliness is a hard requirement or just a cost issue, so this may not be appropriate.


The only answers I see are:

  1. Use a small sample to time the transfer rate.
  2. Time the actual data in chunks (say 1k) and report the average.

Some of the issues complicating the matter:

  • The processor bandwidth of the sending machine (i.e. other tasks running).
  • Traffic density on the network.
  • Tasks running on the client machine.
  • Architecture of all machines.

Since the client may be running other tasks, and the host (sending machine) will be running different tasks, the transfer rate will vary.

I vote for sending a chunk of data, timing it, then sending another and timing it, as sketched below. Accumulate these durations and average over the number of chunks. This gives a dynamic measurement, which will be more accurate than any precalculated figure.
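
A minimal sketch of that approach over a Winsock TCP socket; the function name and the 1 KB chunk size are just illustrative. Note that timing send() really measures how quickly the local stack accepts the data, so for a truer end-to-end rate you'd measure on the receiving side or time application-level acknowledgements.

    // Sketch of the chunk-timing idea over a Winsock TCP socket: send the real
    // payload in ~1 KB pieces, time each send, and keep a running average.
    #include <winsock2.h>
    #include <chrono>
    #include <cstddef>
    #pragma comment(lib, "ws2_32.lib")

    // Returns the average bytes/second observed while sending `len` bytes.
    // Note: this times how quickly the local stack accepts the data; for a truer
    // end-to-end rate, measure on the receiving side or time application-level acks.
    double SendAndMeasure(SOCKET s, const char* data, size_t len, size_t chunk = 1024)
    {
        using clock = std::chrono::steady_clock;
        double totalSeconds = 0.0;
        size_t totalSent = 0;

        while (totalSent < len) {
            size_t want = (len - totalSent < chunk) ? (len - totalSent) : chunk;

            clock::time_point start = clock::now();
            int sent = send(s, data + totalSent, (int)want, 0);
            if (sent <= 0) break;   // error or connection closed
            totalSeconds += std::chrono::duration<double>(clock::now() - start).count();

            totalSent += (size_t)sent;
        }
        return totalSeconds > 0.0 ? totalSent / totalSeconds : 0.0;
    }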


If the problem is raw bandwidth, then a feedback mechanism could work here. When the session starts, the server tells the client at which rate it will send data. The client monitors the rate at which it actually receives data. If the received rate is less than the advertised rate (you could use a threshold here, e.g. 90% or less), the client notifies the server to throttle down the data rate, and the process starts again. This serves as a basic QoS mechanism.
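
A rough client-side sketch of that feedback loop, assuming the server has advertised its send rate in bytes per second. The one-second window, the 90% threshold, and the "SLOWDOWN" message are all placeholders for whatever protocol you define; a real design would probably carry the notification on a separate control channel rather than the data socket.

    // Client-side sketch of the feedback loop. Window, threshold, and message
    // format are placeholders for your own protocol.
    #include <winsock2.h>
    #include <chrono>
    #pragma comment(lib, "ws2_32.lib")

    void ReceiveWithFeedback(SOCKET s, double advertisedBytesPerSec)
    {
        using clock = std::chrono::steady_clock;
        char buf[4096];
        size_t received = 0;
        clock::time_point windowStart = clock::now();

        for (;;) {
            int n = recv(s, buf, sizeof(buf), 0);
            if (n <= 0) break;                      // error or end of stream
            received += (size_t)n;

            double elapsed = std::chrono::duration<double>(clock::now() - windowStart).count();
            if (elapsed >= 1.0) {                   // evaluate once per second
                double rate = received / elapsed;
                if (rate < 0.9 * advertisedBytesPerSec) {
                    // We are not keeping up with the advertised rate: tell the server.
                    const char msg[] = "SLOWDOWN\n";
                    send(s, msg, sizeof(msg) - 1, 0);
                }
                received = 0;
                windowStart = clock::now();
            }
        }
    }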

If the problem is that the connection has high latency and/or jitter, try sending the information in smaller packets (actual IP/TCP packets). Normally the stack will try to use the maximum packet size, but packet fragmentation on the Internet can and will delay the traffic. If this still does not improve the latency, you could fall back to UDP instead of TCP, though that will not guarantee data delivery.
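
Two small, purely illustrative knobs along those lines, assuming Winsock: TCP_NODELAY keeps small application writes from being coalesced into larger segments, and a plain UDP socket is the no-guarantees fallback mentioned above.

    // Illustrative only: small TCP writes with Nagle disabled, plus a UDP fallback.
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #pragma comment(lib, "ws2_32.lib")

    void SendSmallTcpWrites(SOCKET tcp, const char* data, int len)
    {
        BOOL noDelay = TRUE;                        // disable Nagle coalescing
        setsockopt(tcp, IPPROTO_TCP, TCP_NODELAY, (const char*)&noDelay, sizeof(noDelay));

        const int kPiece = 512;                     // keep each write well under one MTU
        for (int off = 0; off < len; ) {
            int want = (len - off < kPiece) ? (len - off) : kPiece;
            int sent = send(tcp, data + off, want, 0);
            if (sent <= 0) break;
            off += sent;
        }
    }

    // UDP fallback: datagrams avoid TCP's retransmission delays, but delivery,
    // ordering, and duplication become the application's problem.
    SOCKET MakeUdpSocket()
    {
        return socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
    }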


One option would be to implement something like uTorrent's UDP transport protocol (uTP) between the client and server to keep latency down. Just measuring raw throughput won't help when some other process starts using bandwidth as well, cutting into the bandwidth you have free.
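
Not uTP itself, but the delay-based idea behind that kind of transport (LEDBAT-style) can be sketched as: track the lowest delay you've seen, and nudge your send rate up or down as the measured delay moves around a small target. The constants below are purely illustrative, not taken from any spec:

    // Very rough sketch of delay-based rate adjustment (LEDBAT-style).
    #include <algorithm>

    double AdjustRate(double currentRateBps,
                      double measuredDelayMs,   // current delay estimate for this round
                      double baseDelayMs,       // smallest delay seen recently ("empty queue")
                      double targetDelayMs = 100.0,
                      double gain = 0.1,
                      double minRateBps = 8000.0)
    {
        double queuingDelay = measuredDelayMs - baseDelayMs;
        // Positive when under the target (speed up), negative when over it (back off).
        double offTarget = (targetDelayMs - queuingDelay) / targetDelayMs;
        double newRate = currentRateBps * (1.0 + gain * offTarget);
        return std::max(newRate, minRateBps);
    }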
