I'm going to implement a (simple) downloader application in Java as a personal exercise. It is going to run several jobs in different threads, so that a few files are downloading simultaneously at all times during execution.
I want to be able to define a download rate limit that is shared between all the download jobs, but I don't know how to do it even for a single download task. How should I go about doing this? What are the solutions I should try implementing?
Thanks.
- Decide how much bandwidth you want to use, in bytes/second.
- Establish the round-trip delay (RTT) of the network path to the target, in seconds.
- Multiply to get an answer in bytes (bytes/second * seconds = bytes).
- Divide by the number of concurrent connections.
- Set the socket receive buffer of each connection to this number (see the sketch after this list).
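A minimal sketch of that sizing calculation, assuming an illustrative target rate, RTT, connection count, and host (all the numbers below are made up):

import java.net.InetSocketAddress;
import java.net.Socket;

public class BufferSizing
{
    public static void main(String[] args) throws Exception
    {
        int totalBytesPerSecond = 512 * 1024; // target: 512 KiB/s across all downloads
        double rttSeconds = 0.1;              // measured round-trip delay: 100 ms
        int connections = 4;                  // concurrent downloads

        // bytes/second * seconds = bytes, divided across the connections
        int bufferSize = (int) (totalBytesPerSecond * rttSeconds) / connections;

        Socket socket = new Socket();
        // Set the receive buffer before connecting so TCP can take it
        // into account during window negotiation.
        socket.setReceiveBufferSize(bufferSize);
        socket.connect(new InetSocketAddress("example.com", 80));
    }
}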
I'd start with a DownloadManager that manages all downloads.
interface DownloadManager
{
    public InputStream registerDownload(InputStream stream);
}
All code that wants to take part in managed bandwidth registers its stream with the download manager before it starts reading from it. In its registerDownload() method, the manager wraps the given input stream in a ManagedBandwidthStream.
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ManagedBandwidthStream extends FilterInputStream
{
    private final DownloadManagerImpl owner;

    public ManagedBandwidthStream(InputStream original, DownloadManagerImpl owner)
    {
        super(original); // FilterInputStream keeps the wrapped stream in 'in'
        this.owner = owner;
    }

    @Override
    public int read(byte[] b, int offset, int length) throws IOException
    {
        return owner.read(this, b, offset, length);
    }

    // used by DownloadManager to actually read from the underlying stream
    int actuallyRead(byte[] b, int offset, int length) throws IOException
    {
        return super.read(b, offset, length);
    }

    // also override read() and read(byte[]) to delegate to the read() above
}
The stream ensures all calls to read() are directed back to the download manager.
class DownloadManagerImpl implements DownloadManager
{
    public InputStream registerDownload(InputStream in)
    {
        return new ManagedBandwidthStream(in, this);
    }

    int read(ManagedBandwidthStream source, byte[] b, int offset, int len) throws IOException
    {
        // All your streams now call this method, so the manager decides
        // how much data each one may actually read.
        int allowed = getAllowedDataRead(source, len);
        int read = source.actuallyRead(b, offset, allowed);
        recordBytesRead(read); // update counters for the number of bytes read
        return read;
    }
}
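Putting it together, a download job might use the manager something like this (a hypothetical usage sketch; the URL, buffer size, and class name are made up):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

class DownloadJob implements Runnable
{
    private final DownloadManager manager;
    private final String url;
    private final OutputStream out;

    DownloadJob(DownloadManager manager, String url, OutputStream out)
    {
        this.manager = manager;
        this.url = url;
        this.out = out;
    }

    public void run()
    {
        // Wrap the connection's stream before reading, so every read
        // goes through the manager.
        try (InputStream managed = manager.registerDownload(new URL(url).openStream()))
        {
            byte[] buf = new byte[8192];
            int n;
            while ((n = managed.read(buf, 0, buf.length)) != -1)
                out.write(buf, 0, n);
        }
        catch (IOException e)
        {
            e.printStackTrace(); // a real job would report failure properly
        }
    }
}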
Your bandwidth allocation strategy is then about how you implement getAllowedDataRead().
A simple way of throttling the bandwidth is to keep a counter of how many more bytes may be read in a given period (e.g. one second). Each call to read() examines the counter and uses it to restrict the number of bytes actually read; a timer resets the counter at the start of each period.
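A minimal sketch of that counter, as a helper that getAllowedDataRead() could delegate to (the class name, the rate, and the one-second period are all illustrative assumptions):

import java.util.Timer;
import java.util.TimerTask;

class SharedRateLimiter
{
    private final long bytesPerSecond;
    private long bytesAvailable;

    SharedRateLimiter(long bytesPerSecond)
    {
        this.bytesPerSecond = bytesPerSecond;
        Timer timer = new Timer(true); // daemon thread, won't block JVM exit
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() { refill(); }
        }, 0, 1000); // reset the counter every second
    }

    private synchronized void refill()
    {
        bytesAvailable = bytesPerSecond;
        notifyAll(); // wake any readers waiting for budget
    }

    // Blocks until some budget is available, then grants up to 'requested' bytes.
    synchronized int acquire(int requested) throws InterruptedException
    {
        while (bytesAvailable == 0)
            wait();
        int granted = (int) Math.min(requested, bytesAvailable);
        bytesAvailable -= granted;
        return granted;
    }
}

Because all streams share the one limiter instance, the limit applies to the combined rate of every active download.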
In practice, the allocation of bandwidth amongst multiple streams can get quite complex, especially if you want to avoid starvation and promote fairness, but this should give you a fair start.
This question is waaaaay high level, so I hope you don't expect a low level answer. In general, you will first need to define/decide which network utilities you will be using. For instance, are you just going to open a standard java Socket? Is there some third party network library you will be using? Have you familiarized yourself with any of the available options even?
In the most general sense, you can control bandwidth via the networking library you decide on. It should be a relatively simple formula.
You will have some kind of object (call it a socket) that you set a bandwidth limit on. You will set the bandwidth limit on your sockets (in general) to be total bandwidth / number of connections active. You can optimize this number on an ongoing basis if some connections aren't using their full allocation of bandwidth. Ask for help on that algorithm when you get there, and if you even care...
The second part of the equation is: can the OS/network library control the bandwidth for you if you just give it a rate limit, or do you need to control the process yourself by limiting the read/write rates? This isn't as straightforward as it may seem, since the OS can have TCP socket buffers that read in data until full. Suppose you had a 2 MB socket buffer for inbound traffic: if you relied on the remote side to stop sending only when that buffer was full, 2 MB of data would transfer before you had any opportunity to rate limit by draining the queue. In other words, you would get a huge burst on every socket before you could rate limit it.
At that point you begin talking about writing a protocol that runs over TCP (or UDP) so that one side can tell the other, "OK, send more data" or "wait, my bandwidth limit has been temporarily hit". Long story short: get started, then ask questions once you have an implementation in place and want to improve it...
- Send/Receive Data
- Sleep
- Repeat
That's basically how most limiters work (just like wget).
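A minimal sketch of that loop, assuming an illustrative chunk size and target rate:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

class ThrottledCopy
{
    static void copy(InputStream in, OutputStream out, int bytesPerSecond)
            throws IOException, InterruptedException
    {
        byte[] chunk = new byte[8192];
        long windowStart = System.nanoTime();
        long bytesThisWindow = 0;

        int n;
        while ((n = in.read(chunk)) != -1)
        {
            out.write(chunk, 0, n); // send/receive data
            bytesThisWindow += n;

            // Once a second's worth of bytes has moved, sleep off whatever
            // remains of that second, then repeat.
            if (bytesThisWindow >= bytesPerSecond)
            {
                long elapsedMs = (System.nanoTime() - windowStart) / 1_000_000;
                if (elapsedMs < 1000)
                    Thread.sleep(1000 - elapsedMs);
                windowStart = System.nanoTime();
                bytesThisWindow = 0;
            }
        }
    }
}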