requesting data indefinitely in curl

I have a 200 MB file to download. I don't want to download it directly by passing the URL to cURL (because my college blocks requests larger than 150 MB). So I can download the data in 10 MB chunks by passing range parameters to cURL. But I don't know how many 10 MB chunks to download. Is there a way in cURL to download data indefinitely? Something like:

while (next byte present) download byte;

Thanks :)


Command-line curl lets you specify a byte range to download, so for your 150 MB cap you'd do something like

curl http://example.com/200_meg_file -r 0-104857599 > the_file
curl http://example.com/200_meg_file -r 104857600-209715199 >> the_file

and so on until the entire thing is downloaded, grabbing 100 MB chunks at a time and appending each chunk to the local copy. (HTTP ranges are inclusive, so 0-104857599 is exactly the first 100 MB.)
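
If you don't want to hard-code the ranges, the same idea can be scripted: keep requesting fixed-size chunks and stop once a request fails or comes back short. A minimal sketch, assuming the server honors Range requests (the URL is the placeholder from above, and the 10 MB chunk size comes from the question):

#!/bin/bash
url="http://example.com/200_meg_file"
out="the_file"
chunk=$((10 * 1024 * 1024))    # 10 MB per request, well under the 150 MB cap

start=0
> "$out"                       # start from an empty file
while :; do
    end=$((start + chunk - 1))
    # -f makes curl exit non-zero on HTTP errors such as 416,
    # which the server sends once the range starts past end-of-file
    curl -f -r "$start-$end" "$url" >> "$out" || break
    # a short final chunk means we just fetched the tail of the file
    [ "$(du -b "$out" | cut -f1)" -lt $((start + chunk)) ] && break
    start=$((start + chunk))
done

One caveat: if the server ignores the Range header, it will send the whole file on the first request (with a 200 rather than a 206), so it's worth checking that the first chunk really is 10 MB before trusting the loop.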


Curl already has the ability to resume a download. Just run it like this:

$> curl -C - $url -o $output_file

Of course this won't figure out when to stop on its own. However, it would be pretty easy to write a wrapper. Something like this:

#!/bin/bash
url="http://someurl/somefile"
out="outfile"

touch "$out"
last_size=-1
# keep resuming until a pass adds no new bytes to the file
while [ "$(du -b "$out" | cut -f1)" -ne "$last_size" ]; do
    last_size=$(du -b "$out" | cut -f1)
    curl -C - "$url" -o "$out"
done

I should note that curl outputs a fun-looking error:

curl: (18) transfer closed with outstanding read data remaining

However, I tested this on a rather large ISO file, and the MD5 checksum still matched even though the above error was shown.
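
If you'd rather stop on a definite condition than wait for the file to stop growing, you can ask the server for the total size up front. A minimal sketch, assuming the server sends a Content-Length header (the URL and output name are the placeholders from above):

#!/bin/bash
url="http://someurl/somefile"
out="outfile"

# HEAD request; pull the advertised size out of the Content-Length header
total=$(curl -sI "$url" | tr -d '\r' | awk 'tolower($1) == "content-length:" { print $2 }')

touch "$out"
# resume until the local copy reaches the advertised size
while [ "$(du -b "$out" | cut -f1)" -lt "$total" ]; do
    curl -C - "$url" -o "$out"
done

The HEAD request (-I) costs one extra round trip but transfers no file data, and it gives the loop an exact target instead of a heuristic.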
