I have a website where some downloadable files are stored. Say the website contains files like:
GTP-UGP-LATEST-5.3.0.123.iso
GTP-UGP-LATEST-5.3.0.127.iso
GTP-UGP-LATEST-5.3.0.132.iso
GTP-UGP-LATEST-5.3.0.136.iso
PRE-UGP-LATEST-5.3.0.124.iso
PRE-UGP-LATEST-5.3.0.126.iso
PRE-UGP-LATEST-5.3.0.127.iso
PRE-UGP-LATEST-5.3.0.130.iso
The number of these files keeps growing day by day, with ascending version numbers.
My final goal is to run a script every day (as a cron job) to check whether any new files have been added and, if so, download all of them.
My logic: take the version numbers of all files starting with GTP* (say 5.3.0.123), convert each to a plain number (530123), compare them to find the largest, and then check it against the previously downloaded version number, which is stored in a local file. If it doesn't match, download the file with the greatest version number that starts with GTP.
Then do the same for files starting with PRE*.
I'm poor at regular expressions, so please help me with this.
Please show me how to list all the files at a link and write that listing to a local file. If I know that much, I think I can take it from there.
Updated: I would do the following (tested). One caveat about flattening the version into a single number: once the version bumps to, say, 5.3.1.1, its flattened form 5311 compares as smaller than 530136, so the script below compares the version fields numerically, one at a time, instead:
#!/usr/bin/env perl
use strict;
use warnings;
use Data::Dumper qw(Dumper);
use File::Glob ':glob';

# Sort comparator: order two filenames by their embedded dotted versions.
sub by_version {
    my $v_a = $a;
    my $v_b = $b;

    # Keep only digits and dots: "GTP-UGP-LATEST-5.3.0.123.iso" -> "5.3.0.123."
    $v_a =~ s/[^\d.]//g;
    $v_b =~ s/[^\d.]//g;

    my @version_a = split /\./, $v_a;
    my @version_b = split /\./, $v_b;

    # Compare field by field; the first differing field decides the order.
    my $fields = @version_a < @version_b ? @version_a : @version_b;
    for my $i (0 .. $fields - 1) {
        my $comp = $version_a[$i] <=> $version_b[$i];
        return $comp if $comp != 0;
    }

    # All shared fields are equal; the version with more fields sorts later.
    return @version_a <=> @version_b;
}

my @files        = bsd_glob('GTP-UGP-LATEST-*.iso');
my @sorted_files = sort by_version @files;
print Dumper(\@sorted_files);
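To connect this to the listing question, here is a minimal sketch of the remote side using LWP::Simple, assuming the server exposes a plain directory index; the URL and the state-file name are placeholders, and by_version is the comparator defined above. It fetches the index, finds the newest GTP file, and downloads it only if it differs from the name recorded on the last run:

#!/usr/bin/env perl
use strict;
use warnings;
use LWP::Simple qw(get getstore is_success);

my $base_url   = 'http://example.com/downloads/';  # placeholder URL
my $state_file = 'last_gtp_version.txt';           # placeholder state file

# Fetch the directory index and extract every GTP-*.iso name, deduplicated
# (a name can appear in both the href attribute and the link text).
my $html = get($base_url) or die "Cannot fetch $base_url\n";
my %seen;
my @remote = grep { !$seen{$_}++ } $html =~ /(GTP-UGP-LATEST-[\d.]+\.iso)/g;

# Newest file according to by_version (the comparator defined above).
my ($newest) = (sort by_version @remote)[-1];
die "No matching files found\n" unless defined $newest;

# Read the name recorded on the previous run, if any.
my $last = '';
if (open my $fh, '<', $state_file) {
    chomp($last = <$fh> // '');
}

# Download only when something newer has appeared, then record it.
if ($newest ne $last) {
    is_success(getstore("$base_url$newest", $newest))
        or die "Download of $newest failed\n";
    open my $fh, '>', $state_file or die "Cannot write $state_file: $!\n";
    print {$fh} "$newest\n";
}
# Repeat the same steps with PRE-UGP-LATEST-*.iso for the other series.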
Or you could just rsync from the download directory, since you only want the new ones.
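For instance, a hypothetical invocation from a Perl wrapper (the rsync source path and local directory are placeholders, and this assumes the server offers rsync access at all); since rsync skips files whose size and timestamp already match, files fetched on earlier runs are not transferred again, which is exactly the "only new files" behaviour:

#!/usr/bin/env perl
use strict;
use warnings;

# The include filters admit the two series; the final exclude drops the rest.
system('rsync', '-av',
       '--include=GTP-UGP-LATEST-*.iso',
       '--include=PRE-UGP-LATEST-*.iso',
       '--exclude=*',
       'rsync://downloads.example.com/isos/',   # placeholder source
       '/local/iso-mirror/') == 0               # placeholder destination
    or die "rsync exited with status $?\n";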