I used cp to copy lots of small files (between 1M and 10M in size; around 6G total) on a Linux box. I didn't time it, but since I am going to run cp again and again, I can time it and be more specific later. The copy is not the major task, so I can't tolerate the time it's taking if there is a better option. If there is a better way/option/method to copy files from one directory to another faster than cp, I'd be glad to try it out.
Thanks,
Either a tar/untar pipeline or rsync (when not told to checksum) will be faster, since they bulk-read files instead of handling them one by one.
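For example, a plain archive-mode rsync, a minimal sketch with placeholder source/destination paths (rsync only checksums file contents when --checksum is passed, and it defaults to whole-file transfers for local copies):
# -a preserves permissions, ownership and times; the trailing slash on the
# source copies the directory's contents rather than the directory itself.
rsync -a /path/to/source/ /path/to/destination/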
You could try cpio, as that has a copy directory to directory mode.
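A minimal sketch of cpio's pass-through (copy) mode, assuming GNU cpio and placeholder paths:
# -p = pass-through (copy) mode, -d = create leading directories,
# -m = preserve modification times; -0 pairs with find -print0 to handle odd filenames.
cd /path/to/source && find . -print0 | cpio -0pdm /path/to/destination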
Use Perl for a fast copy:
use File::Copy qw(syscopy);  # syscopy preserves OS-specific file attributes
syscopy($foo, $bar) or die "cannot copy $foo to $bar: $!";  # always check for errors!
Try this to back up just today's files (-ctime 0 matches files whose status changed within the last 24 hours):
find /home/me/files -ctime 0 -print -exec cp {} /mnt/backup/ \;
from: http://commandperls.com/find-all-today%E2%80%99s-files-and-copy-them-to-another-directory/
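If there are many matches, batching the copies avoids forking one cp process per file and can be noticeably faster; a sketch assuming GNU find and GNU cp, with the same illustrative paths:
# cp -t names the target directory, so find can append many files per invocation via {} +
find /home/me/files -ctime 0 -print -exec cp -t /mnt/backup/ {} +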
Have you tried:
time (cd /usr/local/src/ && tar pcf - cvs.gnome.org) | buffer -m 8m -p 75 | (cd /mnt/tmp/src/ && tar pxf -)
(Credits: https://lists.debian.org/debian-user/2001/06/msg00288.html)