Suppose I have a shell script in which there is a statement like:
a=$(find / -type f)
That is, a list of files with their full paths will be stored in the variable 'a'.
What is the maximum number of lines (or amount of data) that it can store, and how do I find that limit?
IIRC, bash does not impose a limit on how much data a variable can store. It is, however, limited by the environment that bash was executed under. See this answer for a more comprehensive explanation.
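As a quick way to inspect those environment limits, a minimal sketch (the exact limits, and whether they are enforced, vary per system; 1048576 is just an example value):
ulimit -a          # list all per-process resource limits for this shell
ulimit -v          # just the virtual-memory cap, in KiB ('unlimited' if unset)
ulimit -v 1048576  # example: cap this shell at ~1 GiB; variable assignments
                   # that need more memory than that will then fail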
As a data point, I tried the following script in OS X 10.10.5, using the built-in bash on a MacBook Pro Retina with a 2.8 GHz Intel Core i7:
#!/bin/bash
humombo="X"
while true; do
    humombo="$humombo$humombo"
    echo "Time $(date "+%H:%M:%S"), chars $(echo "$humombo" | wc -c)"
done
Results: the size happily doubled again and again (note that the sizes include an extra byte for the trailing newline). Things started to slow down after humombo passed 4MB; doubling from 256MB to 512MB took 48 seconds, and the script exploded after that:
mbpe:~ griscom$ ./delme.sh
Time 16:00:04, chars 3
Time 16:00:04, chars 5
Time 16:00:04, chars 9
Time 16:00:04, chars 17
Time 16:00:04, chars 33
Time 16:00:04, chars 65
Time 16:00:04, chars 129
Time 16:00:04, chars 257
Time 16:00:04, chars 513
Time 16:00:04, chars 1025
Time 16:00:04, chars 2049
Time 16:00:04, chars 4097
Time 16:00:04, chars 8193
Time 16:00:04, chars 16385
Time 16:00:04, chars 32769
Time 16:00:04, chars 65537
Time 16:00:04, chars 131073
Time 16:00:04, chars 262145
Time 16:00:04, chars 524289
Time 16:00:04, chars 1048577
Time 16:00:04, chars 2097153
Time 16:00:05, chars 4194305
Time 16:00:05, chars 8388609
Time 16:00:07, chars 16777217
Time 16:00:09, chars 33554433
Time 16:00:15, chars 67108865
Time 16:00:27, chars 134217729
Time 16:00:51, chars 268435457
Time 16:01:39, chars 536870913
bash(80722,0x7fff77bff300) malloc: *** mach_vm_map(size=18446744071562072064) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
./delme.sh: xrealloc: cannot allocate 18446744071562068096 bytes
mbpe:~ griscom$
Two notes:
I suspect that the crash was because the whole process took too much memory, rather than because I hit the limit of a single variable's capacity.
While playing with this, I ran the same commands interactively, and when the loop exited bash was broken; I had to open a new terminal window to do anything. So, too much memory allocation breaks bash in unknown ways; my guess is that running it inside a script lets everything be cleaned up when the script exits.
Edit: I just tried the same code on a high-powered Ubuntu 18 system:
Time 18:03:02, chars 3
Time 18:03:02, chars 5
Time 18:03:02, chars 9
Time 18:03:02, chars 17
Time 18:03:02, chars 33
Time 18:03:02, chars 65
Time 18:03:02, chars 129
Time 18:03:02, chars 257
Time 18:03:02, chars 513
Time 18:03:02, chars 1025
Time 18:03:02, chars 2049
Time 18:03:02, chars 4097
Time 18:03:02, chars 8193
Time 18:03:02, chars 16385
Time 18:03:02, chars 32769
Time 18:03:02, chars 65537
Time 18:03:02, chars 131073
Time 18:03:02, chars 262145
Time 18:03:02, chars 524289
Time 18:03:02, chars 1048577
Time 18:03:02, chars 2097153
Time 18:03:02, chars 4194305
Time 18:03:02, chars 8388609
Time 18:03:03, chars 16777217
Time 18:03:04, chars 33554433
Time 18:03:07, chars 67108865
Time 18:03:12, chars 134217729
Time 18:03:23, chars 268435457
Time 18:03:43, chars 536870913
./delme.sh: xrealloc: cannot allocate 18446744071562068096 bytes
It took less than half the time, and died a bit more cleanly, but at the same character size. (BTW, the number in the error message, decimal 18446744071562068096, is 0xffff ffff 8000 0080, so clearly we're hitting some number-capacity limits here.)
I don't think there is a limit to variable size in bash, but do you really want a 6GB variable in your shell (subject to ulimit -a, of course)?
There certainly is a limit on the command line: grep <pattern> $TEN_MILLION_FILENAMES is not going to work. In fact, it's very hard to spawn any command with $TEN_MILLION_FILENAMES. You need other strategies, like processing per directory, using temporary files, etc.
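As a sketch of those strategies (the pattern and starting directory are placeholders): stream the names through xargs, which batches arguments to stay under the kernel's ARG_MAX limit, or let find spawn the command itself:
getconf ARG_MAX                                        # the kernel's limit on exec argument size
find / -type f -print0 | xargs -0 grep -l 'pattern'    # xargs batches the file names
find / -type f -exec grep -l 'pattern' {} +            # find batches them itself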
As far as I know, the only way to find the limit is empirically. Try running the following shell script and wait for it to finish:
limit=0
while true
do
    limit=$((limit + 1))   # bytes stored in $a so far
    a=' '$a                # grow the variable by one byte per iteration
    echo $limit
done
A slight improvement on Daniel Griscom's script:
- you can show how much memory the script is using (see the last two commands added to the loop)
- you can try different shell environments (my tests showed that bash uses about 5x as much memory as zsh for the same-sized variable - you can run the tests below yourself)
NOTE: the "VmPeak" line will produce empty output when the script is run inside Cygwin, as Cygwin doesn't fully replicate /proc (the "VmPeak" value is missing, but perhaps you could fall back to "VmSize" in that case?)
$ cat delme.sh
#!/bin/zsh
humombo="X"
pid=$$
while true; do
    humombo="$humombo$humombo"
    echo "Time $(date "+%H:%M:%S"), chars $(echo "$humombo" | wc -c)"
    echo -n "Memory usage: "
    grep ^VmPeak /proc/${pid}/status
done
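If "VmPeak" is missing (as under Cygwin, noted above), a hypothetical fallback is to match "VmSize" as well:
grep -E '^Vm(Peak|Size)' /proc/${pid}/status   # prints whichever of the two lines exists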
As far as I can see, the standard imposes no limit, but the underlying system may. I recall once bumping into a limit on some AIX system.
You can check the same way configure scripts check for the maximum number of arguments: try until you hit an error. Use some sort of iterative approach with the formula var(i) = concatenation(var(i-1), var(i-1)); sooner or later you hit the limit (at least a memory limit while handling it), as sketched below.
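A minimal sketch of that doubling probe in bash (note that, as seen above, bash may abort outright when an allocation fails, rather than returning an error status):
v="X"
while v="$v$v"; do              # double the variable each iteration
    echo "${#v} bytes stored"   # ${#v} is the length of v; no subshell needed
done
echo "assignment failed at ${#v} bytes"   # may never print if the shell aborts first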