When offsets are passed to mmap in descending order, the mmap call fails

Under Linux:

# free -m
             total       used       free     shared    buffers     cached
Mem:          1995       1460        534          0         68        432
-/+ buffers/cache:        959       1035
Swap:         2055        743       1311

# cat /proc/sys/vm/overcommit_memory
0

# cat /proc/sys/vm/overcommit_ratio
50

test code 1:

#include <cstdio>
#include <iostream>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define PER_PAGE_SIZE 4096
#define MMAP(fd,offset) mmap (NULL,PER_PAGE_SIZE,PROT_READ|PROT_WRITE,MAP_SHARED|MAP_NORESERVE,fd,offset)

using std::cout;
using std::endl;

int main(){
    int j = 0;
    int fd = open("dat.tmp",O_RDWR);
    // Map the pages of dat.tmp one at a time, in descending file-offset order.
    for(int i = 131071 ; i >= 0; i--){
        ++j;
        void* r = MMAP(fd,i*4096);
        if(r == MAP_FAILED){
            printf("%d,%m\n",j);
            break;
        }
    }
    cout << "done " << j << endl;
    sleep(5);
}
##############
Error message:
# ./a.out 
65513,Cannot allocate memory
done 65513
...
#################

test code 2:

#include <cstdio>
#include <iostream>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define PER_PAGE_SIZE 4096
#define MMAP(fd,offset) mmap (NULL,PER_PAGE_SIZE,PROT_READ|PROT_WRITE,MAP_SHARED|MAP_NORESERVE,fd,offset)

using std::cout;
using std::endl;

int main(){
    int j = 0;
    int fd = open("dat.tmp",O_RDWR);
    // Same as test code 1, but map the pages in ascending file-offset order.
    for(int i = 0 ; i <= 131071; i++){
        ++j;
        void* r = MMAP(fd,i*4096);
        if(r == MAP_FAILED){
            printf("%d,%m\n",j);
            break;
        }
    }
    cout << "done " << j << endl;
    sleep(5);
}

This works, so why?


Here is my guess. I'm guessing the second program simply extends an internal data structure describing a single mapping to include one more page. The first one could do this, but it would have to extend backwards instead, and I bet the special case code to coalesce mappings doesn't even check for that.

The fact that it stops at 65513 is very suggestive. There will be some number of mappings used for shared libraries and the like, so you will have fewer than 65536 mappings available to you. And 65536 is the kind of size a lot of kernel people would use for a data structure.

I would suggest looking at /proc/<pid>/maps and seeing how many maps are listed in each case when the program is sleeping. To facilitate this, you might want to print out the result of getpid() when you're printing out the 'done' message.
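
For example, a minimal sketch of just the changed line (it assumes the <unistd.h> include is already present for getpid()):

cout << "done " << j << " (pid " << getpid() << ")" << endl;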

I cannot replicate your problem directly, so it seems the reverse case has been handled properly on my system. The output of uname -a on my system is this:

Linux a_hostname.somewhere 2.6.35.11-83.fc14.x86_64 #1 SMP Mon Feb 7 07:06:44 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

But this program does replicate your problem:

#include <iostream>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

#define PER_PAGE_SIZE 4096
#define MMAP(fd,offset) mmap (NULL,PER_PAGE_SIZE,PROT_READ|PROT_WRITE,MAP_SHARED|MAP_NORESERVE,fd,offset)

int main()
{
   using ::std::cout;
   using ::std::endl;
   int j = 0;
   int fd = open("dat.tmp",O_RDWR);
   // The string literal only reserves buffer space; snprintf() below
   // overwrites it with the real pid.
   char catcmd[] = "cat /proc/99999/maps_padding";
   // Map every other page, descending, so no two mappings are ever adjacent
   // and the kernel cannot coalesce them.
   for(int i = 131071 ; i >= 0; i-=2){
      ++j;
      void* r = MMAP(fd,i*4096);
      if(r ==  MAP_FAILED){
         cout << j << ", " << strerror(errno) << '\n';
         break;
      }
   }
   ::std::snprintf(catcmd, sizeof(catcmd), "cat /proc/%d/maps", getpid());
   cout.flush();
   ::std::system(catcmd);
   cout << "done " << j << endl;
   sleep(5);
}

As you can see, if you skip by 2 while going backwards, the problem still occurs. And the output of cat /proc/<pid>/maps from the call to system shows that indeed, there are thousands of individual maps.

If I stop skipping by 2 and simply go backwards I end up with 2 maps, one largish, and another not quite so large. The kernel coalesces adjacent maps into one map if it can.
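
If you want to count the entries from inside the program rather than eyeballing the cat output, a minimal sketch along these lines would do (count_maps is my own helper name, not part of the original code):

#include <fstream>
#include <string>

// Count the lines in /proc/self/maps, i.e. the number of distinct
// mappings the kernel is currently tracking for this process.
static int count_maps()
{
   std::ifstream maps("/proc/self/maps");
   std::string line;
   int n = 0;
   while (std::getline(maps, line))
      ++n;
   return n;
}

Calling it right after the mapping loop should report thousands of entries for the descending or skip-by-2 variants, but only a handful for the ascending one.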

As further corroborating evidence that your problem is as I describe, there's this nice discussion of /proc/sys/vm/max_map_count. Setting that variable allows you to change how many maps there are, and the default setting is 65530.
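
For example, on a system still at the default, checking and raising the limit (as root) would look something like this; the new value 262144 is just an arbitrary choice comfortably above the 131072 mappings the test needs:

# cat /proc/sys/vm/max_map_count
65530
# sysctl -w vm.max_map_count=262144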
