
Swap memory speed in Linux

https://www.devze.com 2023-03-27 05:01 Source: web
I have a process on 64-bit Linux (Red Hat Enterprise) that loads one million records into memory; each record is 4 KB, so total memory consumption is about 4 GB.


My computer has 2 GB of RAM and 3 GB of swap, so obviously part of the data gets pushed into swap. The problem is that traversing all those records takes far too long, and I don't know why. I have a function that loops through each record and does some work on it. It runs fine with about 500,000 records, finishing in a couple of minutes. However, with double that amount, i.e. 1,000,000 records, it takes hours to do the same work. I used the top command in Linux to check the CPU load and saw it was about 90% wa (I/O wait time). I guess this might be causing the problem, but I really don't know why it happens.

I would appreciate any helpful ideas.


Swap space is disk. Disk bandwidth is two or three orders of magnitude less than memory bandwidth.


There are two options:

  1. The process works over the records sequentially. Then it was the stupidest thing on Earth to load them all into memory.
    1. If you can fix the process, fix it to only load a bit at a time.
    2. If you can't fix the process, you'll have to buy more memory.
  2. The process works over the records in random order or multiple times (and can't do otherwise). Well, you'll have to buy more memory.
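For case 1, "only load a bit at a time" can be sketched as follows. This is a hypothetical illustration, not the asker's actual program: it assumes the records are fixed-size 4 KB entries stored back-to-back in a flat file, which the question doesn't actually specify.

```python
# Process fixed-size records in batches that fit comfortably in RAM,
# instead of loading all 4 GB at once and forcing the kernel to swap.
# Assumption: records are 4 KB entries stored contiguously in a flat file.

RECORD_SIZE = 4096          # 4 KB per record, as in the question
BATCH_RECORDS = 64 * 1024   # 64k records/batch = 256 MB, well under 2 GB RAM

def process_record(record: bytes) -> int:
    # Placeholder for the per-record work; here it just counts non-zero bytes.
    return sum(b != 0 for b in record)

def process_file(path: str) -> int:
    total = 0
    with open(path, "rb") as f:
        while True:
            batch = f.read(RECORD_SIZE * BATCH_RECORDS)
            if not batch:
                break
            # Walk the batch record by record while it is resident in RAM.
            for off in range(0, len(batch), RECORD_SIZE):
                total += process_record(batch[off:off + RECORD_SIZE])
    return total
```

With this shape, the working set at any moment is one batch, so the kernel never needs to swap, regardless of how many records the file holds.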


If you want to use your swap space efficiently, make sure you traverse your data sequentially, in contiguous memory blocks of several megabytes. That way, when a new chunk is loaded into RAM from swap, the chunk will contain the next few records as well.


Sounds like either cache or swap thrashing is happening. Check vmstat to verify. You can remedy swap thrashing by loading only as much data as fits into memory, processing it, loading another block, and so on. This way you don't have to impose a processing order (random or sequential doesn't matter much). Alternatively, we'd need more details on your algorithm / program architecture to comment.
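To verify swap thrashing without eyeballing top, you can watch the kernel's swap counters directly. A minimal sketch that parses `/proc/vmstat` on Linux (the `pswpin`/`pswpout` fields count pages swapped in and out since boot; they are the cumulative form of vmstat's si/so columns):

```python
# Parse the pswpin/pswpout counters from /proc/vmstat text.
# If these counters keep growing while your process runs,
# the system is actively swapping (thrashing, in the bad case).

def swap_counters(vmstat_text: str) -> dict:
    counters = {}
    for line in vmstat_text.splitlines():
        name, _, value = line.partition(" ")
        if name in ("pswpin", "pswpout"):
            counters[name] = int(value)
    return counters

def read_swap_counters() -> dict:
    # Linux-only: /proc/vmstat is one "name value" pair per line.
    with open("/proc/vmstat") as f:
        return swap_counters(f.read())
```

Sample the counters before and after a run of your loop; a large delta in `pswpout` during processing is the signature of swap thrashing.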


The speed of your swap memory depends on the speed of the underlying hardware where the swap resides.

Usually the hardware behind the swap (Windows calls it pagefile.sys; Linux uses one or more swap partitions or files) is one of the hard drives in the system, so it is orders of magnitude slower than RAM.


Before buying more RAM, you could try using part of your RAM as a compressed swap. I heard of compcache, but I have not used it myself. The idea is the following:

  • If the data you put in RAM can be compressed (let's say at a ratio of 3 to 1),
  • allocate 1 GB of your 2 GB of RAM to an *in-memory* swap,
  • you then have a low-latency RAM of 4 GB.

I would be curious to know if it improves the number of records you can handle without thrashing.
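For reference, compcache lives on as the zram module in modern kernels. A sketch of setting up a 1 GB compressed in-RAM swap device, assuming your kernel ships zram (all commands need root; on older RHEL kernels the module may be absent):

```shell
# Load the compressed-RAM block device module and size the device.
modprobe zram
echo 1G > /sys/block/zram0/disksize   # uncompressed capacity of the device

# Format it as swap and enable it with a higher priority than disk swap,
# so the kernel prefers it when paging out.
mkswap /dev/zram0
swapon -p 100 /dev/zram0
```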

