
Eat memory using Python

https://www.devze.com 2023-03-12 11:28 (source: web)
I am trying to create an app that purposely consumes as much RAM as specified, immediately. E.g., if I ask for 512 MB, the app should consume 512 MB right away.

I have searched the web; most answers use a while loop to fill the RAM with variables or data. But I think that is a slow way to fill the RAM, and it might not be accurate either.

I was looking for a Python library for memory management and came across http://docs.python.org/library/mmap.html, but I can't figure out how to use this library to eat the RAM space in one shot.

I once saw a mem-eater application, but I don't know how it was written...

So, is there any better suggestion for filling the RAM with data immediately? Or should I just use a while loop to fill the data manually, but with multi-threading to make it faster?
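Since the question links to the mmap module, here is a minimal sketch of how it could be used for this: passing -1 as the fileno creates an anonymous, zero-filled mapping of the requested size in one call. Pages are typically committed lazily by the OS, so the sketch also writes to the mapping to force the memory to actually be used (the 512 MiB size and 1 MiB write chunk are arbitrary choices for illustration):

```python
import mmap

size = 512 * 1024 * 1024             # 512 MiB requested
buf = mmap.mmap(-1, size)            # anonymous, zero-filled mapping
# Pages are usually committed lazily; write to them to actually consume RAM
chunk = b'x' * (1024 * 1024)         # write 1 MiB at a time
for offset in range(0, size, len(chunk)):
    buf[offset:offset + len(chunk)] = chunk
print(len(buf))
```

Whether the mapping counts against resident memory before it is written to depends on the operating system.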


One simple way might be:

some_str = ' ' * 512000000

Seemed to work pretty well in my tests.

Edit: in Python 3, you might want to use bytearray(512000000) instead.
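As a quick sanity check of the bytearray approach (a sketch; note that sys.getsizeof reports only this object's footprint, not the whole process's memory usage):

```python
import sys

buf = bytearray(512_000_000)               # ~512 MB, zero-filled
# The object is at least as large as the requested byte count
print(sys.getsizeof(buf) >= 512_000_000)
```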


You won't be able to allocate all the memory you can using constructs like

s = ' ' * BIG_NUMBER

It is better to append to a list, as in:

a = []
while True:
    print(len(a))
    a.append(' ' * 10**6)
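If the goal is to stop at a requested amount rather than exhaust all memory, the same append loop can count what it has allocated. A sketch (the 512 MB target and 1 MB chunk size are arbitrary; each bytearray is a distinct object, so nothing is shared between iterations):

```python
target_bytes = 512 * 10**6    # amount of RAM to consume
chunk = 10**6                 # allocate 1 MB per iteration
blocks = []
while len(blocks) * chunk < target_bytes:
    blocks.append(bytearray(chunk))  # each bytearray is a separate allocation
print(len(blocks))  # 512
```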

Here is a longer piece of code that gives more insight into the memory allocation limits:

import os
import psutil

PROCESS = psutil.Process(os.getpid())
MEGA = 10 ** 6
MEGA_STR = ' ' * MEGA

def pmem():
    vm = psutil.virtual_memory()
    tot, avail, used, free = vm.total / MEGA, vm.available / MEGA, vm.used / MEGA, vm.free / MEGA
    proc = PROCESS.memory_info().vms / MEGA
    print('process = %s total = %s avail = %s used = %s free = %s percent = %s'
          % (proc, tot, avail, used, free, vm.percent))

def alloc_max_array():
    i = 0
    ar = []
    while True:
        try:
            #ar.append(MEGA_STR)  # no copy if reusing the same string!
            ar.append(MEGA_STR + str(i))
        except MemoryError:
            break
        i += 1
    max_i = i - 1
    print('maximum array allocation:', max_i)
    pmem()

def alloc_max_str():
    i = 0
    while True:
        try:
            a = ' ' * (i * 10 * MEGA)
            del a
        except MemoryError:
            break
        i += 1
    max_i = i - 1
    _ = ' ' * (max_i * 10 * MEGA)
    print('maximum string allocation', max_i)
    pmem()

pmem()
alloc_max_str()
alloc_max_array()

This is the output I get:

process = 4 total = 3179 avail = 2051 used = 1127 free = 2051 percent = 35.5
maximum string allocation 102
process = 1025 total = 3179 avail = 1028 used = 2150 free = 1028 percent = 67.7
maximum array allocation: 2004
process = 2018 total = 3179 avail = 34 used = 3144 free = 34 percent = 98.9


x = bytearray(1024*1024*1000)

Eats about 1GB of memory
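One caveat (an assumption about typical OS behavior, not something guaranteed by Python): a large zero-filled allocation like this may be backed by lazily committed pages, so the process's resident memory may only grow once the bytes are actually written. Touching each page forces the RAM to be consumed (4 KiB is assumed as the page size):

```python
buf = bytearray(1024 * 1024 * 1000)   # ~1 GB, zero-filled
page = 4096                           # assumed page size
for i in range(0, len(buf), page):
    buf[i] = 1                        # write one byte per page to force residency
```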


Here is a version of markolopa's answer that worked for me:

import os
import psutil

PROCESS = psutil.Process(os.getpid())
MEGA = 10 ** 6
MEGA_STR = ' ' * MEGA


def pmem():
    vm = psutil.virtual_memory()
    tot, avail, used, free = vm.total / MEGA, vm.available / MEGA, vm.used / MEGA, vm.free / MEGA
    proc = PROCESS.memory_info()[1] / MEGA
    print('process = %s total = %s avail = %s used = %s free = %s percent = %s'
          % (proc, tot, avail, used, free, vm.percent))


def alloc_max_array():
    i = 0
    ar = []
    while True:
        try:
            #ar.append(MEGA_STR)  # no copy if reusing the same string!
            ar.append(MEGA_STR + str(i))
        except MemoryError:
            break
        i += 1
    max_i = i - 1
    print('maximum array allocation:', max_i)
    pmem()


def alloc_max_str():
    i = 0
    while True:
        try:
            a = ' ' * (i * 10 * MEGA)
            del a
        except MemoryError:
            break
        i += 1
    max_i = i - 1
    _ = ' ' * (max_i * 10 * MEGA)
    print('maximum string allocation', max_i)
    pmem()

pmem()
alloc_max_str()
alloc_max_array()


This function will allocate memory into a list of bytes objects. Each item in the list will effectively be unique and of the same length. The function also logs its allocations. I have tested it up to 3.7 TiB. It uses the humanfriendly package but you can remove its use if you don't want it.

It does use a loop, but at least it lets you optionally customize how much to allocate in each iteration. For example, you can use an 8-fold higher value for multiplier_per_allocation.

import logging
import secrets
from typing import Optional

from humanfriendly import format_size

log = logging.getLogger(__name__)


def fill_memory(*, num_unique_bytes_per_allocation: int = 1024, multiplier_per_allocation: int = 1024 ** 2, max_allocations: Optional[int] = None) -> None:
    """Allocate available memory into a list of effectively unique bytes objects.

    This function is for diagnostic purposes.

    :param num_unique_bytes_per_allocation: Each allocation is created by multiplying a random sequence of bytes of this length.
    :param multiplier_per_allocation: Each allocation is created by multiplying the random sequence of bytes by this number.
    :param max_allocations: Optional number of max allocations.
    """
    # Ref: https://stackoverflow.com/a/66109163/
    num_allocation_bytes = num_unique_bytes_per_allocation * multiplier_per_allocation
    log.info(
        f"Allocating cumulative instances of {num_allocation_bytes:,} bytes ({format_size(num_allocation_bytes)}) each. "
        f"Each allocation uses {num_unique_bytes_per_allocation:,} unique bytes ({format_size(num_unique_bytes_per_allocation)}) "
        f"with a multiplier of {multiplier_per_allocation:,} ({format_size(multiplier_per_allocation)})."
    )

    # Allocate memory
    allocated = []
    num_allocation = 1
    while True:
        unique_bytes_for_allocation = secrets.token_bytes(num_unique_bytes_per_allocation)
        allocated.append(unique_bytes_for_allocation * multiplier_per_allocation)
        num_total_bytes_allocated = num_allocation * num_allocation_bytes
        log.info(f"Used a total of {num_total_bytes_allocated:,} bytes ({format_size(num_total_bytes_allocated)}) via {num_allocation:,} allocations.")
        if max_allocations and (max_allocations == num_allocation):
            break
        num_allocation += 1

Sample output:

>>> import logging
>>> logging.basicConfig(level=logging.INFO)

>>> fill_memory()
INFO:Allocating cumulative instances of 1,073,741,824 bytes (1 GiB) each. Each allocation uses 1,024 unique bytes (1 KiB) with a multiplier of 1,048,576 (1 MiB).
INFO:Used a total of 1,073,741,824 bytes (1 GiB) via 1 allocations.
INFO:Used a total of 2,147,483,648 bytes (2 GiB) via 2 allocations.
INFO:Used a total of 3,221,225,472 bytes (3 GiB) via 3 allocations.
INFO:Used a total of 4,294,967,296 bytes (4 GiB) via 4 allocations.


You can allocate a huge amount of RAM by executing:

while True:
    for i in range(0, 100000000):
        Gig = 1024 * 1024 * 1024 * 2  # a gigabyte multiplied by 2
        a = 999999999999999999999 * (i * Gig)
        a = a * i
        print(str(a) * 2)

Save it as a .pyw file so the RAM allocation runs in the background. If it doesn't freeze your PC, try increasing the value of a. To stop it:

import os

# First we send termination signals
os.system("TASKKILL /im pythonw.exe")
os.system("TASKKILL /im python.exe")
print("Forceful termination")
# Now we forcefully terminate
# pythonw.exe if running in IDLE or in the background
os.system("TASKKILL /im python.exe /f")
os.system("TASKKILL /im pythonw.exe /f")
os.system("pause")
