Asked 7 months ago · Answers: 5 · Viewed 55 times

Is there a way for a Python program to determine how much memory it's currently using? I've seen discussions about memory usage for a single object, but what I need is total memory usage for the process, so that I can determine when it's necessary to start discarding cached data.

 Answers

29

Here is a portable solution that works across operating systems, including Linux and Windows:

import os, psutil
process = psutil.Process(os.getpid())
print(process.memory_info().rss)  # in bytes 

With Python 2.7 and psutil 5.6.3, the last line should instead be

print(process.memory_info()[0])

(the memory_info() API changed in a later release).

Note:

  • run pip install psutil if it is not installed yet

  • handy one-liner if you quickly want to know how many MiB your process takes:

    import os, psutil; print(psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2)
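Building on this, here is a minimal sketch of the pattern the question asks about: evict cached entries once the process RSS crosses a byte budget. The trim_cache helper and its FIFO eviction order are my own assumptions for illustration, not part of the original answer.

```python
import os
import psutil

def trim_cache(cache, budget_bytes):
    """Evict entries (oldest-inserted first) until RSS is under budget_bytes."""
    proc = psutil.Process(os.getpid())
    while cache and proc.memory_info().rss > budget_bytes:
        del cache[next(iter(cache))]  # dicts preserve insertion order, so this is the oldest key

cache = {i: bytes(4096) for i in range(1000)}
trim_cache(cache, 0)   # a budget of 0 means everything gets evicted
print(len(cache))      # 0
```

Note that RSS only shrinks when the allocator actually returns freed memory to the OS, so a realistic budget check should be treated as approximate.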
    
Tuesday, June 1, 2021
 
Farnabaz
answered 7 Months ago
76

You might simply be hitting the default behaviour of the Linux memory allocator.

Basically, Linux has two allocation strategies: sbrk() for small blocks of memory and mmap() for larger ones. Memory allocated with sbrk() cannot easily be returned to the system, while mmap()-based blocks can (just unmap the pages).

So if you allocate a memory block larger than the threshold at which the malloc() implementation in your libc switches from sbrk() to mmap(), you see this effect. See the mallopt() call, especially MMAP_THRESHOLD (http://man7.org/linux/man-pages/man3/mallopt.3.html).

Update: To answer your extra question: yes, it is expected that you leak memory that way if the allocator works like the libc one on Linux. If you used the Windows LowFragmentationHeap instead, it would probably not leak; similarly on AIX, depending on which malloc is configured. Some of the other allocators (tcmalloc etc.) may also avoid this issue. sbrk() is blazingly fast but suffers from memory fragmentation. CPython cannot do much about it, as it has no compacting garbage collector, only simple reference counting.

Python offers a few ways to reduce buffer allocations; see for example this blog post: http://eli.thegreenplace.net/2011/11/28/less-copies-in-python-with-the-buffer-protocol-and-memoryviews/
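The sbrk()/mmap() effect described above can be observed from Python with psutil. This is a sketch: the exact numbers depend on your libc and its MMAP_THRESHOLD, and the RSS drop after del assumes a glibc-style allocator that unmaps large blocks on free.

```python
import os
import psutil

proc = psutil.Process(os.getpid())

rss_before = proc.memory_info().rss
big = bytearray(100 * 1024 * 1024)   # well above MMAP_THRESHOLD, so served by mmap()
rss_during = proc.memory_info().rss
del big                              # the mmap()-backed block can be unmapped immediately
rss_after = proc.memory_info().rss

print(rss_during - rss_before)  # grows by roughly 100 MB (bytearray zero-fills, touching every page)
print(rss_during - rss_after)   # most of it is returned to the OS
```

Repeating the experiment with many small objects instead of one large block would typically not show the RSS drop, because those allocations come from the sbrk() heap.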

Tuesday, August 10, 2021
 
David542
answered 4 Months ago
77

There isn't a stated or fixed amount of memory available to apps on iOS devices.

That said, there are game apps that are reported to use over 55MB of memory, however the OS is also reported to kill these games some significant percentage of the time if not run right after a device reset.

If you use 22 MB of memory or less, the OS could still kill your app when memory runs low, but it would also have to kill a massive percentage of the other apps in the App Store first, so you would be in very good company.

When any app (foreground or background) requests enough memory to start depleting the memory pool sufficiently, memory warnings are sent to other apps. If the memory pool gets small enough, apps are killed, including possibly the foreground app if it's a big memory hog.

Friday, August 13, 2021
 
Luis González
answered 4 Months ago
90

This is not a good way of handling memory management. By the time you see MemoryError, you're already in a critical state: the kernel is probably close to killing processes to free up memory, and on many systems you'll never see the exception at all, because the machine will swap heavily or the OOM killer will terminate your process rather than let an allocation fail.

The only case where you're likely to see a recoverable MemoryError is after trying to make a very large allocation that doesn't fit in the available address space, which is common only on 32-bit systems.

If you want a cache that frees memory as needed for other allocations, it needs to interface not with errors but with the allocator itself. That way, when you need to release memory for an allocation, you will know how much contiguous memory is needed; otherwise you are guessing blindly. It also means you can track allocations as they happen and keep memory usage at a specific level, rather than letting it grow unfettered and then trying to recover when it gets too high.

I'd strongly suggest, though, that for most applications this sort of caching behaviour is overcomplicated; you're usually better off just dedicating a set amount of memory to the cache.
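The "set amount of memory" approach can be sketched as an LRU cache that tracks its own approximate cost and evicts before it grows past a budget. The BoundedCache class below is illustrative, not from the original answer, and sys.getsizeof measures only shallow object size, so the budget is approximate.

```python
import sys
from collections import OrderedDict

class BoundedCache:
    """LRU cache that evicts least-recently-used entries to stay under a byte budget."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.size = 0                 # running total of shallow value sizes
        self.data = OrderedDict()     # insertion/access order = recency order

    def put(self, key, value):
        if key in self.data:
            self.size -= sys.getsizeof(self.data.pop(key))
        self.data[key] = value
        self.size += sys.getsizeof(value)
        # Evict least-recently-used entries, but always keep at least one,
        # even if a single value exceeds the budget on its own.
        while self.size > self.budget and len(self.data) > 1:
            _, evicted = self.data.popitem(last=False)
            self.size -= sys.getsizeof(evicted)

    def get(self, key):
        value = self.data[key]
        self.data.move_to_end(key)    # mark as most recently used
        return value
```

Because eviction happens at insertion time, memory stays bounded instead of growing until a failure forces recovery, which is exactly the point made above.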

Saturday, August 14, 2021
 
rorymorris
answered 4 Months ago
98

You can get the memory info using

ActivityManager activityManager = (ActivityManager) getSystemService(ACTIVITY_SERVICE);
ActivityManager.MemoryInfo mi = new ActivityManager.MemoryInfo();
activityManager.getMemoryInfo(mi);  // fills in availMem, totalMem and lowMemory

and for a particular process use

Debug.MemoryInfo[] memoryInfos = activityManager.getProcessMemoryInfo(new int[]{processId});

which returns an array of Debug.MemoryInfo objects, one per requested process id.

See these three related questions:

Get Memory Usage in Android

How to get current memory usage in android?

How do I discover memory usage of my application in Android?

Saturday, August 21, 2021
 
adrtam
answered 4 Months ago