
I can find lots of tutorials on how to overcome the out-of-memory error. The solution is always the same: increase the memory limit in php.ini or in .htaccess - what a surprise...
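For what it's worth, the fix those tutorials describe usually looks like one of these (the 128M value is only an illustration, not a recommendation):

```ini
; php.ini
memory_limit = 128M
```

```apache
# .htaccess (only works when PHP runs as an Apache module)
php_value memory_limit 128M
```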

I actually don't understand the error message:

Fatal error: Out of memory (allocated 32016932) (tried to allocate 25152 bytes)

"Allocated 32016932" means 32,016,932 bytes have been allocated, as in: the PHP script is using roughly 32 MB? "Tried to allocate 25152" means the script tried to allocate another 25,152 bytes (about 25 KB) but failed because the maximum (of ~32 MB?) had been reached?

What can I actually tell from this error message, besides that I'm "out of memory"?
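A quick sanity check of the two numbers in the message (Python, purely illustrative):

```python
allocated = 32_016_932      # bytes already allocated, from the error message
requested = 25_152          # bytes of the allocation that failed

# "allocated" is about 32.0 decimal MB (30.5 MiB), and the failed
# request was only about 25 KB on top of that.
print(f"{allocated / 1e6:.1f} MB allocated ({allocated / 1024**2:.1f} MiB)")
print(f"{requested / 1024:.1f} KiB requested")
```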



I always interpreted it as:

Fatal error: Out of memory ([currently] allocated 32016932) (tried to allocate [additional] 25152 bytes)

But it's a good question whether there is a bulletproof explanation. Note that PHP actually emits two different fatal errors: "Allowed memory size of N bytes exhausted" when the memory_limit setting is hit, and plain "Out of memory" (as here) when the underlying allocation itself fails.

Wednesday, March 31, 2021
answered 7 Months ago

I have finally found the answer. The clue came from pcguru's answer beginning 'Since the server has only 1 GB of RAM...'.

On a hunch I looked to see whether Apache had memory limits of its own, as those were likely to affect PHP's ability to allocate memory. Right at the top of httpd.conf I found this directive: RLimitMEM 204535125

This is put there by WHM/cPanel. According to the following webpage, WHM/cPanel calculates this value incorrectly on a virtual server...

The script that runs out of memory gets most of the way through, so I increased RLimitMEM to 268435456 (256 MB) and reran the script. This time it completed its array merge and produced the CSV file for download.

ETA: After further reading about RLimitMEM and RLimitCPU, I decided to remove them from httpd.conf entirely. This allows ini_set('memory_limit','###M') to work, and I now give that particular script the extra memory it needs. I also doubled the RAM on the server.

Thank you to everyone for your help in detecting this rather thorny issue, and especially to pcguru who came up with the vital clue that got me to the solution.

Wednesday, July 21, 2021
answered 3 Months ago

The most memory-hungry component in Symfony is the profiler. If you don't need the profiler in particular actions, you can disable it in code:

if ($this->container->has('profiler')) {
    $this->container->get('profiler')->disable();
}

You can also disable data collection globally in the framework configuration:

    framework:
        profiler:
            collect: false
Friday, July 30, 2021
answered 3 Months ago

You created a fire-hose problem. After deadlocks and threading races, it is probably the third most likely problem caused by threads, and just as hard to diagnose.

Easiest to see with the debugger's Debug + Windows + Threads window: look at the thread that is executing CreateRandomFile(). With some luck you'll see it has already completed and has written all 99 MB. But the progress reported on the console is far behind, having only reported about 125 KB written, give or take.

The core issue is the way Progress<T>.Report() works. It uses SynchronizationContext.Post() to invoke the ProgressChanged event handler. In a console-mode app that ends up calling ThreadPool.QueueUserWorkItem(). That's quite fast, so your CreateRandomFile() method isn't slowed down much by it.

But the event handler itself is a lot slower; console output is not fast. So in effect you are adding thread-pool work requests at an enormous rate, 99 million of them in a handful of seconds. There is no way for the thread-pool scheduler to keep up: only a handful of them execute at the same time, all competing to write to the console, and only one at a time can acquire the underlying lock.
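The effect is easy to reproduce outside .NET. A minimal sketch in Python (all names are illustrative; the queue stands in for the thread pool's work-item backlog):

```python
import queue
import threading
import time

# The queue stands in for the thread pool's backlog of posted work items.
backlog = queue.Queue()

def slow_consumer():
    # Stand-in for the ProgressChanged handler: console output is slow.
    while True:
        backlog.get()
        time.sleep(0.001)
        backlog.task_done()

threading.Thread(target=slow_consumer, daemon=True).start()

# Stand-in for the worker calling Report() on every write:
# posting is cheap, so the producer races far ahead of the consumer.
for i in range(50_000):
    backlog.put(i)

print(f"pending work items: {backlog.qsize()}")  # almost all still queued
```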

So it is the thread-pool scheduler that causes the OOM: it is forced to store the millions of pending work requests.

And sure, if you call Report() less frequently then the fire-hose problem is much less severe. It isn't actually that simple to guarantee it never causes a problem, although directly calling Console.Write() is an obvious fix. Ultimately, keep it simple and produce output that is actually useful to a human: nobody likes a crazily scrolling window or a blur of text. Reporting progress no more than 20 times per second is plenty for the user's eyes, and the console has no trouble keeping up with that.
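The 20-reports-per-second advice can be packaged in a few lines. A sketch in Python (the names are mine, not from any .NET API):

```python
import time

class ThrottledProgress:
    """Forward progress reports, but no more often than `min_interval` seconds."""

    def __init__(self, report, min_interval=0.05):  # 0.05 s = 20 reports/second
        self._report = report
        self._min_interval = min_interval
        self._last = float("-inf")

    def update(self, value, force=False):
        now = time.monotonic()
        if force or now - self._last >= self._min_interval:
            self._last = now
            self._report(value)

# Usage: a hundred thousand updates collapse to a handful of reports.
seen = []
progress = ThrottledProgress(seen.append, min_interval=1.0)
for i in range(100_000):
    progress.update(i)
progress.update(100_000, force=True)  # always deliver the final value
print(len(seen))
```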

Sunday, August 29, 2021
Mark Ransom
answered 2 Months ago

While you can rely on this method working correctly, the exception is very likely to trip in a 32-bit process when you ask for 250 megabytes. That much contiguous address space becomes difficult to get once the program has been running for a while.

A program never crashes with OOM because you've consumed all available virtual memory address space. It crashes because there isn't a hole left in the address space that's big enough to fit the allocation. Your code requests a hole big enough to hold 250 megabytes in one gulp. If you don't get the exception, you can be sure that such an allocation will not fail.

But 250 megabytes is rather a lot; that's a really big array, and the request is very likely to fail due to a problem called "address space fragmentation". A program typically starts out with several very large holes, the largest about 600 megabytes: the gaps between the allocations made to store the code and data used by the .NET runtime and unmanaged Windows DLLs. As the program allocates more memory, those holes get smaller. It may release some memory again, but that doesn't reproduce a big hole; you typically end up with two holes, each roughly half the size of the original, with an allocation somewhere in the middle that cut the original big hole in two.

This is called fragmentation: a 32-bit process that allocates and releases a lot of memory ends up fragmenting the virtual address space, so the biggest hole still available keeps shrinking; around 90 megabytes is fairly typical after a while. Asking for 250 megabytes is then almost guaranteed to fail. You will need to aim lower.

You no doubt expected it to work differently: ensuring that a sum of allocations adding up to 250 megabytes is guaranteed to work. That is not how MemoryFailPoint works; it only checks for the largest possible single allocation. Needless to say, this makes it less than useful. I do otherwise sympathize with the .NET framework programmers: getting it to work the way we'd like is both expensive and cannot actually provide a guarantee, since the size of an individual allocation matters most.

Virtual memory is a plentiful resource that's incredibly cheap, but getting close to consuming it all is very troublesome. Once a 32-bit process consumes a gigabyte of it, OOM striking at random starts to get likely. Don't forget the easy fix for this problem: you are running on a 64-bit operating system, so just changing the EXE platform target to AnyCPU gets you gobs and gobs of virtual address space. Depending on the OS edition, a terabyte is possible. It still fragments, but you just don't care anymore; the holes are huge.
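For reference, the platform-target change is a small project setting. A sketch for an SDK-style .csproj (on older project formats it's the Build > Platform target option in Visual Studio, where "Prefer 32-bit" must also be unchecked):

```xml
<PropertyGroup>
  <PlatformTarget>AnyCPU</PlatformTarget>
  <!-- On .NET Framework, also make sure 32-bit is not preferred: -->
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>
```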

Last but not least, as visible in the comments, this problem has nothing to do with RAM. Virtual memory is quite unrelated to how much RAM you have; it is the operating system's job to map virtual memory addresses to physical RAM addresses, and it does so dynamically. Accessing a memory location may trip a page fault, upon which the OS allocates RAM for the page; the reverse also happens, with the OS unmapping the RAM behind a page when it is needed elsewhere. You can never truly run out of RAM: the machine would slow to a crawl long before that could happen. SysInternals' VMMap utility is a nice way to see what your program's virtual address space looks like, albeit that you tend to drown in the info for a large process.

Thursday, September 16, 2021
answered 1 Month ago