
I have an iOS app written in Swift that is leaking memory: in certain situations, some objects should be released, but they are not. I learnt about the issue by simply adding deinit debug messages like this:

deinit {
    println("DEINIT: KeysProvider released")
}

So, the deinit message should appear in the console after events that should cause the object to be released. However, for some of the objects that should be released, the message is missing. Still, the Leaks instrument does not show any leaks. How do I debug such a situation?

 Answers

55

In Xcode 8, you can click the "Debug Memory Graph" button in the debug bar (shown at the bottom of the screen):

[screenshot: Debug Memory Graph with the object graph in the main canvas]

Just identify the object in the left panel that you think should have been deallocated, and the main canvas (above) shows you its object graph. This is very useful for quickly identifying where the strong references to the object in question were established. From there you can diagnose why those strong references were never released: for example, if the object in question is strongly referenced by something else that should itself have been deallocated, look at that object's graph too, and you may find the culprit (strong reference cycles, repeating timers, and so on).
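
For instance, a repeating timer is one of the most common ways an object stays pinned in memory without the Leaks tool reporting anything. Here is a minimal Swift sketch (the KeysProvider name is borrowed from the question; the timer scenario itself is a hypothetical illustration, not the asker's actual code):

import Foundation

class KeysProvider {
    private var timer: Timer?

    func startRefreshing() {
        // The run loop retains the timer and the timer retains its closure.
        // Capturing self strongly here would keep KeysProvider alive forever;
        // [weak self] breaks that chain.
        timer = Timer.scheduledTimer(withTimeInterval: 60, repeats: true) { [weak self] _ in
            self?.refreshKeys()
        }
    }

    func stopRefreshing() {
        // Without invalidate(), the run loop keeps the timer (and whatever it
        // captured) alive indefinitely.
        timer?.invalidate()
        timer = nil
    }

    private func refreshKeys() { /* fetch new keys */ }

    deinit {
        print("DEINIT: KeysProvider released")
    }
}

Objects held this way show up in the memory graph with an incoming reference from the timer or its closure, which is exactly the kind of edge to look for in the canvas.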

Notice that in the right panel I'm seeing the call tree. I got that by turning on the "Malloc Stack" logging option in the scheme settings:

[screenshot: "Malloc Stack" logging option in the scheme editor]

Anyway, having done that, you can click the arrow next to the relevant method call shown in the stack trace in the right panel of the first screenshot above, and see where that strong reference was originally established:

[screenshot: source code where the strong reference was established]

The above memory diagnostic technique (and more) is demonstrated in the latter part of the WWDC 2016 video Visual Debugging with Xcode.


The traditional Instruments technique (especially useful if using older versions of Xcode) is described below, in my original answer.


I would suggest using Instruments' "Allocations" tool with the "Record Reference Counts" feature:

[screenshot: "Record Reference Counts" option in the Allocations tool]

You can then run the app in Instruments, search for the class that you know is leaking, and drill in by clicking the arrow:

[screenshot: Allocations list in Instruments]

You can then drill into the details and look at the stack trace using the "Extended Details" panel on the right:

[screenshot: "Extended Details" panel with the stack trace]

In that "Extended Details" panel, focus on your code in black rather than the system calls in gray. Anyway, from the "Extended Details" panel, you can then drill into your source code, right in Instruments::

[screenshot: source view in Instruments]

For more information and demonstrations of using Instruments to track down memory problems, please refer to:

  • WWDC 2013 video Fixing Memory Issues
  • WWDC 2012 video iOS App Performance: Memory
Tuesday, June 1, 2021
 
capsid
answered 7 Months ago
93

Your Rust function do_something constructs a temporary CString, takes a pointer into it, and then drops the CString. The *const c_char is invalid from the instant you return it. If you're on nightly, you probably want CString::into_ptr instead of CString::as_ptr, as the former consumes the CString without deallocating its memory (into_ptr has since been stabilized as CString::into_raw). On stable, you can mem::forget the CString. Either way, you then need to worry about who is supposed to free that memory.

Freeing from Python will be tricky or impossible, since Rust may use a different allocator. The best approach would be to expose a Rust function that takes a c_char pointer, reconstructs a CString from that pointer (rather than copying the data into a new allocation), and drops it. Unfortunately, the middle part (reconstructing the CString) was impossible on stable at the time: CString::from_ptr was unstable (it has since been stabilized as CString::from_raw).

A workaround would be to pass (a pointer to) the entire CString to Python and provide an accessor function to get the char pointer from it. You simply need to box the CString and transmute the Box into a raw pointer. Then you can have another function that transmutes the pointer back into a Box and lets it drop.

Wednesday, July 7, 2021
 
linjuming
answered 5 Months ago
29

I tried this [...]

lazy var personalizedGreeting: String = { return self.name }()

it seems there are no retain cycles

Correct.

The reason is that the immediately applied closure {}() is treated as @noescape (non-escaping): it runs once, is never stored, and therefore does not keep a lasting strong reference to the captured self, so no retain cycle can form.

For reference: Joe Groff's tweet.
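
To see the difference, compare the immediately applied closure with a stored closure property. A minimal sketch (the Person type and its properties are hypothetical, chosen just for illustration):

class Person {
    let name: String
    init(name: String) { self.name = name }

    // Immediately applied closure: it runs once, the first time the property
    // is accessed, and is never stored, so capturing self creates no cycle.
    lazy var personalizedGreeting: String = { return "Hello, \(self.name)!" }()

    // Stored closure property: self holds the closure and the closure would
    // hold self, so an explicit capture list is needed to avoid a cycle.
    lazy var greeter: () -> String = { [unowned self] in "Hello, \(self.name)!" }

    deinit { print("DEINIT: Person released") }
}

With the deinit message in place, creating and releasing a Person confirms that neither property prevents deallocation, whereas removing the [unowned self] capture list from greeter would create a cycle as soon as greeter is first accessed.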

Friday, July 16, 2021
 
qitch
answered 5 Months ago
33

For the simplest approach, skip to the bottom for a description of how to do this with Visual Studio 2013.


There might be newer tools by now - perhaps something in an updated Visual Studio, and I would love to hear about them - but I have used WinDbg before with some success. Here are my old notes on how to do that:

1. Create a dump file from Task Manager
2. Run WinDbg (x64)
3. File/Open Crash Dump… (Ctrl+D)
4. Run the following:

lm
.load C:\windows\Microsoft.NET\Framework64\v4.0.30319\sos.dll
.sympath SRV*c:\localsymbols*http://msdl.microsoft.com/download/symbols
.symfix
.reload
!dumpheap -stat

Note that if your process is x86 - especially if you are running on an x64 version of Windows - you will need to use the x86 version of the debugger (WinDbg ships with both) to save the dump. SOS, the managed-memory debugging extension for WinDbg, does not support debugging x64 dumps of x86 processes. You will then also need to adjust the SOS path accordingly, so it looks like this:

.load C:\windows\Microsoft.NET\Framework\v4.0.30319\sos.dll

Possibly not all of these commands are necessary, but this is what worked for me.

Now you can find the objects of the type that seems to exist in too many instances:

!DumpHeap -type TypeName

where TypeName is just the name of the type - no fully qualified namespace necessary.

Now you can check what keeps this object in memory:

!GCRoot Object_Address

Live debugging didn't work for me, since the app seems to get suspended when you attach a debugger. I think I saw an option somewhere to keep the app running, but I forget where; for memory profiling, looking at a static dump file might be enough anyway.


You can download WinDbg as part of Windows SDK or as a standalone download of "Debugging Tools for Windows" from here.

To create a dump file, go to Task Manager, right-click a process and select "Create dump file".


Some more links:

http://blogs.microsoft.co.il/blogs/sasha/archive/2012/10/15/diagnosing-memory-leaks-in-managed-windows-store-apps.aspx

http://blogs.msdn.com/b/delay/archive/2009/03/11/where-s-your-leak-at-using-windbg-sos-and-gcroot-to-diagnose-a-net-memory-leak.aspx

http://social.msdn.microsoft.com/Forums/en-US/winappswithcsharp/thread/f3a3faa3-f1b3-4348-944c-43f11c339423

http://msdn.microsoft.com/en-us/library/bb190764.aspx

http://blogs.msdn.com/b/dougste/archive/2009/02/18/failed-to-load-data-access-dll-0x80004005-or-what-is-mscordacwks-dll.aspx


EDIT:

According to .NET Memory Allocation Profiling with Visual Studio 2012 by Stephen Toub, the PerfView tool supports analyzing leaks in .NET Windows Store apps. See the article and video walkthrough with Vance Morrison here.


EDIT 2:

Visual Studio 2013 Preview adds a new option to analyze managed memory heaps from dump files. To do it, simply pause your app in the Visual Studio debugger and save a dump through Debug/Save Dump As, then resume execution and use your app until the suspected leak happens, and take another dump. Then go to File/Open/File and open the second dump file. To the right of the Dump Summary, in the "Actions" panel, you'll see a "Debug Managed Memory" action. Select it, and under "Select Baseline" pick your first dump file. You will see a list of objects on the managed heap, grouped by type, with count diffs. Note that you would typically look first at the objects with low, non-zero count differences, to track down a single leak source. You can drill into the list of objects and see what keeps them in memory by expanding the tree in the Reference Graph view.

Thursday, July 29, 2021
 
keyBeatz
answered 4 Months ago
64

The issue was that I was measuring the memory during the init method of the new scene. The report therefore included the assets of the previous scene (which had not yet been deallocated).

Adding a callback with a 0.1 second delay solved it (a sketch of the delayed measurement appears after the Q&A below). Answering my own questions:

Q:

someone (cit. 5) told me that resident memory could be reclaimed by my system. What does this mean? Does it mean that my App is not using that memory or is the resident memory value directly related to the memory being currently used by my App?

A:

It is directly related to the memory currently being used by my app. Using a delay in the callback of this function, plus a call to [[CCTextureCache sharedTextureCache] dumpCachedTextureInfo];, confirms this.

Q:

Any help? Is this related to using ARC?

A: Fortunately, not in this case. There were other issues causing leaks in some scenes. For example, the starting scene was a subclass of another scene, and it added some child nodes that were never removed in the scene's cleanup method. Adding an explicit removal of those child nodes solved the issue. I am not sure why this was necessary (I expected all child nodes to be removed automatically), but it solved the issue.
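
A minimal sketch of the delayed measurement in modern Swift (the reportMemoryUsage() helper and the sceneDidFinishTransition() hook are hypothetical names; the original project used cocos2d's scheduler rather than GCD):

import Foundation

// Hypothetical helper standing in for whatever memory logging the project uses,
// e.g. reading the resident size via task_info and printing it.
func reportMemoryUsage() {
}

func sceneDidFinishTransition() {
    // Measure shortly after the transition instead of in init, so the previous
    // scene has already had a chance to deallocate.
    DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
        reportMemoryUsage()
    }
}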

Friday, August 13, 2021
 
alioygur
answered 4 Months ago