
I have captured a crash dump of my 32 bit .NET application running on a 64 bit Windows operating system. During the analysis somebody found out that I have a 64 bit dump and told me that it is not possible to analyze this dump due to wrong bitness.

When using Windows Task Manager to create the dump, I was not aware that I was doing something wrong. This always worked for 32 bit operating systems.

How can I take a good dump for .NET, especially with the correct bitness?



Why is bitness relevant here?

The bitness matters for .NET applications for the following reasons:

  • a DAC (data access component) library (mscordacwks.dll) of the correct bitness is needed. There's no cross-bitness DAC available.
  • the debugger needs to be able to load the SOS debugging extension of the correct bitness

It is not possible to convert a dump from 64 bit to 32 bit, although in theory it should contain all necessary information.

If you're feeling lucky, you can still try these instructions anyway:

  • How to use Windbg to debug a dump of a 32bit .NET app running on a x64 machine

How to detect the bitness of an application?

If you don't know the bitness, you can find it out like this:

Windows 7 Task Manager shows *32 on processes: Windows 7 Task Manager

In Windows 8 task manager, go to the Details tab and add a column named Platform: Windows 8 Task Manager

Visual Studio shows the bitness when attaching to the process: Bitness in Visual Studio

Process Explorer can be configured to show the Image Type column: Bitness in Process Explorer


Programs which detect bitness automatically:

  • Process Explorer
  • ProcDump
  • Microsoft Visual Studio
  • Windows Error Reporting LocalDumps
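You can also determine the bitness yourself by reading the PE header of the application's EXE file: the COFF `Machine` field right after the `PE\0\0` signature identifies the target architecture. Below is a minimal sketch in Python (the function name `pe_bitness` and the hand-built header are illustrative, not a real executable):

```python
import struct

# Machine values from the PE/COFF specification (a small subset)
MACHINE_NAMES = {
    0x014C: "32 bit (x86)",
    0x8664: "64 bit (x64)",
    0xAA64: "64 bit (ARM64)",
}

def pe_bitness(data: bytes) -> str:
    """Return the bitness of a PE image, given its raw bytes."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file (missing MZ header)")
    # e_lfanew at offset 0x3C points to the "PE\0\0" signature
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    # The COFF Machine field directly follows the signature
    (machine,) = struct.unpack_from("<H", data, e_lfanew + 4)
    return MACHINE_NAMES.get(machine, hex(machine))

# Demo with a minimal, hand-built header (not a runnable executable):
header = bytearray(0x46)
header[0:2] = b"MZ"
struct.pack_into("<I", header, 0x3C, 0x40)    # e_lfanew -> 0x40
header[0x40:0x44] = b"PE\x00\x00"
struct.pack_into("<H", header, 0x44, 0x014C)  # IMAGE_FILE_MACHINE_I386
print(pe_bitness(bytes(header)))              # 32 bit (x86)
```

For a real process you would open its EXE file and pass the first few kilobytes to the function.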

Tools which capture a dump with specific bitness:

  • 64 Bit: default Task Manager on a 64 bit OS
  • 32 Bit: Task Manager run from %windir%\SysWOW64\taskmgr.exe on a 64 Bit OS
  • 64 Bit: ProcDump run with the -64 command line switch
  • 32 Bit: WinDbg x86 version
  • 64 Bit: WinDbg x64 version
  • 32 Bit: DebugDiag x86 version
  • 64 Bit: DebugDiag x64 version
  • 32 Bit: ADPlus x86 version
  • 64 Bit: ADPlus x64 version

Just choose the bitness according to your application, not according to the OS.

Why is memory relevant here?

For .NET you need a full memory dump, otherwise you cannot figure out the content of the objects. To include full memory, do the following:

  • in WinDbg, specify /ma when doing .dump
  • in Process Explorer, choose "Create full dump" (although technically, the result is still a minidump)
  • in ProcDump, apply the -ma command line switch
  • in Visual Studio, choose "Minidump with heap"
  • Task Manager will always create a dump with full memory
  • For Windows Error Reporting LocalDumps set DumpType to 2
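For the Windows Error Reporting case, a minimal LocalDumps configuration looks like this (DumpType 2 means a full dump; DumpCount 10 is an arbitrary example of how many dumps to keep):

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps]
"DumpType"=dword:00000002
"DumpCount"=dword:0000000a
```

By default the dumps end up in %LOCALAPPDATA%\CrashDumps, and WER matches the bitness of the crashing process automatically.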

Visual Studio instructions

I found out that many developers aren't even aware that Visual Studio can create dumps. The reason is probably that the menu item stays invisible for most of a debugging session. These are the steps:

  • Start Visual Studio: menu is invisible
  • Attach to a process: menu is still invisible
  • Break: menu becomes visible (find it under Debug / Save dump as)

Why 64 bit dumps of 32 bit applications at all?

Probably just for debugging the WoW64 layer itself.

Tuesday, June 1, 2021
answered 6 Months ago

Standard Java practice is to simply write:

final int prime = 31;
int result = 1;
for (String s : strings) {
    result = result * prime + s.hashCode();
}
// result is the hash code
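The loop above produces exactly the same value as the JDK's own helpers: Objects.hash(...) and Arrays.hashCode(Object[]) use the identical 31-based formula. A small sketch (the class name HashDemo is just for illustration):

```java
import java.util.Objects;

public class HashDemo {
    // Manual 31-based combination, as in the loop above.
    static int combine(String... strings) {
        final int prime = 31;
        int result = 1;
        for (String s : strings) {
            result = result * prime + s.hashCode();
        }
        return result;
    }

    public static void main(String[] args) {
        int manual = combine("alpha", "beta");
        int builtin = Objects.hash("alpha", "beta");
        // Both compute 31 * (31 * 1 + "alpha".hashCode()) + "beta".hashCode()
        System.out.println(manual == builtin); // true
    }
}
```

So unless you need the loop for clarity, Objects.hash(a, b, c) gets you the same result in one call.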
Tuesday, June 22, 2021
answered 6 Months ago

Differences between full memory dump files and mini memory dump files

A memory dump file can collect a variety of information. Typically, a support engineer must have all the contents of virtual memory to troubleshoot a problem. In other cases, you might want to capture less information to focus on a specific problem. The debugger is flexible. This flexibility lets you limit the information that a memory dump file captures by collecting either full memory dump files or mini memory dump files:

  • Full memory dump files. These files contain the contents of virtual memory for a process. These files are the most useful when you are troubleshooting unknown issues. A support engineer can use these files to look anywhere in memory to locate any object, pull up the variable that was loaded on any call stack, and disassemble code to help diagnose the problem. The disadvantage of full memory dump files is that they are large. It also may take additional time to collect these files, and the process that is being recorded must be frozen while the dump file is created.
  • Mini memory dump files. A mini dump file is more configurable than a full dump file and can range from only several megabytes (MB) up to the size of a full dump file. The size differs because of the amount of virtual memory that the debugger is writing to disk. Although you can gather mini memory dump files quickly and they are small, they also have a disadvantage. Mini dump files may contain much less information than full dump files. The information that a mini dump file gathers may be virtually useless to a support engineer if the area of memory that the support engineer has to troubleshoot was not captured. For example, if the heap memory is not written to the memory dump file, a support engineer cannot examine the contents of a message that was being processed at the time that the problem occurred. Useful information, such as the subject line and the recipient list, would be lost.

An extract from Microsoft's documentation.

Sunday, August 1, 2021
answered 4 Months ago

Stay away from super-short magic numbers. Just because you're designing a binary format doesn't mean you can't use a text string as the identifier. Follow it with an EOF character, and as an added bonus, people who cat or type your binary file won't get a mangled terminal.
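As a minimal sketch of that advice (the format name "MYFMT" is made up; 0x1A is the Ctrl-Z/EOF character that stops `type` on Windows):

```python
# A readable identifier, a version, then Ctrl-Z (0x1A) so that dumping
# the file to a terminal stops before the binary payload.
MAGIC = b"MYFMT 1.0\x1a"

def pack(payload: bytes) -> bytes:
    """Prefix the payload with the magic header."""
    return MAGIC + payload

def unpack(data: bytes) -> bytes:
    """Validate the header and return the payload."""
    if not data.startswith(MAGIC):
        raise ValueError("not a MYFMT file")
    return data[len(MAGIC):]

blob = pack(b"\x00\x01binary data")
print(unpack(blob))  # b'\x00\x01binary data'
```

Anyone inspecting the file with a hex editor (or plain `type`/`cat`) sees "MYFMT 1.0" immediately, which a four-byte magic number can't offer.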

Sunday, September 26, 2021
Jeremy Pare
answered 2 Months ago

I'm by far no expert here, but some more information that might be useful:

  • According to this, GC threads are created at CLR startup, at least for the server GC, so running out of threads for a GC is probably not even possible ;-)

  • The "Disabled" in the "GC" column of thread 21 just means that it decided not to be preempted by an eventual GC operation. This happens when the code on the thread is doing a critical operation that should not be disturbed by a GC (like loading an assembly, hence fusion).

  • From the "kb" command output I would guess that you are actually using the server GC (stack frame "mscorwks!SVR::gc_heap::make_heap_segment"; the workstation GC would show the namespace "WKS" instead). This is not unexpected, as it is the default on a server operating system. You can verify it with the "!eeversion" command. Additionally, find out how many cores you have, because if the server GC runs, it uses one thread per logical/physical core.

Could it be that the timer is firing very often, or faster than the previous callback finishes? You can get an overview of thread pool usage with the "!ThreadPool" command.

Also, you might want to check the actual arguments to the methods and the locals (!clrstack -a) and/or the current objects on the stack (!dso). Maybe that sheds some more light.

As a wild guess, some googling for "System.Net.ConnectionPool.CleanupCallbackWrapper" yields the following links; maybe one of these is your problem:

  • Debugging high cpu usage
  • SmtpClient does not close session after sending message
  • issue with high cpu usage in web app with no load
Tuesday, October 5, 2021
answered 2 Months ago