
I've read in different places that it is done for "performance reasons", but I still wonder what the particular cases are where performance gets improved by this 16-byte alignment. Or, in any case, what the reasons were for choosing it.

edit: I'm thinking I wrote the question in a misleading way. I wasn't asking about why the processor does things faster with 16-byte aligned memory; that is explained everywhere in the docs. What I wanted to know instead is how the enforced 16-byte alignment is better than just letting programmers align the stack themselves when needed. I'm asking this because, from my experience with assembly, the stack enforcement has two problems: it is only useful for less than 1% of the code that is executed (so in the other 99% it is actually overhead), and it is also a very common source of bugs. So I wonder how it really pays off in the end. While I'm still in doubt about this, I'm accepting Peter's answer as it contains the most detailed answer to my original question.

 Answers

45

Note that the current version of the i386 System V ABI used on Linux also requires 16-byte stack alignment (see footnote 1). See https://sourceforge.net/p/fbc/bugs/659/ for some history, and my comment on https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40838#c91 for an attempt at summarizing the unfortunate history of how i386 GNU/Linux + GCC accidentally got into a situation where a backwards-incompatible change to the i386 System V ABI was the lesser of two evils.

Windows x64 also requires 16-byte stack alignment before a call, presumably for similar motivations as x86-64 System V.

Also, semi-related: x86-64 System V requires that global arrays of 16 bytes and larger be aligned by 16. Same for local arrays of >= 16 bytes or variable size, although that detail is only relevant across functions if you know that you're being passed the address of the start of an array, not a pointer into the middle. (Different memory alignment for different buffer sizes). It doesn't let you make any extra assumptions about an arbitrary int *.


SSE2 is baseline for x86-64, and making the ABI efficient for types like __m128, and for compiler auto-vectorization, was one of the design goals, I think. The ABI has to define how such args are passed as function args, or by reference.

16-byte alignment is sometimes useful for local variables on the stack (especially arrays), and guaranteeing 16-byte alignment means compilers can get it for free whenever it's useful, even if the source doesn't explicitly request it.

If the stack alignment relative to a 16-byte boundary wasn't known, every function that wanted an aligned local would need an and rsp, -16, plus extra instructions to save/restore rsp after that adjustment by an unknown offset (either 0 or -8), e.g. using up rbp for a frame pointer.

Without AVX, memory source operands have to be 16-byte aligned. e.g. paddd xmm0, [rsp+rdi] faults if the memory operand is misaligned. So if alignment isn't known, you'd have to either use movups xmm1, [rsp+rdi] / paddd xmm0, xmm1, or write a loop prologue / epilogue to handle the misaligned elements. For local arrays that the compiler wants to auto-vectorize over, it can simply choose to align them by 16.
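
To make that concrete, here is a minimal sketch in C intrinsics (function name hypothetical) of the two patterns a compiler has to choose between when alignment isn't provable:

#include <stddef.h>
#include <emmintrin.h>   /* SSE2 */

/* Sum ints in 16-byte chunks. With alignment unknown, the compiler must do an
   unaligned load (movdqu) into a register and then a separate paddd. If it
   knows p is 16-byte aligned (e.g. a local array it aligned itself), it can
   use an aligned load, or fold the load into paddd as a memory operand. */
__m128i sum_chunks(const int *p, size_t n)
{
    __m128i acc = _mm_setzero_si128();
    for (size_t i = 0; i + 4 <= n; i += 4) {
        __m128i v = _mm_loadu_si128((const __m128i *)(p + i)); /* movdqu: safe for any p */
        acc = _mm_add_epi32(acc, v);                           /* paddd */
    }
    return acc;   /* caller would finish with a horizontal sum of the 4 lanes */
}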

Also note that early x86 CPUs (before Nehalem / Bulldozer) had a movups instruction that was slower than movaps even when the pointer turned out to be aligned. (i.e. unaligned loads/stores on aligned data were extra slow, as well as preventing folding loads into an ALU instruction.) (See Agner Fog's optimization guides, microarch guide, and instruction tables for more about all of the above.)

These factors are why a guarantee is more useful than just "usually" keeping the stack aligned. Being allowed to make code which actually faults on a misaligned stack allows more optimization opportunities.

Aligned arrays also speed up vectorized memcpy / strcmp / whatever functions that can't assume alignment, but instead check for it and can jump straight to their whole-vector loops.
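
As a sketch of that pattern (a hypothetical helper, not any real libc's implementation):

#include <stddef.h>
#include <stdint.h>
#include <emmintrin.h>

/* Copy n bytes: test alignment once, then jump straight to a whole-vector loop. */
void copy_sse2(char *dst, const char *src, size_t n)
{
    size_t i = 0;
    if ((((uintptr_t)dst | (uintptr_t)src) & 15) == 0) {
        for (; i + 16 <= n; i += 16)        /* both 16-byte aligned: movdqa */
            _mm_store_si128((__m128i *)(dst + i),
                            _mm_load_si128((const __m128i *)(src + i)));
    } else {
        for (; i + 16 <= n; i += 16)        /* fallback: movdqu */
            _mm_storeu_si128((__m128i *)(dst + i),
                             _mm_loadu_si128((const __m128i *)(src + i)));
    }
    for (; i < n; i++)                      /* scalar tail */
        dst[i] = src[i];
}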

From a recent version of the x86-64 System V ABI (r252):

An array uses the same alignment as its elements, except that a local or global array variable of length at least 16 bytes or a C99 variable-length array variable always has alignment of at least 16 bytes. [4]

[4] The alignment requirement allows the use of SSE instructions when operating on the array. The compiler cannot in general calculate the size of a variable-length array (VLA), but it is expected that most VLAs will require at least 16 bytes, so it is logical to mandate that VLAs have at least a 16-byte alignment.

This is a bit aggressive, and mostly only helps when functions that auto-vectorize can be inlined, but usually there are other locals the compiler can stuff into any gaps so it doesn't waste stack space. And doesn't waste instructions as long as there's a known stack alignment. (Obviously the ABI designers could have left this out if they'd decided not to require 16-byte stack alignment.)
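
A quick way to see the array guarantees in action (a sketch assuming gcc or clang on x86-64 Linux; it should print three zeros):

#include <stdio.h>
#include <stdint.h>

char global_buf[64];                  /* >= 16 bytes: aligned by 16 per the ABI */

int main(void)
{
    char local_buf[32];               /* >= 16 bytes: also aligned by 16 */
    int n = 5;
    char vla[n];                      /* VLA: aligned by 16 even though it's tiny */
    printf("%lu %lu %lu\n",
           (unsigned long)((uintptr_t)global_buf & 15),
           (unsigned long)((uintptr_t)local_buf & 15),
           (unsigned long)((uintptr_t)vla & 15));
    return 0;
}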


Spill/reload of __m128

Of course, it makes it free to do alignas(16) char buf[1024]; or other cases where the source requests 16-byte alignment.

And there are also __m128 / __m128d / __m128i locals. The compiler may not be able to keep all vector locals in registers (e.g. spilled across a function call, or not enough registers), so it needs to be able to spill/reload them with movaps, or as a memory source operand for ALU instructions, for efficiency reasons discussed above.
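
For example (a sketch; opaque() stands in for any call the optimizer can't see through), v below is live across a call that clobbers all XMM registers, so it has to be spilled, and the known stack alignment lets the spill/reload be movaps:

#include <xmmintrin.h>

void opaque(void);                    /* defined elsewhere */

__m128 scale_after_call(__m128 v)
{
    opaque();                         /* all xmm regs are call-clobbered, so v is spilled */
    return _mm_mul_ps(v, _mm_set1_ps(2.0f));  /* reload can be movaps, or a memory operand */
}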

Loads/stores that actually are split across a cache-line boundary (64 bytes) have significant latency penalties, and also minor throughput penalties on modern CPUs. The load needs data from 2 separate cache lines, so it takes two accesses to the cache. (And potentially 2 cache misses, but that's rare for stack memory).

I think movups already had that cost baked in for vectors on older CPUs where it's expensive, but it still sucks. Spanning a 4k page boundary is much worse (on CPUs before Skylake), with a load or store taking ~100 cycles if it touches bytes on both sides of a 4k boundary. (Also needs 2 TLB checks). Natural alignment makes splits across any wider boundary impossible, so 16-byte alignment was sufficient for everything you can do with SSE2.


max_align_t has 16-byte alignment in the x86-64 System V ABI, because of long double (10-byte/80-bit x87). It's defined as padded to 16 bytes for some weird reason, unlike in 32-bit code where sizeof(long double) == 12. x87 10-byte load/store is quite slow anyway (like 1/3rd the load throughput of double or float on Core2, 1/6th on P4, or 1/8th on K8), but maybe cache-line and page split penalties were so bad on older CPUs that they decided to define it that way. I think on modern CPUs (maybe even Core2) looping over an array of long double would be no slower with packed 10-byte, because the fld m80 would be a bigger bottleneck than a cache-line split every ~6.4 elements.

Actually, the ABI was defined before silicon was available to benchmark on (back in ~2000), but those K8 numbers are the same as K7 (32-bit / 64-bit mode is irrelevant here). Making long double 16-byte does make it possible to copy a single one with movaps, even though you can't do anything with it in XMM registers. (Except to manipulate the sign bit with xorps / andps / orps.)

Related: this max_align_t definition means that malloc always returns 16-byte aligned memory in x86-64 code. This lets you get away with using it for SSE aligned loads like _mm_load_ps, but such code can break when compiled for 32-bit where alignof(max_align_t) is only 8. (Use aligned_alloc or whatever).
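
A sketch of that difference (C11; the first assert holds on x86-64 System V but, per the above, would fail in 32-bit code):

#include <assert.h>
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

int main(void)
{
    assert(alignof(max_align_t) >= 16);   /* x86-64 System V: true */
    float *p = malloc(64);                /* therefore 16-byte aligned here */
    assert(((uintptr_t)p & 15) == 0);     /* ok for _mm_load_ps, but not portable */
    float *q = aligned_alloc(16, 64);     /* portable: alignment requested explicitly */
    assert(((uintptr_t)q & 15) == 0);
    free(p);
    free(q);
    return 0;
}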


Other ABI factors include passing __m128 values on the stack (after xmm0-7 have the first 8 float / vector args). It makes sense to require 16-byte alignment for vectors in memory, so they can be used efficiently by the callee, and stored efficiently by the caller. Maintaining 16-byte stack alignment at all times makes it easy for functions that need to align some arg-passing space by 16.

There are types like __m128 that the ABI guarantees have 16-byte alignment. If you define a local and take its address, and pass that pointer to some other function, that local needs to be sufficiently aligned. So maintaining 16-byte stack alignment goes hand in hand with giving some types 16-byte alignment, which is obviously a good idea.

These days, it's nice that atomic<struct_of_16_bytes> can cheaply get 16-byte alignment, so lock cmpxchg16b doesn't ever cross a cache line boundary. For the really rare case where you have an atomic local with automatic storage, and you pass pointers to it to multiple threads...
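
A C11 sketch of the same idea (the C equivalent of that atomic<>; with gcc -mcx16 and libatomic, the compare-exchange below can become lock cmpxchg16b):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct pair { uint64_t lo, hi; };     /* 16 bytes */

/* The ABI's 16-byte alignment for this object means a lock cmpxchg16b on it
   can never straddle a cache-line boundary. */
bool try_update(_Atomic struct pair *p, struct pair expected, struct pair desired)
{
    return atomic_compare_exchange_strong(p, &expected, desired);
}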


Footnote 1: 32-bit Linux

Not all 32-bit platforms broke backwards compatibility with existing binaries and hand-written asm the way Linux did; some like i386 NetBSD still only use the historical 4-byte stack alignment requirement from the original version of the i386 SysV ABI.

The historical 4-byte stack alignment was also insufficient for efficient 8-byte double on modern CPUs. Unaligned fld / fstp are generally efficient except when they cross a cache-line boundary (like other loads/stores), so it's not horrible, but naturally-aligned is nice.

Even before 16-byte alignment was officially part of the ABI, GCC used to enable -mpreferred-stack-boundary=4 (2^4 = 16-bytes) on 32-bit. This currently assumes the incoming stack alignment is 16 bytes (even for cases that will fault if it's not), as well as preserving that alignment. I'm not sure if historical gcc versions used to try to preserve stack alignment without depending on it for correctness of SSE code-gen or alignas(16) objects.

ffmpeg is one well-known example that depends on the compiler to give it stack alignment: what is "stack alignment"?, e.g. on 32-bit Windows.

Modern gcc still emits code at the top of main to align the stack by 16 (even on Linux where the ABI guarantees that the kernel starts the process with an aligned stack), but not at the top of any other function. You could use -mincoming-stack-boundary to tell gcc how aligned it should assume the stack is when generating code.

Ancient gcc4.1 didn't seem to really respect __attribute__((aligned(16))) or 32 for automatic storage, i.e. it didn't bother aligning the stack any extra in this example on Godbolt, so old gcc has kind of a checkered past when it comes to stack alignment. I think the change of the official Linux ABI to 16-byte alignment happened as a de-facto change first, not a well-planned change. I haven't turned up anything official on when the change happened, but somewhere between 2005 and 2010, I think, after x86-64 became popular and the x86-64 System V ABI's 16-byte stack alignment proved useful.

At first it was a change to GCC's code-gen to use more alignment than the ABI required (i.e. using a stricter ABI for gcc-compiled code), but later it was written in to the version of the i386 System V ABI maintained at https://github.com/hjl-tools/x86-psABI/wiki/X86-psABI (which is official for Linux at least).


@MichaelPetch and @ThomasJager report that gcc4.5 may have been the first version to have -mpreferred-stack-boundary=4 for 32-bit as well as 64-bit. gcc4.1.2 and gcc4.4.7 on Godbolt appear to behave that way, so maybe the change was backported, or Matt Godbolt configured old gcc with a more modern config.

89

TL:DR: int 0x80 works when used correctly, as long as any pointers fit in 32 bits (stack pointers don't fit). But beware that strace decodes it wrong unless you have a very recent strace + kernel.

int 0x80 zeros r8-r11, and preserves everything else. Use it exactly like you would in 32-bit code, with the 32-bit call numbers. (Or better, don't use it!)

Not all systems even support int 0x80: the Windows Subsystem for Linux (WSL) is strictly 64-bit only, so int 0x80 doesn't work at all. It's also possible to build Linux kernels without IA-32 emulation. (No support for 32-bit executables, no support for 32-bit system calls.)


The details: what's saved/restored, which parts of which regs the kernel uses

int 0x80 uses eax (not the full rax) as the system-call number, dispatching to the same table of function-pointers that 32-bit user-space int 0x80 uses. (These pointers are to sys_whatever implementations or wrappers for the native 64-bit implementation inside the kernel. System calls are really function calls across the user/kernel boundary.)

Only the low 32 bits of arg registers are passed. The upper halves of rbx-rbp are preserved, but ignored by int 0x80 system calls. Note that passing a bad pointer to a system call doesn't result in SIGSEGV; instead the system call returns -EFAULT. If you don't check error return values (with a debugger or tracing tool), it will appear to silently fail.
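
For example, a minimal GNU C sketch of making such a call from 64-bit code (wrapper name hypothetical); note the r8-r11 clobbers, which match the zeroing described next:

/* 32-bit write(2) via int $0x80 from a 64-bit process. 4 is __NR_write from
   unistd_32.h. Returns the byte count, or a negative errno such as -EFAULT
   if buf doesn't fit in 32 bits. */
long write_int80(int fd, const void *buf, unsigned int len)
{
    long ret;
    asm volatile("int $0x80"
                 : "=a"(ret)
                 : "a"(4L), "b"((long)fd), "c"(buf), "d"((long)len)
                 : "r8", "r9", "r10", "r11", "memory");
    return ret;
}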

All registers (except eax of course) are saved/restored (including RFLAGS, and the upper 32 of integer regs), except that r8-r11 are zeroed. r12-r15 are call-preserved in the x86-64 SysV ABI's function calling convention, so the registers that get zeroed by int 0x80 in 64-bit are the call-clobbered subset of the "new" registers that AMD64 added.

This behaviour has been preserved over some internal changes to how register-saving was implemented inside the kernel, and comments in the kernel mention that it's usable from 64-bit, so this ABI is probably stable. (I.e. you can count on r8-r11 being zeroed, and everything else being preserved.)

The return value is sign-extended to fill 64-bit rax. (Linux declares 32-bit sys_ functions as returning signed long.) This means that pointer return values (like from void *mmap()) need to be zero-extended before use in 64-bit addressing modes.

Unlike sysenter, it preserves the original value of cs, so it returns to user-space in the same mode that it was called in. (Using sysenter results in the kernel setting cs to $__USER32_CS, which selects a descriptor for a 32-bit code segment.)


Older strace decodes int 0x80 incorrectly for 64-bit processes. It decodes as if the process had used syscall instead of int 0x80. This can be very confusing. e.g. strace prints write(0, NULL, 12 <unfinished ... exit status 1> for eax=1 / int $0x80, which is actually _exit(ebx), not write(rdi, rsi, rdx).

I don't know the exact version where the PTRACE_GET_SYSCALL_INFO feature was added, but Linux kernel 5.5 / strace 5.5 handle it. It misleadingly says the process "runs in 32-bit mode" but does decode correctly. (Example).


int 0x80 works as long as all arguments (including pointers) fit in the low 32 bits of a register. This is the case for static code and data in the default code model ("small") in the x86-64 SysV ABI. (Section 3.5.1: all symbols are known to be located at virtual addresses in the range 0x00000000 to 0x7effffff, so you can do stuff like mov edi, hello (AT&T mov $hello, %edi) to get a pointer into a register with a 5-byte instruction.)

But this is not the case for position-independent executables, which many Linux distros now configure gcc to make by default (and they enable ASLR for executables). For example, I compiled a hello.c on Arch Linux, and set a breakpoint at the start of main. The string constant passed to puts was at 0x555555554724, so a 32-bit ABI write system call would not work. (GDB disables ASLR by default, so you always see the same address from run to run, if you run from within GDB.)

Linux puts the stack near the "gap" between the upper and lower ranges of canonical addresses, i.e. with the top of the stack at 2^47-1. (Or somewhere random, with ASLR enabled.) So rsp on entry to _start in a typical statically-linked executable is something like 0x7fffffffe550, depending on the size of env vars and args. Truncating this pointer to esp does not point to any valid memory, so system calls with pointer inputs will typically return -EFAULT if you try to pass a truncated stack pointer. (And your program will crash if you truncate rsp to esp and then do anything with the stack, e.g. if you built 32-bit asm source as a 64-bit executable.)


How it works in the kernel:

In the Linux source code, arch/x86/entry/entry_64_compat.S defines ENTRY(entry_INT80_compat). Both 32 and 64-bit processes use the same entry point when they execute int 0x80.

entry_64.S defines the native entry points for a 64-bit kernel, which include interrupt / fault handlers and the syscall entry point for native system calls from long mode (aka 64-bit mode) processes.

entry_64_compat.S defines system-call entry-points from compat mode into a 64-bit kernel, plus the special case of int 0x80 in a 64-bit process. (sysenter in a 64-bit process may go to that entry point as well, but it pushes $__USER32_CS, so it will always return in 32-bit mode.) There's a 32-bit version of the syscall instruction, supported on AMD CPUs, and Linux supports it too for fast 32-bit system calls from 32-bit processes.

I guess a possible use-case for int 0x80 in 64-bit mode is if you wanted to use a custom code-segment descriptor that you installed with modify_ldt. int 0x80 pushes segment registers itself for use with iret, and Linux always returns from int 0x80 system calls via iret. The 64-bit syscall entry point sets pt_regs->cs and ->ss to constants, __USER_CS and __USER_DS. (It's normal that SS and DS use the same segment descriptors. Permission differences are done with paging, not segmentation.)

entry_32.S defines entry points into a 32-bit kernel, and is not involved at all.

The int 0x80 entry point in Linux 4.12's entry_64_compat.S:

/*
 * 32-bit legacy system call entry.
 *
 * 32-bit x86 Linux system calls traditionally used the INT $0x80
 * instruction.  INT $0x80 lands here.
 *
 * This entry point can be used by 32-bit and 64-bit programs to perform
 * 32-bit system calls.  Instances of INT $0x80 can be found inline in
 * various programs and libraries.  It is also used by the vDSO's
 * __kernel_vsyscall fallback for hardware that doesn't support a faster
 * entry method.  Restarted 32-bit system calls also fall back to INT
 * $0x80 regardless of what instruction was originally used to do the
 * system call.
 *
 * This is considered a slow path.  It is not used by most libc
 * implementations on modern hardware except during process startup.
 ...
 */
 ENTRY(entry_INT80_compat)
 ...  (see the github URL for the full source)

The code zero-extends eax into rax, then pushes all the registers onto the kernel stack to form a struct pt_regs. This is where it will restore from when the system call returns. It's in a standard layout for saved user-space registers (for any entry point), so ptrace from another process (like gdb or strace) will read and/or write that memory if it's used while this process is inside a system call. (ptrace modification of registers is one thing that makes return paths complicated for the other entry points. See comments.)

But it pushes $0 instead of r8/r9/r10/r11. (sysenter and AMD syscall32 entry points store zeros for r8-r15.)

I think this zeroing of r8-r11 is to match historical behaviour. Before the Set up full pt_regs for all compat syscalls commit, the entry point only saved the C call-clobbered registers. It dispatched directly from asm with call *ia32_sys_call_table(, %rax, 8), and those functions follow the calling convention, so they preserve rbx, rbp, rsp, and r12-r15. Zeroing r8-r11 instead of leaving them undefined was probably a way to avoid info-leaks from the kernel. IDK how it handled ptrace if the only copy of user-space's call-preserved registers was on the kernel stack where a C function saved them. I doubt it used stack-unwinding metadata to find them there.

The current implementation (Linux 4.12) dispatches 32-bit-ABI system calls from C, reloading the saved ebx, ecx, etc. from pt_regs. (64-bit native system calls dispatch directly from asm, with only a mov %r10, %rcx needed to account for the small difference in calling convention between functions and syscall. Unfortunately it can't always use sysret, because CPU bugs make it unsafe with non-canonical addresses. It does try to, so the fast-path is pretty damn fast, although syscall itself still takes tens of cycles.)

Anyway, in current Linux, 32-bit syscalls (including int 0x80 from 64-bit) eventually end up in do_syscall_32_irqs_on(struct pt_regs *regs). It dispatches to a function pointer from ia32_sys_call_table, with 6 zero-extended args. This maybe avoids needing a wrapper around the 64-bit native syscall function in more cases to preserve that behaviour, so more of the ia32 table entries can be the native system call implementation directly.

Linux 4.12 arch/x86/entry/common.c

if (likely(nr < IA32_NR_syscalls)) {
  /*
   * It's possible that a 32-bit syscall implementation
   * takes a 64-bit parameter but nonetheless assumes that
   * the high bits are zero.  Make sure we zero-extend all
   * of the args.
   */
  regs->ax = ia32_sys_call_table[nr](
      (unsigned int)regs->bx, (unsigned int)regs->cx,
      (unsigned int)regs->dx, (unsigned int)regs->si,
      (unsigned int)regs->di, (unsigned int)regs->bp);
}

syscall_return_slowpath(regs);

In older versions of Linux that dispatch 32-bit system calls from asm (like 64-bit still did until 4.15, see footnote 1), the int80 entry point itself puts args in the right registers with mov and xchg instructions, using 32-bit registers. It even uses mov %edx,%edx to zero-extend EDX into RDX (because arg3 happens to use the same register in both conventions). code here. This code is duplicated in the sysenter and syscall32 entry points.

Footnote 1: Linux 4.15 (I think) introduced Spectre / Meltdown mitigations, and a major revamp of the entry points that made them a trampoline for the Meltdown case. It also sanitized the incoming registers to avoid user-space values other than actual args being in registers during the call (when some Spectre gadget might run), by storing them, zeroing everything, then calling to a C wrapper that reloads just the right widths of args from the struct saved on entry.

I'm planning to leave this answer describing the much simpler mechanism, because the conceptually useful part here is that the kernel side of a syscall involves using EAX or RAX as an index into a table of function pointers, with the other incoming register values copied to the places where the calling convention wants args to go. I.e. syscall is just a way to make a call into the kernel, to its dispatch code.


Simple example / test program:

I wrote a simple Hello World (in NASM syntax) which sets all registers to have non-zero upper halves, then makes two write() system calls with int 0x80, one with a pointer to a string in .rodata (succeeds), the second with a pointer to the stack (fails with -EFAULT).

Then it uses the native 64-bit syscall ABI to write() the chars from the stack (64-bit pointer), and again to exit.

So all of these examples are using the ABIs correctly, except for the 2nd int 0x80 which tries to pass a 64-bit pointer and has it truncated.

If you built it as a position-independent executable, the first one would fail too. (You'd have to use a RIP-relative lea instead of mov to get the address of hello: into a register.)

I used gdb, but use whatever debugger you prefer. Use one that highlights changed registers since the last single-step. gdbgui works well for debugging asm source, but is not great for disassembly. Still, it does have a register pane that works well for integer regs at least, and it worked great on this example.

See the inline ;;; comments describing how registers are changed by system calls.

global _start
_start:
    mov  rax, 0x123456789abcdef
    mov  rbx, rax
    mov  rcx, rax
    mov  rdx, rax
    mov  rsi, rax
    mov  rdi, rax
    mov  rbp, rax
    mov  r8, rax
    mov  r9, rax
    mov  r10, rax
    mov  r11, rax
    mov  r12, rax
    mov  r13, rax
    mov  r14, rax
    mov  r15, rax

    ;; 32-bit ABI
    mov  rax, 0xffffffff00000004          ; high garbage + __NR_write (unistd_32.h)
    mov  rbx, 0xffffffff00000001          ; high garbage + fd=1
    mov  rcx, 0xffffffff00000000 + .hello
    mov  rdx, 0xffffffff00000000 + .hellolen
    ;std
after_setup:       ; set a breakpoint here
    int  0x80                   ; write(1, hello, hellolen);   32-bit ABI
    ;; succeeds, writing to stdout
;;; changes to registers:   r8-r11 = 0.  rax=14 = return value

    ; ebx still = 1 = STDOUT_FILENO
    push 'bye' + (0xa<<(3*8))
    mov  rcx, rsp               ; rcx = 64-bit pointer that won't work if truncated
    mov  edx, 4
    mov  eax, 4                 ; __NR_write (unistd_32.h)
    int  0x80                   ; write(ebx=1, ecx=truncated pointer,  edx=4);  32-bit
    ;; fails, nothing printed
;;; changes to registers: rax=-14 = -EFAULT  (from /usr/include/asm-generic/errno-base.h)

    mov  r10, rax               ; save return value as exit status
    mov  r8, r15
    mov  r9, r15
    mov  r11, r15               ; make these regs non-zero again

    ;; 64-bit ABI
    mov  eax, 1                 ; __NR_write (unistd_64.h)
    mov  edi, 1
    mov  rsi, rsp
    mov  edx, 4
    syscall                     ; write(edi=1, rsi="bye\n" on the stack,  rdx=4);  64-bit
    ;; succeeds: writes to stdout and returns 4 in rax
;;; changes to registers: rax=4 = length return value
;;; rcx = 0x400112 = RIP.   r11 = 0x302 = eflags with an extra bit set.
;;; (This is not a coincidence, it's how sysret works.  But don't depend on it, since iret could leave something else)

    mov  edi, r10d
    ;xor  edi,edi
    mov  eax, 60                ; __NR_exit (unistd_64.h)
    syscall                     ; _exit(edi = second int 0x80 result);  64-bit
    ;; succeeds, exit status = low byte of -EFAULT (-14) = 0xf2 = 242

section .rodata
_start.hello:    db "Hello World!", 0xa, 0
_start.hellolen  equ   $ - _start.hello

Build it into a 64-bit static binary with

yasm -felf64 -Worphan-labels -gdwarf2 abi32-from-64.asm
ld -o abi32-from-64 abi32-from-64.o

Run gdb ./abi32-from-64. In gdb, run set disassembly-flavor intel and layout reg if you don't have that in your ~/.gdbinit already. (GAS .intel_syntax is like MASM, not NASM, but they're close enough that it's easy to read if you like NASM syntax.)

(gdb)  set disassembly-flavor intel
(gdb)  layout reg
(gdb)  b  after_setup
(gdb)  r
(gdb)  si                     # step instruction
    press return to repeat the last command, keep stepping

Press control-L when gdb's TUI mode gets messed up. This happens easily, even when programs don't print to stdout themselves.

15

The Shadow space (also sometimes called Spill space or Home space) is the 32 bytes above the return address, which the called function owns (and can use as scratch space); it sits below the stack args, if any. The caller has to reserve space for its callee's shadow space before running a call instruction.

It is meant to make debugging x64 easier.

Recall that the first 4 parameters are passed in registers. If you break into the debugger and inspect the call stack for a thread, you won't be able to see any parameters passed to functions. The values stored in registers are transient and cannot be reconstructed when moving up the call stack.

This is where the Home space comes into play: It can be used by compilers to leave a copy of the register values on the stack for later inspection in the debugger. This usually happens for unoptimized builds. When optimizations are enabled, however, compilers generally treat the Home space as available for scratch use. No copies are left on the stack, and debugging a crash dump turns into a nightmare.
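
To make the layout concrete, a sketch (hypothetical function; the comments describe typical Windows x64 code generation, which the C source itself doesn't dictate):

/* The caller passes a..d in rcx, rdx, r8, r9 and e on the stack, above the
   32 bytes of shadow space it reserved directly above the return address.
   In an unoptimized build the callee typically starts with
       mov [rsp+8], rcx
       mov [rsp+16], rdx
       ...
   spilling a..d into those shadow slots so a debugger can display them. */
long long five_args(long long a, long long b, long long c,
                    long long d, long long e)
{
    return a + b + c + d + e;
}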

Challenges of Debugging Optimized x64 Code offers in-depth information on the issue.

49

From "Intel®64 and IA-32 Architectures Optimization Reference Manual", section 4.4.2:

"For best performance, the Streaming SIMD Extensions and Streaming SIMD Extensions 2 require their memory operands to be aligned to 16-byte boundaries. Unaligned data can cause significant performance penalties compared to aligned data."

From Appendix D:

"It is important to ensure that the stack frame is aligned to a 16-byte boundary upon function entry to keep local __m128 data, parameters, and XMM register spill locations aligned throughout a function invocation."

http://www.intel.com/Assets/PDF/manual/248966.pdf

45

The x86-64 ABI used by Linux (and some other OSes, although notably not Windows, which has its own different ABI) defines a "red zone" of 128 bytes below the stack pointer, which is guaranteed not to be touched by signal or interrupt handlers. (See figure 3.3 and §3.2.2.)

A leaf function (i.e. one which does not call anything else) may therefore use this area for whatever it wants - it isn't doing anything like a call which would place data at the stack pointer; and any signal or interrupt handler will follow the ABI and drop the stack pointer by at least an additional 128 bytes before storing anything.

(Shorter instruction encodings are available for signed 8-bit displacements, so the point of the red zone is that it increases the amount of local data that a leaf function can access using these shorter instructions.)

That's what's happening here.

But... this code isn't making use of those shorter encodings (it's using offsets from rbp rather than rsp). Why not? It's also saving edi and rsi completely unnecessarily - you ask why it's saving edi instead of rdi, but why is it saving it at all?

The answer is that the compiler is generating really crummy code, because no optimisations are enabled. If you enable any optimisation, your entire function is likely to collapse down to:

mov eax, 0
ret

because that's really all it needs to do: buffer[] is local, so the changes made to it will never be visible to anything else, so can be optimised away; beyond that, all the function needs to do is return 0.


So, here's a better example. This function is complete nonsense, but makes use of a similar array, whilst doing enough to ensure that things don't all get optimised away:

$ cat test.c
int foo(char *bar)
{
    char tmp[256];
    int i;

    for (i = 0; bar[i] != 0; i++)
      tmp[i] = bar[i] + i;

    return tmp[1] + tmp[200];
}

Compiled with some optimisation, you can see similar use of the red zone, except this time it really does use offsets from rsp:

$ gcc -m64 -O1 -c test.c
$ objdump -Mintel -d test.o

test.o:     file format elf64-x86-64


Disassembly of section .text:

0000000000000000 <foo>:
   0:   53                      push   rbx
   1:   48 81 ec 88 00 00 00    sub    rsp,0x88
   8:   0f b6 17                movzx  edx,BYTE PTR [rdi]
   b:   84 d2                   test   dl,dl
   d:   74 26                   je     35 <foo+0x35>
   f:   4c 8d 44 24 88          lea    r8,[rsp-0x78]
  14:   48 8d 4f 01             lea    rcx,[rdi+0x1]
  18:   4c 89 c0                mov    rax,r8
  1b:   89 c3                   mov    ebx,eax
  1d:   44 28 c3                sub    bl,r8b
  20:   89 de                   mov    esi,ebx
  22:   01 f2                   add    edx,esi
  24:   88 10                   mov    BYTE PTR [rax],dl
  26:   0f b6 11                movzx  edx,BYTE PTR [rcx]
  29:   48 83 c0 01             add    rax,0x1
  2d:   48 83 c1 01             add    rcx,0x1
  31:   84 d2                   test   dl,dl
  33:   75 e6                   jne    1b <foo+0x1b>
  35:   0f be 54 24 50          movsx  edx,BYTE PTR [rsp+0x50]
  3a:   0f be 44 24 89          movsx  eax,BYTE PTR [rsp-0x77]
  3f:   8d 04 02                lea    eax,[rdx+rax*1]
  42:   48 81 c4 88 00 00 00    add    rsp,0x88
  49:   5b                      pop    rbx
  4a:   c3                      ret    

Now let's tweak it very slightly, by inserting a call to another function, so that foo() is no longer a leaf function:

$ cat test.c
extern void dummy(void);  /* ADDED */

int foo(char *bar)
{
    char tmp[256];
    int i;

    for (i = 0; bar[i] != 0; i++)
      tmp[i] = bar[i] + i;

    dummy();  /* ADDED */

    return tmp[1] + tmp[200];
}

Now the red zone cannot be used, so you see something more like you originally expected:

$ gcc -m64 -O1 -c test.c
$ objdump -Mintel -d test.o

test.o:     file format elf64-x86-64


Disassembly of section .text:

0000000000000000 <foo>:
   0:   53                      push   rbx
   1:   48 81 ec 00 01 00 00    sub    rsp,0x100
   8:   0f b6 17                movzx  edx,BYTE PTR [rdi]
   b:   84 d2                   test   dl,dl
   d:   74 24                   je     33 <foo+0x33>
   f:   49 89 e0                mov    r8,rsp
  12:   48 8d 4f 01             lea    rcx,[rdi+0x1]
  16:   48 89 e0                mov    rax,rsp
  19:   89 c3                   mov    ebx,eax
  1b:   44 28 c3                sub    bl,r8b
  1e:   89 de                   mov    esi,ebx
  20:   01 f2                   add    edx,esi
  22:   88 10                   mov    BYTE PTR [rax],dl
  24:   0f b6 11                movzx  edx,BYTE PTR [rcx]
  27:   48 83 c0 01             add    rax,0x1
  2b:   48 83 c1 01             add    rcx,0x1
  2f:   84 d2                   test   dl,dl
  31:   75 e6                   jne    19 <foo+0x19>
  33:   e8 00 00 00 00          call   38 <foo+0x38>
  38:   0f be 94 24 c8 00 00    movsx  edx,BYTE PTR [rsp+0xc8]
  3f:   00 
  40:   0f be 44 24 01          movsx  eax,BYTE PTR [rsp+0x1]
  45:   8d 04 02                lea    eax,[rdx+rax*1]
  48:   48 81 c4 00 01 00 00    add    rsp,0x100
  4f:   5b                      pop    rbx
  50:   c3                      ret    

(Note that tmp[200] was in range of a signed 8-bit displacement in the first case, but is not in this one.)
