I'm writing a Sparc compiler. One of my test cases runs fine normally, but crashes when the output is redirected to a file.
Using GDB, I have found that this is the line that causes the segfault:
save %sp, -800, %sp
Am I out of stack space? What's the deal? How come it only happens when I redirect the output?
A save instruction on SPARC can trigger a segfault only through window spill traps. That would happen if:
- you've run out of stack (and/or have a corrupted stack pointer), and
- the save causes a window spill (i.e. a register window writeback/flush to the stack).
The latter means there's an element of unpredictability to the occurrence. That's because whether a spill happens depends on previous register window usage - the exact point at which the spill occurs can change depending on what other processes timesharing the same CPU have been doing. Solaris will not auto-spill the entire register window set on every context switch, because that would hurt performance. E.g. two workloads that use eight windows (stack frames) each might happily preempt each other and run fully "stack-free" on a CPU with >= 16 register windows.
I can imagine that spill likelihood might increase due to output redirection (deeper call stacks on the write-to-file path than on the write-to-console path are more likely to evict your process' register windows).
If this is the case, then you should be able to force consistent failure of your test case even without the output redirect: bind a background CPU/stack hogger (a recursive factorial of 200000, in a loop, to permanently trash the register windows) onto the same CPU where your test case is running.
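A minimal sketch of such a hogger, assuming Solaris' processor_bind(2) is available for CPU binding; the CPU id argument and the recursion depth are placeholders you'd adapt to your machine:

    /* reg_window_hogger.c - keep one CPU's register windows busy (sketch).
     * Assumes Solaris; compile with: cc -o hogger reg_window_hogger.c
     * Usage: ./hogger <cpu-id>   (pass the CPU your test case runs on)
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/processor.h>
    #include <sys/procset.h>

    /* Deep recursion: every call takes a register window / stack frame,
     * so looping over this keeps spilling and filling windows. */
    static double fact(unsigned n)
    {
        return (n < 2) ? 1.0 : n * fact(n - 1);
    }

    int main(int argc, char **argv)
    {
        processorid_t cpu = (argc > 1) ? atoi(argv[1]) : 0;

        /* Bind this LWP to the chosen CPU so it competes for that CPU's
         * register windows with the test case. */
        if (processor_bind(P_LWPID, P_MYID, cpu, NULL) != 0) {
            perror("processor_bind");
            return 1;
        }

        for (;;) {
            /* Depth taken from the suggestion above; lower it (or raise
             * the stack limit with ulimit -s) if the hogger itself runs
             * out of stack. */
            (void) fact(200000);
        }
        /* not reached */
    }

Run the hogger on the same CPU as the test case and it should evict your process' register windows often enough to make the spill (and hence the crash, if the stack is bad) reproducible without any redirection.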