X-Git-Url: http://pilppa.org/gitweb/gitweb.cgi?a=blobdiff_plain;f=Documentation%2FRCU%2FNMI-RCU.txt;h=a6d32e65d222bbea68cba1c133db87c459c99dec;hb=392eaef2e9f8e6527043ad8422d9cfea59ee6fb0;hp=d0634a5c3445440f57104c589219bf71e8997a95;hpb=c0d6f9663b30a09ed725229b2d50391268c8538e;p=linux-2.6-omap-h63xx.git

diff --git a/Documentation/RCU/NMI-RCU.txt b/Documentation/RCU/NMI-RCU.txt
index d0634a5c344..a6d32e65d22 100644
--- a/Documentation/RCU/NMI-RCU.txt
+++ b/Documentation/RCU/NMI-RCU.txt
@@ -25,7 +25,7 @@ the NMI handler to take the default machine-specific action.
 This nmi_callback variable is a global function pointer to the current
 NMI handler.
 
-	fastcall void do_nmi(struct pt_regs * regs, long error_code)
+	void do_nmi(struct pt_regs * regs, long error_code)
 	{
 		int cpu;
 
@@ -93,6 +93,9 @@ Since NMI handlers disable preemption, synchronize_sched() is guaranteed
 not to return until all ongoing NMI handlers exit.  It is therefore safe
 to free up the handler's data as soon as synchronize_sched() returns.
 
+Important note: for this to work, the architecture in question must
+invoke irq_enter() and irq_exit() on NMI entry and exit, respectively.
+
 Answer to Quick Quiz
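
Commentary (not part of the patch): the second hunk adds the note that the
synchronize_sched() guarantee holds only when the architecture invokes
irq_enter() and irq_exit() on NMI entry and exit.  As a rough sketch of the
teardown pattern NMI-RCU.txt describes, the code below switches the global
nmi_callback pointer back to the dummy handler and then waits with
synchronize_sched() before freeing the handler's data.  The my_nmi_data
structure, handle_my_nmi(), and unregister_my_nmi_handler() names are
hypothetical; only nmi_callback, dummy_nmi_callback, rcu_assign_pointer(),
and synchronize_sched() come from the documented example.

	/* Hypothetical per-handler state; not part of the patch. */
	struct my_nmi_data *handler_data;

	static int my_nmi_callback(struct pt_regs *regs, int cpu)
	{
		/* Runs in NMI context, hence with preemption disabled. */
		return handle_my_nmi(handler_data, regs, cpu);	/* hypothetical */
	}

	static void unregister_my_nmi_handler(void)
	{
		/* Point new NMIs back at the do-nothing handler. */
		rcu_assign_pointer(nmi_callback, dummy_nmi_callback);

		/*
		 * NMI handlers run with preemption disabled, so once
		 * synchronize_sched() returns no CPU can still be inside
		 * my_nmi_callback(), and its data may be freed.
		 */
		synchronize_sched();
		kfree(handler_data);
		handler_data = NULL;
	}

The added "Important note" qualifies exactly the last step: unless the
architecture calls irq_enter() and irq_exit() on NMI entry and exit,
synchronize_sched() cannot be relied on to wait for in-flight NMI handlers,
and freeing the handler's data at that point would be unsafe.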