The direct-mapped shadow code (used for real mode and two-dimensional paging)
sets upper-level ptes using direct assignment rather than calling
set_shadow_pte().  A non-PAE host will split this 64-bit write into two 32-bit
writes, which opens up a race if another vcpu accesses the same memory area
concurrently.
Fix by calling set_shadow_pte() instead of assigning directly.
Noticed by Izik Eidus.
Signed-off-by: Avi Kivity <avi@qumranet.com>
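
[Editor's note: the sketch below is illustrative, not the KVM code. It only
assumes that set_shadow_pte() makes the 64-bit shadow pte visible in a single
atomic store, whereas a plain assignment to a u64 on a 32-bit (non-PAE) host
compiles to two 32-bit stores. The names set_spte_racy/set_spte_atomic and the
use of C11 atomics are hypothetical, chosen only to show the hazard.]

    /* Illustrative sketch of the torn-write hazard fixed by this patch. */
    #include <stdint.h>
    #include <stdatomic.h>

    /* A shadow pte is 64 bits even when the host kernel is 32-bit. */
    typedef uint64_t spte_t;

    /*
     * Plain assignment: on a 32-bit host the compiler emits two 32-bit
     * stores, so another vcpu walking the shadow table concurrently can
     * observe a half-written pte (e.g. new low word, old high word).
     */
    static void set_spte_racy(spte_t *sptep, spte_t spte)
    {
            *sptep = spte;          /* may be split into two 32-bit writes */
    }

    /*
     * Hypothetical atomic variant, analogous in spirit to set_shadow_pte():
     * the full 64-bit value becomes visible to other vcpus in one step.
     */
    static void set_spte_atomic(_Atomic spte_t *sptep, spte_t spte)
    {
            atomic_store_explicit(sptep, spte, memory_order_relaxed);
    }
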
                                return -ENOMEM;
                        }
 
-                       table[index] = __pa(new_table->spt)
-                               | PT_PRESENT_MASK | PT_WRITABLE_MASK
-                               | shadow_user_mask | shadow_x_mask;
+                       set_shadow_pte(&table[index],
+                                      __pa(new_table->spt)
+                                      | PT_PRESENT_MASK | PT_WRITABLE_MASK
+                                      | shadow_user_mask | shadow_x_mask);
                }
                table_addr = table[index] & PT64_BASE_ADDR_MASK;
        }