The conversion between virtual time and real time is as follows (dt is a
real-time delta, dvt the corresponding virtual-time delta, w the entity's
load weight and rw the total load weight of its runqueue):

  dvt = rw/w * dt <=> dt = w/rw * dvt

Since we want the fair sleeper bonus (l, the sleeper latency threshold,
i.e. sysctl_sched_latency) to be a fixed amount of real time, it has to be
converted into virtual time before being subtracted from vruntime:

  dvt = -rw/w * l
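
To make the unit conversion concrete, here is an illustrative sketch (plain
integer math and a hypothetical helper name; in the patch below the same
conversion is done with calc_delta_fair() and the kernel's fixed-point
weight arithmetic):

  /*
   * Illustrative only -- not kernel code.
   *
   *   dt : delta in real time (ns)
   *   w  : load weight of the waking entity
   *   rw : total load weight of its runqueue
   */
  static unsigned long real_to_virtual(unsigned long dt,
                                       unsigned long w, unsigned long rw)
  {
          return dt * rw / w;             /* dvt = rw/w * dt */
  }

  /* the sleeper bonus, expressed in virtual time, then becomes: */
  /*   vruntime -= real_to_virtual(sysctl_sched_latency, w, rw); */
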
This bug could be related to the regression reported by Yanmin Zhang:
| Comparing with kernel 2.6.25, sysbench+mysql(oltp, readonly) has lots
| of regressions with 2.6.26-rc1:
|
| 1) 8-core stoakley: 28%;
| 2) 16-core tigerton: 20%;
| 3) Itanium Montvale: 50%.
Reported-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
        if (!initial) {
                /* sleeps upto a single latency don't count. */
                if (sched_feat(NEW_FAIR_SLEEPERS)) {
+                       unsigned long thresh = sysctl_sched_latency;
+
+                       /*
+                        * convert the sleeper threshold into virtual time
+                        */
                        if (sched_feat(NORMALIZED_SLEEPER))
-                               vruntime -= calc_delta_weight(sysctl_sched_latency, se);
-                       else
-                               vruntime -= sysctl_sched_latency;
+                               thresh = calc_delta_fair(thresh, se);
+
+                       vruntime -= thresh;
                }
 
                /* ensure we never gain time by being placed backwards. */