sched/fair: Fix PELT integrity for new tasks
author     Peter Zijlstra <peterz@infradead.org>
           Thu, 16 Jun 2016 11:29:28 +0000 (13:29 +0200)
committer  mobile promotions <svcmobile_promotions@nvidia.com>
           Mon, 15 Aug 2016 21:05:25 +0000 (14:05 -0700)
commit     cf28d327a66f09f660883e9967ab98f348202f65
tree       65891543b2da4e9a6b04ea028278b5b8df5ac93d
parent     fb87e79b799b376fc8df26695cc555a9b8f46180
sched/fair: Fix PELT integrity for new tasks

Vincent and Yuyang found another few scenarios in which entity
tracking goes wobbly.

The scenarios are basically due to the fact that new tasks are not
immediately attached and thereby differ from the normal situation -- a
task is always attached to a cfs_rq load average (such that it
includes its blocked contribution) and is explicitly
detached/attached on migration to another cfs_rq.
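
For illustration only, here is a stand-alone toy model of that invariant
(all names -- toy_cfs_rq, toy_entity, toy_attach, toy_detach -- are
invented and the real PELT code is considerably more involved): an
entity's contribution is folded into exactly one cfs_rq sum at a time,
attach adds it, detach removes it, and migration pairs the two.

  /*
   * Toy, stand-alone model of the attach/detach invariant -- not kernel
   * code; all names here are invented for illustration.
   */
  #include <stdio.h>

  struct toy_cfs_rq { unsigned long load_sum; };
  struct toy_entity { unsigned long load; unsigned long long last_update_time; };

  /* Attaching folds the entity's contribution into the rq-wide sum. */
  static void toy_attach(struct toy_cfs_rq *rq, struct toy_entity *se,
                         unsigned long long now)
  {
      rq->load_sum += se->load;
      se->last_update_time = now;
  }

  /* Detaching removes it again, e.g. right before migrating away. */
  static void toy_detach(struct toy_cfs_rq *rq, struct toy_entity *se)
  {
      rq->load_sum -= se->load;
  }

  int main(void)
  {
      struct toy_cfs_rq cpu0 = { 0 }, cpu1 = { 0 };
      struct toy_entity p = { .load = 100, .last_update_time = 0 };

      toy_attach(&cpu0, &p, 1);   /* task contributes to cpu0's average */
      toy_detach(&cpu0, &p);      /* detached before migration          */
      toy_attach(&cpu1, &p, 2);   /* re-attached on the new cfs_rq      */

      printf("cpu0=%lu cpu1=%lu\n", cpu0.load_sum, cpu1.load_sum); /* 0 100 */
      return 0;
  }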

Scenario 1: switch to fair class

  p->sched_class = fair_class;
  if (queued)
    enqueue_task(p);
      ...
        enqueue_entity()
          enqueue_entity_load_avg()
            migrated = !sa->last_update_time (true)
            if (migrated)
              attach_entity_load_avg()
  check_class_changed()
    switched_from() (!fair)
    switched_to()   (fair)
      switched_to_fair()
        attach_entity_load_avg()

If @p is a new task that hasn't been fair before, it will have
!last_update_time and, per the above, end up in
attach_entity_load_avg() _twice_.
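
Both attach sites hinge on the same proxy for "already attached": a
non-zero last_update_time. A hedged, stand-alone sketch of scenario 1
(toy_attach and the other names are invented; the second attach stands
in for the one done via switched_to_fair() above):

  /* Toy sketch of scenario 1 -- every name here is invented. */
  #include <stdio.h>

  struct toy_entity { unsigned long long last_update_time; int times_attached; };

  static void toy_attach(struct toy_entity *se, unsigned long long now)
  {
      se->times_attached++;
      se->last_update_time = now;
  }

  int main(void)
  {
      struct toy_entity p = { 0 };        /* new task: never updated */

      /* enqueue_entity_load_avg(): a zero last_update_time makes the
       * new task look "migrated", so it gets attached here...        */
      if (!p.last_update_time)
          toy_attach(&p, 1);

      /* ...and switched_to_fair() then attaches it again.            */
      toy_attach(&p, 2);

      printf("times_attached=%d\n", p.times_attached);    /* 2 */
      return 0;
  }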

Scenario 2: change between cgroups

  sched_move_group(p)
    if (queued)
      dequeue_task()
    task_move_group_fair()
      detach_task_cfs_rq()
        detach_entity_load_avg()
      set_task_rq()
      attach_task_cfs_rq()
        attach_entity_load_avg()
    if (queued)
      enqueue_task();
        ...
          enqueue_entity()
            enqueue_entity_load_avg()
              migrated = !sa->last_update_time (true)
              if (migrated)
                attach_entity_load_avg()

As with scenario 1, if @p is a new task, it will have
!last_update_time and we'll end up in attach_entity_load_avg()
_twice_.

Furthermore, notice how we do a detach_entity_load_avg() on something
that wasn't attached to begin with.
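
A toy, stand-alone sketch of that spurious detach (names invented; the
real detach code is more careful about underflow, but either way the
cfs_rq sums end up short by a contribution that was never added):

  /* Toy sketch of the spurious detach -- every name here is invented. */
  #include <stdio.h>

  struct toy_cfs_rq { long load_sum; };
  struct toy_entity { long load; };

  static void toy_detach(struct toy_cfs_rq *rq, struct toy_entity *se)
  {
      rq->load_sum -= se->load;   /* removes what was never added */
  }

  int main(void)
  {
      struct toy_cfs_rq rq = { .load_sum = 500 };     /* other tasks' load   */
      struct toy_entity p  = { .load = 100 };         /* new, never attached */

      toy_detach(&rq, &p);        /* sched_move_group() on the new task */

      printf("load_sum=%ld\n", rq.load_sum);  /* 400: stolen from the others */
      return 0;
  }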

As stated above, the problem is that the new task isn't yet attached
to the load tracking and thereby violates the invariant assumption.

This patch remedies this by ensuring a new task is indeed properly
attached to the load tracking on creation, through
post_init_entity_util_avg().
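
A toy, stand-alone sketch of the shape of the fix (names invented; the
real post_init_entity_util_avg() does a lot more than attach): attach
the new task exactly once at creation, so every later path sees a
non-zero last_update_time and treats it like any other attached entity.

  /* Toy sketch of the fix -- every name here is invented. */
  #include <stdio.h>

  struct toy_entity { unsigned long long last_update_time; int times_attached; };

  static void toy_attach(struct toy_entity *se, unsigned long long now)
  {
      se->times_attached++;
      se->last_update_time = now;
  }

  /* Creation-time hook: attach the new task once, up front. */
  static void toy_post_init(struct toy_entity *se, unsigned long long now)
  {
      toy_attach(se, now);
  }

  int main(void)
  {
      struct toy_entity p = { 0 };

      toy_post_init(&p, 1);       /* done around wake_up_new_task()      */

      /* Later enqueue paths no longer mistake the task for a migrating
       * one, because last_update_time is already set.                   */
      if (!p.last_update_time)
          toy_attach(&p, 2);

      printf("times_attached=%d\n", p.times_attached);    /* 1 */
      return 0;
  }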

Of course, this isn't entirely as straightforward as one might think,
since the task is hashed before we call wake_up_new_task() and can
therefore already be poked at (e.g. moved between cgroups) before it
is attached. We avoid this by adding TASK_NEW and teaching
cpu_cgroup_can_attach() to refuse such tasks.
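
A toy, stand-alone sketch of that guard (names invented; the real
cpu_cgroup_can_attach() check of course also has to serialize against
the wakeup itself):

  /* Toy sketch of the TASK_NEW refusal -- every name here is invented. */
  #include <stdio.h>

  enum toy_state { TOY_TASK_NEW, TOY_TASK_RUNNING };

  struct toy_task { enum toy_state state; };

  /* Refuse cgroup moves until the new task has actually been attached. */
  static int toy_can_attach(const struct toy_task *p)
  {
      return p->state == TOY_TASK_NEW ? -1 /* think -EINVAL */ : 0;
  }

  int main(void)
  {
      struct toy_task p = { .state = TOY_TASK_NEW };  /* just forked */

      printf("before wakeup: %d\n", toy_can_attach(&p));  /* -1: refused */

      p.state = TOY_TASK_RUNNING;     /* wake_up_new_task() has run */
      printf("after wakeup:  %d\n", toy_can_attach(&p));  /*  0: allowed */
      return 0;
  }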

Reported-by: Yuyang Du <yuyang.du@intel.com>
Reported-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from upstream commit 7dc603c9028ea5d4354e0e317e8481df99b06d7e)

Conflicts:
kernel/sched/core.c
kernel/sched/fair.c

Change-Id: I105ff928c5cfbd9cb23acc4e11fd6980190861f3
Signed-off-by: Sai Gurrappadi <sgurrappadi@nvidia.com>
Reviewed-on: http://git-master/r/1195370
GVS: Gerrit_Virtual_Submit
Reviewed-by: Puneet Saxena <puneets@nvidia.com>
Reviewed-by: Matthew Longnecker <mlongnecker@nvidia.com>
include/linux/sched.h
kernel/sched/core.c
kernel/sched/fair.c