8 years agosnd/pcm: fix snd_pcm_stream_lock*() irqs_disabled() splats
Mike Galbraith [Wed, 18 Feb 2015 14:09:23 +0000 (15:09 +0100)]
snd/pcm: fix snd_pcm_stream_lock*() irqs_disabled() splats

Locking functions previously using read_lock_irq()/read_lock_irqsave() were
changed to local_irq_disable/save(), leading to gripes.  Use nort variants.

|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:915
|in_atomic(): 0, irqs_disabled(): 1, pid: 5947, name: alsa-sink-ALC88
|CPU: 5 PID: 5947 Comm: alsa-sink-ALC88 Not tainted 3.18.7-rt1 #9
|Hardware name: MEDION MS-7848/MS-7848, BIOS M7848W08.404 11/06/2014
ffff880409316240 ffff88040866fa38 ffffffff815bdeb5 0000000000000002
0000000000000000 ffff88040866fa58 ffffffff81073c86 ffffffffa03b2640
ffff88040239ec00 ffff88040866fa78 ffffffff815c3d34 ffffffffa03b2640
|Call Trace:
| [<ffffffff815bdeb5>] dump_stack+0x4f/0x9e
| [<ffffffff81073c86>] __might_sleep+0xe6/0x150
| [<ffffffff815c3d34>] __rt_spin_lock+0x24/0x50
| [<ffffffff815c4044>] rt_read_lock+0x34/0x40
| [<ffffffffa03a2979>] snd_pcm_stream_lock+0x29/0x70 [snd_pcm]
| [<ffffffffa03a355d>] snd_pcm_playback_poll+0x5d/0x120 [snd_pcm]
| [<ffffffff811937a2>] do_sys_poll+0x322/0x5b0
| [<ffffffff81193d48>] SyS_ppoll+0x1a8/0x1c0
| [<ffffffff815c4556>] system_call_fastpath+0x16/0x1b

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agoirq_work: Delegate non-immediate irq work to ksoftirqd
Mike Galbraith [Sun, 13 Sep 2015 07:47:32 +0000 (09:47 +0200)]
irq_work: Delegate non-immediate irq work to ksoftirqd

Based on a patch from Jan Kiszka.

Jan reported that ftrace queueing work from arbitrary contexts can
and does lead to deadlock.  trace-cmd -e sched:* deadlocked in fact.

Resolve the problem by delegating all non-immediate work to ksoftirqd.

We need two lists to do this, one for hard irq, one for soft, so we
can use the two existing lists, eliminating the -rt specific list and
all of the ifdefery while we're at it.

Strategy: Queue work tagged for hirq invocation to the raised_list and
invoke it via IPI as usual.  If a work item being queued to the lazy_list
(which becomes our list for everything else) is not a lazy work item, or
the tick is stopped, fire an IPI to raise SOFTIRQ_TIMER immediately;
otherwise let ksoftirqd find it when the tick comes along.  Raising
SOFTIRQ_TIMER via IPI even when queueing locally ensures delegation.
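A minimal sketch of that queueing decision (not the actual hunk; the list
names follow the description above, and the softirq-raising helper name is
an assumption):

/* Sketch only.  raised_list/lazy_list are the per-CPU llists named above;
 * irq_work_raise_timer_softirq() stands in for whatever sends the IPI that
 * raises SOFTIRQ_TIMER. */
static void __irq_work_queue_local(struct irq_work *work)
{
        if (work->flags & IRQ_WORK_HARD_IRQ) {
                /* hirq work: raised_list, invoked via IPI as usual */
                if (llist_add(&work->llnode, this_cpu_ptr(&raised_list)))
                        arch_irq_work_raise();
        } else {
                /* everything else: lazy_list, normally run by ksoftirqd */
                if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
                    (!(work->flags & IRQ_WORK_LAZY) || tick_nohz_tick_stopped()))
                        irq_work_raise_timer_softirq();
        }
}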

Cc: stable-rt@vger.kernel.org
Acked-by: Jan Kiszka <jan.kiszka@siemens.com>
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
8 years agokernel/irq_work: fix non RT case
Sebastian Andrzej Siewior [Thu, 11 Jun 2015 15:31:40 +0000 (17:31 +0200)]
kernel/irq_work: fix non RT case

After the deadlock was fixed, the check somehow got lost and broke the non-RT
case, which may invoke IRQ-work from softirq context.

Cc: stable-rt@vger.kernel.org
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agokernel/irq_work: fix no_hz deadlock
Sebastian Andrzej Siewior [Fri, 10 Apr 2015 09:50:22 +0000 (11:50 +0200)]
kernel/irq_work: fix no_hz deadlock

Invoking NO_HZ's irq_work callback from the timer irq does not work very
well if the callback decides to invoke hrtimer_cancel():

|hrtimer_try_to_cancel+0x55/0x5f
|hrtimer_cancel+0x16/0x28
|tick_nohz_restart+0x17/0x72
|__tick_nohz_full_check+0x8e/0x93
|nohz_full_kick_work_func+0xe/0x10
|irq_work_run_list+0x39/0x57
|irq_work_tick+0x60/0x67
|update_process_times+0x57/0x67
|tick_sched_handle+0x4a/0x59
|tick_sched_timer+0x3b/0x64
|__run_hrtimer+0x7a/0x149
|hrtimer_interrupt+0x1cc/0x2c5

and here we deadlock while waiting for the lock which we are already holding.
To fix this I'm doing the same thing that upstream does: use the
irq_work dedicated IRQ and use it only for what is marked as "hirq",
which should only be the FULL_NO_HZ related work.

Reported-by: Carsten Emde <C.Emde@osadl.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agoirq_work: Hide access to hirq_work_list in PREEMPT_RT_FULL
Steven Rostedt [Thu, 12 Mar 2015 22:08:57 +0000 (18:08 -0400)]
irq_work: Hide access to hirq_work_list in PREEMPT_RT_FULL

The hirq_work_list is only defined when PREEMPT_RT_FULL is configured.
Most access to it is within an #ifdef CONFIG_PREEMPT_RT_FULL, except
for one. Encapsulate that location too.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
8 years agoirq_work: allow certain work in hard irq context
Sebastian Andrzej Siewior [Fri, 31 Jan 2014 13:20:31 +0000 (14:20 +0100)]
irq_work: allow certain work in hard irq context

irq_work is processed in softirq context on -RT because we want to avoid
long latencies which might arise from processing lots of perf events.
The noHZ-full mode requires its callback to be called from real hardirq
context (commit 76c24fb ("nohz: New APIs to re-evaluate the tick on full
dynticks CPUs")). If it is called from a thread context we might get
wrong results for checks like "is_idle_task(current)".
This patch introduces a second list (hirq_work_list) which is used if
irq_work_run() has been invoked from hardirq context and which processes
only work items marked with IRQ_WORK_HARD_IRQ.

This patch also removes arch_irq_work_raise() from sparc & powerpc like
it is already done for x86. At least for powerpc it is somewhat
superfluous because it is called from the timer interrupt which should
invoke update_process_times().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agox86-no-perf-irq-work-rt.patch
Thomas Gleixner [Wed, 13 Jul 2011 12:05:05 +0000 (14:05 +0200)]
x86-no-perf-irq-work-rt.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agouse skbufhead with raw lock
Thomas Gleixner [Tue, 12 Jul 2011 13:38:34 +0000 (15:38 +0200)]
use skbufhead with raw lock

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agojump-label: disable if stop_machine() is used
Thomas Gleixner [Wed, 8 Jul 2015 15:14:48 +0000 (17:14 +0200)]
jump-label: disable if stop_machine() is used

Some architectures are using stop_machine() while switching the opcode,
which leads to latency spikes.
The architectures which use stop_machine() at the moment:
- ARM stop machine
- s390 stop machine

The architectures which use other sorcery:
- MIPS
- X86
- powerpc
- sparc
- arm64

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[bigeasy: only ARM for now]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agodebugobjects-rt.patch
Thomas Gleixner [Sun, 17 Jul 2011 19:41:35 +0000 (21:41 +0200)]
debugobjects-rt.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years ago percpu_ida: use local locks
Sebastian Andrzej Siewior [Wed, 9 Apr 2014 09:58:17 +0000 (11:58 +0200)]
percpu_ida: use local locks

The local_irq_save() + spin_lock() combination does not work that well on -RT.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agoidr: Use local lock instead of preempt enable/disable
Thomas Gleixner [Sun, 13 Sep 2015 07:47:31 +0000 (09:47 +0200)]
idr: Use local lock instead of preempt enable/disable

We need to protect the per cpu variable and prevent migration.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years ago sched: Disentangle worker accounting from rqlock
Thomas Gleixner [Wed, 22 Jun 2011 17:47:03 +0000 (19:47 +0200)]
sched: Disentangle worker accounting from rqlock

The worker accounting for cpu bound workers is plugged into the core
scheduler code and the wakeup code. This is not a hard requirement and
can be avoided by keeping track of the state in the workqueue code
itself.

Keep track of the sleeping state in the worker itself and call the
notifier before entering the core scheduler. There might be false
positives when the task is woken between that call and actually
scheduling, but that's not really different from scheduling and being
woken immediately after switching away. There is also no harm from
updating nr_running when the task returns from scheduling instead of
accounting it in the wakeup code.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20110622174919.135236139@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agoworkqueue vs ata-piix livelock fixup
Thomas Gleixner [Mon, 1 Jul 2013 09:02:42 +0000 (11:02 +0200)]
workqueue vs ata-piix livelock fixup

An Intel i7 system regularly detected rcu_preempt stalls after the kernel
was upgraded from 3.6-rt to 3.8-rt. When the stall happened, disk I/O was no
longer possible, unless the system was restarted.

The kernel message was:
INFO: rcu_preempt self-detected stall on CPU { 6}
[..]
NMI backtrace for cpu 6
CPU 6
Pid: 119, comm: irq/19-ata_piix Not tainted 3.8.13-rt13 #11 Shuttle Inc. SX58/SX58
RIP: 0010:[<ffffffff8124ca60>]  [<ffffffff8124ca60>] ip_compute_csum+0x30/0x30
RSP: 0018:ffff880333303cb0  EFLAGS: 00000002
RAX: 0000000000000006 RBX: 00000000000003e9 RCX: 0000000000000034
RDX: 0000000000000000 RSI: ffffffff81aa16d0 RDI: 0000000000000001
RBP: ffff880333303ce8 R08: ffffffff81aa16d0 R09: ffffffff81c1b8cc
R10: 0000000000000000 R11: 0000000000000000 R12: 000000000005161f
R13: 0000000000000006 R14: ffffffff81aa16d0 R15: 0000000000000002
FS:  0000000000000000(0000) GS:ffff880333300000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000003c1b2bb420 CR3: 0000000001a0f000 CR4: 00000000000007e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process irq/19-ata_piix (pid: 119, threadinfo ffff88032d88a000, task ffff88032df80000)
Stack:
ffffffff8124cb32 000000000005161e 00000000000003e9 0000000000001000
0000000000009022 ffffffff81aa16d0 0000000000000002 ffff880333303cf8
ffffffff8124caa9 ffff880333303d08 ffffffff8124cad2 ffff880333303d28
Call Trace:
<IRQ>
[<ffffffff8124cb32>] ? delay_tsc+0x33/0xe3
[<ffffffff8124caa9>] __delay+0xf/0x11
[<ffffffff8124cad2>] __const_udelay+0x27/0x29
[<ffffffff8102d1fa>] native_safe_apic_wait_icr_idle+0x39/0x45
[<ffffffff8102dc9b>] __default_send_IPI_dest_field.constprop.0+0x1e/0x58
[<ffffffff8102dd1e>] default_send_IPI_mask_sequence_phys+0x49/0x7d
[<ffffffff81030326>] physflat_send_IPI_all+0x17/0x19
[<ffffffff8102de53>] arch_trigger_all_cpu_backtrace+0x50/0x79
[<ffffffff810b21d0>] rcu_check_callbacks+0x1cb/0x568
[<ffffffff81048c9c>] ? raise_softirq+0x2e/0x35
[<ffffffff81086be0>] ? tick_sched_do_timer+0x38/0x38
[<ffffffff8104f653>] update_process_times+0x44/0x55
[<ffffffff81086866>] tick_sched_handle+0x4a/0x59
[<ffffffff81086c1c>] tick_sched_timer+0x3c/0x5b
[<ffffffff81062845>] __run_hrtimer+0x9b/0x158
[<ffffffff810631d8>] hrtimer_interrupt+0x172/0x2aa
[<ffffffff8102d498>] smp_apic_timer_interrupt+0x76/0x89
[<ffffffff814d881d>] apic_timer_interrupt+0x6d/0x80
<EOI>
[<ffffffff81057cd2>] ? __local_lock_irqsave+0x17/0x4a
[<ffffffff81059336>] try_to_grab_pending+0x42/0x17e
[<ffffffff8105a699>] mod_delayed_work_on+0x32/0x88
[<ffffffff8105a70b>] mod_delayed_work+0x1c/0x1e
[<ffffffff8122ae84>] blk_run_queue_async+0x37/0x39
[<ffffffff81230985>] flush_end_io+0xf1/0x107
[<ffffffff8122e0da>] blk_finish_request+0x21e/0x264
[<ffffffff8122e162>] blk_end_bidi_request+0x42/0x60
[<ffffffff8122e1ba>] blk_end_request+0x10/0x12
[<ffffffff8132de46>] scsi_io_completion+0x1bf/0x492
[<ffffffff81335cec>] ? sd_done+0x298/0x2ef
[<ffffffff81325a02>] scsi_finish_command+0xe9/0xf2
[<ffffffff8132dbcb>] scsi_softirq_done+0x106/0x10f
[<ffffffff812333d3>] blk_done_softirq+0x77/0x87
[<ffffffff8104826f>] do_current_softirqs+0x172/0x2e1
[<ffffffff810aa820>] ? irq_thread_fn+0x3a/0x3a
[<ffffffff81048466>] local_bh_enable+0x43/0x72
[<ffffffff810aa866>] irq_forced_thread_fn+0x46/0x52
[<ffffffff810ab089>] irq_thread+0x8c/0x17c
[<ffffffff810ab179>] ? irq_thread+0x17c/0x17c
[<ffffffff810aaffd>] ? wake_threads_waitq+0x44/0x44
[<ffffffff8105eb18>] kthread+0x8d/0x95
[<ffffffff8105ea8b>] ? __kthread_parkme+0x65/0x65
[<ffffffff814d7b7c>] ret_from_fork+0x7c/0xb0
[<ffffffff8105ea8b>] ? __kthread_parkme+0x65/0x65

The state of softirqd of this CPU at the time of the crash was:
ksoftirqd/6     R  running task        0    53      2 0x00000000
ffff88032fc39d18 0000000000000046 ffff88033330c4c0 ffff8803303f4710
ffff88032fc39fd8 ffff88032fc39fd8 0000000000000000 0000000000062500
ffff88032df88000 ffff8803303f4710 0000000000000000 ffff88032fc38000
Call Trace:
[<ffffffff8105a3ae>] ? __queue_work+0x27c/0x27c
[<ffffffff814d178c>] preempt_schedule+0x61/0x76
[<ffffffff8106cccf>] migrate_enable+0xe5/0x1df
[<ffffffff8105a3ae>] ? __queue_work+0x27c/0x27c
[<ffffffff8104ef52>] run_timer_softirq+0x161/0x1d6
[<ffffffff8104826f>] do_current_softirqs+0x172/0x2e1
[<ffffffff8104840b>] run_ksoftirqd+0x2d/0x45
[<ffffffff8106658a>] smpboot_thread_fn+0x2ea/0x308
[<ffffffff810662a0>] ? test_ti_thread_flag+0xc/0xc
[<ffffffff810662a0>] ? test_ti_thread_flag+0xc/0xc
[<ffffffff8105eb18>] kthread+0x8d/0x95
[<ffffffff8105ea8b>] ? __kthread_parkme+0x65/0x65
[<ffffffff814d7afc>] ret_from_fork+0x7c/0xb0
[<ffffffff8105ea8b>] ? __kthread_parkme+0x65/0x65

Apparently, the softirq daemon and the ata_piix IRQ handler were waiting
for each other to finish, ending up in a livelock. After the patch below
was applied, the system no longer crashed.

Reported-by: Carsten Emde <C.Emde@osadl.org>
Proposed-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Carsten Emde <C.Emde@osadl.org>
Signed-off-by: Carsten Emde <C.Emde@osadl.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agoUse local irq lock instead of irq disable regions
Thomas Gleixner [Sun, 17 Jul 2011 19:42:26 +0000 (21:42 +0200)]
Use local irq lock instead of irq disable regions

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agoworkqueue: Use normal rcu
Thomas Gleixner [Wed, 24 Jul 2013 13:26:54 +0000 (15:26 +0200)]
workqueue: Use normal rcu

There is no need for sched_rcu. The undocumented reason why sched_rcu
is used is to avoid a few explicit rcu_read_lock()/unlock() pairs by
abusing the fact that sched_rcu reader side critical sections are also
protected by preempt or irq disabled regions.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agonet: Use cpu_chill() instead of cpu_relax()
Thomas Gleixner [Wed, 7 Mar 2012 20:10:04 +0000 (21:10 +0100)]
net: Use cpu_chill() instead of cpu_relax()

Retry loops on RT might loop forever when the modifying side was
preempted. Use cpu_chill() instead of cpu_relax() to let the system
make progress.
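For illustration, a converted retry loop has this shape (the data structure
here is hypothetical, not taken from this patch):

/* Sketch only: a generic trylock retry loop of the kind this change targets. */
static void example_remove_entry(struct example_entry *e)
{
        for (;;) {
                if (spin_trylock(&e->lock)) {
                        list_del(&e->node);
                        spin_unlock(&e->lock);
                        return;
                }
                /* was cpu_relax(); on RT, sleep a tick so a preempted
                 * lock owner can run and make progress */
                cpu_chill();
        }
}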

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
8 years agofs: dcache: Use cpu_chill() in trylock loops
Thomas Gleixner [Wed, 7 Mar 2012 20:00:34 +0000 (21:00 +0100)]
fs: dcache: Use cpu_chill() in trylock loops

Retry loops on RT might loop forever when the modifying side was
preempted. Use cpu_chill() instead of cpu_relax() to let the system
make progress.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
8 years agoblock: Use cpu_chill() for retry loops
Thomas Gleixner [Thu, 20 Dec 2012 17:28:26 +0000 (18:28 +0100)]
block: Use cpu_chill() for retry loops

Retry loops on RT might loop forever when the modifying side was
preempted. Steven also observed a live lock when there was a
concurrent priority boosting going on.

Use cpu_chill() instead of cpu_relax() to let the system
make progress.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
8 years agoblock/mq: drop per ctx cpu_lock
Sebastian Andrzej Siewior [Wed, 18 Feb 2015 17:37:26 +0000 (18:37 +0100)]
block/mq: drop per ctx cpu_lock

While converting the get_cpu() to get_cpu_light() I added a per-CPU lock to
ensure the same code is not invoked twice on the same CPU. And now I ran
into this:

| kernel BUG at kernel/locking/rtmutex.c:996!
| invalid opcode: 0000 [#1] PREEMPT SMP
| CPU0: 13 PID: 75 Comm: kworker/u258:0 Tainted: G          I    3.18.7-rt1.5+ #12
| Workqueue: writeback bdi_writeback_workfn (flush-8:0)
| task: ffff88023742a620 ti: ffff88023743c000 task.ti: ffff88023743c000
| RIP: 0010:[<ffffffff81523cc0>]  [<ffffffff81523cc0>] rt_spin_lock_slowlock+0x280/0x2d0
| Call Trace:
|  [<ffffffff815254e7>] rt_spin_lock+0x27/0x60
taking the same lock again
|
|  [<ffffffff8127c771>] blk_mq_insert_requests+0x51/0x130
|  [<ffffffff8127d4a9>] blk_mq_flush_plug_list+0x129/0x140
|  [<ffffffff81272461>] blk_flush_plug_list+0xd1/0x250
|  [<ffffffff81522075>] schedule+0x75/0xa0
|  [<ffffffff8152474d>] do_nanosleep+0xdd/0x180
|  [<ffffffff810c8312>] __hrtimer_nanosleep+0xd2/0x1c0
|  [<ffffffff810c8456>] cpu_chill+0x56/0x80
|  [<ffffffff8107c13d>] try_to_grab_pending+0x1bd/0x390
|  [<ffffffff8107c431>] cancel_delayed_work+0x21/0x170
|  [<ffffffff81279a98>] blk_mq_stop_hw_queue+0x18/0x40
|  [<ffffffffa000ac6f>] scsi_queue_rq+0x7f/0x830 [scsi_mod]
|  [<ffffffff8127b0de>] __blk_mq_run_hw_queue+0x1ee/0x360
|  [<ffffffff8127b528>] blk_mq_map_request+0x108/0x190
take the lock  ^^^
|
|  [<ffffffff8127c8d2>] blk_sq_make_request+0x82/0x350
|  [<ffffffff8126f6c0>] generic_make_request+0xd0/0x120
|  [<ffffffff8126f788>] submit_bio+0x78/0x190
|  [<ffffffff811bd537>] _submit_bh+0x117/0x180
|  [<ffffffff811bf528>] __block_write_full_page.constprop.38+0x138/0x3f0
|  [<ffffffff811bf880>] block_write_full_page+0xa0/0xe0
|  [<ffffffff811c02b3>] blkdev_writepage+0x13/0x20
|  [<ffffffff81127b25>] __writepage+0x15/0x40
|  [<ffffffff8112873b>] write_cache_pages+0x1fb/0x440
|  [<ffffffff811289be>] generic_writepages+0x3e/0x60
|  [<ffffffff8112a17c>] do_writepages+0x1c/0x30
|  [<ffffffff811b3603>] __writeback_single_inode+0x33/0x140
|  [<ffffffff811b462d>] writeback_sb_inodes+0x2bd/0x490
|  [<ffffffff811b4897>] __writeback_inodes_wb+0x97/0xd0
|  [<ffffffff811b4a9b>] wb_writeback+0x1cb/0x210
|  [<ffffffff811b505b>] bdi_writeback_workfn+0x25b/0x380
|  [<ffffffff8107b50b>] process_one_work+0x1bb/0x490
|  [<ffffffff8107c7ab>] worker_thread+0x6b/0x4f0
|  [<ffffffff81081863>] kthread+0xe3/0x100
|  [<ffffffff8152627c>] ret_from_fork+0x7c/0xb0

After looking at this for a while it seems that it is safe if blk_mq_ctx is
used multiple times; the lock in the struct protects the access.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agoblock: blk-mq: use swait
Sebastian Andrzej Siewior [Fri, 13 Feb 2015 10:01:26 +0000 (11:01 +0100)]
block: blk-mq: use swait

| BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:914
| in_atomic(): 1, irqs_disabled(): 0, pid: 255, name: kworker/u257:6
| 5 locks held by kworker/u257:6/255:
|  #0:  ("events_unbound"){.+.+.+}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0
|  #1:  ((&entry->work)){+.+.+.}, at: [<ffffffff8108edf1>] process_one_work+0x171/0x5e0
|  #2:  (&shost->scan_mutex){+.+.+.}, at: [<ffffffffa000faa3>] __scsi_add_device+0xa3/0x130 [scsi_mod]
|  #3:  (&set->tag_list_lock){+.+...}, at: [<ffffffff812f09fa>] blk_mq_init_queue+0x96a/0xa50
|  #4:  (rcu_read_lock_sched){......}, at: [<ffffffff8132887d>] percpu_ref_kill_and_confirm+0x1d/0x120
| Preemption disabled at:[<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70
|
| CPU: 2 PID: 255 Comm: kworker/u257:6 Not tainted 3.18.7-rt0+ #1
| Workqueue: events_unbound async_run_entry_fn
|  0000000000000003 ffff8800bc29f998 ffffffff815b3a12 0000000000000000
|  0000000000000000 ffff8800bc29f9b8 ffffffff8109aa16 ffff8800bc29fa28
|  ffff8800bc5d1bc8 ffff8800bc29f9e8 ffffffff815b8dd4 ffff880000000000
| Call Trace:
|  [<ffffffff815b3a12>] dump_stack+0x4f/0x7c
|  [<ffffffff8109aa16>] __might_sleep+0x116/0x190
|  [<ffffffff815b8dd4>] rt_spin_lock+0x24/0x60
|  [<ffffffff810b6089>] __wake_up+0x29/0x60
|  [<ffffffff812ee06e>] blk_mq_usage_counter_release+0x1e/0x20
|  [<ffffffff81328966>] percpu_ref_kill_and_confirm+0x106/0x120
|  [<ffffffff812eff76>] blk_mq_freeze_queue_start+0x56/0x70
|  [<ffffffff812f0000>] blk_mq_update_tag_set_depth+0x40/0xd0
|  [<ffffffff812f0a1c>] blk_mq_init_queue+0x98c/0xa50
|  [<ffffffffa000dcf0>] scsi_mq_alloc_queue+0x20/0x60 [scsi_mod]
|  [<ffffffffa000ea35>] scsi_alloc_sdev+0x2f5/0x370 [scsi_mod]
|  [<ffffffffa000f494>] scsi_probe_and_add_lun+0x9e4/0xdd0 [scsi_mod]
|  [<ffffffffa000fb26>] __scsi_add_device+0x126/0x130 [scsi_mod]
|  [<ffffffffa013033f>] ata_scsi_scan_host+0xaf/0x200 [libata]
|  [<ffffffffa012b5b6>] async_port_probe+0x46/0x60 [libata]
|  [<ffffffff810978fb>] async_run_entry_fn+0x3b/0xf0
|  [<ffffffff8108ee81>] process_one_work+0x201/0x5e0

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years ago blk-mq: revert raw locks, postpone notifier to POST_DEAD
Sebastian Andrzej Siewior [Sat, 3 May 2014 09:00:29 +0000 (11:00 +0200)]
blk-mq: revert raw locks, postpone notifier to POST_DEAD

The blk_mq_cpu_notify_lock should be raw because some CPU down levels
are called with interrupts off. The notifier itself currently calls only
one function, blk_mq_hctx_notify().
That function acquires ctx->lock, which is a sleeping lock, and I would
prefer to keep it that way. It only moves IO requests from the CPU that
is going offline to another CPU, and it is currently the only user.
Therefore I revert the list lock back to a sleeping spinlock and let the
notifier run at POST_DEAD time.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agocpu_chill: Add a UNINTERRUPTIBLE hrtimer_nanosleep
Steven Rostedt [Tue, 4 Mar 2014 17:28:32 +0000 (12:28 -0500)]
cpu_chill: Add a UNINTERRUPTIBLE hrtimer_nanosleep

We hit another bug that was caused by switching cpu_chill() from
msleep() to hrtimer_nanosleep().

This time it is a livelock. The problem is that hrtimer_nanosleep()
calls schedule() with state == TASK_INTERRUPTIBLE. But this means
that if a signal is pending, the scheduler won't schedule, and will
simply change the current task state back to TASK_RUNNING. This
nullifies the whole point of cpu_chill() in the first place. That is,
if a task spinning on a try_lock() has preempted the owner of the
lock and has a signal pending, it will never give up the CPU to let
the owner of the lock run.

I made a static function __hrtimer_nanosleep() that takes a fifth
parameter "state", which determines the task state that the
nanosleep() will be in. The normal hrtimer_nanosleep() acts the
same, but cpu_chill() calls __hrtimer_nanosleep() directly with
the TASK_UNINTERRUPTIBLE state.

cpu_chill() only cares that the first sleep happens, and does not care
about the state of the restart schedule (in hrtimer_nanosleep_restart).
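A sketch of the resulting shape (signatures follow the 3.x-era
hrtimer_nanosleep(); the sleep length used by cpu_chill() is an assumption):

static long __hrtimer_nanosleep(struct timespec *rqtp,
                                struct timespec __user *rmtp,
                                const enum hrtimer_mode mode,
                                const clockid_t clockid,
                                unsigned long state)
{
        /* ... set up the hrtimer sleeper and block in the given task state ... */
        return 0;
}

long hrtimer_nanosleep(struct timespec *rqtp, struct timespec __user *rmtp,
                       const enum hrtimer_mode mode, const clockid_t clockid)
{
        /* behaviour unchanged for normal callers */
        return __hrtimer_nanosleep(rqtp, rmtp, mode, clockid,
                                   TASK_INTERRUPTIBLE);
}

void cpu_chill(void)
{
        struct timespec tu = { .tv_nsec = NSEC_PER_MSEC };      /* assumed length */

        __hrtimer_nanosleep(&tu, NULL, HRTIMER_MODE_REL, CLOCK_MONOTONIC,
                            TASK_UNINTERRUPTIBLE);
}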

Cc: stable-rt@vger.kernel.org
Reported-by: Ulrich Obergfell <uobergfe@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years ago kernel/hrtimer: be non-freezable in cpu_chill()
Sebastian Andrzej Siewior [Wed, 19 Feb 2014 10:56:06 +0000 (11:56 +0100)]
kernel/hrtimer: be non-freezable in cpu_chill()

Since we replaced msleep() with an hrtimer I see now and then (rarely) this:

| [....] Waiting for /dev to be fully populated...
| =====================================
| [ BUG: udevd/229 still has locks held! ]
| 3.12.11-rt17 #23 Not tainted
| -------------------------------------
| 1 lock held by udevd/229:
|  #0:  (&type->i_mutex_dir_key#2){+.+.+.}, at: lookup_slow+0x28/0x98
|
| stack backtrace:
| CPU: 0 PID: 229 Comm: udevd Not tainted 3.12.11-rt17 #23
| (unwind_backtrace+0x0/0xf8) from (show_stack+0x10/0x14)
| (show_stack+0x10/0x14) from (dump_stack+0x74/0xbc)
| (dump_stack+0x74/0xbc) from (do_nanosleep+0x120/0x160)
| (do_nanosleep+0x120/0x160) from (hrtimer_nanosleep+0x90/0x110)
| (hrtimer_nanosleep+0x90/0x110) from (cpu_chill+0x30/0x38)
| (cpu_chill+0x30/0x38) from (dentry_kill+0x158/0x1ec)
| (dentry_kill+0x158/0x1ec) from (dput+0x74/0x15c)
| (dput+0x74/0x15c) from (lookup_real+0x4c/0x50)
| (lookup_real+0x4c/0x50) from (__lookup_hash+0x34/0x44)
| (__lookup_hash+0x34/0x44) from (lookup_slow+0x38/0x98)
| (lookup_slow+0x38/0x98) from (path_lookupat+0x208/0x7fc)
| (path_lookupat+0x208/0x7fc) from (filename_lookup+0x20/0x60)
| (filename_lookup+0x20/0x60) from (user_path_at_empty+0x50/0x7c)
| (user_path_at_empty+0x50/0x7c) from (user_path_at+0x14/0x1c)
| (user_path_at+0x14/0x1c) from (vfs_fstatat+0x48/0x94)
| (vfs_fstatat+0x48/0x94) from (SyS_stat64+0x14/0x30)
| (SyS_stat64+0x14/0x30) from (ret_fast_syscall+0x0/0x48)

For now I see no better way but to disable the freezer for the duration of the sleep.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agort: Make cpu_chill() use hrtimer instead of msleep()
Steven Rostedt [Wed, 5 Feb 2014 16:51:25 +0000 (11:51 -0500)]
rt: Make cpu_chill() use hrtimer instead of msleep()

Ulrich Obergfell pointed out that cpu_chill() calls msleep(), which is woken
up by ksoftirqd running the TIMER softirq. But as cpu_chill() is
called from softirq context, it may block ksoftirqd from running, in
which case the msleep() may never be woken up, causing the deadlock.

I checked the vmcore, and irq/74-qla2xxx is stuck in the msleep() call,
running on CPU 8. The one ksoftirqd that is stuck, happens to be the one that
runs on CPU 8, and it is blocked on a lock held by irq/74-qla2xxx. As that
ksoftirqd is the one that will wake up irq/74-qla2xxx, and it happens to be
blocked on a lock that irq/74-qla2xxx holds, we have our deadlock.

The solution is not to convert the cpu_chill() back to a cpu_relax() as that
will re-create a possible live lock that the cpu_chill() fixed earlier, and may
also leave this bug open on other softirqs. The fix is to remove the
dependency on ksoftirqd from cpu_chill(). That is, instead of calling
msleep() that requires ksoftirqd to wake it up, use the
hrtimer_nanosleep() code that does the wakeup from hard irq context.

|Looks to be the lock of the block softirq. I don't have the core dump
|anymore, but from what I could tell the ksoftirqd was blocked on the
|block softirq lock, where the block softirq handler did a msleep
|(called by the qla2xxx interrupt handler).
|
|Looking at trigger_softirq() in block/blk-softirq.c, it can do a
|smp_callfunction() to another cpu to run the block softirq. If that
|happens to be the cpu where the qla2xx irq handler is doing the block
|softirq and is in a middle of a msleep(), I believe the ksoftirqd will
|try to run the softirq. If it does that, then BOOM, it's deadlocked
|because the ksoftirqd will never run the timer softirq either.

|I should have also stated that it was only one lock that was involved.
|But the lock owner was doing a msleep() that requires a wakeup by
|ksoftirqd to continue. If ksoftirqd happens to be blocked on a lock
|held by the msleep() caller, then you have your deadlock.
|
|It's best not to have any softirqs going to sleep requiring another
|softirq to wake it up. Note, if we ever require a timer softirq to do a
|cpu_chill() it will most definitely hit this deadlock.

Cc: stable-rt@vger.kernel.org
Found-by: Ulrich Obergfell <uobergfe@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
[bigeasy: add the 4 | chapters from email]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agort: Introduce cpu_chill()
Thomas Gleixner [Wed, 7 Mar 2012 19:51:03 +0000 (20:51 +0100)]
rt: Introduce cpu_chill()

Retry loops on RT might loop forever when the modifying side was
preempted. Add cpu_chill() to replace cpu_relax(). cpu_chill()
defaults to cpu_relax() for non RT. On RT it puts the looping task to
sleep for a tick so the preempted task can make progress.
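One plausible shape for the helper (a sketch only; consistent with the later
commits in this log that first use msleep() and then switch it to an
hrtimer-based sleep):

#ifdef CONFIG_PREEMPT_RT_FULL
# define cpu_chill()    msleep(1)       /* sleep roughly one tick */
#else
# define cpu_chill()    cpu_relax()
#endif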

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
8 years agoblock/mq: don't complete requests via IPI
Sebastian Andrzej Siewior [Thu, 29 Jan 2015 14:10:08 +0000 (15:10 +0100)]
block/mq: don't complete requests via IPI

The IPI runs in hardirq context, and the completion path takes sleeping
locks. This patch moves the completion into a workqueue.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agoblock/mq: do not invoke preempt_disable()
Sebastian Andrzej Siewior [Sun, 13 Sep 2015 07:47:29 +0000 (09:47 +0200)]
block/mq: do not invoke preempt_disable()

preempt_disable() and get_cpu() don't play well with the sleeping
locks that are taken later.
It seems to be enough to replace them with get_cpu_light() and migrate_disable().
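For illustration, the substitution amounts to something like this (function
name hypothetical; get_cpu_light() is the -rt helper named above and
put_cpu_light() is assumed as its counterpart):

static int example_pick_ctx_cpu(void)
{
        int cpu;

        /* was get_cpu(): get_cpu_light() does not disable preemption,
         * it only keeps the task from migrating */
        cpu = get_cpu_light();
        /* ... per-CPU ctx work that may take sleeping locks on RT ... */
        put_cpu_light();

        return cpu;
}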

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agoblock: mq: use cpu_light()
Sebastian Andrzej Siewior [Wed, 9 Apr 2014 08:37:23 +0000 (10:37 +0200)]
block: mq: use cpu_light()

There is a might-sleep splat because get_cpu() disables preemption and
we later grab a lock. As a workaround for this we use get_cpu_light()
and an additional lock to prevent taking the same ctx.

There is already a lock member in the ctx, but there are some functions
which do ++ on that member; this works with irqs off, but on RT we would
need the extra lock.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agomm-vmalloc.patch
Thomas Gleixner [Tue, 12 Jul 2011 09:39:36 +0000 (11:39 +0200)]
mm-vmalloc.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agoepoll.patch
Thomas Gleixner [Fri, 8 Jul 2011 14:35:35 +0000 (16:35 +0200)]
epoll.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years ago thermal: Defer thermal wakeups to threads
Daniel Wagner [Tue, 17 Feb 2015 08:37:44 +0000 (09:37 +0100)]
thermal: Defer thermal wakeups to threads

On RT the spin lock in pkg_temp_thermal_platform_thermal_notify() will
call schedule() while we run in irq context.

[<ffffffff816850ac>] dump_stack+0x4e/0x8f
[<ffffffff81680f7d>] __schedule_bug+0xa6/0xb4
[<ffffffff816896b4>] __schedule+0x5b4/0x700
[<ffffffff8168982a>] schedule+0x2a/0x90
[<ffffffff8168a8b5>] rt_spin_lock_slowlock+0xe5/0x2d0
[<ffffffff8168afd5>] rt_spin_lock+0x25/0x30
[<ffffffffa03a7b75>] pkg_temp_thermal_platform_thermal_notify+0x45/0x134 [x86_pkg_temp_thermal]
[<ffffffff8103d4db>] ? therm_throt_process+0x1b/0x160
[<ffffffff8103d831>] intel_thermal_interrupt+0x211/0x250
[<ffffffff8103d8c1>] smp_thermal_interrupt+0x21/0x40
[<ffffffff8169415d>] thermal_interrupt+0x6d/0x80

Let's defer the work to a kthread.

Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
[bigeasy: reorder init/deinit position. TODO: flush swork on exit]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agox86: UV: raw_spinlock conversion
Mike Galbraith [Sun, 2 Nov 2014 07:31:37 +0000 (08:31 +0100)]
x86: UV: raw_spinlock conversion

Shrug.  Lots of hobbyists have a beast in their basement, right?

Cc: stable-rt@vger.kernel.org
Signed-off-by: Mike Galbraith <mgalbraith@suse.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agox86: Use generic rwsem_spinlocks on -rt
Thomas Gleixner [Sun, 26 Jul 2009 00:21:32 +0000 (02:21 +0200)]
x86: Use generic rwsem_spinlocks on -rt

Simplifies the separation of anon_rw_semaphores and rw_semaphores for
-rt.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agox86: stackprotector: Avoid random pool on rt
Thomas Gleixner [Thu, 16 Dec 2010 13:25:18 +0000 (14:25 +0100)]
x86: stackprotector: Avoid random pool on rt

CPU bringup calls into the random pool to initialize the stack
canary. During boot that works nicely even on RT as the might sleep
checks are disabled. During CPU hotplug the might sleep checks
trigger. Making the locks in random raw is a major PITA, so avoiding the
call on RT is the only sensible solution. This is basically the same
randomness which we get during boot, where the random pool has no
entropy and we rely on the TSC randomness.

Reported-by: Carsten Emde <carsten.emde@osadl.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agox86/mce: use swait queue for mce wakeups
Steven Rostedt [Fri, 27 Feb 2015 14:20:37 +0000 (15:20 +0100)]
x86/mce: use swait queue for mce wakeups

We had a customer report a lockup on a 3.0-rt kernel that had the
following backtrace:

[ffff88107fca3e80] rt_spin_lock_slowlock at ffffffff81499113
[ffff88107fca3f40] rt_spin_lock at ffffffff81499a56
[ffff88107fca3f50] __wake_up at ffffffff81043379
[ffff88107fca3f80] mce_notify_irq at ffffffff81017328
[ffff88107fca3f90] intel_threshold_interrupt at ffffffff81019508
[ffff88107fca3fa0] smp_threshold_interrupt at ffffffff81019fc1
[ffff88107fca3fb0] threshold_interrupt at ffffffff814a1853

It actually bugged because the lock was taken by the same owner that
already had that lock. What happened was the thread that was setting
itself on a wait queue had the lock when an MCE triggered. The MCE
interrupt does a wake up on its wait list and grabs the same lock.

NOTE: THIS IS NOT A BUG ON MAINLINE

Sorry for yelling, but as I Cc'd mainline maintainers I want them to
know that this is a PREEMPT_RT-only bug. I only Cc'd them for advice.

On PREEMPT_RT the wait queue locks are converted from normal
"spin_locks" into an rt_mutex (see the rt_spin_lock_slowlock above).
These are not to be taken in hard interrupt context. This usually isn't
a problem as almost all interrupts in PREEMPT_RT are converted into
schedulable threads. Unfortunately that's not the case with the MCE irq.

As wait queue locks are notorious for long hold times, we can not
convert them to raw_spin_locks without causing issues with -rt. But
Thomas has created a "simple-wait" structure that uses raw spin locks
which may have been a good fit.

Unfortunately, wait queues are not the only issue, as the mce_notify_irq
also does a schedule_work(), which grabs the workqueue spin locks that
have the exact same issue.

Thus, this patch I'm proposing is to move the actual work of the MCE
interrupt into a helper thread that gets woken up on the MCE interrupt
and does the work in a schedulable context.

NOTE: THIS PATCH ONLY CHANGES THE BEHAVIOR WHEN PREEMPT_RT IS SET

Oops, sorry for yelling again, but I want to stress that I keep the same
behavior of mainline when PREEMPT_RT is not set. Thus, this only changes
the MCE behavior when PREEMPT_RT is configured.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
[bigeasy@linutronix: make mce_notify_work() a proper prototype, use
     kthread_run()]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
[wagi: use work-simple framework to defer work to a kthread]
Signed-off-by: Daniel Wagner <daniel.wagner@bmw-carit.de>
8 years agox86: Convert mce timer to hrtimer
Thomas Gleixner [Mon, 13 Dec 2010 15:33:39 +0000 (16:33 +0100)]
x86: Convert mce timer to hrtimer

mce_timer is started in the atomic contexts of cpu bringup. This results
in might_sleep() warnings on RT. Convert mce_timer to an hrtimer to
avoid this.
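A sketch of the conversion pattern (names carrying the _hrtimer suffix are
hypothetical; check_interval is assumed to be the existing MCE poll interval
in seconds):

static struct hrtimer mce_hrtimer;

static enum hrtimer_restart mce_hrtimer_fn(struct hrtimer *timer)
{
        /* ... poll the MCE banks as the old timer callback did ... */
        hrtimer_forward_now(timer,
                            ns_to_ktime((u64)check_interval * NSEC_PER_SEC));
        return HRTIMER_RESTART;
}

static void mce_start_hrtimer(void)
{
        hrtimer_init(&mce_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        mce_hrtimer.function = mce_hrtimer_fn;
        /* the u64 cast matches the 32-bit-overflow note folded in below */
        hrtimer_start(&mce_hrtimer,
                      ns_to_ktime((u64)check_interval * NSEC_PER_SEC),
                      HRTIMER_MODE_REL);
}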

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
fold in:
|From: Mike Galbraith <bitbucket@online.de>
|Date: Wed, 29 May 2013 13:52:13 +0200
|Subject: [PATCH] x86/mce: fix mce timer interval
|
|Seems mce timer fire at the wrong frequency in -rt kernels since roughly
|forever due to 32 bit overflow.  3.8-rt is also missing a multiplier.
|
|Add missing us -> ns conversion and 32 bit overflow prevention.
|
|Signed-off-by: Mike Galbraith <bitbucket@online.de>
|[bigeasy: use ULL instead of u64 cast]
|Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

8 years agoxfs: Disable percpu SB on PREEMPT_RT_FULL
Steven Rostedt [Wed, 13 May 2015 15:36:32 +0000 (11:36 -0400)]
xfs: Disable percpu SB on PREEMPT_RT_FULL

Running a test on a large CPU count box with xfs, I hit a live lock
with the following backtraces on several CPUs:

 Call Trace:
  [<ffffffff812c34f8>] __const_udelay+0x28/0x30
  [<ffffffffa033ab9a>] xfs_icsb_lock_cntr+0x2a/0x40 [xfs]
  [<ffffffffa033c871>] xfs_icsb_modify_counters+0x71/0x280 [xfs]
  [<ffffffffa03413e1>] xfs_trans_reserve+0x171/0x210 [xfs]
  [<ffffffffa0378cfd>] xfs_create+0x24d/0x6f0 [xfs]
  [<ffffffff8124c8eb>] ? avc_has_perm_flags+0xfb/0x1e0
  [<ffffffffa0336eeb>] xfs_vn_mknod+0xbb/0x1e0 [xfs]
  [<ffffffffa0337043>] xfs_vn_create+0x13/0x20 [xfs]
  [<ffffffff811b0edd>] vfs_create+0xcd/0x130
  [<ffffffff811b21ef>] do_last+0xb8f/0x1240
  [<ffffffff811b39b2>] path_openat+0xc2/0x490

Looking at the code I see it was stuck at:

STATIC void
xfs_icsb_lock_cntr(
        xfs_icsb_cnts_t *icsbp)
{
        while (test_and_set_bit(XFS_ICSB_FLAG_LOCK, &icsbp->icsb_flags)) {
                ndelay(1000);
        }
}

In xfs_icsb_modify_counters() the code is fine. There's a
preempt_disable() called when taking this bit spinlock and a
preempt_enable() after it is released. The issue is that not all
locations are protected by preempt_disable() when PREEMPT_RT is set.
Namely the places that grab all CPU cntr locks.

STATIC void
xfs_icsb_lock_all_counters(
        xfs_mount_t     *mp)
{
        xfs_icsb_cnts_t *cntp;
        int             i;

        for_each_online_cpu(i) {
                cntp = (xfs_icsb_cnts_t *)per_cpu_ptr(mp->m_sb_cnts, i);
                xfs_icsb_lock_cntr(cntp);
        }
}

STATIC void
xfs_icsb_disable_counter()
{
        [...]
        xfs_icsb_lock_all_counters(mp);
        [...]
        xfs_icsb_unlock_all_counters(mp);
}

STATIC void
xfs_icsb_balance_counter_locked()
{
        [...]
        xfs_icsb_disable_counter();
        [...]
}

STATIC void
xfs_icsb_balance_counter(
        xfs_mount_t     *mp,
        xfs_sb_field_t  fields,
        int             min_per_cpu)
{
        spin_lock(&mp->m_sb_lock);
        xfs_icsb_balance_counter_locked(mp, fields, min_per_cpu);
        spin_unlock(&mp->m_sb_lock);
}

Now, when PREEMPT_RT is not enabled, that spin_lock() disables
preemption. But for PREEMPT_RT, it does not. With my test box I
was not able to capture the state of all tasks, but I'm assuming that
some task called xfs_icsb_lock_all_counters() and was preempted by
an RT task and could not finish, causing all callers of that lock to
block indefinitely.

Dave Chinner has stated that the scalability of that code will probably
be negated by PREEMPT_RT, and that it is probably best to just disable
the code in question. Also, this code has been rewritten in newer kernels.

Link: http://lkml.kernel.org/r/20150504004844.GA21261@dastard
Suggested-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
8 years agofs/aio: simple simple work
Sebastian Andrzej Siewior [Mon, 16 Feb 2015 17:49:10 +0000 (18:49 +0100)]
fs/aio: simple simple work

|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:768
|in_atomic(): 1, irqs_disabled(): 0, pid: 26, name: rcuos/2
|2 locks held by rcuos/2/26:
| #0:  (rcu_callback){.+.+..}, at: [<ffffffff810b1a12>] rcu_nocb_kthread+0x1e2/0x380
| #1:  (rcu_read_lock_sched){.+.+..}, at: [<ffffffff812acd26>] percpu_ref_kill_rcu+0xa6/0x1c0
|Preemption disabled at:[<ffffffff810b1a93>] rcu_nocb_kthread+0x263/0x380
|Call Trace:
| [<ffffffff81582e9e>] dump_stack+0x4e/0x9c
| [<ffffffff81077aeb>] __might_sleep+0xfb/0x170
| [<ffffffff81589304>] rt_spin_lock+0x24/0x70
| [<ffffffff811c5790>] free_ioctx_users+0x30/0x130
| [<ffffffff812ace34>] percpu_ref_kill_rcu+0x1b4/0x1c0
| [<ffffffff810b1a93>] rcu_nocb_kthread+0x263/0x380
| [<ffffffff8106e046>] kthread+0xd6/0xf0
| [<ffffffff81591eec>] ret_from_fork+0x7c/0xb0

Replace this with the preempt_disable()-friendly swork.

Reported-By: Mike Galbraith <umgwanakikbuti@gmail.com>
Suggested-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agofs: jbd2: pull your plug when waiting for space
Sebastian Andrzej Siewior [Mon, 17 Feb 2014 16:30:03 +0000 (17:30 +0100)]
fs: jbd2: pull your plug when waiting for space

Two cp processes running in parallel managed to stall the ext4 fs. It seems
that the journal code is either waiting for locks or sleeping, waiting for
something to happen. This seems similar to what Mike observed on ext3;
here is his description:

|With an -rt kernel, and a heavy sync IO load, tasks can jam
|up on journal locks without unplugging, which can lead to
|terminal IO starvation.  Unplug and schedule when waiting
|for space.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agofs, jbd: pull your plug when waiting for space
Mike Galbraith [Wed, 11 Jul 2012 22:05:20 +0000 (22:05 +0000)]
fs, jbd: pull your plug when waiting for space

With an -rt kernel, and a heavy sync IO load, tasks can jam
up on journal locks without unplugging, which can lead to
terminal IO starvation.  Unplug and schedule when waiting for space.
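For illustration, the change boils down to issuing the task's plugged IO
before sleeping (sketch only, using the 3.x block-layer helper, not the
actual hunk):

static void example_wait_for_space(wait_queue_head_t *wq)
{
        DEFINE_WAIT(wait);

        prepare_to_wait(wq, &wait, TASK_UNINTERRUPTIBLE);
        if (current->plug)
                blk_flush_plug(current);        /* unplug: issue queued IO */
        schedule();
        finish_wait(wq, &wait);
}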

Signed-off-by: Mike Galbraith <mgalbraith@suse.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Theodore Tso <tytso@mit.edu>
Link: http://lkml.kernel.org/r/1341812414.7370.73.camel@marge.simpson.net
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agofs: ntfs: disable interrupt only on !RT
Mike Galbraith [Fri, 3 Jul 2009 13:44:12 +0000 (08:44 -0500)]
fs: ntfs: disable interrupt only on !RT

On Sat, 2007-10-27 at 11:44 +0200, Ingo Molnar wrote:
> * Nick Piggin <nickpiggin@yahoo.com.au> wrote:
>
> > > [10138.175796]  [<c0105de3>] show_trace+0x12/0x14
> > > [10138.180291]  [<c0105dfb>] dump_stack+0x16/0x18
> > > [10138.184769]  [<c011609f>] native_smp_call_function_mask+0x138/0x13d
> > > [10138.191117]  [<c0117606>] smp_call_function+0x1e/0x24
> > > [10138.196210]  [<c012f85c>] on_each_cpu+0x25/0x50
> > > [10138.200807]  [<c0115c74>] flush_tlb_all+0x1e/0x20
> > > [10138.205553]  [<c016caaf>] kmap_high+0x1b6/0x417
> > > [10138.210118]  [<c011ec88>] kmap+0x4d/0x4f
> > > [10138.214102]  [<c026a9d8>] ntfs_end_buffer_async_read+0x228/0x2f9
> > > [10138.220163]  [<c01a0e9e>] end_bio_bh_io_sync+0x26/0x3f
> > > [10138.225352]  [<c01a2b09>] bio_endio+0x42/0x6d
> > > [10138.229769]  [<c02c2a08>] __end_that_request_first+0x115/0x4ac
> > > [10138.235682]  [<c02c2da7>] end_that_request_chunk+0x8/0xa
> > > [10138.241052]  [<c0365943>] ide_end_request+0x55/0x10a
> > > [10138.246058]  [<c036dae3>] ide_dma_intr+0x6f/0xac
> > > [10138.250727]  [<c0366d83>] ide_intr+0x93/0x1e0
> > > [10138.255125]  [<c015afb4>] handle_IRQ_event+0x5c/0xc9
> >
> > Looks like ntfs is kmap()ing from interrupt context. Should be using
> > kmap_atomic instead, I think.
>
> it's not atomic interrupt context but irq thread context - and -rt
> remaps kmap_atomic() to kmap() internally.

Hm.  Looking at the change to mm/bounce.c, perhaps I should do this
instead?

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agofs-block-rt-support.patch
Thomas Gleixner [Tue, 14 Jun 2011 15:05:09 +0000 (17:05 +0200)]
fs-block-rt-support.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agomm: Protect activate_mm() by preempt_[disable&enable]_rt()
Yong Zhang [Tue, 15 May 2012 05:53:56 +0000 (13:53 +0800)]
mm: Protect activate_mm() by preempt_[disable&enable]_rt()

Use preempt_*_rt instead of local_irq_*_rt, otherwise there will be a
warning on ARM like the one below:

WARNING: at build/linux/kernel/smp.c:459 smp_call_function_many+0x98/0x264()
Modules linked in:
[<c0013bb4>] (unwind_backtrace+0x0/0xe4) from [<c001be94>] (warn_slowpath_common+0x4c/0x64)
[<c001be94>] (warn_slowpath_common+0x4c/0x64) from [<c001bec4>] (warn_slowpath_null+0x18/0x1c)
[<c001bec4>] (warn_slowpath_null+0x18/0x1c) from [<c0053ff8>](smp_call_function_many+0x98/0x264)
[<c0053ff8>] (smp_call_function_many+0x98/0x264) from [<c0054364>] (smp_call_function+0x44/0x6c)
[<c0054364>] (smp_call_function+0x44/0x6c) from [<c0017d50>] (__new_context+0xbc/0x124)
[<c0017d50>] (__new_context+0xbc/0x124) from [<c009e49c>] (flush_old_exec+0x460/0x5e4)
[<c009e49c>] (flush_old_exec+0x460/0x5e4) from [<c00d61ac>] (load_elf_binary+0x2e0/0x11ac)
[<c00d61ac>] (load_elf_binary+0x2e0/0x11ac) from [<c009d060>] (search_binary_handler+0x94/0x2a4)
[<c009d060>] (search_binary_handler+0x94/0x2a4) from [<c009e8fc>] (do_execve+0x254/0x364)
[<c009e8fc>] (do_execve+0x254/0x364) from [<c0010e84>] (sys_execve+0x34/0x54)
[<c0010e84>] (sys_execve+0x34/0x54) from [<c000da00>] (ret_fast_syscall+0x0/0x30)
---[ end trace 0000000000000002 ]---

The reason is that ARM needs irqs enabled when doing activate_mm().
According to mm-protect-activate-switch-mm.patch,
preempt_[disable|enable]_rt() is actually sufficient.
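Concretely, the guarded call looks roughly like this (the _rt variants
compile away when PREEMPT_RT_FULL is not set):

        preempt_disable_rt();
        activate_mm(active_mm, mm);
        preempt_enable_rt();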

Inspired-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1337061236-1766-1-git-send-email-yong.zhang0@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agofs: namespace preemption fix
Thomas Gleixner [Sun, 19 Jul 2009 13:44:27 +0000 (08:44 -0500)]
fs: namespace preemption fix

On RT we cannot loop with preemption disabled here as
mnt_make_readonly() might have been preempted. We can safely enable
preemption while waiting for MNT_WRITE_HOLD to be cleared. Safe on !RT
as well.
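A sketch of the resulting wait (the surrounding mnt_want_write() context is
assumed):

        while (ACCESS_ONCE(mnt->mnt_flags) & MNT_WRITE_HOLD) {
                preempt_enable();       /* safe on !RT as well, see above */
                cpu_relax();
                preempt_disable();
        }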

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agort: Improve the serial console PASS_LIMIT
Ingo Molnar [Wed, 14 Dec 2011 12:05:54 +0000 (13:05 +0100)]
rt: Improve the serial console PASS_LIMIT

Beyond the warning:

 drivers/tty/serial/8250/8250.c:1613:6: warning: unused variable ‘pass_counter’ [-Wunused-variable]

the solution of just looping infinitely was ugly - up it to 1 million to
give it a chance to continue in some really ugly situation.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agodrivers-tty-pl011-irq-disable-madness.patch
Thomas Gleixner [Tue, 8 Jan 2013 20:36:51 +0000 (21:36 +0100)]
drivers-tty-pl011-irq-disable-madness.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agodrivers-tty-fix-omap-lock-crap.patch
Thomas Gleixner [Thu, 28 Jul 2011 11:32:57 +0000 (13:32 +0200)]
drivers-tty-fix-omap-lock-crap.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years ago stomp-machine: use lg_global_trylock_relax() to deal with stop_cpus_lock lglock
Mike Galbraith [Fri, 2 May 2014 11:13:34 +0000 (13:13 +0200)]
stomp-machine: use lg_global_trylock_relax() to deal with stop_cpus_lock lglock

If the stop machinery is called from an inactive CPU we cannot use
lg_global_lock(), because some other stomp-machine invocation might be
in progress and the lock can be contended.  We cannot schedule from this
context, so use the lovely new lg_global_trylock_relax() primitive to
do what we used to do via one mutex_trylock()/cpu_relax() loop.  We
now do that trylock()/relax() across an entire herd of locks. Joy.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agostomp-machine: create lg_global_trylock_relax() primitive
Mike Galbraith [Fri, 2 May 2014 11:13:22 +0000 (13:13 +0200)]
stomp-machine: create lg_global_trylock_relax() primitive

Create lg_global_trylock_relax() for use by stopper thread when it cannot
schedule, to deal with stop_cpus_lock, which is now an lglock.

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agolglocks-rt.patch
Thomas Gleixner [Wed, 15 Jun 2011 09:02:21 +0000 (11:02 +0200)]
lglocks-rt.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agorcutree/rcu_bh_qs: disable irq while calling rcu_preempt_qs()
Tiejun Chen [Wed, 18 Dec 2013 09:51:49 +0000 (17:51 +0800)]
rcutree/rcu_bh_qs: disable irq while calling rcu_preempt_qs()

Any caller of rcu_preempt_qs() must disable irqs in
order to protect the assignment to ->rcu_read_unlock_special. In
the RT case, rcu_bh_qs(), as the wrapper of rcu_preempt_qs(), is called
in some scenarios where irqs are enabled, like this path:

do_single_softirq()
    |
    + local_irq_enable();
    + handle_softirq()
    |    |
    |    + rcu_bh_qs()
    |        |
    |        + rcu_preempt_qs()
    |
    + local_irq_disable()

So here we'd better disable irqs directly inside rcu_bh_qs() to
fix this, otherwise the kernel may sometimes freeze, as
observed. This way is also safe for any
potential rcu_bh_qs() usage elsewhere in the future.
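A minimal sketch of the fix (the rcu_bh_qs() signature follows the 3.x tree
this series is based on):

void rcu_bh_qs(int cpu)
{
        unsigned long flags;

        /* callers may run with irqs enabled on RT, so protect the
         * ->rcu_read_unlock_special update ourselves */
        local_irq_save(flags);
        rcu_preempt_qs(cpu);
        local_irq_restore(flags);
}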

Cc: stable-rt@vger.kernel.org
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Bin Jiang <bin.jiang@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agorcu: Make ksoftirqd do RCU quiescent states
Paul E. McKenney [Wed, 5 Oct 2011 18:45:18 +0000 (11:45 -0700)]
rcu: Make ksoftirqd do RCU quiescent states

Implementing RCU-bh in terms of RCU-preempt makes the system vulnerable
to network-based denial-of-service attacks.  This patch therefore
makes __do_softirq() invoke rcu_bh_qs(), but only when __do_softirq()
is running in ksoftirqd context.  A wrapper layer is interposed so that
other calls to __do_softirq() avoid invoking rcu_bh_qs().  The underlying
function __do_softirq_common() does the actual work.

The reason that rcu_bh_qs() is bad in these non-ksoftirqd contexts is
that there might be a local_bh_enable() inside an RCU-preempt read-side
critical section.  This local_bh_enable() can invoke __do_softirq()
directly, so if __do_softirq() were to invoke rcu_bh_qs() (which just
calls rcu_preempt_qs() in the PREEMPT_RT_FULL case), there would be
an illegal RCU-preempt quiescent state in the middle of an RCU-preempt
read-side critical section.  Therefore, quiescent states can only happen
in cases where __do_softirq() is invoked directly from ksoftirqd.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20111005184518.GA21601@linux.vnet.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agorcu-more-fallout.patch
Thomas Gleixner [Mon, 14 Nov 2011 09:57:54 +0000 (10:57 +0100)]
rcu-more-fallout.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agorcu: Merge RCU-bh into RCU-preempt
Thomas Gleixner [Wed, 5 Oct 2011 18:59:38 +0000 (11:59 -0700)]
rcu: Merge RCU-bh into RCU-preempt

The Linux kernel has long RCU-bh read-side critical sections that
intolerably increase scheduling latency under mainline's RCU-bh rules,
which include RCU-bh read-side critical sections being non-preemptible.
This patch therefore arranges for RCU-bh to be implemented in terms of
RCU-preempt for CONFIG_PREEMPT_RT_FULL=y.
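On the update side, "implemented in terms of RCU-preempt" amounts to
aliases of roughly this shape (a sketch; the actual patch also covers the
read side and tracing hooks):

#ifdef CONFIG_PREEMPT_RT_FULL
# define call_rcu_bh            call_rcu
# define synchronize_rcu_bh     synchronize_rcu
# define rcu_barrier_bh         rcu_barrier
#endif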

This has the downside of defeating the purpose of RCU-bh, namely,
handling the case where the system is subjected to a network-based
denial-of-service attack that keeps at least one CPU doing full-time
softirq processing.  This issue will be fixed by a later commit.

The current commit will need some work to make it appropriate for
mainline use, for example, it needs to be extended to cover Tiny RCU.

[ paulmck: Added a useful changelog ]

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/20111005185938.GA20403@linux.vnet.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agorcu: Frob softirq test
Peter Zijlstra [Fri, 12 Aug 2011 22:23:17 +0000 (00:23 +0200)]
rcu: Frob softirq test

With RT_FULL we get the below wreckage:

[  126.060484] =======================================================
[  126.060486] [ INFO: possible circular locking dependency detected ]
[  126.060489] 3.0.1-rt10+ #30
[  126.060490] -------------------------------------------------------
[  126.060492] irq/24-eth0/1235 is trying to acquire lock:
[  126.060495]  (&(lock)->wait_lock#2){+.+...}, at: [<ffffffff81501c81>] rt_mutex_slowunlock+0x16/0x55
[  126.060503]
[  126.060504] but task is already holding lock:
[  126.060506]  (&p->pi_lock){-...-.}, at: [<ffffffff81074fdc>] try_to_wake_up+0x35/0x429
[  126.060511]
[  126.060511] which lock already depends on the new lock.
[  126.060513]
[  126.060514]
[  126.060514] the existing dependency chain (in reverse order) is:
[  126.060516]
[  126.060516] -> #1 (&p->pi_lock){-...-.}:
[  126.060519]        [<ffffffff810afe9e>] lock_acquire+0x145/0x18a
[  126.060524]        [<ffffffff8150291e>] _raw_spin_lock_irqsave+0x4b/0x85
[  126.060527]        [<ffffffff810b5aa4>] task_blocks_on_rt_mutex+0x36/0x20f
[  126.060531]        [<ffffffff815019bb>] rt_mutex_slowlock+0xd1/0x15a
[  126.060534]        [<ffffffff81501ae3>] rt_mutex_lock+0x2d/0x2f
[  126.060537]        [<ffffffff810d9020>] rcu_boost+0xad/0xde
[  126.060541]        [<ffffffff810d90ce>] rcu_boost_kthread+0x7d/0x9b
[  126.060544]        [<ffffffff8109a760>] kthread+0x99/0xa1
[  126.060547]        [<ffffffff81509b14>] kernel_thread_helper+0x4/0x10
[  126.060551]
[  126.060552] -> #0 (&(lock)->wait_lock#2){+.+...}:
[  126.060555]        [<ffffffff810af1b8>] __lock_acquire+0x1157/0x1816
[  126.060558]        [<ffffffff810afe9e>] lock_acquire+0x145/0x18a
[  126.060561]        [<ffffffff8150279e>] _raw_spin_lock+0x40/0x73
[  126.060564]        [<ffffffff81501c81>] rt_mutex_slowunlock+0x16/0x55
[  126.060566]        [<ffffffff81501ce7>] rt_mutex_unlock+0x27/0x29
[  126.060569]        [<ffffffff810d9f86>] rcu_read_unlock_special+0x17e/0x1c4
[  126.060573]        [<ffffffff810da014>] __rcu_read_unlock+0x48/0x89
[  126.060576]        [<ffffffff8106847a>] select_task_rq_rt+0xc7/0xd5
[  126.060580]        [<ffffffff8107511c>] try_to_wake_up+0x175/0x429
[  126.060583]        [<ffffffff81075425>] wake_up_process+0x15/0x17
[  126.060585]        [<ffffffff81080a51>] wakeup_softirqd+0x24/0x26
[  126.060590]        [<ffffffff81081df9>] irq_exit+0x49/0x55
[  126.060593]        [<ffffffff8150a3bd>] smp_apic_timer_interrupt+0x8a/0x98
[  126.060597]        [<ffffffff81509793>] apic_timer_interrupt+0x13/0x20
[  126.060600]        [<ffffffff810d5952>] irq_forced_thread_fn+0x1b/0x44
[  126.060603]        [<ffffffff810d582c>] irq_thread+0xde/0x1af
[  126.060606]        [<ffffffff8109a760>] kthread+0x99/0xa1
[  126.060608]        [<ffffffff81509b14>] kernel_thread_helper+0x4/0x10
[  126.060611]
[  126.060612] other info that might help us debug this:
[  126.060614]
[  126.060615]  Possible unsafe locking scenario:
[  126.060616]
[  126.060617]        CPU0                    CPU1
[  126.060619]        ----                    ----
[  126.060620]   lock(&p->pi_lock);
[  126.060623]                                lock(&(lock)->wait_lock);
[  126.060625]                                lock(&p->pi_lock);
[  126.060627]   lock(&(lock)->wait_lock);
[  126.060629]
[  126.060629]  *** DEADLOCK ***
[  126.060630]
[  126.060632] 1 lock held by irq/24-eth0/1235:
[  126.060633]  #0:  (&p->pi_lock){-...-.}, at: [<ffffffff81074fdc>] try_to_wake_up+0x35/0x429
[  126.060638]
[  126.060638] stack backtrace:
[  126.060641] Pid: 1235, comm: irq/24-eth0 Not tainted 3.0.1-rt10+ #30
[  126.060643] Call Trace:
[  126.060644]  <IRQ>  [<ffffffff810acbde>] print_circular_bug+0x289/0x29a
[  126.060651]  [<ffffffff810af1b8>] __lock_acquire+0x1157/0x1816
[  126.060655]  [<ffffffff810ab3aa>] ? trace_hardirqs_off_caller+0x1f/0x99
[  126.060658]  [<ffffffff81501c81>] ? rt_mutex_slowunlock+0x16/0x55
[  126.060661]  [<ffffffff810afe9e>] lock_acquire+0x145/0x18a
[  126.060664]  [<ffffffff81501c81>] ? rt_mutex_slowunlock+0x16/0x55
[  126.060668]  [<ffffffff8150279e>] _raw_spin_lock+0x40/0x73
[  126.060671]  [<ffffffff81501c81>] ? rt_mutex_slowunlock+0x16/0x55
[  126.060674]  [<ffffffff810d9655>] ? rcu_report_qs_rsp+0x87/0x8c
[  126.060677]  [<ffffffff81501c81>] rt_mutex_slowunlock+0x16/0x55
[  126.060680]  [<ffffffff810d9ea3>] ? rcu_read_unlock_special+0x9b/0x1c4
[  126.060683]  [<ffffffff81501ce7>] rt_mutex_unlock+0x27/0x29
[  126.060687]  [<ffffffff810d9f86>] rcu_read_unlock_special+0x17e/0x1c4
[  126.060690]  [<ffffffff810da014>] __rcu_read_unlock+0x48/0x89
[  126.060693]  [<ffffffff8106847a>] select_task_rq_rt+0xc7/0xd5
[  126.060696]  [<ffffffff810683da>] ? select_task_rq_rt+0x27/0xd5
[  126.060701]  [<ffffffff810a852a>] ? clockevents_program_event+0x8e/0x90
[  126.060704]  [<ffffffff8107511c>] try_to_wake_up+0x175/0x429
[  126.060708]  [<ffffffff810a95dc>] ? tick_program_event+0x1f/0x21
[  126.060711]  [<ffffffff81075425>] wake_up_process+0x15/0x17
[  126.060715]  [<ffffffff81080a51>] wakeup_softirqd+0x24/0x26
[  126.060718]  [<ffffffff81081df9>] irq_exit+0x49/0x55
[  126.060721]  [<ffffffff8150a3bd>] smp_apic_timer_interrupt+0x8a/0x98
[  126.060724]  [<ffffffff81509793>] apic_timer_interrupt+0x13/0x20
[  126.060726]  <EOI>  [<ffffffff81072855>] ? migrate_disable+0x75/0x12d
[  126.060733]  [<ffffffff81080a61>] ? local_bh_disable+0xe/0x1f
[  126.060736]  [<ffffffff81080a70>] ? local_bh_disable+0x1d/0x1f
[  126.060739]  [<ffffffff810d5952>] irq_forced_thread_fn+0x1b/0x44
[  126.060742]  [<ffffffff81502ac0>] ? _raw_spin_unlock_irq+0x3b/0x59
[  126.060745]  [<ffffffff810d582c>] irq_thread+0xde/0x1af
[  126.060748]  [<ffffffff810d5937>] ? irq_thread_fn+0x3a/0x3a
[  126.060751]  [<ffffffff810d574e>] ? irq_finalize_oneshot+0xd1/0xd1
[  126.060754]  [<ffffffff810d574e>] ? irq_finalize_oneshot+0xd1/0xd1
[  126.060757]  [<ffffffff8109a760>] kthread+0x99/0xa1
[  126.060761]  [<ffffffff81509b14>] kernel_thread_helper+0x4/0x10
[  126.060764]  [<ffffffff81069ed7>] ? finish_task_switch+0x87/0x10a
[  126.060768]  [<ffffffff81502ec4>] ? retint_restore_args+0xe/0xe
[  126.060771]  [<ffffffff8109a6c7>] ? __init_kthread_worker+0x8c/0x8c
[  126.060774]  [<ffffffff81509b10>] ? gs_change+0xb/0xb

Because irq_exit() does:

void irq_exit(void)
{
	account_system_vtime(current);
	trace_hardirq_exit();
	sub_preempt_count(IRQ_EXIT_OFFSET);
	if (!in_interrupt() && local_softirq_pending())
		invoke_softirq();

	...
}

Which triggers a wakeup, which uses RCU. Now if the interrupted task has
t->rcu_read_unlock_special set, the RCU usage from the wakeup will end
up in rcu_read_unlock_special(). rcu_read_unlock_special() will test
for in_irq(), which will fail as we just decremented preempt_count
with IRQ_EXIT_OFFSET, and for in_serving_softirq(), which for
PREEMPT_RT_FULL reads:

int in_serving_softirq(void)
{
	int res;

	preempt_disable();
	res = __get_cpu_var(local_softirq_runner) == current;
	preempt_enable();
	return res;
}

Which will thus also fail, resulting in the above wreckage.

The 'somewhat' ugly solution is to open-code the preempt_count() test
in rcu_read_unlock_special().

Also, we're not at all sure how ->rcu_read_unlock_special gets set
here... so this is very likely a bandaid and more thought is required.

Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
8 years agoRevert "timers: do not raise softirq unconditionally"
Sebastian Andrzej Siewior [Wed, 28 Jan 2015 13:10:02 +0000 (14:10 +0100)]
Revert "timers: do not raise softirq unconditionally"

The patch I revert here triggers the hrtimer switch from hardirq instead
of from softirq. As a result we get a periodic interrupt before the
switch is complete (that is, a hrtimer has been programmed) and so the
tick still programs periodic mode. Since the timer has been shut down,
dev->next_event is set to max and the next increment makes it negative.
And now we wait…

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agotimer: do not spin_trylock() on UP
Sebastian Andrzej Siewior [Fri, 2 May 2014 19:31:50 +0000 (21:31 +0200)]
timer: do not spin_trylock() on UP

This avoids a warning coming from the spin-lock debugging code. The
lock-avoidance idea is from Steven Rostedt.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agortmutex: use a trylock for waiter lock in trylock
Sebastian Andrzej Siewior [Fri, 15 Nov 2013 14:46:50 +0000 (15:46 +0100)]
rtmutex: use a trylock for waiter lock in trylock

Mike Galbraith captured the following:
| >#11 [ffff88017b243e90] _raw_spin_lock at ffffffff815d2596
| >#12 [ffff88017b243e90] rt_mutex_trylock at ffffffff815d15be
| >#13 [ffff88017b243eb0] get_next_timer_interrupt at ffffffff81063b42
| >#14 [ffff88017b243f00] tick_nohz_stop_sched_tick at ffffffff810bd1fd
| >#15 [ffff88017b243f70] tick_nohz_irq_exit at ffffffff810bd7d2
| >#16 [ffff88017b243f90] irq_exit at ffffffff8105b02d
| >#17 [ffff88017b243fb0] reschedule_interrupt at ffffffff815db3dd
| >--- <IRQ stack> ---
| >#18 [ffff88017a2a9bc8] reschedule_interrupt at ffffffff815db3dd
| >    [exception RIP: task_blocks_on_rt_mutex+51]
| >#19 [ffff88017a2a9ce0] rt_spin_lock_slowlock at ffffffff815d183c
| >#20 [ffff88017a2a9da0] lock_timer_base.isra.35 at ffffffff81061cbf
| >#21 [ffff88017a2a9dd0] schedule_timeout at ffffffff815cf1ce
| >#22 [ffff88017a2a9e50] rcu_gp_kthread at ffffffff810f9bbb
| >#23 [ffff88017a2a9ed0] kthread at ffffffff810796d5
| >#24 [ffff88017a2a9f50] ret_from_fork at ffffffff815da04c

lock_timer_base() does a try_lock() which deadlocks on the waiter lock,
not the lock itself.
This patch takes the waiter_lock with trylock so it should work from interrupt
context as well. If the fastpath doesn't work and the waiter_lock itself is
taken, then it seems that the lock itself is taken.
This patch also adds "rt_spin_unlock_after_trylock_in_irq" to keep lockdep
happy. If we managed to take the wait_lock in the first place we should also
be able to take it in the unlock path.
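
A sketch of the shape this takes in the trylock slow path (simplified, with
names assumed from kernel/locking/rtmutex.c; not the literal diff):

static inline int rt_mutex_slowtrylock(struct rt_mutex *lock)
{
	int ret = 0;

	/*
	 * Do not spin on ->wait_lock: the interrupted context on this
	 * CPU may already hold it, so just report "not acquired".
	 */
	if (!raw_spin_trylock(&lock->wait_lock))
		return ret;

	if (likely(rt_mutex_owner(lock) != current))
		ret = try_to_take_rt_mutex(lock, current, NULL);

	raw_spin_unlock(&lock->wait_lock);

	return ret;
}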

Cc: stable-rt@vger.kernel.org
Reported-by: Mike Galbraith <bitbucket@online.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agotimer/rt: Always raise the softirq if there's irq_work to be done
Steven Rostedt [Fri, 31 Jan 2014 17:07:57 +0000 (12:07 -0500)]
timer/rt: Always raise the softirq if there's irq_work to be done

It was previously discovered that some systems would hang on boot up
with a previous version of 3.12-rt. This was due to RCU using irq_work,
and RT defers the irq_work to a softirq. But if there are no active
timers, the softirq will not be raised, and the RCU work will not get
done, causing the system to hang.  The fix was to check that if there
were no active timers but irq_work to be done, then we should raise the softirq.

But this fix was not 100% correct. It left out the case where there are
active timers that have not expired yet. That would leave the softirq
unraised even if there was irq work to be done.

If there is irq_work to be done, then we must raise the timer softirq
regardless of whether there are active timers or whether they have
expired. The softirq can handle those cases. But we can never ignore
irq_work.

As it is only PREEMPT_RT_FULL that requires irq_work to be done in the
softirq, we can pull the check out of the active_timers condition and
make the code a bit cleaner by keeping the irq_work check separate,
placed with the other #ifdef PREEMPT_RT code. If there is irq_work to
be done, there's no need to check the active timers or whether they
have expired. Just raise the timer softirq and be done with it.
Otherwise, we can do the timer checks just like we do on non-rt.
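
A simplified sketch of the resulting shape at the top of run_local_timers()
(the surrounding timer checks are elided; details assumed, not the literal
patch):

	hrtimer_run_queues();

#ifdef CONFIG_PREEMPT_RT_FULL
	/*
	 * On -rt, irq_work is processed from the timer softirq, so any
	 * pending irq_work must raise the softirq regardless of whether
	 * timers are queued or expired.
	 */
	if (irq_work_needs_cpu()) {
		raise_softirq(TIMER_SOFTIRQ);
		return;
	}
#endif
	/* ... the usual active_timers / expiry checks continue here ... */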

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agotimer: Raise softirq if there's irq_work
Steven Rostedt [Fri, 24 Jan 2014 20:09:33 +0000 (15:09 -0500)]
timer: Raise softirq if there's irq_work

[ Talking with Sebastian on IRC, it seems that doing the irq_work_run()
  from the interrupt in -rt is a bad thing. Here we simply raise the
  softirq if there's irq work to do. This too boots on my i7 ]

After trying hard to figure out why my i7 box was locking up with the
new active_timers code, that does not run the timer softirq if there
are no active timers, I took an extra look at the softirq handler and
noticed that it doesn't just run timer softirqs, it also runs irq work.

This was the bug that was locking up the system. It wasn't missing a
timer, it was missing irq work. By always doing the irq work callbacks,
the system boots fine. The missing irq work callback was the RCU's
sp_wakeup() function.

No need to check for defined(CONFIG_IRQ_WORK). When that's not set,
"irq_work_needs_cpu()" is a static inline that returns false.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agotimers: do not raise softirq unconditionally
Thomas Gleixner [Thu, 7 Nov 2013 11:21:11 +0000 (12:21 +0100)]
timers: do not raise softirq unconditionally

Mike,

On Thu, 7 Nov 2013, Mike Galbraith wrote:

> On Thu, 2013-11-07 at 04:26 +0100, Mike Galbraith wrote:
> > On Wed, 2013-11-06 at 18:49 +0100, Thomas Gleixner wrote:
>
> > > I bet you are trying to work around some of the side effects of the
> > > occasional tick which is still necessary despite of full nohz, right?
> >
> > Nope, I wanted to check out cost of nohz_full for rt, and found that it
> > doesn't work at all instead, looked, and found that the sole running
> > task has just awakened ksoftirqd when it wants to shut the tick down, so
> > that shutdown never happens.
>
> Like so in virgin 3.10-rt.  Box is x3550 M3 booted nowatchdog
> rcu_nocbs=1-3 nohz_full=1-3, and CPUs1-3 are completely isolated via
> cpusets as well.

well, that very same problem is in mainline if you add "threadirqs" to
the command line. But we can be smart about this. The untested patch
below should address that issue. If that works on mainline we can
adapt it for RT (needs a trylock(&base->lock) there).

Though it's not a full solution. It needs some thought versus the
softirq code of timers. Assume we have only one timer queued 1000
ticks into the future. So this change will cause the timer softirq not
to be called until that timer expires and then the timer softirq is
going to do 1000 loops until it catches up with jiffies. That's
anything but pretty ...

What worries me more is this one:

  pert-5229  [003] d..h1..   684.482618: softirq_raise: vec=9 [action=RCU]

The CPU has no callbacks as you shoved them over to cpu 0, so why is
the RCU softirq raised?

Thanks,

tglx
------------------
Message-id: <alpine.DEB.2.02.1311071158350.23353@ionos.tec.linutronix.de>
|CONFIG_NO_HZ_FULL + CONFIG_PREEMPT_RT_FULL = nogo
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agotimer-handle-idle-trylock-in-get-next-timer-irq.patch
Thomas Gleixner [Sun, 17 Jul 2011 20:08:38 +0000 (22:08 +0200)]
timer-handle-idle-trylock-in-get-next-timer-irq.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agorwlocks: Fix section mismatch
John Kacur [Mon, 19 Sep 2011 09:09:27 +0000 (11:09 +0200)]
rwlocks: Fix section mismatch

This fixes the following build error for the preempt-rt kernel.

make kernel/fork.o
  CC      kernel/fork.o
kernel/fork.c:90: error: section of tasklist_lock conflicts with previous declaration
make[2]: *** [kernel/fork.o] Error 1
make[1]: *** [kernel/fork.o] Error 2

The rt kernel cache-aligns the RWLOCK in DEFINE_RWLOCK by default.
The non-rt kernels explicitly cache-align only the tasklist_lock in
kernel/fork.c. That can create a build conflict. This fixes the build
problem by making the non-rt kernels cache-align RWLOCKs by default.
The side effect is that the other RWLOCKs are also cache-aligned for
non-rt.
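
A sketch of the resulting macro (the exact definition in the patch may
differ):

/*
 * Cache-align statically defined rwlocks unconditionally, so the rt and
 * non-rt declarations of tasklist_lock end up with the same section
 * attributes.
 */
#define DEFINE_RWLOCK(name) \
	rwlock_t name __cacheline_aligned_in_smp = __RW_LOCK_UNLOCKED(name)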

This is a short term solution for rt only.
The longer term solution would be to push the cache aligned DEFINE_RWLOCK
to mainline. If there are objections, then we could create a
DEFINE_RWLOCK_CACHE_ALIGNED or something of that nature.

Comments? Objections?

Signed-off-by: John Kacur <jkacur@redhat.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/alpine.LFD.2.00.1109191104010.23118@localhost6.localdomain6
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agolocking: ww_mutex: fix ww_mutex vs self-deadlock
Mike Galbraith [Thu, 26 Feb 2015 08:02:05 +0000 (09:02 +0100)]
locking: ww_mutex: fix ww_mutex vs self-deadlock

If the caller already holds the mutex, task_blocks_on_rt_mutex()
returns -EDEADLK and we proceed directly to rt_mutex_handle_deadlock(),
where it's instant game over.

Let ww_mutexes return EDEADLK/EALREADY as they want to instead.
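
A sketch of the intent in the rt_mutex slow path (variable names assumed
from context; not the literal diff):

	if (unlikely(ret)) {
		remove_waiter(lock, &waiter);
		/* ww_mutex callers handle -EDEADLK/-EALREADY themselves. */
		if (!ww_ctx)
			rt_mutex_handle_deadlock(ret, chwalk, &waiter);
	}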

Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agortmutex: enable deadlock detection in ww_mutex_lock functions
Gustavo Bittencourt [Tue, 20 Jan 2015 20:02:29 +0000 (18:02 -0200)]
rtmutex: enable deadlock detection in ww_mutex_lock functions

The functions ww_mutex_lock_interruptible and ww_mutex_lock should return -EDEADLK when faced with
a deadlock. To do so, the parameter detect_deadlock in rt_mutex_slowlock must be TRUE.
This patch corrects potential deadlocks when running PREEMPT_RT with the nouveau driver.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Gustavo Bittencourt <gbitten@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agort,locking: fix __ww_mutex_lock_interruptible() lockdep annotation
Mike Galbraith [Mon, 2 Jun 2014 13:12:44 +0000 (15:12 +0200)]
rt,locking: fix __ww_mutex_lock_interruptible() lockdep annotation

Using mutex_acquire_nest() as used in __ww_mutex_lock() fixes the
splat below.  Remove superfluous line break in __ww_mutex_lock()
as well.

|=============================================
|[ INFO: possible recursive locking detected ]
|3.14.4-rt5 #26 Not tainted
|---------------------------------------------
|Xorg/4298 is trying to acquire lock:
| (reservation_ww_class_mutex){+.+.+.}, at: [<ffffffffa02b4270>] nouveau_gem_ioctl_pushbuf+0x870/0x19f0 [nouveau]
|but task is already holding lock:
| (reservation_ww_class_mutex){+.+.+.}, at: [<ffffffffa02b4270>] nouveau_gem_ioctl_pushbuf+0x870/0x19f0 [nouveau]
|other info that might help us debug this:
| Possible unsafe locking scenario:
|       CPU0
|       ----
|  lock(reservation_ww_class_mutex);
|  lock(reservation_ww_class_mutex);
|
| *** DEADLOCK ***
|
| May be due to missing lock nesting notation
|
|3 locks held by Xorg/4298:
| #0:  (&cli->mutex){+.+.+.}, at: [<ffffffffa02b597b>] nouveau_abi16_get+0x2b/0x100 [nouveau]
| #1:  (reservation_ww_class_acquire){+.+...}, at: [<ffffffffa0160cd2>] drm_ioctl+0x4d2/0x610 [drm]
| #2:  (reservation_ww_class_mutex){+.+.+.}, at: [<ffffffffa02b4270>] nouveau_gem_ioctl_pushbuf+0x870/0x19f0 [nouveau]

Cc: stable-rt@vger.kernel.org
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
8 years agortmutex: add a first shot of ww_mutex
Sebastian Andrzej Siewior [Mon, 28 Oct 2013 08:36:37 +0000 (09:36 +0100)]
rtmutex: add a first shot of ww_mutex

lockdep says:
| --------------------------------------------------------------------------
| | Wound/wait tests |
| ---------------------
|                 ww api failures:  ok  |  ok  |  ok  |
|              ww contexts mixing:  ok  |  ok  |
|            finishing ww context:  ok  |  ok  |  ok  |  ok  |
|              locking mismatches:  ok  |  ok  |  ok  |
|                EDEADLK handling:  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |  ok  |
|          spinlock nest unlocked:  ok  |
| -----------------------------------------------------
|                                |block | try  |context|
| -----------------------------------------------------
|                         context:  ok  |  ok  |  ok  |
|                             try:  ok  |  ok  |  ok  |
|                           block:  ok  |  ok  |  ok  |
|                        spinlock:  ok  |  ok  |  ok  |

Signed-off-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
8 years agortmutex.c: Fix incorrect waiter check
Brad Mouring [Wed, 14 Jan 2015 21:11:38 +0000 (15:11 -0600)]
rtmutex.c: Fix incorrect waiter check

In task_blocks_on_lock, there's a NULL check on pi_blocked_on
of the task_struct. This pointer can encode the fact that the
task that contains the pointer is waking (which prevents requeuing)
and is therefore non-NULL. Use the inline function to avoid
dereferencing such an invalid "pointer".
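
A sketch of such a helper (helper and marker names assumed from this patch
series):

static inline int rt_mutex_real_waiter(struct rt_mutex_waiter *waiter)
{
	/*
	 * The special marker values are not real waiters and must never
	 * be dereferenced.
	 */
	return waiter && waiter != PI_WAKE_INPROGRESS &&
		waiter != PI_REQUEUE_INPROGRESS;
}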

Signed-off-by: Brad Mouring <brad.mouring@ni.com>
Reported-by: Ben Shelton <ben.shelton@ni.com>
Reviewed-by: T Makphaibulchoke <tmac@hp.com>
Tested-by: T Makphaibulchoke <tmac@hp.com>
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agort: Cleanup of unnecessary do while 0 in read/write _lock()
Nicholas Mc Guire [Sat, 8 Feb 2014 11:39:20 +0000 (12:39 +0100)]
rt: Cleanup of unnecessary do while 0 in read/write _lock()

With the migration pushdown, a few of the do { } while (0)
loops became obsolete but got left over - this patch
only removes this fallout.

Patch applies on top of 3.12.9-rt13

Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agorwlock: disable migration before taking a lock
Steven Rostedt [Wed, 30 Apr 2014 00:13:08 +0000 (20:13 -0400)]
rwlock: disable migration before taking a lock

If there are no complaints about it, I'm going to add this to the 3.12-rt
stable tree, as without it the cpu hotplug stress test fails horribly,
and I won't release a stable kernel that does that.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
8 years agoread_lock migrate_disable pushdown to rt_read_lock
Nicholas Mc Guire [Thu, 2 Jan 2014 09:19:15 +0000 (10:19 +0100)]
read_lock migrate_disable pushdown to rt_read_lock

pushdown of migrate_disable/enable from read_*lock* to the rt_read_*lock*
api level

general mapping to mutexes:

read_*lock*
  `-> rt_read_*lock*
          `-> __spin_lock (the sleeping spin locks)
                 `-> rt_mutex

The real read_lock* mapping:

          read_lock_irqsave -.
read_lock_irq                `-> rt_read_lock_irqsave()
       `->read_lock ---------.       \
          read_lock_bh ------+        \
                             `--> rt_read_lock()
                                   if (rt_mutex_owner(lock) != current){
                                           `-> __rt_spin_lock()
                                                rt_spin_lock_fastlock()
                                                       `->rt_mutex_cmpxchg()
                                    migrate_disable()
                                   }
                                   rwlock->read_depth++;
read_trylock mapping:

read_trylock
          `-> rt_read_trylock
               if (rt_mutex_owner(lock) != current){
                                              `-> rt_mutex_trylock()
                                                   rt_mutex_fasttrylock()
                                                    rt_mutex_cmpxchg()
                migrate_disable()
               }
               rwlock->read_depth++;

read_unlock* mapping:

read_unlock_bh --------+
read_unlock_irq -------+
read_unlock_irqrestore +
read_unlock -----------+
                       `-> rt_read_unlock()
                            if(--rwlock->read_depth==0){
                                      `-> __rt_spin_unlock()
                                           rt_spin_lock_fastunlock()
                                                        `-> rt_mutex_cmpxchg()
                             migrate_enable()
                            }

So calls to migrate_disable/enable() are better placed at the rt_read_*
level of lock/trylock/unlock as all of the read_*lock* API has this as a
common path. In the rt_read* API of lock/trylock/unlock the nesting level
is already being recorded in rwlock->read_depth, so we can push down the
migrate disable/enable to that level and condition it on the read_depth
going from 0 to 1 -> migrate_disable and 1 to 0 -> migrate_enable. This
eliminates the recursive calls that were needed when migrate_disable/enable
was done at the read_*lock* level.
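
In condensed C form, the pushed-down pattern looks roughly like this
(lockdep annotations omitted; a sketch, not the final code):

void rt_read_lock(rwlock_t *rwlock)
{
	if (rt_mutex_owner(&rwlock->lock) != current) {
		__rt_spin_lock(&rwlock->lock);
		migrate_disable();	/* read_depth: 0 -> 1 */
	}
	rwlock->read_depth++;
}

void rt_read_unlock(rwlock_t *rwlock)
{
	if (--rwlock->read_depth == 0) {
		__rt_spin_unlock(&rwlock->lock);
		migrate_enable();	/* read_depth: 1 -> 0 */
	}
}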

The approach to read_*_bh also eliminates the concerns raised with
regard to API imbalances (read_lock_bh -> read_unlock+local_bh_enable).

Tested-by: Carsten Emde <C.Emde@osadl.org>
Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agowrite_lock migrate_disable pushdown to rt_write_lock
Nicholas Mc Guire [Thu, 2 Jan 2014 09:18:42 +0000 (10:18 +0100)]
write_lock migrate_disable pushdown to rt_write_lock

pushdown of migrate_disable/enable from write_*lock* to the rt_write_*lock*
api level

general mapping of write_*lock* to mutexes:

write_*lock*
  `-> rt_write_*lock*
          `-> __spin_lock (the sleeping __spin_lock)
                 `-> rt_mutex

write_*lock*s are non-recursive so we have two lock chains to consider
 - write_trylock*/write_unlock
 - write_lock*/write_unlock
For both paths the migrate_disable/enable must be balanced.

write_trylock* mapping:

write_trylock_irqsave
                `-> rt_write_trylock_irqsave
write_trylock             \
          `-------->  rt_write_trylock
                       ret = rt_mutex_trylock
                              rt_mutex_fasttrylock
                               rt_mutex_cmpxchg
                       if (ret)
                            migrate_disable

write_lock* mapping:

                  write_lock_irqsave
                                `-> rt_write_lock_irqsave
write_lock_irq -> write_lock ----.     \
                  write_lock_bh -+      \
                                 `-> rt_write_lock
                                      __rt_spin_lock()
                                       rt_spin_lock_fastlock()
                                        rt_mutex_cmpxchg()
                                     migrate_disable()

write_unlock* mapping:

                    write_unlock_irqrestore.
                    write_unlock_bh -------+
write_unlock_irq -> write_unlock ----------+
                                           `-> rt_write_unlock()
                                                __rt_spin_unlock()
                                                 rt_spin_lock_fastunlock()
                                                  rt_mutex_cmpxchg()
                                               migrate_enable()

So calls to migrate_disable/enable() are better placed at the rt_write_*
level of lock/trylock/unlock as all of the write_*lock* API has this as a
common path.
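
The write side, in the same condensed form (again a sketch, lockdep
annotations omitted):

void rt_write_lock(rwlock_t *rwlock)
{
	__rt_spin_lock(&rwlock->lock);
	migrate_disable();
}

void rt_write_unlock(rwlock_t *rwlock)
{
	__rt_spin_unlock(&rwlock->lock);
	migrate_enable();
}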

This approach to write_*_bh also eliminates the concerns raised with
regard to API imbalances (write_lock_bh -> write_unlock+local_bh_enable).

Tested-by: Carsten Emde <C.Emde@osadl.org>
Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agomigrate_disable pushd down in rt_write_trylock_irqsave
Nicholas Mc Guire [Fri, 29 Nov 2013 05:21:59 +0000 (00:21 -0500)]
migrate_disable pushd down in rt_write_trylock_irqsave

Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agomigrate_disable pushd down in rt_spin_trylock_irqsave
Nicholas Mc Guire [Fri, 29 Nov 2013 05:17:27 +0000 (00:17 -0500)]
migrate_disable pushd down in rt_spin_trylock_irqsave

Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agoRevert "migrate_disable pushd down in atomic_dec_and_spin_lock"
Sebastian Andrzej Siewior [Fri, 2 May 2014 15:32:30 +0000 (17:32 +0200)]
Revert "migrate_disable pushd down in atomic_dec_and_spin_lock"

This reverts commit ff9c870c3e27d58c9512fad122e91436681fee5a.
Cc: stable-rt@vger.kernel.org
8 years agomigrate_disable pushd down in atomic_dec_and_spin_lock
Nicholas Mc Guire [Fri, 29 Nov 2013 05:19:41 +0000 (00:19 -0500)]
migrate_disable pushd down in atomic_dec_and_spin_lock

Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agocondition migration_disable on lock acquisition
Nicholas Mc Guire [Fri, 22 Nov 2013 03:52:30 +0000 (22:52 -0500)]
condition migration_disable on lock acquisition

No need to unconditionally migrate_disable (what is it protecting?) and
re-enable on failure to acquire the lock.
This patch moves the migrate_disable to be conditioned on successful lock
acquisition only.
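
For the write trylock path shown earlier in this series, the resulting
pattern is roughly (a sketch, not the literal diff):

int rt_write_trylock(rwlock_t *rwlock)
{
	int ret = rt_mutex_trylock(&rwlock->lock);

	/* Only pay for migrate_disable() once the lock is actually ours. */
	if (ret)
		migrate_disable();

	return ret;
}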

Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agoRevert "rwsem-rt: Do not allow readers to nest"
Sebastian Andrzej Siewior [Wed, 25 Feb 2015 11:16:43 +0000 (12:16 +0100)]
Revert "rwsem-rt: Do not allow readers to nest"

This behaviour is required by cpufreq and its logic is "okay": it does a
read_lock followed by a try_read_lock.
Lockdep warns if one tries a read_lock twice, in -RT and vanilla alike, so it
should be good. We still only allow multiple readers as long as they are in
the same process.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agorwsem-rt: Do not allow readers to nest
Steven Rostedt (Red Hat) [Fri, 2 May 2014 08:53:30 +0000 (10:53 +0200)]
rwsem-rt: Do not allow readers to nest

The readers of mainline rwsems are not allowed to nest; the rwsems in the
PREEMPT_RT kernel should not nest either.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agort: Add the preempt-rt lock replacement APIs
Thomas Gleixner [Sun, 26 Jul 2009 17:39:56 +0000 (19:39 +0200)]
rt: Add the preempt-rt lock replacement APIs

Map spinlocks, rwlocks, rw_semaphores and semaphores to the rt_mutex
based locking functions for preempt-rt.
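
As a rough illustration of the mapping, the rt spinlock type wraps an
rt_mutex (layout assumed, shown for orientation only):

typedef struct spinlock {
	struct rt_mutex		lock;		/* sleeping lock on -rt */
	unsigned int		break_lock;
#ifdef CONFIG_DEBUG_LOCK_ALLOC
	struct lockdep_map	dep_map;
#endif
} spinlock_t;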

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agorwsem-add-rt-variant.patch
Thomas Gleixner [Wed, 29 Jun 2011 19:02:53 +0000 (21:02 +0200)]
rwsem-add-rt-variant.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agort-add-rt-to-mutex-headers.patch
Thomas Gleixner [Wed, 29 Jun 2011 18:56:22 +0000 (20:56 +0200)]
rt-add-rt-to-mutex-headers.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agort-add-rt-spinlocks.patch
Thomas Gleixner [Wed, 29 Jun 2011 17:43:35 +0000 (19:43 +0200)]
rt-add-rt-spinlocks.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agortmutex-avoid-include-hell.patch
Thomas Gleixner [Wed, 29 Jun 2011 18:06:39 +0000 (20:06 +0200)]
rtmutex-avoid-include-hell.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agospinlock-types-separate-raw.patch
Thomas Gleixner [Wed, 29 Jun 2011 17:34:01 +0000 (19:34 +0200)]
spinlock-types-separate-raw.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agort-mutex-add-sleeping-spinlocks-support.patch
Thomas Gleixner [Fri, 10 Jun 2011 09:21:25 +0000 (11:21 +0200)]
rt-mutex-add-sleeping-spinlocks-support.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agortmutex-lock-killable.patch
Thomas Gleixner [Thu, 9 Jun 2011 09:43:52 +0000 (11:43 +0200)]
rtmutex-lock-killable.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agofutex: Ensure lock/unlock symetry versus pi_lock and hash bucket lock
Thomas Gleixner [Fri, 1 Mar 2013 10:17:42 +0000 (11:17 +0100)]
futex: Ensure lock/unlock symetry versus pi_lock and hash bucket lock

In exit_pi_state_list() we have the following locking construct:

   spin_lock(&hb->lock);
   raw_spin_lock_irq(&curr->pi_lock);

   ...
   spin_unlock(&hb->lock);

In !RT this works, but on RT the migrate_enable() function which is
called from spin_unlock() sees atomic context due to the held pi_lock
and just decrements the migrate_disable_atomic counter of the
task. Now the next call to migrate_disable() sees the counter being
negative and issues a warning. That check should be in
migrate_enable() already.

Fix this by dropping pi_lock before unlocking hb->lock and reacquiring
pi_lock after that. This is safe as the loop code re-evaluates head
again under the pi_lock.
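
In code, the reordered sequence described above is roughly (a sketch
mirroring the changelog, not the literal diff):

	raw_spin_unlock_irq(&curr->pi_lock);
	spin_unlock(&hb->lock);		/* may sleep on RT; pi_lock not held */
	raw_spin_lock_irq(&curr->pi_lock);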

Reported-by: Yong Zhang <yong.zhang@windriver.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
8 years agofutex: Fix bug on when a requeued RT task times out
Steven Rostedt [Sun, 13 Sep 2015 07:47:20 +0000 (09:47 +0200)]
futex: Fix bug on when a requeued RT task times out

Requeue with timeout causes a bug with PREEMPT_RT_FULL.

The bug comes from a timed out condition.

TASK 1                          TASK 2
------                          ------
    futex_wait_requeue_pi()
        futex_wait_queue_me()
        <timed out>

                                double_lock_hb();

    raw_spin_lock(pi_lock);
    if (current->pi_blocked_on) {
    } else {
        current->pi_blocked_on = PI_WAKE_INPROGRESS;
        raw_spin_unlock(pi_lock);
        spin_lock(hb->lock); <-- blocked!

                                plist_for_each_entry_safe(this) {
                                    rt_mutex_start_proxy_lock();
                                        task_blocks_on_rt_mutex();
                                        BUG_ON(task->pi_blocked_on)!!!!

The BUG_ON() actually has a check for PI_WAKE_INPROGRESS, but the
problem is that, after TASK 1 sets PI_WAKE_INPROGRESS, it then tries to
grab the hb->lock, which it fails to do. As the hb->lock is a mutex,
it will block and set its "pi_blocked_on" to the hb->lock.

When TASK 2 goes to requeue it, the check for PI_WAKE_INPROGRESS fails
because TASK 1's pi_blocked_on is no longer set to that, but instead
is set to the hb->lock.

The fix:

When calling rt_mutex_start_proxy_lock(), a check is made to see
if the proxy task's pi_blocked_on is set. If so, exit out early.
Otherwise set it to a new flag, PI_REQUEUE_INPROGRESS, which notifies
the proxy task that it is being requeued, and it will handle things
appropriately.
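
A sketch of the described check at the start of rt_mutex_start_proxy_lock()
(shape assumed; the wait_lock handling is shown only for completeness):

	raw_spin_lock(&task->pi_lock);
	if (task->pi_blocked_on) {
		/* Task is already blocked (or being woken): bail out. */
		raw_spin_unlock(&task->pi_lock);
		raw_spin_unlock(&lock->wait_lock);
		return -EAGAIN;
	}
	task->pi_blocked_on = PI_REQUEUE_INPROGRESS;
	raw_spin_unlock(&task->pi_lock);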

Cc: stable-rt@vger.kernel.org
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agortmutex-futex-prepare-rt.patch
Thomas Gleixner [Fri, 10 Jun 2011 09:04:15 +0000 (11:04 +0200)]
rtmutex-futex-prepare-rt.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agomd: raid5: Make raid5_percpu handling RT aware
Thomas Gleixner [Tue, 6 Apr 2010 14:51:31 +0000 (16:51 +0200)]
md: raid5: Make raid5_percpu handling RT aware

__raid_run_ops() disables preemption with get_cpu() around the access
to the raid5_percpu variables. That causes scheduling while atomic
spews on RT.

Serialize the access to the percpu data with a lock and keep the code
preemptible.
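
A sketch of the resulting pattern in __raid_run_ops() (the ->lock member
and the *_light helpers are taken from the -rt tree as I understand it;
treat them as assumptions):

	struct raid5_percpu *percpu;
	int cpu;

	cpu = get_cpu_light();
	percpu = per_cpu_ptr(conf->percpu, cpu);
	spin_lock(&percpu->lock);
	/* ... use percpu->spare_page / percpu->scribble, preemptible ... */
	spin_unlock(&percpu->lock);
	put_cpu_light();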

Reported-by: Udo van den Heuvel <udovdh@xs4all.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Udo van den Heuvel <udovdh@xs4all.nl>
8 years agolocal-vars-migrate-disable.patch
Thomas Gleixner [Tue, 28 Jun 2011 18:42:16 +0000 (20:42 +0200)]
local-vars-migrate-disable.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agogenirq: Allow disabling of softirq processing in irq thread context
Thomas Gleixner [Tue, 31 Jan 2012 12:01:27 +0000 (13:01 +0100)]
genirq: Allow disabling of softirq processing in irq thread context

The processing of softirqs in irq thread context is a performance gain
for the non-rt workloads of a system, but it's counterproductive for
interrupts which are explicitly related to the realtime
workload. Allow such interrupts to prevent softirq processing in their
thread context.
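
Hypothetical usage from a driver's point of view (the flag name is my
recollection of the -rt patch queue and should be treated as an
assumption):

	ret = request_threaded_irq(irq, NULL, my_rt_thread_fn,
				   IRQF_ONESHOT | IRQF_NO_SOFTIRQ_CALL,
				   "my-rt-device", dev);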

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
8 years agotasklet: Prevent tasklets from going into infinite spin in RT
Ingo Molnar [Wed, 30 Nov 2011 01:18:22 +0000 (20:18 -0500)]
tasklet: Prevent tasklets from going into infinite spin in RT

When CONFIG_PREEMPT_RT_FULL is enabled, tasklets run as threads,
and spinlocks turn into mutexes. But this can cause issues with
tasks disabling tasklets. A tasklet runs under ksoftirqd, and
if a tasklet is disabled with tasklet_disable(), the tasklet
count is increased. When a tasklet runs, it checks this counter
and if it is set, it adds itself back on the softirq queue and
returns.

The problem arises in RT because ksoftirqd will see that a softirq
is ready to run (the tasklet softirq just re-armed itself), and will
not sleep, but instead run the softirqs again. The tasklet softirq
will still see that the count is non-zero and will not execute
the tasklet but requeue itself on the softirq again, which will
cause ksoftirqd to run it again and again and again.

It gets worse because ksoftirqd runs as a real-time thread.
If it preempted the task that disabled tasklets, and that task
has migration disabled, or can't run for other reasons, the tasklet
softirq will never run because the count will never be zero, and
ksoftirqd will go into an infinite loop. As an RT task, this
becomes a big problem.

This is a hack solution: have tasklet_disable() stop tasklets, and
when a tasklet runs, instead of requeueing the tasklet on the softirq,
delay it. When tasklet_enable() is called and tasklets are
waiting, tasklet_enable() will kick the tasklets to continue.
This prevents the lockup caused by ksoftirqd going into an infinite loop.

[ rostedt@goodmis.org: ported to 3.0-rt ]

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agosoftirq-make-fifo.patch
Thomas Gleixner [Thu, 21 Jul 2011 19:06:43 +0000 (21:06 +0200)]
softirq-make-fifo.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agosoftirq-disable-softirq-stacks-for-rt.patch
Thomas Gleixner [Mon, 18 Jul 2011 11:59:17 +0000 (13:59 +0200)]
softirq-disable-softirq-stacks-for-rt.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agosoftirq-local-lock.patch
Thomas Gleixner [Tue, 28 Jun 2011 13:57:18 +0000 (15:57 +0200)]
softirq-local-lock.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agomutex-no-spin-on-rt.patch
Thomas Gleixner [Sun, 17 Jul 2011 19:51:45 +0000 (21:51 +0200)]
mutex-no-spin-on-rt.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
8 years agolockdep-rt.patch
Thomas Gleixner [Sun, 17 Jul 2011 16:51:23 +0000 (18:51 +0200)]
lockdep-rt.patch

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>