
Commit c77c0a8

Waiman-Long authored and torvalds committed
mm/hugetlb: defer freeing of huge pages if in non-task context
The following lockdep splat was observed when a certain hugetlbfs test
was run:

  ================================
  WARNING: inconsistent lock state
  4.18.0-159.el8.x86_64+debug #1 Tainted: G W --------- - -
  --------------------------------
  inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
  swapper/30/0 [HC0[0]:SC1[1]:HE1:SE0] takes:
  ffffffff9acdc038 (hugetlb_lock){+.?.}, at: free_huge_page+0x36f/0xaa0
  {SOFTIRQ-ON-W} state was registered at:
    lock_acquire+0x14f/0x3b0
    _raw_spin_lock+0x30/0x70
    __nr_hugepages_store_common+0x11b/0xb30
    hugetlb_sysctl_handler_common+0x209/0x2d0
    proc_sys_call_handler+0x37f/0x450
    vfs_write+0x157/0x460
    ksys_write+0xb8/0x170
    do_syscall_64+0xa5/0x4d0
    entry_SYSCALL_64_after_hwframe+0x6a/0xdf
  irq event stamp: 691296
  hardirqs last enabled at (691296): [<ffffffff99bb034b>] _raw_spin_unlock_irqrestore+0x4b/0x60
  hardirqs last disabled at (691295): [<ffffffff99bb0ad2>] _raw_spin_lock_irqsave+0x22/0x81
  softirqs last enabled at (691284): [<ffffffff97ff0c63>] irq_enter+0xc3/0xe0
  softirqs last disabled at (691285): [<ffffffff97ff0ebe>] irq_exit+0x23e/0x2b0

  other info that might help us debug this:
   Possible unsafe locking scenario:

         CPU0
         ----
    lock(hugetlb_lock);
    <Interrupt>
      lock(hugetlb_lock);

   *** DEADLOCK ***
      :
  Call Trace:
   <IRQ>
   __lock_acquire+0x146b/0x48c0
   lock_acquire+0x14f/0x3b0
   _raw_spin_lock+0x30/0x70
   free_huge_page+0x36f/0xaa0
   bio_check_pages_dirty+0x2fc/0x5c0
   clone_endio+0x17f/0x670 [dm_mod]
   blk_update_request+0x276/0xe50
   scsi_end_request+0x7b/0x6a0
   scsi_io_completion+0x1c6/0x1570
   blk_done_softirq+0x22e/0x350
   __do_softirq+0x23d/0xad8
   irq_exit+0x23e/0x2b0
   do_IRQ+0x11a/0x200
   common_interrupt+0xf/0xf
   </IRQ>

Both the hugetlb_lock and the subpool lock can be acquired in
free_huge_page().  One way to solve the problem is to make both locks
irq-safe.  However, Mike Kravetz had learned that the hugetlb_lock is
held for a linear scan of ALL hugetlb pages during a cgroup reparenting
operation.  So it is just too long to have irq disabled unless we can
break hugetlb_lock down into finer-grained locks with shorter lock hold
times.

Another alternative is to defer the freeing to a workqueue job.  This
patch implements the deferred freeing by adding a free_hpage_workfn()
work function to do the actual freeing.  The free_huge_page() call in a
non-task context saves the page to be freed in the hpage_freelist linked
list in a lockless manner using the llist APIs.  The generic workqueue
is used to process the work, but a dedicated workqueue can be used
instead if it is desirable to have the huge page freed ASAP.

Thanks to Kirill Tkhai <[email protected]> for suggesting the use of the
llist APIs, which simplify the code.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Waiman Long <[email protected]>
Reviewed-by: Mike Kravetz <[email protected]>
Acked-by: Davidlohr Bueso <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Reviewed-by: Kirill Tkhai <[email protected]>
Cc: Aneesh Kumar K.V <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Andi Kleen <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
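
A minimal userspace analogue of the handoff described above may make the
scheme easier to follow.  This is a sketch, not the patch itself: C11
atomics stand in for the kernel llist API (freelist_add ~ llist_add,
freelist_del_all ~ llist_del_all), a counter stands in for
schedule_work(), and every name below (freelist_add, freelist_del_all,
producer, wakeups) is invented for illustration.

/* Build with: cc -std=c11 -pthread deferred_free_demo.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

struct node {
        struct node *next;
        int id;                 /* stands in for the huge page to free */
};

static _Atomic(struct node *) freelist;
static atomic_int wakeups;      /* how often a worker would be scheduled */

/* Like llist_add(): push a node, report whether the list was empty before. */
static bool freelist_add(struct node *n)
{
        struct node *first = atomic_load(&freelist);

        do {
                n->next = first;
        } while (!atomic_compare_exchange_weak(&freelist, &first, n));

        return first == NULL;
}

/* Like llist_del_all(): detach the entire list in one atomic exchange. */
static struct node *freelist_del_all(void)
{
        return atomic_exchange(&freelist, NULL);
}

static void *producer(void *arg)
{
        int base = *(int *)arg;

        for (int i = 0; i < 4; i++) {
                struct node *n = malloc(sizeof(*n));

                n->id = base + i;
                /*
                 * Only the push that turns the list non-empty "schedules"
                 * the worker; later pushes piggyback on that pending wakeup.
                 */
                if (freelist_add(n))
                        atomic_fetch_add(&wakeups, 1);
        }
        return NULL;
}

int main(void)
{
        pthread_t t[2];
        int base[2] = { 0, 100 };

        for (int i = 0; i < 2; i++)
                pthread_create(&t[i], NULL, producer, &base[i]);
        for (int i = 0; i < 2; i++)
                pthread_join(t[i], NULL);

        /* Drain the way free_hpage_workfn() does: detach once, free one-by-one. */
        for (struct node *n = freelist_del_all(); n; ) {
                struct node *next = n->next;

                printf("freeing %d\n", n->id);
                free(n);
                n = next;
        }
        printf("worker would have been woken %d time(s)\n", atomic_load(&wakeups));
        return 0;
}

In the patch itself the push is llist_add((struct llist_node *)&page->mapping,
&hpage_freelist), the wakeup is schedule_work(&free_hpage_work), and the
drain runs in process context, where taking hugetlb_lock without disabling
interrupts is safe.
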
1 parent a7c46c0 commit c77c0a8

File tree

1 file changed: +50 -1 lines changed


mm/hugetlb.c

Lines changed: 50 additions & 1 deletion
@@ -27,6 +27,7 @@
 #include <linux/swapops.h>
 #include <linux/jhash.h>
 #include <linux/numa.h>
+#include <linux/llist.h>
 
 #include <asm/page.h>
 #include <asm/pgtable.h>
@@ -1136,7 +1137,7 @@ static inline void ClearPageHugeTemporary(struct page *page)
         page[2].mapping = NULL;
 }
 
-void free_huge_page(struct page *page)
+static void __free_huge_page(struct page *page)
 {
         /*
          * Can't pass hstate in here because it is called from the
@@ -1199,6 +1200,54 @@ void free_huge_page(struct page *page)
         spin_unlock(&hugetlb_lock);
 }
 
+/*
+ * As free_huge_page() can be called from a non-task context, we have
+ * to defer the actual freeing in a workqueue to prevent potential
+ * hugetlb_lock deadlock.
+ *
+ * free_hpage_workfn() locklessly retrieves the linked list of pages to
+ * be freed and frees them one-by-one. As the page->mapping pointer is
+ * going to be cleared in __free_huge_page() anyway, it is reused as the
+ * llist_node structure of a lockless linked list of huge pages to be freed.
+ */
+static LLIST_HEAD(hpage_freelist);
+
+static void free_hpage_workfn(struct work_struct *work)
+{
+        struct llist_node *node;
+        struct page *page;
+
+        node = llist_del_all(&hpage_freelist);
+
+        while (node) {
+                page = container_of((struct address_space **)node,
+                                     struct page, mapping);
+                node = node->next;
+                __free_huge_page(page);
+        }
+}
+static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
+
+void free_huge_page(struct page *page)
+{
+        /*
+         * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
+         */
+        if (!in_task()) {
+                /*
+                 * Only call schedule_work() if hpage_freelist is previously
+                 * empty. Otherwise, schedule_work() had been called but the
+                 * workfn hasn't retrieved the list yet.
+                 */
+                if (llist_add((struct llist_node *)&page->mapping,
+                              &hpage_freelist))
+                        schedule_work(&free_hpage_work);
+                return;
+        }
+
+        __free_huge_page(page);
+}
+
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
         INIT_LIST_HEAD(&page->lru);
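
The container_of() cast in free_hpage_workfn() works because the push side
stored the llist_node in the storage of page->mapping, which, as the comment
in the diff notes, __free_huge_page() clears anyway.  Below is a small,
self-contained illustration of that pointer-field reuse; it is not kernel
code, struct fake_page and struct list_node are invented here, and the type
punning it performs is only well-defined under -fno-strict-aliasing, which
is how the kernel (and this demo) should be built.

/* Build with: cc -fno-strict-aliasing punning_demo.c */
#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

struct list_node {
        struct list_node *next;
};

struct fake_page {                      /* stand-in for struct page */
        unsigned long flags;
        void *mapping;                  /* borrowed as a list node while queued */
};

int main(void)
{
        struct fake_page pg = { .flags = 0, .mapping = NULL };

        /* Enqueue: treat the mapping field's storage as a list node. */
        struct list_node *node = (struct list_node *)&pg.mapping;
        node->next = NULL;

        /* Dequeue: recover the owning page from the node's address. */
        struct fake_page *owner = container_of((void **)node,
                                                struct fake_page, mapping);

        printf("recovered %p, original %p\n", (void *)owner, (void *)&pg);
        return 0;
}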
