
Commit 1d45126

yangge authored and opsiff committed
mm: compaction: use the proper flag to determine watermarks
mainline inclusion
from mainline-v6.14-rc1
category: feature
CVE: NA

--------------------------------

commit 6268f0a upstream.

There are 4 NUMA nodes on my machine, and each NUMA node has 32GB of memory. I have configured 16GB of CMA memory on each NUMA node, and starting a 32GB virtual machine with device passthrough is extremely slow, taking almost an hour.

Long-term GUP cannot allocate memory from the CMA area, so a maximum of 16GB of non-CMA memory on a NUMA node can be used as virtual machine memory. There is 16GB of free CMA memory on a NUMA node, which is sufficient to pass the order-0 watermark check, causing the __compaction_suitable() function to consistently return true.

For costly allocations, if the __compaction_suitable() function always returns true, it causes the __alloc_pages_slowpath() function to fail to exit at the appropriate point. This prevents timely fallback to allocating memory on other nodes, ultimately resulting in excessively long virtual machine startup times.

Call trace:
__alloc_pages_slowpath
    if (compact_result == COMPACT_SKIPPED ||
        compact_result == COMPACT_DEFERRED)
        goto nopage; // should exit __alloc_pages_slowpath() from here

We could use the real unmovable allocation context to have __zone_watermark_unusable_free() subtract CMA pages, and thus we won't pass the order-0 check anymore once the non-CMA part is exhausted. There is some risk that in some different scenario the compaction could in fact migrate pages from the exhausted non-CMA part of the zone to the CMA part and succeed, and we'll skip it instead. But only __GFP_NORETRY allocations should be affected in the immediate "goto nopage" when compaction is skipped; others will attempt with DEF_COMPACT_PRIORITY anyway and won't fail without trying to compact-migrate the non-CMA pageblocks into CMA pageblocks first, so it should be fine.

After this fix, it only takes a few tens of seconds to start a 32GB virtual machine with device passthrough functionality.

Link: https://lore.kernel.org/lkml/[email protected]/
Link: https://lkml.kernel.org/r/[email protected]

Signed-off-by: yangge <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Reviewed-by: Baolin Wang <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Barry Song <[email protected]>
Cc: David Hildenbrand <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
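To make the watermark arithmetic above concrete, here is a small self-contained C model (illustration only; the helper name watermark_ok and the concrete numbers are invented for this sketch, mirroring the 16GB-of-free-CMA scenario in the message, and this is not the kernel's __zone_watermark_ok() implementation). It shows why the order-0 check keeps passing while free CMA pages are counted, and fails once they are treated as unusable, as __zone_watermark_unusable_free() does for allocations without ALLOC_CMA.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model of the order-0 watermark check (illustration only, not kernel
 * code). Values are in 4KB pages; 16GB of free CMA per node as reported.
 */
static bool watermark_ok(long free_pages, long unusable_free, long watermark)
{
	/* The check passes if usable free memory stays above the watermark. */
	return free_pages - unusable_free > watermark;
}

int main(void)
{
	long free_cma = 16L << 18;	/* 16GB of free CMA, in 4KB pages */
	long free_non_cma = 0;		/* non-CMA part of the zone exhausted */
	long free_pages = free_cma + free_non_cma;
	long watermark = 1024;		/* arbitrary low watermark, in pages */

	/* Movable context (ALLOC_CMA set): CMA counts, the check passes. */
	printf("CMA counted:  %d\n", watermark_ok(free_pages, 0, watermark));

	/*
	 * Unmovable context (no ALLOC_CMA): free CMA pages are unusable, the
	 * check fails, and compaction can be reported as COMPACT_SKIPPED.
	 */
	printf("CMA excluded: %d\n", watermark_ok(free_pages, free_cma, watermark));
	return 0;
}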
1 parent 0143de8 commit 1d45126

File tree

1 file changed: mm/compaction.c (+25 -4 lines changed)


mm/compaction.c

Lines changed: 25 additions & 4 deletions

@@ -2386,7 +2386,8 @@ bool compaction_zonelist_suitable(struct alloc_context *ac, int order,
  */
 static enum compact_result
 compaction_suit_allocation_order(struct zone *zone, unsigned int order,
-				 int highest_zoneidx, unsigned int alloc_flags)
+				 int highest_zoneidx, unsigned int alloc_flags,
+				 bool async)
 {
 	unsigned long watermark;
 
@@ -2395,6 +2396,23 @@ compaction_suit_allocation_order(struct zone *zone, unsigned int order,
 			      alloc_flags))
 		return COMPACT_SUCCESS;
 
+	/*
+	 * For unmovable allocations (without ALLOC_CMA), check if there is enough
+	 * free memory in the non-CMA pageblocks. Otherwise compaction could form
+	 * the high-order page in CMA pageblocks, which would not help the
+	 * allocation to succeed. However, limit the check to costly order async
+	 * compaction (such as opportunistic THP attempts) because there is the
+	 * possibility that compaction would migrate pages from non-CMA to CMA
+	 * pageblock.
+	 */
+	if (order > PAGE_ALLOC_COSTLY_ORDER && async &&
+	    !(alloc_flags & ALLOC_CMA)) {
+		watermark = low_wmark_pages(zone) + compact_gap(order);
+		if (!__zone_watermark_ok(zone, 0, watermark, highest_zoneidx,
+					 0, zone_page_state(zone, NR_FREE_PAGES)))
+			return COMPACT_SKIPPED;
+	}
+
 	if (!compaction_suitable(zone, order, highest_zoneidx))
 		return COMPACT_SKIPPED;
 
@@ -2428,7 +2446,8 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 	if (!is_via_compact_memory(cc->order)) {
 		ret = compaction_suit_allocation_order(cc->zone, cc->order,
 						       cc->highest_zoneidx,
-						       cc->alloc_flags);
+						       cc->alloc_flags,
+						       cc->mode == MIGRATE_ASYNC);
 		if (ret != COMPACT_CONTINUE)
 			return ret;
 	}
@@ -2934,7 +2953,8 @@ static bool kcompactd_node_suitable(pg_data_t *pgdat)
 
 		ret = compaction_suit_allocation_order(zone,
 				pgdat->kcompactd_max_order,
-				highest_zoneidx, ALLOC_WMARK_MIN);
+				highest_zoneidx, ALLOC_WMARK_MIN,
+				false);
 		if (ret == COMPACT_CONTINUE)
 			return true;
 	}
@@ -2975,7 +2995,8 @@ static void kcompactd_do_work(pg_data_t *pgdat)
 			continue;
 
 		ret = compaction_suit_allocation_order(zone,
-				cc.order, zoneid, ALLOC_WMARK_MIN);
+				cc.order, zoneid, ALLOC_WMARK_MIN,
+				false);
 		if (ret != COMPACT_CONTINUE)
 			continue;
 
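For context on where a COMPACT_SKIPPED result takes effect, below is an abridged paraphrase (not the exact mm/page_alloc.c code; surrounding logic is omitted) of the costly __GFP_NORETRY handling in __alloc_pages_slowpath() that the call trace in the commit message points at. Once the check added above makes async compaction report COMPACT_SKIPPED, the allocation fails quickly on the current node, allowing timely fallback to another node.

	/*
	 * Abridged paraphrase of the costly __GFP_NORETRY handling in
	 * __alloc_pages_slowpath(); illustration only, details omitted.
	 */
	if (costly_order && (gfp_mask & __GFP_NORETRY)) {
		/*
		 * Async compaction was skipped (e.g. by the CMA-aware
		 * watermark check above) or deferred: give up on this node
		 * instead of retrying, as the commit message's call trace shows.
		 */
		if (compact_result == COMPACT_SKIPPED ||
		    compact_result == COMPACT_DEFERRED)
			goto nopage;
	}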
