e_by_rq_type`) so the RX queue never comes up. `stmmac_init_rx_buffers()` follows the same pattern in `drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:2051`–`drivers/net/ethernet/stmicro/stmmac/stmmac_main.c:2066`, meaning larger rings or MTU-derived pools currently make the interface unusable.

- The lower cap is safe: when the ptr_ring fills, the existing slow path already frees excess pages (`page_pool_recycle_in_ring()` at `net/core/page_pool.c:746` together with the fallback in `page_pool_put_unrefed_netmem()` at `net/core/page_pool.c:873`), so a smaller cache only increases occasional allocations but does not change correctness. No ABI or driver interfaces are touched, and every driver benefits automatically without per-driver clamps.

- This is a minimal, localized fix that prevents hard user-visible failures (device queues refusing to start) on systems with large RX rings or jumbo MTUs, making it an excellent candidate for stable backports.

 net/core/page_pool.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index e224d2145eed9..1a5edec485f14 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -211,11 +211,7 @@ static int page_pool_init(struct page_pool *pool,
 		return -EINVAL;
 
 	if (pool->p.pool_size)
-		ring_qsize = pool->p.pool_size;
-
-	/* Sanity limit mem that can be pinned down */
-	if (ring_qsize > 32768)
-		return -E2BIG;
+		ring_qsize = min(pool->p.pool_size, 16384);
 
 	/* DMA direction is either DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
 	 * DMA_BIDIRECTIONAL is for allowing page used for DMA sending,
-- 
2.51.0

Subject: [PATCH AUTOSEL 6.17-5.4] page_pool: Clamp pool size to max 16K pages
From: Sasha Levin
To: patches@lists.linux.dev, stable@vger.kernel.org