
Commit b174ae5

PatrisiousHaddad authored and NipaLocal committed
net/mlx5: Improve write-combining test reliability for ARM64 Grace CPUs
Write combining is an optimization feature in CPUs that is frequently used by modern devices to generate 32 or 64 byte TLPs at the PCIe level. These large TLPs allow certain optimizations in the driver-to-HW communication that improve performance. As WC is unpredictable and optional, HW designs all tolerate cases where combining doesn't happen and simply experience a performance degradation.

Unfortunately many virtualization environments, on all architectures, have done things that completely disable WC inside the VM, with no generic way to detect this. For example, WC was fully blocked in ARM64 KVM until commit 8c47ce3 ("KVM: arm64: Set io memory s2 pte as normalnc for vfio pci device"). Trying to use WC when it is known not to work has a measurable performance cost (~5%).

Long ago mlx5 developed a boot time algorithm to test whether WC is available, using unique mlx5 HW features to measure how many large TLPs the device is receiving. The SW generates a large number of combining opportunities, and if any succeed then WC is declared working. In mlx5 the WC optimization feature is never used by the kernel except for this boot time test; WC is only used by userspace in rdma-core.

Sadly, modern ARM CPUs, especially NVIDIA Grace, have a combining implementation that is very unreliable compared to pretty much everything prior. This is being fixed architecturally in new CPUs with a new ST64B instruction, but currently shipping devices suffer this problem. Unreliable means the SW can present thousands of combining opportunities and the HW will not combine for any of them, which creates a performance degradation and, critically, fails the mlx5 boot test.

However, the CPU is very sensitive to the instruction sequence used, with the better options being sufficiently good that the performance loss from the unreliable CPU is not measurable. Broadly there are several options, from worst to best:

1) A C loop doing a u64 memcpy. This was used prior to commit ef30228 ("IB/mlx5: Use __iowrite64_copy() for write combining stores") and failed almost all the time on Grace CPUs.

2) ARM64 assembly with consecutive 8 byte stores. This was implemented as an arch-generic __iowriteXX_copy() family of functions suitable for performance use in drivers for WC. Commit ead7911 ("arm64/io: Provide a WC friendly __iowriteXX_copy()") provided the ARM implementation.

3) ARM64 assembly with consecutive 16 byte stores. This was rejected for kernel use over fears of virtualization failures: common ARM VMMs will crash if STP is used against emulated memory.

4) A single NEON store instruction. Userspace has used this option for a very long time; it performs well.

5) For future silicon, the new ST64B instruction is guaranteed to generate a 64 byte TLP 100% of the time.

The past upgrade from #1 to #2 was thought to be sufficient to solve this problem. However, more testing on more systems shows that #2 is still problematic at a low frequency and the kernel test fails. Thus, make mlx5 use the same instructions as userspace during the boot time WC self test. This way the WC test matches userspace and will properly detect the ability of the HW to support the WC workload that userspace will generate.

While #4 still has imperfect combining performance, it is substantially better than #2, and does actually give a performance win to applications. Self-test failures with #2 happen on roughly 3 out of 10 boots on some systems; #4 has never seen a boot failure.

There is no real general use case for a NEON based WC flow in the kernel. It is not suitable for any performance path work, as getting into/out of a NEON context is fairly expensive compared to the gain of WC.

Future CPUs are going to fix this issue with a new ARM instruction, and __iowriteXX_copy() will be updated to use it automatically, probably via the ALTERNATIVES mechanism. Since this problem is constrained to mlx5's unique situation of needing a non-performance code path to duplicate what mlx5 userspace is doing as a matter of self-testing, implement it as a one line inline assembly in the driver directly.

Lastly, this approach was concluded from the discussion with the ARM maintainers, which confirms that it is the best solution:
https://lore.kernel.org/r/[email protected]

Signed-off-by: Patrisious Haddad <[email protected]>
Reviewed-by: Michael Guralnik <[email protected]>
Reviewed-by: Moshe Shemesh <[email protected]>
Signed-off-by: Tariq Toukan <[email protected]>
Signed-off-by: NipaLocal <nipa@local>
1 parent 3799b46 commit b174ae5

File tree

4 files changed, 54 insertions(+), 2 deletions(-)

drivers/net/ethernet/mellanox/mlx5/core/Makefile

Lines changed: 6 additions & 0 deletions

@@ -176,3 +176,9 @@ mlx5_core-$(CONFIG_PCIE_TPH) += lib/st.o
 
 obj-$(CONFIG_MLX5_DPLL) += mlx5_dpll.o
 mlx5_dpll-y := dpll.o
+
+#
+# NEON WC specific for mlx5
+#
+mlx5_core-$(CONFIG_KERNEL_MODE_NEON) += lib/wc_neon_iowrite64_copy.o
+CFLAGS_lib/wc_neon_iowrite64_copy.o += $(CC_FLAGS_FPU)
drivers/net/ethernet/mellanox/mlx5/core/lib/wc_neon_iowrite64_copy.c

Lines changed: 14 additions & 0 deletions

@@ -0,0 +1,14 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#include "lib/wc_neon_iowrite64_copy.h"
+
+void mlx5_wc_neon_iowrite64_copy(void __iomem *to, const void *from)
+{
+	asm volatile
+	("ld1 {v0.16b, v1.16b, v2.16b, v3.16b}, [%0]\n\t"
+	 "st1 {v0.16b, v1.16b, v2.16b, v3.16b}, [%1]"
+	 :
+	 : "r"(from), "r"(to)
+	 : "memory", "v0", "v1", "v2", "v3");
+}
drivers/net/ethernet/mellanox/mlx5/core/lib/wc_neon_iowrite64_copy.h

Lines changed: 12 additions & 0 deletions

@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+/* Copyright (c) 2025, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#ifndef __MLX5_LIB_WC_NEON_H__
+#define __MLX5_LIB_WC_NEON_H__
+
+/* Executes a 64 byte copy between the two provided pointers via ARM neon
+ * instruction.
+ */
+void mlx5_wc_neon_iowrite64_copy(void __iomem *to, const void *from);
+
+#endif /* __MLX5_LIB_WC_NEON_H__ */

drivers/net/ethernet/mellanox/mlx5/core/wc.c

Lines changed: 22 additions & 2 deletions

@@ -7,6 +7,11 @@
 #include "mlx5_core.h"
 #include "wq.h"
 
+#ifdef CONFIG_KERNEL_MODE_NEON
+#include "lib/wc_neon_iowrite64_copy.h"
+#include <asm/neon.h>
+#endif
+
 #define TEST_WC_NUM_WQES 255
 #define TEST_WC_LOG_CQ_SZ (order_base_2(TEST_WC_NUM_WQES))
 #define TEST_WC_SQ_LOG_WQ_SZ TEST_WC_LOG_CQ_SZ
@@ -249,6 +254,22 @@ static int mlx5_wc_create_sq(struct mlx5_core_dev *mdev, struct mlx5_wc_sq *sq)
 	return err;
 }
 
+static void mlx5_iowrite64_copy(struct mlx5_wc_sq *sq, __be32 mmio_wqe[16],
+				size_t mmio_wqe_size)
+{
+#ifdef CONFIG_KERNEL_MODE_NEON
+	if (cpu_has_neon()) {
+		kernel_neon_begin();
+		mlx5_wc_neon_iowrite64_copy(sq->bfreg.map + sq->bfreg.offset,
+					    mmio_wqe);
+		kernel_neon_end();
+		return;
+	}
+#endif
+	__iowrite64_copy(sq->bfreg.map + sq->bfreg.offset, mmio_wqe,
+			 mmio_wqe_size / 8);
+}
+
 static void mlx5_wc_destroy_sq(struct mlx5_wc_sq *sq)
 {
 	mlx5_core_destroy_sq(sq->cq.mdev, sq->sqn);
@@ -288,8 +309,7 @@ static void mlx5_wc_post_nop(struct mlx5_wc_sq *sq, bool signaled)
 	 */
 	wmb();
 
-	__iowrite64_copy(sq->bfreg.map + sq->bfreg.offset, mmio_wqe,
-			 sizeof(mmio_wqe) / 8);
+	mlx5_iowrite64_copy(sq, mmio_wqe, sizeof(mmio_wqe));
 
 	sq->bfreg.offset ^= buf_size;
 }
