DLPX-83442 Disable various kernel modules which we don't use #14
Merged
Conversation

No description provided.
Force-pushed from 2e70437 to 0197b21
sdimitro approved these changes on Oct 20, 2022
Force-pushed from 865278d to 4889345
Force-pushed from 85e9b55 to 0769015
Force-pushed from 0769015 to e07ba43
sebroy approved these changes on Nov 28, 2022
don-brady pushed a commit to don-brady/linux-kernel-gcp that referenced this pull request on Nov 29, 2022
delphix-devops-bot pushed a commit that referenced this pull request on Dec 2, 2022
delphix-devops-bot pushed a commit that referenced this pull request on Dec 15, 2022
delphix-devops-bot pushed a commit that referenced this pull request on Jan 7, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Jan 16, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Feb 15, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Mar 4, 2023:
…KVM vectors

BugLink: https://bugs.launchpad.net/bugs/2003896

Sami reports that linux panic()s when resuming from suspend to RAM. This is because when CPUs are brought back online, they re-enable any necessary mitigations.

The Spectre-v2 and Spectre-BHB mitigations interact, as both need to be done by KVM when exiting a guest. Slots KVM can use as vectors are allocated, and templates for the mitigation are patched into the vector. This fails if a new slot needs to be allocated once the kernel has finished booting, as it is no longer possible to modify KVM's vectors:

| root@adam:/sys/devices/system/cpu/cpu1# echo 1 > online
| Unable to handle kernel write to read-only memory at virtual add>
| Mem abort info:
|   ESR = 0x9600004e
|   Exception class = DABT (current EL), IL = 32 bits
|   SET = 0, FnV = 0
|   EA = 0, S1PTW = 0
| Data abort info:
|   ISV = 0, ISS = 0x0000004e
|   CM = 0, WnR = 1
| swapper pgtable: 4k pages, 48-bit VAs, pgdp = 000000000f07a71c
| [ffff800000b4b800] pgd=00000009ffff8803, pud=00000009ffff7803, p>
| Internal error: Oops: 9600004e [#1] PREEMPT SMP
| Modules linked in:
| Process swapper/1 (pid: 0, stack limit = 0x0000000063153c53)
| CPU: 1 PID: 0 Comm: swapper/1 Not tainted 4.19.252-dirty #14
| Hardware name: ARM LTD ARM Juno Development Platform/ARM Juno De>
| pstate: 000001c5 (nzcv dAIF -PAN -UAO)
| pc : __memcpy+0x48/0x180
| lr : __copy_hyp_vect_bpi+0x64/0x90
| Call trace:
|  __memcpy+0x48/0x180
|  kvm_setup_bhb_slot+0x204/0x2a8
|  spectre_bhb_enable_mitigation+0x1b8/0x1d0
|  __verify_local_cpu_caps+0x54/0xf0
|  check_local_cpu_capabilities+0xc4/0x184
|  secondary_start_kernel+0xb0/0x170
| Code: b8404423 b80044c3 3618006 f8408423 (f80084c3)
| ---[ end trace 859bcacb09555348 ]---
| Kernel panic - not syncing: Attempted to kill the idle task!
| SMP: stopping secondary CPUs
| Kernel Offset: disabled
| CPU features: 0x10,25806086
| Memory Limit: none
| ---[ end Kernel panic - not syncing: Attempted to kill the idle ]

This is only a problem on platforms where there is only one CPU that is vulnerable to both Spectre-v2 and Spectre-BHB.

The Spectre-v2 mitigation identifies the slot it can re-use by the CPU's 'fn'. It unconditionally writes the slot number and 'template_start' pointer. The Spectre-BHB mitigation identifies slots it can re-use by the CPU's template_start pointer, which was previously clobbered by the Spectre-v2 mitigation. When there is only one CPU that is vulnerable to both issues, this causes Spectre-v2 to try to allocate a new slot, which fails.

Change both mitigations to check whether they are changing the slot this CPU uses before writing the percpu variables again. This issue only exists in the stable backports for Spectre-BHB, which have to use totally different infrastructure from mainline.

Reported-by: Sami Lee <[email protected]>
Fixes: 9013fd4bc958 ("arm64: Mitigate spectre style branch history side channels")
Signed-off-by: James Morse <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
Signed-off-by: Kamal Mostafa <[email protected]>
Signed-off-by: Stefan Bader <[email protected]>
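The fix described above amounts to a check-before-write on the per-CPU slot bookkeeping. A minimal sketch of that idea follows; the variable and function names are illustrative placeholders, not the actual arm64 symbols touched by the backport:

```c
#include <linux/percpu.h>

/*
 * Hypothetical per-CPU bookkeeping, standing in for the real slot and
 * template_start state described in the commit message above.
 */
static DEFINE_PER_CPU(int, mitigation_slot) = -1;
static DEFINE_PER_CPU(const char *, mitigation_template_start);

static void reuse_or_record_slot(int slot, const char *template_start)
{
	/*
	 * If this CPU already uses the requested slot (e.g. the mitigation
	 * is being re-enabled after CPU hotplug or resume), leave the
	 * per-CPU state alone so the other mitigation can still recognise
	 * the slot it shares.
	 */
	if (__this_cpu_read(mitigation_slot) == slot)
		return;

	/* Only clobber the per-CPU variables when the slot really changes. */
	__this_cpu_write(mitigation_slot, slot);
	__this_cpu_write(mitigation_template_start, template_start);
}
```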
delphix-devops-bot pushed a commit that referenced this pull request on Mar 4, 2023:
…g the sock

BugLink: https://bugs.launchpad.net/bugs/2003914

[ Upstream commit 3cf7203 ]

There is a race condition in vxlan: when a vxlan device is deleted while packets are being received, there is a possibility that the sock is released after getting vxlan_sock vs from sk_user_data. Then in the later vxlan_ecn_decapsulate() / vxlan_get_sk_family() we get a NULL pointer dereference. e.g.

 #0 [ffffa25ec6978a38] machine_kexec at ffffffff8c669757
 #1 [ffffa25ec6978a90] __crash_kexec at ffffffff8c7c0a4d
 #2 [ffffa25ec6978b58] crash_kexec at ffffffff8c7c1c48
 #3 [ffffa25ec6978b60] oops_end at ffffffff8c627f2b
 #4 [ffffa25ec6978b80] page_fault_oops at ffffffff8c678fcb
 #5 [ffffa25ec6978bd8] exc_page_fault at ffffffff8d109542
 #6 [ffffa25ec6978c00] asm_exc_page_fault at ffffffff8d200b62
    [exception RIP: vxlan_ecn_decapsulate+0x3b]
    RIP: ffffffffc1014e7b  RSP: ffffa25ec6978cb0  RFLAGS: 00010246
    RAX: 0000000000000008  RBX: ffff8aa000888000  RCX: 0000000000000000
    RDX: 000000000000000e  RSI: ffff8a9fc7ab803e  RDI: ffff8a9fd1168700
    RBP: ffff8a9fc7ab803e   R8: 0000000000700000   R9: 00000000000010ae
    R10: ffff8a9fcb748980  R11: 0000000000000000  R12: ffff8a9fd1168700
    R13: ffff8aa000888000  R14: 00000000002a0000  R15: 00000000000010ae
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #7 [ffffa25ec6978ce8] vxlan_rcv at ffffffffc10189cd [vxlan]
 #8 [ffffa25ec6978d90] udp_queue_rcv_one_skb at ffffffff8cfb6507
 #9 [ffffa25ec6978dc0] udp_unicast_rcv_skb at ffffffff8cfb6e45
#10 [ffffa25ec6978dc8] __udp4_lib_rcv at ffffffff8cfb8807
#11 [ffffa25ec6978e20] ip_protocol_deliver_rcu at ffffffff8cf76951
#12 [ffffa25ec6978e48] ip_local_deliver at ffffffff8cf76bde
#13 [ffffa25ec6978ea0] __netif_receive_skb_one_core at ffffffff8cecde9b
#14 [ffffa25ec6978ec8] process_backlog at ffffffff8cece139
#15 [ffffa25ec6978f00] __napi_poll at ffffffff8ceced1a
#16 [ffffa25ec6978f28] net_rx_action at ffffffff8cecf1f3
#17 [ffffa25ec6978fa0] __softirqentry_text_start at ffffffff8d4000ca
#18 [ffffa25ec6978ff0] do_softirq at ffffffff8c6fbdc3

Reproducer: https://github.com/Mellanox/ovs-tests/blob/master/test-ovs-vxlan-remove-tunnel-during-traffic.sh

Fix this by waiting for all sk_user_data readers to finish before releasing the sock.

Reported-by: Jianlin Shi <[email protected]>
Suggested-by: Jakub Sitnicki <[email protected]>
Fixes: 6a93cc9 ("udp-tunnel: Add a few more UDP tunnel APIs")
Signed-off-by: Hangbin Liu <[email protected]>
Reviewed-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
Signed-off-by: Kamal Mostafa <[email protected]>
Signed-off-by: Stefan Bader <[email protected]>
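The "wait for all readers" part of that fix follows the usual RCU pattern for sk_user_data. Purely as a hedged illustration (the struct and function names below are hypothetical, and this is not the literal vxlan/udp_tunnel patch), the receive path dereferences sk_user_data inside an RCU read-side section, and teardown unpublishes the pointer and waits for in-flight readers before freeing:

```c
#include <net/sock.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical per-sock tunnel state, standing in for vxlan_sock. */
struct tunnel_state {
	int family;
};

/* Receive path: look up the private data under RCU protection. */
static void tunnel_rx(struct sock *sk)
{
	struct tunnel_state *ts;

	rcu_read_lock();
	ts = rcu_dereference_sk_user_data(sk);
	if (ts) {
		/* Safe to use ts here: teardown waits for this section. */
	}
	rcu_read_unlock();
}

/* Teardown: unpublish, wait for in-flight readers, then free. */
static void tunnel_sock_release(struct sock *sk, struct tunnel_state *ts)
{
	rcu_assign_sk_user_data(sk, NULL);
	synchronize_rcu();
	kfree(ts);
}
```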
delphix-devops-bot pushed a commit that referenced this pull request on Mar 4, 2023
prakashsurya pushed a commit that referenced this pull request on Mar 14, 2023
prakashsurya pushed a commit that referenced this pull request on Mar 14, 2023
prakashsurya pushed a commit that referenced this pull request on Mar 14, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Mar 30, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Apr 20, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Apr 28, 2023
delphix-devops-bot pushed a commit that referenced this pull request on May 26, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Jun 3, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Jun 4, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Jun 5, 2023
delphix-devops-bot pushed a commit that referenced this pull request on Jul 21, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 22, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 23, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 24, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 25, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 25, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 26, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 27, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 28, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 29, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 30, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Jul 31, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Aug 1, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Aug 2, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Aug 3, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Aug 4, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Aug 7, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Aug 8, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Sep 9, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Sep 25, 2025:
BugLink: https://bugs.launchpad.net/bugs/2115678

[ Upstream commit 88f7f56d16f568f19e1a695af34a7f4a6ce537a6 ]

When a bio with REQ_PREFLUSH is submitted to dm, __send_empty_flush() generates a flush_bio with REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC, which causes the flush_bio to be throttled by wbt_wait().

An example from v5.4; a similar problem also exists upstream:

crash> bt 2091206
PID: 2091206  TASK: ffff2050df92a300  CPU: 109  COMMAND: "kworker/u260:0"
 #0 [ffff800084a2f7f0] __switch_to at ffff80004008aeb8
 #1 [ffff800084a2f820] __schedule at ffff800040bfa0c4
 #2 [ffff800084a2f880] schedule at ffff800040bfa4b4
 #3 [ffff800084a2f8a0] io_schedule at ffff800040bfa9c4
 #4 [ffff800084a2f8c0] rq_qos_wait at ffff8000405925bc
 #5 [ffff800084a2f940] wbt_wait at ffff8000405bb3a0
 #6 [ffff800084a2f9a0] __rq_qos_throttle at ffff800040592254
 #7 [ffff800084a2f9c0] blk_mq_make_request at ffff80004057cf38
 #8 [ffff800084a2fa60] generic_make_request at ffff800040570138
 #9 [ffff800084a2fae0] submit_bio at ffff8000405703b4
#10 [ffff800084a2fb50] xlog_write_iclog at ffff800001280834 [xfs]
#11 [ffff800084a2fbb0] xlog_sync at ffff800001280c3c [xfs]
#12 [ffff800084a2fbf0] xlog_state_release_iclog at ffff800001280df4 [xfs]
#13 [ffff800084a2fc10] xlog_write at ffff80000128203c [xfs]
#14 [ffff800084a2fcd0] xlog_cil_push at ffff8000012846dc [xfs]
#15 [ffff800084a2fda0] xlog_cil_push_work at ffff800001284a2c [xfs]
#16 [ffff800084a2fdb0] process_one_work at ffff800040111d08
#17 [ffff800084a2fe00] worker_thread at ffff8000401121cc
#18 [ffff800084a2fe70] kthread at ffff800040118de4

After commit 2def284 ("xfs: don't allow log IO to be throttled"), the metadata submitted by xlog_write_iclog() should not be throttled. But due to the existence of the dm layer, throttling flush_bio indirectly causes the metadata bio to be throttled.

Fix this by conditionally adding REQ_IDLE to flush_bio.bi_opf, which makes wbt_should_throttle() return false to avoid wbt_wait().

Signed-off-by: Jinliang Zheng <[email protected]>
Reviewed-by: Tianxiang Peng <[email protected]>
Reviewed-by: Hao Peng <[email protected]>
Signed-off-by: Mikulas Patocka <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
CVE-2025-38063
Signed-off-by: Manuel Diewald <[email protected]>
Signed-off-by: Mehmet Basaran <[email protected]>
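The fix boils down to one extra flag when the empty flush bio's op flags are assembled. A rough sketch of the idea only (not the literal dm patch; the helper name and the skip_wbt condition are placeholders):

```c
#include <linux/blk_types.h>

/*
 * Build the opf for an empty flush bio.  Adding REQ_IDLE to a REQ_SYNC
 * write makes wbt_should_throttle() treat it as non-throttled I/O, so
 * the flush is not delayed in wbt_wait().
 */
static inline blk_opf_t empty_flush_opf(bool skip_wbt)
{
	blk_opf_t opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;

	if (skip_wbt)
		opf |= REQ_IDLE;

	return opf;
}
```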
delphix-devops-bot pushed a commit that referenced this pull request on Sep 25, 2025:
BugLink: https://bugs.launchpad.net/bugs/2119603

[ Upstream commit 0153f36041b8e52019ebfa8629c13bf8f9b0a951 ]

When the XDP program is loaded, the XDP callback adds new Tx queues. This means that the callback must update the Tx scheduler with the new queue number. In the event of a Tx scheduler failure, the XDP callback should also fail and roll back any changes previously made for XDP preparation.

The previous implementation had a bug where not all changes made by the XDP callback were rolled back. This caused a crash with the following call trace:

[  +9.549584] ice 0000:ca:00.0: Failed VSI LAN queue config for XDP, error: -5
[  +0.382335] Oops: general protection fault, probably for non-canonical address 0x50a2250a90495525: 0000 [#1] SMP NOPTI
[  +0.010710] CPU: 103 UID: 0 PID: 0 Comm: swapper/103 Not tainted 6.14.0-net-next-mar-31+ #14 PREEMPT(voluntary)
[  +0.010175] Hardware name: Intel Corporation M50CYP2SBSTD/M50CYP2SBSTD, BIOS SE5C620.86B.01.01.0005.2202160810 02/16/2022
[  +0.010946] RIP: 0010:__ice_update_sample+0x39/0xe0 [ice]
[...]
[  +0.002715] Call Trace:
[  +0.002452]  <IRQ>
[  +0.002021]  ? __die_body.cold+0x19/0x29
[  +0.003922]  ? die_addr+0x3c/0x60
[  +0.003319]  ? exc_general_protection+0x17c/0x400
[  +0.004707]  ? asm_exc_general_protection+0x26/0x30
[  +0.004879]  ? __ice_update_sample+0x39/0xe0 [ice]
[  +0.004835]  ice_napi_poll+0x665/0x680 [ice]
[  +0.004320]  __napi_poll+0x28/0x190
[  +0.003500]  net_rx_action+0x198/0x360
[  +0.003752]  ? update_rq_clock+0x39/0x220
[  +0.004013]  handle_softirqs+0xf1/0x340
[  +0.003840]  ? sched_clock_cpu+0xf/0x1f0
[  +0.003925]  __irq_exit_rcu+0xc2/0xe0
[  +0.003665]  common_interrupt+0x85/0xa0
[  +0.003839]  </IRQ>
[  +0.002098]  <TASK>
[  +0.002106]  asm_common_interrupt+0x26/0x40
[  +0.004184] RIP: 0010:cpuidle_enter_state+0xd3/0x690

Fix this by performing the missing unmapping of XDP queues from q_vectors and setting the XDP rings pointer back to NULL after all those queues are released. Also, add an immediate exit from the XDP callback in case of ring preparation failure.

Fixes: efc2214 ("ice: Add support for XDP")
Reviewed-by: Dawid Osuchowski <[email protected]>
Reviewed-by: Przemek Kitszel <[email protected]>
Reviewed-by: Jacob Keller <[email protected]>
Signed-off-by: Michal Kubiak <[email protected]>
Reviewed-by: Aleksandr Loktionov <[email protected]>
Reviewed-by: Simon Horman <[email protected]>
Tested-by: Jesse Brandeburg <[email protected]>
Tested-by: Saritha Sanigani <[email protected]> (A Contingent Worker at Intel)
Signed-off-by: Tony Nguyen <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
CVE-2025-38127
Signed-off-by: Manuel Diewald <[email protected]>
Signed-off-by: Mehmet Basaran <[email protected]>
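The rollback described above is the standard goto-unwind error path. A hedged sketch under assumed names (my_vsi and every helper below are hypothetical stand-ins, not the real ice driver symbols):

```c
/* Hypothetical stand-ins for the driver's real structures and helpers. */
struct my_vsi {
	void **xdp_rings;		/* per-queue XDP ring array, or NULL */
};

int alloc_xdp_rings(struct my_vsi *vsi);
void map_xdp_rings_to_qvectors(struct my_vsi *vsi);
int update_tx_scheduler(struct my_vsi *vsi);	/* the step that failed in the report */
void unmap_xdp_rings_from_qvectors(struct my_vsi *vsi);
void free_xdp_rings(struct my_vsi *vsi);

/* On scheduler failure, undo every XDP preparation step and bail out. */
static int setup_xdp_queues(struct my_vsi *vsi)
{
	int err;

	err = alloc_xdp_rings(vsi);
	if (err)
		return err;		/* nothing to roll back yet */

	map_xdp_rings_to_qvectors(vsi);

	err = update_tx_scheduler(vsi);
	if (err)
		goto unwind;

	return 0;

unwind:
	unmap_xdp_rings_from_qvectors(vsi);	/* the unmapping the fix adds */
	free_xdp_rings(vsi);
	vsi->xdp_rings = NULL;			/* clear the stale pointer */
	return err;				/* make the XDP callback fail immediately */
}
```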
delphix-devops-bot pushed a commit that referenced this pull request on Sep 25, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Sep 26, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Sep 27, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Sep 28, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Sep 29, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Sep 30, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Oct 1, 2025: …operly started (#14)
delphix-devops-bot pushed a commit that referenced this pull request on Oct 3, 2025: …operly started (#14)