
Conversation


@bsbernd bsbernd commented Jun 25, 2025

No description provided.

bsbernd added 3 commits June 25, 2025 15:04
This is another preparation step and will be used to decide
which queue to add a request to.
This is preparation for follow-up commits that allow running with a
reduced number of queues.

Signed-off-by: Bernd Schubert <[email protected]>
@bsbernd bsbernd requested review from hbirth and openunix June 25, 2025 21:41

@jianhuangli jianhuangli left a comment


  • Then tries the first available queue on the
    current numa node

other numa nodes?


@hbirth hbirth left a comment


LGTM

bsbernd added 2 commits June 26, 2025 11:46
So far queue selection was static - a request on core X
was always handled by the queue corresponding to core X.
A previous commit introduced bitmaps that track which queues
are available - queue selection can make use of these bitmaps
and try to use the ideal queue.

Rules are

- Tries the queue of the current core first, if available
and if that queue does not have too many entries queued

- Then tries the first available queue on the current
numa node. Random distribution is not used here,
because light queue usage is probably fine and
avoids kernel/userspace switches

- Then tries the first available global queue ignoring
numa nodes.

If no queue is free, the queue of the current core is
tried again; if that queue does not exist, it falls back
to a random queue on the current numa node - that also might
not exist - in which case a random available queue is used.
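The selection rules above can be sketched as a small userspace model. This is an illustrative sketch, not the kernel code: the struct and function names are invented, two bitmaps model "queue exists" vs "queue idle", and the "random" fallbacks of the commit message are replaced by first-set scans to keep the sketch deterministic.

```c
#include <stdint.h>

#define MAX_QUEUES 64

struct qsel {
	uint64_t exists;        /* queues registered by the fuse server */
	uint64_t idle;          /* queues currently free to take a request */
	int len[MAX_QUEUES];    /* per-queue number of queued entries */
};

/* First set bit in [from, end), or -1 if none. */
static int first_set(uint64_t bits, int from, int end)
{
	for (int q = from; q < end; q++)
		if (bits & (1ULL << q))
			return q;
	return -1;
}

/* numa_first/numa_end bound the queues of the current NUMA node. */
static int select_queue(const struct qsel *s, int core,
			int numa_first, int numa_end, int max_len)
{
	int q;

	/* Rule 1: the current core's queue, if idle and not too deep */
	if ((s->idle & (1ULL << core)) && s->len[core] < max_len)
		return core;

	/* Rule 2: first idle queue on the current NUMA node */
	q = first_set(s->idle, numa_first, numa_end);
	if (q >= 0)
		return q;

	/* Rule 3: first idle queue anywhere, ignoring NUMA */
	q = first_set(s->idle, 0, MAX_QUEUES);
	if (q >= 0)
		return q;

	/* No queue idle: the current core's queue, if it exists at all */
	if (s->exists & (1ULL << core))
		return core;

	/* Else any existing queue on the node, else any existing queue */
	q = first_set(s->exists, numa_first, numa_end);
	return q >= 0 ? q : first_set(s->exists, 0, MAX_QUEUES);
}
```

Note the asymmetry the rules imply: idleness is only a preference, so when every existing queue is busy the request is still placed somewhere rather than failing.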
Now that queue selection for fuse requests supports missing
queues we don't need to wait for full queue initialization.
In fact, fuse-server (daemon) might want to run with a reduced
number of queues, for example to reduce memory usage.
This also simplifies startup - after the first queue entry
registration fuse-io-uring is ready to be used.

Signed-off-by: Bernd Schubert <[email protected]>
@bsbernd bsbernd force-pushed the redfs-ubuntu-noble-6.8.0-58.60-updates-reduce-queues branch from 2801d80 to d763efd Compare June 26, 2025 09:48

bsbernd commented Jun 26, 2025

@jianhuangli thank you, updated the commit message.

@bsbernd bsbernd requested a review from yongzech June 26, 2025 09:52

bsbernd commented Oct 27, 2025

Deprecated by new approach.

@bsbernd bsbernd closed this Oct 27, 2025
@bsbernd bsbernd deleted the redfs-ubuntu-noble-6.8.0-58.60-updates-reduce-queues branch October 28, 2025 21:12
bsbernd pushed a commit that referenced this pull request Nov 7, 2025
jira LE-1907
cve CVE-2024-26974
Rebuild_History Non-Buildable kernel-5.14.0-427.24.1.el9_4
commit-author Herbert Xu <[email protected]>
commit d3b17c6

Using completion_done to determine whether the caller has gone
away only works after a complete call.  Furthermore it's still
possible that the caller has not yet called wait_for_completion,
resulting in another potential UAF.

Fix this by making the caller use cancel_work_sync and then freeing
the memory safely.
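The shape of the fix can be illustrated with a userspace pthreads analogue. This is a hedged sketch, not the qat driver's code: `reset_data`, `recovery_work`, and `run_reset` are invented names, and `pthread_join` stands in for `cancel_work_sync` as the synchronous "wait for the worker before freeing" step.

```c
#include <pthread.h>
#include <stdlib.h>

struct reset_data {
	int recovered;
};

static void *recovery_work(void *arg)
{
	struct reset_data *d = arg;
	d->recovered = 1;	/* stands in for the AER recovery work */
	return NULL;
}

/* Tear the worker down synchronously before freeing shared state. */
static int run_reset(void)
{
	struct reset_data *d = calloc(1, sizeof(*d));
	pthread_t worker;
	int ret;

	if (!d || pthread_create(&worker, NULL, recovery_work, d))
		return -1;

	/*
	 * pthread_join plays the role of cancel_work_sync here: once it
	 * returns, the worker can no longer touch *d, so the free below
	 * cannot race with it - the use-after-free the patch closes.
	 * Guessing liveness (the completion_done approach) cannot give
	 * this guarantee.
	 */
	pthread_join(worker, NULL);

	ret = d->recovered;
	free(d);
	return ret;
}
```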

Fixes: 7d42e09 ("crypto: qat - resolve race condition during AER recovery")
Cc: <[email protected]> #6.8+
Signed-off-by: Herbert Xu <[email protected]>
Reviewed-by: Giovanni Cabiddu <[email protected]>
Signed-off-by: Herbert Xu <[email protected]>
(cherry picked from commit d3b17c6)
Signed-off-by: Jonathan Maple <[email protected]>