OCPBUGS-14644: Do not set master node selector if there's no masters #1347
Conversation
This is so that `oc adm must-gather` works by default in HyperShift clusters.
soltysh
left a comment
/hold
```diff
 	}
 	for _, node := range nodes.Items {
-		pod, err := o.Client.CoreV1().Pods(ns.Name).Create(context.TODO(), o.newPod(node.Name, image), metav1.CreateOptions{})
+		pod, err := o.Client.CoreV1().Pods(ns.Name).Create(context.TODO(), o.newPod(node.Name, image, hasMaster), metav1.CreateOptions{})
```
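The diff threads a `hasMaster` flag into pod creation. A minimal sketch of what such a check could look like, assuming the node labels have already been fetched (the function name and signature here are hypothetical; the real code inspects `corev1.Node` objects returned by the Kubernetes client):

```go
package main

import "fmt"

// hasMasterNode reports whether any node carries the
// "node-role.kubernetes.io/master" role label. Hypothetical helper
// illustrating the check behind the diff's hasMaster flag.
func hasMasterNode(nodeLabels []map[string]string) bool {
	for _, labels := range nodeLabels {
		if _, ok := labels["node-role.kubernetes.io/master"]; ok {
			return true
		}
	}
	return false
}

func main() {
	standalone := []map[string]string{
		{"node-role.kubernetes.io/master": ""},
		{"node-role.kubernetes.io/worker": ""},
	}
	hosted := []map[string]string{
		{"node-role.kubernetes.io/worker": ""},
	}
	fmt.Println(hasMasterNode(standalone)) // true: cluster has masters
	fmt.Println(hasMasterNode(hosted))     // false: hosted cluster, workers only
}
```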
Before we start adding weird edge cases, have you considered using --node-selector=node-role.kubernetes.io/worker instead?
I wouldn't look at OpenShift hosted clusters powered by HyperShift as a weird edge case, but rather as the only/default scenario for the Red Hat OpenShift managed services offering, and an increasingly common choice for the self-hosted multi-cluster offering.
The intent was to avoid end users having to set an additional flag for this to work out of the box in any of the scenarios above.
Whether this is the right implementation approach to satisfy the use case, I have no opinion; I'm not familiar with this code base, so feel free to close and request a deeper discussion. I'm curious why we run on a master by default?
> I'm curious why we run on a master by default?
We only care about what's going on on masters: even though part of the data we scrape is reachable through the API, a significant portion is scraped directly from the OS running the control-plane components. That's why it was natural for us to force running the gather pods on masters.
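Given that rationale, the behavior the PR title describes (only pin the gather pod to masters when masters actually exist) could be sketched like this. This is a hypothetical helper mirroring the `hasMaster` plumbing, not the actual implementation in `oc`:

```go
package main

import "fmt"

// gatherNodeSelector returns the node selector for the must-gather pod.
// Hypothetical sketch: pin to masters only when the cluster has any;
// otherwise leave the selector empty so the pod can schedule on workers,
// which is the case in hosted (HyperShift) guest clusters.
func gatherNodeSelector(hasMaster bool) map[string]string {
	if !hasMaster {
		// Hosted clusters expose no master nodes to the guest API,
		// so an empty selector lets the pod run anywhere.
		return nil
	}
	return map[string]string{"node-role.kubernetes.io/master": ""}
}

func main() {
	fmt.Println(gatherNodeSelector(true))  // selector pinning to masters
	fmt.Println(gatherNodeSelector(false)) // empty selector
}
```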
cc @soltysh any thoughts on how to move this forward?
soltysh
left a comment
/hold cancel
/lgtm
/approve
I'll have to make a few refactorings to that code, so let's land it as is for now.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: enxebre, soltysh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@enxebre: all tests passed! Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@soltysh we'd want this in 4.12. Let's discuss if you have any objections or concerns.
@enxebre: new pull request created: #1366. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Too late to get this bug linked up to the 4.13 GA errata or anything, but just to make it easier for folks looking at this pull request to find the bug chain it's the tip of:
/retitle OCPBUGS-14644: Do not set master node selector if there's no masters
@enxebre: Jira Issue OCPBUGS-14644: all pull requests linked via external trackers have merged. Jira Issue OCPBUGS-14644 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Ref https://issues.redhat.com/browse/HOSTEDCP-836