Hi,
By default, the Helm chart creates a ClusterRoleBinding that grants the service account a ClusterRole with a very wide set of permissions:
`komoplane/charts/komoplane/templates/serviceaccount.yaml`, lines 17 to 26 in `006172b`:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: {{ include "app.serviceAccountName" . }}
  labels:
    {{- include "app.labels" . | nindent 4 }}
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["get", "list", "watch"]
```
Deploying the Helm chart with no extra values and running

```shell
kubectl auth can-i get secrets --as=system:serviceaccount:komoplane:komoplane -n kube-system
```

returns `yes`, i.e. the service account can read all secrets in the cluster, which seems a bit excessive.
Crossplane ships with a `crossplane-view` ClusterRole that aggregates get/list/watch permissions for all relevant Crossplane objects, provided Crossplane is deployed with the RBAC manager pod. The only missing permission is listing CRDs (needed to discover providers), which can easily be granted with a small custom ClusterRole.
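For context, `crossplane-view` is an aggregated ClusterRole: the RBAC manager labels the per-provider/per-XRD view roles it creates so that Kubernetes merges their rules in automatically, which is why new providers and XRDs keep working without touching the binding. A minimal sketch of the mechanism follows; the exact aggregation label is my assumption, so check `kubectl get clusterrole crossplane-view -o yaml` in your own cluster:

```yaml
# Sketch of an aggregated ClusterRole as the Crossplane RBAC manager creates it.
# The selector label is an assumption, shown for illustration only.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: crossplane-view
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.crossplane.io/aggregate-to-view: "true"
rules: []  # filled in automatically by the controller from matching ClusterRoles
```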
I made a clean install of Komoplane in a cluster with a variety of providers, XRDs, XRs, and claims, removed the cluster role binding the Helm chart creates by default, and applied this instead:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: komoplane-crossplane-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: crossplane-view
subjects:
  - kind: ServiceAccount
    namespace: komoplane
    name: komoplane
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: komoplane-view-crds
rules:
  - apiGroups: ["apiextensions.k8s.io"]
    resources: ["customresourcedefinitions"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: komoplane-view-crds
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: komoplane-view-crds
subjects:
  - kind: ServiceAccount
    namespace: komoplane
    name: komoplane
```
The outcome is that Komoplane runs exactly as before, but the service account no longer has excessive permissions (the previous `kubectl auth` query now returns `no`).
The only drawback I can think of is that this assumes Crossplane is deployed with the RBAC manager pod. That pod is deployed by default and I doubt many people disable it, but it is nevertheless a possibility.
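To account for that, the chart could keep the current behaviour as the default and expose this as an opt-in. The value name below is purely hypothetical and not something the chart supports today:

```yaml
# Hypothetical values.yaml toggle (illustrative only; not in the current chart).
rbac:
  # false (default): create the existing wide ClusterRole and binding.
  # true: skip it and instead bind the service account to crossplane-view
  #       plus the small CRD-read ClusterRole shown above.
  useCrossplaneViewRole: false
```

The templates would then wrap the existing ClusterRole/ClusterRoleBinding in an `{{- if not .Values.rbac.useCrossplaneViewRole }}` block and render the three manifests above otherwise.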
If we are OK with this I would be happy to make a PR.