/kind bug
What steps did you take and what happened:
Followed the quickstart documentation with Kubernetes v1.30.5 and a custom-built AMI (public AMIs are missing for that version and for the default v1.31.0).
The ELB health check fails and the cluster is stuck after creating the first control-plane instance. The AWS console shows that 0 of 1 instances are in service.
- The CAPA API defaults create a Classic ELB with an SSL health check target. (HTTPS also doesn't work, but TCP does.)
- Starting with Go 1.22, the RSA key exchange cipher suites are disabled by default (crypto/tls: disable RSA key exchange cipher suites by default golang/go#63413)
- Kubernetes > v1.30 switched to Go 1.22 in recent releases (kubernetes/kubernetes@ddb0b8d)
What did you expect to happen:
The defaults should result in a working cluster.
Anything else you would like to add:
- Changing the health check to TCP in the AWS console did fix the check, but this update is rejected by a validating webhook, and even after removing the webhook the new value from AWSCluster was never reconciled.
- Setting this flag on the kube-apiserver (and the other control-plane components) allowed the ELB health check to pass:
tls-cipher-suites: ...,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA
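In a Cluster API template, this flag can be wired in through the KubeadmControlPlane's ClusterConfiguration. A sketch, where the resource name is a placeholder and the leading `...` stands for the default cipher suites that should remain enabled:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane  # placeholder
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          # Keep the default suites and append the RSA key exchange ones
          # so the Classic ELB SSL health check can complete.
          tls-cipher-suites: "...,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA"
```

The same extraArgs mechanism applies to the controllerManager and scheduler sections for the other control-plane components.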
- Using an NLB load balancer works:
controlPlaneLoadBalancer:
  loadBalancerType: nlb
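For reference, that workaround would look roughly like this in the AWSCluster spec (name and region are placeholders):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta2
kind: AWSCluster
metadata:
  name: my-cluster   # placeholder
spec:
  region: us-east-1  # placeholder
  controlPlaneLoadBalancer:
    # NLBs use a TCP health check by default, which does not depend on
    # the TLS cipher suites offered by the apiserver.
    loadBalancerType: nlb
```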
There is some discussion about this in the Kubernetes Slack: https://kubernetes.slack.com/archives/C3QUFP0QM/p1726622974749509
Environment:
- Cluster-api-provider-aws version: 2.6.1
- Kubernetes version: (use
kubectl version): v1.30.5 - OS (e.g. from
/etc/os-release):