Helm operator logs full deploy yaml every reconcile cycle even when no changes after upgrading to 1.9 from 1.7 #5041

@chlam4

Description

Bug Report

We have an operator built on the operator-sdk and its helm-operator. After upgrading from SDK 1.7 to 1.9, the operator now dumps the full deployment YAML into the info log at every reconcile cycle, even when there are no changes at all. No patch requests are sent to the kube-apiserver. This behavior is new in 1.9; it did not exist in 1.7.

What did you do?

Follow the procedure here to deploy the operator in a Kubernetes cluster. Tail the operator log and you will see it dump the full deployment YAML every minute. Here is an excerpt of the logging at the beginning of each reconcile cycle; attached please find the entire log of a full deployment YAML printed out at each cycle.

{"level":"info","ts":1625235104.0440075,"logger":"helm.controller","msg":"Upgraded release","namespace":"turbonomic","name":"xl-release","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Xl","release":"xl-release","force":false}
 ---
 # Source: xl/templates/telemetry-secret.yaml
 apiVersion: v1
 kind: Secret
 metadata:
   name: prometheus-datacloud-gateway
   labels:
     app.kubernetes.io/name: xl
     app.kubernetes.io/instance: xl-release
     app.kubernetes.io/managed-by: Helm
 type: Opaque
 data:
...

What did you expect to see?

We expect no such logging at the info level when there are no changes in the operator's custom resource, just as when we built with operator-sdk 1.7. With 1.7, we only see the first line:

{"level":"info","ts":1625235104.0440075,"logger":"helm.controller","msg":"Upgraded release","namespace":"turbonomic","name":"xl-release","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Xl","release":"xl-release","force":false}

What did you see instead? Under which circumstances?

We see the full deployment YAML logged every minute. Attached please find the entire log of a full deployment YAML printed out at each cycle.

Environment

Operator type:
helm operator

Kubernetes cluster type:
vanilla

$ operator-sdk version

operator-sdk version: "v1.9.0-6-g2781cf89", commit: "2781cf89a0d51a2783fa979a50ce8fdb22c820a9", kubernetes version: "v1.20.2", go version: "go1.16.3", GOOS: "darwin", GOARCH: "amd64"

$ go version (if language is Go)
N/A

$ kubectl version

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T21:15:16Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.10", GitCommit:"62876fc6d93e891aa7fbe19771e6a6c03773b0f7", GitTreeState:"clean", BuildDate:"2020-10-15T01:43:56Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.21) and server (1.18) exceeds the supported minor version skew of +/-1

Additional context

We've done some investigation and were able to isolate the change to the following commit. We don't know if it's intentional.
cae12a2

In particular, it is this line of the new code that makes the difference:
https://github.com/operator-framework/operator-sdk/blob/master/internal/helm/release/manager.go#L146
It now compares m.chart and deployedRelease.Chart, whereas it used to compare only deployedRelease.Manifest and candidateRelease.Manifest. That change seems intentional, but m.chart and deployedRelease.Chart can be very different, at least in the case of our operator (please see the attached).
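To illustrate why the two comparisons can disagree, here is a minimal, self-contained Go sketch. It uses hypothetical stand-in structs (Chart and Release are reduced stand-ins here, not Helm's actual chart.Chart or release.Release types, and the function names are ours): the rendered manifests are byte-identical, yet the chart structs differ in incidental state, so a reflect.DeepEqual comparison of charts reports an upgrade where a manifest comparison would not.

package main

import (
	"fmt"
	"reflect"
)

// Hypothetical stand-ins for Helm's chart and release types, reduced to the
// fields relevant here. The loaded chart carries more state (file set,
// ordering, lock data) than what ends up in the rendered manifest.
type Chart struct {
	Name    string
	Version string
	Files   map[string][]byte
}

type Release struct {
	Chart    Chart
	Manifest string // rendered YAML
}

// Pre-1.9-style check (as described above): compare rendered manifests only.
func manifestChanged(deployed, candidate Release) bool {
	return deployed.Manifest != candidate.Manifest
}

// Post-1.9-style check (as described above): compare the chart loaded by the
// operator against the chart recorded in the deployed release.
func chartChanged(loadedChart Chart, deployed Release) bool {
	return !reflect.DeepEqual(loadedChart, deployed.Chart)
}

func main() {
	manifest := "apiVersion: v1\nkind: Secret\nmetadata:\n  name: prometheus-datacloud-gateway\n"

	// The chart as loaded from the operator image's local files.
	loaded := Chart{Name: "xl", Version: "1.0.0", Files: map[string][]byte{
		"README.md": []byte("local copy"),
	}}

	// The chart as stored with the deployed release: same templates and
	// identical rendered output, but different incidental state.
	deployed := Release{
		Chart:    Chart{Name: "xl", Version: "1.0.0", Files: nil},
		Manifest: manifest,
	}
	candidate := Release{Chart: loaded, Manifest: manifest}

	fmt.Println("manifest changed:", manifestChanged(deployed, candidate)) // false
	fmt.Println("chart changed:   ", chartChanged(loaded, deployed))       // true
}

If the real code follows this pattern, a deep comparison of the charts returning true on every cycle would explain why the upgrade path (and its full-manifest logging) runs on every reconcile even though no patch is ever sent to the apiserver.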
