AWX Operator

An Ansible AWX operator for Kubernetes, built with the Operator SDK and Ansible.
This operator is meant to provide a more Kubernetes-native installation method for AWX via an AWX Custom Resource Definition (CRD).
⚠️ The operator is not supported by Red Hat, and is in alpha status. For now, use it at your own risk!
This Kubernetes Operator is meant to be deployed in your Kubernetes cluster(s) and can manage one or more AWX instances in any namespace.
For testing purposes, the awx-operator can be deployed on a Minikube cluster. Due to different OS and hardware environments, please refer to the official Minikube documentation for further information.
$ minikube start --addons=ingress --cpus=4 --cni=flannel --install-addons=true \
--kubernetes-version=stable --memory=6g
😄 minikube v1.20.0 on Fedora 34
✨ Using the kvm2 driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating kvm2 VM (CPUs=4, Memory=6144MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔗 Configuring Flannel (Container Networking Interface) ...
🔎 Verifying Kubernetes components...
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.44.0
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎 Verifying ingress addon...
🌟 Enabled addons: storage-provisioner, default-storageclass, ingress
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Once Minikube is deployed, check that communication between the node(s) and the kube-apiserver is working as expected.
$ minikube kubectl -- get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 6m28s v1.20.2
$ minikube kubectl -- get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-admission-create-tjk94 0/1 Completed 0 6m4s
ingress-nginx ingress-nginx-admission-patch-r4pl6 0/1 Completed 0 6m4s
ingress-nginx ingress-nginx-controller-5d88495688-sbtp9 1/1 Running 0 6m4s
kube-system coredns-74ff55c5b-2wz6n 1/1 Running 0 6m4s
kube-system etcd-minikube 1/1 Running 0 6m13s
kube-system kube-apiserver-minikube 1/1 Running 0 6m13s
kube-system kube-controller-manager-minikube 1/1 Running 0 6m13s
kube-system kube-flannel-ds-amd64-lw7lv 1/1 Running 0 6m3s
kube-system kube-proxy-lcxx7 1/1 Running 0 6m3s
kube-system kube-scheduler-minikube 1/1 Running 0 6m13s
kube-system storage-provisioner 1/1 Running 1 6m17s

It is not necessary to install kubectl separately, since it comes already wrapped inside Minikube. As demonstrated above, simply prefix kubectl commands with minikube kubectl --; for example, kubectl get nodes becomes minikube kubectl -- get nodes.
Let's create an alias for easier usage:
$ alias kubectl="minikube kubectl --"

Now you need to deploy AWX Operator into your cluster. Start by going to https://github.com/ansible/awx-operator/releases and making note of the latest release. Replace <TAG> in the URL https://raw.githubusercontent.com/ansible/awx-operator/<TAG>/deploy/awx-operator.yaml with the version you are deploying.
$ kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/<TAG>/deploy/awx-operator.yaml
customresourcedefinition.apiextensions.k8s.io/awxs.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxbackups.awx.ansible.com created
customresourcedefinition.apiextensions.k8s.io/awxrestores.awx.ansible.com created
clusterrole.rbac.authorization.k8s.io/awx-operator created
clusterrolebinding.rbac.authorization.k8s.io/awx-operator created
serviceaccount/awx-operator created
deployment.apps/awx-operator created

Wait a few minutes and you should have the awx-operator running.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
awx-operator-7dbf9db9d7-z9hqx 1/1 Running 0 50s

Then create a file named awx-demo.yml with the suggested content. The metadata.name you provide will be the name of the resulting AWX deployment. If you deploy more than one AWX instance to the same namespace, be sure to use unique names.
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx-demo
spec:
  service_type: nodeport
  ingress_type: none
  hostname: awx-demo.example.com

Finally, use kubectl to create the AWX instance in your cluster:
$ kubectl apply -f awx-demo.yml
awx.awx.ansible.com/awx-demo created

After a few minutes, the new AWX instance will be deployed. You can follow the operator pod logs to track the progress of the installation by running kubectl logs -f deployments/awx-operator.
$ kubectl get pods -l "app.kubernetes.io/managed-by=awx-operator"
NAME READY STATUS RESTARTS AGE
awx-demo-77d96f88d5-pnhr8 4/4 Running 0 3m24s
awx-demo-postgres-0 1/1 Running 0 3m34s
$ kubectl get svc -l "app.kubernetes.io/managed-by=awx-operator"
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
awx-demo-postgres ClusterIP None <none> 5432/TCP 4m4s
awx-demo-service NodePort 10.109.40.38 <none> 80:31006/TCP 3m56s

Once deployed, the AWX instance will be accessible by running minikube service awx-demo-service --url.
By default, the admin user is admin and the password is available in the <resourcename>-admin-password secret. To retrieve the admin password, run kubectl get secret <resourcename>-admin-password -o jsonpath="{.data.password}" | base64 --decode
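For the awx-demo instance created above, that would be:

$ kubectl get secret awx-demo-admin-password -o jsonpath="{.data.password}" | base64 --decode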
You just completed the most basic install of an AWX instance via this operator. Congratulations!

For an example using the Nginx Controller in Minikube, don't miss our demo video.
There are three variables that are customizable for the admin user account creation.
| Name | Description | Default |
|---|---|---|
| admin_user | Name of the admin user | admin |
| admin_email | Email of the admin user | [email protected] |
| admin_password_secret | Secret that contains the admin user password | Empty string |
⚠️ admin_password_secret must be a Kubernetes secret and not your clear-text password.
If admin_password_secret is not provided, the operator will look for a secret named <resourcename>-admin-password for the admin password. If it is not present, the operator will generate a password and create a Secret from it named <resourcename>-admin-password.
To retrieve the admin password, run kubectl get secret <resourcename>-admin-password -o jsonpath="{.data.password}" | base64 --decode
The secret that is expected to be passed should be formatted as follows:
---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-admin-password
  namespace: <target namespace>
stringData:
  password: mysuperlongpassword
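Equivalently, you could create this secret straight from the command line; a minimal sketch, reusing the awx-demo name from earlier:

$ kubectl create secret generic awx-demo-admin-password \
    --from-literal=password=mysuperlongpassword -n <target namespace>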
If the service_type is not specified, the ClusterIP service will be used for your AWX Tower service.

The service_type supported options are: ClusterIP, LoadBalancer and NodePort.
The following variables are customizable for any service_type
| Name | Description | Default |
|---|---|---|
| service_labels | Add custom labels | Empty string |
---
spec:
  ...
  service_type: ClusterIP
  service_labels: |
    environment: testing

- LoadBalancer
The following variables are customizable only when service_type=LoadBalancer
| Name | Description | Default |
|---|---|---|
| loadbalancer_annotations | LoadBalancer annotations | Empty string |
| loadbalancer_protocol | Protocol to use for Loadbalancer ingress | http |
| loadbalancer_port | Port used for Loadbalancer ingress | 80 |
---
spec:
  ...
  service_type: LoadBalancer
  loadbalancer_protocol: https
  loadbalancer_port: 443
  loadbalancer_annotations: |
    environment: testing
  service_labels: |
    environment: testing

When setting up a Load Balancer for HTTPS, you will need to set loadbalancer_port to move the port away from 80.
The HTTPS Load Balancer also uses SSL termination at the Load Balancer level and will offload traffic to AWX over HTTP.
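As an illustration only, on a cluster backed by AWS you might combine these options with provider-specific annotations; the certificate ARN below is a hypothetical placeholder:

---
spec:
  ...
  service_type: LoadBalancer
  loadbalancer_protocol: https
  loadbalancer_port: 443
  loadbalancer_annotations: |
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: <ACM_CERT_ARN>
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http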
- NodePort
The following variables are customizable only when service_type=NodePort
| Name | Description | Default |
|---|---|---|
| nodeport_port | Port used for NodePort | 30080 |
---
spec:
  ...
  service_type: NodePort
  nodeport_port: 30080

By default, the AWX operator is not opinionated and won't force a specific ingress type on you. So, when the ingress_type is not specified, it will default to none and no ingress resources will be created.
The ingress_type supported options are: none, ingress and route. To toggle between these options, you can add the following to your AWX CRD:
- None
---
spec:
  ...
  ingress_type: none

- Generic Ingress Controller
The following variables are customizable when ingress_type=ingress. The ingress type creates a standard Kubernetes Ingress resource, which can be served by any of the many available Ingress Controllers.
| Name | Description | Default |
|---|---|---|
| ingress_annotations | Ingress annotations | Empty string |
| ingress_tls_secret | Secret that contains the TLS information | Empty string |
| hostname | Define the FQDN | {{ meta.name }}.example.com |
| ingress_path | Define the ingress path to the service | / |
---
spec:
  ...
  ingress_type: ingress
  hostname: awx-demo.example.com
  ingress_annotations: |
    environment: testing
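If your certificate lives in a TLS secret, you could also terminate TLS at the ingress via ingress_tls_secret; a sketch, where the secret name awx-demo-tls is a hypothetical placeholder:

---
spec:
  ...
  ingress_type: ingress
  hostname: awx-demo.example.com
  ingress_tls_secret: awx-demo-tls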
- Route

The following variables are customizable when ingress_type=route
| Name | Description | Default |
|---|---|---|
| route_host | Common name the route answers for | <instance-name>-<namespace>-<routerCanonicalHostname> |
| route_tls_termination_mechanism | TLS Termination mechanism (Edge, Passthrough) | Edge |
| route_tls_secret | Secret that contains the TLS information | Empty string |
---
spec:
  ...
  ingress_type: route
  route_host: awx-demo.example.com
  route_tls_termination_mechanism: Passthrough
  route_tls_secret: custom-route-tls-secret-name

In order for the AWX instance to rely on an external database, the Custom Resource needs to know about the connection details. Those connection details should be stored as a secret and either specified as postgres_configuration_secret at the CR spec level, or simply be present on the namespace under the name <resourcename>-postgres-configuration.
The secret should be formatted as follows:
---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-postgres-configuration
  namespace: <target namespace>
stringData:
  host: <external ip or url resolvable by the cluster>
  port: <external port, this usually defaults to 5432>
  database: <desired database name>
  username: <username to connect as>
  password: <password to connect with>
  sslmode: prefer
  type: unmanaged
type: Opaque

It is possible to set a specific username, password, port, or database, but still have the database managed by the operator. In this case, when creating the postgres-configuration secret, the type: managed field should be added.
Note: The variable sslmode is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, verify-full.
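If you prefer the command line, the same secret could be assembled from literals; a sketch, where the host, database, and credential values are hypothetical placeholders:

$ kubectl create secret generic <resourcename>-postgres-configuration \
    --from-literal=host=postgres.example.com \
    --from-literal=port=5432 \
    --from-literal=database=awx \
    --from-literal=username=awx \
    --from-literal=password=mysuperlongpassword \
    --from-literal=sslmode=prefer \
    --from-literal=type=unmanaged \
    -n <target namespace>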
For instructions on how to migrate from an older version of AWX, see migration.md.
If you don't have access to an external PostgreSQL service, the AWX operator can deploy one for you alongside the AWX instance itself.
The following variables are customizable for the managed PostgreSQL service
| Name | Description | Default |
|---|---|---|
| postgres_image | Path of the image to pull | postgres:12 |
| postgres_resource_requirements | PostgreSQL container resource requirements | Empty object |
| postgres_storage_requirements | PostgreSQL container storage requirements | requests: {storage: 8Gi} |
| postgres_storage_class | PostgreSQL PV storage class | Empty string |
| postgres_data_path | PostgreSQL data path | /var/lib/postgresql/data/pgdata |
Example of customization could be:
---
spec:
  ...
  postgres_resource_requirements:
    requests:
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: 1
      memory: 4Gi
  postgres_storage_requirements:
    requests:
      storage: 8Gi
    limits:
      storage: 50Gi
  postgres_storage_class: fast-ssd

Note: If postgres_storage_class is not defined, Postgres will store its data on a volume using the default storage class for your cluster.
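To check which storage class your cluster treats as the default, you can run:

$ kubectl get storageclass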
There are a few variables that are customizable for AWX image management.
| Name | Description |
|---|---|
| image | Path of the image to pull |
| image_version | Image version to pull |
| image_pull_policy | The pull policy to adopt |
| image_pull_secret | The pull secret to use |
| ee_images | A list of EEs to register |
| redis_image | Path of the image to pull |
| redis_image_version | Image version to pull |
Example of customization could be:
---
spec:
  ...
  image: myorg/my-custom-awx
  image_version: latest
  image_pull_policy: Always
  image_pull_secret: pull_secret_name
  ee_images:
    - name: my-custom-awx-ee
      image: myorg/my-custom-awx-ee

Note: The image and image_version are intended for local mirroring scenarios. Please note that using a version of AWX other than the one bundled with the awx-operator is not supported. For the default values, check the main.yml file.
Depending on the type of tasks that you'll be running, you may find that you need the task pod to run as privileged. This can expose you to a variety of security concerns, so be aware of the risks (and verify that you have the required privileges) before doing this. In order to toggle this feature, you can add the following to your custom resource:
---
spec:
  ...
  task_privileged: true

If you are attempting to do this on an OpenShift cluster, you will need to grant the awx ServiceAccount the privileged SCC, which can be done with:

#> oc adm policy add-scc-to-user privileged -z awx

Again, this is the most relaxed SCC that is provided by OpenShift, so be sure to familiarize yourself with the security concerns that accompany this action.
The resource requirements for both the task and the web containers are configurable, at both the lower end (requests) and the upper end (limits).
| Name | Description | Default |
|---|---|---|
| web_resource_requirements | Web container resource requirements | requests: {cpu: 1000m, memory: 2Gi} |
| task_resource_requirements | Task container resource requirements | requests: {cpu: 500m, memory: 1Gi} |
| ee_resource_requirements | EE control plane container resource requirements | requests: {cpu: 500m, memory: 1Gi} |
Example of customization could be:
---
spec:
  ...
  web_resource_requirements:
    requests:
      cpu: 1000m
      memory: 2Gi
    limits:
      cpu: 2000m
      memory: 4Gi
  task_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  ee_resource_requirements:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi

You can constrain the AWX pods created by the operator to run on a certain subset of nodes. node_selector and postgres_selector constrain the AWX pods to run only on the nodes that match all the specified key/value pairs. tolerations and postgres_tolerations allow the AWX pods to be scheduled onto nodes with matching taints.
| Name | Description | Default |
|---|---|---|
| postgres_image | Path of the image to pull | postgres |
| postgres_image_version | Image version to pull | 12 |
| node_selector | AWX pods' nodeSelector | '' |
| tolerations | AWX pods' tolerations | '' |
| postgres_selector | Postgres pods' nodeSelector | '' |
| postgres_tolerations | Postgres pods' tolerations | '' |
Example of customization could be:
---
spec:
  ...
  node_selector: |
    disktype: ssd
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "AWX"
      effect: "NoSchedule"
  postgres_selector: |
    disktype: ssd
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  postgres_tolerations: |
    - key: "dedicated"
      operator: "Equal"
      value: "AWX"
      effect: "NoSchedule"

In cases where you need to trust a custom Certificate Authority, there are a few variables you can customize for the awx-operator.
Trusting a custom Certificate Authority allows AWX to access network services configured with SSL certificates issued locally, such as cloning a project from an internal Git server via HTTPS. In these scenarios, it is common to encounter the error unable to verify the first certificate.
| Name | Description | Default |
|---|---|---|
| ldap_cacert_secret | LDAP Certificate Authority secret name | '' |
| bundle_cacert_secret | Certificate Authority secret name | '' |
Please note the awx-operator will look for the data field ldap-ca.crt in the specified secret when using ldap_cacert_secret, whereas the data field bundle-ca.crt is required for the bundle_cacert_secret parameter.
Example of customization could be:
---
spec:
  ...
  ldap_cacert_secret: <resourcename>-custom-certs
  bundle_cacert_secret: <resourcename>-custom-certs

To create the secret, you can use the command below:

# kubectl create secret generic <resourcename>-custom-certs \
    --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE> \
    --from-file=bundle-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE>

In cases where you want to persist the /var/lib/projects directory, there are a few variables that are customizable for the awx-operator.
| Name | Description | Default |
|---|---|---|
| projects_persistence | Whether or not the /var/lib/projects directory will be persistent | false |
| projects_storage_class | Define the PersistentVolume storage class | '' |
| projects_storage_size | Define the PersistentVolume size | 8Gi |
| projects_storage_access_mode | Define the PersistentVolume access mode | ReadWriteMany |
| projects_existing_claim | Define an existing PersistentVolumeClaim to use (cannot be combined with projects_storage_*) | '' |
Example of customization when the awx-operator automatically handles the persistent volume could be:
---
spec:
  ...
  projects_persistence: true
  projects_storage_class: rook-ceph
  projects_storage_size: 20Gi
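Alternatively, if you already manage a PersistentVolumeClaim yourself, you could point the operator at it with projects_existing_claim; a sketch, where the claim name is a hypothetical placeholder and this option cannot be combined with the projects_storage_* options:

---
spec:
  ...
  projects_persistence: true
  projects_existing_claim: awx-projects-claim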
In some scenarios, custom volumes and volume mounts are required, either to override defaults or to mount configuration files.

| Name | Description | Default |
|---|---|---|
| extra_volumes | Specify extra volumes to add to the application pod | '' |
| web_extra_volume_mounts | Specify volume mounts to be added to Web container | '' |
| task_extra_volume_mounts | Specify volume mounts to be added to Task container | '' |
| ee_extra_volume_mounts | Specify volume mounts to be added to Execution container | '' |
| init_container_extra_volume_mounts | Specify volume mounts to be added to Init container | '' |
| init_container_extra_commands | Specify additional commands for Init container | '' |
⚠️ The ee_extra_volume_mounts and extra_volumes will only take effect for the globally available Execution Environments. For custom EEs, please customize the Pod spec.
Example configuration for ConfigMap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: <resourcename>-extra-config
  namespace: <target namespace>
data:
  ansible.cfg: |
    [defaults]
    remote_tmp = /tmp
    [ssh_connection]
    ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s
  custom.py: |
    INSIGHTS_URL_BASE = "example.org"
    AWX_CLEANUP_PATHS = True

Example spec file for volumes and volume mounts:
---
spec:
  ...
  extra_volumes: |
    - name: ansible-cfg
      configMap:
        defaultMode: 420
        items:
          - key: ansible.cfg
            path: ansible.cfg
        name: <resourcename>-extra-config
    - name: custom-py
      configMap:
        defaultMode: 420
        items:
          - key: custom.py
            path: custom.py
        name: <resourcename>-extra-config
    - name: shared-volume
      persistentVolumeClaim:
        claimName: my-external-volume-claim
  init_container_extra_volume_mounts: |
    - name: shared-volume
      mountPath: /shared
  init_container_extra_commands: |
    # set proper permissions (rwx) for the awx user
    chmod 775 /shared
    chgrp 1000 /shared
  ee_extra_volume_mounts: |
    - name: ansible-cfg
      mountPath: /etc/ansible/ansible.cfg
      subPath: ansible.cfg
  task_extra_volume_mounts: |
    - name: custom-py
      mountPath: /etc/tower/conf.d/custom.py
      subPath: custom.py
    - name: shared-volume
      mountPath: /shared
⚠️ Volume and VolumeMount names cannot contain underscores (_).
In order to register default execution environments from private registries, the Custom Resource needs to know about the pull credentials. Those credentials should be stored as a secret and either specified as ee_pull_credentials_secret at the CR spec level, or simply be present on the namespace under the name <resourcename>-ee-pull-credentials. Instance initialization will register a Container registry type credential on the deployed instance and assign it to the registered default execution environments.
The secret should be formatted as follows:
---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-ee-pull-credentials
  namespace: <target namespace>
stringData:
  url: <registry url, i.e. quay.io>
  username: <username to connect as>
  password: <password to connect with>
  ssl_verify: <optional; whether to verify the SSL connection or not; accepted values "True" (default), "False">
type: Opaque

The images listed in "ee_images" will be added as globally available Execution Environments. The "control_plane_ee_image" will be used to run project updates. In order to use a private image for any of these, you'll need to use image_pull_secret to provide a k8s pull secret to access it. Currently the same secret is used for any of these images supplied at install time.
You can create the image_pull_secret with:

kubectl create secret docker-registry <resourcename>-cp-pull-credentials --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
If you need more control (for example, to set a namespace or a label on the new secret), you can customise the Secret before storing it.
Example pull credentials Secret:
---
apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-cp-pull-credentials
  namespace: <target namespace>
data:
  .dockerconfigjson: <base64 docker config>
type: kubernetes.io/dockerconfigjson

If you need to export custom environment variables to your containers, the following variables are available:
| Name | Description | Default |
|---|---|---|
| task_extra_env | Environment variables to be added to Task container | '' |
| web_extra_env | Environment variables to be added to Web container | '' |
| ee_extra_env | Environment variables to be added to EE container | '' |
⚠️ The ee_extra_env will only take effect for the globally available Execution Environments. For custom EEs, please customize the Pod spec.
Example configuration of environment variables
spec:
  task_extra_env: |
    - name: MYCUSTOMVAR
      value: foo
  web_extra_env: |
    - name: MYCUSTOMVAR
      value: foo
  ee_extra_env: |
    - name: MYCUSTOMVAR
      value: foo

With extra_settings, you can pass multiple custom settings via the awx-operator. The parameter extra_settings will be appended to /etc/tower/settings.py and can be an alternative to the extra_volumes parameter.
| Name | Description | Default |
|---|---|---|
| extra_settings | Extra settings | '' |
Example configuration of extra_settings parameter
spec:
  extra_settings:
    - setting: MAX_PAGE_SIZE
      value: "500"
    - setting: AUTH_LDAP_BIND_DN
      value: "cn=admin,dc=example,dc=com"

If you need to modify some ServiceAccount properties, the following variable is available:
| Name | Description | Default |
|---|---|---|
| service_account_annotations | Annotations to add to the ServiceAccount | '' |
Example configuration of service account annotations:
spec:
  service_account_annotations: |
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>

To uninstall an AWX deployment instance, you basically need to remove the AWX kind related to that instance. For example, to delete an AWX instance named awx-demo, you would do:
$ kubectl delete awx awx-demo
awx.awx.ansible.com "awx-demo" deleted

Deleting an AWX instance will remove all related deployments and StatefulSets; however, persistent volumes and secrets will remain. To have secrets removed as well, you can use garbage_collect_secrets: true.
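For example, a spec that opts in to secret cleanup on deletion would look like this sketch:

---
spec:
  ...
  garbage_collect_secrets: true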
To upgrade AWX, it is recommended to upgrade the awx-operator to the version that maps to the desired version of AWX. To find the version of AWX that will be installed by the awx-operator by default, check the version specified in the image_version variable in roles/installer/defaults/main.yml for that particular release.
Apply the awx-operator.yaml for that release to upgrade the operator, and in turn also upgrade your AWX deployment.
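In practice, the upgrade uses the same apply command as the initial install, pointed at the newer release tag:

$ kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/<TAG>/deploy/awx-operator.yaml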
Cluster-scope to Namespace-scope considerations

Starting with awx-operator 0.14.0, AWX can only be deployed in the namespace that the operator exists in. This is called a namespace-scoped operator. If you are upgrading from an earlier version, you will want to delete your existing awx-operator service account, role and role binding.
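Going by the cluster-scoped resources created by the awx-operator.yaml output shown earlier, that cleanup might look like the following sketch; verify the names and namespaces in your own cluster first:

$ kubectl delete clusterrolebinding awx-operator
$ kubectl delete clusterrole awx-operator
$ kubectl delete serviceaccount awx-operator -n <old operator namespace>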
Please visit our contributing guidelines.
There are a few moving parts to this project:

- The awx-operator container image which powers AWX Operator
- The awx-operator.yaml file, which initially deploys the Operator
- The ClusterServiceVersion (CSV), which is generated as part of the bundle and needed for the olm-catalog
Each of these must be appropriately built in preparation for a new tag:
Update the awx-operator version:
- ansible/group_vars/all
Once the version has been updated, run from the root of the repo:
#> ansible-playbook ansible/chain-operator-files.yml

Generate the olm-catalog bundle:
$ operator-sdk generate bundle --operator-name awx-operator --version <new_tag>

This should be done with operator-sdk v0.19.4.
It is a good idea to use the build script at this point to build the catalog and test out installing it in Operator Hub.
Run the following command inside this directory:
#> operator-sdk build quay.io/<user>/awx-operator:<new-version>

Then push the generated image to the registry:
#> docker push quay.io/<user>/awx-operator:<new-version>

After it is built, test it on a local cluster:
#> minikube start --memory 6g --cpus 4
#> minikube addons enable ingress
#> ansible-playbook ansible/deploy-operator.yml -e operator_image=quay.io/<user>/awx-operator -e operator_version=<new-version> -e pull_policy=Always
#> kubectl create namespace example-awx
#> ansible-playbook ansible/instantiate-awx-deployment.yml -e namespace=example-awx -e image=quay.io/<user>/awx -e service_type=nodeport
#> # Verify that the awx-task and awx-web containers are launched
#> # with the right version of the awx image
#> minikube delete

Generate a list of commits between the versions and add it to the changelog:
#> git log --no-merges --pretty="- %s (%an) - %h " <old_tag>..<new_tag>

If everything works, commit the updated version, then publish a new release using the same version you used in ansible/group_vars/all.
After creating the release, this GitHub Workflow will run and publish the new image to quay.io.
This operator was originally built in 2019 by Jeff Geerling and is now maintained by the Ansible Team.