Hey,
I'm not quite sure whether this is a bug or a feature, but since upgrading to openshift 0.12.0 my KUBECONFIG environment variable has been ignored. I'm using the openshift Python client through the Ansible community.kubernetes collection.
My playbook looks like this:

```yaml
---
- hosts: localhost
  become: no
  gather_facts: yes
  tasks:
    - community.kubernetes.k8s:
        name: testnamespace
        api_version: v1
        kind: Namespace
        state: present
      environment:
        KUBECONFIG: /path/to/kubeconfig
```
With openshift==0.11.2, which I used previously, this task executed just fine. After upgrading to openshift 0.12.0, however, I receive the following error:
```
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to get client due to HTTPConnectionPool(host='localhost', port=80): Max retries exceeded with url: /version (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fa7823188e0>: Failed to establish a new connection: [Errno 111] Connection refused'))"}
```
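To rule out the `environment` block itself, a quick check like the following confirms the variable is actually set for the task (just a sketch; the path is illustrative):

```yaml
# Sketch: verify that the environment block reaches the task at all.
# The kubeconfig path here is illustrative.
- ansible.builtin.shell: echo "$KUBECONFIG"
  environment:
    KUBECONFIG: /path/to/kubeconfig
  register: kubeconfig_check

- ansible.builtin.debug:
    var: kubeconfig_check.stdout
```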
I know I could use the module's kubeconfig attribute, but as far as I know that attribute always resolves the config file on localhost, whereas my goal is to reference a kubeconfig on a remote host.
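For reference, that workaround would look roughly like this (a sketch; the path is illustrative and, as noted, would be resolved on localhost):

```yaml
# Sketch of the kubeconfig-attribute workaround. As far as I know this
# path is resolved on the controller (localhost), which is what I want
# to avoid here.
- community.kubernetes.k8s:
    kubeconfig: /path/to/kubeconfig
    name: testnamespace
    api_version: v1
    kind: Namespace
    state: present
```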
Is this behaviour expected?
Kind regards
Philipp