ci-operator/templates/openshift: Drop KUBE_SSH_* #3582
Conversation
These are from b7cc916 (Set KUBE_SSH_USER for new installer for AWS tests, openshift#2274) and 43dde9e (Set KUBE_SSH_BASTION and KUBE_SSH_KEY_PATH in installer tests, 2018-12-23, openshift#2469). But going forward, reliable direct SSH access to nodes will be hard, with changes like openshift/installer@6add0ab447 (Remove public IPs from masters, 2019-01-10, openshift/installer#1045) making an SSH bastion a requirement for that sort of thing (at least on AWS). Ideally, e2e tests can be ported to use privileged pods within the cluster to check what they need to check. But however that works out, we should stop carrying local dead code that is not affecting test results. We can always drag it back out of version control later if it turns out we actually want to go down the `KUBE_SSH_*` route.

CC @eparis, @smarterclayton, @vrutkovs
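For context, here is a minimal sketch of the kind of template entries this PR removes. Only the variable names come from the PR; the values shown are assumptions (typical for an AWS RHCOS cluster), not copied from the actual templates:

```yaml
# Hypothetical sketch of the env entries dropped from the ci-operator
# templates. Variable names are from this PR; values are assumed.
env:
- name: KUBE_SSH_USER
  value: core                                       # assumed: default RHCOS SSH user
- name: KUBE_SSH_BASTION
  value: "api.${CLUSTER_NAME}.${BASE_DOMAIN}:22"    # assumed bastion endpoint
- name: KUBE_SSH_KEY_PATH
  value: /etc/openshift-installer/ssh-privatekey    # assumed secret mount path
```

And a rough illustration of the privileged-pod alternative mentioned above: instead of SSHing to a node, an e2e test can schedule a privileged pod onto the node and inspect the host through a hostPath mount. Everything here (pod name, node name, image, command) is a placeholder for illustration, not part of this PR:

```yaml
# Sketch only: a privileged pod that reads a node's kubelet logs without SSH.
# Names and image are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: node-inspect
spec:
  nodeName: ip-10-0-0-1.ec2.internal   # placeholder: the node under test
  hostPID: true
  restartPolicy: Never
  containers:
  - name: inspect
    image: registry.access.redhat.com/ubi8/ubi
    securityContext:
      privileged: true                 # needed to read host state
    volumeMounts:
    - name: host
      mountPath: /host
    command: ["chroot", "/host", "journalctl", "-u", "kubelet", "--no-pager"]
  volumes:
  - name: host
    hostPath:
      path: /                          # mount the node's root filesystem
```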
Force-pushed from b24732e to a4948da (compare)
/retest
/approve
[APPROVALNOTIFIER] This PR is APPROVED.

This pull-request has been approved by: vrutkovs, wking.

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
@wking: PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@wking: The following tests failed; say `/retest` to rerun all failed tests.

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/close
@sdodson: Closed this PR. In response to this:

> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.