This repository was archived by the owner on May 6, 2022. It is now read-only.

Define a strategy for certs for brokers #606

Closed
vaikas opened this issue Mar 22, 2017 · 12 comments
Labels: api, lifecycle/frozen (indicates that an issue or PR should not be auto-closed due to staleness)

Comments

@vaikas (Contributor) commented Mar 22, 2017

Some brokers (especially internal or test brokers) do not have proper certs, and we probably do not even install proper root certificates in our containers. This issue tracks defining a procedure for handling this:

  1. install proper root certs
  2. define a process for adding additional ones for organizations
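The two steps above can be sketched in Go. This is a minimal illustration, not the project's actual code: the function name newBrokerClient and the mount path /etc/broker-cas/org-ca.pem are hypothetical. It starts from the container's system roots (step 1) and appends operator-supplied organization CAs (step 2):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
	"net/http"
	"os"
)

// newBrokerClient (hypothetical name) builds an HTTP client that trusts
// the system root CAs plus any additional PEM-encoded CAs supplied by
// the operator.
func newBrokerClient(extraCAPEM []byte) (*http.Client, error) {
	pool, err := x509.SystemCertPool()
	if err != nil {
		// Fall back to an empty pool if system roots are unavailable.
		pool = x509.NewCertPool()
	}
	if len(extraCAPEM) > 0 && !pool.AppendCertsFromPEM(extraCAPEM) {
		return nil, errors.New("no valid CA certificates in extra PEM data")
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		},
	}, nil
}

func main() {
	// Hypothetical mount path for an organization's CA bundle; absent
	// in this sketch, so only the system roots are used.
	extra, _ := os.ReadFile("/etc/broker-cas/org-ca.pem")
	client, err := newBrokerClient(extra)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	_ = client
	fmt.Println("broker client configured")
}
```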
@arschles (Contributor) commented Mar 28, 2017

@vaikas-google, would you consider the following to be a solution?

  1. Install standard CA roots in the controller-manager Dockerfile
  2. Default to standard TLS verification (without InsecureSkipVerify) for all controller-to-broker communication
  3. Add a new brokerSecurity field to the Broker resource, with three possible values: full (the default), skipVerify (which sets InsecureSkipVerify), and none

I haven't closely followed the discussion on what exactly hasn't worked for the various brokers, so please let me know if I'm missing something here.
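A rough Go sketch of how the proposed brokerSecurity field could map onto crypto/tls. The type and constant names are hypothetical, since the proposal above specifies only the three values, and "none" is read here as "no TLS at all":

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// BrokerSecurity is a hypothetical enum for the three proposed values.
type BrokerSecurity string

const (
	SecurityFull       BrokerSecurity = "full"       // standard verification (default)
	SecuritySkipVerify BrokerSecurity = "skipVerify" // TLS, but skip cert verification
	SecurityNone       BrokerSecurity = "none"       // assumed here: plain HTTP, no TLS
)

// tlsConfigFor translates the field into a *tls.Config.
// A nil return means "do not use TLS".
func tlsConfigFor(s BrokerSecurity) *tls.Config {
	switch s {
	case SecuritySkipVerify:
		return &tls.Config{InsecureSkipVerify: true}
	case SecurityNone:
		return nil
	default: // full, or unset
		return &tls.Config{}
	}
}

func main() {
	for _, s := range []BrokerSecurity{SecurityFull, SecuritySkipVerify, SecurityNone} {
		fmt.Printf("%s -> %+v\n", s, tlsConfigFor(s))
	}
}
```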

@arschles (Contributor) commented Apr 3, 2017

Punting this to beta to decide.

@arschles added this to the 0.1.0 milestone Apr 3, 2017
@vaikas (Contributor, Author) commented Apr 5, 2017

Yeah, I think this works for certs issued by standard roots. I'd be curious whether there are organizations running with their own CAs, and in that case, how we would add additional certs for them.

@pmorie (Contributor) commented May 13, 2017

Idea: control insecure-skip-verify like so:

type BrokerSpec struct {
	// other fields omitted

	// Insecure, when true, skips TLS certificate verification for this
	// broker; a nil pointer means unset and is treated as false.
	Insecure *bool
}

@arschles (Contributor)

Moving to 1.0.0, since we can already support brokers with root CAs.

@arschles modified the milestones: 1.0.0, 0.1.0 May 15, 2017
@pmorie (Contributor) commented Jul 24, 2017

I have a strong need for this in beta. Adding it to the agenda for the July 24, 2017 SIG meeting.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and has become stale) Apr 21, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label May 21, 2019
@jberkhahn removed the lifecycle/rotten label May 23, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 21, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 20, 2019
@mszostok (Contributor)

/remove-lifecycle rotten
/lifecycle frozen

@k8s-ci-robot added the lifecycle/frozen label (indicates that an issue or PR should not be auto-closed due to staleness) and removed the lifecycle/rotten label Sep 20, 2019
@mrbobbytables

This project is being archived, closing open issues and PRs.
Please see this PR for more information: kubernetes/community#6632
