
Merge master into master-next #1223


Merged
Changes from all commits (41 commits)
5fef732
Remove temp branding override
rhamilto Dec 11, 2018
e729933
Fix catalog views to 'Show More' filter items
jeff-phillips-18 Feb 4, 2019
88ab69c
Fix wrapping API groups in Resource Dropdown
nicolethoen Jan 29, 2019
156634e
Add events tab to DaemonSets details page
jhadvig Feb 14, 2019
f383428
Bug 1677214 - Email field should be an optional on Create Image Pull …
jhadvig Feb 14, 2019
d7816e7
Monitoring: Fix node detail page's Memory Usage graph
kyoto Feb 15, 2019
9bfcbc6
Merge pull request #1145 from nicolethoen/resource_dropdown_api_groups
openshift-merge-robot Feb 15, 2019
cc12cf6
Fine tuned adjustments to transitions to be quicker and smoother
sg00dwin Feb 18, 2019
e8e9b7e
Remove orphaned images
rhamilto Feb 19, 2019
f367d2d
Hide machine deployments nav
spadgett Feb 20, 2019
b18bc3c
Fix runtime exception on newly created machine config pools
TheRealJon Feb 19, 2019
7c04ec5
Bug 1679272 - Validate console can talk to OAuth token URL
spadgett Feb 20, 2019
be656a7
Merge pull request #1206 from spadgett/validate-token-url
openshift-merge-robot Feb 21, 2019
a81c034
Bug 1679495 - Use prometheus tenancy proxy for all project status met…
spadgett Feb 21, 2019
6b6e015
Bug 1677545 - Correctly handle last chunk on incremental load
spadgett Feb 20, 2019
2dae49a
frontend: Remove documentation links
rebeccaalpert Feb 21, 2019
c18197d
Test in-cluster console for e2e tests
spadgett Feb 18, 2019
427060a
Integration tests: increase resource timing buffer size
spadgett Feb 21, 2019
a9cfead
unit tests for Bugzilla 1677545
alecmerdler Feb 20, 2019
eeeb4eb
Integration tests: temporarily disable OLM etcd scenario
spadgett Feb 22, 2019
afb2707
Merge pull request #1197 from spadgett/test-in-cluster-console
openshift-merge-robot Feb 22, 2019
7a4c6b8
Truncate image names within table. Which tightens up the container ta…
sg00dwin Feb 22, 2019
82fc09d
auth: test issuer endpoint instead of token endpoint
spadgett Feb 22, 2019
09df3ba
improve workflow for installing single-namespace Operators from Marke…
alecmerdler Feb 6, 2019
721a623
Merge pull request #1172 from alecmerdler/ALM-894
openshift-merge-robot Feb 22, 2019
3910e5c
Merge pull request #1204 from spadgett/incremental-load
openshift-merge-robot Feb 24, 2019
1d84945
Merge pull request #1194 from sg00dwin/transition-adjustments
openshift-merge-robot Feb 24, 2019
c192688
Merge pull request #1201 from spadgett/hide-machine-deployments
openshift-merge-robot Feb 24, 2019
5d815c7
Merge pull request #1208 from spadgett/normal-user-project-status-met…
openshift-merge-robot Feb 24, 2019
b2a2e13
Merge pull request #933 from rhamilto/branding-rollback
openshift-merge-robot Feb 24, 2019
47c91bd
Merge pull request #1185 from jhadvig/daemon-events
openshift-merge-robot Feb 25, 2019
c47c76f
Monitoring: The "DeadMansSwitch" alert has been renamed to "Watchdog"
kyoto Feb 15, 2019
30beff3
Merge pull request #1190 from kyoto/fix-node-details-memeory-graph
openshift-merge-robot Feb 25, 2019
38ff708
Merge pull request #1203 from TheRealJon/bug-1678556
openshift-merge-robot Feb 25, 2019
af377b4
Merge pull request #1214 from sg00dwin/container-image-truncate
openshift-merge-robot Feb 25, 2019
36d7139
Merge pull request #1211 from rebeccaalpert/documentation-updates
openshift-merge-robot Feb 25, 2019
46efbb4
Merge pull request #1216 from spadgett/test-issuer
openshift-merge-robot Feb 25, 2019
c527bb5
Merge pull request #1189 from kyoto/monitoring-watchdog-alert-rename
openshift-merge-robot Feb 25, 2019
950867b
Merge pull request #1199 from rhamilto/remove-orphaned-images
openshift-merge-robot Feb 25, 2019
e30743f
Merge pull request #1165 from jeff-phillips-18/fixes
openshift-merge-robot Feb 25, 2019
f5ece42
Merge pull request #1186 from jhadvig/secret-email
openshift-merge-robot Feb 25, 2019
30 changes: 17 additions & 13 deletions auth/auth.go
@@ -93,9 +93,9 @@ type Config struct {
 	ClientSecret string
 	Scope []string
 
-	// DiscoveryCA is required for OpenShift OAuth metadata discovery. This is the CA
+	// K8sCA is required for OpenShift OAuth metadata discovery. This is the CA
 	// used to talk to the master, which might be different than the issuer CA.
-	DiscoveryCA string
+	K8sCA string
 
 	SuccessURL string
 	ErrorURL string
@@ -140,33 +140,37 @@ func newHTTPClient(issuerCA string, includeSystemRoots bool) (*http.Client, erro
 // NewAuthenticator initializes an Authenticator struct. It blocks until the authenticator is
 // able to contact the provider.
 func NewAuthenticator(ctx context.Context, c *Config) (*Authenticator, error) {
-	a, err := newUnstartedAuthenticator(c)
-	if err != nil {
-		return nil, err
-	}
-
 	// Retry connecting to the identity provider a few times
 	backoff := time.Second * 2
-	maxSteps := 5
+	maxSteps := 7
 	steps := 0
 
 	for {
 		var (
+			a        *Authenticator
 			lm       loginMethod
 			endpoint oauth2.Endpoint
 			err      error
 		)
 
+		a, err = newUnstartedAuthenticator(c)
+		if err != nil {
+			return nil, err
+		}
+
 		switch c.AuthSource {
 		case AuthSourceOpenShift:
 			// Use the k8s CA for OAuth metadata discovery.
-			var client *http.Client
-			client, err = newHTTPClient(c.DiscoveryCA, false)
+			var k8sClient *http.Client
+			// Don't include system roots when talking to the API server.
+			k8sClient, err = newHTTPClient(c.K8sCA, false)
 			if err != nil {
 				return nil, err
 			}
 
 			endpoint, lm, err = newOpenShiftAuth(ctx, &openShiftConfig{
-				client:        client,
+				k8sClient:     k8sClient,
+				oauthClient:   a.client,
 				issuerURL:     c.IssuerURL,
 				cookiePath:    c.CookiePath,
 				secureCookies: c.SecureCookies,
@@ -183,11 +187,11 @@ func NewAuthenticator(ctx context.Context, c *Config) (*Authenticator, error) {
 		if err != nil {
 			steps++
 			if steps > maxSteps {
-				log.Errorf("error contacting openid connect provider: %v", err)
+				log.Errorf("error contacting auth provider: %v", err)
 				return nil, err
 			}
 
-			log.Errorf("error contacting openid connect provider (retrying in %s): %v", backoff, err)
+			log.Errorf("error contacting auth provider (retrying in %s): %v", backoff, err)
 
 			time.Sleep(backoff)
 			backoff *= 2
18 changes: 16 additions & 2 deletions auth/auth_openshift.go
@@ -23,7 +23,8 @@ type openShiftAuth struct {
 }
 
 type openShiftConfig struct {
-	client        *http.Client
+	k8sClient     *http.Client
+	oauthClient   *http.Client
 	issuerURL     string
 	cookiePath    string
 	secureCookies bool
@@ -52,7 +53,7 @@ func newOpenShiftAuth(ctx context.Context, c *openShiftConfig) (oauth2.Endpoint,
 		return oauth2.Endpoint{}, nil, err
 	}
 
-	resp, err := c.client.Do(req.WithContext(ctx))
+	resp, err := c.k8sClient.Do(req.WithContext(ctx))
 	if err != nil {
 		return oauth2.Endpoint{}, nil, err
 	}
@@ -86,6 +87,19 @@ func newOpenShiftAuth(ctx context.Context, c *openShiftConfig) (oauth2.Endpoint,
 		return oauth2.Endpoint{}, nil, err
 	}
 
+	// Make sure we can talk to the issuer endpoint.
+	req, err = http.NewRequest(http.MethodHead, metadata.Issuer, nil)
+	if err != nil {
+		return oauth2.Endpoint{}, nil, err
+	}
+
+	resp, err = c.oauthClient.Do(req.WithContext(ctx))
+	if err != nil {
+		return oauth2.Endpoint{}, nil, fmt.Errorf("request to OAuth issuer endpoint %s failed: %v",
+			metadata.Token, err)
+	}
+	defer resp.Body.Close()
+
 	kubeAdminLogoutURL := proxy.SingleJoiningSlash(metadata.Issuer, "/logout")
 	return oauth2.Endpoint{
 		AuthURL: metadata.Auth,
8 changes: 2 additions & 6 deletions cmd/bridge/main.go
@@ -143,10 +143,6 @@ func main() {
 	if branding == "origin" {
 		branding = "okd"
 	}
-	// Temporarily default okd to openshift
-	if branding == "okd" {
-		branding = "openshift"
-	}
 	switch branding {
 	case "okd":
 	case "openshift":
@@ -366,7 +362,7 @@ func main() {
 
 		// Use the k8s CA file for OpenShift OAuth metadata discovery.
 		// This might be different than IssuerCA.
-		DiscoveryCA: caCertFilePath,
+		K8sCA: caCertFilePath,
 
 		ErrorURL: authLoginErrorEndpoint,
 		SuccessURL: authLoginSuccessEndpoint,
@@ -394,7 +390,7 @@ func main() {
 		}
 
 		if srv.Auther, err = auth.NewAuthenticator(context.Background(), oidcClientConfig); err != nil {
-			log.Fatalf("Error initializing OIDC authenticator: %v", err)
+			log.Fatalf("Error initializing authenticator: %v", err)
 		}
 	case "disabled":
 		log.Warningf("running with AUTHENTICATION DISABLED!")
3 changes: 3 additions & 0 deletions frontend/__mocks__/k8sResourcesMocks.ts
@@ -58,6 +58,7 @@ export const testClusterServiceVersion: ClusterServiceVersionKind = {
       'alm-owner-testapp': 'testapp.clusterserviceversions.operators.coreos.com.v1alpha1',
     },
   },
+  installModes: [],
   install: {
     strategy: 'Deployment',
     spec: {
@@ -129,6 +130,7 @@ export const localClusterServiceVersion: ClusterServiceVersionKind = {
       'alm-owner-local-testapp': 'local-testapp.clusterserviceversions.operators.coreos.com.v1alpha1',
     },
   },
+  installModes: [],
   install: {
     strategy: 'Deployment',
     spec: {
@@ -268,6 +270,7 @@ export const testPackageManifest: PackageManifestKind = {
       provider: {
         name: 'CoreOS, Inc',
       },
+      installModes: [],
     },
   }],
   defaultChannel: 'alpha',
6 changes: 6 additions & 0 deletions frontend/__mocks__/operatorHubItemsMocks.ts
@@ -40,6 +40,7 @@ const amqPackageManifest = {
provider: {
name: 'Red Hat',
},
installModes: [],
annotations: {
'alm-examples': '[{"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"Kafka","metadata":{"name":"my-cluster"},"spec":{"kafka":{"replicas":3,"listeners":{"plain":{},"tls":{}},"config":{"offsets.topic.replication.factor":3,"transaction.state.log.replication.factor":3,"transaction.state.log.min.isr":2},"storage":{"type":"ephemeral"}},"zookeeper":{"replicas":3,"storage":{"type":"ephemeral"}},"entityOperator":{"topicOperator":{},"userOperator":{}}}}, {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnect","metadata":{"name":"my-connect-cluster"},"spec":{"replicas":1,"bootstrapServers":"my-cluster-kafka-bootstrap:9093","tls":{"trustedCertificates":[{"secretName":"my-cluster-cluster-ca-cert","certificate":"ca.crt"}]}}}, {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaConnectS2I","metadata":{"name":"my-connect-cluster"},"spec":{"replicas":1,"bootstrapServers":"my-cluster-kafka-bootstrap:9093","tls":{"trustedCertificates":[{"secretName":"my-cluster-cluster-ca-cert","certificate":"ca.crt"}]}}}, {"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaTopic","metadata":{"name":"my-topic","labels":{"strimzi.io/cluster":"my-cluster"}},"spec":{"partitions":10,"replicas":3,"config":{"retention.ms":604800000,"segment.bytes":1073741824}}}, 
{"apiVersion":"kafka.strimzi.io/v1alpha1","kind":"KafkaUser","metadata":{"name":"my-user","labels":{"strimzi.io/cluster":"my-cluster"}},"spec":{"authentication":{"type":"tls"},"authorization":{"type":"simple","acls":[{"resource":{"type":"topic","name":"my-topic","patternType":"literal"},"operation":"Read","host":"*"},{"resource":{"type":"topic","name":"my-topic","patternType":"literal"},"operation":"Describe","host":"*"},{"resource":{"type":"group","name":"my-group","patternType":"literal"},"operation":"Read","host":"*"},{"resource":{"type":"topic","name":"my-topic","patternType":"literal"},"operation":"Write","host":"*"},{"resource":{"type":"topic","name":"my-topic","patternType":"literal"},"operation":"Create","host":"*"},{"resource":{"type":"topic","name":"my-topic","patternType":"literal"},"operation":"Describe","host":"*"}]}}}]',
description: '**Red Hat AMQ Streams** is a massively scalable, distributed, and high performance data streaming platform based on the Apache Kafka project. \nAMQ Streams provides an event streaming backbone that allows microservices and other application components to exchange data with extremely high throughput and low latency.\n\n**The core capabilities include**\n* A pub/sub messaging model, similar to a traditional enterprise messaging system, in which application components publish and consume events to/from an ordered stream\n* The long term, fault-tolerant storage of events\n* The ability for a consumer to replay streams of events\n* The ability to partition topics for horizontal scalability\n\n# Before you start\n\n1. Create AMQ Streams Cluster Roles\n```\n$ oc apply -f http://amq.io/amqstreams/rbac.yaml\n```\n2. Create following bindings\n```\n$ oc adm policy add-cluster-role-to-user strimzi-cluster-operator -z strimzi-cluster-operator --namespace <namespace>\n$ oc adm policy add-cluster-role-to-user strimzi-kafka-broker -z strimzi-cluster-operator --namespace <namespace>\n```',
@@ -89,6 +90,7 @@ const etcdPackageManifest = {
provider: {
name: 'CoreOS, Inc',
},
installModes: [],
annotations: {
'alm-examples': '[{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdCluster","metadata":{"name":"example","namespace":"default"},"spec":{"size":3,"version":"3.2.13"}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdRestore","metadata":{"name":"example-etcd-cluster"},"spec":{"etcdCluster":{"name":"example-etcd-cluster"},"backupStorageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}},{"apiVersion":"etcd.database.coreos.com/v1beta2","kind":"EtcdBackup","metadata":{"name":"example-etcd-cluster-backup"},"spec":{"etcdEndpoints":["<etcd-cluster-endpoints>"],"storageType":"S3","s3":{"path":"<full-s3-path>","awsSecret":"<aws-secret>"}}}]',
'tectonic-visibility': 'ocs',
@@ -136,6 +138,7 @@ const federationv2PackageManifest = {
provider: {
name: 'Red Hat',
},
installModes: [],
annotations: {
description: 'Kubernetes Federation V2 namespace-scoped installation',
categories: '',
@@ -184,6 +187,7 @@ const prometheusPackageManifest = {
provider: {
name: 'Red Hat',
},
installModes: [],
annotations: {
'alm-examples': '[{"apiVersion":"monitoring.coreos.com/v1","kind":"Prometheus","metadata":{"name":"example","labels":{"prometheus":"k8s"}},"spec":{"replicas":2,"version":"v2.3.2","serviceAccountName":"prometheus-k8s","securityContext": {}, "serviceMonitorSelector":{"matchExpressions":[{"key":"k8s-app","operator":"Exists"}]},"ruleSelector":{"matchLabels":{"role":"prometheus-rulefiles","prometheus":"k8s"}},"alerting":{"alertmanagers":[{"namespace":"monitoring","name":"alertmanager-main","port":"web"}]}}},{"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"name":"example","labels":{"k8s-app":"prometheus"}},"spec":{"selector":{"matchLabels":{"k8s-app":"prometheus"}},"endpoints":[{"port":"web","interval":"30s"}]}},{"apiVersion":"monitoring.coreos.com/v1","kind":"Alertmanager","metadata":{"name":"alertmanager-main"},"spec":{"replicas":3, "securityContext": {}}}]',
description: 'The Prometheus Operator for Kubernetes provides easy monitoring definitions for Kubernetes services and deployment and management of Prometheus instances.',
@@ -230,6 +234,7 @@ const svcatPackageManifest = {
provider: {
name: 'Red Hat',
},
installModes: [],
annotations: {
description: 'Service Catalog lets you provision cloud services directly from the comfort of native Kubernetes tooling.',
categories: 'catalog',
@@ -276,6 +281,7 @@ const dummyPackageManifest = {
provider: {
name: 'Dummy',
},
installModes: [],
annotations: {
description: 'Dummy is not a real operator',
categories: 'dummy',
@@ -37,6 +37,7 @@ describe(SubscriptionChannelModal.name, () => {
       provider: {
         name: 'CoreOS, Inc',
       },
+      installModes: [],
     },
   }, {
     name: 'nightly',
@@ -48,6 +49,7 @@ describe(SubscriptionChannelModal.name, () => {
       provider: {
         name: 'CoreOS, Inc',
       },
+      installModes: [],
     },
   }];
 
@@ -1,10 +1,12 @@
 /* eslint-disable no-undef, no-unused-vars */
 
 import * as React from 'react';
+import * as _ from 'lodash-es';
 import { shallow } from 'enzyme';
 
-import { requireOperatorGroup, NoOperatorGroupMsg } from '../../../public/components/operator-lifecycle-manager/operator-group';
-import { testOperatorGroup } from '../../../__mocks__/k8sResourcesMocks';
+import { requireOperatorGroup, NoOperatorGroupMsg, supports, InstallModeSet, InstallModeType, installedFor } from '../../../public/components/operator-lifecycle-manager/operator-group';
+import { OperatorGroupKind, SubscriptionKind } from '../../../public/components/operator-lifecycle-manager';
+import { testOperatorGroup, testSubscription } from '../../../__mocks__/k8sResourcesMocks';
 
 describe('requireOperatorGroup', () => {
   const SomeComponent = () => <div>Requires OperatorGroup</div>;
@@ -33,3 +35,111 @@ describe('requireOperatorGroup', () => {
expect(wrapper.find(NoOperatorGroupMsg).exists()).toBe(false);
});
});

describe('installedFor', () => {
const pkgName = testSubscription.spec.name;
const ns = testSubscription.metadata.namespace;
let subscriptions: SubscriptionKind[];
let operatorGroups: OperatorGroupKind[];

beforeEach(() => {
subscriptions = [];
operatorGroups = [];
});

it('returns false if no `Subscriptions` exist for the given package', () => {
subscriptions = [testSubscription];
operatorGroups = [{...testOperatorGroup, status: {namespaces: [ns], lastUpdated: null}}];

expect(installedFor(subscriptions)(operatorGroups)('new-operator')(ns)).toBe(false);
});

it('returns false if no `OperatorGroups` target the given namespace', () => {
subscriptions = [testSubscription];
operatorGroups = [{...testOperatorGroup, status: {namespaces: ['prod-a', 'prod-b'], lastUpdated: null}}];

expect(installedFor(subscriptions)(operatorGroups)(pkgName)(ns)).toBe(false);
});

it('returns false if checking for `all-namespaces`', () => {
subscriptions = [testSubscription];
operatorGroups = [{...testOperatorGroup, status: {namespaces: [ns], lastUpdated: null}}];

expect(installedFor(subscriptions)(operatorGroups)(pkgName)('')).toBe(false);
});

it('returns true if `Subscription` exists in the "global" `OperatorGroup`', () => {
subscriptions = [testSubscription];
operatorGroups = [{...testOperatorGroup, status: {namespaces: [''], lastUpdated: null}}];

expect(installedFor(subscriptions)(operatorGroups)(pkgName)(ns)).toBe(true);
});

it('returns true if `Subscription` exists in an `OperatorGroup` that targets given namespace', () => {
subscriptions = [testSubscription];
operatorGroups = [{...testOperatorGroup, status: {namespaces: [ns], lastUpdated: null}}];

expect(installedFor(subscriptions)(operatorGroups)(pkgName)(ns)).toBe(true);
});
});

describe('supports', () => {
let set: InstallModeSet;
let ownNamespaceGroup: OperatorGroupKind;
let singleNamespaceGroup: OperatorGroupKind;
let multiNamespaceGroup: OperatorGroupKind;
let allNamespacesGroup: OperatorGroupKind;

beforeEach(() => {
ownNamespaceGroup = _.cloneDeep(testOperatorGroup);
ownNamespaceGroup.status = {namespaces: [ownNamespaceGroup.metadata.namespace], lastUpdated: null};
singleNamespaceGroup = _.cloneDeep(testOperatorGroup);
singleNamespaceGroup.status = {namespaces: ['test-ns'], lastUpdated: null};
multiNamespaceGroup = _.cloneDeep(testOperatorGroup);
multiNamespaceGroup.status = {namespaces: ['test-ns', 'default'], lastUpdated: null};
allNamespacesGroup = _.cloneDeep(testOperatorGroup);
allNamespacesGroup.status = {namespaces: [''], lastUpdated: null};
});

it('correctly returns for an Operator that can only run in its own namespace', () => {
set = [
{type: InstallModeType.InstallModeTypeOwnNamespace, supported: true},
{type: InstallModeType.InstallModeTypeSingleNamespace, supported: true},
{type: InstallModeType.InstallModeTypeMultiNamespace, supported: false},
{type: InstallModeType.InstallModeTypeAllNamespaces, supported: false},
];

expect(supports(set)(ownNamespaceGroup)).toBe(true);
expect(supports(set)(singleNamespaceGroup)).toBe(true);
expect(supports(set)(multiNamespaceGroup)).toBe(false);
expect(supports(set)(allNamespacesGroup)).toBe(false);
});

it('correctly returns for an Operator which can run in several namespaces', () => {
set = [
{type: InstallModeType.InstallModeTypeOwnNamespace, supported: true},
{type: InstallModeType.InstallModeTypeSingleNamespace, supported: true},
{type: InstallModeType.InstallModeTypeMultiNamespace, supported: true},
{type: InstallModeType.InstallModeTypeAllNamespaces, supported: false},
];

expect(supports(set)(ownNamespaceGroup)).toBe(true);
expect(supports(set)(singleNamespaceGroup)).toBe(true);
expect(supports(set)(multiNamespaceGroup)).toBe(true);
expect(supports(set)(allNamespacesGroup)).toBe(false);
});

it('correctly returns for an Operator which can only run in all namespaces', () => {
set = [
{type: InstallModeType.InstallModeTypeOwnNamespace, supported: true},
{type: InstallModeType.InstallModeTypeSingleNamespace, supported: false},
{type: InstallModeType.InstallModeTypeMultiNamespace, supported: false},
{type: InstallModeType.InstallModeTypeAllNamespaces, supported: true},
];

expect(supports(set)(ownNamespaceGroup)).toBe(false);
expect(supports(set)(singleNamespaceGroup)).toBe(false);
expect(supports(set)(multiNamespaceGroup)).toBe(false);
expect(supports(set)(allNamespacesGroup)).toBe(true);
});
});
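The `supports` helper exercised by these tests decides whether an `OperatorGroup`'s target namespaces are compatible with an Operator's declared install modes, with an empty-string namespace meaning "all namespaces". A rough Go translation of that rule — the names and the exact branch conditions are illustrative, not the console's actual TypeScript implementation:

```go
package main

import "fmt"

// InstallModeType mirrors the OLM install mode names.
type InstallModeType string

const (
	OwnNamespace    InstallModeType = "OwnNamespace"
	SingleNamespace InstallModeType = "SingleNamespace"
	MultiNamespace  InstallModeType = "MultiNamespace"
	AllNamespaces   InstallModeType = "AllNamespaces"
)

type InstallMode struct {
	Type      InstallModeType
	Supported bool
}

// supports reports whether an OperatorGroup living in groupNamespace and
// targeting the given namespaces is compatible with the Operator's install
// modes. A single empty-string target means "all namespaces".
func supports(set []InstallMode, groupNamespace string, targets []string) bool {
	supported := map[InstallModeType]bool{}
	for _, m := range set {
		supported[m.Type] = m.Supported
	}
	switch {
	case len(targets) == 1 && targets[0] == "":
		return supported[AllNamespaces]
	case len(targets) == 1 && targets[0] == groupNamespace:
		return supported[OwnNamespace] || supported[SingleNamespace]
	case len(targets) == 1:
		return supported[SingleNamespace]
	default:
		return supported[MultiNamespace]
	}
}

func main() {
	// Operator that only supports running in its own or a single namespace,
	// matching the first test case above.
	set := []InstallMode{
		{OwnNamespace, true}, {SingleNamespace, true},
		{MultiNamespace, false}, {AllNamespaces, false},
	}
	fmt.Println(supports(set, "test-ns", []string{"test-ns"}))            // own namespace: true
	fmt.Println(supports(set, "test-ns", []string{"test-ns", "default"})) // multi namespace: false
	fmt.Println(supports(set, "test-ns", []string{""}))                   // all namespaces: false
}
```

This is the check behind the "improve workflow for installing single-namespace Operators from Marketplace" commit: OperatorHub can filter out groups whose targets an Operator cannot serve before offering an install.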