Building and pushing to kind when offline #295


Open

slinkydeveloper opened this issue Jan 15, 2021 · 10 comments

@slinkydeveloper

slinkydeveloper commented Jan 15, 2021

Today I had a network outage and found out that ko doesn't work offline when pushing to kind, even if the base image is already on the machine:

2021/01/15 11:17:43 Using base docker.io/debian for knative.dev/control-data-plane-communication/cmd/webhook
2021/01/15 11:17:43 Using base docker.io/debian for knative.dev/control-data-plane-communication/cmd/receive_adapter
2021/01/15 11:17:43 Using base docker.io/debian for knative.dev/control-data-plane-communication/cmd/controller
namespace/knative-samples unchanged
serviceaccount/control-data-plane-communication-controller unchanged
serviceaccount/control-data-plane-communication-webhook unchanged
clusterrole.rbac.authorization.k8s.io/control-data-plane-communication-controller unchanged
clusterrole.rbac.authorization.k8s.io/control-data-plane-communication-observer unchanged
clusterrolebinding.rbac.authorization.k8s.io/control-data-plane-communication-controller-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/control-data-plane-communication-webhook-rolebinding unchanged
clusterrolebinding.rbac.authorization.k8s.io/control-data-plane-communication-controller-addressable-resolver unchanged
clusterrole.rbac.authorization.k8s.io/control-data-plane-communication-webhook unchanged
customresourcedefinition.apiextensions.k8s.io/samplesources.samples.knative.dev unchanged
service/control-data-plane-communication-controller-manager unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/defaulting.webhook.knative-samples.knative.dev unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.knative-samples.knative.dev unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.knative-samples.knative.dev unchanged
secret/webhook-certs unchanged
configmap/config-logging unchanged
configmap/config-observability unchanged
2021/01/15 11:17:43 error processing import paths in "config/500-webhook.yaml": error resolving image references: Get "https://index.docker.io/v2/": dial tcp: lookup index.docker.io: no such host

After a discussion with @imjasonh, we found out that ko needs to check Docker Hub to see whether I have the latest debian image on my machine. I wonder: could we just print a warning without failing and perform a "best effort" build in that case?

@jonjohnsonjr
Collaborator

jonjohnsonjr commented Jan 15, 2021

We've found out that ko needs to check on docker hub if I have on my machine the last debian image.

I think that's not quite right. ko doesn't do anything clever for caching base images. Currently, it will always hit the registry to get that base image. When pushing to a registry, we can be smart about de-duplicating blobs that already exist in the target registry. With the (current) kind implementation, we do a ctr images import, which requires us to serialize the image to a tarball, so even if kind has the base image available, it doesn't matter.
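
To make that concrete, here's a minimal Go sketch of what publishing to kind conceptually involves with go-containerregistry. This is not ko's actual code; the base reference and file name are illustrative:

```go
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// Illustrative base reference; ko configures this via defaultBaseImage.
	ref, err := name.ParseReference("docker.io/library/debian:latest")
	if err != nil {
		log.Fatal(err)
	}

	// This is where offline builds fall over today: the base image is
	// always resolved against the remote registry.
	img, err := remote.Image(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain))
	if err != nil {
		log.Fatal(err)
	}

	// Serializing to a tarball pulls every layer, including base layers the
	// kind node may already have; nothing can be deduplicated against the node.
	if err := tarball.WriteToFile("image.tar", ref, img); err != nil {
		log.Fatal(err)
	}
}
```

The resulting archive is what gets handed to the node (e.g. via ctr images import), which is why the whole image has to exist locally first.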

With docker, I did some prototyping that allows you to probe the daemon for what layers exist already, but I'm not sure how that would work with ctr: google/go-containerregistry#559

Using https://kind.sigs.k8s.io/docs/user/local-registry/ would help here, a bit, especially if you configure your base image to pull from a local registry. That would be the easiest thing, I think. I haven't kept up with progress here, but @BenTheElder could probably give better suggestions.

I wonder, can we just print the warning without failing and trying to perform a "best effort" build in that case?

I'm not sure what a best-effort build would be here. In theory we could look into alternative sources for base images (i.e. not a registry), but that would involve moving a lot of code around.

It would be easy to add support for a scratch image base, but that probably won't help you if you need debian.

We could also do optional base image caching? Even then, if your base image is referenced by tag, you'll end up with a stale cache. For a digest reference, we could do pretty well... but with tags, we'd have to do some cache invalidation. Your best effort suggestion makes sense in this case, where if we can't hit the source registry, we could just fall back to whatever we have in the cache and log a giant warning about being unable to reach the registry.
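
As a rough illustration of that tag-vs-digest distinction, a cache entry could be validated with a single HEAD request. The helper below is hypothetical (not part of ko) and assumes the cached digest is stored alongside the cache:

```go
package basecache // hypothetical package, for illustration only

import (
	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// cacheIsFresh reports whether the cached digest still matches what the
// registry serves for ref. Digest references never go stale; tags need a
// HEAD request to detect mutation.
func cacheIsFresh(ref name.Reference, cachedDigest v1.Hash) (bool, error) {
	if _, ok := ref.(name.Digest); ok {
		// A digest reference is immutable, so the cache can't be stale.
		return true, nil
	}
	desc, err := remote.Head(ref, remote.WithAuthFromKeychain(authn.DefaultKeychain))
	if err != nil {
		// Registry unreachable: the caller decides whether to fail or to
		// fall back to the cache with a loud warning.
		return false, err
	}
	return desc.Digest == cachedDigest, nil
}
```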

@BenTheElder

ko doesn't do anything clever for caching base images. Currently, it will always hit the registry to get that base image.

Hmm, when I develop offline with docker build I can depend on the base image existing in Docker's local storage unless I opt in to --pull. For tags that are expected not to be mutated (e.g. from k8s.gcr.io, where we don't do that) or for image digests, this is reasonable.

We could also do optional base image caching?

I think people need this to work offline effectively, along with a way to just populate the cache.

Using https://kind.sigs.k8s.io/docs/user/local-registry/ would help here, a bit, especially if you configure your base image to pull from a local registry. That would be the easiest thing, I think. I haven't kept up with progress here, but @BenTheElder could probably give better suggestions.

This is still the current approach to kind + local registry, but in the future we hope to update the guide with a built-in instead. Some tricky bugs and better multi-platform support are taking priority ATM.

@jonjohnsonjr
Collaborator

in the future we hope to update the guide with a built-in instead

🎉

better multi-platform support

ko is almost certainly too naive to help here (I assume whatever needs multi-platform-ing requires delicate packaging), but possibly worth looking at https://github.com/google/ko#multi-platform-images

@slinkydeveloper
Author

Your best effort suggestion makes sense in this case, where if we can't hit the source registry, we could just fall back to whatever we have in the cache and log a giant warning about being unable to reach the registry.

👍 that's exactly what I mean.

I think people need this to work offline effectively, along with a way to just populate the cache.

For me "populating the cache" should be transparent, like when you run the first ko build and you're online, then your cache get populated with the base image. Adding commands like docker pull sounds too much overhead for the user IMO.

@github-actions

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

@slinkydeveloper
Author

/remove-lifecycle stale

@github-actions

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Keep fresh with the 'lifecycle/frozen' label.

@jdolitsky

Just hit this. Assumed I could run against a localhost registry offline.

Would the team accept an --offline flag? Or some sort of timeout that falls back to the cache if the registry can't be reached?

@imjasonh
Member

Now that we have KOCACHE I think we could keep base image data in there too, in OCI layout form, and use it as a fallback if remote.Head fails permanently.

I'd rather not add a flag for this, and instead just always fall back to doing the right thing, with a big warning message that we're pulling base image info from the offline cache instead of the registry.
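
A rough sketch of that flow, assuming the base image was previously written into an OCI layout under KOCACHE; the function, package, and warning text below are hypothetical, not ko's implementation:

```go
package basecache // hypothetical package, for illustration only

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/authn"
	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/layout"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// resolveBase prefers the registry, and only when remote.Head fails does it
// fall back to a base image previously saved in an OCI layout at cacheDir.
func resolveBase(ref name.Reference, cacheDir string) (v1.Image, error) {
	opt := remote.WithAuthFromKeychain(authn.DefaultKeychain)

	if _, err := remote.Head(ref, opt); err != nil {
		log.Printf("WARNING: cannot reach registry for %s (%v); using cached base image from %s", ref, err, cacheDir)

		// Offline path: read the cached base image out of the OCI layout.
		idx, lerr := layout.ImageIndexFromPath(cacheDir)
		if lerr != nil {
			return nil, lerr
		}
		mf, lerr := idx.IndexManifest()
		if lerr != nil {
			return nil, lerr
		}
		if len(mf.Manifests) == 0 {
			return nil, fmt.Errorf("no cached base image in %s", cacheDir)
		}
		// In this sketch the layout holds exactly one cached base image.
		return idx.Image(mf.Manifests[0].Digest)
	}

	// Online path: behave exactly as ko does today.
	return remote.Image(ref, opt)
}
```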

@jdolitsky if you feel like diving into this, that'd be great. If you want some help, lemme know.

@jdolitsky

I feel like diving. Show me the way.
