Building and pushing to kind when offline #295
Comments
I think that's not quite right. With docker, I did some prototyping that lets you probe the daemon for which layers already exist, but I'm not sure how that would work with kind.

Using https://kind.sigs.k8s.io/docs/user/local-registry/ would help here a bit, especially if you configure your base image to pull from a local registry. That would be the easiest thing, I think. I haven't kept up with progress here, but @BenTheElder could probably give better suggestions.
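A minimal sketch of that workaround, assuming the registry from the kind local-registry guide is already running on localhost:5001 and using an arbitrary distroless image as the base (both are assumptions, not anything prescribed in this thread): mirror the base image into the local registry while online, then point ko at it through `.ko.yaml`'s `defaultBaseImage`.

```sh
# While online: copy the base image into the local registry so offline builds
# can still resolve it. Image name and registry port are illustrative.
docker pull gcr.io/distroless/static:nonroot
docker tag gcr.io/distroless/static:nonroot localhost:5001/distroless/static:nonroot
docker push localhost:5001/distroless/static:nonroot

# Point ko at the locally mirrored base image.
cat > .ko.yaml <<'EOF'
defaultBaseImage: localhost:5001/distroless/static:nonroot
EOF
```

After this, ko only needs to reach the local registry to resolve the base image, which keeps working when the outside network is down.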
I'm not sure what a best-effort build would be here. In theory we could look into alternative sources for base images (i.e. not a registry), but that would involve moving a lot of code around. It would be easy to add support for a

We could also do optional base image caching? Even then, if your base image is referenced by tag, you'll end up with a stale cache. For a digest reference we could do pretty well... but with tags we'd have to do some cache invalidation.

Your best-effort suggestion makes sense in this case: if we can't hit the source registry, we could just fall back to whatever we have in the cache and log a giant warning about being unable to reach the registry.
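To illustrate the tag-vs-digest point with a hedged example: a digest reference is immutable, so a cache (if ko grew one, as discussed above) could trust it without invalidation. Resolving a tag to a digest while still online could be done with `crane` from go-containerregistry; the tool choice and image name here are assumptions.

```sh
# Resolve the tag to an immutable digest while online.
crane digest gcr.io/distroless/static:nonroot
# prints something like: sha256:<digest>

# Pin the base image by digest so a cache (as discussed above) could be
# trusted without invalidation. The digest value is a placeholder.
cat > .ko.yaml <<'EOF'
defaultBaseImage: gcr.io/distroless/static@sha256:<digest-from-above>
EOF
```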
Hmm, when I develop offline with
I think people need this to work offline effectively, along with a way to just populate the cache.
This is still the current approach to kind + local registry, but in the future we hope to update the guide with a built-in instead. Some tricky bugs and better multi-platform support are taking priority ATM.
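For reference, the registry container from that guide is started roughly like this; the port mapping and flags may differ between guide versions, and the guide additionally covers wiring the kind cluster's containerd to the registry.

```sh
# Run a local registry alongside the kind cluster (per the kind
# local-registry guide; port 5001 is one guide version's choice).
docker run -d --restart=always -p "127.0.0.1:5001:5000" --name kind-registry registry:2
```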
🎉
👍 that's exactly what I mean.
For me "populating the cache" should be transparent, like when you run the first ko build and you're online, then your cache get populated with the base image. Adding commands like |
This issue is stale because it has been open for 90 days with no activity.
/remove-lifecycle stale
This issue is stale because it has been open for 90 days with no activity.
Just hit this. Assumed I could run against a

Would the team accept an
Now that we have

I'd rather not add a flag for this, and instead just always fall back to doing the right thing, with a big warning message that we're pulling base image info from the offline cache instead of the registry.

@jdolitsky if you feel like diving into this that'd be great. If you want some help, lemme know.
I feel like diving. Show me the way.
Today I had a network outage and found out that ko doesn't work offline when trying to push to kind, even if the base image is already on the machine:
After a discussion with @imjasonh, we found out that ko needs to check Docker Hub to see whether the latest debian image is already on my machine. I wonder: can we just print a warning without failing and perform a "best effort" build in that case?
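For context, the build that fails offline is roughly the following; the package path is an example, and `kind.local` is the KO_DOCKER_REPO value ko uses to side-load images into a kind cluster. ko resolves the base image from the registry even when a copy is already present locally, so this errors out with no network.

```sh
# Build and load into kind; fails offline because ko contacts the base image
# registry first. The package path is illustrative.
KO_DOCKER_REPO=kind.local ko publish ./cmd/app
```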