Load module directly from a URL is very cute #195
Comments
There are environments (such as node) that require a centralized source of trust, maybe those fit your needs better? I believe the nice thing about loading from any …

Related to #94, maybe this should be closed?
I can see a lot of devs mistakenly using different versions in different source files causing unnecessary duplication of cached packages. I much prefer the single source of package and version declaration!
Point is: you can do that (module download) upfront. No need to do it at runtime or even have it be part of the project. This is just the same as the require<->package.json coupling that led to npm and leftpad. Just keep Deno simple now.
That's a mighty subjective definition of "simple". Did you watch Ryan's talk? He has already offered rationale.
I see having package.json and require as being unnecessarily complicated and implicit - see package.json, package-lock.json, npmjs... all baked in. A URI is just an explicit, simple identifier. Seems simpler to me.
Yep, this is why I think this is really a cute feature.
@jedahan really? what do you think will happen if you are using a module which internally imports something else which depends on a module linked to an expired domain? what if the domain is hijacked? I cannot trust multiple origins; this is the whole point of having a modular system: trust the work of others, not their server provider.
Too many similar issues, please see #47
@pibi What if that module specifies the alternate origin in their package.json? Then you've got the exact same problem. Unless you're suggesting that Deno becomes actually beholden to a centralized repository like npm, and specifying any other host is now not allowed entirely. In which case I think you're going to find very little support from anyone.
@pibi that's why I am excited to try non-http protocols, content-addressed ones like …
@rivertam No, I just think Ryan has some good points and an unnecessary one. I cannot see why having the package manager (because this is still a package manager) embedded in the runtime could solve the issues npm still has. @jedahan I don't understand why you can't do that in node right now. Just specify it in your package.json as we are already doing with git-based dependencies.
Why not just add some switch like "--allow-remote-imports" for guys like you?
@pibi does npm support arbitrary resolvers? According to the docs...
I know projects like …

I hope I can share what I think is interesting: none of the ideas I've seen in deno stop you from writing tooling to support centralized use-cases, reproducibility, etc; but by not prescribing a particular ahead-of-time package manager, we will not be restricted to its design decisions in the future.
There seems to be some consternation about this feature on Reddit & Hackernews, so it might be worth having a recap of the broader issues.

It's true that loading a package from an unfamiliar URL has its risks. The domain might go down. It might get hijacked! Or you might make a typo. What's not true is that Node's package system really solves this. What's also not true is that URL-based imports preclude using some centralised package authority. It would be quite feasible to

import lodash from "https://d.pm/lodash/5.7.1.ts"

...if the community decided that …

Another approach might be using independent devtools to bundle the files directly with the executable, so you could then import them like a local module:
import lodash from '../deps/lodash.ts'
One thing that might help here is a better way to specify a path resolving from the top level of the project. If you could use such "absolute" paths, you could rewrite the above to

import lodash from 'deps/lodash.ts'

...which I think would be a good feature anyway.

Finally, it would probably be feasible to insist on HTTPS resources. Perhaps HTTP imports could throw a warning or exception unless a certain flag was set.
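To make the bundling idea concrete, here is a minimal sketch of that local-module layout, assuming a hypothetical deps/ directory kept up to date by such a tool (the d.pm URL and file names are invented for illustration):

```ts
// deps/lodash.ts - the one file in the project that pins the remote URL
// ("d.pm" is the hypothetical registry from the comment above).
export * from "https://d.pm/lodash/5.7.1.ts";

// app.ts - the rest of the project only ever imports the local path,
// so bumping the version means editing a single file.
import { chunk } from "./deps/lodash.ts";

console.log(chunk([1, 2, 3, 4], 2));
```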
This is the best approach, and should be the only possible approach. Allowing modules to be loaded directly from a URL adds unnecessary complexity and risk when this functionality could instead be provided by a separate tool.

Context/disclaimer: I build dependency analysis tools for my day job (fossa.io), I've worked with and built analysis tools for many languages and many package managers (fossa-cli), and I spend a lot of time thinking about package and dependency management.

Importing over the network in the way that the talk describes ("load once on first execution, then cache") is a really bad idea. The problem is reproducibility. A network resource may change or become unavailable between the times that:
Implicit caches make it very difficult for me to guarantee that my build is reproducible across all of these environments: I either need to replicate the dependency cache (at the very least, copying …

One counterpoint I've seen is that Go uses URL imports and it seems to work well. This is because Go's imports follow extremely specific and limited semantics:
These problems could all be mitigated if the dependency cache's semantics were explicit, public, and stable, but at that point you might as well use a separate tool and reduce the complexity of the runtime. (Using a separate tool also has a variety of other advantages, e.g. allowing users to support their own network protocols, but reproducibility is the most important one.)
To me the implementation seems more in line with browser behaviour. In browsers you also specify .js files/modules as URLs, which the browser then caches (depending on the HTTP caching headers) upon loading the html page the first time. In turn this made people use CDNs and package builders which can bundle all dependencies into one file, which is then cached by the runtime after being included the first time. The core functionality of the runtime (browser or deno) is kept simple, but the tooling around it is free to develop. So in the end instead of …
Regarding matching browser semantics: availability requirements are different for browsers and not-browsers. For not-browsers, it makes sense to run a program offline even if that program has third-party dependencies. For browsers, all programs must be run online anyway, so the conditional likelihood of a dependency being unavailable given that the program is available is much lower. (That said, have your web apps ever been broken by a third-party hosting an analytics tool or a jQuery plugin that was modified or became unavailable? Mine have. It's not great.)
@ilikebits In that case …
Requiring a tool like …

Using a tool to generate the bundle means you need network access when the tool is explicitly invoked to download dependencies, and it's easy to copy vendored dependencies after they've been downloaded so that future deployments don't rely on the network. Using the proposed "download at first execution time, then cache" mechanism requires network access on every first run of the program, which is what I'm concerned about. Since a program may be "first executed" (e.g. on a new machine, in a new container image, etc.) at many different points in time, it's difficult to ensure that, for every "first execution", a program will be able to download its dependencies and that it'll download the same dependency source code.
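To illustrate what "explicitly invoked" could look like in practice, here is a rough sketch of a build-time vendoring step written against current Deno file APIs; the dependency list, URLs, and vendor/ layout are invented, and this is not a proposed Deno feature:

```ts
// vendor.ts - fetch each remote dependency once, at build time, and write it
// into a vendor/ directory that can be committed or copied into a container
// image, so "first execution" never needs the network.
// Run with: deno run --allow-net --allow-write vendor.ts

const deps: Record<string, string> = {
  // hypothetical URL, for illustration only
  "lodash.ts": "https://example.com/lodash/4.17.21/lodash.ts",
};

await Deno.mkdir("vendor", { recursive: true });

for (const [name, url] of Object.entries(deps)) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`failed to fetch ${url}: ${res.status}`);
  await Deno.writeTextFile(`vendor/${name}`, await res.text());
  console.log(`vendored ${url} -> vendor/${name}`);
}
```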
One feature I do like about npm is the ability to "wildcard" versions. The absolute URL path removes this ability. That is, unless the servers implement some means to do it (/url/[email protected], etc), but then you would only get this feature from some package hosts.
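Purely for illustration, the difference might look like this; the host and its range-resolution behaviour are invented, and nothing like it is specified anywhere:

```ts
// Fully pinned URL: every fetch of this specifier returns the same code.
import { chunk } from "https://example-host.dev/lodash@4.17.21/lodash.ts";

// Hypothetical "wildcard" URL: the host itself would have to resolve the
// range (e.g. by redirecting lodash@^4 to a pinned version), so only hosts
// implementing that behaviour would support it.
import { debounce } from "https://example-host.dev/lodash@^4/lodash.ts";

console.log(typeof chunk, typeof debounce);
```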
@ilikebits It doesn't require network access on every first run of the program if you bundle the sources at build time and require that. You would again use some tool, as you do for web apps. A deployed application in docker or wherever would not need to have network access, same as you do today with …

I'm just an outsider looking at how you can get the same behaviour in this case, because the primary reason is to not have npm complexity, as far as I understood from the presentation, and I'm perfectly fine with that.
So, we are all-in for reproducible builds (yarn.lock, package-lock.json, docker files), immutability, sandboxing and security, right? So, what about CI/CD for deno apps when we are using tons of third party modules we cannot cache every time?

BTW, some of Ryan's points about npm are quite true, but let me ask what prevents us from just dropping the npm server dependency and moving on to something else distributed, immutable, reproducible (ipfs maybe?). If the main point of deno is "get rid of the npm mess", then we are talking about a new package manager, but:
@ilikebits You and I are of the same mind, I think. If I were designing Deno - and I'm not - I would simply have my runtime pull dependencies from a local cache. I'd leave it up to a separate tool to populate that cache, perhaps provided as a 'sibling' project a la …

One advantage of this approach might be that you could instrument your dependency code during unit tests. You could quite easily replace a module with a stub or a mocked equivalent. You could also use symlinks to pull in other projects as dependencies - a transparent, non-proprietary alternative to …
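As a sketch of that test-instrumentation idea (the file names, remote URL, and fetchJson helper below are all invented): if project code only ever imports a dependency through a local indirection file, a test setup can point that file, via symlink or copy, at a stub instead of the real module.

```ts
// deps/http_client.ts - in a normal build this re-exports the real module
// (hypothetical URL).
export { fetchJson } from "https://example.com/http_client/1.0.0/mod.ts";

// deps/http_client.stub.ts - symlinked or copied over the file above when
// running unit tests, so nothing touches the network.
export async function fetchJson(_url: string): Promise<unknown> {
  return { ok: true, fixture: true }; // canned response for tests
}

// app.ts - never knows which implementation sits behind the local path.
import { fetchJson } from "./deps/http_client.ts";

console.log(await fetchJson("https://example.com/data.json"));
```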
It was unfortunate phrasing in my talk to deride unnecessary features - calling them “cute” - and then minutes later use the same word for URL imports. (And apologies if I’m not replying to some other comments here - I’ve only skimmed the thread - ping me otherwise.)
@ry Considering that browser vendors use the MIME type in the response instead of the URL, is it a goal to gain some interop with this heuristic?
Imagine we're working with something like React, that has a good chance of being imported in most of your files.

import { Component } from "https://unpkg.com/[email protected]/react.ts"

Now you want to update to …
I guess you can do this?

// ./react.ts
export * from "https://unpkg.com/[email protected]/react.ts"
// ./foo.ts
import { Component } from './react.ts';
Wouldn't that be very poor for code splitting purposes? And isn't it just shifting the responsibility of package management to users? And like @ilikebits mentioned, if dependencies are downloaded at execution time, how do we do the equivalent of "install dependencies" when building a container image? I get @ry's complaints about package.json, but I do see value in there being a file like …
Not unlike markdown's reference links.
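The comment above is cut off before its concrete proposal, so purely as a guess at the shape being described: a single file that gives local names to remote URLs, which the rest of the project imports by name (all URLs below are hypothetical).

```ts
// deps.ts - one place that names every remote dependency,
// a bit like markdown reference links.
export { Component } from "https://example.com/react/16.0.0/react.ts";
export * as lodash from "https://example.com/lodash/4.17.21/lodash.ts";

// elsewhere in the project:
// import { Component, lodash } from "./deps.ts";
```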
@mikew I think that file resolution could be done at the bundler level. So as you wrote …
It's probably outside the scope of this issue, but how would common dependencies be handled? If both …
@mikew If they both load it from the same URL they will not be. @thysultan I think for now we'll just pass everything thru the TS compiler and ignore MIME. I will close this issue. Reopen new issues with more specific comments if necessary.
hey @liftM, I'm late to the party here and might have missed someone making a similar suggestion, but what about a hosted dependency bundle that the team can share? Say you have 3 machines: dev A's machine, dev B's machine, and a shared "dep host".
Dev A wants to import a dependency. Rather than deal with local caching, they issue some command, "install dependency X". Now this triggers the following process: …
Dev B comes along and wants to play. Rather than rely on pulling and locally caching each of those deps themself, they issue "install", which requests the dep bundle from the dep host. The process repeats itself.

Essentially the dep host acts as a project-scoped CDN of all the deps. Installing / updating / removing are commands issued against that dep host, not the local machine that issues them. It centralizes (relative to a project and its devs/consumers) the cache so that it can be used across the various environments and pipelines etc. It can be controlled to restrict changes (semver rules etc). The dep host can even version the dep bundle so that it can be quickly rolled back if needed.

What do you think of that mate? I appreciate your insight and think it's shitty that out of all the responses here yours didn't get one. Felt you brought a lot of experience to the table.
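For what it's worth, a toy version of that dep host could be a small caching proxy. The sketch below uses current Deno APIs and invents the port, cache layout, and URL scheme; a real one would need the authentication, semver rules, and bundle versioning described above.

```ts
// dep_host.ts - the first request for a module is fetched from its upstream
// URL and stored on disk; later requests are served from that shared cache,
// so individual dev machines and CI never talk to the original origins.
// Run with: deno run --allow-net --allow-read --allow-write dep_host.ts

const CACHE_DIR = "./dep-cache";
await Deno.mkdir(CACHE_DIR, { recursive: true });

Deno.serve({ port: 8787 }, async (req) => {
  // Expect requests like /https://example.com/lodash/4.17.21/lodash.ts
  const upstream = new URL(req.url).pathname.slice(1);
  if (!upstream.startsWith("https://")) {
    return new Response("only https upstreams are proxied", { status: 400 });
  }

  const cachePath = `${CACHE_DIR}/${encodeURIComponent(upstream)}`;
  try {
    // Cache hit: serve the previously downloaded module.
    return new Response(await Deno.readTextFile(cachePath), {
      headers: { "content-type": "application/typescript" },
    });
  } catch {
    // Cache miss: fetch once from upstream and persist it for the team.
    const res = await fetch(upstream);
    if (!res.ok) return new Response("upstream fetch failed", { status: 502 });
    const body = await res.text();
    await Deno.writeTextFile(cachePath, body);
    return new Response(body, {
      headers: { "content-type": "application/typescript" },
    });
  }
});
```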
So please, remove this feature. It is unnecessary.
P.S. just think what a leftpad case could look like without a centralized source of trust.