feature: Local Modules #146
@srcspider I agree; rather than `require('#foo/bar')`, I would have preferred something else.
I still agree with @Raynos on this one (trying to cite correctly): write applications in a flat structure made of lots of small modules.
I second that. If your application is getting out of hand with nested directories, it's a solid sign that you need to split it up even more. You can even do this within an application, and avoid the need to publish small single-use packages, by using an approach such as linklocal (I'm hoping that the npm client will add first-class support for features like this, but that might be a pipe dream).
@rvagg there are many scenarios where that approach is not practical; some apps really are big and complicated, and separating them into smaller modules brings its own set of problems. If iojs is to grow it must be possible (and fun!) to write these bigger apps using it. Your linklocal suggestion sounds like a hack around the problem: what is the difference between writing a module like that and simply having another folder under your source tree?
Fwiw I've seen people writing their own global require helpers to work around this.
@phpnode can you find an example of a big app on GitHub where you'd do `require('../../../../../something')`? I think that would really help the case here.
I've thought about this a lot (and I'm probably biased, as I've written an editor that approaches this differently). Just fixing the relative requires isn't going to solve the bigger issue of having repetitive and/or hard-to-reason-about require statements. My own conclusion is that we probably shouldn't change the module loader or resolution algorithm, which is working wonderfully well (I mean, come on, it's fascinating that multiple versions of packages in a tree work as well as they do without coordination!). Instead we should create new tools to better manage our namespaces.
Instead of fixing this in core I'd rather think about how to fix it in the ecosystem, and determine whether core needs to change in some small way to allow more ecosystem experimentation.
I've been struggling with this as well. It doesn't always make sense to break everything up into isolated packages if each piece is only relevant in the context of its sibling packages. The reality of "business" means not every component can be open-sourced or is immediately of open-source quality. Problem domains are often messy, and it's hard to get the ideal tree structure correct from the outset. Publishing stuff to a registry incurs a lot of overhead and makes architectural refactoring very difficult. As @rvagg mentioned, I've been trying to solve these problems with linklocal. I don't think this belongs in core at all, though I wouldn't mind seeing it in npm one day.
Then you can pull dependencies in between components like any other dependency:

```json
{
  "name": "component1",
  "dependencies": {
    "component2": "file:../component2",
    "component4": "file:../component4"
  }
}
```

then you run linklocal to symlink them into place.
We've also been combining this with scoped packages to prevent future namespace collisions, and it only requires `package.json` changes if we eventually want to take advantage of scopes to publish to a private registry.
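With a scope, the same dependencies might look like this (the `@myco` scope name is just a placeholder):

```json
{
  "name": "@myco/component1",
  "dependencies": {
    "@myco/component2": "file:../component2",
    "@myco/component4": "file:../component4"
  }
}
```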
linklocal gives you the flexibility of separate packages without the publishing overhead.
@benjamingr I was typing a long reply here but @timoxley did a better job of explaining the problem, even though I disagree with his solution. I'm not aware of any large projects that do this legitimately on GitHub. I do it myself in oriento, but I think I could potentially solve those issues by using the (apparently now discouraged) `NODE_PATH` environment variable. When developing applications it's very common for changes to affect more than one module, much more so than when developing small open source components, and keeping those changes in sync is a lot of overhead. It's exactly the same problem faced by people who use git submodules in their apps. It is conceptual purity at the expense of pragmatism and developer productivity. When there is no hope of reusing the various parts of the application between projects, what is gained by having distinct modules?
there's `NODE_PATH` for that.
We've been using that approach as well.
Sorry, not familiar with how that would work, so correct me if I'm wrong; there are several other problems. You said that `#` refers to the root of the project, but that seems less flexible than what I suggested as an existing workaround, which is just to name the directory you stick your modules in `node_modules`: on resolution, names bubble up and then hit it (node thinks it's the npm one and jumps back in). That satisfies the functionality you describe with `#`.

My problem wasn't referring to local modules; the `node_modules` solution works well enough. My problem is managing local modules: essentially having more freedom in the layout of the project (which the `node_modules` workaround doesn't give you). Note that in my example the different libraries can easily choose whether they want their own module roots.

With regard to the custom `NODE_PATH`: it doesn't actually achieve what I described in the example. I also hate it for another reason: it only works if you're going to work on a single project all your life. I often find projects go on waiting periods while I work on others, or you're working on an isolated piece of code. Magical environment variables don't solve my problem; they essentially create the same problem as PATH. It doesn't dynamically resolve, and it creates a lot of configuration. I would prefer to configure the locations of many modules, not individual module locations; that's much easier to reason about. Note that even right now you can do this:
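For example, a layout along these lines (directory names are just placeholders):

```
project/
  node_modules/       # modules installed from npm
  src/
    node_modules/     # your local modules, found by the normal algorithm
      lib/
        merge.js
      models/
        Chicken.js
      main.js
```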
No configuration. It dynamically resolves. No extra tooling required. And you should be able to easily transition the code to an npm module if you ever need to, with little to no changes to the code, just moving files. The issue is only that you can't have that same structure be flat, and you can't rename the `node_modules` directory to something more meaningful.
Off-topic, but to answer this:
You can easily solve this by bundling at the code level. For example if you have something like:

```js
module.exports = {
Model: require('./Model'),
Repo: require('./Repo'),
View: require('./View'),
Domestic: require('./Domestic') // object with its own Model, Repo, etc
// ...
}
```

You can now just do `var lib = require('lib')` and reach everything through it. If you feel like being optimal you can always still require the specific file directly. This principle applies to utility functions as well:

```js
module.exports = {
merge: require('./merge'),
href: require('./href'),
hsl: require('./hsl'),
Model: require('./Model'),
// ...
};
```

and then use it like:

```js
var lib = require('lib');
var Chicken = function (data) {
lib.Model.call(this, data)
};
Chicken.prototype = lib.merge(lib.Model.prototype, {
// ...
});
module.exports = Chicken;
```

Personally, I like locally aliasing any important functions (Promises, request management, logging, etc.) so their implementation lives in only one place in the code and can easily be swapped out for a different one. I also prefer referring to and loading very specialized libraries directly.
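As a sketch of that aliasing idea (bluebird is just an example implementation choice):

```js
// lib/Promise.js
// the single place in the codebase that names the Promise
// implementation; swap it here to change it everywhere
module.exports = require('bluebird');
```

Everything else does `require('lib/Promise')` and never mentions the implementation by name.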
There are a few techniques that can be applied to large applications to make them more manageable.
**Technique 1: Have multiple test folders.** One of the common problems I see in large apps is a single top-level test folder full of files that reach into the source tree with long relative requires. Instead I recommend a test folder per sub-system, right next to the code it tests, as sketched below.
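For instance (names illustrative):

```
project/
  billing/
    index.js
    test/
      index.js      # require('../') instead of require('../../billing')
  users/
    index.js
    test/
      index.js
```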
**Technique 2: Dependency injection.** One of the common problems I see in large applications is statefully requiring completely different sub-systems. Let's say you have files in one sub-system that directly `require()` the database layer, the logger, and the config from elsewhere in the tree.
We've found that requiring other sub-systems statefully has always been a terrible idea. It's not flexible and it's hard to test. The coupling also makes refactors a nightmare. The better practice is dependency injection. In our case we would have something like the sketch below.
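A minimal sketch of the pattern (all names hypothetical):

```js
// billing.js: the sub-system receives its dependencies
// instead of statefully requiring other sub-systems itself
module.exports = function createBilling(deps) {
  var db = deps.db;
  var log = deps.log;

  return {
    charge: function (userId, amount, callback) {
      log.info('charging', userId, amount);
      db.insert({ user: userId, amount: amount }, callback);
    }
  };
};
```

```js
// main.js: the composition root wires everything together once
var createBilling = require('./billing');

var billing = createBilling({
  db: require('./lib/db')(),
  log: require('./lib/log')()
});
```

Tests can then call `createBilling` with in-memory fakes for `db` and `log` instead of the real implementations.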
There are other techniques as well, but these two, around tests and sub-systems, get rid of most of the nasty require statements. Note that this separation into "multiple packages in one git repository" does not have the overhead of multiple git repositories but has all the benefits of small packages. Especially if you keep one package.json per git repository.
What's the difference between pushing `opts` into every method call like you did there and just doing the following?

**Module-Localized Testing**

```
project/
  node_modules/           # npm modules
  src/
    node_modules/
      module1/
        tests/
          node_modules/
            ...dependency injection...
          index.js
      module2/
        tests/
          node_modules/
            ...dependency injection...
          index.js
      module3/
        tests/
          node_modules/
            ...dependency injection...
          index.js
```

Your test file (example is simplified):

```js
var Tested = require('./temp/Tested');
// ...run test code for Tested...
```

```sh
cd src/node_modules/module1
mkdir tests/temp
cp Something.js tests/temp/Tested.js
run-tests
rm -rf tests/temp
```

The same works in the version with the "modules_root" `.iojs` file (note: it's pretty easy to make the test runner handle the copy step). You can create a specialized "mock" module and then just require that in place of the real dependency.

**Project-Localized Testing**

You can save more code, and avoid false positives, by having the test code outside the main source tree, so that after the temporary copy your piece of code can't access anything in the normal source tree, just mocks and whatever else you tell it.

```
project/
  node_modules/           # npm modules
  src/
    node_modules/
      module1
      module2
      module3
  tests/
    node_modules/
      Workspace/
        ...you copy the tested file(s) here...
      lib/
        Promise.js        # mock
    test/
      module1Tests
      module2Tests
```

```js
var Module1 = require('Workspace/Module1');
// ...test Module1...
```

So long as you wrap any important 3rd-party libraries into a local namespace (e.g. a `lib/Promise` alias), mocking them is trivial. The nice thing about the two methods above is that they can be applied without much effort, so long as you haven't created a tangled mess of things. You can use them without refactoring too much of your project or including boilerplate. You also have no need for all the wrapping, configuration and deferring, and, more importantly, since the resolution is based on the directory tree it's very clear how the resolution happens (anything could happen with dependency injection, on the other hand, though I trust most people keep it sane).

Just for comparison, here's the code for creating the `Chicken` model:

**Dynamically Resolved**

```js
var merge = require('lib/merge');
var Model = require('lib/Model');
var Chicken = function (data) {
Model.call(this, data)
};
Chicken.prototype = merge(Model.prototype, {
// ...
});
module.exports = Chicken;
```

```js
// file: example.js
var Chicken = require('models/Chicken');
var food = new Chicken();
```

**Dependency Injection**

```js
module.exports = function (di) {
var merge = di.merge;
var Model = di.Model;
var Chicken = function (data) {
Model.call(this, data)
};
Chicken.prototype = merge(Model.prototype, {
// ...
});
return Chicken;
}
```

```js
// file: di.conf.js
var di = {};
// Load Utilities
// --------------
di.merge = require('lib/merge')();
// ...
// Load Utility Types
// ------------------
di.Model = require('lib/Model')(di);
// ...
```

```js
// file: main.js
var di = require('di.conf');
var example = require('example');
example(di);
```

```js
// file: example.js
module.exports = function (di) {
var Chicken = di.Chicken;
var food = new Chicken();
}
```

Personally, everything in my project, including non-JavaScript stuff, goes through some build pipeline, so isolating code in a testable context is practically just as complicated to run as the dependency injection example (initial costs of writing build scripts not considered).
It's worth mentioning the technique from substack's browserify handbook for avoiding long relative paths.
That would be really good to have.
You realize the module system is frozen, right? We cannot make any changes to the `require` semantics.
No, I did not realize that. May I ask why anything is "frozen" at this stage, and why changes are not allowed in the module system?
Apologies for my tone. The module system is "locked" (not frozen; wrong terminology). See https://github.com/iojs/io.js/blob/v0.12/doc/api/modules.markdown#modules

If we change the semantics of `require` in any way, we risk breaking existing modules. This might not seem scary. However, because npm supports nested dependencies and version ranges, you might require a module A that supports node 0.10, and a dependency of A that is three levels deep, say module E, might break backwards compatibility by using the new require semantics in a patch or minor version change. This means module A no longer works, which is a very unexpected and frustrating change. node has made an enormous effort not to break backwards compatibility between 0.8 and 0.10, and between 0.10 and 0.12. I doubt the folks involved with iojs would want a backwards incompatible change between node 0.12 and io.js 1.0.
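To make the failure mode concrete, here's a hypothetical dependency tree for that scenario:

```
your-app
└── A@2.1.0                 # written against node 0.10 require semantics
    └── B@1.4.2
        └── C@3.0.1
            └── E@1.0.3 → 1.0.4   # patch release adopts new require semantics,
                                  # silently breaking A on the next install
```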
Thanks for the comment.
Clear. To be honest, I was just considering this new feature (using "#" to indicate the root of the project) for custom projects rather than for npm libraries (as typical libraries are small enough not to require features like that one). But yes, there is no way to categorize/distinguish them, so it would be dangerous anyhow.
The semantics can be locked specifically to modules loaded from `node_modules`. Many languages have this distinction: 3rd-party modules load one way, your local modules load whichever way you feel is best, and the system works fine. The issue is mainly really "how would npm block misuse of the conventions," as well as how stuff like browserify would handle such conventions without much fuss. Here is, in my opinion, the simplest solution to these problems: leave `require` alone and introduce a separate function for loading local modules.
**Example**

```js
var Model = local('lib/Model');
var Example = function (data) {
Model.call(this, data);
};
module.exports = Example;
```

The `local` function would only ever resolve against your project's own module roots, never against `node_modules`, so published packages have no reason to touch it.
Example usage; obviously very little changes:

```sh
node ./src/main.js
browserify -t iojs ./src/main.js -o ./public/main.js
```

```js
// in whatever build system you're using
browserify({ transform: [ 'iojs' ] })
```

So in summary: `require` stays untouched, and local modules go through a separate, clearly marked function. Are there any other political issues with adding local-only modules functionality?
I also never had any problems with too-long paths. Actually, you should just pass a variable around (like an app object) instead anyway.
If my package is named `foo`, shouldn't `require('foo/lib/something')` work from inside the package itself?
@srcspider The module system is, as @Raynos mentioned, locked: no changes will be made to it. This precludes changing the API for modules, both explicitly (modifying `require` directly) and implicitly (changing the expectations of what's available to modules, e.g., new globals). Much of Node's value is tied up in the node package ecosystem; the easiest way to destroy that value would be to change the module system and introduce incompatibilities, however subtle or unintentional.

That being said, as I understand the proposal, it should be possible to implement it as a standalone package on npm using `require.extensions` and perhaps a bit of monkey-patching. An exploration of what code written against such an API would look like, in a more fleshed-out fashion, would be valuable for future module systems.

Closing this for now. It is very unlikely that we'll make such a drastic change to such a delicate, locked API, but feel free to continue the conversation here. I would be interested in the results of your experiment, if you choose to release a package on npm!
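For what it's worth, a bare-bones userland take on the `local()` idea doesn't even need monkey-patching. A minimal sketch (the `src/` root and the global registration are assumptions, not part of the proposal):

```js
// local.js
var path = require('path');

// assumption: all local modules live under <project>/src
var ROOT = path.join(__dirname, 'src');

// resolve a "local" module against the project root,
// bypassing node_modules entirely
global.local = function (name) {
  return require(path.join(ROOT, name));
};
```

Requiring this file once at startup makes `local('lib/Model')` work from anywhere in the project; a fuller package could layer per-directory roots or `.iojs`-style configuration on top.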
I would like to make a brief plug for using this problem space as an opportunity to model ways of introducing ES6 modules into Node, and for writing a flexible, potentially metadata-driven module loader.
Currently one of the biggest pain points with node development is the problem of referring to local modules, i.e. how to avoid the unreadable `require('../../../../../something')`. The more modules you create in your project, the bigger the issue becomes.
Node conventions recommend splitting things into modules and uploading them to npm; however, that has some issues of its own: not all code is reusable or publishable outside your project, and publishing lots of tiny packages adds overhead.
There's a simple solution for both, and that's to create "fake" `node_modules` directories. Something like this:
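(a sketch; the directory names are just examples)

```
project/
  node_modules/      # modules installed from npm
  src/
    node_modules/    # the "fake" one, holding your local modules
      lib/
      models/
      main.js
```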
This works great, since any code inside `src/node_modules` can call internal modules and node will resolve them correctly once its search algorithm goes into `src/`. It also doesn't require any path aliasing (which I believe is frowned upon?), so you can just stick everything in `src/` into a module, point to `src/node_modules/main.js` as the entry point, and your entire application is now reusable (either by the world or by members of your private group).

The problem with that solution is that it's very annoying: you can't have the structure be flat, and you can't rename the `node_modules` directory to something more meaningful.
**Solution**

Changes to module search algorithm:

1. If a `node_modules` directory is not found in a directory, node should search for a `.iojs` file.
2. If a `.iojs` file is found, then the configuration is read and the rules in it apply.

**Example**

`.iojs` files that resolve our problem: in each custom module library you place the "modules_root" `.iojs` file, unless the directory is literally called `node_modules`. In the root you place the "node_modules_dirs" `.iojs` file if you wish to expose the modules inside those directories directly to other modules.

We can now have something like:
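The exact configuration format is left open; as a sketch, the two files and the layout might look like this (the keys and the nesting of the private modules are one possible arrangement):

```
// "modules_root" .iojs, placed in each custom module library
{ "modules_root": true }

// "node_modules_dirs" .iojs, placed in src/
{ "node_modules_dirs": [ "sharedLibrary1", "sharedLibrary2", "someAppModules" ] }
```

```
src/
  .iojs                  # "node_modules_dirs" file
  someAppModules/
    .iojs                # "modules_root" file
    sharedPrivate1/
      .iojs              # "modules_root" file
      lib/merge.js
    sharedPrivate2/
      .iojs              # "modules_root" file
      lib/merge.js
  sharedLibrary1/
    .iojs                # "modules_root" file
    lib/merge.js
  sharedLibrary2/
    .iojs                # "modules_root" file
    lib/merge.js
```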
All three directories in `src/` would have a copy of the "modules_root" `.iojs` file, and in `src/` you would have a copy of the "node_modules_dirs" `.iojs`.

Let's say for example that `sharedLibrary1`, `sharedLibrary2`, `sharedPrivate1` and `sharedPrivate2` all have a file `lib/merge.js`. Because the `.iojs` applies while still inside them, calling `require('lib/merge')` would resolve to `sharedPrivate1/lib/merge` in one and `sharedPrivate2/lib/merge` in the other, which is exactly what we want.

If you call `require('lib/merge')` in `someAppModules` it will resolve to `sharedLibrary1/lib/merge`, because it's the first one specified in the `src/.iojs` file. However, if you call it in `sharedLibrary2` it will resolve to `sharedLibrary2/lib/merge`, since it gets resolved by `src/sharedLibrary2/.iojs` before it gets resolved by `src/.iojs`.

**Backwards-compatible conversion of `someAppModules` to a single application npm module**

Of course this assumes you convert some parts of your code to rely on generic modules as opposed to local ones. Easy peasy, since everything has a nice grep-able name.