danfromisrael - 2 months ago
JavaScript Question

JavaScript dependency injection & DIP in Node: require vs constructor injection

I'm new to Node.js development, coming from the .NET world, and I'm searching the web for best practices regarding DI / DIP in JavaScript.

In .NET I would declare my dependencies in the constructor, whereas in JavaScript I see a common pattern of declaring dependencies at the module level via a require statement.

To me it looks like when I use require I'm coupled to a specific file, while using a constructor to receive my dependency is more flexible.

What would you recommend as a best practice in JavaScript? (I'm looking for the architectural pattern, not an IoC technical solution.)

Searching the web, I came across this blog post (which has some very interesting discussion in the comments):
https://blog.risingstack.com/dependency-injection-in-node-js/

It summarizes my conflict pretty well. Here's some code from the blog post to illustrate what I'm talking about:

// team.js
var User = require('./user');

function getTeam(teamId) {
    return User.find({teamId: teamId});
}

module.exports.getTeam = getTeam;


A simple test would look something like this:

// team.spec.js
var Team = require('./team');
var User = require('./user');

describe('Team', function() {
    it('#getTeam', function* () {
        var users = [{id: 1}, {id: 2}];

        this.sandbox.stub(User, 'find', function() {
            return Promise.resolve(users);
        });

        var team = yield Team.getTeam();

        expect(team).to.eql(users);
    });
});


VS DI:

// team.js
function Team(options) {
    this.options = options;
}

Team.prototype.getTeam = function(teamId) {
    return this.options.User.find({teamId: teamId});
};

function create(options) {
    return new Team(options);
}

module.exports.create = create;


test:

// team.spec.js
var Team = require('./team');

describe('Team', function() {
    it('#getTeam', function* () {
        var users = [{id: 1}, {id: 2}];

        var fakeUser = {
            find: function() {
                return Promise.resolve(users);
            }
        };

        var team = Team.create({
            User: fakeUser
        });

        var result = yield team.getTeam();

        expect(result).to.eql(users);
    });
});

Answer

Regarding your question: I don't think that there is a common practice in the JS community. I've seen both types in the wild, require modifications (like rewire or proxyquire) and constructor injection (often using a dedicated DI container). However, personally, I think not using a DI container is a better fit with JS. And that's because JS is a dynamic language with functions as first-class citizens. Let me explain that:

Using a DI container enforces constructor injection for everything and creates a huge configuration overhead. There are two main reasons to inject dependencies in the first place:

  1. Providing mocks in unit tests
  2. Creating abstract components that know nothing about their environment

Regarding the first argument: I would not adjust my code just for my unit tests. If it makes your code cleaner, simpler, more versatile and less error-prone, then go for it. But if your only reason is your unit tests, I would not take the trade-off. You can get pretty far with require modifications and monkey patching. And if you find yourself writing too many mocks, you should probably not write a unit test at all, but an integration test. Eric Elliott has written a great article about this problem.
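To make "monkey patching" concrete, here is a minimal self-contained sketch. The `User` object is inlined to stand in for `require('./user')` from the question's example, so the snippet runs on its own; in a real test you would patch the object returned by `require`:

```javascript
// Self-contained sketch: User is inlined here to stand in for require('./user')
const User = {
  find(query) {
    // imagine a real database query here
    return Promise.resolve([]);
  }
};

function getTeam(teamId) {
  return User.find({ teamId: teamId });
}

// In a test, monkey-patch User.find and restore it afterwards
const originalFind = User.find;
const fakeUsers = [{ id: 1 }, { id: 2 }];
User.find = () => Promise.resolve(fakeUsers);

getTeam(42).then(function (users) {
  console.log(users); // [ { id: 1 }, { id: 2 } ]
  User.find = originalFind; // restore the real implementation
});
```

Since require caches modules, every file that requires `./user` sees the patched `find` for the duration of the test, which is exactly why this works without any injection.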

Regarding the second argument: This is a valid argument. If you want to create a component that only cares about an interface, but not about the actual implementation, I would opt for a simple constructor injection. However, since JS does not force you to use classes for everything, why not just use functions?

In functional programming, separating stateful IO from the actual processing is a common paradigm. For instance, if you're writing code that is supposed to count file types in a folder, one could write this (especially when coming from a language that enforces classes everywhere):

const fs = require("fs");

class FileTypeCounter {
    countFileTypes(dirname, callback) {
        fs.readdir(dirname, function (err, files) {
            if (err) return callback(err);
            // recursively walk all folders and count file types
            // ...
            callback(null, fileTypes);
        });
    }
}

Now if you want to test that, you need to change your code in order to inject a fake fs module:

class FileTypeCounter {
    constructor(fs) {
        this.fs = fs;
    }
    countFileTypes(dirname, callback) {
        this.fs.readdir(dirname, function (err) {
            // ...
        });
    }
}

Now, everyone who is using your class needs to inject fs into the constructor. Since this is boring and makes your code more complicated once you have long dependency graphs, developers invented DI containers where they can just configure stuff and the DI container figures out the instantiation.

However, what about just writing pure functions?

function fileTypeCounter(allFiles) {
    // count file types
    return fileTypes;
}

function getAllFilesInDir(dirname, callback) {
    // recursively walk all folders and collect all files
    // ...
    callback(null, allFiles);
}

// now let's compose both functions
function getAllFileTypesInDir(dirname, callback) {
    getAllFilesInDir(dirname, (err, allFiles) => {
        callback(err, !err && fileTypeCounter(allFiles));
    });
}

Now you have two super-versatile functions out of the box: one does the IO and the other processes the data. fileTypeCounter is a pure function and super easy to test. getAllFilesInDir is impure, but such a common task that you'll often find it already on npm, where other people have written integration tests for it. getAllFileTypesInDir just composes your functions with a little bit of control flow. This is a typical case for an integration test where you want to make sure that your whole application is working correctly.

By separating your code between IO and data processing, you won't find the need to inject anything at all. And if you don't need to inject anything, that's a good sign. Pure functions are the easiest thing to test and are still the easiest way to share code between projects.
