
I'm developing a JavaScript-heavy web application; heavy as in, without JavaScript, the whole application is useless. I'm currently using requirejs as my module loader, and the r.js tool to optimize my JS into a single file in production.

Currently, in production my markup looks something like this:

<script src="/js/require.js"></script>
<script>
    require.config({
       // blah blah blah
    });

    require(['editor']); // Bootstrap the JavaScript code.
</script>

However, this loads the JavaScript asynchronously, which leaves the page rendered, albeit unusable, until the JavaScript is loaded; I don't see the point of that. Instead, I'd like to load the JavaScript synchronously, like so:

<script src="/js/bundle.js"></script><!-- combine require.js, config and editor.js -->

This way, when the page is rendered, it is usable. I've read that all modern browsers download scripts in parallel, which leads me to believe that most of the advice on the Internet suggesting you avoid this approach (because it blocks parallel downloads) is outdated.

Yet:

  1. AMD (Asynchronous Module Definition) hints that this is not how requirejs should be used.
  2. In development, I want to insert the uncombined files as several script tags, rather than the single minified file:

    <script src="/js/require.js"></script>
    <script>/* require.config(...); */</script>
    <script src="/js/editor-dep-1.js"></script>
    <script src="/js/editor-dep-2.js"></script>
    <script src="/js/editor.js"></script>
    

    ... yet this seems so fiddly in requirejs (using r.js to produce a fake build just to get the list of editor.js's dependencies) that it feels wrong.

My questions are therefore as follows:

  1. Am I right that the advice to avoid synchronous <script />'s is outdated?
  2. Is using requirejs/AMD in this way as wrong as it feels?
  3. Are there alternative techniques/approaches/tools/patterns I've missed?

4 Answers


Short answer: yes, it is wrong. You use require.js to first load all your dependencies, and then, once all of them are loaded, run the code that depends on them.

If your page is unusable until after your require-wrapped code runs, the problem is not require but your page: render a minimal page that indicates it is still loading, with nothing else (visible) on it (use CSS display:none on elements that shouldn't be used until the JS finishes, for instance), and enable/show the actual functional page elements only once require is done and your code has set up all the necessary UI/UX.
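A minimal sketch of that bootstrap (the #loading/#app element ids and the editor.init() call are hypothetical, not from the question):

require(['editor'], function (editor) {
    // Everything the page needs is loaded at this point, so wire up the UI.
    editor.init(); // hypothetical bootstrap function exported by the editor module

    // Swap the loading indicator for the real, now-functional page.
    document.getElementById('loading').style.display = 'none';
    document.getElementById('app').style.display = 'block';
});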

answered 2013-06-08T14:31:17.040

Take a moment to think about why you are using requirejs in the first place. It helps manage your dependencies, avoiding a long list of script tags that must be in precisely the right order. You could argue that such a list only becomes unmanageable when a large number of scripts is involved.

Second, it loads scripts asynchronously. Again, with a large number of scripts this can greatly reduce load times, but the benefit is smaller when only a few scripts are used.

If your application only uses a few javascript files, you might decide that the overhead of setting up requirejs properly is not worth the effort. The benefits of requirejs only become obvious when a large number of scripts are involved. If you find yourself wanting to use a framework in a way that feels "wrong", it helps to step back and ask whether you need to use the framework at all.

Edit:

To solve your problem with RequireJS, initially set your main content area to display: none, or, better yet, display a loading spinner. Then, at the end of your main RequireJS file, simply fade in the content area.

answered 2013-06-08T17:01:58.537

I decided to take ljfranklin's advice and do away with RequireJS completely. I personally think AMD is doing it all wrong, and CommonJS (with its synchronous behaviour) is the way to go; but that's for another discussion.

One thing I looked at was moving to Browserify, but in development each compilation (it scans all your files and hunts down require() calls) took far too long for me to deem acceptable.

In the end, I rolled my own bespoke solution. It's basically Browserify, except that you specify all your dependencies yourself rather than having Browserify figure them out. That means compilation takes just a few seconds rather than 30.

That's the TL;DR. Below, I go into detail as to how I did it. Sorry for the length. Hope this helps someone... or at least gives someone some inspiration!


Firstly, I have my JavaScript files. They are written à la CommonJS, with the limitation that exports isn't available as a "global" variable (you have to use module.exports instead). E.g.:

var anotherModule = require('./another-module');

module.exports.foo = function () {
    console.log(anotherModule.saySomething());
};

Then, I specify the in-order list of dependencies in a config file (note js/support.js; it saves the day later):

{
  "js": [
    "js/support.js",
    "js/jquery.js",
    "js/jquery-ui.js",
    "js/handlebars.js",
    // ...
    "js/editor/manager.js",
    "js/editor.js"
  ]
}

Then, in the compilation process, I map all of my JavaScript files (in the js/ directory) to the form:

define('/path/to/js_file.js', function (require, module) {
    // The contents of the JavaScript file
});

This is completely transparent to the original JavaScript file, though; the support for define, require, module and so on is provided below, such that the original JavaScript file just works.
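For illustration, the CommonJS example from earlier, assuming it ends up at build/js/editor.js (the path is simply wherever the file sits in the build directory), comes out of this step as:

define('build/js/editor.js', function (require, module) {
var anotherModule = require('./another-module');

module.exports.foo = function () {
    console.log(anotherModule.saySomething());
};
});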

I do the mapping using grunt; first to copy the files into a build directory (so I don't mess with the originals) and then to rewrite each copied file in place.

// Files were previously in public/js/*; they are copied to build/js/*.
grunt.initConfig({
    copy: {
      dist: {
        files: [{
          expand: true,
          cwd: 'public',
          src: '**/*',
          dest: 'build/'
        }]
      }
    }
});

grunt.loadNpmTasks('grunt-contrib-copy');

grunt.registerTask('buildjs', function () {
    var path = require('path');

    grunt.file.expand('build/**/*.js').forEach(function (file) {
      grunt.file.copy(file, file, {
        // Wrap each file's contents in a define() call keyed by the file's path.
        process: function (contents, filepath) {
          return 'define(\'' + filepath + '\', function (require, module) {\n' + contents + '\n});';
        },
        // support.js supplies define() itself, so leave it unwrapped.
        noProcess: 'build/js/support.js'
      });
    });
});
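These two tasks are then chained in the usual grunt way (the alias name here is arbitrary):

// Copy public/* into build/, then wrap the copied JS files.
grunt.registerTask('build', ['copy', 'buildjs']);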

I have a file /js/support.js, which defines the define() function I wrap each file with; here's where the magic happens, as it adds support for module.exports and require() in less than 40 lines!

(function () {
    // Map from file path (e.g. 'build/js/editor.js') to its { exports: {} } object.
    var cache = {};

    this.define = function (path, func) {
        // Invoke the module body with a require() implementation and a fresh
        // module object; the exports are cached under the defining file's path.
        func(function (id) {
            // Resolve the relative id against the directory of the requiring file.
            var other = id.split('/');
            var curr = path.split('/');
            var target;

            other.push(other.pop() + '.js'); // append the .js extension
            curr.pop();                      // drop the requiring file's name

            while (other.length) {
                var next = other.shift();

                switch (next) {
                case '.':
                    break;
                case '..':
                    curr.pop();
                    break;
                default:
                    curr.push(next);
                }
            }

            target = curr.join('/');

            if (!cache[target]) {
                throw new Error(target + ' required by ' + path + ' before it is defined.');
            } else {
                return cache[target].exports;
            }
        }, cache[path] = {
            exports: {}
        });
    };
}.call(this));
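To see how this resolves at runtime (the paths are illustrative, matching the wrapped form shown earlier), the dependency has to be defined before the file that requires it, which is exactly why the config file lists the scripts in order:

// Must run first; otherwise editor.js's require() throws
// "build/js/another-module.js required by build/js/editor.js before it is defined."
define('build/js/another-module.js', function (require, module) {
    module.exports.saySomething = function () { return 'hello'; };
});

define('build/js/editor.js', function (require, module) {
    // './another-module' is resolved against this file's directory, giving
    // 'build/js/another-module.js', which is already in the cache.
    var anotherModule = require('./another-module');
    console.log(anotherModule.saySomething()); // "hello"
});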

Then, in development, I literally iterate over each file in the config file and output it as a separate <script /> tag; everything synchronous, nothing minified, everything quick.

{{#iter scripts}}<script src="{{this}}"></script>
{{/iter}}

This gives me:

<script src="js/support.js"></script>
<script src="js/jquery.js"></script>
<script src="js/jquery-ui.js"></script>
<script src="js/handlebars.js"></script>
<!-- ... -->
<script src="js/editor/manager.js"></script>
<script src="js/editor.js"></script>

In production, I minify and combine the JS files using UglifyJS. Well, technically I use a wrapper around UglifyJS: mini-fier.

grunt.registerTask('compilejs', function () {
    var minifier = require('mini-fier').create();

    if (config.production) {
      var async = this.async();
      // 'bundles' is the parsed config file from earlier; its "js" array lists the files to combine.
      var files = bundles.js || [];

      minifier.js({
        srcPath: __dirname + '/build/',
        filesIn: files,
        destination: __dirname + '/build/js/all.js'
      }).on('error', function () {
        console.log(arguments);
        async(false);
      }).on('complete', function () {
        async();
      });
    }
});

... then in the application code, I change scripts (the variable I use to house the scripts to output in the view) to just be ['/build/js/all.js'], rather than the array of actual files. That gives me a single

<script src="/js/all.js"></script> 

... output. Synchronous, minified, reasonably quick.

answered 2014-05-02T10:57:27.223

It's a bit late in the game, but here's my opinion on this topic:

Yes, it's wrong. AMD adds "syntactical noise" to your project without adding any benefit.

AMD has been designed to load modules step by step, only when needed. While this is well-intentioned, it becomes a problem in large projects. I've seen several applications that required 2 seconds or more just to bootstrap. That is because requirejs can only request additional dependencies after the requiring module has been downloaded and parsed on the client, so you get a waterfall-like picture in the network tab of your developer tools.

A better approach is to use a synchronous module style (such as CommonJS or the upcoming ES6 modules) and to divide the application into chunks, which can then be loaded on demand. webpack does a great job when it comes to code splitting (though browserify can be configured to support it too).

Usually you do your normal requires, such as:

var a = require("a");
var b = require("b");
var c = require("c");

Then, when you decide that a module is only required in certain cases, you write:

// Creates a new chunk
require.ensure(["d"], function () { // will be called after d has been requested
    var d = require("d");
});

If d requires a module e, and e is not required by a, b or c, then e will only be included in the second chunk. webpack emits all the chunks into the output folder and loads them automatically at runtime; you don't have to deal with these things. You just use require.ensure (or the bundle-/promise-loader) whenever you want to load code asynchronously.

This approach yields fast bootstrapping while keeping the entry bundle small.
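As a rough sketch of how little configuration the chunking needs (the entry point and file names below are examples, not taken from the question):

// webpack.config.js
module.exports = {
    entry: './app/main.js',
    output: {
        path: __dirname + '/dist',
        filename: 'bundle.js',          // the entry chunk, referenced by a script tag
        chunkFilename: '[id].chunk.js', // extra chunks created by require.ensure()
        publicPath: '/dist/'            // URL prefix webpack uses to load chunks at runtime
    }
};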


The only advantage I see with requirejs is that the development setup is quite easy. You just add requirejs as a script tag, create a small config, and you're ready to go.

But imho that's a bit short-sighted, because you need a strategy to split your code into chunks in production. That's why I don't think that preprocessing your code on the server before sending it to the client will go away.

answered 2015-04-10T08:56:23.527