I want to be able to update the data and visualization on a chart made with d3, from a large JSON file. The data comes from the USDA's nutrient database. The JSON file was sourced from here: http://ashleyw.co.uk/project/food-nutrient-database, and it is incredibly large (30 MB). Just loading it is a hassle: Notepad++ loads it all on one line (after about a minute), and Notepad loads it across multiple lines (in about 20 seconds) with poor formatting. Is it possible to efficiently use a JSON file that large? Will it crash a browser or cause some kind of loading lag?

1 Answer

As I mentioned above, my recommendation is to pre-process the JSON to remove anything you don't need. Below is an example script in Node.js that reads the file you're using and generates a new file with most of the content filtered out.

In this example I'm ignoring all fields except the description, and only including information about vitamins. There should still be 6,600 elements in the root array.

The file this script generates is about 5 MB instead of 30 MB.

var fs = require('fs');

// open the file
fs.readFile('./foods-2011-10-03.json', 'utf8', function (err, data) {
    if (err) throw err;
    var output = [];

    // parse the file from a string into an object
    data = JSON.parse(data);

    // loop through each element
    data.forEach(function (d, i) {
        // print an example element to the console for inspection:
        // if (i === 0) console.log(d);

        // decide which parts of the object you'd like to keep
        var element = {
            description: d.description,
            nutrients: []
        };

        // for example, here I'm keeping only the vitamins
        d.nutrients.forEach(function (n) {
            if (n.description.indexOf("Vitamin") === 0) element.nutrients.push(n);
        });

        output.push(element);
    });

    fs.writeFile('./foods-output.json', JSON.stringify(output), function (err) {
        if (err) throw err;
        console.log('ok');
    });
});
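To sanity-check the filtering logic before running it against the full 30 MB file, you can apply the same keep-only-vitamins predicate to a small hand-made element. The sample object below is a hypothetical illustration of the shape the script expects (a `description` plus a `nutrients` array), not data copied from the real dataset:

```javascript
// Hypothetical element mirroring the shape the script above expects.
var sample = {
    description: "Butter, salted",
    nutrients: [
        { description: "Vitamin A, IU", value: 2499 },
        { description: "Protein", value: 0.85 },
        { description: "Vitamin E (alpha-tocopherol)", value: 2.32 }
    ]
};

// Same predicate as the preprocessing script: keep nutrients whose
// description starts with "Vitamin".
function filterVitamins(d) {
    return {
        description: d.description,
        nutrients: d.nutrients.filter(function (n) {
            return n.description.indexOf("Vitamin") === 0;
        })
    };
}

var result = filterVitamins(sample);
console.log(result.nutrients.length); // 2
```

Running this on the sample keeps the two "Vitamin" entries and drops "Protein", which matches what the full script does element by element.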
answered 2013-10-15T09:25:06.087