There is a nice module named stream-json that does exactly what you want.
It can parse JSON files far exceeding the available memory.
and

> StreamArray handles a frequent use case: a huge array of relatively small objects similar to Django-produced database dumps. It streams array components individually, taking care of assembling them automatically.
Here is a very basic example:
```javascript
const StreamArray = require('stream-json/streamers/StreamArray');
const path = require('path');
const fs = require('fs');

const jsonStream = StreamArray.withParser();

// You'll get JSON objects here.
// `key` is the array index of the current object.
jsonStream.on('data', ({key, value}) => {
  console.log(key, value);
});

jsonStream.on('end', () => {
  console.log('All done');
});

const filename = path.join(__dirname, 'sample.json');
fs.createReadStream(filename).pipe(jsonStream.input);
```

If you'd like to do something more sophisticated, e.g. process one object at a time sequentially (keeping the order) and apply some async operation for each of them, you could use a custom Writable stream like this:
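For reference, StreamArray expects the file to contain a single top-level JSON array. A hypothetical `sample.json` (the field names here are made up purely for illustration) could look like this:

```json
[
  {"id": 1, "name": "first object"},
  {"id": 2, "name": "second object"}
]
```

With this input, the `data` handler above would fire twice, with `key` set to `0` and `1` respectively.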
```javascript
const StreamArray = require('stream-json/streamers/StreamArray');
const {Writable} = require('stream');
const path = require('path');
const fs = require('fs');

const fileStream = fs.createReadStream(path.join(__dirname, 'sample.json'));
const jsonStream = StreamArray.withParser();

const processingStream = new Writable({
  write({key, value}, encoding, callback) {
    // Save to Mongo or do any other async action here.
    setTimeout(() => {
      console.log(value);
      // The next record will be read only when the current one is fully processed.
      callback();
    }, 1000);
  },
  // Don't skip this, as we need to operate with objects, not buffers.
  objectMode: true
});

// Pipe the streams as follows:
fileStream.pipe(jsonStream.input);
jsonStream.pipe(processingStream);

// So we're waiting for the 'finish' event when everything is done.
processingStream.on('finish', () => console.log('All done'));
```

Please note: the examples above were tested against stream-json@1.1.3. For some previous versions (presumably before 1.0.0) you may have to:
```javascript
const StreamArray = require('stream-json/utils/StreamArray');
```

and then

```javascript
const jsonStream = StreamArray.make();
```