I'm currently using pptxgenjs to build a presentation and export it as a .pptx file. The pptExporter module contains many other functions for fetching assets, strings, and so on, but whenever I use large assets (images and videos bigger than their containers), there appears to be a memory leak: the container hits its mem_limit and gets killed. I took heap dumps before and after generating the ppt file, and the difference is huge. But somehow the heap stays large even after the response has been sent. Is it because I'm not streaming the data properly? pptxgenjs seems to take care of the assets while it builds the ppt.
If I don't use large image or video assets, the heap stays stable at around 51 MB. But with assets totalling 300 MB (or many assets), docker stats shows memory jumping to 1.6 GB at once.
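One thing worth checking first (a diagnostic sketch, not part of the original code): Buffers in Node live outside the V8 heap and are reported under `external` and RSS, which would explain a heap that looks stable at ~51 MB while `docker stats` reports 1.6 GB. The `logHeap` helper below is a hypothetical name; `process.memoryUsage()` itself is a Node built-in.

```javascript
// Log V8 heap vs off-heap (Buffer) memory around the export.
const logHeap = (label) => {
  const { heapUsed, rss, external } = process.memoryUsage();
  const mb = (n) => Math.round(n / 1024 / 1024);
  console.log(`${label}: heapUsed=${mb(heapUsed)}MB rss=${mb(rss)}MB external=${mb(external)}MB`);
  return { heapUsed, rss, external };
};

logHeap('before export');
// const pptFile = await getPptFile(project);  // the export step under test
logHeap('after export');
```

If `external`/`rss` grow while `heapUsed` stays flat, the asset Buffers (not the JS heap) are what is hitting the container limit, and a heap snapshot alone will undercount them.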
Part of pptExporter.js:
```javascript
export default async (project) => {
  const pptx = await initPresentation(project);
  // `slide` is created elsewhere in this file
  slide.addText(project.properties.courseTitle.properties.value.data, slideHelpers.mainTitleOptions);
  await createLessons(pptx, project);
  slide.addText(project.properties.courseAudience.properties.value.data, slideHelpers.mainAudienceOptions);
  // stream() resolves with base64 data here; Buffer.from replaces the deprecated new Buffer()
  return pptx.stream().then(data => Buffer.from(data, "base64"));
};
```
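Since `pptx.stream()` resolves with a base64 string here, the zip bytes, the base64 string (about 1.33x larger, living on the V8 heap), and the decoded Buffer can all be alive at once. The snippet below only illustrates that overhead with plain Buffers; whether your installed pptxgenjs version offers a direct node-buffer output type (which would skip the base64 copy entirely) is an assumption you would need to verify against its docs.

```javascript
// Rough illustration of the base64 round-trip cost: for a 3 MB payload,
// the base64 string is 4 MB (3 bytes -> 4 chars), and the string plus
// both Buffers can coexist until garbage collection runs.
const binary = Buffer.alloc(3 * 1024 * 1024);    // stand-in for a 3 MB deck
const asBase64 = binary.toString('base64');      // 4 MB string on the V8 heap
const decoded = Buffer.from(asBase64, 'base64'); // 3 MB more, off-heap
console.log(asBase64.length, decoded.length);
```

With 300 MB of assets the same multiplication applies, which alone accounts for a sizeable part of the spike you are seeing.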
getCoursePpt.js:
```javascript
import getPptFile from './pptExporter';

export default async (project, fileName, credentials) => {
  // heapdump.writeSnapshot takes an optional filename and callback in one call
  heapdump.writeSnapshot('/' + Date.now() + '.heapsnapshot', (err, filename) => {
    console.log('dump1 written to', filename);
  });
  const pptFile = await getPptFile(project);
  heapdump.writeSnapshot('/' + Date.now() + '.heapsnapshot', (err, filename) => {
    console.log('dump2 written to', filename);
  });
  const s3 = new AWS.S3({
    credentials: new AWS.Credentials({
      accessKeyId: credentials.accessKey,
      secretAccessKey: credentials.secretAccess
    })
  });
  const params = {
    Key: `${fileName}.pptx`,
    Bucket: credentials.name,
    Body: pptFile
  };
  // take the params as an argument so the call below actually passes them in
  const uploadPromise = (uploadParams) => new Promise((resolve, reject) => {
    s3.upload(uploadParams, (error, data) => {
      if (error) {
        reject(error);
      } else {
        resolve(data.Location);
      }
    });
  });
  // the old try/catch only rethrew, so the promise can be returned directly
  return uploadPromise(params);
};
```
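As a side note on the upload step: with AWS SDK v2, `s3.upload(params)` returns a ManagedUpload whose `.promise()` method resolves with the result, so the hand-rolled Promise wrapper can be dropped. `uploadPpt` and the `fakeS3` stub below are hypothetical names used only to show the call shape:

```javascript
// Hypothetical helper: the s3 client is injected so the shape is easy to test.
const uploadPpt = async (s3, params) => {
  const data = await s3.upload(params).promise();
  return data.Location;
};

// Minimal stub standing in for the real AWS.S3 client (illustration only):
const fakeS3 = {
  upload: (params) => ({
    promise: async () => ({ Location: `https://example.com/${params.Key}` })
  })
};
```

In getCoursePpt.js this would reduce the tail of the function to a single `return uploadPpt(s3, params);`.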
And then the route handler that runs on each request:
```javascript
getPptLink = async (request, response) => {
  const { user, body: { rootId } } = request;
  const link = await ppt.getCoursePpt(rootId, user);
  response.json({ link });
  heapdump.writeSnapshot('/' + Date.now() + '.heapsnapshot', (err, filename) => {
    console.log('dump4 written to', filename);
  });
};
```
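Finally, on the container limit itself: V8's old-space flag caps only the JS heap, not Buffer (external) memory, so a sketch like the one below (`server.js` is a placeholder for your entry point) will not stop large asset Buffers from pushing the container past mem_limit. It can, however, make V8 collect garbage more aggressively well before the container limit is reached.

```shell
# Cap the V8 heap at 512 MB; Buffers are counted as "external" memory
# and are NOT limited by this flag.
node --max-old-space-size=512 server.js
```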