Background:

S3 putObject in AWS Lambda (via Node) doubles the file size when saving to a bucket

Source: the Internet

I have been working with http.get and s3.putObject. Basically, I just want to get a file from an HTTP location and save it, as is, to a bucket in S3. Seems rather simple. The original filesize is 47kb.

The problem is, the retrieved file (47kb) is being saved to the S3 bucket (using s3.putObject) as 92.4kb in size. Somewhere along the way, the file has doubled in size, making it unusable.

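For context, this near-doubling is characteristic of binary data being coerced to a UTF-8 string: any byte sequence that is not valid UTF-8 decodes to the U+FFFD replacement character, which re-encodes as 3 bytes, while plain ASCII bytes stay at 1 byte, so a compressed image can land at roughly twice its size on average. A minimal standalone sketch (not from the original post) showing the effect in Node:

var buf = Buffer.from([0x48, 0x69, 0xff, 0xd8]); // "Hi" plus two non-ASCII bytes
var coerced = "" + buf;                          // implicit buf.toString('utf8'); each invalid byte decodes to U+FFFD
console.log(buf.length);                         // 4
console.log(Buffer.byteLength(coerced, 'utf8')); // 8 -- 1 + 1 + 3 + 3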

How do I prevent the file from doubling in size by the time it is saved to the S3 bucket?

Here's the entire code used:

// The original post omits the require statements; these are assumed here
// (AWS SDK for JavaScript v2, and the built-in https module, since the
// sample URL uses https):
var http = require('https');
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = function(event, context) {
    var imgSourceURL = "https://www.asite.com/an-image.jpg";
    var body; // starts undefined, so the first += below also prepends the literal string "undefined"
    var stagingparams;
    http.get(imgSourceURL, function(res) {
        res.on('data', function(chunk) { body += chunk; }); // <-- the bug: += coerces each binary Buffer chunk to a UTF-8 string
        res.on('end', function() {
            var tmp_contentType = res.headers['content-type']; // Reported as image/jpeg
            var tmp_contentLength = res.headers['content-length']; // The reported filesize is 50kb (the actual filesize on disk is 47kb)
            stagingparams = {
                Bucket: "myspecialbucket",
                Key: "mytestimage.jpg",
                Body: body
            };
            // When putObject saves the file to S3, it doubles the size of the file to 92.4kb, thus making file non-readable.
            s3.putObject(stagingparams, function(err, data) {
                if (err) {
                    console.error(err, err.stack);
                }
                else {
                    console.log(data);
                }
            });
        });
    });
};

1 Solution

#1



Use an array to store the readable stream bytes and then concatenate all the buffer instances in the array together before calling s3.putObject:

// Same assumed requires as above (AWS SDK v2 plus the https module):
var http = require('https');
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = function(event, context) {
    var imgSourceURL = "https://www.asite.com/an-image.jpg";
    var body = [];
    var stagingparams;
    http.get(imgSourceURL, function(res) {
        res.on('data', function(chunk) { body.push(chunk); }); // keep each chunk as a raw Buffer
        res.on('end', function() {
            var tmp_contentType = res.headers['content-type']; // Reported as image/jpeg
            var tmp_contentLength = res.headers['content-length']; // The reported filesize is 50kb (the actual filesize on disk is 47kb)
            stagingparams = {
                Bucket: "myspecialbucket",
                Key: "mytestimage.jpg",
                Body: Buffer.concat(body)
            };
            // Body is now a single binary Buffer, so putObject stores the file byte-for-byte at its original 47kb size.
            s3.putObject(stagingparams, function(err, data) {
                if (err) {
                    console.error(err, err.stack);
                }
                else {
                    console.log(data);
                }
            });
        });
    });
};
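As a side note, the buffering can be skipped entirely: unlike s3.putObject, the SDK's s3.upload accepts a readable stream as Body, so the HTTP response can be handed to S3 directly. A sketch under the same assumptions as above (aws-sdk v2, https module; the URL and bucket are the question's placeholders):

var https = require('https');
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = function(event, context) {
    https.get("https://www.asite.com/an-image.jpg", function(res) {
        s3.upload({
            Bucket: "myspecialbucket",
            Key: "mytestimage.jpg",
            Body: res, // the response object is itself a readable stream of raw bytes
            ContentType: res.headers['content-type']
        }, function(err, data) {
            if (err) { console.error(err, err.stack); }
            else { console.log(data.Location); }
        });
    });
};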
