AWS transcoder overwrites file on S3


I'm using the AWS PHP SDK to upload a file to S3, then transcode it with Elastic Transcoder.

On the first pass everything works fine; the putObject command overwrites the old file (always named the same) on S3:

$s3->putObject([
    'Bucket'     => Config::get('app.aws.S3.bucket'),
    'Key'        => $key,
    'SourceFile' => $path,
    'Metadata'   => [
        'title' => Input::get('title')
    ]
]);

However, when creating a second transcoding job, I get the error:

  The specified object could not be saved in the specified bucket because an object by that name already exists

The transcoder role has full S3 access. Is there a way around this, or will I have to delete the files using the SDK every time before transcoding?

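For the "delete every time" part of the question, a minimal sketch of that workaround, assuming the same $s3 client shown above and that the pipeline's output bucket is the bucket from app.aws.S3.bucket (deleteObject succeeds even if the key does not exist yet, so no existence check is needed before the first job):

```php
<?php
// Sketch: remove the stale transcoder output so the new job can
// write to the same key again. $s3 is the Aws\S3\S3Client from the
// question; $user and $output_key match the createJob call below.
$s3->deleteObject([
    'Bucket' => Config::get('app.aws.S3.bucket'),
    'Key'    => 'videos/'.$user.'/'.$output_key,
]);
// Thumbnails are separate objects with their own keys; if they use a
// fixed pattern as well, delete those too before re-running the job.
```
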
My createJob call:

    $result = $transcoder->createJob([
      'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
      'Input' => [
        'Key' => $key
      ],
      'Output' => [
        'Key'              => 'videos/'.$user.'/'.$output_key,
        'ThumbnailPattern' => 'videos/'.$user.'/thumb-{count}',
        'Rotate'           => '0',
        'PresetId'         => Config::get('app.aws.ElasticTranscoder.PresetId')
      ],
    ]);

2 solutions

#1 (score: 4)

The Amazon Elastic Transcoder service documents that this is the expected behavior here: https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/job-settings.html#job-settings-output-key.

If your workflow requires you to overwrite the same key, then it sounds like you should have the job output somewhere unique and then issue an S3 CopyObject operation to overwrite the older file.
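That suggestion could be sketched as follows (hypothetical key names; assumes the pipeline's output bucket is the same bucket used elsewhere in this post): have the job write to a unique temporary key, then CopyObject over the fixed key, which S3 happily overwrites, and delete the temporary object.

```php
<?php
// Sketch of this answer's approach: transcode to a unique key, then
// copy over the stable key. $transcoder is an ElasticTranscoderClient,
// $s3 an S3Client, as in the question.
$bucket   = Config::get('app.aws.S3.bucket');
$tempKey  = 'videos/'.$user.'/tmp-'.uniqid().'-'.$output_key; // unique, never collides
$finalKey = 'videos/'.$user.'/'.$output_key;

$transcoder->createJob([
    'PipelineId' => Config::get('app.aws.ElasticTranscoder.PipelineId'),
    'Input'      => ['Key' => $key],
    'Output'     => [
        'Key'      => $tempKey,
        'PresetId' => Config::get('app.aws.ElasticTranscoder.PresetId'),
    ],
]);

// ...after the job completes (poll readJob(), or subscribe to the
// pipeline's SNS completion notification)...
$s3->copyObject([
    'Bucket'     => $bucket,
    'Key'        => $finalKey,
    'CopySource' => $bucket.'/'.$tempKey, // CopyObject overwrites existing keys
]);
$s3->deleteObject(['Bucket' => $bucket, 'Key' => $tempKey]);
```
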

#2 (score: -1)

I can think of two ways to implement it:

  1. Create two buckets: one for temporary storage (where the file is uploaded) and another where the transcoded file is placed. After transcoding, once the new file is created, you can delete the temp file.
  2. Use a single bucket and upload the file with some suffix/prefix. Create the transcoded file in the same bucket, removing the prefix/suffix you used for the temp name.

In both cases, for automated deletion of uploaded files, you can use a Lambda function with S3 notifications.

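A minimal sketch of that wiring, using the S3 bucket-notification API (the Lambda ARN and prefix below are placeholders, not real resources): invoke a Lambda whenever a transcoded file appears, and let the Lambda delete the matching temp upload.

```php
<?php
// Sketch: fire a (placeholder) Lambda on every object created under
// 'videos/'; the Lambda handler would then delete the temp upload
// that produced it. One-time bucket configuration, not per-job code.
$s3->putBucketNotificationConfiguration([
    'Bucket' => Config::get('app.aws.S3.bucket'),
    'NotificationConfiguration' => [
        'LambdaFunctionConfigurations' => [[
            // Placeholder ARN -- substitute your cleanup function.
            'LambdaFunctionArn' => 'arn:aws:lambda:us-east-1:123456789012:function:cleanup-temp-upload',
            'Events'            => ['s3:ObjectCreated:*'],
            'Filter' => [
                'Key' => ['FilterRules' => [
                    ['Name' => 'prefix', 'Value' => 'videos/'],
                ]],
            ],
        ]],
    ],
]);
```
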

