How to get the md5sum of a file on Amazon's S3
  • Asked: 2009-11-21 15:44:45
  • Tags: amazon-s3

If I have existing files on Amazon's S3, what's the easiest way to get their md5sum without having to download the files?

Answers

AWS's documentation of ETag says:

The entity tag is a hash of the object. The ETag reflects changes only to the contents of an object, not its metadata. The ETag may or may not be an MD5 digest of the object data. Whether or not it is depends on how the object was created and how it is encrypted as described below:

  • Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-S3 or plaintext, have ETags that are an MD5 digest of their object data.
  • Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.
  • If an object is created by either the Multipart Upload or Part Copy operation, the ETag is not an MD5 digest, regardless of the method of encryption.

Reference: http://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonResponseHeaders.html

The ETag is not an MD5 for multipart uploads (as per Gael Fraiteur's comment). In these cases it carries a suffix consisting of a minus sign and a number. However, even the part before the minus is not the MD5 of the whole object, although it is the same length as an MD5. The suffix is the number of parts that were uploaded.
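
If all you need is to read the ETag without downloading the object, a HEAD request is enough. A minimal sketch with boto3 (the bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')
response = s3.head_object(Bucket='my-bucket', Key='path/to/file')

# S3 wraps the ETag in double quotes, e.g. '"9bb58f26192e4ba00f01e2e7b136bbd8"'.
etag = response['ETag'].strip('"')
print(etag)  # a plain MD5 for single-part objects, "<hash>-<parts>" for multipart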

This is a very old question, but I had a hard time finding the information below, and this was one of the first places I could find it, so I wanted to detail it here in case anyone needs it.

The ETag is an MD5. But for multipart-uploaded files, the ETag is computed from the concatenation of the MD5s of each uploaded part. So you don't need to compute the MD5 on the server. Just get the ETag and that's it.

As @EmersonFarrugia said in this answer:

Say you uploaded a 14MB file and your part size is 5MB. Calculate 3 MD5 checksums corresponding to each part, i.e. the checksum of the first 5MB, the second 5MB, and the last 4MB. Then take the checksum of their concatenation. Since MD5 checksums are hex representations of binary data, just make sure you take the MD5 of the decoded binary concatenation, not of the ASCII or UTF-8 encoded concatenation. When that's done, add a hyphen and the number of parts to get the ETag.

So the only other thing you need besides the ETag is the upload part size. The ETag has a -NumberOfParts suffix, so you can divide the file size by the number of parts to recover the part size. 5 MB is the minimum part size and the default value. The part size has to be a whole number of megabytes, so you can't get something like 7.25 MB per part. So it should be easy to work out the part size, as the sketch below shows.
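
To make that inference concrete, here is a tiny sketch (the file size and ETag are made-up values):

import math

file_size_mb = 14  # hypothetical size of the local copy
etag = 'd41d8cd98f00b204e9800998ecf8427e-3'  # made-up multipart ETag

num_parts = int(etag.split('-')[1])
# Round up to whole megabytes, assuming the uploader used a whole-MB part size.
part_size_mb = math.ceil(file_size_mb / num_parts)
print(part_size_mb)  # 5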

Here is a script to do this on OS X, with a Linux version in the comments: https://gist.github.com/emersonf/7413337

I'll leave both scripts here in case the page above is no longer accessible in the future:

Linux version:

#!/bin/bash
set -euo pipefail

if [ $# -ne 2 ]; then
    echo "Usage: $0 file partSizeInMb"
    exit 0
fi

file=$1
if [ ! -f "$file" ]; then
    echo "Error: $file not found."
    exit 1
fi

partSizeInMb=$2
fileSizeInMb=$(du -m "$file" | cut -f 1)
parts=$((fileSizeInMb / partSizeInMb))
if [[ $((fileSizeInMb % partSizeInMb)) -gt 0 ]]; then
    parts=$((parts + 1))
fi

checksumFile=$(mktemp -t s3md5.XXXXXXXXXXXXX)

# Hash each part and collect the hex digests, one per line.
for (( part=0; part<$parts; part++ ))
do
    skip=$((partSizeInMb * part))
    dd bs=1M count="$partSizeInMb" skip="$skip" if="$file" 2> /dev/null | md5sum >> "$checksumFile"
done

# The ETag is the MD5 of the concatenated binary digests, plus "-<parts>".
etag=$(echo $(xxd -r -p "$checksumFile" | md5sum)-$parts | sed 's/ --/-/')
printf '%s\t%s\n' "$1" "$etag"
rm "$checksumFile"

OSX version:

#!/bin/bash

if [ $# -ne 2 ]; then
    echo "Usage: $0 file partSizeInMb"
    exit 0
fi

file=$1

if [ ! -f "$file" ]; then
    echo "Error: $file not found."
    exit 1
fi

partSizeInMb=$2
fileSizeInMb=$(du -m "$file" | cut -f 1)
parts=$((fileSizeInMb / partSizeInMb))
if [[ $((fileSizeInMb % partSizeInMb)) -gt 0 ]]; then
    parts=$((parts + 1))
fi

checksumFile=$(mktemp -t s3md5)

# Hash each part; note the lowercase bs=1m and the md5 utility on macOS.
for (( part=0; part<$parts; part++ ))
do
    skip=$((partSizeInMb * part))
    dd bs=1m count="$partSizeInMb" skip="$skip" if="$file" 2>/dev/null | md5 >> "$checksumFile"
done

echo "$(xxd -r -p "$checksumFile" | md5)-$parts"
rm "$checksumFile"

The following works for me to compare a local file's checksum with the S3 ETag. I used Python:

import hashlib

import boto3


def md5_checksum(filename):
    m = hashlib.md5()
    with open(filename, 'rb') as f:
        for data in iter(lambda: f.read(1024 * 1024), b''):
            m.update(data)
    return m.hexdigest()


def etag_checksum(filename, chunk_size=8 * 1024 * 1024):
    md5s = []
    with open(filename, 'rb') as f:
        for data in iter(lambda: f.read(chunk_size), b''):
            md5s.append(hashlib.md5(data).digest())
    m = hashlib.md5(b''.join(md5s))
    return '{}-{}'.format(m.hexdigest(), len(md5s))


def etag_compare(filename, etag):
    et = etag[1:-1]  # strip the quotes S3 wraps around the ETag
    if '-' in et and et == etag_checksum(filename):
        return True
    if '-' not in et and et == md5_checksum(filename):
        return True
    return False


def main():
    # s3_accesskey, s3_secret, bucket_name, your_key and filename are
    # placeholders to fill in.
    session = boto3.Session(
        aws_access_key_id=s3_accesskey,
        aws_secret_access_key=s3_secret
    )
    s3 = session.client('s3')
    obj_dict = s3.get_object(Bucket=bucket_name, Key=your_key)

    etag = obj_dict['ETag']
    validation = etag_compare(filename, etag)
    print(validation)
    return validation

For anyone who has spent time searching around to find out why their MD5 does not match the ETag in S3:

For multipart uploads, S3 calculates the MD5 of each chunk of data, concatenates the binary digests, takes the MD5 of that concatenation, and keeps the number of chunks at the end.

Here is a C# version that generates the hash:

    string etag = HashOf("file.txt", 8);

Source code:

    private string HashOf(string filename, int chunkSizeInMb)
    {
        string returnMD5 = string.Empty;
        int chunkSize = chunkSizeInMb * 1024 * 1024;

        using (var crypto = new MD5CryptoServiceProvider())
        {
            int hashLength = crypto.HashSize / 8;

            using (var stream = File.OpenRead(filename))
            {
                if (stream.Length > chunkSize)
                {
                    int chunkCount = (int)Math.Ceiling((double)stream.Length / (double)chunkSize);

                    // Buffer holding the concatenated binary MD5 digests of all chunks.
                    byte[] hash = new byte[chunkCount * hashLength];
                    Stream hashStream = new MemoryStream(hash);

                    long nByteLeftToRead = stream.Length;
                    while (nByteLeftToRead > 0)
                    {
                        int nByteCurrentRead = (int)Math.Min(nByteLeftToRead, chunkSize);
                        byte[] buffer = new byte[nByteCurrentRead];

                        // Stream.Read may return fewer bytes than requested,
                        // so loop until the chunk buffer is completely filled.
                        int offset = 0;
                        while (offset < nByteCurrentRead)
                        {
                            offset += stream.Read(buffer, offset, nByteCurrentRead - offset);
                        }
                        nByteLeftToRead -= nByteCurrentRead;

                        byte[] tmpHash = crypto.ComputeHash(buffer);
                        hashStream.Write(tmpHash, 0, hashLength);
                    }

                    // Multipart-style ETag: MD5 of the concatenated digests plus "-<chunkCount>".
                    returnMD5 = BitConverter.ToString(crypto.ComputeHash(hash)).Replace("-", string.Empty).ToLower() + "-" + chunkCount;
                }
                else
                {
                    // Single-part objects have a plain MD5 as their ETag.
                    returnMD5 = BitConverter.ToString(crypto.ComputeHash(stream)).Replace("-", string.Empty).ToLower();
                }
            }
        }
        return returnMD5;
    }

As of 2022-02-25, S3 features a new checksum retrieval function, GetObjectAttributes:

New – Additional Checksum Algorithms for Amazon S3 | AWS News Blog

Checksum Retrieval – The new GetObjectAttributes function returns the checksum for the object and (if applicable) for each part.

This function supports SHA-1, SHA-256, CRC-32, and CRC-32C for checking the integrity of the transmission.

It appears that MD5 is not an option for the new features, so this may not resolve your original question, but MD5 is deprecated for lots of reasons, and if an alternate checksum works for you, this may be what you're looking for.
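
For example, checksum retrieval with boto3 might look roughly like this (a sketch; the bucket and key are placeholders, and the object must have been uploaded with one of the new checksum algorithms for Checksum to be present):

import boto3

s3 = boto3.client('s3')
attrs = s3.get_object_attributes(
    Bucket='my-bucket',
    Key='path/to/file',
    ObjectAttributes=['Checksum', 'ObjectParts'],
)
print(attrs.get('Checksum'))  # e.g. {'ChecksumSHA256': '...base64 digest...'}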

The easiest way would be to set the checksum yourself as metadata before you upload these files to your bucket:

ObjectMetadata md = new ObjectMetadata();
md.setContentMD5("foobar"); // "foobar" stands in for the base64-encoded MD5 digest of the file
PutObjectRequest req = new PutObjectRequest(BUCKET, KEY, new File("/path/to/file")).withMetadata(md);
tm.upload(req).waitForUploadResult();

Now you can access this metadata without downloading the file:

ObjectMetadata md2 = s3Client.getObjectMetadata(BUCKET, KEY);
System.out.println(md2.getContentMD5());

Source: https://github.com/aws/aws-sdk-java/issues/1711
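
For comparison, the same idea in Python: a sketch that stores the local MD5 as user-defined metadata at upload time and reads it back with a HEAD request (bucket, key, and file path are placeholders):

import hashlib

import boto3

s3 = boto3.client('s3')
path = '/path/to/file'

# Compute the MD5 locally and attach it as user-defined metadata.
with open(path, 'rb') as f:
    local_md5 = hashlib.md5(f.read()).hexdigest()
s3.upload_file(path, 'my-bucket', 'my-key',
               ExtraArgs={'Metadata': {'md5': local_md5}})

# Later: read the metadata back without downloading the object.
head = s3.head_object(Bucket='my-bucket', Key='my-key')
print(head['Metadata']['md5'])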

I found that s3cmd has a --list-md5 option that can be used with the ls command, e.g.

s3cmd ls --list-md5 s3://bucket_of_mine/

Hope this helps.

MD5 is a deprecated algorithm and not supported by AWS S3's new checksum features, but you can get the SHA-256 checksum, given that you upload the file with the --checksum-algorithm flag, like this:

aws s3api put-object --bucket picostat --key nasdaq.csv --body nasdaq.csv --checksum-algorithm SHA256

That will return output like this:

{
    "ETag": ""25f798aae1c15d44a556366b24d85b6d"",
    "ChecksumSHA256": "TEqQVO6ZsOR9FEDv3ofP8KDKbtR02P6foLKEQYFd+MI=",
    "ServerSideEncryption": "AES256"
}

Then run this on the original file to produce the matching base64-encoded digest for comparison:

shasum -a 256 nasdaq.csv | cut -f1 -d ' ' | xxd -r -p | base64

Replace the references to the CSV file with your own and make the bucket name your own.

Whenever you want to retrieve the checksum, you can run:

aws s3api get-object-attributes --bucket picostat --key nasdaq.csv --object-attributes "Checksum"
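
If you would rather do the local computation in Python than in the shell, a sketch of the same base64-encoded SHA-256 (the file name is a placeholder):

import base64
import hashlib

with open('nasdaq.csv', 'rb') as f:
    digest = hashlib.sha256(f.read()).digest()

# S3 reports checksums base64-encoded, so encode the raw digest the same way.
print(base64.b64encode(digest).decode())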

I have cross-checked jets3t and the management console against uploaded files' MD5 sums, and the ETag seems to be equal to the MD5 sum. You can just view the properties of the file in the AWS Management Console:

https://console.aws.amazon.com/s3/home

I have used the following approach with success. I present here a Python fragment with notes.

Let's suppose we want the MD5 checksum for an object stored in S3, and that the object was loaded using the multipart upload process. The ETag value stored with the object in S3 is then not the MD5 checksum we want. The following Python commands can be used to stream the bytes of the object, without saving the object to a local file, in order to compute the desired MD5 checksum. Please note this approach assumes a connection to the S3 account containing the object has been established, and that the boto3 and hashlib modules have been imported:

#
# specify the S3 object...
#
bucket_name = "raw-data"
object_key = "/date/study-name/sample-name/file-name"
s3_object = s3.Object(bucket_name, object_key)

#
# compute the MD5 checksum for the specified object...
#
s3_object_md5 = hashlib.md5(s3_object.get()['Body'].read()).hexdigest()

This approach works for all objects stored in S3 (i.e., objects that have been loaded with or without using the multipart upload process).
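
One caveat: .read() pulls the whole object into memory. For large objects, the same MD5 can be computed incrementally from the streaming body, roughly like this (reusing s3_object and the hashlib import from the snippet above):

m = hashlib.md5()
for chunk in s3_object.get()['Body'].iter_chunks(chunk_size=1024 * 1024):
    m.update(chunk)
s3_object_md5 = m.hexdigest()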

This works for me. In PHP, you can compare the checksum of a local file and an Amazon file using this:



    // get local file md5
    $checksum_local_file = md5_file('/home/file');

    // compare checksum between local file and s3 file
    public function compareChecksumFile($file_s3, $checksum_local_file) {

        $Connection = new AmazonS3();
        $bucket = 'amazon_bucket';
        $header = $Connection->get_object_headers($bucket, $file_s3);

        // get header
        if (empty($header) || !is_object($header)) {
            throw new RuntimeException('checksum error');
        }
        $head = $header->header;
        if (empty($head) || !is_array($head)) {
            throw new RuntimeException('checksum error');
        }
        // get etag (md5 amazon)
        $etag = $head['etag'];
        if (empty($etag)) {
            throw new RuntimeException('checksum error');
        }
        // remove quotes
        $checksumS3 = str_replace('"', '', $etag);

        // compare md5
        if ($checksum_local_file === $checksumS3) {
            return TRUE;
        } else {
            return FALSE;
        }
    }

Here's the code to get the S3 ETag for an object in PowerShell, converted from the C# version above.

function Get-ETag {
  [CmdletBinding()]
  param(
    [Parameter(Mandatory=$true)]
    [string]$Path,
    [Parameter(Mandatory=$true)]
    [int]$ChunkSizeInMb
  )

  $returnMD5 = [string]::Empty
  [int]$chunkSize = $ChunkSizeInMb * [Math]::Pow(2, 20)

  $crypto = New-Object System.Security.Cryptography.MD5CryptoServiceProvider
  [int]$hashLength = $crypto.HashSize / 8

  $stream = [System.IO.File]::OpenRead($Path)

  if($stream.Length -gt $chunkSize) {
    $chunkCount = [int][Math]::Ceiling([double]$stream.Length / [double]$chunkSize)
    [byte[]]$hash = New-Object byte[]($chunkCount * $hashLength)
    $hashStream = New-Object System.IO.MemoryStream(,$hash)
    [long]$numBytesLeftToRead = $stream.Length
    while($numBytesLeftToRead -gt 0) {
      $numBytesCurrentRead = [int][Math]::Min($numBytesLeftToRead, $chunkSize)
      $buffer = New-Object byte[] $numBytesCurrentRead
      $numBytesLeftToRead -= $stream.Read($buffer, 0, $numBytesCurrentRead)
      $tmpHash = $crypto.ComputeHash($buffer)
      $hashStream.Write($tmpHash, 0, $hashLength)
    }
    $returnMD5 = [System.BitConverter]::ToString($crypto.ComputeHash($hash)).Replace("-", "").ToLower() + "-" + $chunkCount
  }
  else {
    $returnMD5 = [System.BitConverter]::ToString($crypto.ComputeHash($stream)).Replace("-", "").ToLower()
  }

  $stream.Close()  
  $returnMD5
}
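
Usage mirrors the C# version above, e.g. Get-ETag -Path .\file.txt -ChunkSizeInMb 8 for an 8 MB part size.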



