I have a big file transfer (say 4 GB or so), and rather than using shutil, I'm just opening and writing the file the normal way so I can include a progress percentage as it goes.
It then occurred to me to try to resume the write if the transfer borks out partway through, but I haven't had any luck. I presumed it would be some clever combination of offsetting the read of the source file and using seek, but nothing has worked so far. Any ideas?
Additionally, is there some dynamic way to figure out what block size to use when reading and writing files? I'm fairly new to that area and have only read that you should use a larger size for larger files (I'm using 65536 at the moment). Is there a smart way to pick it, or does one simply guess? Thanks, guys.
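For context, the basic (non-resuming) loop I started from looks roughly like the sketch below; copy_with_progress is just a placeholder name and the details are simplified from my real code:

import os
import sys

def copy_with_progress(src, dst, block_size=65536):
    # Total size of the source, used only for the percentage display
    total = os.path.getsize(src) or 1
    copied = 0
    fsrc = open(src, 'rb')
    fdst = open(dst, 'wb')
    while True:
        block = fsrc.read(block_size)
        if not block:
            break
        fdst.write(block)
        copied += len(block)
        # \r keeps the percentage on a single console line
        sys.stdout.write('\r%.1f%%' % (copied * 100.0 / total))
        sys.stdout.flush()
    fsrc.close()
    fdst.close()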
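The closest I've come to something dynamic is asking the OS for a hint, roughly like the sketch below; pick_block_size is my own placeholder name, and I gather st_blksize is POSIX-only, so I'm not sure this counts as the "smart" way:

import io
import os

def pick_block_size(path):
    # Start from the io module's default buffer size
    block_size = io.DEFAULT_BUFFER_SIZE
    try:
        # st_blksize is the filesystem's preferred I/O size (POSIX only)
        hint = os.stat(path).st_blksize
        if hint > 0:
            # Read in multiples of the hint so the calls aren't tiny
            block_size = max(block_size, hint * 16)
    except (AttributeError, OSError):
        # Not available on this platform; keep the default
        pass
    return block_size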
Here is the code snippet for the appending (resume) part of the transfer:
import os

newsrc = open(src, 'rb')
dest_size = os.stat(destFile).st_size
print 'Dest file exists, resuming at block %s' % dest_size
# Skip the part of the source that is already in the destination
newsrc.seek(dest_size)
newdest = open(destFile, 'a')
cur_block_pos = dest_size
# Start copying the remainder of the file
while True:
    cur_block = newsrc.read(131072)
    cur_block_pos += 131072
    if not cur_block:
        break
    else:
        newdest.write(cur_block)
It does append and start writing, but it then writes dest_size more data at the end than it should, for reasons that are probably obvious to the rest of you. Any ideas?