**S3utils deals with files on an Amazon S3 bucket.**
In your settings file:
S3UTILS_DEBUG_LEVEL = 1
AWS_ACCESS_KEY_ID = 'your access key'
AWS_SECRET_ACCESS_KEY = 'your secret key'
AWS_STORAGE_BUCKET_NAME = 'your bucket name'
Then in your code:
from s3utils import S3utils
s3utils = S3utils()
Or, to pass the credentials directly, in your code:
from s3utils import S3utils
s3utils = S3utils(
AWS_ACCESS_KEY_ID = 'your access key',
AWS_SECRET_ACCESS_KEY = 'your secret key',
AWS_STORAGE_BUCKET_NAME = 'your bucket name',
S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
)
Methods
chmod
sets permissions for a file on S3.

Parameters:
    target_file : string
    acl : string, optional
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> s3utils.chmod("path/to/file", "private")
connect
establishes the connection to S3.

connect_cloudfront
connects to CloudFront, which provides more control than S3 alone.
cp
copies a file or folder from the local file system to S3.

Parameters:
    local_path : string
    target_path : string
    acl : string, optional
    del_after_upload : boolean, optional
    overwrite : boolean, optional
    invalidate : boolean, optional
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> s3utils.cp("path/to/folder", "/test/")
copying /path/to/myfolder/test2.txt to test/myfolder/test2.txt
copying /path/to/myfolder/test.txt to test/myfolder/test.txt
copying /path/to/myfolder/hoho/photo.JPG to test/myfolder/hoho/photo.JPG
copying /path/to/myfolder/hoho/haha/ff to test/myfolder/hoho/haha/ff
cp_cropduster_image
handles saving Cropduster images to S3.
disconnect
closes the connection to S3.
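If you need to open and close the connection explicitly, a minimal sketch (the file path is a placeholder, and whether the other methods manage the connection for you automatically is an assumption here, so verify against the source):

>>> from s3utils import S3utils
>>> s3utils = S3utils()  # credentials are read from your settings file
>>> s3utils.connect()
>>> s3utils.chmod("path/to/file", "private")
>>> s3utils.disconnect()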
get_grants
returns the grant permission, grant owner, grant owner email and grant ID as a list. You need to set k.key to a key on Amazon (a file path) before running this. Note that Amazon returns a list of grants for each file.
invalidate
invalidates the CDN (distribution) cache for a certain file or files. This might take up to 15 minutes to take effect.
You can check the invalidation status using check_invalidation_request.
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> aa = s3utils.invalidate("test/no_upload/hoho/photo.JPG")
>>> print aa
('your distro id', u'your request id')
>>> invalidation_request_id = aa[1]
>>> bb = s3utils.check_invalidation_request(*aa)
>>> for inval in bb:
... print 'Object: %s, ID: %s, Status: %s' % (inval, inval.id, inval.status)
ll
gets the list of files and their permissions from S3.

Parameters:
    folder : string
    num : integer, optional
    begin_from_file : string, optional
    all_grant_data : boolean, optional
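A usage sketch for listing files along with their permissions (the folder name here is hypothetical, passing the optional parameter by keyword is an assumption, and the exact shape of the returned grant data may differ between versions):

>>> from s3utils import S3utils
>>> s3utils = S3utils()  # credentials are read from your settings file
>>> s3utils.ll("test/")
>>> s3utils.ll("test/", all_grant_data=True)  # request the full grant data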
ls
gets the list of file names (keys) in an S3 folder.

Parameters:
    folder : string
    num : integer, optional
    begin_from_file : string, optional
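A similar sketch for listing only the key names (the folder and file names are hypothetical, passing the optional parameters by keyword is an assumption, and the begin_from_file semantics are inferred from the parameter name):

>>> from s3utils import S3utils
>>> s3utils = S3utils()  # credentials are read from your settings file
>>> s3utils.ls("test/")
>>> s3utils.ls("test/", num=10)  # limit the listing to 10 keys
>>> s3utils.ls("test/", begin_from_file="test/myfolder/test.txt")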
mv
moves a file or folder to S3 and deletes the local copy.
It is essentially s3utils.cp with del_after_upload=True.
Examples
>>> from s3utils import S3utils
>>> s3utils = S3utils(
... AWS_ACCESS_KEY_ID = 'your access key',
... AWS_SECRET_ACCESS_KEY = 'your secret key',
... AWS_STORAGE_BUCKET_NAME = 'your bucket name',
... S3UTILS_DEBUG_LEVEL = 1,  # change it to 0 for less verbose output
... )
>>> s3utils.mv("path/to/folder", "/test/")
moving /path/to/myfolder/test2.txt to test/myfolder/test2.txt
moving /path/to/myfolder/test.txt to test/myfolder/test.txt
moving /path/to/myfolder/hoho/photo.JPG to test/myfolder/hoho/photo.JPG
moving /path/to/myfolder/hoho/haha/ff to test/myfolder/hoho/haha/ff