Advanced usage

This page covers some of the more advanced topics: the ones you rarely need, but which are nice to have written down anyway.

Specifying a timeout

Pass a suitable timeout argument to the S3Bucket constructor:

simples3.S3Bucket.__init__(name, access_key, secret_key[, base_url, timeout])
  • name – the bucket name
  • access_key – public component of the access key
  • secret_key – private component of the access key
  • base_url – URL override for the bucket (required when hosting under your own domain; omit the trailing slash)
  • timeout – if given, every request will have that many seconds to complete before it is aborted

Alternatively, the timeout can be set after construction via the attribute of the same name:

>>> bucket.timeout = 10.0  # 10 seconds of timeout
>>> bucket.timeout = None  # timeout disabled

Note that this timeout is fairly blunt: it is applied to every request the bucket makes, which may or may not be desirable.

Temporarily disabling the timeout


New in version 1.1.

simples3.S3Bucket.timeout_disabled()

A context manager that disables any timeout set for the bucket. For example:

>>> with bucket.timeout_disabled():
...     bucket.put("large_file.bin", large_file)
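The effect can be sketched with a minimal stand-in class (the Bucket class below is illustrative only, not simples3's implementation): save the current timeout, clear it for the duration of the block, and restore it on exit even if an exception is raised.

```python
from contextlib import contextmanager

class Bucket:
    """Illustrative stand-in for an S3Bucket-like object."""
    def __init__(self, timeout=None):
        self.timeout = timeout

    @contextmanager
    def timeout_disabled(self):
        # Save the current timeout and clear it for the duration of
        # the with-block; restore it afterwards, even on error.
        saved, self.timeout = self.timeout, None
        try:
            yield self
        finally:
            self.timeout = saved

bucket = Bucket(timeout=10.0)
with bucket.timeout_disabled():
    pass  # inside the block, bucket.timeout is None
# afterwards, bucket.timeout is 10.0 again
```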

Copying keys

simples3.S3Bucket.copy(source, key[, acl, metadata, mimetype, headers])
  • source – the bucket and key to copy in <bucket>/<key> format
  • key – the destination key to copy to
  • acl – the ACL to set for the destination key
  • metadata – if set, the metadata is replaced with this

Copying keys within the same bucket, or between two buckets on the same account, is fairly straightforward.

The method you’re looking for is S3Bucket.copy(). It takes a source and a destination; the source is specified in <bucket>/<key> format:

s3b.copy("source/file.txt", "copied.txt", acl="private")

Note that the ACL must be specified: S3 does not copy the ACL from the source key.

You can use the same bucket for source and destination:

s3b.copy(s3b.name + "/old.txt", "new.txt", acl="private")

If metadata is specified, it specifies new metadata to set for this key. Otherwise, the previous metadata is copied by S3.

New in version 0.5.

Modifying existing metadata

Although Amazon S3 doesn’t have any provisions for doing this, there’s a neat trick that can be played which avoids reuploading the entire key to the bucket because of metadata change.

S3 allows you to copy from and to the exact same key, combine that with being able to replace metadata when copying, and you’ve got a recipe for changing metadata:

>>> s3b.copy(s3b.name + "/" + key, key, acl="private",
...          metadata={"new": "metadata"})

Creating and deleting buckets

simples3.S3Bucket.put_bucket([config_xml, acl])
  • config_xml – configuration XML to use for bucket creation
  • acl – the ACL for the bucket

Creates the bucket. Calling this on an existing bucket changes its ACL or configuration.

This is the configuration snippet to use for Europe-based buckets (S3’s standard CreateBucketConfiguration document):

<?xml version="1.0" encoding="utf-8" ?>
<CreateBucketConfiguration>
  <LocationConstraint>EU</LocationConstraint>
</CreateBucketConfiguration>

simples3.S3Bucket.delete_bucket()

Deletes the current bucket.

Generating authenticated or private URLs

simples3.S3Bucket.make_url_authed(key[, expire])
  • key – key for which to generate authenticated URL
  • expire – expiry delta, i.e. how long until the URL expires (default is five minutes from now)

Generate an authenticated URL for a key.

An authenticated URL makes it possible for an unauthenticated client to access content that otherwise would be protected by the ACL for that object.

In practice, this means you can selectively allow third parties access to some of your content for a limited period of time.

expire can be a datetime.timedelta instance, a datetime.datetime instance, an integer delta in seconds, or a UNIX timestamp in UTC.
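For illustration, the accepted forms can be written out like this (values only; no request is made, and the concrete numbers are examples):

```python
import calendar
import datetime

# Four ways to express an expiry of five minutes, per the
# parameter description above (illustrative values only):
delta = datetime.timedelta(minutes=5)            # a timedelta
when = datetime.datetime(2024, 1, 1, 12, 5, 0)   # an absolute datetime
seconds = 300                                    # an integer delta in seconds
stamp = calendar.timegm(when.utctimetuple())     # a UNIX timestamp in UTC
```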


This method replaces S3Bucket.url_for(), which still exists but is deprecated.
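For background, authenticated URLs of this kind are built on S3’s classic query-string signing scheme. The sketch below shows the general shape; it is a simplified assumption, not simples3’s actual implementation, and the virtual-hosted endpoint format is also assumed:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_url(access_key, secret_key, bucket, key, expires):
    # Legacy S3 query-string auth: sign "GET\n\n\n<expires>\n/<bucket>/<key>"
    # with HMAC-SHA1 of the secret key, then pass the signature as a
    # query parameter alongside the access key and expiry timestamp.
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode("ascii"), safe="")
    return ("https://%s.s3.amazonaws.com/%s"
            "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
            % (bucket, key, access_key, expires, signature))
```

With fixed inputs the result is deterministic, so such URLs can be generated entirely offline without making any network request.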

class simples3.bucket.ReadOnlyS3Bucket

This subclass of simples3.S3Bucket never calls urllib2, which makes it useful for generating authenticated URLs without performing any network requests.