=Amazon S3=

==Setup==

Paths are specified as '''remote:bucket''' (or '''remote:''' for the '''lsd''' command). You may put subdirectories in too, eg '''remote:bucket/path/to/dir'''.

Here is an example of making an S3 configuration. First run:

<pre>
rclone config
</pre>

This will guide you through an interactive setup process:
<pre>
No remotes found - make a new one
n) New remote
s) Set configuration password
n/s> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Cloud Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 6 / Google Drive
   \ "drive"
 7 / Hubic
   \ "hubic"
 8 / Local Disk
   \ "local"
 9 / Microsoft OneDrive
   \ "onedrive"
10 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
11 / Yandex Disk
   \ "yandex"
Storage> 2
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id> access_key
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key> secret_key
Region to connect to.
Choose a number from below, or type in your own value
   / The default endpoint - a good choice if you are unsure.
 1 | US Region, Northern Virginia or Pacific Northwest.
   | Leave location constraint empty.
   \ "us-east-1"
   / US West (Oregon) Region
 2 | Needs location constraint us-west-2.
   \ "us-west-2"
   / US West (Northern California) Region
 3 | Needs location constraint us-west-1.
   \ "us-west-1"
   / EU (Ireland) Region
 4 | Needs location constraint EU or eu-west-1.
   \ "eu-west-1"
   / EU (Frankfurt) Region
 5 | Needs location constraint eu-central-1.
   \ "eu-central-1"
   / Asia Pacific (Singapore) Region
 6 | Needs location constraint ap-southeast-1.
   \ "ap-southeast-1"
   / Asia Pacific (Sydney) Region
 7 | Needs location constraint ap-southeast-2.
   \ "ap-southeast-2"
   / Asia Pacific (Tokyo) Region
 8 | Needs location constraint ap-northeast-1.
   \ "ap-northeast-1"
   / South America (Sao Paulo) Region
 9 | Needs location constraint sa-east-1.
   \ "sa-east-1"
   / If using an S3 clone that only understands v2 signatures
10 | eg Ceph/Dreamhost
   | set this and make sure you set the endpoint.
   \ "other-v2-signature"
   / If using an S3 clone that understands v4 signatures set this
11 | and make sure you set the endpoint.
   \ "other-v4-signature"
region> 1
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Specify if using an S3 clone such as Ceph.
endpoint>
Location constraint - must be set to match the Region. Used when creating buckets only.
Choose a number from below, or type in your own value
 1 / Empty for US Region, Northern Virginia or Pacific Northwest.
   \ ""
 2 / US West (Oregon) Region.
   \ "us-west-2"
 3 / US West (Northern California) Region.
   \ "us-west-1"
 4 / EU (Ireland) Region.
   \ "eu-west-1"
 5 / EU Region.
   \ "EU"
 6 / Asia Pacific (Singapore) Region.
   \ "ap-southeast-1"
 7 / Asia Pacific (Sydney) Region.
   \ "ap-southeast-2"
 8 / Asia Pacific (Tokyo) Region.
   \ "ap-northeast-1"
 9 / South America (Sao Paulo) Region.
   \ "sa-east-1"
location_constraint> 1
Remote config
--------------------
[remote]
env_auth = false
access_key_id = access_key
secret_access_key = secret_key
region = us-east-1
endpoint =
location_constraint =
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
</pre>
==Usage==

This remote is called '''remote''' and can now be used like this:

See all buckets

<pre>
rclone lsd remote:
</pre>

Make a new bucket

<pre>
rclone mkdir remote:bucket
</pre>

List the contents of a bucket

<pre>
rclone ls remote:bucket
</pre>

Sync '''/home/local/directory''' to the remote bucket, deleting any excess files in the bucket.

<pre>
rclone sync /home/local/directory remote:bucket
</pre>
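Since '''sync''' deletes files on the destination, it is worth trying the command first with the '''--dry-run''' flag, which reports what would be copied and deleted without changing anything:

<pre>
rclone --dry-run sync /home/local/directory remote:bucket
</pre>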
==Modified Time==

The modified time is stored as metadata on the object as '''X-Amz-Meta-Mtime''', a floating point number of seconds since the epoch, accurate to 1 ns.
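You can check the modification times rclone sees with the '''lsl''' command, which lists the size, modification time and path of each object:

<pre>
rclone lsl remote:bucket
</pre>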
==Multipart uploads==

rclone supports multipart uploads with S3, which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don’t have an MD5SUM.
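One way to spot such objects is '''rclone md5sum''', which prints an md5sum-style listing of a bucket; objects uploaded via multipart upload should appear with a blank hash:

<pre>
rclone md5sum remote:bucket
</pre>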
==Buckets and Regions==

With Amazon S3 you can list buckets ('''rclone lsd''') using any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error: '''incorrect region, the bucket is not in 'XXX' region'''.
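If you need to work with buckets in more than one region, one approach is to define a separate remote per region. Eg a hypothetical '''remote-eu''' stanza in the config file, reusing the same credentials:

<pre>
[remote-eu]
type = s3
env_auth = false
access_key_id = access_key
secret_access_key = secret_key
region = eu-west-1
location_constraint = eu-west-1
</pre>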
==Authentication==

There are two ways to supply '''rclone''' with a set of AWS credentials. In order of precedence:

*Directly in the rclone configuration file (as configured by '''rclone config''')
**set '''access_key_id''' and '''secret_access_key'''
*Runtime configuration:
**set '''env_auth''' to '''true''' in the config file
**export the following environment variables before running '''rclone''' (see the example below)
***Access Key ID: '''AWS_ACCESS_KEY_ID''' or '''AWS_ACCESS_KEY'''
***Secret Access Key: '''AWS_SECRET_ACCESS_KEY''' or '''AWS_SECRET_KEY'''
**run '''rclone''' on an EC2 instance with an IAM role

If none of these options actually ends up providing '''rclone''' with AWS credentials then S3 interaction will be non-authenticated (see below).
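As an example of the runtime configuration route: with '''env_auth = true''' set in the config file, the following would authenticate a listing (the key values here are AWS's documented example credentials, not real ones):

<pre>
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
rclone lsd remote:
</pre>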
==Anonymous access to public buckets==

If you want to use rclone to access a public bucket, configure with a blank '''access_key_id''' and '''secret_access_key'''. Eg
<pre>
No remotes found - make a new one
n) New remote
q) Quit config
n/q> n
name> anons3
What type of source is it?
Choose a number from below
 1) amazon cloud drive
 2) b2
 3) drive
 4) dropbox
 5) google cloud storage
 6) swift
 7) hubic
 8) local
 9) onedrive
10) s3
11) yandex
type> 10
Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
 * Enter AWS credentials in the next step
 1) false
 * Get AWS credentials from the environment (env vars or IAM)
 2) true
env_auth> 1
AWS Access Key ID - leave blank for anonymous access or runtime credentials.
access_key_id>
AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
secret_access_key>
...
</pre>
Then use it as normal with the name of the public bucket, eg

<pre>
rclone lsd anons3:1000genomes
</pre>

You will be able to list and copy data but not upload it.
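For example, you could copy an object down locally (the object path here is made up):

<pre>
rclone copy anons3:1000genomes/path/to/file /tmp/1000genomes
</pre>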
==Ceph==

Ceph is an object storage system which presents an Amazon S3 interface.

To use rclone with Ceph, you need to set the following parameters in the config.

<pre>
access_key_id = Whatever
secret_access_key = Whatever
endpoint = https://ceph.endpoint.goes.here/
region = other-v2-signature
</pre>

Note also that Ceph sometimes puts '''/''' in the secret access keys it gives users. If you read the secret access key using the command line tools you will get a JSON blob with the '''/''' escaped as '''\/'''. Make sure you write only the unescaped '''/''' in the secret access key in the rclone config.
Eg the dump from Ceph looks something like this (irrelevant keys removed).

<pre>
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ]
}
</pre>

Because this is a JSON dump, the '''/''' is encoded as '''\/''', so if you enter the secret key as '''xxxxxx/xxxx''' it will work fine.
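If the dump comes from '''radosgw-admin''' (as is typical for Ceph), a tool like '''jq''' will unescape the '''\/''' for you when extracting the key, eg:

<pre>
radosgw-admin user info --uid=xxx | jq -r '.keys[0].secret_key'
</pre>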
[[Category:RClone]]
[[Category:Linux]]
[[Category:Contents]]