Using rclone for Cloud to Cloud Transfer

 

rclone allows you to download data from any other cloud into your rsync.net account.

rclone also allows you to transfer data between cloud accounts without having to download and re-upload that data.

 

For instance, you could archive your Microsoft OneDrive data to your rsync.net account.

Or, you could move an S3 bucket directly to your Google Drive.

Or, you could create backup copies of each cloud you use inside your rsync.net account.

 

Overview

 

First, you need to configure rclone by running 'rclone config' here at rsync.net:

 

ssh user@rsync.net rclone config

 

... and answering some simple questions:

 

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

 

rclone will use your answers - including tokens and API keys you might paste - to create a config file with this new "remote" in it.

The config file is stored in your rsync.net account at .config/rclone/rclone.conf and may contain multiple different remotes.

If you back up both your S3 buckets and your Team Dropbox to your rsync.net account, your config file will have two remotes.
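For illustration, a two-remote config file might look like the following. This is a sketch only: the stanza names, parameter values, and the Dropbox token are placeholders, not real credentials.

```ini
[s3remote]
type = s3
provider = AWS
env_auth = false
region = us-east-1
acl = private

[dropboxremote]
type = dropbox
token = {"access_token":"...","token_type":"bearer","expiry":"..."}
```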

 

rclone supports a wide variety of providers, and its documentation includes help and examples for each of them.

 

Step by Step Example - S3 Bucket to rsync.net Account

 

Let's assume your rsync.net login is 1005@denver.rsync.net.

Let's also assume that you have a public S3 bucket named 'rsynctest'.

 

First, you would run 'rclone config' over SSH and press 'n' to create a new config, then give it a name:

# ssh 1005@denver.rsync.net rclone config
Password:

No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> s3remote

 

In this example, we named our remote "s3remote". Now we will be asked a series of additional questions:

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / 1Fichier
\ "fichier"
2 / Alias for an existing remote
\ "alias"
3 / Amazon Drive
\ "amazon cloud drive"
4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
\ "s3"
 
... and on and on ...
 
Storage> 4

 

In this example, we chose '4' for "Amazon S3 Compliant Storage Provider" and then chose '1' as our S3 provider:

Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
 
provider> 1

 

The example S3 bucket that we are creating this remote for is a public bucket - so we do not need AWS credentials:

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
2 / Get AWS credentials from the environment (env vars or IAM)
\ "true"
 
env_auth> 1
 
AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
 
access_key_id>    we left this blank...
 
AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
 
secret_access_key>    we left this blank...

 

Finally, we specify our AWS Region, Endpoint for S3 API, and some other miscellaneous settings:

Region to connect to.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
/ The default endpoint - a good choice if you are unsure.
1 | US Region, Northern Virginia or Pacific Northwest.
| Leave location constraint empty.
\ "us-east-1"
 
... and on and on ...
 
region> 1
 
Endpoint for S3 API.
Leave blank if using AWS to use the default endpoint for the region.
Enter a string value. Press Enter for the default ("").
 
endpoint>    we left this blank...
 
Location constraint - must be set to match the Region.
Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Empty for US Region, Northern Virginia or Pacific Northwest.
\ ""
 
... and on and on ...
 
location_constraint> 1
 
Canned ACL used when creating buckets and storing or copying objects.
This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Owner gets FULL_CONTROL. No one else has access rights (default).
\ "private"
 
... and on and on ...
 
acl> 1
 
The server-side encryption algorithm used when storing this object in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / None
\ ""
2 / AES256
\ "AES256"
3 / aws:kms
\ "aws:kms"
 
server_side_encryption> 1
 
If using KMS ID you must provide the ARN of Key.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / None
\ ""
2 / arn:aws:kms:*
\ "arn:aws:kms:us-east-1:*"
 
sse_kms_key_id> 1
 
The storage class to use when storing new objects in S3.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Default
\ ""
 
... and on and on ...
 
storage_class> 1
 
Edit advanced config? (y/n)    we said no

 

rclone will now show you a summary of the remote you have just added to your config file:

Remote config
--------------------
[s3remote]
type = s3
provider = AWS
env_auth = false
region = us-east-1
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
... and you press 'y' to save it, then 'q' to quit the config. Your SSH connection to rsync.net will then end.

 

We can now test this remote by listing the contents of our S3 bucket named "rsynctest":

ssh 1005@denver.rsync.net rclone ls s3remote:rsynctest
Password:
 
82930 untitled.JPG

... and then transferring the single file in the bucket to our rsync.net account:

ssh 1005@denver.rsync.net rclone copy s3remote:rsynctest/untitled.JPG .     note the period at the end ...

If you wanted to sync the entire S3 bucket to your rsync.net account, the command would look like:

ssh 1005@denver.rsync.net rclone sync s3remote:rsynctest rsynctestdirectory

... and the entire contents of the "rsynctest" S3 bucket would be transferred into rsynctestdirectory inside your rsync.net account.
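Because every rclone command here runs over SSH, a small wrapper function saves retyping the login. This is a minimal sketch, assuming the example account above; the function name rclone_remote is our own invention, not part of rclone.

```shell
# Hypothetical convenience wrapper around "ssh ... rclone ...".
# RSYNC_NET is the example login from this walkthrough; substitute your own.
RSYNC_NET="1005@denver.rsync.net"

rclone_remote() {
    # Pass any rclone subcommand and its arguments through to the
    # copy of rclone running inside the rsync.net account.
    ssh "$RSYNC_NET" rclone "$@"
}

# Usage (run these yourself; rclone's --dry-run flag previews a sync
# without transferring anything):
# rclone_remote ls s3remote:rsynctest
# rclone_remote sync --dry-run s3remote:rsynctest rsynctestdirectory
```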

 

Note - at no point did we download or transfer any files to our own computer - the transfer took place directly between the S3 bucket and our rsync.net account.

 

Expert Mode

 

In the above example we ran 'rclone config' and followed the interactive prompts to create our remote.

This is the most common way that rclone remotes are added to your config file.

However, it is also possible to issue a single, very long rclone command that creates the same remote in your config file.

 

This command, for example, will create the exact same remote in your rclone config file:

ssh 1005@denver.rsync.net rclone config create s3remote s3 provider AWS env_auth false region us-east-1 acl private
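After creating a remote this way, you can verify it with rclone's standard listremotes and config show commands. A sketch, wrapped in a function so nothing runs until you call it; the login is the example account from this walkthrough.

```shell
# Sketch: create the remote non-interactively, then verify it.
create_and_verify() {
    # Create the same "s3remote" as in the interactive walkthrough:
    ssh 1005@denver.rsync.net rclone config create s3remote s3 \
        provider AWS env_auth false region us-east-1 acl private
    # List all remotes now present in the config file:
    ssh 1005@denver.rsync.net rclone listremotes
    # Show the parameters stored for this particular remote:
    ssh 1005@denver.rsync.net rclone config show s3remote
}
```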

 

Further References and Information

 

- rsync.net Support Overview Page

- rsync.net SSH / SSL Server Fingerprints

- Generating and using ssh keys for automated backups

- Remote commands you may run over SSH

- rsync.net Physical delivery guidelines

- rsync.net Warrant Canary

- rsync.net PGP/GPG Public Key

 


 

 
