Is there a way to connect to an Amazon S3 bucket with FTP or SFTP rather than the built-in Amazon file transfer interface in the AWS console? Seems odd that this isn't a readily available option.


Just mount the bucket using the s3fs file system (or similar) on a Linux server (e.g. Amazon EC2) and use the server's built-in SFTP server to access the bucket.

  • Install s3fs
  • Add your security credentials in the form access-key-id:secret-access-key to /etc/passwd-s3fs
  • Add a bucket mounting entry to fstab:

    <bucket> /mnt/<bucket> fuse.s3fs rw,nosuid,nodev,allow_other 0 0
    

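If you want to sanity-check the setup before relying on the fstab entry, here is a rough sketch of the manual steps (the bucket name, mount point, and Ubuntu/Debian package name are assumptions; adjust for your distribution):

    # install s3fs (package name assumed for Ubuntu/Debian; build from source elsewhere)
    sudo apt-get install s3fs

    # store the credentials where s3fs expects them; the file must not be world-readable
    echo 'ACCESS-KEY-ID:SECRET-ACCESS-KEY' | sudo tee /etc/passwd-s3fs
    sudo chmod 600 /etc/passwd-s3fs

    # create a mount point and mount the bucket by hand before adding the fstab entry
    sudo mkdir -p /mnt/my-bucket
    sudo s3fs my-bucket /mnt/my-bucket -o allow_other
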
For details, see my guide Setting up an SFTP access to Amazon S3.

Having the bucket mounted as root causes permission-denied problems later when transferring files as ec2-user via SFTP. The /mnt/<bucket> folder is owned by root and has the group root as well. – elvismdev Feb 1 '16 at 8:10
    
@elvismdev /others - Mount as the ftp user (using the uid/gid options) and make sure it's mounted with allow_other (or -o allow_other if mounting from the s3fs command line); that works for me. In my case (a private bucket) it's also a good idea to write the files with read-only permissions (-o default_acl=public-read). – bshea Apr 11 at 1:36
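A mount along the lines bshea describes might look roughly like this (the ftp user, bucket name, and mount point are assumptions for illustration):

    # mount the bucket as the ftp user so SFTP sessions can read and write its files
    sudo s3fs my-bucket /mnt/my-bucket \
        -o allow_other \
        -o uid=$(id -u ftpuser) -o gid=$(id -g ftpuser) \
        -o default_acl=public-read   # ACL applied to newly written objects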

There are theoretical and practical reasons why this isn't a perfect solution, but it does work...

You can install an FTP/SFTP service (such as proftpd) on a Linux server, either in EC2 or in your own data center... then mount a bucket into the filesystem where the FTP server is configured to chroot, using s3fs.

I have a client that serves content out of S3, and the content is provided to them by a third party who only supports FTP pushes... so, with some hesitation (due to the impedance mismatch between S3 and an actual filesystem) but lacking the time to write a proper FTP/S3 gateway server software package (which I still intend to do one of these days), I proposed and deployed this solution for them several months ago and they have not reported any problems with the system.

As a bonus, since proftpd can chroot each user into their own home directory and "pretend" (as far as the user can tell) that files owned by the proftpd user are actually owned by the logged in user, this segregates each ftp user into a "subdirectory" of the bucket, and makes the other users' files inaccessible.
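
A minimal ProFTPd configuration for that per-user chroot setup might look something like this (paths and the user-creation step are assumptions; adapt to your distribution's config layout):

    # /etc/proftpd/proftpd.conf (excerpt)
    # chroot every user into their home directory
    DefaultRoot ~
    # allow accounts whose shell is not listed in /etc/shells (e.g. nologin users)
    RequireValidShell off

    # create an ftp user whose home directory is a "subdirectory" of the mounted bucket:
    #   useradd -d /mnt/my-bucket/client-a -s /usr/sbin/nologin client-a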


There is a problem with the default configuration, however.

Once you start to get a few tens or hundreds of files, the problem will manifest itself when you pull a directory listing, because ProFTPd will attempt to read the .ftpaccess files over, and over, and over again; for each file in the directory, .ftpaccess is checked to see whether the user should be allowed to view it.

You can disable this behavior in ProFTPd, but I would suggest that the more correct fix is to configure the additional options -o enable_noobj_cache -o stat_cache_expire=30 in s3fs (a combined mount example appears at the end of this answer):

-o stat_cache_expire (default is no expire)

specify expire time(seconds) for entries in the stat cache

Without this option, you'll make fewer requests to S3, but you also will not always reliably discover changes made to objects if external processes or other instances of s3fs are also modifying the objects in the bucket. The value "30" in my system was selected somewhat arbitrarily.

-o enable_noobj_cache (default is disable)

enable cache entries for the object which does not exist. s3fs always has to check whether file(or sub directory) exists under object(path) when s3fs does some command, since s3fs has recognized a directory which does not exist and has files or subdirectories under itself. It increases ListBucket request and makes performance bad. You can specify this option for performance, s3fs memorizes in stat cache that the object (file or directory) does not exist.

This option allows s3fs to remember that .ftpaccess wasn't there.


Unrelated to the performance issues that can arise with ProFTPd, which are resolved by the above changes, you also need to enable -o enable_content_md5 in s3fs.

-o enable_content_md5 (default is disable)

verifying uploaded data without multipart by content-md5 header. Enable to send "Content-MD5" header when uploading a object without multipart posting. If this option is enabled, it has some influences on a performance of s3fs when uploading small object. Because s3fs always checks MD5 when uploading large object, this option does not affect on large object.

This is an option which never should have been an option -- it should always be enabled, because not doing this bypasses a critical integrity check for only a negligible performance benefit. When an object is uploaded to S3 with a Content-MD5: header, S3 will validate the checksum and reject the object if it's corrupted in transit. However unlikely that might be, it seems short-sighted to disable this safety check.

Quotes are from the man page of s3fs. Grammatical errors are in the original text.
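
Putting the three options together, a mount might look like this (bucket name and mount point are placeholders; the fstab form works the same way with the options comma-separated):

    s3fs my-bucket /mnt/my-bucket \
        -o allow_other \
        -o enable_noobj_cache \
        -o stat_cache_expire=30 \
        -o enable_content_md5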

could you elaborate on the reasons why this solution isn't ideal? – fernio Oct 23 '14 at 22:43
    
I did this. Do you know why ProFTPd always times out while listing my bucket folder? From the command line I can do ls without issues – Marco Marsala Jan 12 '15 at 14:57
@MarcoMarsala the fixes for large directories have been added to the answer. – Michael - sqlbot Feb 18 '15 at 21:51
@Michael-sqlbot have you tried to use "AllowOverride off" directive in ProFTPd config to make it stop trying to read ".ftpaccess" files completely? – Greg Dubicki Jun 12 '15 at 5:56
I've tried everything and can only set user:group / permissions at the folder level where the S3 bucket is mounted. Then those permissions propagate down to every folder on S3. I've tried many things, including many variations on this s3fs command: sudo s3fs bucket-name /local-mount-folder-name/ -o iam_role=sftp-server -o allow_other -o umask=022 -o uid=501 -o gid=501 – I can't change any permissions on the folders in the mounted S3 folder once it's created. – T. Brian Jones May 20 '16 at 22:05

Well, S3 isn't FTP. There are lots and lots of clients that support S3, however.

Pretty much every notable FTP client on OS X has support, including Transmit and Cyberduck.

If you're on Windows, take a look at Cyberduck or CloudBerry.

Cyberduck is fantastically easy if you're a server newbie like myself. I just clicked Open Connection, selected S3 from the dropdown, and entered my credentials. Much easier than some of the options mentioned above! – Marco Del Valle Oct 3 '16 at 18:10
