s3fs(1)
NAME
s3fs - The S3 FUSE filesystem disk management utility
SYNOPSIS
s3fs [ -C [-h] | [-cdrf <bucket>] [-p <access_key>] [-s <secret_access_key>] ] | [ -o <options> <mountpoint> ]
DESCRIPTION
s3fs is a FUSE filesystem application backed by Amazon Web Services Simple Storage Service (S3, http://aws.amazon.com). s3fs can operate in a command mode or a mount mode. In mount mode, s3fs will mount an Amazon S3 bucket (that has been properly formatted) as a local file system.
In command mode, s3fs is capable of manipulating Amazon S3 buckets in various useful ways.
OPTIONS
Options are used in command mode. To enter command mode, you must specify -C as the first command line option. Note that these options are only available in command mode; a short example follows this list.
- -C
- Enter command mode. This must be the first option on the command line when using s3fs in command mode.
- -h
- Display usage information for command mode.
- -c <bucket>
- Create the named S3 bucket.
- -d <bucket>
- Delete the named S3 bucket (and any data contained there; use with caution!).
- -r <bucket>
- Interactively repair a broken S3 filesystem (not yet implemented).
- -f <bucket>
- Format an S3 bucket to make it suitable for mounting.
- -k <bucket>
- Toggle the lock bit on a bucket. Buckets must be unlocked to format or delete them.
- -p <access_key>
- Provide the AWS access key if it is not set in your environment.
- -s <secret_access_key>
- Provide the AWS secret access key if it is not set in your environment.
- -o <options>
- Specify mount options when s3fs is operating in mount mode.
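For example, to create and then format a new bucket while supplying credentials on the command line (the bucket name and keys are placeholders):
    s3fs -C -c <bucket_name> -p <access_key> -s <secret_access_key>
    s3fs -C -f <bucket_name> -p <access_key> -s <secret_access_key>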
MOUNT OPTIONS
Note that these options are only available when operating s3fs in mount mode; a combined example follows this list.
- bucket=<bucket>
- Specify the name of the Amazon S3 bucket you wish to mount (default: none).
- preserve_cache=[no|yes]
- If set to yes, the cache for the given mount will not be deleted on unmount, avoiding the need for downloads on a subsequent remount (default: no).
- cachedir=<directory>
- Sets the base directory where cached s3fs files will be stored (default: $HOME/.fuse-s3fs-cache/).
- host=<host name>
- Allows overriding of the default Amazon S3 hostname (default: s3.aws.amazon.com).
- lazy_fsdata=[no|yes]
- If set to yes, filesystem metadata will only be written back to S3 on unmount (default: yes).
- writeback_time=<seconds>
- Number of seconds of hysteresis to pause before uploading a file to S3. If, after the pause, s3fs detects that other processes have the file open, the upload is postponed (default: 10).
- AWS_ACCESS_KEY_ID=<key>
- Specify the AWS_ACCESS_KEY_ID for your AWS account if you don't wish to place it in your environment (default: none).
- AWS_SECRET_ACCESS_KEY=<key>
- Specify the AWS_SECRET_ACCESS_KEY for your AWS account if you don't wish to place it in your environment. NOTE: If you add a mount to /etc/fstab, using this option is discouraged, as it will expose your secret key and allow others to use your account (default: none).
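For example, a mount that combines several of these options might look like the following (the bucket name, cache directory, keys, mount point, and the writeback value shown are placeholders):
    s3fs -o bucket=<bucket_name>,cachedir=<directory>,preserve_cache=yes,writeback_time=30,AWS_ACCESS_KEY_ID=<key>,AWS_SECRET_ACCESS_KEY=<key> <mount point>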
UNMOUNTING
Note that to unmount FUSE filesystems, the fusermount utility should be used.
ENVIRONMENT VARIABLES
- AWS_ACCESS_KEY_ID
- This is your Amazon Web Services public key. It must be set so that s3fs can identify you to Amazon.
- AWS_SECRET_ACCESS_KEY
- This is your Amazon Web Services private key. It must be set so that s3fs can identify you to Amazon.
Both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY must be set and exported by the user performing a mount with s3fs in order to use s3fs successfully; an example follows.
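For example, in a Bourne-style shell the variables can be set and exported before mounting (the key values are placeholders):
    export AWS_ACCESS_KEY_ID=<your_access_key>
    export AWS_SECRET_ACCESS_KEY=<your_secret_access_key>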
EXAMPLES
- Create an S3 Bucket
- s3fs -C -c <bucket_name>
- Format an S3 Bucket
- s3fs -C -f <bucket_name>
- Mount an S3 Bucket
- s3fs -o [other mount options],bucket=<bucket_name> <mount point>
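- Unmount an S3 Bucket (using the fusermount utility, as noted under UNMOUNTING)
- fusermount -u <mount point>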
FILES
- /etc/fstab
The format for an s3fs entry in /etc/fstab should look like this:
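(The exact device and filesystem-type fields depend on how your distribution's mount helpers invoke s3fs, so the entry below is only a sketch; the bucket name, mount point, and cache directory are placeholders, and, per the note under MOUNT OPTIONS, credentials are best taken from the environment rather than placed in /etc/fstab.)
    s3fs    <mntpoint>    fuse    bucket=<bucket_name>,cachedir=<directory>    0 0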
Note that the <mntpoint> must be readable by the mounting user.
NOTES
- Multi User capability
- While it is possible to share S3 buckets among multiple users, the current data consistency model of Amazon's S3 service prevents the safe use of multiple mounts by multiple users. While s3fs will currently allow multiple mounts, data corruption may result from such activity. A future release will contain a locking mechanism to safely guard against multiple read-write mounts. Multiple read-only mounts following a single read-write mount are safe, but will not reflect changes made by the writable mount to any of the file or filesystem metadata, which limits their usefulness in that case.
- File System Layout & Limitations
- The s3fs filesystem is designed to be very, very simple. S3 maintains a flat storage system, meaning no directory hierarchy is allowed on the backing store. Directory information is stored in the filesystem metadata that is retrieved during the mount operation from a file called fsdata. This file is a pickled Python class that maintains the entire directory hierarchy. Files are stored against their fully qualified path names within the file system, which makes for easy file retrieval via any web-based interface to S3, should the metadata become corrupted. Note that the metadata (stored on S3 in a file called fsdata) has a layout that is defined by the Python class definition in the s3fs executable. To understand or view the layout of this data, one must read the Python code. Specifically, the S3DriveMetaData class should be investigated, as that is the base class that is pickled (a sketch of how this pickled metadata might be inspected appears at the end of this page). Given that files are stored as individual objects on S3, coupled with the fact that there is a 5GB limit on each object in S3, s3fs currently has a natural limit of 5GB per file.
- Compatibility with other S3 Access mechanisms
- Amazon S3 is simply a storage back end. s3fs is simply a storage API that exports that storage in the form of a local file system. Note that there are many other mechanisms for using S3, some filesystem oriented, like s3fs, as well as other methods. It is important to note that this application creates a unique filesystem structure on the targeted S3 storage bucket. As such, it is incompatible with other S3 FUSE implementations. One should not use alternate s3fs clients (of which several are available) to mount buckets created with this utility. This is true for all S3 filesystem implementations, as each has a unique filesystem structure. That being said, the design of this particular S3 filesystem layout does make it compatible with web-based S3 access mechanisms. Using a web browser, it is possible to retrieve individual files from a bucket that has been formatted for use as an s3fs file system.
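As a rough illustration of the note on the filesystem layout above, the following minimal Python sketch shows how a local copy of the fsdata metadata might be inspected offline. It assumes you have already downloaded the bucket's fsdata object to the current directory and that the module defining S3DriveMetaData (part of the s3fs sources) is on your Python path; neither step is shown here, and the attributes printed are whatever the installed version of that class defines.
    import pickle
    import pprint

    # The fsdata object must have been fetched from the bucket beforehand
    # (for example via a web-based S3 interface, as noted above).
    with open("fsdata", "rb") as f:
        # Unpickling requires the S3DriveMetaData class definition to be
        # importable from the s3fs sources; pickle locates it via the module
        # path recorded in the pickled data.
        meta = pickle.load(f)

    print(type(meta).__name__)    # expected to be S3DriveMetaData
    pprint.pprint(vars(meta))     # dump the instance attributes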