Update limitations docs #85

Open. Wants to merge 3 commits into base: master.
4 changes: 3 additions & 1 deletion README.md
@@ -4,7 +4,9 @@ S3FS is a [PyFilesystem](https://www.pyfilesystem.org/) interface to
Amazon S3 cloud storage.

As a PyFilesystem concrete class, [S3FS](http://fs-s3fs.readthedocs.io/en/latest/) allows you to work with S3 in the
same way as any other supported filesystem. Note that, because S3 is not
strictly speaking a filesystem, there are some limitations, which are discussed
in detail in the [documentation](https://fs-s3fs.readthedocs.io/en/latest/#limitations).

## Installing

45 changes: 24 additions & 21 deletions README.rst
@@ -6,7 +6,10 @@ Amazon S3 cloud storage.

As a PyFilesystem concrete class,
`S3FS <http://fs-s3fs.readthedocs.io/en/latest/>`__ allows you to work
with S3 in the same way as any other supported filesystem. Note that,
because S3 is not strictly speaking a filesystem, there are some
limitations, which are discussed in detail in the
`documentation <https://fs-s3fs.readthedocs.io/en/latest/#limitations>`__.

Installing
----------
@@ -15,7 +18,7 @@ You can install S3FS from pip as follows:

::

   pip install fs-s3fs

Opening a S3FS
--------------
@@ -24,37 +27,37 @@ Open an S3FS by explicitly using the constructor:

.. code:: python

   from fs_s3fs import S3FS
   s3fs = S3FS('mybucket')

Or with a FS URL:

.. code:: python

   from fs import open_fs
   s3fs = open_fs('s3://mybucket')

Downloading Files
-----------------

To *download* files from an S3 bucket, open a file on the S3 filesystem
for reading, then write the data to a file on the local filesystem.
Here's an example that copies a file ``example.mov`` from S3 to your HD:

.. code:: python

   from fs.tools import copy_file_data
   with s3fs.open('example.mov', 'rb') as remote_file:
       with open('example.mov', 'wb') as local_file:
           copy_file_data(remote_file, local_file)

However, it is preferable to use the higher-level functionality in the
``fs.copy`` module. Here's an example:

.. code:: python

   from fs.copy import copy_file
   copy_file(s3fs, 'example.mov', './', 'example.mov')

Uploading Files
---------------
@@ -77,9 +80,9 @@ to a bucket:

.. code:: python

   import fs, fs.mirror
   s3fs = S3FS('example', upload_args={"CacheControl": "max-age=2592000", "ACL": "public-read"})
   fs.mirror.mirror('/path/to/mirror', s3fs)

See `the Boto3
docs <https://boto3.readthedocs.io/en/latest/reference/customizations/s3.html#boto3.s3.transfer.S3Transfer.ALLOWED_UPLOAD_ARGS>`__
@@ -91,9 +94,9 @@ and can be used in URLs. It is important to URL-Escape the

.. code:: python

   import fs, fs.mirror
   with fs.open_fs('s3://example?acl=public-read&cache_control=max-age%3D2592000%2Cpublic') as s3fs:
       fs.mirror.mirror('/path/to/mirror', s3fs)

S3 URLs
-------
@@ -102,7 +105,7 @@ You can get a public URL to a file on an S3 bucket as follows:

.. code:: python

   movie_url = s3fs.geturl('example.mov')

Documentation
-------------
7 changes: 5 additions & 2 deletions docs/index.rst
@@ -62,12 +62,15 @@ directory exists.
If you create all your files and directories with S3FS, then you can
forget about how things are stored under the hood. Everything will work
as you expect. You *may* run into problems if your data has been
uploaded without the use of S3FS. For instance, if you create or open a
`"foo/bar"` object without a `"foo/"` object. If this occurs, then S3FS
may give errors about directories not existing, where you would expect
them to be. One solution is to create an empty object for all
directories and subdirectories. Fortunately most tools will do this for
you, and it is probably only required if you upload your files manually.
Alternatively, you may be able to get away with creating the `S3FS` object
directly with ``strict=False`` to bypass some consistency checks
which could fail when empty objects are missing.
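To make the empty-object workaround concrete, here is a minimal sketch. The helper below is hypothetical (it is not part of S3FS): given a list of S3 object keys, it computes the directory marker keys that are implied by nested objects but absent from the bucket.

```python
def missing_dir_markers(keys):
    """Return the 'dir/' marker keys implied by nested object keys
    but not present in `keys`, sorted for stable output."""
    keys = set(keys)
    markers = set()
    for key in keys:
        parts = key.rstrip("/").split("/")
        # every ancestor prefix of a key implies a 'prefix/' marker object
        for i in range(1, len(parts)):
            markers.add("/".join(parts[:i]) + "/")
    return sorted(markers - keys)

print(missing_dir_markers(["foo/bar", "baz/"]))  # ['foo/']
```

You could then create an empty object for each returned key (for example via the standard PyFilesystem ``makedirs()`` method, or with whatever tool uploaded the data), or instead construct the filesystem with ``S3FS('example', strict=False)`` to skip the consistency checks.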


Authentication
Expand Down