S3FileSystemProvider.newInputStream issue #87
Hello @glassfox, I tried to reproduce the issue but I can't. In `newInputStream` we do:

```java
S3Object object = s3Path.getFileSystem().getClient().getObject(s3Path.getFileStore().name(), key);
```

and then we can't read the InputStream once the `S3Object` that holds it is closed. Maybe the problem is the `AmazonS3Client` itself, or the Amazon S3 connection limits. This is my test to try to reproduce the issue:

```java
@Test
public void testConcurrency() throws IOException, InterruptedException {
    int concurrency = 2000;
    final FileSystemProvider provider = fileSystemAmazon.provider();
    final List<Path> uploadedFiles = new ArrayList<>();
    final List<String> uploadedContent = new ArrayList<>();
    for (int i = 0; i < concurrency; i++) {
        final String content = "sample content" + i;
        final Path file = uploadSingleFile(content);
        uploadedFiles.add(file);
        uploadedContent.add(content);
    }
    ExecutorService service = Executors.newFixedThreadPool(concurrency);
    for (int i = 0; i < concurrency; i++) {
        final int index = i;
        service.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("Reading: " + index);
                    InputStream stream = provider.newInputStream(uploadedFiles.get(index));
                    String result = new String(IOUtils.toByteArray(stream));
                    assertEquals(uploadedContent.get(index), result);
                    stream.close();
                    System.out.println("Closing: " + index);
                } catch (IOException e) {
                    fail("err!");
                }
            }
        });
    }
    service.shutdown(); // without shutdown(), awaitTermination just waits out the timeout
    service.awaitTermination(20, TimeUnit.SECONDS);
}

private static final String bucket = EnvironmentBuilder.getBucket();
private static final URI uriGlobal = EnvironmentBuilder.getS3URI(S3_GLOBAL_URI_IT);

private FileSystem fileSystemAmazon;

@Before
public void setup() throws IOException {
    System.clearProperty(S3FileSystemProvider.AMAZON_S3_FACTORY_CLASS);
    fileSystemAmazon = build();
}

private static FileSystem createNewFileSystem() throws IOException {
    return FileSystems.newFileSystem(uriGlobal, EnvironmentBuilder.getRealEnv());
}

private static FileSystem build() throws IOException {
    try {
        FileSystems.getFileSystem(uriGlobal).close();
        return createNewFileSystem();
    } catch (FileSystemNotFoundException e) {
        return createNewFileSystem();
    }
}

private Path uploadSingleFile(String content) throws IOException {
    try (FileSystem linux = MemoryFileSystemBuilder.newLinux().build("linux")) {
        Path file = Files.createFile(linux.getPath(UUID.randomUUID().toString()));
        Files.write(file, content.getBytes());
        Path result = fileSystemAmazon.getPath(bucket, UUID.randomUUID().toString());
        Files.copy(file, result);
        return result;
    }
}

private Path uploadDir() throws IOException {
    try (FileSystem linux = MemoryFileSystemBuilder.newLinux().build("linux")) {
        Path assets = Files.createDirectories(linux.getPath("/upload/assets1"));
        Path dir = fileSystemAmazon.getPath(bucket, "0000example" + UUID.randomUUID().toString() + "/");
        Files.walkFileTree(assets.getParent(), new CopyDirVisitor(assets.getParent(), dir));
        return dir;
    }
}
```
Hi,
Sure: the `S3Object` needs to be closed only after its `InputStream` has been closed.
I propose creating a wrapper for the `InputStream`, [InputStreamWrapper.java.txt](https://github.com/Upplication/Amazon-S3-FileSystem-NIO2/files/1545379/InputStreamWrapper.java.txt), and wrapping every `InputStream` returned by the function with it, like this:
```java
S3Object object = s3Path.getFileSystem().getClient().getObject(s3Path.getFileStore().name(), key);
InputStream res = object.getObjectContent();
if (res == null) {
    throw new IOException(String.format("The specified path is a directory: %s", path));
}
return new InputStreamWrapper(res, object);
```
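For readers without the attachment handy, a wrapper along those lines might look like the following sketch (the actual attached `InputStreamWrapper.java` may differ):

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

import com.amazonaws.services.s3.model.S3Object;

// Hypothetical sketch of the proposed wrapper, not the attached file:
// closing the stream also closes the S3Object that owns it.
public class InputStreamWrapper extends FilterInputStream {

    private final S3Object object;

    public InputStreamWrapper(InputStream delegate, S3Object object) {
        super(delegate);
        this.object = object;
    }

    @Override
    public void close() throws IOException {
        try {
            super.close();   // close the content stream first
        } finally {
            object.close();  // then release the owning S3Object
        }
    }
}
```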
Hi, that is an option and I'm going to test it, but I read the source code of `S3Object` and, if I'm not wrong, its `close()` method only closes the `InputStream`, and it is the (wrapped) `InputStream` that releases all the HTTP connections. You can see it here:
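Roughly, and paraphrasing rather than quoting the SDK source (the exact code may vary across SDK versions), `close()` boils down to:

```java
// Paraphrase of com.amazonaws.services.s3.model.S3Object#close()
// (not verbatim; may differ between SDK versions):
@Override
public void close() throws IOException {
    InputStream is = getObjectContent();
    if (is != null) {
        // this stream's close() is what releases the HTTP connection
        is.close();
    }
}
```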
s3fs version: 1.5.3
Hi all,
When I try to read a lot of files (2000 simultaneous files and more), an exception is thrown:
Proposal:
After a short investigation on the internet, I found that the `S3Object` is required to be closed at the end of its use:
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/S3Object.html#close--
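In code, the proposal amounts to something like this hypothetical helper (`client`, `bucket`, and `key` are illustrative names, not part of s3fs):

```java
import java.io.IOException;
import java.io.InputStream;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;

public class S3ReadExample {

    // Illustrative sketch: S3Object implements Closeable, so
    // try-with-resources closes it (and its content stream) when
    // reading finishes, returning the HTTP connection to the pool.
    static byte[] readAll(AmazonS3 client, String bucket, String key) throws IOException {
        try (S3Object object = client.getObject(bucket, key);
             InputStream in = object.getObjectContent()) {
            return IOUtils.toByteArray(in);
        }
    }
}
```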