For an artifact with the embedded AWS SDK (the easiest way to bootstrap without jar hell):

```xml
<dependency>
    <groupId>com.github.abashev</groupId>
    <artifactId>vfs-s3-with-awssdk-v1</artifactId>
    <version>4.4.0</version>
</dependency>
```
For an artifact without dependencies:

```xml
<dependency>
    <groupId>com.github.abashev</groupId>
    <artifactId>vfs-s3</artifactId>
    <version>4.4.0</version>
</dependency>
```
For Gradle builds:

```groovy
implementation 'com.github.abashev:vfs-s3:4.4.0'
```
The S3 URL format is:

```
s3://[[access-key:secret-key]:sign-region]@endpoint-url/folder-or-bucket/
```
- Access key and secret key can come from the default AWS SDK credentials chain (environment variables, container credentials, etc.)
- Sign-region is useful for custom S3-compatible implementations
- If the endpoint URL belongs to a known provider, the bucket name is derived from the host name; otherwise the first path segment is treated as the bucket name. An example of both credential styles is sketched below.
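As a minimal sketch of resolving such URLs with Commons VFS, the snippet below shows both the default-chain and the embedded-credentials style. The bucket names, endpoints and credentials are placeholders, not real values:

```java
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.VFS;

public class ResolveExample {
    public static void main(String[] args) throws Exception {
        FileSystemManager fsManager = VFS.getManager();

        // Credentials picked up from the default AWS SDK chain (environment, container, ...)
        FileObject byChain = fsManager.resolveFile(
                "s3://my-bucket.s3-eu-west-1.amazonaws.com/some-folder/");

        // Credentials embedded directly in the URL (access-key:secret-key)
        FileObject byUrl = fsManager.resolveFile(
                "s3://ACCESS-KEY:SECRET-KEY@storage.yandexcloud.net/my-bucket/some-folder/");

        System.out.println(byChain.exists());
        System.out.println(byUrl.exists());
    }
}
```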
Known providers:

| Provider | URL | Example URL |
|---|---|---|
| Amazon S3 | https://aws.amazon.com/s3/ | s3://s3-tests.s3-eu-west-1.amazonaws.com |
| Yandex Object Storage | https://cloud.yandex.ru/services/storage | s3://s3-tests.storage.yandexcloud.net/ |
| Mail.ru Cloud Storage | https://mcs.mail.ru/storage/ | s3://s3-tests.hb.bizmrg.com/ |
| Alibaba Cloud Object Storage Service | https://www.alibabacloud.com/product/oss | s3://s3-tests.oss-eu-central-1.aliyuncs.com/ |
| Oracle Cloud Object Storage | https://www.oracle.com/cloud/storage/object-storage.html | s3://frifqsbag2em.compat.objectstorage.eu-frankfurt-1.oraclecloud.com/s3-tests/ |
| DigitalOcean Spaces Object Storage | https://www.digitalocean.com/products/spaces/ | s3://s3-tests2.ams3.digitaloceanspaces.com |
| SberCloud Object Storage Service | https://sbercloud.ru/ru/products/object-storage-service | s3://s3-tests.obs.ru-moscow-1.hc.sbercloud.ru |
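As a sketch of working against one of these endpoints, the snippet below lists the entries under a bucket. The bucket name is a placeholder and credentials are assumed to come from the default AWS SDK chain:

```java
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.VFS;

public class ListBucketExample {
    public static void main(String[] args) throws Exception {
        FileSystemManager fsManager = VFS.getManager();

        // Hypothetical bucket on Yandex Object Storage; replace with your own endpoint
        FileObject bucket = fsManager.resolveFile("s3://my-bucket.storage.yandexcloud.net/");

        // List the top-level entries of the bucket
        for (FileObject child : bucket.getChildren()) {
            System.out.println(child.getName().getBaseName() + " (" + child.getType() + ")");
        }
    }
}
```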
Sample Groovy scripts - https://github.com/abashev/vfs-s3/tree/branch-4.x.x/samples
`s3-copy` can copy between clouds, between different accounts, or from a plain HTTP URL:

```sh
s3-copy s3://access1:secret1@s3-tests.storage.yandexcloud.net/javadoc.jar s3://access2:secret2@s3.eu-central-1.amazonaws.com/s3-tests-2/javadoc.jar
s3-copy https://oss.sonatype.org/some-name/120133-1-javadoc.jar s3://s3.eu-central-1.amazonaws.com/s3-tests-2/javadoc.jar
```
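The same kind of cross-account copy can also be done programmatically with Commons VFS. A minimal sketch, with placeholder credentials and bucket names:

```java
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.Selectors;
import org.apache.commons.vfs2.VFS;

public class CrossAccountCopy {
    public static void main(String[] args) throws Exception {
        FileSystemManager fsManager = VFS.getManager();

        // Source and destination live in different clouds and use different credentials
        FileObject src = fsManager.resolveFile(
                "s3://access1:secret1@my-bucket.storage.yandexcloud.net/javadoc.jar");
        FileObject dest = fsManager.resolveFile(
                "s3://access2:secret2@s3.eu-central-1.amazonaws.com/other-bucket/javadoc.jar");

        // Copy a single object from source to destination
        dest.copyFrom(src, Selectors.SELECT_SELF);
    }
}
```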
Sample Java code

```java
// Create a folder
FileSystemManager fsManager = VFS.getManager();
FileObject dir = fsManager.resolveFile("s3://simple-bucket.s3-eu-west-1.amazonaws.com/test-folder/");
dir.createFolder();

// Upload a file to S3
FileObject dest = fsManager.resolveFile("s3://s3-eu-west-1.amazonaws.com/test-bucket/backup.zip");
FileObject src = fsManager.resolveFile(new File("/path/to/local/file.zip").getAbsolutePath());
dest.copyFrom(src, Selectors.SELECT_SELF);
```
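Downloading works the same way with source and destination swapped. A minimal sketch, where the local path is a placeholder:

```java
import java.io.File;

import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.Selectors;
import org.apache.commons.vfs2.VFS;

public class DownloadExample {
    public static void main(String[] args) throws Exception {
        FileSystemManager fsManager = VFS.getManager();

        // Remote object from the example above and a local target file
        FileObject remote = fsManager.resolveFile("s3://s3-eu-west-1.amazonaws.com/test-bucket/backup.zip");
        FileObject local = fsManager.toFileObject(new File("/tmp/backup.zip"));

        // Copy the single remote object onto the local file
        local.copyFrom(remote, Selectors.SELECT_SELF);
    }
}
```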
Sample Java code for Yandex Cloud (https://cloud.yandex.ru/)
```java
// Upload a file
FileObject dest = fsManager.resolveFile("s3://s3-tests.storage.yandexcloud.net/backup.zip");
FileObject src = fsManager.resolveFile(new File("/path/to/local/file.zip").getAbsolutePath());
dest.copyFrom(src, Selectors.SELECT_SELF);
```
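For completeness, a short sketch of checking for and deleting an object on the same endpoint, using the example bucket from above:

```java
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.VFS;

public class DeleteExample {
    public static void main(String[] args) throws Exception {
        FileSystemManager fsManager = VFS.getManager();

        // Object uploaded in the example above
        FileObject backup = fsManager.resolveFile("s3://s3-tests.storage.yandexcloud.net/backup.zip");

        // Remove the object if it is there
        if (backup.exists()) {
            backup.delete();
        }
    }
}
```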
To run the tests you need valid AWS credentials. You can provide them as:

- Shell environment variables

  ```sh
  export AWS_ACCESS_KEY=AAAAAAA
  export AWS_SECRET_KEY=SSSSSSS
  export BASE_URL=s3://<full url like simple-bucket.s3-eu-west-1.amazonaws.com or s3-eu-west-1.amazonaws.com/test-bucket>
  ```

- Or any of the standard AWS SDK mechanisms (IAM role and so on)

Make sure that you never commit your credentials!
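As a hypothetical illustration only, not how the project's own test suite is wired, a sketch of consuming the `BASE_URL` variable from a small check program:

```java
import org.apache.commons.vfs2.FileObject;
import org.apache.commons.vfs2.FileSystemManager;
import org.apache.commons.vfs2.VFS;

public class BaseUrlSketch {
    public static void main(String[] args) throws Exception {
        // BASE_URL as exported above, e.g. s3://simple-bucket.s3-eu-west-1.amazonaws.com
        String baseUrl = System.getenv("BASE_URL");

        FileSystemManager fsManager = VFS.getManager();
        FileObject scratch = fsManager.resolveFile(baseUrl + "/vfs-s3-scratch/");

        // Create and remove an empty folder to verify that the credentials work
        scratch.createFolder();
        scratch.delete();
    }
}
```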
- TODO: shadow all dependencies inside the vfs-s3 artifact
Build status and code coverage badges for the older branches (branch-3.0.x, branch-2.4.x, branch-2.3.x, branch-2.2.x) are available in the project repository.