diff --git a/.gitignore b/.gitignore index 43e7cf1..bc0a134 100644 --- a/.gitignore +++ b/.gitignore @@ -116,4 +116,9 @@ dist .yarn/unplugged .yarn/build-state.yml .yarn/install-state.gz -.pnp.* \ No newline at end of file +.pnp.* + +.DS_Store + +test.js +*.odt \ No newline at end of file diff --git a/README.md b/README.md index aa682a9..82efad3 100644 --- a/README.md +++ b/README.md @@ -1,273 +1,76 @@ -# High available Node Client for OpenStack Switf Object Storage +# Tiny client for distributed S3/Swift storages + ![GitHub release (latest by date)](https://img.shields.io/github/v/release/carboneio/high-availability-object-storage?style=for-the-badge) [![Documentation](https://img.shields.io/badge/documentation-yes-brightgreen.svg?style=for-the-badge)](#api-usage) - -> High availability, Performances, and Simplicity are the main focus of this tiny Node SDK to request the OpenStack Object Storage API. It was initially made to request the OVHCloud Object storage, but it can be used for any OpenStack Object Storage. +> High availability, Performances, and Simplicity are the main focus of this tiny Node client to request AWS S3 API or the OpenStack Swift Object Storage API. It was initially made to request OVHCloud, but it can be used for any Server/Cloud provider. ## Highlights -* 🦄 **Simple to use** - Only 5 methods: `uploadFile`, `deleteFile`, `listFiles`, `downloadFile` and `request` for custom requests. -* 🌎 **High availability** - Initiate the SDK with a list of object storages credentials, and the SDK will switch storage if something goes wrong (Server/DNS not responding, timeout, error 500, too many redirection, authentication error, and more...). -* ✨ **Reconnect automatically** - If a request fails due to an authentication token expiration, the SDK fetches a new authentication token and retry the initial request with it. -* 🚀 **Performances** - Less than 500 lines of code with only 2 dependencies `simple-get` and `debug`. -* ✅ **100% tested** - Battle-tested against hundreds of GBs of file uploads & downloads - -## Install - -### 1. Prior installing - -you need a minimum of one object storage container, or you can synchronize Object Storages containers in order to access same objects if a fallback occur: -- Sync 2 containers: `1 <=> 2`. They would both need to share the same secret synchronization key. -- You can also set up a chain of synced containers if you want more than two. You would point `1 -> 2`, then `2 -> 3`, and finally `3 -> 1` for three containers. They would all need to share the same secret synchronization key. -Learn more [on the OpenStack documentation](https://docs.openstack.org/swift/latest/overview_container_sync.html) or [on the OVHCloud documentation](https://docs.ovh.com/us/en/storage/pcs/sync-container/). -
- Quick tutorial to synchronise 1 container into another with OVHCloud Object Storage (1 -> 2 one way sync) - - 1. Install the `swift-pythonclient`, an easy way to access Storages is with the Swift command line client, run on your terminal: - ``` - $ pip install python-swiftclient - ``` - 2. Download the OpenStack RC file on the OVH account to change environment variables. Tab `Public Cloud` > `Users & Roles` > Pick the user and “Download OpenStack’s RC file” - 3. Open a terminal, load the contents of the file into the current environment: - ```bash - $ source openrc.sh - ``` - 4. In order for the containers to identify themselves, a key must be created and then configured on each container: - ```bash - $ sharedKey=$(openssl rand -base64 32) - ``` - 5. See which region you are connected to: - ```bash - env | grep OS_REGION - ``` - 6. Retrieve the Account ID `AUTH_xxxxxxx` of the destination container in order to configure the source container: - ```bash - destContainer=$(swift --debug stat containerBHS 2>&1 | grep 'curl -i.*storage' | awk '{ print $4 }') && echo $destContainer - ``` - 7. Change to the source region: - ```bash - OS_REGION_NAME=RegionSource - ``` - 8. Upload the key and the destination sync url to the source container: - ```bash - $ swift post -t ‘//OVH_PUBLIC_CLOUD/RegionDestination/AUTH_xxxxxxxxx/containerNameDestination’ -k "$sharedKey" containerNameSource - ``` - 9. You can check that this has been configured by using the following command: - ```bash - $ swift stat containerName - ``` - 10. You can check if the synchronization worked by listing the files in each of the containers: - ```bash - $ OS_REGION_NAME=RegionSource && swift list containerName - $ OS_REGION_NAME=RegionDestination && swift list containerName - ``` -
- -### 2. Install the package with your package manager: - -```bash -$ npm install --save high-availability-object-storage -// od -$ yarn add high-availability-object-storage -``` -## API Usage - -### Connection - -Initialise the SDK with one or multiple storage, if something goes wrong, the next region will take over automatically. If any storage is available, an error message is returned `Error: Object Storages are not available`. +* 🦄 **Simple to use** - Only 5 methods: `uploadFile`, `deleteFile`, `listFiles`, `downloadFile` and `request` for custom requests. +* 🚀 **Performances** - Vanilla JS + Only 2 dependencies [simple-get](https://github.com/feross/simple-get) for HTTP requests and [aws4](https://github.com/mhart/aws4) for signing S3 requests. +* 🌎 **High availability** - Provide one or a list of storage credentials: the SDK will switch storage if something goes wrong (Server/DNS not responding, timeout, error 500, too many redirections, authentication error, and more...). As soon as the main storage is available again, the SDK switches back to it. +* ✨ **Reconnect automatically** - If a request fails due to an authentication token expiration, the SDK fetches a new authentication token and retries the initial request with it (Swift storage only). +* ✅ **100% tested** - Production battle-tested against hundreds of GBs of file uploads & downloads. +* 👉 **JSON responses** - XML responses are automatically converted to JavaScript objects (S3 storage only: `ListObjects` and `Errors`). +* 🚩 **Mixing S3 and Swift credentials is not supported** - When initialising the Tiny SDK client, provide only a list of S3 or a list of Swift credentials; switching from one storage system to another is not supported. + +## Getting Started + +Install and set up in less than 2 minutes: +- [AWS S3 API](./USAGE-S3.md) +- [OpenStack Swift Storage API](./USAGE-SWIFT.md) + +## Supported Methods + +| Swift API | S3 API | Method | Description | +|-------------------------|------------|-------------------|------------------------------------------------------------------------| +| ✅ [example](./USAGE-SWIFT.md#upload-a-file) | ✅ [example](./USAGE-S3.md#upload-a-file) | `uploadFile` | Upload a file from a Buffer or file absolute path. | +| ✅ [example](./USAGE-SWIFT.md#download-a-file) | ✅ [example](./USAGE-S3.md#download-a-file) | `downloadFile` | Download a file as Buffer or Stream | +| ✅ [example](./USAGE-SWIFT.md#delete-a-file) | ✅ [example](./USAGE-S3.md#delete-file) | `deleteFile` | Delete a file | +| ❌ | ✅ [example](./USAGE-S3.md#delete-files) | `deleteFiles` | Bulk delete files (1000 max per request) | +| ✅ [example](./USAGE-SWIFT.md#list-objects-from-a-container) | ✅ [example](./USAGE-S3.md#list-files) | `listFiles` | List files (1000 max per request); use query parameters for pagination | +| ✅ [example](./USAGE-SWIFT.md#get-file-metadata) | ✅ [example](./USAGE-S3.md#get-file-metadata) | `getFileMetadata` | Fetch custom metadata | +| ✅ [example](./USAGE-SWIFT.md#set-file-metadata) | ✅ [example](./USAGE-S3.md#set-file-metadata) | `setFileMetadata` | Set custom file metadata | +| ❌ | ✅ [example](./USAGE-S3.md#head-bucket) | `headBucket` | Determine if a bucket exists and you have permission to access it | +| ❌ | ✅ [example](./USAGE-S3.md#list-buckets) | `listBuckets` | Returns a list of all buckets owned by the authenticated sender of the request.
| +| ✅ [example](./USAGE-SWIFT.md#custom-request) | ✅ [example](./USAGE-S3.md#custom-requests) | `request` | Create custom requests | +| ✅ [example](./USAGE-SWIFT.md#connection) | ❌ | `connection` | Connection is required only for OpenStack Swift Object Storage to get a unique auth token | +| ❌ | ✅ [example](./USAGE-S3.md#bucket-alias) | Bucket Alias | Simplify requests by using bucket aliases | + + +## S3 Example + +The following example initialises the SDK client with a list of S3 credentials and downloads a file. +If something goes wrong when downloading the file, the SDK switches storage and retries the download with the second set of credentials. +As soon as the first storage is available again, the SDK switches back to it. ```js const storageSDK = require('high-availability-object-storage'); -let storage = storageSDK([{ - authUrl : 'https://auth.cloud.ovh.net/v3', - username : 'username-1', - password : 'password-1', - tenantName : 'tenantName-1', - region : 'region-1' +const s3storage = storageSDK({ + accessKeyId : 'accessKeyId', + secretAccessKey: 'secretAccessKey', + url : 's3.gra.io.cloud.ovh.net', + region : 'gra' }, { - authUrl : 'https://auth.cloud.ovh.net/v3', - username : 'username-2', - password : 'password-2', - tenantName : 'tenantName-2', - region : 'region-2' -}]); - -storage.connection((err) => { - if (err) { - // Invalid credentials - } - // Success, connected! + accessKeyId : 'accessKeyId', + secretAccessKey: 'secretAccessKey', + url : 's3.eu-west-3.amazonaws.com', + region : 'eu-west-3' }) -``` -### Upload a file -```js -const path = require(path); - -/** SOLUTION 1: The file content can be passed by giving the file absolute path **/ -storage.uploadFile('container', 'filename.jpg', path.join(__dirname, './assets/file.txt'), (err) => { +s3storage.downloadFile('bucketName', 'filename.pdf', (err, resp) => { if (err) { - // handle error - } - // success -}); - -/** SOLUTION 2: A buffer can be passed for the file content **/ -storage.uploadFile('container', 'filename.jpg', Buffer.from("File content"), (err) => { - if (err) { - // handle error - } - // success -}); - -/** SOLUTION 3: the function accepts a optionnal fourth argument `option` including query parameters and headers. List of query parameters and headers: https://docs.openstack.org/api-ref/object-store/?expanded=create-or-replace-object-detail#create-or-replace-object **/ -storage.uploadFile('container', 'filename.jpg', Buffer.from("File content"), { queries: { temp_url_expires: '1440619048' }, headers: { 'X-Object-Meta-LocationOrigin': 'Paris/France' }}, (err) => { - if (err) { - // handle error - } - // success -}); -``` - -### Download a file - -```js -storage.downloadFile('templates', 'filename.jpg', (err, body, headers) => { - if (err) { - // handle error - } - // success, the `body` argument is the content of the file as a Buffer -}); -``` - -### Delete a file - -```js -storage.deleteFile('templates', 'filename.jpg', (err) => { - if (err) { - // handle error - } - // success -}); -``` - -### List objects from a container - -```js -/** - * SOLUTION 1 - **/ -storage.listFiles('templates', function (err, body) { - if (err) { - // handle error - } - // success -}); - -/** - * SOLUTION 2 - * Possible to pass queries and overwrite request headers, list of options: https://docs.openstack.org/api-ref/object-store/? 
expanded=show-container-details-and-list-objects-detail#show-container-details-and-list-objects - **/ -storage.listFiles('templates', { queries: { prefix: 'prefixName' }, headers: { Accept: 'application/xml' } }, function (err, body) { - if (err) { - // handle error - } - // success -}); -``` - -### Get file metadata - -Shows object metadata. Checkout the list of [headers](https://docs.openstack.org/api-ref/object-store/?expanded=create-or-update-object-metadata-detail,show-object-metadata-detail#show-object-metadata). - -```js -storage.getFileMetadata('templates', 'filename.jpg', (err, headers) => { - if (err) { - // handle error + return console.log("Error on download: ", err); } /** - * Returned headers: { - * Content-Length: 14 - * Accept-Ranges: bytes - * Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT - * Etag: 451e372e48e0f6b1114fa0724aa79fa1 - * X-Timestamp: 1389906751.73463 - * X-Object-Meta-Book: GoodbyeColumbus - * Content-Type: application/octet-stream - * X-Trans-Id: tx37ea34dcd1ed48ca9bc7d-0052d84b6f - * X-Openstack-Request-Id: tx37ea34dcd1ed48ca9bc7d-0052d84b6f - * Date: Thu, 16 Jan 2014 21:13:19 GMT - * X-Object-Meta-Custom-Metadata-1: Value - * X-Object-Meta-Custom-Metadata-2: Value - * } - * // Details: https://docs.openstack.org/api-ref/object-store/?expanded=show-object-metadata-detail#show-object-metadata + * Request reponse: + * - resp.body => downloaded file as Buffer + * - resp.headers + * - resp.statusCode */ -}); -``` - -### Set file metadata - -To create or update custom metadata, use the "X-Object-Meta-name" header, where `name` is the name of the metadata item. The function overwrite all custom metadata applied on the file. -Checkout the list of [headers availables](https://docs.openstack.org/api-ref/object-store/?expanded=create-or-replace-object-detail,create-or-update-object-metadata-detail#create-or-update-object-metadata). - -```js -storage.setFileMetadata('templates', 'filename.jpg', { headers: { 'Content-Type': 'image/jpeg', 'X-Object-Meta-LocationOrigin': 'Paris/France', 'X-Delete-At': 1440619048 }} (err, headers) => { - if (err) { - // handle error - } - // success -}); -``` - -### Custom request - -The `request` function can be used to request the object storage with custom options. -Prototype to get the data as Buffer: -```js -request(method, path, { headers, queries, body }, (err, body, headers) => {}). -``` -Prototype to get the data as Stream, set the option `stream:true`: -```js -request(method, path, { headers, queries, body, stream: true }, (err, dataStream) => {})`. -``` - -The base URL requests by default the account, passing an empty string will request the account details. For container requests, pass the container name, such as: `/{container}`. For file requests, pass the container and the file, such as: `/{container}/{filename}`. Object Storage Swift API specification: https://docs.openstack.org/api-ref/object-store/ - -The `request` function automatically reconnects to the Object Storage or switch storage if something goes wrong. 
- -Example of custom request, bulk delete file from a `customerDocuments` container: -```js - const _headers = { - 'Content-Type': 'text/plain', - 'Accept' : 'application/json' - } - - storage.request('POST', '/customerDocuments?bulk-delete', { headers: _headers, body: 'file1\nfile2\n' }, (err, body, headers) => { - /** - * body: { - * "Number Not Found": 0, - * "Response Status": "200 OK", - * "Errors": [], - * "Number Deleted": 2, - * "Response Body": "" - * } - */ - done(); -}); -``` - - - -### Log - -The package uses debug to print logs into the terminal. To activate logs, you must pass the `DEBUG=*` environment variable. -You can use the `setLogFunction` to override the default log function. Create a function with two arguments: `message` as a string, `level` as a string and the value can be: `info`/`warning`/`error`. Example to use: -```js -storage.setLogFunction((message, level) => { - console.log(`${level} : ${message}`); }) ``` diff --git a/USAGE-S3.md b/USAGE-S3.md new file mode 100644 index 0000000..d71edd2 --- /dev/null +++ b/USAGE-S3.md @@ -0,0 +1,414 @@ +# Tiny Node client for distributed S3 + + +## Highlights + +* 🚀 Vanilla JS + Only 2 dependencies [simple-get](https://github.com/feross/simple-get) for HTTP requests and [aws4](https://github.com/mhart/aws4) for signing S3 requests. +* 🌎 Provide one or a list of S3 storage credentials: the SDK will switch storage if something goes wrong (Server/DNS not responding, timeout, error 500, too many redirections, authentication error, and more...). As soon as the main storage is available again, the SDK switches back to it. +* ✨ File names and request parameters are automatically encoded. +* ⚡️ Use a [Bucket alias](#bucket-alias) if you have synchronised buckets across multiple regions/datacenters. +* 👉 XML responses from S3 are automatically converted to JavaScript objects (for `ListObjects`, `deleteFiles` and any `Errors`). +* 🚩 When initialising the Tiny SDK client, provide only a list of S3 or a list of Swift credentials; switching from one storage system to another is not supported. +* ✅ Production battle-tested against hundreds of GBs of file uploads & downloads. + +## Install + +```bash +$ npm install --save high-availability-object-storage +// or +$ yarn add high-availability-object-storage +``` + +## API Usage + +### Setup + +Initialise the SDK with one or multiple storages: if something goes wrong (error 500 / timeout), the next region/provider takes over automatically. If no storage is available, an error message is returned: `Error: All S3 storages are not available`. + +In the following example, the SDK is initialised with the credentials of 2 cloud providers: an OVHCloud S3 storage and an AWS S3 storage. 
+ +```js + +const storageSDK = require('high-availability-object-storage'); + +const s3storage = storageSDK({ + accessKeyId : 'accessKeyId', + secretAccessKey: 'secretAccessKey', + url : 's3.gra.io.cloud.ovh.net', + region : 'gra' +}, +{ + accessKeyId : 'accessKeyId', + secretAccessKey: 'secretAccessKey', + url : 's3.eu-west-3.amazonaws.com', + region : 'eu-west-3' +}) +``` + +### Upload a file + +```js +const path = require('path'); + +/** SOLUTION 1: The file content can be passed by giving the file absolute path **/ +s3storage.uploadFile('bucketName', 'file.pdf', path.join(__dirname, 'dir2', 'file.pdf'), (err, resp) => { + if (err) { + return console.log("Error on upload: ", err.toString()); + } + /** + * Request response: + * - resp.body + * - resp.headers + * - resp.statusCode + */ +}) + +/** SOLUTION 2: A buffer can be passed for the file content **/ +s3storage.uploadFile('bucketName', 'file.pdf', Buffer.from('file-buffer'), (err, resp) => { + if (err) { + return console.log("Error on upload: ", err.toString()); + } + /** + * Request response: + * - resp.body + * - resp.headers + * - resp.statusCode + */ +}) + +/** SOLUTION 3: the function accepts an optional fourth argument `option` including query parameters and headers **/ +s3storage.uploadFile('bucketName', 'file.pdf', Buffer.from('file-buffer'), { + headers: { + "x-amz-meta-name": "invoice-2023", + "x-amz-meta-version": "1.85.2" + } +}, (err, resp) => { + if (err) { + return console.log("Error on upload: ", err.toString()); + } + /** + * Request response: + * - resp.body + * - resp.headers + * - resp.statusCode + */ +}) + +``` + +### Download a file + +```js +/** Solution 1: Download the file as Buffer */ +s3storage.downloadFile('bucketName', '2023-invoice.pdf', (err, resp) => { + if (err) { + return console.log("Error on download: ", err); + } + /** + * Request response: + * - resp.body => downloaded file as Buffer + * - resp.headers + * - resp.statusCode + */ +}) + +/** Solution 2: Download the file as Stream by providing the option `stream:true` */ +s3storage.downloadFile('bucketName', '2023-invoice.pdf', { stream: true }, (err, resp) => { + if (err) { + return console.log("Error on download: ", err); + } + /** + * Request response: + * - resp => file stream to pipe + * - resp.headers + * - resp.statusCode + */ +}) +``` + +### Delete file + +Removes an object. If the object does not exist, S3 storage will still respond that the command was successful. 
+ +```js +s3storage.deleteFile('bucketName', 'invoice-2023.pdf', (err, resp) => { + if (err) { + return console.log("Error on delete: ", err.toString()); + } + /** + * Request response: + * - resp.body => empty body + * - resp.headers + * - resp.statusCode + */ +}); +``` + +### Delete files + +Bulk delete files (maximum 1000 keys per request) + +```js +/** + * Create a list of objects; it can be: + * - a list of strings ["object1.pdf", "object2.docx", "object3.pptx"] + * - a list of objects with `keys` as attribute name [{ "keys": "object1.pdf"}, { "keys": "object2.docx" }, { "keys": "object3.pptx" }] +*/ +const files = ["object1.pdf", "object2.docx", "object3.pptx"]; + +s3storage.deleteFiles('bucketName', files, (err, resp) => { + if (err) { + return console.log("Error on deleting files: ", err.toString()); + } + /** + * Request response: + * - resp.headers + * - resp.statusCode + * - resp.body => body as JSON listing deleted files and errors: + * { + * deleted: [ + * { key: 'object1.pdf' }, + * { key: 'object2.docx' } + * ], + * error: [ + * { + * key : 'object3.pptx', + * code : 'AccessDenied', + * message: 'Access Denied' + * } + * ] + * } + */ +}); +``` + +### List files + +```js +/** Solution 1: only provide the bucket name */ +s3storage.listFiles('bucketName', function(err, resp) { + if (err) { + return console.log("Error on listing files: ", err.toString()); + } + /** + * Request response: + * - resp.headers + * - resp.statusCode + * - resp.body => list of files as JSON format: + * { + * "name": "bucketName", + * "keycount": 1, + * "maxkeys": 1000, + * "istruncated": false, + * "contents": [ + * { + * "key": "file-1.docx", + * "lastmodified": "2023-03-07T17:03:54.000Z", + * "etag": "7ad22b1297611d62ef4a4704c97afa6b", + * "size": 61396, + * "storageclass": "STANDARD" + * } + * ] + * } + */ +}); + +/** Solution 2: provide the bucket name and query parameters for pagination */ +const _queries = { + "max-keys": 100, + "start-after": "2022-02-invoice-client.pdf" +} +s3storage.listFiles('bucketName', { queries: _queries }, function(err, resp) { + if (err) { + return console.log("Error on listing files: ", err.toString()); + } + /** + * Request response: + * - resp.headers + * - resp.statusCode + * - resp.body => list of files as JSON format: + * { + * "name": "bucketName", + * "keycount": 1, + * "maxkeys": 100, + * "istruncated": false, + * "contents": [ + * { + * "key": "file-1.docx", + * "lastmodified": "2023-03-07T17:03:54.000Z", + * "etag": "7ad22b1297611d62ef4a4704c97afa6b", + * "size": 61396, + * "storageclass": "STANDARD" + * } + * ] + * } + */ +}); +``` + +### Get file metadata + +```js +s3storage.getFileMetadata('bucketName', '2023-invoice.pdf', (err, resp) => { + if (err) { + return console.log("Error on fetching metadata: ", err.toString()); + } + /** + * Request response: + * - resp.body => empty string + * - resp.headers => all custom metadata and headers + * - resp.statusCode + */ +}); +``` + +### Set file metadata + +Create custom metadata by providing headers starting with "x-amz-meta-", followed by a name to create a custom key. By default, metadata are replaced with the metadata provided in the request. Set the header `"x-amz-metadata-directive":"COPY"` to copy metadata from the source object. + +Metadata can be as large as 2KB total (2048 Bytes). To calculate the total size of user-defined metadata, sum the number of bytes in the UTF-8 encoding of each key and value. Both keys and their values must conform to US-ASCII standards. 
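For instance, here is a quick way to check that limit before calling `setFileMetadata` (a minimal sketch, not part of the SDK; the metadata keys and values are only illustrative):

```js
// Sum the UTF-8 byte length of every custom key and value and compare it to the 2 KB limit.
const metadata = {
  "x-amz-meta-name"   : "invoice-2023",
  "x-amz-meta-version": "1.85.2"
};

const totalBytes = Object.entries(metadata).reduce((sum, [key, value]) => {
  return sum + Buffer.byteLength(key, 'utf8') + Buffer.byteLength(String(value), 'utf8');
}, 0);

if (totalBytes > 2048) {
  console.log(`Metadata too large: ${totalBytes} bytes (limit is 2048)`);
}
```

The `setFileMetadata` call itself looks like this: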
+ +```js + +const _headers = { + "x-amz-meta-name": "2023-invoice-company.pdf", + "x-amz-meta-version": "2023-invoice-company.pdf" +} + +s3storage.setFileMetadata('steeve-test-bucket', 'template.odt', { headers: _headers }, (err, resp) => { + if (err) { + return console.log("Error on updating metadata: ", err.toString()); + } + /** + * Request response: + * - resp.body + * - resp.headers + * - resp.statusCode + */ +}) +``` +### Head Bucket + +The action `headBucket` is useful to determine if a bucket exists and whether you have permission to access it, based on the status code. A message body is not included, so you cannot determine the exception beyond these error codes. Two possible responses: + - The action returns a 200 OK if the bucket exists and you have permission to access it. + - If the bucket does not exist or you do not have permission to access it, the HEAD request returns a generic 400 Bad Request, 403 Forbidden or 404 Not Found code. + +```js +s3storage.headBucket('bucketName', (err, resp) => { + if (err) { + return console.log("Error head Bucket: ", err.toString()); + } + /** + * Request response: + * - resp.body => empty string + * - resp.headers + * - resp.statusCode + */ +}); +``` + +### List Buckets + +Returns a list of all buckets owned by the authenticated sender of the request. To use this operation, you must have the s3:ListAllMyBuckets permission. + +```js +s3storage.listBuckets((err, resp) => { + if (err) { + return console.log("Error list Buckets: ", err.toString()); + } + /** + * Request response: + * - resp.body => { bucket: [ { "name": "bucket1", "creationdate": "2023-02-27T11:46:24.000Z" } ] } + * - resp.headers + * - resp.statusCode + */ +}) +``` + +### Bucket Alias + +To simplify requests to custom-named buckets across different S3 providers, it is possible to create aliases by providing a `buckets` object in the credentials. When calling a function, pass the bucket alias as the first argument; it will request the currently active storage automatically. + +```js +const storageSDK = require('high-availability-object-storage'); + +const s3storage = storageSDK({ + accessKeyId : 'accessKeyId', + secretAccessKey: 'secretAccessKey', + url : 's3.gra.io.cloud.ovh.net', + region : 'gra', + buckets : { + invoices : "invoices-ovh-gra", + www : "www-ovh-gra" + } +}, +{ + accessKeyId : 'accessKeyId', + secretAccessKey: 'secretAccessKey', + url : 's3.eu-west-3.amazonaws.com', + region : 'eu-west-3', + buckets : { + invoices : "invoices-aws-west-3", + www : "www-aws-west-3" + } +}) + +/** + * In the following example, "downloadFile" will request the main storage "invoices-ovh-gra" + * or the backup "invoices-aws-west-3" if something goes wrong. + */ +s3storage.downloadFile('invoices', '2023-invoice.pdf', (err, resp) => { + if (err) { + return console.log("Error on download: ", err); + } + /** + * Request response: + * - resp.body => downloaded file as Buffer + * - resp.headers + * - resp.statusCode + */ +}) + +``` + +### Custom requests + +The `request` function can be used to request the object storage with custom options. +Prototype to get the data as Buffer: +```js +request(method, path, { headers, queries, body }, (err, resp) => { + /** + * Request response: + * - resp.body => body as Buffer + * - resp.headers + * - resp.statusCode + */ +}) 
+``` +Prototype to get the data as a Stream (set the option `stream: true`): +```js +request(method, path, { headers, queries, body, stream: true }, (err, resp) => { + /** + * Request response: + * - resp.body => body as Stream + * - resp.headers + * - resp.statusCode + */ +}) +``` +For container requests, pass the container name as `path`, such as: `/{container}`. For object requests, pass the container and the object name, such as: `/{container}/{object}`. + +### Logs + +By default, logs are printed with `console.log`. You can use the `setLogFunction` to override the default log function. Create a function with two arguments: `message`, a string, and `level`, a string whose value can be `info`, `warning` or `error`. Example: +```js +s3storage.setLogFunction((message, level) => { + console.log(`${level} : ${message}`); +}) +``` + +### Timeout + +The default request timeout is 5 seconds; change it by calling `setTimeout`: +```js +s3storage.setTimeout(30000); // 30 seconds +``` \ No newline at end of file diff --git a/USAGE-SWIFT.md b/USAGE-SWIFT.md new file mode 100644 index 0000000..92e8311 --- /dev/null +++ b/USAGE-SWIFT.md @@ -0,0 +1,274 @@ +# Tiny Node client for distributed OpenStack Swift Object Storage + +## Highlights + +* 🚀 Vanilla JS + Only 2 dependencies [simple-get](https://github.com/feross/simple-get) for HTTP requests and [aws4](https://github.com/mhart/aws4) for signing S3 requests. +* 🌎 Provide one or a list of storage credentials: the SDK will switch storage if something goes wrong (Server/DNS not responding, timeout, error 500, too many redirections, authentication error, and more...). As soon as the main storage is available again, the SDK switches back to it. +* ✨ If a request fails due to an authentication token expiration, the SDK fetches a new authentication token and retries the initial request with it (Swift storage only). +* 🚩 When initialising the Tiny SDK client, provide only a list of S3 or a list of Swift credentials; switching from one storage system to another is not supported. +* ✅ Production battle-tested against hundreds of GBs of file uploads & downloads. + +## Install + +### 1. Before installing + +You need at least one object storage container, or you can synchronize Object Storage containers in order to access the same objects if a fallback occurs: +- Sync 2 containers: `1 <=> 2`. They would both need to share the same secret synchronization key. +- You can also set up a chain of synced containers if you want more than two. You would point `1 -> 2`, then `2 -> 3`, and finally `3 -> 1` for three containers. They would all need to share the same secret synchronization key. +Learn more [on the OpenStack documentation](https://docs.openstack.org/swift/latest/overview_container_sync.html) or [on the OVHCloud documentation](https://docs.ovh.com/us/en/storage/pcs/sync-container/). + +<details>
+ Quick tutorial to synchronise 1 container into another with OVHCloud Object Storage (1 -> 2 one way sync) + + 1. Install the `swift-pythonclient`, an easy way to access Storages is with the Swift command line client, run on your terminal: + ``` + $ pip install python-swiftclient + ``` + 2. Download the OpenStack RC file on the OVH account to change environment variables. Tab `Public Cloud` > `Users & Roles` > Pick the user and “Download OpenStack’s RC file” + 3. Open a terminal, load the contents of the file into the current environment: + ```bash + $ source openrc.sh + ``` + 4. In order for the containers to identify themselves, a key must be created and then configured on each container: + ```bash + $ sharedKey=$(openssl rand -base64 32) + ``` + 5. See which region you are connected to: + ```bash + env | grep OS_REGION + ``` + 6. Retrieve the Account ID `AUTH_xxxxxxx` of the destination container in order to configure the source container: + ```bash + destContainer=$(swift --debug stat containerBHS 2>&1 | grep 'curl -i.*storage' | awk '{ print $4 }') && echo $destContainer + ``` + 7. Change to the source region: + ```bash + OS_REGION_NAME=RegionSource + ``` + 8. Upload the key and the destination sync url to the source container: + ```bash + $ swift post -t ‘//OVH_PUBLIC_CLOUD/RegionDestination/AUTH_xxxxxxxxx/containerNameDestination’ -k "$sharedKey" containerNameSource + ``` + 9. You can check that this has been configured by using the following command: + ```bash + $ swift stat containerName + ``` + 10. You can check if the synchronization worked by listing the files in each of the containers: + ```bash + $ OS_REGION_NAME=RegionSource && swift list containerName + $ OS_REGION_NAME=RegionDestination && swift list containerName + ``` +
+ +### 2. Install the package with your package manager: + +```bash +$ npm install --save high-availability-object-storage +// or +$ yarn add high-availability-object-storage +``` +## API Usage + +### Connection + +Initialise the SDK with one or multiple storages: if something goes wrong, the next region takes over automatically. If no storage is available, an error message is returned: `Error: Object Storages are not available`. + +```js +const storageSDK = require('high-availability-object-storage'); + +let storage = storageSDK([{ + authUrl : 'https://auth.cloud.ovh.net/v3', + username : 'username-1', + password : 'password-1', + tenantName : 'tenantName-1', + region : 'region-1' +}, +{ + authUrl : 'https://auth.cloud.ovh.net/v3', + username : 'username-2', + password : 'password-2', + tenantName : 'tenantName-2', + region : 'region-2' +}]); + +storage.connection((err) => { + if (err) { + // Invalid credentials + } + // Success, connected! +}) +``` +### Upload a file + +```js +const path = require('path'); + +/** SOLUTION 1: The file content can be passed by giving the file absolute path **/ +storage.uploadFile('container', 'filename.jpg', path.join(__dirname, './assets/file.txt'), (err) => { + if (err) { + // handle error + } + // success +}); + +/** SOLUTION 2: A buffer can be passed for the file content **/ +storage.uploadFile('container', 'filename.jpg', Buffer.from("File content"), (err) => { + if (err) { + // handle error + } + // success +}); + +/** SOLUTION 3: the function accepts an optional fourth argument `option` including query parameters and headers. List of query parameters and headers: https://docs.openstack.org/api-ref/object-store/?expanded=create-or-replace-object-detail#create-or-replace-object **/ +storage.uploadFile('container', 'filename.jpg', Buffer.from("File content"), { queries: { temp_url_expires: '1440619048' }, headers: { 'X-Object-Meta-LocationOrigin': 'Paris/France' }}, (err) => { + if (err) { + // handle error + } + // success +}); +``` + +### Download a file + +```js +storage.downloadFile('templates', 'filename.jpg', (err, body, headers) => { + if (err) { + // handle error + } + // success, the `body` argument is the content of the file as a Buffer +}); +``` + +### Delete a file + +```js +storage.deleteFile('templates', 'filename.jpg', (err) => { + if (err) { + // handle error + } + // success +}); +``` + +### List objects from a container + +```js +/** + * SOLUTION 1 + **/ +storage.listFiles('templates', function (err, body) { + if (err) { + // handle error + } + // success +}); + +/** + * SOLUTION 2 + * Possible to pass queries and overwrite request headers, list of options: https://docs.openstack.org/api-ref/object-store/?expanded=show-container-details-and-list-objects-detail#show-container-details-and-list-objects + **/ +storage.listFiles('templates', { queries: { prefix: 'prefixName' }, headers: { Accept: 'application/xml' } }, function (err, body) { + if (err) { + // handle error + } + // success +}); +``` + +### Get file metadata + +Shows object metadata. Check out the list of [headers](https://docs.openstack.org/api-ref/object-store/?expanded=create-or-update-object-metadata-detail,show-object-metadata-detail#show-object-metadata). 
+ +```js +storage.getFileMetadata('templates', 'filename.jpg', (err, headers) => { + if (err) { + // handle error + } + /** + * Returned headers: { + * Content-Length: 14 + * Accept-Ranges: bytes + * Last-Modified: Thu, 16 Jan 2014 21:12:31 GMT + * Etag: 451e372e48e0f6b1114fa0724aa79fa1 + * X-Timestamp: 1389906751.73463 + * X-Object-Meta-Book: GoodbyeColumbus + * Content-Type: application/octet-stream + * X-Trans-Id: tx37ea34dcd1ed48ca9bc7d-0052d84b6f + * X-Openstack-Request-Id: tx37ea34dcd1ed48ca9bc7d-0052d84b6f + * Date: Thu, 16 Jan 2014 21:13:19 GMT + * X-Object-Meta-Custom-Metadata-1: Value + * X-Object-Meta-Custom-Metadata-2: Value + * } + * // Details: https://docs.openstack.org/api-ref/object-store/?expanded=show-object-metadata-detail#show-object-metadata + */ +}); +``` + +### Set file metadata + +To create or update custom metadata, use the "X-Object-Meta-name" header, where `name` is the name of the metadata item. The function overwrites all custom metadata applied to the file. +Check out the list of [available headers](https://docs.openstack.org/api-ref/object-store/?expanded=create-or-replace-object-detail,create-or-update-object-metadata-detail#create-or-update-object-metadata). + +```js +storage.setFileMetadata('templates', 'filename.jpg', { headers: { 'Content-Type': 'image/jpeg', 'X-Object-Meta-LocationOrigin': 'Paris/France', 'X-Delete-At': 1440619048 }}, (err, headers) => { + if (err) { + // handle error + } + // success +}); +``` + +### Custom request + +The `request` function can be used to request the object storage with custom options. +Prototype to get the data as Buffer: +```js +request(method, path, { headers, queries, body }, (err, body, headers) => {}) +``` +Prototype to get the data as a Stream (set the option `stream: true`): +```js +request(method, path, { headers, queries, body, stream: true }, (err, dataStream) => {}) +``` + +By default, the base URL requests the account; passing an empty string will request the account details. For container requests, pass the container name, such as: `/{container}`. For file requests, pass the container and the file, such as: `/{container}/{filename}`. Object Storage Swift API specification: https://docs.openstack.org/api-ref/object-store/ + +The `request` function automatically reconnects to the Object Storage or switches storage if something goes wrong. + +Example of a custom request: bulk delete files from a `customerDocuments` container: +```js + const _headers = { + 'Content-Type': 'text/plain', + 'Accept' : 'application/json' + } + + storage.request('POST', '/customerDocuments?bulk-delete', { headers: _headers, body: 'file1\nfile2\n' }, (err, body, headers) => { + /** + * body: { + * "Number Not Found": 0, + * "Response Status": "200 OK", + * "Errors": [], + * "Number Deleted": 2, + * "Response Body": "" + * } + */ +}); +``` + + + +### Logs + +By default, logs are printed with `console.log`. You can use the `setLogFunction` to override the default log function. Create a function with two arguments: `message`, a string, and `level`, a string whose value can be `info`, `warning` or `error`. 
Example to use: +```js +storage.setLogFunction((message, level) => { + console.log(`${level} : ${message}`); +}) +``` + +### Timeout + +The default request timeout is 5 seconds, change it by calling `setTimeout`: +```js +storage.setTimeout(30000); // 30 seconds +``` \ No newline at end of file diff --git a/index.js b/index.js index 3e2d02d..b320e9c 100644 --- a/index.js +++ b/index.js @@ -1,709 +1,14 @@ -const get = require('simple-get'); -const fs = require('fs'); -const { Readable } = require('stream'); +const s3 = require('./s3.js'); +const swift = require('./swift.js'); -let _config = { - storages: [], - actifStorage: 0, - endpoints: {}, - token: '', - timeout: 5000 -} - -/** - * @description Authenticate and initialise the auth token and retreive the endpoint based on the region - * - * @param {function} callback function(err):void = The `err` is null by default, return an object if an error occurs. - */ -function connection (callback, originStorage = 0) { - const arrayArguments = [callback, originStorage]; - - if (_config.actifStorage === _config.storages.length) { - /** Reset the index of the actual storage */ - _config.actifStorage = 0; - log(`Object Storages are not available`, 'error'); - return callback(new Error('Object Storages are not available')); - } - const _storage = _config.storages[_config.actifStorage]; - log(`Object Storage index "${_config.actifStorage}" region "${_storage.region}" connection...`, 'info'); - const _json = { - auth : { - identity : { - methods : ['password'], - password : { - user : { - name : _storage.username, - domain : { id : 'default' }, - password : _storage.password - } - } - }, - scope : { - project : { - domain : { - id : 'default' - }, - name : _storage.tenantName - } - } - } - }; - - get.concat({ - url : `${_storage.authUrl}/auth/tokens`, - method : 'POST', - json : true, - body : _json, - timeout: _config.timeout - }, (err, res, data) => { - if (err) { - log(`Object Storage index "${_config.actifStorage}" region "${_storage.region}" Action "connection" ${err.toString()}`, 'error'); - activateFallbackStorage(originStorage); - arrayArguments[1] = _config.actifStorage; - return connection.apply(null, arrayArguments); - } - - if (res.statusCode < 200 || res.statusCode >= 300) { - log(`Object Storage index "${_config.actifStorage}" region "${_storage.region}" connexion failled | Status ${res.statusCode.toString()} | Message: ${res.statusMessage}`, 'error'); - activateFallbackStorage(originStorage); - arrayArguments[1] = _config.actifStorage; - return connection.apply(null, arrayArguments); - } - - _config.token = res.headers['x-subject-token']; - - const _serviceCatalog = data.token.catalog.find((element) => { - return element.type === 'object-store'; - }); - - if (!_serviceCatalog) { - log(`Object Storage index "${_config.actifStorage}" region "${_storage.region}" Storage catalog not found`, 'error'); - activateFallbackStorage(originStorage); - arrayArguments[1] = _config.actifStorage; - return connection.apply(null, arrayArguments); - } - - _config.endpoints = _serviceCatalog.endpoints.find((element) => { - return element.region === _storage.region; - }); - - if (!_config.endpoints) { - log(`Object Storage index "${_config.actifStorage}" region "${_storage.region} Storage endpoint not found, invalid region`, 'error'); - activateFallbackStorage(originStorage); - arrayArguments[1] = _config.actifStorage; - return connection.apply(null, arrayArguments); - } - log(`Object Storage index "${_config.actifStorage}" region "${_storage.region}" 
connected!`, 'info'); - return callback(null); - }); -} -/** - * @description List objects from a container. It is possible to pass as a second argument as an object with queries or headers to overwrite the request. - * - * @param {String} container container name - * @param {Object} options [OPTIONAL]: { headers: {}, queries: {} } List of headers and queries: https://docs.openstack.org/api-ref/object-store/?expanded=show-container-details-and-list-objects-detail#show-container-details-and-list-objects - * @param {function} callback function(err, body):void = The second argument `body` is the content of the file as a Buffer. The `err` argument is null by default, return an object if an error occurs. - */ -function listFiles(container, options, callback) { - const arrayArguments = [...arguments]; - - if (callback === undefined) { - callback = options; - arrayArguments.push(options); - options = { headers: {}, queries: {} }; - arrayArguments[1] = options; - } - - arrayArguments.push({ originStorage : _config.actifStorage }) - - const { headers, queries } = getHeaderAndQueryParameters(options); - get.concat({ - url : `${_config.endpoints.url}/${container}${queries}`, - method : 'GET', - headers : { - 'X-Auth-Token' : _config.token, - Accept : 'application/json', - ...headers - }, - timeout: _config.timeout - }, (err, res, body) => { - - /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ - res = res || {}; - res.error = err && err.toString().length > 0 ? err.toString() : null; - - checkIsConnected(res, 'listFiles', arrayArguments, (error) => { - if (error) { - return callback(error); - } - - if (res && res.statusCode === 404) { - return callback(new Error('Container does not exist')); - } - - err = err || checkResponseError(res); - - /** TODO: remove? it should never happen as every error switch to another storage */ - if (err) { - return callback(err); - } - - return callback(null, body); - }); - }); -} - -/** - * @description Save a file on the OVH Object Storage - * - * @param {string} container Container name - * @param {string} filename file to store - * @param {string|Buffer} localPathOrBuffer absolute path to the file - * @param {Object} options [OPTIONAL]: { headers: {}, queries: {} } List of query parameters and headers: https://docs.openstack.org/api-ref/object-store/?expanded=create-or-replace-object-detail#create-or-replace-object - * @param {function} callback function(err):void = The `err` is null by default, return an object if an error occurs. - * @returns {void} - */ -function uploadFile (container, filename, localPathOrBuffer, options, callback) { - let readStream = Buffer.isBuffer(localPathOrBuffer) === true ? 
Readable.from(localPathOrBuffer) : fs.createReadStream(localPathOrBuffer); - - const arrayArguments = [...arguments]; - - if (callback === undefined) { - callback = options; - arrayArguments.push(options); - options = { headers: {}, queries: {} }; - arrayArguments[3] = options; - } - - arrayArguments.push({ originStorage : _config.actifStorage }) - - const { headers, queries } = getHeaderAndQueryParameters(options); - get.concat({ - url : `${_config.endpoints.url}/${container}/${filename}${queries}`, - method : 'PUT', - body : readStream, - headers : { - 'X-Auth-Token' : _config.token, - Accept : 'application/json', - ...headers - }, - timeout: _config.timeout - }, (err, res, body) => { - - /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ - res = res || {}; - res.error = err && err.toString().length > 0 && err.code !== 'ENOENT' ? err.toString() : null; - - checkIsConnected(res, 'uploadFile', arrayArguments, (error) => { - if (error) { - return callback(error); - } - - err = err || checkResponseError(res, body.toString()); - - if (err) { - if (err.code === 'ENOENT') { - return callback(new Error('The local file does not exist')); - } - - return callback(err); - } - return callback(null); - }); - }); -} - -/** - * @description Download a file from the OVH Object Storage - * - * @param {string} container Container name - * @param {string} filename filename to download - * @param {function} callback function(err, body):void = The second argument `body` is the content of the file as a Buffer. The `err` argument is null by default, return an object if an error occurs. - * @returns {void} - */ -function downloadFile (container, filename, callback) { - - const arrayArguments = [...arguments, { originStorage : _config.actifStorage }]; - - get.concat({ - url : `${_config.endpoints.url}/${container}/${filename}`, - method : 'GET', - headers : { - 'X-Auth-Token' : _config.token, - Accept : 'application/json' - }, - timeout: _config.timeout - }, (err, res, body) => { - - /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ - res = res || {}; - res.error = err && err.toString().length > 0 ? err.toString() : null; - - checkIsConnected(res, 'downloadFile', arrayArguments, (error) => { - if (error) { - return callback(error); - } - - if (res && res.statusCode === 404) { - return callback(new Error('File does not exist')); - } - - err = err || checkResponseError(res); - - /** TODO: remove? it should never happen as every error switch to another storage */ - if (err) { - return callback(err); - } - - return callback(null, body, res.headers); - }); - }); -} - -/** - * @description Delete a file from the OVH Object Storage - * - * @param {string} container Container name - * @param {string} filename filename to store - * @param {function} callback function(err):void = The `err` argument is null by default, return an object if an error occurs. - * @returns {void} - */ -function deleteFile (container, filename, callback) { - - const arrayArguments = [...arguments, { originStorage : _config.actifStorage }]; - - get.concat({ - url : `${_config.endpoints.url}/${container}/${filename}`, - method : 'DELETE', - headers : { - 'X-Auth-Token' : _config.token, - Accept : 'application/json' - }, - timeout: _config.timeout - }, (err, res) => { - - /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ - res = res || {}; - res.error = err && err.toString().length > 0 ? 
err.toString() : null; - - checkIsConnected(res, 'deleteFile', arrayArguments, (error) => { - if (error) { - return callback(error); - } - - if (res && res.statusCode === 404) { - return callback(new Error('File does not exist')); - } - - err = err || checkResponseError(res); - - /** TODO: remove? it should never happen as every error switch to another storage */ - if (err) { - return callback(err); - } - - return callback(null); - }); - }); -} - -/** - * @description Get object metadata - * - * @param {string} container Container name - * @param {string} filename filename to store - * @param {function} callback function(err, headers):void = The `err` argument is null by default, return an object if an error occurs. - * @returns {void} - */ -function getFileMetadata(container, filename, callback) { - const arrayArguments = [...arguments, { originStorage : _config.actifStorage }]; - - get.concat({ - url : `${_config.endpoints.url}/${container}/${filename}`, - method : 'HEAD', - headers : { - 'X-Auth-Token' : _config.token, - Accept : 'application/json' - }, - timeout: _config.timeout - }, (err, res) => { - - /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ - res = res || {}; - res.error = err && err.toString().length > 0 ? err.toString() : null; - - checkIsConnected(res, 'getFileMetadata', arrayArguments, (error) => { - if (error) { - return callback(error); - } - - if (res && res.statusCode === 404) { - return callback(new Error('File does not exist')); - } - - err = err || checkResponseError(res); - - /** TODO: remove? it should never happen as every error switch to another storage */ - if (err) { - return callback(err); - } - - return callback(null, res.headers); - }); - }); - } - - /** - * @description Create or update object metadata. - * @description To create or update custom metadata - * @description use the X-Object-Meta-name header, - * @description where name is the name of the metadata item. - * - * @param {string} container Container name - * @param {string} filename file to store - * @param {string|Buffer} localPathOrBuffer absolute path to the file - * @param {Object} options { headers: {}, queries: {} } List of query parameters and headers: https://docs.openstack.org/api-ref/object-store/?expanded=create-or-update-object-metadata-detail#create-or-update-object-metadata - * @param {function} callback function(err, headers):void = The `err` is null by default, return an object if an error occurs. - * @returns {void} - */ -function setFileMetadata (container, filename, options, callback) { - - const arrayArguments = [...arguments]; - - if (callback === undefined) { - callback = options; - arrayArguments.push(options); - options = { headers: {}, queries: {} }; - arrayArguments[3] = options; - } - - arrayArguments.push({ originStorage : _config.actifStorage }) - - const { headers, queries } = getHeaderAndQueryParameters(options); - get.concat({ - url : `${_config.endpoints.url}/${container}/${filename}${queries}`, - method : 'POST', - headers : { - 'X-Auth-Token' : _config.token, - Accept : 'application/json', - ...headers - }, - timeout: _config.timeout - }, (err, res) => { - - /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ - res = res || {}; - res.error = err && err.toString().length > 0 ? 
err.toString() : null; - - checkIsConnected(res, 'setFileMetadata', arrayArguments, (error) => { - if (error) { - return callback(error); - } - - if (res && res.statusCode === 404) { - return callback(new Error('File does not exist')); - } - - err = err || checkResponseError(res); - - /** TODO: remove? it should never happen as every error switch to another storage */ - if (err) { - return callback(err); - } - return callback(null, res.headers); - }); - }); -} - - /** - * @description Send a custom request to the object storage - * - * @param {string} method HTTP method used (POST, COPY, etc...) - * @param {string} path path requested, passing an empty string will request the account details. For container request pass the container name, such as: '/containerName'. For file request, pass the container and the file, such as: '/container/filename.txt'. - * @param {Object} options { headers: {}, queries: {}, body: '' } Pass to the request the body, query parameters and/or headers. List of headers: https://docs.openstack.org/api-ref/object-store/?expanded=create-or-update-object-metadata-detail#create-or-update-object-metadata - * @param {function} callback function(err, body, headers):void = The `err` is null by default. - * @returns {void} - */ -function request (method, path, options, callback) { - - const arrayArguments = [...arguments]; - - if (callback === undefined) { - callback = options; - arrayArguments.push(options); - options = { headers: {}, queries: {}, body: null }; - arrayArguments[3] = options; - } - - arrayArguments.push({ originStorage : _config.actifStorage }) - - const { headers, queries, body } = getHeaderAndQueryParameters(options); - - const _requestOptions = { - url : `${_config.endpoints.url}${path}${queries}`, - method : method, - headers : { - 'X-Auth-Token' : _config.token, - Accept : 'application/json', - ...headers - }, - timeout: _config.timeout, - ...(body ? { body } : {}) - } - - const _requestCallback = function (err, res, body) { - /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ - res = res || {}; - res.error = err && err.toString().length > 0 ? err.toString() : null; - checkIsConnected(res, 'request', arrayArguments, (error) => { - if (error) { - return callback(error); - } - err = err || checkResponseError(res); - - /** TODO: remove? it should never happen as every error switch to another storage */ - if (err) { - return callback(err); - } - return options?.stream === true ? callback(null, res) : callback(null, body, res.headers); - }); - } - return options?.stream === true ? get(_requestOptions, _requestCallback) : get.concat(_requestOptions, _requestCallback); -} - -/** - * @description Check the response status code and return an Error. - * - * @param {Object} response Response object from request - * @returns {null|Error} - */ -function checkResponseError (response, body = '') { - /** TODO: remove? it should never happen as every error switch to another storage */ - if (!response) { - return new Error('No response'); - } - - if (response.statusCode < 200 || response.statusCode >= 300) { - return new Error(`${response.statusCode.toString()} ${response.statusMessage || body}`); - } - - return null; -} - -/** - * @description Check if the request is authorized, if not, it authenticate again to generate a new token, and execute again the initial request. - * - * @param {Object} response Request response - * @param {String} from Original function called - * @param {Object} args Arguments of the original function. 
- * @param {function} callback function(err):void = The `err` argument is null by default, return an object if an error occurs. - * @returns {void} - */ -function checkIsConnected (response, from, args, callback) { - if (!response || (response?.statusCode < 500 && response?.statusCode !== 401) || (!response?.statusCode && !!response?.error !== true)) { - return callback(null); - } - - if (response?.statusCode >= 500) { - log(`Object Storage index "${_config.actifStorage}" region "${_config.storages[_config.actifStorage].region}" Action "${from}" Status ${response?.statusCode}`, 'error'); - activateFallbackStorage(args[args.length - 1].originStorage); - } - - if (!!response?.error === true) { - log(`Object Storage index "${_config.actifStorage}" region "${_config.storages[_config.actifStorage].region}" Action "${from}" ${response.error}`, 'error'); - activateFallbackStorage(args[args.length - 1].originStorage); - } - - if (response?.statusCode === 401) { - log(`Object Storage index "${_config.actifStorage}" region "${_config.storages[_config.actifStorage].region}" try reconnect...`, 'info'); - } - - // Reconnect to object storage - connection((err) => { - if (err) { - return callback(err); - } - - switch (from) { - case 'downloadFile': - downloadFile.apply(null, args); - break; - case 'uploadFile': - uploadFile.apply(null, args); - break; - case 'deleteFile': - deleteFile.apply(null, args); - break; - case 'listFiles': - listFiles.apply(null, args); - break; - case 'getFileMetadata': - getFileMetadata.apply(null, args); - break; - case 'setFileMetadata': - setFileMetadata.apply(null, args); - break; - case 'request': - request.apply(null, args); - break; - default: - /** TODO: remove? it should never happen */ - return callback(null); - } - }, args[args.length - 1].originStorage); -} - - -/** - * @description Set and overwrite the Object Storage SDK configurations - * - * @param {Object} config - * @param {String} config.authUrl URL used for authentication, default: "https://auth.cloud.ovh.net/v3" - * @param {String} config.username Username for authentication - * @param {String} config.password Password for authentication - * @param {String} config.tenantName Tenant Name/Tenant ID for authentication - * @param {String} config.region Region used to retreive the Object Storage endpoint to request - */ -function setStorages(storages) { - _config.token = ''; - _config.endpoints = {}; - _config.actifStorage = 0; - if (Array.isArray(storages) === true) { - /** List of storage */ - _config.storages = storages; - } else if (typeof storages === 'object') { - /** Only a single storage is passed */ - _config.storages = []; - _config.storages.push(storages) - } -} - -/** - * Set the timeout - * - * @param {Integer} timeout - */ -function setTimeout(timeout) { - _config.timeout = timeout; -} - -/** - * @description Return the list of storages - * - * @returns {String} The list of storages - */ -function getStorages() { - return _config.storages; -} - -/** - * @description Return the configuration object - * - * @returns {String} The list of storages - */ -function getConfig() { - return _config; -} - -/** - * log messages - * - * @param {String} msg Message - * @param {type} type warning, error - */ -function log(msg, level = 'info') { - return console.log(level === 'error' ? `❗️ Error: ${msg}` : level === 'warning' ? 
`⚠️ ${msg}` : msg ); -} - -/** - * Override the log function, it takes to arguments: message, level - * @param {Function} newLogFunction (message, level) => {} The level can be: `info`, `warning`, `error` - */ -function setLogFunction (newLogFunction) { - if (newLogFunction) { - // eslint-disable-next-line no-func-assign - log = newLogFunction; - } -} - -/** - * - * @description Initialise and return an instance of the Object Storage SDK. - * - * @param {Object} config - * @param {String} config.authUrl URL used for authentication, default: "https://auth.cloud.ovh.net/v3" - * @param {String} config.username Username for authentication - * @param {String} config.password Password for authentication - * @param {String} config.tenantName Tenant Name/Tenant ID for authentication - * @param {String} config.region Region used to retreive the Object Storage endpoint to request - */ module.exports = (config) => { - setStorages(config) - return { - connection, - uploadFile, - downloadFile, - deleteFile, - listFiles, - getFileMetadata, - setFileMetadata, - setTimeout, - setStorages, - getStorages, - getConfig, - setLogFunction, - request - } -} - -/** ============ Utils =========== */ - -/** - * Convert an Object of query parameters into a string - * Example: { "prefix" : "user_id_1234", "format" : "xml"} => "?prefix=user_id_1234&format=xml" - * - * @param {Object} queries - * @returns - */ -function getQueryParameters (queries) { - let _queries = ''; - - if (queries && typeof queries === "object") { - const _queriesEntries = Object.keys(queries); - const _totalQueries = _queriesEntries.length; - for (let i = 0; i < _totalQueries; i++) { - if (i === 0) { - _queries += '?' - } - _queries += `${_queriesEntries[i]}=${queries[_queriesEntries[i]]}` - if (i + 1 !== _totalQueries) { - _queries += '&' - } - } - } - return _queries; -} - -function getHeaderAndQueryParameters (options) { - let headers = {}; - let queries = ''; - let body = null - - if (options?.queries) { - queries = getQueryParameters(options.queries); - } - if (options?.headers) { - headers = options.headers; - } - if (options?.body) { - body = options.body; - } - return { headers, queries, body } -} - -function activateFallbackStorage(originStorage) { - if (originStorage === _config.actifStorage && _config.actifStorage + 1 <= _config.storages.length) { - _config.actifStorage += 1; - log(`Object Storage Activate Fallback Storage index "${_config.actifStorage}" 🚩`, 'warning'); + /** Check the first credential and return storage type: S3 or Swift client */ + const _auth = Array.isArray(config) && config.length > 0 ? 
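+  /* A usage sketch (assumption: consumers import the package by its package.json name;
+     the credential values below are placeholders taken from the test fixtures):
+       const storage = require('tiny-storage-client')({
+         accessKeyId    : 'ACCESS_KEY',
+         secretAccessKey: 'SECRET_KEY',
+         url            : 's3.gra.first.cloud.test',
+         region         : 'gra'
+       });
+     The four S3 fields above select the S3 client; providing username/password/authUrl/tenantName/region
+     would select the Swift client instead. */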
config[0] : config; + if (_auth?.accessKeyId && _auth?.secretAccessKey && _auth?.url && _auth?.region) { + return s3(config); + } else if (_auth?.username && _auth?.password && _auth?.authUrl && _auth?.tenantName && _auth?.region) { + return swift(config); + } else { + throw new Error("Storage connexion not recognised - did you provide correct credentials for a S3 or Swift storage?") } } \ No newline at end of file diff --git a/package-lock.json b/package-lock.json index 7ebb11c..d755755 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,32 +1,57 @@ { - "name": "high-availability-object-storage", - "version": "0.3.0", + "name": "tiny-storage-client", + "version": "1.0.0", "lockfileVersion": 2, "requires": true, "packages": { "": { - "name": "high-availability-object-storage", - "version": "0.3.0", + "name": "tiny-storage-client", + "version": "1.0.0", "license": "Apache-2.0", "dependencies": { + "aws4": "=1.12.0", "simple-get": "=4.0.1" }, "devDependencies": { - "eslint": "=8.26.0", - "mocha": "=10.1.0", - "nock": "=13.2.9" + "eslint": "=8.36.0", + "mocha": "=10.2.0", + "nock": "=13.3.0" + } + }, + "node_modules/@eslint-community/eslint-utils": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.3.0.tgz", + "integrity": "sha512-v3oplH6FYCULtFuCeqyuTd9D2WKO937Dxdq+GmHOLL72TTRriLxz2VLlNfkZRsvj6PKnOPAtuT6dwrs/pA5DvA==", + "dev": true, + "dependencies": { + "eslint-visitor-keys": "^3.3.0" + }, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + }, + "peerDependencies": { + "eslint": "^6.0.0 || ^7.0.0 || >=8.0.0" + } + }, + "node_modules/@eslint-community/regexpp": { + "version": "4.4.0", + "resolved": "https://registry.npmjs.org/@eslint-community/regexpp/-/regexpp-4.4.0.tgz", + "integrity": "sha512-A9983Q0LnDGdLPjxyXQ00sbV+K+O+ko2Dr+CZigbHWtX9pNfxlaBkMR8X1CztI73zuEyEBXTVjx7CE+/VSwDiQ==", + "dev": true, + "engines": { + "node": "^12.0.0 || ^14.0.0 || >=16.0.0" } }, "node_modules/@eslint/eslintrc": { - "version": "1.3.3", - "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-1.3.3.tgz", - "integrity": "sha512-uj3pT6Mg+3t39fvLrj8iuCIJ38zKO9FpGtJ4BBJebJhEwjoT+KLVNCcHT5QC9NGRIEi7fZ0ZR8YRb884auB4Lg==", + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-2.0.1.tgz", + "integrity": "sha512-eFRmABvW2E5Ho6f5fHLqgena46rOj7r7OKHYfLElqcBfGFHHpjBhivyi5+jOEQuSpdc/1phIZJlbC2te+tZNIw==", "dev": true, "dependencies": { "ajv": "^6.12.4", "debug": "^4.3.2", - "espree": "^9.4.0", - "globals": "^13.15.0", + "espree": "^9.5.0", + "globals": "^13.19.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": "^4.1.0", @@ -40,10 +65,19 @@ "url": "https://opencollective.com/eslint" } }, + "node_modules/@eslint/js": { + "version": "8.36.0", + "resolved": "https://registry.npmjs.org/@eslint/js/-/js-8.36.0.tgz", + "integrity": "sha512-lxJ9R5ygVm8ZWgYdUweoq5ownDlJ4upvoWmO4eLxBYHdMo+vZ/Rx0EN6MbKWDJOSUGrqJy2Gt+Dyv/VKml0fjg==", + "dev": true, + "engines": { + "node": "^12.22.0 || ^14.17.0 || >=16.0.0" + } + }, "node_modules/@humanwhocodes/config-array": { - "version": "0.11.7", - "resolved": "https://registry.npmjs.org/@humanwhocodes/config-array/-/config-array-0.11.7.tgz", - "integrity": "sha512-kBbPWzN8oVMLb0hOUYXhmxggL/1cJE6ydvjDIGi9EnAGUyA7cLVKQg+d/Dsm+KZwx2czGHrCmMVLiyg8s5JPKw==", + "version": "0.11.8", + "resolved": "https://registry.npmjs.org/@humanwhocodes/config-array/-/config-array-0.11.8.tgz", + "integrity": 
"sha512-UybHIJzJnR5Qc/MsD9Kr+RpO2h+/P1GhOwdiLPXK5TWk5sgTdu88bTD9UP+CKbPPh5Rni1u0GjAdYQLemG8g+g==", "dev": true, "dependencies": { "@humanwhocodes/object-schema": "^1.2.1", @@ -109,9 +143,9 @@ } }, "node_modules/acorn": { - "version": "8.8.1", - "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.8.1.tgz", - "integrity": "sha512-7zFpHzhnqYKrkYdUjF1HI1bzd0VygEGX8lFk4k5zVMqHEoES+P+7TKI+EvLO9WVMJ8eekdO0aDEK044xTXwPPA==", + "version": "8.8.2", + "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.8.2.tgz", + "integrity": "sha512-xjIYgE8HBrkpd/sJqOGNspf8uHG+NOHGOw6a/Urj8taM2EXfdNAH2oFcPeIFfsv3+kz/mJrS5VuMqbNLjCa2vw==", "dev": true, "bin": { "acorn": "bin/acorn" @@ -197,6 +231,11 @@ "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==", "dev": true }, + "node_modules/aws4": { + "version": "1.12.0", + "resolved": "https://registry.npmjs.org/aws4/-/aws4-1.12.0.tgz", + "integrity": "sha512-NmWvPnx0F1SfrQbYwOi7OeaNGokp9XhzNioJ/CSBs8Qa4vxug81mhJEAVZwxXuBmYB5KDRfMq/F3RR0BIU7sWg==" + }, "node_modules/balanced-match": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", @@ -463,13 +502,16 @@ } }, "node_modules/eslint": { - "version": "8.26.0", - "resolved": "https://registry.npmjs.org/eslint/-/eslint-8.26.0.tgz", - "integrity": "sha512-kzJkpaw1Bfwheq4VXUezFriD1GxszX6dUekM7Z3aC2o4hju+tsR/XyTC3RcoSD7jmy9VkPU3+N6YjVU2e96Oyg==", + "version": "8.36.0", + "resolved": "https://registry.npmjs.org/eslint/-/eslint-8.36.0.tgz", + "integrity": "sha512-Y956lmS7vDqomxlaaQAHVmeb4tNMp2FWIvU/RnU5BD3IKMD/MJPr76xdyr68P8tV1iNMvN2mRK0yy3c+UjL+bw==", "dev": true, "dependencies": { - "@eslint/eslintrc": "^1.3.3", - "@humanwhocodes/config-array": "^0.11.6", + "@eslint-community/eslint-utils": "^4.2.0", + "@eslint-community/regexpp": "^4.4.0", + "@eslint/eslintrc": "^2.0.1", + "@eslint/js": "8.36.0", + "@humanwhocodes/config-array": "^0.11.8", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "ajv": "^6.10.0", @@ -479,16 +521,15 @@ "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", "eslint-scope": "^7.1.1", - "eslint-utils": "^3.0.0", "eslint-visitor-keys": "^3.3.0", - "espree": "^9.4.0", - "esquery": "^1.4.0", + "espree": "^9.5.0", + "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", - "globals": "^13.15.0", + "globals": "^13.19.0", "grapheme-splitter": "^1.0.4", "ignore": "^5.2.0", "import-fresh": "^3.0.0", @@ -503,7 +544,6 @@ "minimatch": "^3.1.2", "natural-compare": "^1.4.0", "optionator": "^0.9.1", - "regexpp": "^3.2.0", "strip-ansi": "^6.0.1", "strip-json-comments": "^3.1.0", "text-table": "^0.2.0" @@ -531,33 +571,6 @@ "node": "^12.22.0 || ^14.17.0 || >=16.0.0" } }, - "node_modules/eslint-utils": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/eslint-utils/-/eslint-utils-3.0.0.tgz", - "integrity": "sha512-uuQC43IGctw68pJA1RgbQS8/NP7rch6Cwd4j3ZBtgo4/8Flj4eGE7ZYSZRN3iq5pVUv6GPdW5Z1RFleo84uLDA==", - "dev": true, - "dependencies": { - "eslint-visitor-keys": "^2.0.0" - }, - "engines": { - "node": "^10.0.0 || ^12.0.0 || >= 14.0.0" - }, - "funding": { - "url": "https://github.com/sponsors/mysticatea" - }, - "peerDependencies": { - "eslint": ">=5" - } - }, - "node_modules/eslint-utils/node_modules/eslint-visitor-keys": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-2.1.0.tgz", - "integrity": 
"sha512-0rSmRBzXgDzIsD6mGdJgevzgezI534Cer5L/vyMX0kHzT/jiB43jRhd9YUlMGYLQy2zprNmoT8qasCGtY+QaKw==", - "dev": true, - "engines": { - "node": ">=10" - } - }, "node_modules/eslint-visitor-keys": { "version": "3.3.0", "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.3.0.tgz", @@ -568,9 +581,9 @@ } }, "node_modules/espree": { - "version": "9.4.0", - "resolved": "https://registry.npmjs.org/espree/-/espree-9.4.0.tgz", - "integrity": "sha512-DQmnRpLj7f6TgN/NYb0MTzJXL+vJF9h3pHy4JhCIs3zwcgez8xmGg3sXHcEO97BrmO2OSvCwMdfdlyl+E9KjOw==", + "version": "9.5.0", + "resolved": "https://registry.npmjs.org/espree/-/espree-9.5.0.tgz", + "integrity": "sha512-JPbJGhKc47++oo4JkEoTe2wjy4fmMwvFpgJT9cQzmfXKp22Dr6Hf1tdCteLz1h0P3t+mGvWZ+4Uankvh8+c6zw==", "dev": true, "dependencies": { "acorn": "^8.8.0", @@ -585,9 +598,9 @@ } }, "node_modules/esquery": { - "version": "1.4.0", - "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.4.0.tgz", - "integrity": "sha512-cCDispWt5vHHtwMY2YrAQ4ibFkAL8RbH5YGBnZBc90MolvvfkkQcJro/aZiAQUlQ3qgrYS6D6v8Gc5G5CQsc9w==", + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.5.0.tgz", + "integrity": "sha512-YQLXUplAwJgCydQ78IMJywZCceoqk1oH01OERdSAJc/7U2AylwjhSCLDEtqwg811idIS/9fIU5GjG73IgjKMVg==", "dev": true, "dependencies": { "estraverse": "^5.1.0" @@ -783,9 +796,9 @@ } }, "node_modules/globals": { - "version": "13.17.0", - "resolved": "https://registry.npmjs.org/globals/-/globals-13.17.0.tgz", - "integrity": "sha512-1C+6nQRb1GwGMKm2dH/E7enFAMxGTmGI7/dEdhy/DNelv85w9B72t3uc5frtMNXIbzrarJJ/lTCjcaZwbLJmyw==", + "version": "13.20.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-13.20.0.tgz", + "integrity": "sha512-Qg5QtVkCy/kv3FUSlu4ukeZDVf9ee0iXLAUYX13gbR17bnejFTzr4iS9bY7kwCf1NztRNm1t91fjOiyx4CSwPQ==", "dev": true, "dependencies": { "type-fest": "^0.20.2" @@ -822,9 +835,9 @@ } }, "node_modules/ignore": { - "version": "5.2.0", - "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.0.tgz", - "integrity": "sha512-CmxgYGiEPCLhfLnpPp1MoRmifwEIOgjcHXxOBjv7mY96c+eWScsOP9c112ZyLdWHi0FxHjI+4uVhKYp/gcdRmQ==", + "version": "5.2.4", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.4.tgz", + "integrity": "sha512-MAb38BcSbH0eHNBxn7ql2NH/kX33OkB3lZ1BNdh7ENeRChHTYsTvWrMubiIAMNS2llXEEgZ1MUOBtXChP3kaFQ==", "dev": true, "engines": { "node": ">= 4" @@ -1074,9 +1087,9 @@ } }, "node_modules/mocha": { - "version": "10.1.0", - "resolved": "https://registry.npmjs.org/mocha/-/mocha-10.1.0.tgz", - "integrity": "sha512-vUF7IYxEoN7XhQpFLxQAEMtE4W91acW4B6En9l97MwE9stL1A9gusXfoHZCLVHDUJ/7V5+lbCM6yMqzo5vNymg==", + "version": "10.2.0", + "resolved": "https://registry.npmjs.org/mocha/-/mocha-10.2.0.tgz", + "integrity": "sha512-IDY7fl/BecMwFHzoqF2sg/SHHANeBoMMXFlS9r0OXKDssYE1M5O43wUY/9BVPeIvfH2zmEbBfseqN9gBQZzXkg==", "dev": true, "dependencies": { "ansi-colors": "4.1.1", @@ -1180,9 +1193,9 @@ "dev": true }, "node_modules/nock": { - "version": "13.2.9", - "resolved": "https://registry.npmjs.org/nock/-/nock-13.2.9.tgz", - "integrity": "sha512-1+XfJNYF1cjGB+TKMWi29eZ0b82QOvQs2YoLNzbpWGqFMtRQHTa57osqdGj4FrFPgkO4D4AZinzUJR9VvW3QUA==", + "version": "13.3.0", + "resolved": "https://registry.npmjs.org/nock/-/nock-13.3.0.tgz", + "integrity": "sha512-HHqYQ6mBeiMc+N038w8LkMpDCRquCHWeNmN3v6645P3NhN2+qXOBqvPqo7Rt1VyCMzKhJ733wZqw5B7cQVFNPg==", "dev": true, "dependencies": { "debug": "^4.1.0", @@ -1328,9 +1341,9 @@ } }, "node_modules/punycode": { - "version": "2.1.1", - "resolved": 
"https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz", - "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==", + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.0.tgz", + "integrity": "sha512-rRV+zQD8tVFys26lAGR9WUuS4iUAngJScM+ZRSKtvl5tKeZ2t5bvdNFdNHBW9FWR4guGHlgmsZ1G7BSm2wTbuA==", "dev": true, "engines": { "node": ">=6" @@ -1377,18 +1390,6 @@ "node": ">=8.10.0" } }, - "node_modules/regexpp": { - "version": "3.2.0", - "resolved": "https://registry.npmjs.org/regexpp/-/regexpp-3.2.0.tgz", - "integrity": "sha512-pq2bWo9mVD43nbts2wGv17XLiNLya+GklZ8kaDLV2Z08gDCsGpnKn9BFMepvWuHCbyVvY7J5o5+BVvoQbmlJLg==", - "dev": true, - "engines": { - "node": ">=8" - }, - "funding": { - "url": "https://github.com/sponsors/mysticatea" - } - }, "node_modules/require-directory": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", @@ -1766,16 +1767,31 @@ } }, "dependencies": { + "@eslint-community/eslint-utils": { + "version": "4.3.0", + "resolved": "https://registry.npmjs.org/@eslint-community/eslint-utils/-/eslint-utils-4.3.0.tgz", + "integrity": "sha512-v3oplH6FYCULtFuCeqyuTd9D2WKO937Dxdq+GmHOLL72TTRriLxz2VLlNfkZRsvj6PKnOPAtuT6dwrs/pA5DvA==", + "dev": true, + "requires": { + "eslint-visitor-keys": "^3.3.0" + } + }, + "@eslint-community/regexpp": { + "version": "4.4.0", + "resolved": "https://registry.npmjs.org/@eslint-community/regexpp/-/regexpp-4.4.0.tgz", + "integrity": "sha512-A9983Q0LnDGdLPjxyXQ00sbV+K+O+ko2Dr+CZigbHWtX9pNfxlaBkMR8X1CztI73zuEyEBXTVjx7CE+/VSwDiQ==", + "dev": true + }, "@eslint/eslintrc": { - "version": "1.3.3", - "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-1.3.3.tgz", - "integrity": "sha512-uj3pT6Mg+3t39fvLrj8iuCIJ38zKO9FpGtJ4BBJebJhEwjoT+KLVNCcHT5QC9NGRIEi7fZ0ZR8YRb884auB4Lg==", + "version": "2.0.1", + "resolved": "https://registry.npmjs.org/@eslint/eslintrc/-/eslintrc-2.0.1.tgz", + "integrity": "sha512-eFRmABvW2E5Ho6f5fHLqgena46rOj7r7OKHYfLElqcBfGFHHpjBhivyi5+jOEQuSpdc/1phIZJlbC2te+tZNIw==", "dev": true, "requires": { "ajv": "^6.12.4", "debug": "^4.3.2", - "espree": "^9.4.0", - "globals": "^13.15.0", + "espree": "^9.5.0", + "globals": "^13.19.0", "ignore": "^5.2.0", "import-fresh": "^3.2.1", "js-yaml": "^4.1.0", @@ -1783,10 +1799,16 @@ "strip-json-comments": "^3.1.1" } }, + "@eslint/js": { + "version": "8.36.0", + "resolved": "https://registry.npmjs.org/@eslint/js/-/js-8.36.0.tgz", + "integrity": "sha512-lxJ9R5ygVm8ZWgYdUweoq5ownDlJ4upvoWmO4eLxBYHdMo+vZ/Rx0EN6MbKWDJOSUGrqJy2Gt+Dyv/VKml0fjg==", + "dev": true + }, "@humanwhocodes/config-array": { - "version": "0.11.7", - "resolved": "https://registry.npmjs.org/@humanwhocodes/config-array/-/config-array-0.11.7.tgz", - "integrity": "sha512-kBbPWzN8oVMLb0hOUYXhmxggL/1cJE6ydvjDIGi9EnAGUyA7cLVKQg+d/Dsm+KZwx2czGHrCmMVLiyg8s5JPKw==", + "version": "0.11.8", + "resolved": "https://registry.npmjs.org/@humanwhocodes/config-array/-/config-array-0.11.8.tgz", + "integrity": "sha512-UybHIJzJnR5Qc/MsD9Kr+RpO2h+/P1GhOwdiLPXK5TWk5sgTdu88bTD9UP+CKbPPh5Rni1u0GjAdYQLemG8g+g==", "dev": true, "requires": { "@humanwhocodes/object-schema": "^1.2.1", @@ -1833,9 +1855,9 @@ } }, "acorn": { - "version": "8.8.1", - "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.8.1.tgz", - "integrity": "sha512-7zFpHzhnqYKrkYdUjF1HI1bzd0VygEGX8lFk4k5zVMqHEoES+P+7TKI+EvLO9WVMJ8eekdO0aDEK044xTXwPPA==", + "version": "8.8.2", + "resolved": "https://registry.npmjs.org/acorn/-/acorn-8.8.2.tgz", + 
"integrity": "sha512-xjIYgE8HBrkpd/sJqOGNspf8uHG+NOHGOw6a/Urj8taM2EXfdNAH2oFcPeIFfsv3+kz/mJrS5VuMqbNLjCa2vw==", "dev": true }, "acorn-jsx": { @@ -1894,6 +1916,11 @@ "integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==", "dev": true }, + "aws4": { + "version": "1.12.0", + "resolved": "https://registry.npmjs.org/aws4/-/aws4-1.12.0.tgz", + "integrity": "sha512-NmWvPnx0F1SfrQbYwOi7OeaNGokp9XhzNioJ/CSBs8Qa4vxug81mhJEAVZwxXuBmYB5KDRfMq/F3RR0BIU7sWg==" + }, "balanced-match": { "version": "1.0.2", "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.2.tgz", @@ -2086,13 +2113,16 @@ "dev": true }, "eslint": { - "version": "8.26.0", - "resolved": "https://registry.npmjs.org/eslint/-/eslint-8.26.0.tgz", - "integrity": "sha512-kzJkpaw1Bfwheq4VXUezFriD1GxszX6dUekM7Z3aC2o4hju+tsR/XyTC3RcoSD7jmy9VkPU3+N6YjVU2e96Oyg==", + "version": "8.36.0", + "resolved": "https://registry.npmjs.org/eslint/-/eslint-8.36.0.tgz", + "integrity": "sha512-Y956lmS7vDqomxlaaQAHVmeb4tNMp2FWIvU/RnU5BD3IKMD/MJPr76xdyr68P8tV1iNMvN2mRK0yy3c+UjL+bw==", "dev": true, "requires": { - "@eslint/eslintrc": "^1.3.3", - "@humanwhocodes/config-array": "^0.11.6", + "@eslint-community/eslint-utils": "^4.2.0", + "@eslint-community/regexpp": "^4.4.0", + "@eslint/eslintrc": "^2.0.1", + "@eslint/js": "8.36.0", + "@humanwhocodes/config-array": "^0.11.8", "@humanwhocodes/module-importer": "^1.0.1", "@nodelib/fs.walk": "^1.2.8", "ajv": "^6.10.0", @@ -2102,16 +2132,15 @@ "doctrine": "^3.0.0", "escape-string-regexp": "^4.0.0", "eslint-scope": "^7.1.1", - "eslint-utils": "^3.0.0", "eslint-visitor-keys": "^3.3.0", - "espree": "^9.4.0", - "esquery": "^1.4.0", + "espree": "^9.5.0", + "esquery": "^1.4.2", "esutils": "^2.0.2", "fast-deep-equal": "^3.1.3", "file-entry-cache": "^6.0.1", "find-up": "^5.0.0", "glob-parent": "^6.0.2", - "globals": "^13.15.0", + "globals": "^13.19.0", "grapheme-splitter": "^1.0.4", "ignore": "^5.2.0", "import-fresh": "^3.0.0", @@ -2126,7 +2155,6 @@ "minimatch": "^3.1.2", "natural-compare": "^1.4.0", "optionator": "^0.9.1", - "regexpp": "^3.2.0", "strip-ansi": "^6.0.1", "strip-json-comments": "^3.1.0", "text-table": "^0.2.0" @@ -2142,23 +2170,6 @@ "estraverse": "^5.2.0" } }, - "eslint-utils": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/eslint-utils/-/eslint-utils-3.0.0.tgz", - "integrity": "sha512-uuQC43IGctw68pJA1RgbQS8/NP7rch6Cwd4j3ZBtgo4/8Flj4eGE7ZYSZRN3iq5pVUv6GPdW5Z1RFleo84uLDA==", - "dev": true, - "requires": { - "eslint-visitor-keys": "^2.0.0" - }, - "dependencies": { - "eslint-visitor-keys": { - "version": "2.1.0", - "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-2.1.0.tgz", - "integrity": "sha512-0rSmRBzXgDzIsD6mGdJgevzgezI534Cer5L/vyMX0kHzT/jiB43jRhd9YUlMGYLQy2zprNmoT8qasCGtY+QaKw==", - "dev": true - } - } - }, "eslint-visitor-keys": { "version": "3.3.0", "resolved": "https://registry.npmjs.org/eslint-visitor-keys/-/eslint-visitor-keys-3.3.0.tgz", @@ -2166,9 +2177,9 @@ "dev": true }, "espree": { - "version": "9.4.0", - "resolved": "https://registry.npmjs.org/espree/-/espree-9.4.0.tgz", - "integrity": "sha512-DQmnRpLj7f6TgN/NYb0MTzJXL+vJF9h3pHy4JhCIs3zwcgez8xmGg3sXHcEO97BrmO2OSvCwMdfdlyl+E9KjOw==", + "version": "9.5.0", + "resolved": "https://registry.npmjs.org/espree/-/espree-9.5.0.tgz", + "integrity": "sha512-JPbJGhKc47++oo4JkEoTe2wjy4fmMwvFpgJT9cQzmfXKp22Dr6Hf1tdCteLz1h0P3t+mGvWZ+4Uankvh8+c6zw==", "dev": true, "requires": { "acorn": "^8.8.0", @@ -2177,9 +2188,9 @@ } }, "esquery": { - 
"version": "1.4.0", - "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.4.0.tgz", - "integrity": "sha512-cCDispWt5vHHtwMY2YrAQ4ibFkAL8RbH5YGBnZBc90MolvvfkkQcJro/aZiAQUlQ3qgrYS6D6v8Gc5G5CQsc9w==", + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/esquery/-/esquery-1.5.0.tgz", + "integrity": "sha512-YQLXUplAwJgCydQ78IMJywZCceoqk1oH01OERdSAJc/7U2AylwjhSCLDEtqwg811idIS/9fIU5GjG73IgjKMVg==", "dev": true, "requires": { "estraverse": "^5.1.0" @@ -2326,9 +2337,9 @@ } }, "globals": { - "version": "13.17.0", - "resolved": "https://registry.npmjs.org/globals/-/globals-13.17.0.tgz", - "integrity": "sha512-1C+6nQRb1GwGMKm2dH/E7enFAMxGTmGI7/dEdhy/DNelv85w9B72t3uc5frtMNXIbzrarJJ/lTCjcaZwbLJmyw==", + "version": "13.20.0", + "resolved": "https://registry.npmjs.org/globals/-/globals-13.20.0.tgz", + "integrity": "sha512-Qg5QtVkCy/kv3FUSlu4ukeZDVf9ee0iXLAUYX13gbR17bnejFTzr4iS9bY7kwCf1NztRNm1t91fjOiyx4CSwPQ==", "dev": true, "requires": { "type-fest": "^0.20.2" @@ -2353,9 +2364,9 @@ "dev": true }, "ignore": { - "version": "5.2.0", - "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.0.tgz", - "integrity": "sha512-CmxgYGiEPCLhfLnpPp1MoRmifwEIOgjcHXxOBjv7mY96c+eWScsOP9c112ZyLdWHi0FxHjI+4uVhKYp/gcdRmQ==", + "version": "5.2.4", + "resolved": "https://registry.npmjs.org/ignore/-/ignore-5.2.4.tgz", + "integrity": "sha512-MAb38BcSbH0eHNBxn7ql2NH/kX33OkB3lZ1BNdh7ENeRChHTYsTvWrMubiIAMNS2llXEEgZ1MUOBtXChP3kaFQ==", "dev": true }, "import-fresh": { @@ -2539,9 +2550,9 @@ } }, "mocha": { - "version": "10.1.0", - "resolved": "https://registry.npmjs.org/mocha/-/mocha-10.1.0.tgz", - "integrity": "sha512-vUF7IYxEoN7XhQpFLxQAEMtE4W91acW4B6En9l97MwE9stL1A9gusXfoHZCLVHDUJ/7V5+lbCM6yMqzo5vNymg==", + "version": "10.2.0", + "resolved": "https://registry.npmjs.org/mocha/-/mocha-10.2.0.tgz", + "integrity": "sha512-IDY7fl/BecMwFHzoqF2sg/SHHANeBoMMXFlS9r0OXKDssYE1M5O43wUY/9BVPeIvfH2zmEbBfseqN9gBQZzXkg==", "dev": true, "requires": { "ansi-colors": "4.1.1", @@ -2621,9 +2632,9 @@ "dev": true }, "nock": { - "version": "13.2.9", - "resolved": "https://registry.npmjs.org/nock/-/nock-13.2.9.tgz", - "integrity": "sha512-1+XfJNYF1cjGB+TKMWi29eZ0b82QOvQs2YoLNzbpWGqFMtRQHTa57osqdGj4FrFPgkO4D4AZinzUJR9VvW3QUA==", + "version": "13.3.0", + "resolved": "https://registry.npmjs.org/nock/-/nock-13.3.0.tgz", + "integrity": "sha512-HHqYQ6mBeiMc+N038w8LkMpDCRquCHWeNmN3v6645P3NhN2+qXOBqvPqo7Rt1VyCMzKhJ733wZqw5B7cQVFNPg==", "dev": true, "requires": { "debug": "^4.1.0", @@ -2724,9 +2735,9 @@ "dev": true }, "punycode": { - "version": "2.1.1", - "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.1.1.tgz", - "integrity": "sha512-XRsRjdf+j5ml+y/6GKHPZbrF/8p2Yga0JPtdqTIY2Xe5ohJPD9saDJJLPvp9+NSBprVvevdXZybnj2cv8OEd0A==", + "version": "2.3.0", + "resolved": "https://registry.npmjs.org/punycode/-/punycode-2.3.0.tgz", + "integrity": "sha512-rRV+zQD8tVFys26lAGR9WUuS4iUAngJScM+ZRSKtvl5tKeZ2t5bvdNFdNHBW9FWR4guGHlgmsZ1G7BSm2wTbuA==", "dev": true }, "queue-microtask": { @@ -2753,12 +2764,6 @@ "picomatch": "^2.2.1" } }, - "regexpp": { - "version": "3.2.0", - "resolved": "https://registry.npmjs.org/regexpp/-/regexpp-3.2.0.tgz", - "integrity": "sha512-pq2bWo9mVD43nbts2wGv17XLiNLya+GklZ8kaDLV2Z08gDCsGpnKn9BFMepvWuHCbyVvY7J5o5+BVvoQbmlJLg==", - "dev": true - }, "require-directory": { "version": "2.1.1", "resolved": "https://registry.npmjs.org/require-directory/-/require-directory-2.1.1.tgz", diff --git a/package.json b/package.json index a3154ba..7a41b07 100644 --- a/package.json +++ b/package.json @@ -1,11 +1,11 @@ { - "name": 
"high-availability-object-storage", - "version": "0.4.0", - "description": "High available, performant, and tiny Node SDK Client for OpenStack Swift Object Storage", + "name": "tiny-storage-client", + "version": "1.0.0", + "description": "Tiny node client to request distributed AWS S3 or the OpenStack Swift Object Storage.", "main": "index.js", "scripts": { "test": "mocha ./tests", - "lint": "eslint \"index.js\" \"tests/**/*.js\" --fix" + "lint": "eslint --fix \"index.js\" \"s3.js\" \"swift.js\" \"xmlToJson.js\" \"tests/**/*.js\"" }, "keywords": [ "Object", @@ -17,16 +17,23 @@ "Nodejs", "Performances", "Tiny", - "High Availability" + "High Availability", + "S3", + "Aws4", + "Bucket", + "Distributed", + "Vanillajs" ], "author": "Steevepay", + "homepage": "https://github.com/carboneio/tiny-storage-client", "license": "Apache-2.0", "dependencies": { + "aws4": "=1.12.0", "simple-get": "=4.0.1" }, "devDependencies": { - "eslint": "=8.26.0", - "mocha": "=10.1.0", - "nock": "=13.2.9" + "eslint": "=8.36.0", + "mocha": "=10.2.0", + "nock": "=13.3.0" } } diff --git a/s3.js b/s3.js new file mode 100644 index 0000000..b7cbb92 --- /dev/null +++ b/s3.js @@ -0,0 +1,443 @@ +const get = require('simple-get'); +const aws4 = require('aws4'); +const crypto = require('crypto'); +const fs = require('fs'); +const xmlToJson = require('./xmlToJson.js') + +let _config = { + /** List of S3 credentials */ + storages : [], + /** Request params */ + timeout : 5000, + activeStorage : 0 +} + +let retryReconnectMainStorage = false; + +/** + * @doc https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html + */ +function downloadFile (bucket, filename, options, callback) { + if (!callback) { + callback = options; + options = {}; + } + options.alias = bucket; + return request('GET', `/${bucket}/${encodeURIComponent(filename)}`, options, callback); +} + +/** + * @doc https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html + */ +function uploadFile (bucket, filename, localPathOrBuffer, options, callback) { + if (!callback) { + callback = options; + options = {}; + } + options.alias = bucket; + const _uploadFileRequest = function (bucket, filename, objectBuffer, options, callback) { + options.body = objectBuffer; + options.headers = { + ...options?.headers + } + return request('PUT', `/${bucket}/${encodeURIComponent(filename)}`, options, callback); + } + /** + * AWS4 does not support computing signature with a Stream + * https://github.com/mhart/aws4/issues/43 + * The file buffer must be read. + */ + if (Buffer.isBuffer(localPathOrBuffer) === false) { + return fs.readFile(localPathOrBuffer, (err, objectBuffer) => { + if (err){ + return callback(err); + } + _uploadFileRequest(bucket, filename, objectBuffer, options, callback); + }); + } + return _uploadFileRequest(bucket, filename, localPathOrBuffer, options, callback); +} + +/** + * @doc https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html + */ +function deleteFile (bucket, filename, options, callback) { + if (!callback) { + callback = options; + options = {}; + } + options.alias = bucket; + return request('DELETE', `/${bucket}/${encodeURIComponent(filename)}`, options, callback); +} + + +/** + * @doc https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html + * + * Query parameters for pagination/filter: + * - "max-keys=3&" : Sets the maximum number of keys returned in the response. By default the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more. 
+ * - "prefix=E&" : Limits the response to keys that begin with the specified prefix. + * - "start-after=": StartAfter is where you want Amazon S3 to start listing from. Amazon S3 starts listing after this specified key. StartAfter can be any key in the bucket. + */ +function listFiles(bucket, options, callback) { + if (!callback) { + callback = options; + options = {}; + } + options.defaultQueries = 'list-type=2'; + options.alias = bucket; + return request('GET', `/${bucket}`, options, (err, resp) => { + if (err) { + return callback(err); + } + const _body = resp?.body?.toString(); + if (_body && resp.statusCode === 200) { + let _regRes = _body?.match(/]*?>([^]*?)<\/ListBucketResult>/); + resp.body = xmlToJson(_regRes?.[1], { forceArray: ['contents'] }); + } + return callback(null, resp); + }); +} + +/** + * @doc https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html + */ +function getFileMetadata(bucket, filename, options, callback) { + if (!callback) { + callback = options; + options = {}; + } + options.alias = bucket; + return request('HEAD', `/${bucket}/${encodeURIComponent(filename)}`, options, callback); +} + +/** + * HEAD Bucket: This action is useful to determine if a bucket exists and you have permission to access it thanks to the Status code. A message body is not included, so you cannot determine the exception beyond these error codes. + * - The action returns a 200 OK if the bucket exists and you have permission to access it. + * - If the bucket does not exist or you do not have permission to access it, the HEAD request returns a generic 400 Bad Request, 403 Forbidden or 404 Not Found code. + * + * @doc https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html + */ +function headBucket(bucket, options, callback) { + if (!callback) { + callback = options; + options = {}; + } + options.alias = bucket; + return request('HEAD', `/${bucket}`, options, callback); +} + +/** + * Returns a list of all buckets owned by the authenticated sender of the request. To use this operation, you must have the s3:ListAllMyBuckets permission. + * @doc https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html + */ +function listBuckets(options, callback) { + if (!callback) { + callback = options; + options = {}; + } + return request('GET', `/`, options, (err, resp) => { + if (err) { + return callback(err); + } + const _body = resp?.body?.toString(); + if (_body && resp.statusCode === 200) { + let _regRes = _body?.match(/]*?>([^]*?)<\/Buckets>/); + resp.body = xmlToJson(_regRes?.[1], { forceArray: ['bucket'] }); + } + return callback(null, resp); + }); +} + +/** + * @doc https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html + * + * Set metadatas by copying the file, metadata are replaced with metadata provided in the request. Set the header "x-amz-metadata-directive":"COPY" to copy metadata from the source object. + * Custom metadata must start with "x-amz-meta-", followed by a name to create a custom key. + * Metadata can be as large as 2 KB total. To calculate the total size of user-defined metadata, + * sum the number of bytes in the UTF-8 encoding for each key and value. Both keys and their values must conform to US-ASCII standards. 
+ */ +function setFileMetadata(bucket, filename, options, callback) { + + if (!callback) { + callback = options; + options = {}; + } + + options.alias = bucket; + options["headers"] = { + 'x-amz-copy-source': `/${bucket}/${encodeURIComponent(filename)}`, + 'x-amz-metadata-directive': 'REPLACE', + ...options.headers + } + + request('PUT', `/${bucket}/${encodeURIComponent(filename)}`, options, function(err, resp) { + if (err) { + return callback(err); + } + const _body = resp?.body?.toString(); + if (_body && resp.statusCode === 200) { + let _regRes = _body?.match(/]*?>([^]*?)<\/CopyObjectResult>/); + resp.body = xmlToJson(_regRes?.[1] ?? ''); + } + return callback(null, resp); + }); +} + +/** + * BULK DELETE 1000 files maximum + * @documentation https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html#API_DeleteObjects_Examples + */ +function deleteFiles (bucket, files, options, callback) { + if (!callback) { + callback = options; + options = {}; + } + options.alias = bucket; + + let _body = ''; + for (let i = 0; i < files.length; i++) { + _body += `${encodeURIComponent(files?.[i]?.name ?? files?.[i]?.key ?? files?.[i])}`; + } + _body += 'false'; + options.body = _body; + options.headers = { + ...options?.headers, + 'Content-MD5': getMD5(_body) + } + return request('POST', `/${bucket}/?delete`, options, (err, resp) => { + if (err) { + return callback(err); + } + const _body = resp?.body?.toString(); + if (_body && resp.statusCode === 200) { + let _regRes = _body?.match(/]*?>([^]*?)<\/DeleteResult>/); + resp.body = xmlToJson(_regRes?.[1], { forceArray: ['deleted', 'error'] }); + } + return callback(null, resp); + }); +} + +/** + * + * @param {String} method POST/GET/HEAD/DELETE + * @param {String} path Path to a ressource + * @param {Object} options { headers: {}, body: "Buffer", queries: {}, defaultQueries: '' } + * @param {Function} callback function(err, resp):void = The `err` is null by default. `resp` is the HTTP response including: { statusCode: 200, headers: {}, body: "Buffer/Object/String" } + * @returns + */ +function request (method, path, options, callback) { + + if (_config.activeStorage >= _config.storages.length) { + /** Reset the index of the main storage if any storage are available */ + _config.activeStorage = 0; + log(`S3 Storage | All storages are not available - switch to the main storage`, 'error'); + return callback(new Error('All S3 storages are not available')); + } else if (_config.activeStorage !== 0 && options?.requestStorageIndex === undefined && retryReconnectMainStorage === false) { + /** + * Retry to reconnect to the main storage if a child storage is active by requesting GET "/": Request "ListBuckets". Notes: + * - "requestStorageIndex" option is used to request a specific storage, disable the retry and not create an infinite loop of requests into child storages + * - "retryReconnectMainStorage" global variable is used to request one time and not create SPAM parallele requests to the main storage. + */ + retryReconnectMainStorage = true; + request('GET', `/`, { requestStorageIndex: 0 }, function (err, resp) { + /** If everything is alright, the active storage is reset to the main */ + if (resp?.statusCode === 200) { + log(`🟢 S3 Storage | Main storage available - reconnecting for next requests`); + _config.activeStorage = 0; + } + retryReconnectMainStorage = false; + }); + } + + /** Get active storage based on an index */ + const _activeStorage = _config.storages[options?.requestStorageIndex ?? 
_config.activeStorage]; + options.originalStorage = _config.activeStorage; + + /** + * Return a bucket name based on an alias and current active storage. + * If the alias does not exist, the alias is returned as bucket name + */ + const _activeBucket = _activeStorage?.buckets?.[options?.alias] ?? options?.alias; + let _path = path; + if (options?.alias !== _activeBucket) { + _path = _path.replace(options?.alias, _activeBucket); + } + + const _urlParams = getUrlParameters(options?.queries ?? '', options?.defaultQueries ?? ''); + const _requestParams = aws4.sign({ + method: method, + url: `https://${_activeStorage.url}${_path}${_urlParams ?? ''}`, + ...(options?.body ? { body: options?.body } : {}), + headers: { + ...(options?.headers ? options?.headers : {}) + }, + timeout: _config.timeout, + /** REQUIRED FOR AWS4 SIGNATURE */ + service: 's3', + hostname: _activeStorage.url, + path: `${path}${_urlParams ?? ''}`, + region: _activeStorage.region, + protocol: 'https:' + }, { + accessKeyId: _activeStorage.accessKeyId, + secretAccessKey: _activeStorage.secretAccessKey + }) + + const _requestCallback = function (err, res, body) { + if ((err || res?.statusCode >= 500 || res?.statusCode === 401) && options?.requestStorageIndex === undefined) { + /** Protection when requesting storage in parallel, another request may have already switch to a child storage on Error */ + if (options.originalStorage === _config.activeStorage) { + log(`S3 Storage | Activate fallback storage: switch from "${_config.activeStorage}" to "${_config.activeStorage + 1}" | ${err?.toString() || "Status code: " + res?.statusCode}`, 'warning'); + _config.activeStorage += 1; + } + return request(method, path, options, callback); + } else if (err) { + return callback(err, res); + } + /** If the response is an error as XML and not a stream, the error is parsed as JSON */ + if (res.statusCode >= 400 && res?.headers?.['content-type'] === 'application/xml' && !options?.stream) { + body = xmlToJson(body?.toString() ?? ''); + } + return options?.stream === true ? callback(null, res) : callback(null, { headers : res.headers, statusCode: res.statusCode, body : body }); + } + return options?.stream === true ? 
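+  /* with { stream: true } the response stream is passed through _requestCallback to the caller;
+     otherwise simple-get's concat() buffers the whole body before _requestCallback runs */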
get(_requestParams, _requestCallback) : get.concat(_requestParams, _requestCallback); +} + +/** + * Set HTTP requests timeout + * @param {Number} timeout + */ +function setTimeout(timeout) { + _config.timeout = timeout; +} + +/** + * Return S3 configurations requests and list of credentials + * @returns + */ +function getConfig() { + return _config; +} + +/** + * Set a new list of S3 credentials and set the active storage to the first storage on the list + * @param {Object|Array} newConfig + */ +function setConfig(newConfig) { + if (newConfig?.constructor === Object) { + newConfig = [newConfig]; + } + + for (let i = 0; i < newConfig.length; i++) { + const _auth = newConfig[i]; + if (!_auth?.accessKeyId || !_auth?.secretAccessKey || !_auth?.url || !_auth?.region) { + throw new Error("S3 Storage credentials not correct or missing - did you provide correct credentials?") + } + } + + _config.storages = [...newConfig]; + _config.accessKeyId = _config.storages[0].accessKeyId; + _config.secretAccessKey = _config.storages[0].secretAccessKey; + _config.region = _config.storages[0].region; + _config.url = _config.storages[0].url; + _config.activeStorage = 0; +} + +module.exports = (config) => { + setConfig(config); + return { + downloadFile, + uploadFile, + deleteFile, + deleteFiles, + listFiles, + headBucket, + listBuckets, + getFileMetadata, + setFileMetadata, + setTimeout, + getConfig, + setConfig, + xmlToJson, + setLogFunction, + getMetadataTotalBytes + } +} + +/******************** UTILS **********************/ + +/** + * Used for the 'Content-MD5' header: + * The base64-encoded 128-bit MD5 digest of the message + * (without the headers) according to RFC 1864. + */ +function getMD5 (data) { + try { + return crypto.createHash('md5').update(typeof data === 'string' ? Buffer.from(data) : data ).digest('base64'); + } catch(err) { + log(`S3 Storage | getMD5: ${err.toString()}`, "error"); + return ''; + } +} + +/** + * Convert an object of queries into a concatenated string of URL parameters. + * @param {Object|String} queries + * @param {String} defaultQueries + * @returns {String} URL parameters + */ +function getUrlParameters (queries, defaultQueries) { + let _queries = ''; + + if (defaultQueries) { + _queries += defaultQueries + '&'; + } + + if (queries && typeof queries === 'string') { + _queries += queries; + } else if (queries && typeof queries === "object") { + const _queriesEntries = Object.keys(queries); + const _totalQueries = _queriesEntries.length; + for (let i = 0; i < _totalQueries; i++) { + _queries += `${_queriesEntries[i]}=${encodeURIComponent(queries[_queriesEntries[i]])}` + if (i + 1 !== _totalQueries) { + _queries += '&' + } + } + } + return _queries ? '?' + _queries : ''; +} + +function getMetadataTotalBytes(header){ + let _str = ''; + for (const key in header) { + const element = header[key]; + if (key.includes('x-amz-meta-') === true ) { + _str += key.replace('x-amz-meta-', ''); /** must count the metadata name without "x-amz-meta-" */ + _str += element; + } + } + return Buffer.from(_str).length +} + +/** + * log messages + * + * @param {String} msg Message + * @param {type} type warning, error + */ +function log(msg, level = '') { + return console.log(level === 'error' ? `❗️ Error: ${msg}` : level === 'warning' ? 
`⚠️ ${msg}` : msg ); +} + +/** + * Override the log function, it takes to arguments: message, level + * @param {Function} newLogFunction (message, level) => {} The level can be: `info`, `warning`, `error` + */ +function setLogFunction (newLogFunction) { + if (newLogFunction) { + // eslint-disable-next-line no-func-assign + log = newLogFunction; + } +} \ No newline at end of file diff --git a/swift.js b/swift.js new file mode 100644 index 0000000..b8970c1 --- /dev/null +++ b/swift.js @@ -0,0 +1,709 @@ +const get = require('simple-get'); +const fs = require('fs'); +const { Readable } = require('stream'); + +let _config = { + storages: [], + activeStorage: 0, + endpoints: {}, + token: '', + timeout: 5000 +} + +/** + * @description Authenticate and initialise the auth token and retreive the endpoint based on the region + * + * @param {function} callback function(err):void = The `err` is null by default, return an object if an error occurs. + */ +function connection (callback, originStorage = 0) { + const arrayArguments = [callback, originStorage]; + + if (_config.activeStorage === _config.storages.length) { + /** Reset the index of the actual storage */ + _config.activeStorage = 0; + log(`Object Storages are not available`, 'error'); + return callback(new Error('Object Storages are not available')); + } + const _storage = _config.storages[_config.activeStorage]; + log(`Object Storage index "${_config.activeStorage}" region "${_storage.region}" connection...`, 'info'); + const _json = { + auth : { + identity : { + methods : ['password'], + password : { + user : { + name : _storage.username, + domain : { id : 'default' }, + password : _storage.password + } + } + }, + scope : { + project : { + domain : { + id : 'default' + }, + name : _storage.tenantName + } + } + } + }; + + get.concat({ + url : `${_storage.authUrl}/auth/tokens`, + method : 'POST', + json : true, + body : _json, + timeout: _config.timeout + }, (err, res, data) => { + if (err) { + log(`Object Storage index "${_config.activeStorage}" region "${_storage.region}" Action "connection" ${err.toString()}`, 'error'); + activateFallbackStorage(originStorage); + arrayArguments[1] = _config.activeStorage; + return connection.apply(null, arrayArguments); + } + + if (res.statusCode < 200 || res.statusCode >= 300) { + log(`Object Storage index "${_config.activeStorage}" region "${_storage.region}" connexion failled | Status ${res.statusCode.toString()} | Message: ${res.statusMessage}`, 'error'); + activateFallbackStorage(originStorage); + arrayArguments[1] = _config.activeStorage; + return connection.apply(null, arrayArguments); + } + + _config.token = res.headers['x-subject-token']; + + const _serviceCatalog = data.token.catalog.find((element) => { + return element.type === 'object-store'; + }); + + if (!_serviceCatalog) { + log(`Object Storage index "${_config.activeStorage}" region "${_storage.region}" Storage catalog not found`, 'error'); + activateFallbackStorage(originStorage); + arrayArguments[1] = _config.activeStorage; + return connection.apply(null, arrayArguments); + } + + _config.endpoints = _serviceCatalog.endpoints.find((element) => { + return element.region === _storage.region; + }); + + if (!_config.endpoints) { + log(`Object Storage index "${_config.activeStorage}" region "${_storage.region} Storage endpoint not found, invalid region`, 'error'); + activateFallbackStorage(originStorage); + arrayArguments[1] = _config.activeStorage; + return connection.apply(null, arrayArguments); + } + log(`Object Storage index 
"${_config.activeStorage}" region "${_storage.region}" connected!`, 'info'); + return callback(null); + }); +} +/** + * @description List objects from a container. It is possible to pass as a second argument as an object with queries or headers to overwrite the request. + * + * @param {String} container container name + * @param {Object} options [OPTIONAL]: { headers: {}, queries: {} } List of headers and queries: https://docs.openstack.org/api-ref/object-store/?expanded=show-container-details-and-list-objects-detail#show-container-details-and-list-objects + * @param {function} callback function(err, body):void = The second argument `body` is the content of the file as a Buffer. The `err` argument is null by default, return an object if an error occurs. + */ +function listFiles(container, options, callback) { + const arrayArguments = [...arguments]; + + if (callback === undefined) { + callback = options; + arrayArguments.push(options); + options = { headers: {}, queries: {} }; + arrayArguments[1] = options; + } + + arrayArguments.push({ originStorage : _config.activeStorage }) + + const { headers, queries } = getHeaderAndQueryParameters(options); + get.concat({ + url : `${_config.endpoints.url}/${container}${queries}`, + method : 'GET', + headers : { + 'X-Auth-Token' : _config.token, + Accept : 'application/json', + ...headers + }, + timeout: _config.timeout + }, (err, res, body) => { + + /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ + res = res || {}; + res.error = err && err.toString().length > 0 ? err.toString() : null; + + checkIsConnected(res, 'listFiles', arrayArguments, (error) => { + if (error) { + return callback(error); + } + + if (res && res.statusCode === 404) { + return callback(new Error('Container does not exist')); + } + + err = err || checkResponseError(res); + + /** TODO: remove? it should never happen as every error switch to another storage */ + if (err) { + return callback(err); + } + + return callback(null, body); + }); + }); +} + +/** + * @description Save a file on the OVH Object Storage + * + * @param {string} container Container name + * @param {string} filename file to store + * @param {string|Buffer} localPathOrBuffer absolute path to the file + * @param {Object} options [OPTIONAL]: { headers: {}, queries: {} } List of query parameters and headers: https://docs.openstack.org/api-ref/object-store/?expanded=create-or-replace-object-detail#create-or-replace-object + * @param {function} callback function(err):void = The `err` is null by default, return an object if an error occurs. + * @returns {void} + */ +function uploadFile (container, filename, localPathOrBuffer, options, callback) { + let readStream = Buffer.isBuffer(localPathOrBuffer) === true ? 
Readable.from(localPathOrBuffer) : fs.createReadStream(localPathOrBuffer); + + const arrayArguments = [...arguments]; + + if (callback === undefined) { + callback = options; + arrayArguments.push(options); + options = { headers: {}, queries: {} }; + arrayArguments[3] = options; + } + + arrayArguments.push({ originStorage : _config.activeStorage }) + + const { headers, queries } = getHeaderAndQueryParameters(options); + get.concat({ + url : `${_config.endpoints.url}/${container}/${filename}${queries}`, + method : 'PUT', + body : readStream, + headers : { + 'X-Auth-Token' : _config.token, + Accept : 'application/json', + ...headers + }, + timeout: _config.timeout + }, (err, res, body) => { + + /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ + res = res || {}; + res.error = err && err.toString().length > 0 && err.code !== 'ENOENT' ? err.toString() : null; + + checkIsConnected(res, 'uploadFile', arrayArguments, (error) => { + if (error) { + return callback(error); + } + + err = err || checkResponseError(res, body.toString()); + + if (err) { + if (err.code === 'ENOENT') { + return callback(new Error('The local file does not exist')); + } + + return callback(err); + } + return callback(null); + }); + }); +} + +/** + * @description Download a file from the OVH Object Storage + * + * @param {string} container Container name + * @param {string} filename filename to download + * @param {function} callback function(err, body):void = The second argument `body` is the content of the file as a Buffer. The `err` argument is null by default, return an object if an error occurs. + * @returns {void} + */ +function downloadFile (container, filename, callback) { + + const arrayArguments = [...arguments, { originStorage : _config.activeStorage }]; + + get.concat({ + url : `${_config.endpoints.url}/${container}/${filename}`, + method : 'GET', + headers : { + 'X-Auth-Token' : _config.token, + Accept : 'application/json' + }, + timeout: _config.timeout + }, (err, res, body) => { + + /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ + res = res || {}; + res.error = err && err.toString().length > 0 ? err.toString() : null; + + checkIsConnected(res, 'downloadFile', arrayArguments, (error) => { + if (error) { + return callback(error); + } + + if (res && res.statusCode === 404) { + return callback(new Error('File does not exist')); + } + + err = err || checkResponseError(res); + + /** TODO: remove? it should never happen as every error switch to another storage */ + if (err) { + return callback(err); + } + + return callback(null, body, res.headers); + }); + }); +} + +/** + * @description Delete a file from the OVH Object Storage + * + * @param {string} container Container name + * @param {string} filename filename to store + * @param {function} callback function(err):void = The `err` argument is null by default, return an object if an error occurs. + * @returns {void} + */ +function deleteFile (container, filename, callback) { + + const arrayArguments = [...arguments, { originStorage : _config.activeStorage }]; + + get.concat({ + url : `${_config.endpoints.url}/${container}/${filename}`, + method : 'DELETE', + headers : { + 'X-Auth-Token' : _config.token, + Accept : 'application/json' + }, + timeout: _config.timeout + }, (err, res) => { + + /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ + res = res || {}; + res.error = err && err.toString().length > 0 ? 
err.toString() : null; + + checkIsConnected(res, 'deleteFile', arrayArguments, (error) => { + if (error) { + return callback(error); + } + + if (res && res.statusCode === 404) { + return callback(new Error('File does not exist')); + } + + err = err || checkResponseError(res); + + /** TODO: remove? it should never happen as every error switch to another storage */ + if (err) { + return callback(err); + } + + return callback(null); + }); + }); +} + +/** + * @description Get object metadata + * + * @param {string} container Container name + * @param {string} filename filename to store + * @param {function} callback function(err, headers):void = The `err` argument is null by default, return an object if an error occurs. + * @returns {void} + */ +function getFileMetadata(container, filename, callback) { + const arrayArguments = [...arguments, { originStorage : _config.activeStorage }]; + + get.concat({ + url : `${_config.endpoints.url}/${container}/${filename}`, + method : 'HEAD', + headers : { + 'X-Auth-Token' : _config.token, + Accept : 'application/json' + }, + timeout: _config.timeout + }, (err, res) => { + + /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ + res = res || {}; + res.error = err && err.toString().length > 0 ? err.toString() : null; + + checkIsConnected(res, 'getFileMetadata', arrayArguments, (error) => { + if (error) { + return callback(error); + } + + if (res && res.statusCode === 404) { + return callback(new Error('File does not exist')); + } + + err = err || checkResponseError(res); + + /** TODO: remove? it should never happen as every error switch to another storage */ + if (err) { + return callback(err); + } + + return callback(null, res.headers); + }); + }); + } + + /** + * @description Create or update object metadata. + * @description To create or update custom metadata + * @description use the X-Object-Meta-name header, + * @description where name is the name of the metadata item. + * + * @param {string} container Container name + * @param {string} filename file to store + * @param {string|Buffer} localPathOrBuffer absolute path to the file + * @param {Object} options { headers: {}, queries: {} } List of query parameters and headers: https://docs.openstack.org/api-ref/object-store/?expanded=create-or-update-object-metadata-detail#create-or-update-object-metadata + * @param {function} callback function(err, headers):void = The `err` is null by default, return an object if an error occurs. + * @returns {void} + */ +function setFileMetadata (container, filename, options, callback) { + + const arrayArguments = [...arguments]; + + if (callback === undefined) { + callback = options; + arrayArguments.push(options); + options = { headers: {}, queries: {} }; + arrayArguments[3] = options; + } + + arrayArguments.push({ originStorage : _config.activeStorage }) + + const { headers, queries } = getHeaderAndQueryParameters(options); + get.concat({ + url : `${_config.endpoints.url}/${container}/${filename}${queries}`, + method : 'POST', + headers : { + 'X-Auth-Token' : _config.token, + Accept : 'application/json', + ...headers + }, + timeout: _config.timeout + }, (err, res) => { + + /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ + res = res || {}; + res.error = err && err.toString().length > 0 ? 
err.toString() : null; + + checkIsConnected(res, 'setFileMetadata', arrayArguments, (error) => { + if (error) { + return callback(error); + } + + if (res && res.statusCode === 404) { + return callback(new Error('File does not exist')); + } + + err = err || checkResponseError(res); + + /** TODO: remove? it should never happen as every error switch to another storage */ + if (err) { + return callback(err); + } + return callback(null, res.headers); + }); + }); +} + + /** + * @description Send a custom request to the object storage + * + * @param {string} method HTTP method used (POST, COPY, etc...) + * @param {string} path path requested, passing an empty string will request the account details. For container request pass the container name, such as: '/containerName'. For file request, pass the container and the file, such as: '/container/filename.txt'. + * @param {Object} options { headers: {}, queries: {}, body: '' } Pass to the request the body, query parameters and/or headers. List of headers: https://docs.openstack.org/api-ref/object-store/?expanded=create-or-update-object-metadata-detail#create-or-update-object-metadata + * @param {function} callback function(err, body, headers):void = The `err` is null by default. + * @returns {void} + */ +function request (method, path, options, callback) { + + const arrayArguments = [...arguments]; + + if (callback === undefined) { + callback = options; + arrayArguments.push(options); + options = { headers: {}, queries: {}, body: null }; + arrayArguments[3] = options; + } + + arrayArguments.push({ originStorage : _config.activeStorage }) + + const { headers, queries, body } = getHeaderAndQueryParameters(options); + + const _requestOptions = { + url : `${_config.endpoints.url}${path}${queries}`, + method : method, + headers : { + 'X-Auth-Token' : _config.token, + Accept : 'application/json', + ...headers + }, + timeout: _config.timeout, + ...(body ? { body } : {}) + } + + const _requestCallback = function (err, res, body) { + /** Manage special errors: timeouts, too many redirects or any unexpected behavior */ + res = res || {}; + res.error = err && err.toString().length > 0 ? err.toString() : null; + checkIsConnected(res, 'request', arrayArguments, (error) => { + if (error) { + return callback(error); + } + err = err || checkResponseError(res); + + /** TODO: remove? it should never happen as every error switch to another storage */ + if (err) { + return callback(err); + } + return options?.stream === true ? callback(null, res) : callback(null, body, res.headers); + }); + } + return options?.stream === true ? get(_requestOptions, _requestCallback) : get.concat(_requestOptions, _requestCallback); +} + +/** + * @description Check the response status code and return an Error. + * + * @param {Object} response Response object from request + * @returns {null|Error} + */ +function checkResponseError (response, body = '') { + /** TODO: remove? it should never happen as every error switch to another storage */ + if (!response) { + return new Error('No response'); + } + + if (response.statusCode < 200 || response.statusCode >= 300) { + return new Error(`${response.statusCode.toString()} ${response.statusMessage || body}`); + } + + return null; +} + +/** + * @description Check if the request is authorized, if not, it authenticate again to generate a new token, and execute again the initial request. + * + * @param {Object} response Request response + * @param {String} from Original function called + * @param {Object} args Arguments of the original function. 
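+ *                           The last element of `args` is expected to be `{ originStorage }`, the index of the storage that issued the failing request; it is forwarded to `activateFallbackStorage` and `connection` so the fallback logic knows which storage to switch away from.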
+ * @param {function} callback function(err):void = The `err` argument is null by default, return an object if an error occurs. + * @returns {void} + */ +function checkIsConnected (response, from, args, callback) { + if (!response || (response?.statusCode < 500 && response?.statusCode !== 401) || (!response?.statusCode && !!response?.error !== true)) { + return callback(null); + } + + if (response?.statusCode >= 500) { + log(`Object Storage index "${_config.activeStorage}" region "${_config.storages[_config.activeStorage].region}" Action "${from}" Status ${response?.statusCode}`, 'error'); + activateFallbackStorage(args[args.length - 1].originStorage); + } + + if (!!response?.error === true) { + log(`Object Storage index "${_config.activeStorage}" region "${_config.storages[_config.activeStorage].region}" Action "${from}" ${response.error}`, 'error'); + activateFallbackStorage(args[args.length - 1].originStorage); + } + + if (response?.statusCode === 401) { + log(`Object Storage index "${_config.activeStorage}" region "${_config.storages[_config.activeStorage].region}" try reconnect...`, 'info'); + } + + // Reconnect to object storage + connection((err) => { + if (err) { + return callback(err); + } + + switch (from) { + case 'downloadFile': + downloadFile.apply(null, args); + break; + case 'uploadFile': + uploadFile.apply(null, args); + break; + case 'deleteFile': + deleteFile.apply(null, args); + break; + case 'listFiles': + listFiles.apply(null, args); + break; + case 'getFileMetadata': + getFileMetadata.apply(null, args); + break; + case 'setFileMetadata': + setFileMetadata.apply(null, args); + break; + case 'request': + request.apply(null, args); + break; + default: + /** TODO: remove? it should never happen */ + return callback(null); + } + }, args[args.length - 1].originStorage); +} + + +/** + * @description Set and overwrite the Object Storage SDK configurations + * + * @param {Object} config + * @param {String} config.authUrl URL used for authentication, default: "https://auth.cloud.ovh.net/v3" + * @param {String} config.username Username for authentication + * @param {String} config.password Password for authentication + * @param {String} config.tenantName Tenant Name/Tenant ID for authentication + * @param {String} config.region Region used to retreive the Object Storage endpoint to request + */ +function setStorages(storages) { + _config.token = ''; + _config.endpoints = {}; + _config.activeStorage = 0; + if (Array.isArray(storages) === true) { + /** List of storage */ + _config.storages = storages; + } else if (typeof storages === 'object') { + /** Only a single storage is passed */ + _config.storages = []; + _config.storages.push(storages) + } +} + +/** + * Set the timeout + * + * @param {Integer} timeout + */ +function setTimeout(timeout) { + _config.timeout = timeout; +} + +/** + * @description Return the list of storages + * + * @returns {String} The list of storages + */ +function getStorages() { + return _config.storages; +} + +/** + * @description Return the configuration object + * + * @returns {String} The list of storages + */ +function getConfig() { + return _config; +} + +/** + * log messages + * + * @param {String} msg Message + * @param {type} type warning, error + */ +function log(msg, level = 'info') { + return console.log(level === 'error' ? `❗️ Error: ${msg}` : level === 'warning' ? 
`⚠️ ${msg}` : msg );
+}
+
+/**
+ * Override the log function; it takes two arguments: message, level
+ * @param {Function} newLogFunction (message, level) => {} The level can be: `info`, `warning`, `error`
+ */
+function setLogFunction (newLogFunction) {
+  if (newLogFunction) {
+    // eslint-disable-next-line no-func-assign
+    log = newLogFunction;
+  }
+}
+
+/**
+ *
+ * @description Initialise and return an instance of the Object Storage SDK.
+ *
+ * @param {Object} config
+ * @param {String} config.authUrl URL used for authentication, default: "https://auth.cloud.ovh.net/v3"
+ * @param {String} config.username Username for authentication
+ * @param {String} config.password Password for authentication
+ * @param {String} config.tenantName Tenant Name/Tenant ID for authentication
+ * @param {String} config.region Region used to retrieve the Object Storage endpoint to request
+ */
+module.exports = (config) => {
+  setStorages(config)
+  return {
+    connection,
+    uploadFile,
+    downloadFile,
+    deleteFile,
+    listFiles,
+    getFileMetadata,
+    setFileMetadata,
+    setTimeout,
+    setStorages,
+    getStorages,
+    getConfig,
+    setLogFunction,
+    request
+  }
+}
+
+/** ============ Utils =========== */
+
+/**
+ * Convert an Object of query parameters into a string
+ * Example: { "prefix" : "user_id_1234", "format" : "xml"} => "?prefix=user_id_1234&format=xml"
+ *
+ * @param {Object} queries
+ * @returns {String} The query string, starting with "?", or an empty string if no query is provided
+ */
+function getQueryParameters (queries) {
+  let _queries = '';
+
+  if (queries && typeof queries === "object") {
+    const _queriesEntries = Object.keys(queries);
+    const _totalQueries = _queriesEntries.length;
+    for (let i = 0; i < _totalQueries; i++) {
+      if (i === 0) {
+        _queries += '?'
+      }
+      _queries += `${_queriesEntries[i]}=${queries[_queriesEntries[i]]}`
+      if (i + 1 !== _totalQueries) {
+        _queries += '&'
+      }
+    }
+  }
+  return _queries;
+}
+
+/**
+ * Extract the headers, query string and body from the `options` object passed to public methods
+ */
+function getHeaderAndQueryParameters (options) {
+  let headers = {};
+  let queries = '';
+  let body = null
+
+  if (options?.queries) {
+    queries = getQueryParameters(options.queries);
+  }
+  if (options?.headers) {
+    headers = options.headers;
+  }
+  if (options?.body) {
+    body = options.body;
+  }
+  return { headers, queries, body }
+}
+
+/**
+ * Switch the active storage to the next one in the list when the failing request targeted the currently active storage
+ */
+function activateFallbackStorage(originStorage) {
+  if (originStorage === _config.activeStorage && _config.activeStorage + 1 <= _config.storages.length) {
+    _config.activeStorage += 1;
+    log(`Object Storage Activate Fallback Storage index "${_config.activeStorage}" 🚩`, 'warning');
+  }
+}
\ No newline at end of file
diff --git a/tests/assets/listObjects.prefix.response.json b/tests/assets/listObjects.prefix.response.json new file mode 100644 index 0000000..ea23be1 --- /dev/null +++ b/tests/assets/listObjects.prefix.response.json @@ -0,0 +1,22 @@ +{ + "name": "www", + "keycount": 2, + "maxkeys": 2, + "istruncated": true, + "contents": [ + { + "key": "document-1.docx", + "lastmodified": "2023-03-07T17:03:59.000Z", + "etag": "a55b475be5fc06d42e1baf84231253ce", + "size": 236208, + "storageclass": "STANDARD" + }, + { + "key": "document-2.odt", + "lastmodified": "2023-03-02T07:18:55.000Z", + "etag": "fde6d729123cee4db6bfa3606306bc8c", + "size": 11822, + "storageclass": "STANDARD" + } + ] +} \ No newline at end of file diff --git a/tests/assets/listObjects.prefix.response.xml b/tests/assets/listObjects.prefix.response.xml new file mode 100644 index 0000000..39ba3f3 --- /dev/null +++ b/tests/assets/listObjects.prefix.response.xml @@ -0,0 +1,22 @@ + + + www + + 2 + 2 + true + + document-1.docx + 2023-03-07T17:03:59.000Z + 
"a55b475be5fc06d42e1baf84231253ce" + 236208 + STANDARD + + + document-2.odt + 2023-03-02T07:18:55.000Z + "fde6d729123cee4db6bfa3606306bc8c" + 11822 + STANDARD + + \ No newline at end of file diff --git a/tests/assets/listObjects.response.json b/tests/assets/listObjects.response.json new file mode 100644 index 0000000..f0323d5 --- /dev/null +++ b/tests/assets/listObjects.response.json @@ -0,0 +1,36 @@ +{ + "name": "www", + "keycount": 4, + "maxkeys": 1000, + "istruncated": false, + "contents": [ + { + "key": "file-1.docx", + "lastmodified": "2023-03-07T17:03:54.000Z", + "etag": "7ad22b1297611d62ef4a4704c97afa6b", + "size": 61396, + "storageclass": "STANDARD" + }, + { + "key": "file-2.odt", + "lastmodified": "2023-03-07T17:03:56.000Z", + "etag": "fa5678c105413929aa8f800b7a944d8e", + "size": 358358, + "storageclass": "STANDARD" + }, + { + "key": "document-1.docx", + "lastmodified": "2023-03-07T17:03:59.000Z", + "etag": "a55b475be5fc06d42e1baf84231253ce", + "size": 236208, + "storageclass": "STANDARD" + }, + { + "key": "document-2.odt", + "lastmodified": "2023-03-02T07:18:55.000Z", + "etag": "fde6d729123cee4db6bfa3606306bc8c", + "size": 11822, + "storageclass": "STANDARD" + } + ] +} \ No newline at end of file diff --git a/tests/assets/listObjects.response.xml b/tests/assets/listObjects.response.xml new file mode 100644 index 0000000..f69b9ce --- /dev/null +++ b/tests/assets/listObjects.response.xml @@ -0,0 +1,36 @@ + + + www + + 4 + 1000 + false + + file-1.docx + 2023-03-07T17:03:54.000Z + "7ad22b1297611d62ef4a4704c97afa6b" + 61396 + STANDARD + + + file-2.odt + 2023-03-07T17:03:56.000Z + "fa5678c105413929aa8f800b7a944d8e" + 358358 + STANDARD + + + document-1.docx + 2023-03-07T17:03:59.000Z + "a55b475be5fc06d42e1baf84231253ce" + 236208 + STANDARD + + + document-2.odt + 2023-03-02T07:18:55.000Z + "fde6d729123cee4db6bfa3606306bc8c" + 11822 + STANDARD + + \ No newline at end of file diff --git a/tests/s3.test.js b/tests/s3.test.js new file mode 100644 index 0000000..62186c3 --- /dev/null +++ b/tests/s3.test.js @@ -0,0 +1,2494 @@ +const s3 = require('../s3.js'); +const assert = require('assert'); +const nock = require('nock'); +const fs = require('fs'); +const path = require('path'); + +let storage = {}; +const url1S3 = 'https://s3.gra.first.cloud.test'; +const url2S3 = 'https://s3.de.first.cloud.test'; + +/** ASSETS for download/upload */ +const fileTxtPath = path.join(__dirname, 'assets', 'file.txt'); +const fileTxt = fs.readFileSync(fileTxtPath).toString(); +const fileXmlPath = path.join(__dirname, 'assets', 'files.xml'); +const fileXml = fs.readFileSync(fileXmlPath).toString(); +/** ASSETS for List objects Requests */ +const _listObjectsResponseXML = fs.readFileSync(path.join(__dirname, "./assets", 'listObjects.response.xml')); +const _listObjectsResponseJSON = require('./assets/listObjects.response.json'); + +describe.only('S3 SDK', function () { + + beforeEach(function() { + storage = s3([{ + accessKeyId : '2371bebbe7ac4b2db39c09eadf011661', + secretAccessKey: '9978f6abf7f445566a2d316424aeef2', + url : url1S3.replace('https://', ''), + region : 'gra', + buckets : { + invoices : "invoices-gra-1234" + } + }, + { + accessKeyId : '2371bebbe7ac4b2db39c09eadf011661', + secretAccessKey: '9978f6abf7f445566a2d316424aeef2', + url : url2S3.replace('https://', ''), + region : 'de', + buckets : { + invoices : "invoices-de-8888" + } + }]); + }) + + describe('constructor/getConfig/setConfig/setTimeout', function () { + it("should create a new s3 instance if the authentication is provided as Object", 
function () { + const _authS3 = { accessKeyId: '-', secretAccessKey: '-', region: '-', url: '-' } + const _storage = s3(_authS3); + const _config = _storage.getConfig() + assert.strictEqual(_config.timeout, 5000); + assert.strictEqual(_config.activeStorage, 0); + assert.strictEqual(_config.storages.length, 1); + assert.strictEqual(JSON.stringify(_config.storages[0]), JSON.stringify(_authS3)) + }) + + it("should create a new s3 instance if the authentication is provided as List of objects", function () { + const _authS3 = [{ accessKeyId: 1, secretAccessKey: 2, region: 3, url: 4 }, { accessKeyId: 5, secretAccessKey: 6, region: 7, url: 8 }, { accessKeyId: 9, secretAccessKey: 10, region: 11, url: 12 }] + const _storage = s3(_authS3); + const _config = _storage.getConfig() + assert.strictEqual(_config.timeout, 5000); + assert.strictEqual(_config.activeStorage, 0); + assert.strictEqual(_config.storages.length, 3); + assert.strictEqual(JSON.stringify(_config.storages), JSON.stringify(_authS3)) + }) + + it("should throw an error if authentication values are missing", function() { + assert.throws(function(){ s3({}) }, Error); + /** As object */ + assert.throws(function(){ s3({accessKeyId: '', secretAccessKey: '', url: ''}) }, Error); // missing region + assert.throws(function(){ s3({accessKeyId: '', secretAccessKey: '', region: ''}) }, Error); // missing url + assert.throws(function(){ s3({accessKeyId: '', url: '', region: ''}) }, Error); // missing secretAccessKey + assert.throws(function(){ s3({secretAccessKey: '', url: '', region: ''}) }, Error); // missing accessKeyId + /** As array */ + assert.throws(function(){ s3([{ accessKeyId: 1, secretAccessKey: 2, region: 3, url: 4 }, { accessKeyId: 5, secretAccessKey: 6, url: 8 }]) }, Error); // missing region + assert.throws(function(){ s3([{ accessKeyId: 1, secretAccessKey: 2, region: 3, url: 4 }, { accessKeyId: 5, secretAccessKey: 6, region: 8 }]) }, Error); // missing url + assert.throws(function(){ s3([{ accessKeyId: 1, secretAccessKey: 2, region: 3, url: 4 }, { accessKeyId: 5, region: 6, url: 8 }]) }, Error); // missing secretAccessKey + assert.throws(function(){ s3([{ accessKeyId: 1, secretAccessKey: 2, region: 3, url: 4 }, { secretAccessKey: 5, region: 6, url: 8 }]) }, Error); // missing accessKeyId + }); + + it("should set a new config", function () { + const _storage = s3({ accessKeyId: '-', secretAccessKey: '-', region: '-', url: '-' }); + const _authS3 = [{ accessKeyId: 1, secretAccessKey: 2, region: 3, url: 4 }, { accessKeyId: 5, secretAccessKey: 6, region: 7, url: 8 }] + _storage.setConfig(_authS3) + const _config = _storage.getConfig() + assert.strictEqual(_config.timeout, 5000); + assert.strictEqual(_config.activeStorage, 0); + assert.strictEqual(_config.storages.length, 2); + assert.strictEqual(JSON.stringify(_config.storages), JSON.stringify(_authS3)) + }) + + it("should set a new timeout value", function() { + const _storage = s3({ accessKeyId: '-', secretAccessKey: '-', region: '-', url: '-' }); + assert.strictEqual(_storage.getConfig().timeout, 5000); + _storage.setTimeout(10000); + assert.strictEqual(_storage.getConfig().timeout, 10000); + }); + }) + + describe('request - CALLBACK', function() { + + describe("REQUEST MAIN STORAGE", function () { + + }); + + describe("SWITCH TO CHILD STORAGE", function () { + }); + + }); + + describe('request - STREAM', function() { + + describe("REQUEST MAIN STORAGE", function () { + }); + + describe("SWITCH TO CHILD STORAGE", function () { + }); + + }); + + describe('headBucket', function() { + 
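+    /**
+     * The tests below also exercise the bucket ALIAS mechanism configured in `beforeEach`:
+     * the alias "invoices" resolves to "invoices-gra-1234" on the first storage and to
+     * "invoices-de-8888" on the second one. A minimal sketch of the call being tested,
+     * assuming the same credentials as in `beforeEach`:
+     *   storage.headBucket('invoices', (err, resp) => { console.log(resp.statusCode); });
+     */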
describe("REQUEST MAIN STORAGE", function () { + + it('should return code 200, and request signed with AWS4', function (done) { + const nockRequest = nock(url1S3, + { + reqheaders: { + 'x-amz-content-sha256': () => true, + 'x-amz-date': () => true, + 'authorization': () => true, + 'host': () => true + } + }).intercept("/customBucket", "HEAD").reply(200, ''); + + storage.headBucket('customBucket', function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }); + }); + + it('should return code 200 and request a bucket as ALIAS', function (done) { + const nockRequest = nock(url1S3, + { + reqheaders: { + 'x-amz-content-sha256': () => true, + 'x-amz-date': () => true, + 'authorization': () => true, + 'host': () => true + } + }).intercept("/invoices-gra-1234", "HEAD").reply(200, ''); + + storage.headBucket('invoices', function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }); + }); + + it('should return code 403 Forbidden', function (done) { + const nockRequest = nock(url1S3).intercept("/customBucket", "HEAD").reply(403, ''); + + storage.headBucket('customBucket', function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 403); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }); + }); + }); + + describe("SWITCH TO CHILD STORAGE", function () { + it('should switch to the child storage and return code 200 with bucket as ALIAS', function (done) { + const nockRequestS1 = nock(url1S3).intercept("/invoices-gra-1234", "HEAD").reply(500, ''); + const nockRequestS2 = nock(url2S3).intercept("/invoices-de-8888", "HEAD").reply(200, ''); + const nockRequestS3 = nock(url1S3).get('/').reply(500); + + storage.headBucket('invoices', function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }); + }); + }) + }); + + describe('listBuckets', function() { + + describe("REQUEST MAIN STORAGE", function () { + it('should fetch a list of buckets', function (done) { + const _header = { + 'content-type': 'application/xml', + 'content-length': '366', + 'x-amz-id-2': 'tx606add09487142fa88e67-00641aacf4', + 'x-amz-request-id': 'tx606add09487142fa88e67-00641aacf4', + 'x-trans-id': 'tx606add09487142fa88e67-00641aacf4', + 'x-openstack-request-id': 'tx606add09487142fa88e67-00641aacf4', + date: 'Wed, 22 Mar 2023 07:23:32 GMT', + connection: 'close' + } + + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/') + .reply(200, () => { + return "89123456:user-feiowjfOEIJW12345678:user-feiowjfOEIJWinvoices2023-02-27T11:46:24.000Zwww2023-02-27T11:46:24.000Z"; + }); + + storage.listBuckets((err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + "bucket": [ + { "name": "invoices", "creationdate": "2023-02-27T11:46:24.000Z" }, + { "name": "www", "creationdate": "2023-02-27T11:46:24.000Z" } + ] + })); + 
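+        // The XML payload returned by the storage is parsed into a JSON object with
+        // lowercase keys (the exported xmlToJson helper does the same conversion),
+        // hence `bucket` and `creationdate` above.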
assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + + it('should return an error if credentials are not correct', function (done) { + const _header = { + 'x-amz-request-id': 'BEFVPYB9PM889VMS', + 'x-amz-id-2': 'Pnby9XcoK7X/GBpwr+vVV/X3XyadxsUkTzGdSJS5zRMhs2RvZDGroWleytOYGmYRSszFbsaZWUo=', + 'content-type': 'application/xml', + 'transfer-encoding': 'chunked', + date: 'Wed, 22 Mar 2023 07:37:22 GMT', + server: 'AmazonS3', + connection: 'close' + } + + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/') + .reply(403, () => { + return "InvalidAccessKeyIdThe AWS Access Key Id you provided does not exist in our records.AKIAUO7WHYLVFADDFL57eBSTT951V1FREKS2XzWFC8ZOiZvyxTUgcYjHDD9rmPDG81TCJHkZhAv4zgguuR5I9aeqSFA9Ns4r5PdKy9+9o+xDLpOk="; + }); + + storage.listBuckets((err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 403); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + error: { + code: 'InvalidAccessKeyId', + message: 'The AWS Access Key Id you provided does not exist in our records.', + awsaccesskeyid: 'AKIAUO7WHYLVFADDFL57e', + requestid: 'BSTT951V1FREKS2X', + hostid: 'zWFC8ZOiZvyxTUgcYjHDD9rmPDG81TCJHkZhAv4zgguuR5I9aeqSFA9Ns4r5PdKy9+9o+xDLpOk=' + } + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + }); + + describe("SWITCH TO CHILD STORAGE", function () { + it('should fetch a list of buckets', function (done) { + const nockRequestS1 = nock(url1S3) + .get('/') + .reply(500, ''); + + const nockRequestS2 = nock(url2S3) + .get('/') + .reply(200, () => { + return "89123456:user-feiowjfOEIJW12345678:user-feiowjfOEIJWinvoices2023-02-27T11:46:24.000Zwww2023-02-27T11:46:24.000Z"; + }); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500); + + storage.listBuckets((err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + "bucket": [ + { "name": "invoices", "creationdate": "2023-02-27T11:46:24.000Z" }, + { "name": "www", "creationdate": "2023-02-27T11:46:24.000Z" } + ] + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify({})) + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + }); + + describe("Options 'requestStorageIndex'", function() { + + it("should request the first storage and should return an error if the first storage is not available", function(done) { + const nockRequestS1 = nock(url1S3) + .get('/') + .reply(500, ''); + + storage.listBuckets({ requestStorageIndex: 0 }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 500); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify({})) + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + it("should request the second storage and should return an error if the second storage is not available", function(done) { + const nockRequestS1 = nock(url2S3) + .get('/') + .reply(500, ''); + + storage.listBuckets({ requestStorageIndex: 1 }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 500); + 
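+        // `requestStorageIndex` forces the call onto the given storage and bypasses the
+        // fallback mechanism: the 500 answer is returned as-is instead of switching storage.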
assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify({})) + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + it("should request the second storage and get a list of buckets", function(done) { + const nockRequestS1 = nock(url2S3) + .get('/') + .reply(200, () => { + return "89123456:user-feiowjfOEIJW12345678:user-feiowjfOEIJWinvoices2023-02-27T11:46:24.000Zwww2023-02-27T11:46:24.000Z"; + }); + + storage.listBuckets({ requestStorageIndex: 1 }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + "bucket": [ + { "name": "invoices", "creationdate": "2023-02-27T11:46:24.000Z" }, + { "name": "www", "creationdate": "2023-02-27T11:46:24.000Z" } + ] + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify({})) + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + }) + + }); + + describe('listFiles', function() { + + describe("REQUEST MAIN STORAGE", function () { + it('should fetch a list of objects', function (done) { + const _header = { + 'content-type': 'application/xml', + 'content-length': '1887', + 'x-amz-id-2': 'txf0b438dfd25b444ba3f60-00641807d7', + 'x-amz-request-id': 'txf0b438dfd25b444ba3f60-00641807d7', + 'x-trans-id': 'txf0b438dfd25b444ba3f60-00641807d7', + 'x-openstack-request-id': 'txf0b438dfd25b444ba3f60-00641807d7', + date: 'Mon, 20 Mar 2023 07:14:31 GMT', + connection: 'close' + } + + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/bucket') + .query({ 'list-type' : 2 }) + .reply(200, () => { + return _listObjectsResponseXML; + }); + + storage.listFiles('bucket', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_listObjectsResponseJSON)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + + it('should fetch a list of objects from a bucket as ALIAS', function (done) { + const _header = { + 'content-type': 'application/xml', + 'content-length': '1887', + 'x-amz-id-2': 'txf0b438dfd25b444ba3f60-00641807d7', + 'x-amz-request-id': 'txf0b438dfd25b444ba3f60-00641807d7', + 'x-trans-id': 'txf0b438dfd25b444ba3f60-00641807d7', + 'x-openstack-request-id': 'txf0b438dfd25b444ba3f60-00641807d7', + date: 'Mon, 20 Mar 2023 07:14:31 GMT', + connection: 'close' + } + + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/invoices-gra-1234') + .query({ 'list-type' : 2 }) + .reply(200, () => { + return _listObjectsResponseXML; + }); + + storage.listFiles('invoices', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_listObjectsResponseJSON)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + + it('should fetch a list of objects with query parameters (prefix & limit)', function (done) { + const _header = { + 'content-type': 'application/xml', + 'content-length': '1887', + 'x-amz-id-2': 'txf0b438dfd25b444ba3f60-00641807d7', + 'x-amz-request-id': 'txf0b438dfd25b444ba3f60-00641807d7', + 'x-trans-id': 'txf0b438dfd25b444ba3f60-00641807d7', + 'x-openstack-request-id': 
'txf0b438dfd25b444ba3f60-00641807d7', + date: 'Mon, 20 Mar 2023 07:14:31 GMT', + connection: 'close' + } + + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/bucket') + .query({ + "list-type" : 2, + "prefix" : "document", + "max-keys" : 2 + }) + .reply(200, () => { + return _listObjectsResponseXML; + }); + + storage.listFiles('bucket', { queries: { "prefix": "document", "max-keys": 2 } }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_listObjectsResponseJSON)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + + it("should return an error if the bucket does not exist", function (done) { + const _headers = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'tx8fa5f00b19af4756b9ef3-0064184d77', + 'x-amz-request-id': 'tx8fa5f00b19af4756b9ef3-0064184d77', + 'x-trans-id': 'tx8fa5f00b19af4756b9ef3-0064184d77', + 'x-openstack-request-id': 'tx8fa5f00b19af4756b9ef3-0064184d77', + date: 'Mon, 20 Mar 2023 12:11:35 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + const _expectedBody = { + error: { + code: 'NoSuchBucket', + message: 'The specified bucket does not exist.', + requestid: 'txe285e692106542e88a2f5-0064184e80', + bucketname: 'buckeeeet' + } + } + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_headers) + .get('/buckeeeet') + .query({ + "list-type" : 2 + }) + .reply(404, () => { + return "NoSuchBucketThe specified bucket does not exist.txe285e692106542e88a2f5-0064184e80buckeeeet"; + }); + storage.listFiles('buckeeeet', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_expectedBody)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + }) + + describe("SWITCH TO CHILD STORAGE", function () { + it('should fetch a list of objects', function (done) { + const nockRequestS1 = nock(url1S3) + .get('/bucket') + .query({ 'list-type' : 2 }) + .reply(500, ''); + + const nockRequestS2 = nock(url2S3) + .get('/bucket') + .query({ 'list-type' : 2 }) + .reply(200, () => { + return _listObjectsResponseXML; + }); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500); + + storage.listFiles('bucket', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_listObjectsResponseJSON)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify({})) + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + }); + + }); + + describe('downloadFile', function() { + + describe("REQUEST MAIN STORAGE", function () { + it('should download a file', function(done) { + const _header = { + 'content-length': '1492', + 'last-modified': 'Wed, 03 Nov 2021 13:02:39 GMT', + date: 'Wed, 03 Nov 2021 14:28:48 GMT', + etag: 'a30776a059eaf26eebf27756a849097d', + 'x-amz-request-id': '318BC8BC148832E5', + 'x-amz-id-2': 'eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran' + } + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/bucket/file.docx') + .reply(200, 
() => { + return fileTxt; + }); + storage.downloadFile('bucket', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), fileTxt); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + + it('should download a file as STREAM', function(done) { + const _header = { + 'content-length': '1492', + 'last-modified': 'Wed, 03 Nov 2021 13:02:39 GMT', + date: 'Wed, 03 Nov 2021 14:28:48 GMT', + etag: 'a30776a059eaf26eebf27756a849097d', + 'x-amz-request-id': '318BC8BC148832E5', + 'x-amz-id-2': 'eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran' + } + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/bucket/file.docx') + .reply(200, () => { + return fileTxt; + }); + storage.downloadFile('bucket', 'file.docx', { stream: true }, function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + let data = ''; + resp.on('data', chunk => data += chunk); + resp.on('end', function () { + assert.strictEqual(data,fileTxt) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + done(); + }); + }) + }) + + it('should download a file from an alias', function(done) { + const _header = { + 'content-length': '1492', + 'last-modified': 'Wed, 03 Nov 2021 13:02:39 GMT', + date: 'Wed, 03 Nov 2021 14:28:48 GMT', + etag: 'a30776a059eaf26eebf27756a849097d', + 'x-amz-request-id': '318BC8BC148832E5', + 'x-amz-id-2': 'eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran' + } + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/invoices-gra-1234/file.docx') + .reply(200, () => { + return fileTxt; + }); + storage.downloadFile('invoices', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), fileTxt); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + + it('should download a file with options', function(done) { + const _header = { + 'content-length': '1492', + 'last-modified': 'Wed, 03 Nov 2021 13:02:39 GMT', + date: 'Wed, 03 Nov 2021 14:28:48 GMT', + etag: 'a30776a059eaf26eebf27756a849097d', + 'x-amz-request-id': '318BC8BC148832E5', + 'x-amz-id-2': 'eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran' + } + const _options = { + headers: { + "x-amz-server-side-encryption-customer-key-MD5": "SSECustomerKeyMD5", + "x-amz-checksum-mode": "ChecksumMode" + }, + queries: { + test : "2", + partNumber : "PartNumber" + } + } + const nockRequest = nock(url1S3, { + reqheaders: { + 'x-amz-server-side-encryption-customer-key-MD5': () => true, + 'x-amz-checksum-mode': () => true + } + }) + .defaultReplyHeaders(_header) + .get('/bucket/file.docx') + .query(_options.queries) + .reply(200, () => { + return fileTxt; + }); + storage.downloadFile('bucket', 'file.docx', _options, function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), fileTxt); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + + it('should return code 404 if the file does not exist', 
function(done) { + const _header = {'content-type': 'application/xml'} + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/bucket/file.docx') + .reply(404, "NoSuchKeyThe specified key does not exist.txc03d49a36c324653854de-006408d963template222.odt"); + storage.downloadFile('bucket', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + error: { + code: 'NoSuchKey', + message: 'The specified key does not exist.', + requestid: 'txc03d49a36c324653854de-006408d963', + key: 'template222.odt' + } + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + + + it('[STREAM] should return code 404 if the file does not exist and should convert the XML as JSON', function(done) { + const _header = {'content-type': 'application/xml'} + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/bucket/file.docx') + .reply(404, "NoSuchKeyThe specified key does not exist.txc03d49a36c324653854de-006408d963template222.odt"); + storage.downloadFile('bucket', 'file.docx', { stream: true }, function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + let data = ''; + resp.on('data', chunk => data += chunk); + resp.on('end', function () { + assert.strictEqual(JSON.stringify(storage.xmlToJson(data)), JSON.stringify({ + error: { + code: 'NoSuchKey', + message: 'The specified key does not exist.', + requestid: 'txc03d49a36c324653854de-006408d963', + key: 'template222.odt' + } + })) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + done(); + }); + }) + }) + + it("should return an error if the bucket does not exist", function (done) { + const _header = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'txfa644d038be848a9938e3-00641850f0', + 'x-amz-request-id': 'txfa644d038be848a9938e3-00641850f0', + 'x-trans-id': 'txfa644d038be848a9938e3-00641850f0', + 'x-openstack-request-id': 'txfa644d038be848a9938e3-00641850f0', + date: 'Mon, 20 Mar 2023 12:26:24 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + const nockRequest = nock(url1S3) + .defaultReplyHeaders(_header) + .get('/buckeeeet/file.docx') + .reply(404, () => { + return "NoSuchBucketThe specified bucket does not exist.txfa644d038be848a9938e3-00641850f0buckeeeet"; + }); + storage.downloadFile('buckeeeet', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + error: { + code: 'NoSuchBucket', + message: 'The specified bucket does not exist.', + requestid: 'txfa644d038be848a9938e3-00641850f0', + bucketname: 'buckeeeet' + } + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)) + assert.strictEqual(nockRequest.pendingMocks().length, 0); + done(); + }) + }) + }); + + describe("SWITCH TO CHILD STORAGE", function () { + it('should download a file from the second storage if the main storage returns a 500 error', function(done) { + const nockRequestS1 = nock(url1S3) + .get('/bucket/file.docx') + .reply(500); + const nockRequestS2 = nock(url2S3) + .get('/bucket/file.docx') + .reply(200, () => { + return fileTxt + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500); + + 
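+        // nockRequestS1 mocks the main storage failing on the object request, nockRequestS2
+        // serves the file from the child storage, and nockRequestS3 mocks the extra GET '/'
+        // made against the main storage to check whether it is available again: it still
+        // answers 500 here, so the client keeps the child storage active (activeStorage === 1).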
storage.downloadFile('bucket', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), fileTxt); + const _config = storage.getConfig(); + assert.strictEqual(_config.activeStorage, 1); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it('should download a file from the second storage if the main storage returns a 500 error and the container is an ALIAS', function(done) { + const nockRequestS1 = nock(url1S3) + .get('/invoices-gra-1234/file.docx') + .reply(500); + const nockRequestS2 = nock(url2S3) + .get('/invoices-de-8888/file.docx') + .reply(200, () => { + return fileTxt + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500); + + storage.downloadFile('invoices', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), fileTxt); + const _config = storage.getConfig(); + assert.strictEqual(_config.activeStorage, 1); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it('should download a file from the second storage if the main storage returns a 500 error, then should RECONNECT to the main storage', function(done) { + const nockRequestS1 = nock(url1S3) + .get('/bucket/file.docx') + .reply(500) + const nockRequestS2 = nock(url2S3) + .get('/bucket/file.docx') + .reply(200, () => { + return fileTxt + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(200); + + storage.downloadFile('bucket', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), fileTxt); + const _config = storage.getConfig(); + assert.strictEqual(_config.activeStorage, 0); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it('should download a file from the second storage if the authentication on the main storage is not allowed', function(done) { + let nockRequestS1 = nock(url1S3) + .get('/bucket/file.docx') + .reply(401, 'Unauthorized') + + const nockRequestS2 = nock(url2S3) + .get('/bucket/file.docx') + .reply(200, () => { + return fileTxt + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(401, 'Unauthorized') + + storage.downloadFile('bucket', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), fileTxt); + const _config = storage.getConfig(); + assert.strictEqual(_config.activeStorage, 1); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it('should download a file from the second storage if the main storage timeout', function(done) { + storage.setTimeout(200); + let nockRequestS1 = nock(url1S3) + .get('/bucket/file.docx') + .delayConnection(500) + .reply(200, {}); + const nockRequestS2 = nock(url2S3) + .get('/bucket/file.docx') + .reply(200, () => { + return 
fileTxt + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .delayConnection(500) + .reply(200, {}); + + storage.downloadFile('bucket', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), fileTxt); + const _config = storage.getConfig(); + assert.strictEqual(_config.activeStorage, 1); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it('should download a file from the second storage if the main storage returns any kind of error', function(done) { + let nockRequestS1 = nock(url1S3) + .get('/bucket/file.docx') + .replyWithError('Error Message 1234'); + + const nockRequestS2 = nock(url2S3) + .get('/bucket/file.docx') + .reply(200, () => { + return fileTxt + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .replyWithError('Error Message 1234'); + + storage.downloadFile('bucket', 'file.docx', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), fileTxt); + const _config = storage.getConfig(); + assert.strictEqual(_config.activeStorage, 1); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it('should return an error if all storage are not available, and reset the active storage to the main', function(done) { + const nockRequestS1 = nock(url1S3) + .get('/bucket/file.docx') + .reply(500) + const nockRequestS2 = nock(url2S3) + .get('/bucket/file.docx') + .reply(500, () => { + return fileTxt + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500); + + storage.downloadFile('bucket', 'file.docx', function (err, resp) { + assert.strictEqual(err.toString(), 'Error: All S3 storages are not available'); + assert.strictEqual(resp, undefined); + const _config = storage.getConfig(); + assert.strictEqual(_config.activeStorage, 0); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + }); + + describe("PARALLEL REQUESTS", function () { + + function getDownloadFilePromise() { + return new Promise((resolve, reject) => { + try { + storage.downloadFile('bucket', 'file.odt', function (err, resp) { + if (err) { + return reject(err); + } + return resolve(resp); + }); + } catch(err) { + return reject(err); + } + }); + } + + it('should fallback to a child if the main storage return any kind of errors, then should reconnect to the main storage automatically', function (done) { + const nockRequestS1 = nock(url1S3) + .get('/bucket/file.odt') + .reply(500) + .get('/bucket/file.odt') + .reply(500); + const nockRequestS2 = nock(url2S3) + .get('/bucket/file.odt') + .reply(200, () => { + return fileTxt + }) + .get('/bucket/file.odt') + .reply(200, () => { + return fileTxt + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(200); + const nockRequestS4 = nock(url1S3) + .get('/bucket/file.odt') + .reply(200, () => { + return fileTxt + }) + + let promise1 = getDownloadFilePromise() + let promise2 = getDownloadFilePromise() + + Promise.all([promise1, promise2]).then(results => { + assert.strictEqual(results.length, 2) + 
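+        // Both parallel downloads hit the failing main storage and are served by the child
+        // storage; the GET '/' mock answering 200 lets the client switch back to the main
+        // storage (activeStorage reset to 0) before the last request below.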
assert.strictEqual(results[0].body.toString(), fileTxt); + assert.strictEqual(results[0].statusCode, 200); + assert.strictEqual(results[1].body.toString(), fileTxt); + assert.strictEqual(results[1].statusCode, 200); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + assert.deepStrictEqual(storage.getConfig().activeStorage, 0); + /** Last batch requesting the main storage, everything is ok */ + storage.downloadFile('bucket', 'file.odt', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.body.toString(), fileTxt); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(nockRequestS4.pendingMocks().length, 0); + assert.deepStrictEqual(storage.getConfig().activeStorage, 0); + done(); + }); + }).catch(err => { + assert.strictEqual(err, null); + done(); + }); + }) + + it('should fallback to a child if the main storage return any kind of errors, then should reconnect to the main storage after multiple try', function (done) { + /** First Batch */ + const nockRequestS1 = nock(url1S3) + .get('/bucket/file.odt') + .reply(500) + .get('/bucket/file.odt') + .reply(500); + const nockRequestS2 = nock(url2S3) + .get('/bucket/file.odt') + .reply(200, () => { + return fileTxt + }) + .get('/bucket/file.odt') + .reply(200, () => { + return fileTxt + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500); + + /** Second Batch */ + const nockRequestS4 = nock(url2S3) + .get('/bucket/file.odt') + .reply(200, () => { + return fileTxt + }) + .get('/bucket/file.odt') + .reply(200, () => { + return fileTxt + }); + const nockRequestS5 = nock(url1S3) + .get('/') + .reply(500); + + /** Third Batch */ + const nockRequestS6 = nock(url2S3) + .get('/bucket/file.odt') + .reply(200, () => { + return fileTxt + }) + const nockRequestS7 = nock(url1S3) + .get('/') + .reply(200); + /** Fourth Batch */ + const nockRequestS8 = nock(url1S3) + .get('/bucket/file.odt') + .reply(200, () => { + return fileTxt + }) + + /** First batch of requests > S3 main return error > Child storage response ok */ + let promise1 = getDownloadFilePromise() + let promise2 = getDownloadFilePromise() + Promise.all([promise1, promise2]).then(function (results) { + assert.strictEqual(results.length, 2) + assert.strictEqual(results[0].body.toString(), fileTxt); + assert.strictEqual(results[0].statusCode, 200); + assert.strictEqual(results[1].body.toString(), fileTxt); + assert.strictEqual(results[1].statusCode, 200); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); + /** Second batch of requests > Still requesting the child storage, the main storage is still not available */ + let promise3 = getDownloadFilePromise() + let promise4 = getDownloadFilePromise() + Promise.all([promise3, promise4]).then(function (results) { + assert.strictEqual(results.length, 2) + assert.strictEqual(results[0].body.toString(), fileTxt); + assert.strictEqual(results[0].statusCode, 200); + assert.strictEqual(results[1].body.toString(), fileTxt); + assert.strictEqual(results[1].statusCode, 200); + assert.strictEqual(nockRequestS4.pendingMocks().length, 0); + assert.strictEqual(nockRequestS5.pendingMocks().length, 0); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); + /** Third batch of 
requests > Still requesting the child storage, the main storage is now Available! Active storage is reset to the main storage */ + storage.downloadFile('bucket', 'file.odt', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.body.toString(), fileTxt); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(nockRequestS6.pendingMocks().length, 0); + assert.strictEqual(nockRequestS7.pendingMocks().length, 0); + assert.deepStrictEqual(storage.getConfig().activeStorage, 0); + /** Fourth batch requesting the main storage, everything is ok */ + storage.downloadFile('bucket', 'file.odt', function (err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.body.toString(), fileTxt); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(nockRequestS8.pendingMocks().length, 0); + assert.deepStrictEqual(storage.getConfig().activeStorage, 0); + done(); + }); + }); + }).catch(function (err) { + assert.strictEqual(err, null); + done(); + }); + }).catch(function (err) { + assert.strictEqual(err, null); + done(); + }); + }) + + }); + + }); + + describe('uploadFile', function() { + + describe("REQUEST MAIN STORAGE", function () { + const _header = { + 'content-length': '0', + 'last-modified': 'Wed, 03 Nov 2021 13:02:39 GMT', + date: 'Wed, 03 Nov 2021 14:28:48 GMT', + etag: 'a30776a059eaf26eebf27756a849097d', + 'x-amz-request-id': '318BC8BC148832E5', + 'x-amz-id-2': 'eftixk72aD6Ap51TnqcoF8eFidJG9Z/2mkiDFu8yU9AS1ed4OpIszj7UDNEHGran' + } + + it("should upload a file provided as buffer", function() { + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_header) + .put('/bucket/file.pdf') + .reply(200, (uri, requestBody) => { + assert.strictEqual(requestBody, fileXml); + return ''; + }); + + storage.uploadFile('bucket', 'file.pdf', Buffer.from(fileXml), function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + }) + }) + + it("should upload a file provided as buffer to a bucket alias", function() { + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_header) + .put('/invoices-gra-1234/file.pdf') + .reply(200, (uri, requestBody) => { + assert.strictEqual(requestBody, fileXml); + return ''; + }); + + storage.uploadFile('invoices', 'file.pdf', Buffer.from(fileXml), function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + }) + }) + + it("should upload a file provided as local path", function() { + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_header) + .put('/bucket/file.pdf') + .reply(200, (uri, requestBody) => { + assert.strictEqual(requestBody, fileXml); + return ''; + }); + + storage.uploadFile('bucket', 'file.pdf', fileXmlPath, function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + }) + }) + + it("should return an error if the file provided as local path does not exist", function() { + + storage.uploadFile('bucket', 'file.pdf', 
'/var/random/path/file.pdf', function(err, resp) { + assert.strictEqual(err.toString(), "Error: ENOENT: no such file or directory, open '/var/random/path/file.pdf'"); + assert.strictEqual(resp, undefined); + }) + }) + + it("should upload a file provided as buffer with OPTIONS (like metadata)", function(done) { + const _options = { + headers: { + "x-amz-meta-name": "carbone", + "x-amz-checksum-sha256": "0ea4be78f6c3948588172edc6d8789ffe3cec461f385e0ac447e581731c429b5" + }, + queries: { + test : "2" + } + } + + const nockRequestS1 = nock(url1S3, { + reqheaders: { + 'x-amz-content-sha256': () => true, + 'x-amz-date': () => true, + 'authorization': () => true, + 'host': () => true, + 'x-amz-meta-name': () => true, + 'x-amz-checksum-sha256': () => true + } + }) + .defaultReplyHeaders(_header) + .put('/bucket/file.pdf') + .query(_options.queries) + .reply(200, (uri, requestBody) => { + assert.strictEqual(requestBody, fileXml); + return ''; + }); + + + storage.uploadFile('bucket', 'file.pdf', Buffer.from(fileXml), _options, function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + it("should return an error if the bucket does not exist", function (done) { + + const _headers = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'tx33e4496c9d8746ad9cfcb-006418540f', + 'x-amz-request-id': 'tx33e4496c9d8746ad9cfcb-006418540f', + 'x-trans-id': 'tx33e4496c9d8746ad9cfcb-006418540f', + 'x-openstack-request-id': 'tx33e4496c9d8746ad9cfcb-006418540f', + date: 'Mon, 20 Mar 2023 12:39:43 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .put('/buckeeeet/file.pdf') + .reply(404, (uri, requestBody) => { + assert.strictEqual(requestBody, fileXml); + return Buffer.from("NoSuchBucketThe specified bucket does not exist.tx9d1553e8d8de401bb8949-00641851bdbuckeeeet"); + }); + + storage.uploadFile('buckeeeet', 'file.pdf', fileXmlPath, function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + error: { + code: 'NoSuchBucket', + message: 'The specified bucket does not exist.', + requestid: 'tx9d1553e8d8de401bb8949-00641851bd', + bucketname: 'buckeeeet' + } + })); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + }); + + describe("SWITCH TO CHILD STORAGE", function () { + + it("should upload a file into a child storage", function(done) { + const _header = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'txd14fbe8bc05341c0b548a-00640b2752', + 'x-amz-request-id': 'txd14fbe8bc05341c0b548a-00640b2752', + 'x-trans-id': 'txd14fbe8bc05341c0b548a-00640b2752', + 'x-openstack-request-id': 'txd14fbe8bc05341c0b548a-00640b2752', + date: 'Fri, 10 Mar 2023 12:49:22 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_header) + .put('/bucket/file.pdf') + .reply(500, ''); + + const nockRequestS2 = nock(url2S3) + .defaultReplyHeaders(_header) + .put('/bucket/file.pdf') + .reply(200, (uri, requestBody) => { + assert.strictEqual(requestBody, fileXml); + return ''; + }); + const nockRequestS3 = 
nock(url1S3) + .get('/') + .reply(500); + + storage.uploadFile('bucket', 'file.pdf', Buffer.from(fileXml), function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it("should upload a file into a child storage into a bucket as ALIAS", function(done) { + const _header = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'txd14fbe8bc05341c0b548a-00640b2752', + 'x-amz-request-id': 'txd14fbe8bc05341c0b548a-00640b2752', + 'x-trans-id': 'txd14fbe8bc05341c0b548a-00640b2752', + 'x-openstack-request-id': 'txd14fbe8bc05341c0b548a-00640b2752', + date: 'Fri, 10 Mar 2023 12:49:22 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_header) + .put('/invoices-gra-1234/file.pdf') + .reply(500, ''); + + const nockRequestS2 = nock(url2S3) + .defaultReplyHeaders(_header) + .put('/invoices-de-8888/file.pdf') + .reply(200, (uri, requestBody) => { + assert.strictEqual(requestBody, fileXml); + return ''; + }); + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500); + + storage.uploadFile('invoices', 'file.pdf', Buffer.from(fileXml), function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it("should not be able to upload a file into a child storage if the write access is denied.", function(done) { + const _header = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'txd14fbe8bc05341c0b548a-00640b2752', + 'x-amz-request-id': 'txd14fbe8bc05341c0b548a-00640b2752', + 'x-trans-id': 'txd14fbe8bc05341c0b548a-00640b2752', + 'x-openstack-request-id': 'txd14fbe8bc05341c0b548a-00640b2752', + date: 'Fri, 10 Mar 2023 12:49:22 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_header) + .put('/bucket/file.pdf') + .reply(500, ''); + + const nockRequestS2 = nock(url2S3) + .defaultReplyHeaders(_header) + .put('/bucket/file.pdf') + .reply(403, () => { + return "AccessDeniedAccess Denied.txd14fbe8bc05341c0b548a-00640b2752"; + }) + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500); + + const _expectedBody = { + error: { + code: 'AccessDenied', + message: 'Access Denied.', + requestid: 'txd14fbe8bc05341c0b548a-00640b2752' + } + } + + storage.uploadFile('bucket', 'file.pdf', Buffer.from(fileXml), function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 403); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_expectedBody)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_header)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + }); + + }); + + describe('deleteFile', function() { + + 
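+    /**
+     * As the tests below show, deleteFile resolves bucket aliases the same way as uploadFile
+     * and downloadFile, and the storage answers 204 even when the object does not exist.
+     * A minimal sketch, assuming the credentials from `beforeEach`:
+     *   storage.deleteFile('invoices', 'file.pdf', (err, resp) => {
+     *     // resp.statusCode === 204 when the deletion is accepted
+     *   });
+     */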
describe("REQUEST MAIN STORAGE", function () { + + it('should delete an object (return the same response if the object does not exist)', function(done) { + const _headers = { + 'content-type': 'text/html; charset=UTF-8', + 'content-length': '0', + 'x-amz-id-2': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-amz-request-id': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-trans-id': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-openstack-request-id': 'txf010ba580ff0471ba3a0b-0064181698', + date: 'Mon, 20 Mar 2023 08:17:29 GMT', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .delete('/www/file.pdf') + .reply(204, ''); + + storage.deleteFile('www', 'file.pdf', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 204); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + it('should delete an object into a bucket as ALIAS', function(done) { + const _headers = { + 'content-type': 'text/html; charset=UTF-8', + 'content-length': '0', + 'x-amz-id-2': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-amz-request-id': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-trans-id': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-openstack-request-id': 'txf010ba580ff0471ba3a0b-0064181698', + date: 'Mon, 20 Mar 2023 08:17:29 GMT', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .delete('/invoices-gra-1234/file.pdf') + .reply(204, ''); + + storage.deleteFile('invoices', 'file.pdf', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 204); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + it("should return an error if the bucket does not exist", function (done) { + const _headers = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'tx424f2a5a6e684da581e77-0064185482', + 'x-amz-request-id': 'tx424f2a5a6e684da581e77-0064185482', + 'x-trans-id': 'tx424f2a5a6e684da581e77-0064185482', + 'x-openstack-request-id': 'tx424f2a5a6e684da581e77-0064185482', + date: 'Mon, 20 Mar 2023 12:41:38 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .delete('/buckeeet/file.pdf') + .reply(404, "NoSuchBucketThe specified bucket does not exist.tx424f2a5a6e684da581e77-0064185482buckeeet"); + + storage.deleteFile('buckeeet', 'file.pdf', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + error: { + code: 'NoSuchBucket', + message: 'The specified bucket does not exist.', + requestid: 'tx424f2a5a6e684da581e77-0064185482', + bucketname: 'buckeeet' + } + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + }); + + describe("SWITCH TO CHILD STORAGE", function () { + + it('should delete an object from the second bucket', function(done) { + const _headers = { + 'content-type': 'text/html; charset=UTF-8', + 'content-length': '0', + 'x-amz-id-2': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-amz-request-id': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-trans-id': 
'txf010ba580ff0471ba3a0b-0064181698', + 'x-openstack-request-id': 'txf010ba580ff0471ba3a0b-0064181698', + date: 'Mon, 20 Mar 2023 08:17:29 GMT', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .delete('/www/file.pdf') + .reply(500, ''); + + const nockRequestS2 = nock(url2S3) + .defaultReplyHeaders(_headers) + .delete('/www/file.pdf') + .reply(204, ''); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500, ''); + + storage.deleteFile('www', 'file.pdf', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 204); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it('should delete an object from the second bucket as ALIAS', function(done) { + const _headers = { + 'content-type': 'text/html; charset=UTF-8', + 'content-length': '0', + 'x-amz-id-2': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-amz-request-id': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-trans-id': 'txf010ba580ff0471ba3a0b-0064181698', + 'x-openstack-request-id': 'txf010ba580ff0471ba3a0b-0064181698', + date: 'Mon, 20 Mar 2023 08:17:29 GMT', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .delete('/invoices-gra-1234/file.pdf') + .reply(500, ''); + + const nockRequestS2 = nock(url2S3) + .defaultReplyHeaders(_headers) + .delete('/invoices-de-8888/file.pdf') + .reply(204, ''); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500, ''); + + storage.deleteFile('invoices', 'file.pdf', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 204); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it("should not be able to delete a file of a child storage if the write permission is disallowed", function(done) { + const _bodyErrorAccessDenied = "AccessDeniedAccess Denied.txb40580debedc4ff9b36dc-00641818cb" + const _bodyJson = { + error: { + code: 'AccessDenied', + message: 'Access Denied.', + requestid: 'txb40580debedc4ff9b36dc-00641818cb' + } + } + + const _headers = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'txb40580debedc4ff9b36dc-00641818cb', + 'x-amz-request-id': 'txb40580debedc4ff9b36dc-00641818cb', + 'x-trans-id': 'txb40580debedc4ff9b36dc-00641818cb', + 'x-openstack-request-id': 'txb40580debedc4ff9b36dc-00641818cb', + date: 'Mon, 20 Mar 2023 08:26:51 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .delete('/www/file.pdf') + .reply(500, ''); + + const nockRequestS2 = nock(url2S3) + .defaultReplyHeaders(_headers) + .delete('/www/file.pdf') + .reply(403, _bodyErrorAccessDenied); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500, ''); + + storage.deleteFile('www', 'file.pdf', (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 403); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_bodyJson)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + 
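+        // pendingMocks() must be empty: it proves every mocked endpoint was called,
+        // including the availability check on the main storage root URL.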
assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + }); + + }); + + describe('deleteFiles', function() { + + describe("REQUEST MAIN STORAGE", function () { + + it('should delete a list of objects', function(done) { + const _headers = { + 'content-type': 'text/html; charset=UTF-8', + 'content-length': '269', + 'x-amz-id-2': 'txb383f29c0dad46f9919b5-00641844ba', + 'x-amz-request-id': 'txb383f29c0dad46f9919b5-00641844ba', + 'x-trans-id': 'txb383f29c0dad46f9919b5-00641844ba', + 'x-openstack-request-id': 'txb383f29c0dad46f9919b5-00641844ba', + date: 'Mon, 20 Mar 2023 11:34:18 GMT', + connection: 'close' + } + + const _filesToDelete = [ + { key: 'invoice 2023.pdf' }, + { key: 'carbone(1).png' }, + { key: 'file.txt' } + ] + + const _expectedBody = { + deleted: _filesToDelete.map((value) => { + return { + key: encodeURIComponent(value.key) + } + }) + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .post('/www/') + .query((actualQueryObject) => { + assert.strictEqual(actualQueryObject.delete !== undefined, true); + return true; + }) + .reply(200, function() { + return "invoice%202023.pdfcarbone(1).pngfile.txt"; + }) + + storage.deleteFiles('www', _filesToDelete, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_expectedBody)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + it('should delete a list of objects with Bucket as ALIAS', function(done) { + const _headers = { + 'content-type': 'text/html; charset=UTF-8', + 'content-length': '269', + 'x-amz-id-2': 'txb383f29c0dad46f9919b5-00641844ba', + 'x-amz-request-id': 'txb383f29c0dad46f9919b5-00641844ba', + 'x-trans-id': 'txb383f29c0dad46f9919b5-00641844ba', + 'x-openstack-request-id': 'txb383f29c0dad46f9919b5-00641844ba', + date: 'Mon, 20 Mar 2023 11:34:18 GMT', + connection: 'close' + } + + const _filesToDelete = [ + { key: 'invoice 2023.pdf' }, + { key: 'carbone(1).png' }, + { key: 'file.txt' } + ] + + const _expectedBody = { + deleted: _filesToDelete.map((value) => { + return { + key: encodeURIComponent(value.key) + } + }) + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .post('/invoices-gra-1234/') + .query((actualQueryObject) => { + assert.strictEqual(actualQueryObject.delete !== undefined, true); + return true; + }) + .reply(200, function() { + return "invoice%202023.pdfcarbone(1).pngfile.txt"; + }) + + storage.deleteFiles('invoices', _filesToDelete, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_expectedBody)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + it('should delete a list of objects with mix success/errors (access denied)', function(done) { + const _headers = { + 'content-type': 'text/html; charset=UTF-8', + 'content-length': '269', + 'x-amz-id-2': 'tx3cf216266bf24a888354a-0064184a78', + 'x-amz-request-id': 'tx3cf216266bf24a888354a-0064184a78', + 'x-trans-id': 'tx3cf216266bf24a888354a-0064184a78', + 'x-openstack-request-id': 'tx3cf216266bf24a888354a-0064184a78', + date: 'Mon, 20 
Mar 2023 11:58:49 GMT', + connection: 'close' + } + + const _filesToDelete = [ + { key: 'sample1.txt' }, + { key: 'sample2.txt' } + ] + + const _expectedBody = { + deleted: [ + { key: 'sample1.txt' } + ], + error: [ + { + key : 'sample2.txt', + code : 'AccessDenied', + message: 'Access Denied' + } + ] + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .post('/www/') + .query((actualQueryObject) => { + assert.strictEqual(actualQueryObject.delete !== undefined, true); + return true; + }) + .reply(200, function() { + return "sample1.txtsample2.txtAccessDeniedAccess Denied"; + }) + + storage.deleteFiles('www', _filesToDelete, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_expectedBody)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + it("should return an error if the bucket does not exist", function (done) { + const _headers = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'tx84736ac6d5544b44ba91a-0064185021', + 'x-amz-request-id': 'tx84736ac6d5544b44ba91a-0064185021', + 'x-trans-id': 'tx84736ac6d5544b44ba91a-0064185021', + 'x-openstack-request-id': 'tx84736ac6d5544b44ba91a-0064185021', + date: 'Mon, 20 Mar 2023 12:22:57 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const _filesToDelete = [ + { key: 'invoice 2023.pdf' }, + { key: 'carbone(1).png' }, + { key: 'file.txt' } + ] + + const _expectedBody = { + error: { + code: 'NoSuchBucket', + message: 'The specified bucket does not exist.', + requestid: 'tx84736ac6d5544b44ba91a-0064185021', + bucketname: 'buckeeeet' + } + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .post('/buckeeeet/') + .query((actualQueryObject) => { + assert.strictEqual(actualQueryObject.delete !== undefined, true); + return true; + }) + .reply(404, function() { + return "NoSuchBucketThe specified bucket does not exist.tx84736ac6d5544b44ba91a-0064185021buckeeeet"; + }) + + storage.deleteFiles('buckeeeet', _filesToDelete, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_expectedBody)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }) + }) + + + }); + + describe("SWITCH TO CHILD STORAGE", function () { + + it("should not be able to delete a file of a child storage if the write permission is disallowed (access denied)", function(done) { + const _headers = { + 'content-type': 'text/html; charset=UTF-8', + 'content-length': '431', + 'x-amz-id-2': 'txe69b17ed1cf04260b9090-0064184b17', + 'x-amz-request-id': 'txe69b17ed1cf04260b9090-0064184b17', + 'x-trans-id': 'txe69b17ed1cf04260b9090-0064184b17', + 'x-openstack-request-id': 'txe69b17ed1cf04260b9090-0064184b17', + date: 'Mon, 20 Mar 2023 12:01:28 GMT', + connection: 'close' + } + + const _filesToDelete = [ + { key: 'invoice 2023.pdf' }, + { key: 'carbone(1).png' }, + { key: 'file.txt' } + ] + + const _expectedBody = { + error: _filesToDelete.map((value) => { + return { + key : encodeURIComponent(value.key), + code : 'AccessDenied', + message: 'Access Denied' + } + }) + } + + const nockRequestS1 = nock(url1S3) + .post('/www/') + .query((actualQueryObject) => { + assert.strictEqual(actualQueryObject.delete !== 
undefined, true); + return true; + }) + .reply(500, '') + + const nockRequestS2 = nock(url2S3) + .defaultReplyHeaders(_headers) + .post('/www/') + .query((actualQueryObject) => { + assert.strictEqual(actualQueryObject.delete !== undefined, true); + return true; + }) + .reply(200, function() { + return "invoice%202023.pdfAccessDeniedAccess Deniedcarbone(1).pngAccessDeniedAccess Deniedfile.txtAccessDeniedAccess Denied"; + }) + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500, ''); + + storage.deleteFiles('www', _filesToDelete, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify(_expectedBody)); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + }); + + }); + + describe('getFileMetadata', function() { + + describe("REQUEST MAIN STORAGE", function () { + + it('should get file metadata', function(done){ + const _headers = { + 'content-type': 'application/x-www-form-urlencoded; charset=utf-8', + 'content-length': '11822', + 'x-amz-storage-class': 'STANDARD', + 'x-amz-meta-name': 'Carbone.io', + 'x-amz-meta-version': '858585', + etag: '"fde6d729123cee4db6bfa3606306bc8c"', + 'x-amz-version-id': '1679316796.606606', + 'last-modified': 'Mon, 20 Mar 2023 12:53:16 GMT', + 'x-amz-id-2': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-amz-request-id': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-trans-id': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-openstack-request-id': 'txd2aa2b0a02554657b5efe-0064185752', + date: 'Mon, 20 Mar 2023 12:53:38 GMT', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .intercept("/bucket/file.pdf", "HEAD") + .reply(200, ""); + + storage.getFileMetadata('bucket', 'file.pdf', function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }); + }) + + it('should get file metadata from a bucket as ALIAS', function(done){ + const _headers = { + 'content-type': 'application/x-www-form-urlencoded; charset=utf-8', + 'content-length': '11822', + 'x-amz-storage-class': 'STANDARD', + 'x-amz-meta-name': 'Carbone.io', + 'x-amz-meta-version': '858585', + etag: '"fde6d729123cee4db6bfa3606306bc8c"', + 'x-amz-version-id': '1679316796.606606', + 'last-modified': 'Mon, 20 Mar 2023 12:53:16 GMT', + 'x-amz-id-2': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-amz-request-id': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-trans-id': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-openstack-request-id': 'txd2aa2b0a02554657b5efe-0064185752', + date: 'Mon, 20 Mar 2023 12:53:38 GMT', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .intercept("/invoices-gra-1234/file.pdf", "HEAD") + .reply(200, ""); + + storage.getFileMetadata('invoices', 'file.pdf', function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); 
+ }); + }) + + it('should return an error if the object or bucket don\'t exist', function(done){ + const _headers = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'tx10b87fee8896442cb93ce-00641855ea', + 'x-amz-request-id': 'tx10b87fee8896442cb93ce-00641855ea', + 'x-trans-id': 'tx10b87fee8896442cb93ce-00641855ea', + 'x-openstack-request-id': 'tx10b87fee8896442cb93ce-00641855ea', + date: 'Mon, 20 Mar 2023 12:47:38 GMT', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .defaultReplyHeaders(_headers) + .intercept("/bucket/file.pdf", "HEAD") + .reply(404, ""); + + storage.getFileMetadata('bucket', 'file.pdf', function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({})); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + done(); + }); + }) + }); + + describe("SWITCH TO CHILD STORAGE", function () { + + it('should get file metadata in the second storage', function(done){ + const _headers = { + 'content-type': 'application/x-www-form-urlencoded; charset=utf-8', + 'content-length': '11822', + 'x-amz-storage-class': 'STANDARD', + 'x-amz-meta-name': 'Carbone.io', + 'x-amz-meta-version': '858585', + etag: '"fde6d729123cee4db6bfa3606306bc8c"', + 'x-amz-version-id': '1679316796.606606', + 'last-modified': 'Mon, 20 Mar 2023 12:53:16 GMT', + 'x-amz-id-2': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-amz-request-id': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-trans-id': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-openstack-request-id': 'txd2aa2b0a02554657b5efe-0064185752', + date: 'Mon, 20 Mar 2023 12:53:38 GMT', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .intercept("/bucket/file.pdf", "HEAD") + .reply(500, ""); + + const nockRequestS2 = nock(url2S3) + .defaultReplyHeaders(_headers) + .intercept("/bucket/file.pdf", "HEAD") + .reply(200, ""); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500, ''); + + storage.getFileMetadata('bucket', 'file.pdf', function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }); + }) + + it('should get file metadata in the second storage with a bucket as ALIAS', function(done){ + const _headers = { + 'content-type': 'application/x-www-form-urlencoded; charset=utf-8', + 'content-length': '11822', + 'x-amz-storage-class': 'STANDARD', + 'x-amz-meta-name': 'Carbone.io', + 'x-amz-meta-version': '858585', + etag: '"fde6d729123cee4db6bfa3606306bc8c"', + 'x-amz-version-id': '1679316796.606606', + 'last-modified': 'Mon, 20 Mar 2023 12:53:16 GMT', + 'x-amz-id-2': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-amz-request-id': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-trans-id': 'txd2aa2b0a02554657b5efe-0064185752', + 'x-openstack-request-id': 'txd2aa2b0a02554657b5efe-0064185752', + date: 'Mon, 20 Mar 2023 12:53:38 GMT', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .intercept("/invoices-gra-1234/file.pdf", "HEAD") + .reply(500, ""); + + const nockRequestS2 = nock(url2S3) + .defaultReplyHeaders(_headers) + .intercept("/invoices-de-8888/file.pdf", "HEAD") + 
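// the "invoices" alias maps to a different real bucket name ("invoices-de-8888") on the child storage +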
.reply(200, ""); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500, ''); + + storage.getFileMetadata('invoices', 'file.pdf', function(err, resp) { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(resp.body.toString(), ''); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }); + }) + + + }); + + }); + + + describe('setFileMetadata', function() { + + describe("REQUEST MAIN STORAGE", function () { + + it('should set file metadata', function(done){ + + const _headers2 = { + 'content-type': 'application/xml', + 'content-length': '224', + 'x-amz-version-id': '1679317926.773804', + 'last-modified': 'Mon, 20 Mar 2023 13:13:06 GMT', + 'x-amz-copy-source-version-id': '1679317926.773804', + 'x-amz-storage-class': 'STANDARD', + 'x-amz-id-2': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-amz-request-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-trans-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-openstack-request-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + date: 'Mon, 20 Mar 2023 13:13:06 GMT', + connection: 'close' + } + + const nockRequestS2 = nock(url1S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + .defaultReplyHeaders(_headers2) + .put('/bucket/file.pdf') + .reply(200, "2023-03-20T13:13:06.000Z\"fde6d729123cee4db6bfa3606306bc8c\""); + + storage.setFileMetadata('bucket', 'file.pdf', { headers: { "x-amz-meta-name": "Invoice 2023", "x-amz-meta-version": "1.2.3" } }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + lastmodified: '2023-03-20T13:13:06.000Z', + etag: 'fde6d729123cee4db6bfa3606306bc8c' + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers2)); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + done(); + }) + }) + + it('should set file metadata with a bucket as ALIAS', function(done){ + + const _headers2 = { + 'content-type': 'application/xml', + 'content-length': '224', + 'x-amz-version-id': '1679317926.773804', + 'last-modified': 'Mon, 20 Mar 2023 13:13:06 GMT', + 'x-amz-copy-source-version-id': '1679317926.773804', + 'x-amz-storage-class': 'STANDARD', + 'x-amz-id-2': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-amz-request-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-trans-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-openstack-request-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + date: 'Mon, 20 Mar 2023 13:13:06 GMT', + connection: 'close' + } + + const nockRequestS2 = nock(url1S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + .defaultReplyHeaders(_headers2) + .put('/invoices-gra-1234/file.pdf') + .reply(200, "2023-03-20T13:13:06.000Z\"fde6d729123cee4db6bfa3606306bc8c\""); + + storage.setFileMetadata('invoices', 'file.pdf', { headers: { "x-amz-meta-name": "Invoice 2023", "x-amz-meta-version": "1.2.3" } }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + lastmodified: '2023-03-20T13:13:06.000Z', + etag: 'fde6d729123cee4db6bfa3606306bc8c' + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers2)); + 
assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + done(); + }) + }) + + it('should return an error if the object does not exist', function(done){ + const _headers = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'txb4919778632448bbac785-0064185d71', + 'x-amz-request-id': 'txb4919778632448bbac785-0064185d71', + 'x-trans-id': 'txb4919778632448bbac785-0064185d71', + 'x-openstack-request-id': 'txb4919778632448bbac785-0064185d71', + date: 'Mon, 20 Mar 2023 13:19:45 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS2 = nock(url1S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + .defaultReplyHeaders(_headers) + .put('/bucket/fiiile.pdf') + .reply(404, "NoSuchKeyThe specified key does not exist.txb4919778632448bbac785-0064185d71fiiile.pdf"); + + storage.setFileMetadata('bucket', 'fiiile.pdf', { headers: { "x-amz-meta-name": "Invoice 2023", "x-amz-meta-version": "1.2.3" } }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + error: { + code: 'NoSuchKey', + message: 'The specified key does not exist.', + requestid: 'txb4919778632448bbac785-0064185d71', + key: 'fiiile.pdf' + } + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + done(); + }) + }) + + it('should return an error if the bucket does not exist', function(done){ + const _headers = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'txb63fe612d3364257bec19-0064185fcf', + 'x-amz-request-id': 'txb63fe612d3364257bec19-0064185fcf', + 'x-trans-id': 'txb63fe612d3364257bec19-0064185fcf', + 'x-openstack-request-id': 'txb63fe612d3364257bec19-0064185fcf', + date: 'Mon, 20 Mar 2023 13:29:51 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS2 = nock(url1S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + .defaultReplyHeaders(_headers) + .put('/buckeeet/file.pdf') + .reply(404, "NoSuchBucketThe specified bucket does not exist.txb63fe612d3364257bec19-0064185fcfbuckeeet"); + + storage.setFileMetadata('buckeeet', 'file.pdf', { headers: { "x-amz-meta-name": "Invoice 2023", "x-amz-meta-version": "1.2.3" } }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 404); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + error: { + code: 'NoSuchBucket', + message: 'The specified bucket does not exist.', + requestid: 'txb63fe612d3364257bec19-0064185fcf', + bucketname: 'buckeeet' + } + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + done(); + }) + }) + + it('should return an error if metadata headers aceed the maximum allowed metadata size (2048 Bytes)', function(done){ + const _headers = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'txb63fe612d3364257bec19-0064185fcf', + 'x-amz-request-id': 'txb63fe612d3364257bec19-0064185fcf', + 'x-trans-id': 'txb63fe612d3364257bec19-0064185fcf', + 'x-openstack-request-id': 'txb63fe612d3364257bec19-0064185fcf', + date: 'Mon, 20 Mar 2023 13:29:51 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS2 = nock(url1S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + 
.defaultReplyHeaders(_headers) + .put('/bucket/file.pdf') + .reply(400, "MetadataTooLargeYour metadata headers exceed the maximum allowed metadata size307220484SJA4PV72M8WXZ46GHxUyWQsQrv4DNU+X6K2YYqBN65twd+IZH0g3yRz7HQ7EXcVlfE8e81eJ559/3SyY0FscUdsyWY="); + + storage.setFileMetadata('bucket', 'file.pdf', { headers: { "x-amz-meta-name": "Invoice 2023", "x-amz-meta-version": "1.2.3" } }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 400); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + error: { + code: 'MetadataTooLarge', + message: 'Your metadata headers exceed the maximum allowed metadata size', + size: 3072, + maxsizeallowed: 2048, + requestid: '4SJA4PV72M8WXZ46', + hostid: 'GHxUyWQsQrv4DNU+X6K2YYqBN65twd+IZH0g3yRz7HQ7EXcVlfE8e81eJ559/3SyY0FscUdsyWY=' + } + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers)); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + done(); + }) + }) + + }); + + describe("SWITCH TO CHILD STORAGE", function () { + + it('should set file metadata in the child storage', function(done){ + + const _headers2 = { + 'content-type': 'application/xml', + 'content-length': '224', + 'x-amz-version-id': '1679317926.773804', + 'last-modified': 'Mon, 20 Mar 2023 13:13:06 GMT', + 'x-amz-copy-source-version-id': '1679317926.773804', + 'x-amz-storage-class': 'STANDARD', + 'x-amz-id-2': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-amz-request-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-trans-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-openstack-request-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + date: 'Mon, 20 Mar 2023 13:13:06 GMT', + connection: 'close' + } + + + const nockRequestS1 = nock(url1S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + .put('/bucket/file.pdf') + .reply(500, ""); + + const nockRequestS2 = nock(url2S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + .defaultReplyHeaders(_headers2) + .put('/bucket/file.pdf') + .reply(200, "2023-03-20T13:13:06.000Z\"fde6d729123cee4db6bfa3606306bc8c\""); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500, ""); + + + storage.setFileMetadata('bucket', 'file.pdf', { headers: { "x-amz-meta-name": "Invoice 2023", "x-amz-meta-version": "1.2.3" } }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + lastmodified: '2023-03-20T13:13:06.000Z', + etag: 'fde6d729123cee4db6bfa3606306bc8c' + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers2)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it('should set file metadata in the child storage with a bucket as ALIAS', function(done){ + + const _headers2 = { + 'content-type': 'application/xml', + 'content-length': '224', + 'x-amz-version-id': '1679317926.773804', + 'last-modified': 'Mon, 20 Mar 2023 13:13:06 GMT', + 'x-amz-copy-source-version-id': '1679317926.773804', + 'x-amz-storage-class': 'STANDARD', + 'x-amz-id-2': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-amz-request-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-trans-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + 'x-openstack-request-id': 'tx1cbdc88e9f104c038aa3d-0064185be2', + date: 'Mon, 20 Mar 2023 
13:13:06 GMT', + connection: 'close' + } + + + const nockRequestS1 = nock(url1S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + .put('/invoices-gra-1234/file.pdf') + .reply(500, ""); + + const nockRequestS2 = nock(url2S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + .defaultReplyHeaders(_headers2) + .put('/invoices-de-8888/file.pdf') + .reply(200, "2023-03-20T13:13:06.000Z\"fde6d729123cee4db6bfa3606306bc8c\""); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500, ""); + + + storage.setFileMetadata('invoices', 'file.pdf', { headers: { "x-amz-meta-name": "Invoice 2023", "x-amz-meta-version": "1.2.3" } }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 200); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + lastmodified: '2023-03-20T13:13:06.000Z', + etag: 'fde6d729123cee4db6bfa3606306bc8c' + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers2)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + it("should not be able to write file metadata of a child storage if the write permission is disallowed", function(done) { + + const _headers2 = { + 'content-type': 'application/xml', + 'x-amz-id-2': 'tx439620795cdd41b08c58c-0064186222', + 'x-amz-request-id': 'tx439620795cdd41b08c58c-0064186222', + 'x-trans-id': 'tx439620795cdd41b08c58c-0064186222', + 'x-openstack-request-id': 'tx439620795cdd41b08c58c-0064186222', + date: 'Mon, 20 Mar 2023 13:39:46 GMT', + 'transfer-encoding': 'chunked', + connection: 'close' + } + + const nockRequestS1 = nock(url1S3) + .put('/bucket/file.pdf') + .reply(500, ""); + + + const nockRequestS2 = nock(url2S3, { + reqheaders: { + 'x-amz-copy-source': () => true, + 'x-amz-metadata-directive': () => true + } + }) + .defaultReplyHeaders(_headers2) + .put('/bucket/file.pdf') + .reply(403, "AccessDeniedAccess Denied.tx439620795cdd41b08c58c-0064186222"); + + const nockRequestS3 = nock(url1S3) + .get('/') + .reply(500, ""); + + storage.setFileMetadata('bucket', 'file.pdf', { headers: { "x-amz-meta-name": "Invoice 2023", "x-amz-meta-version": "1.2.3" } }, (err, resp) => { + assert.strictEqual(err, null); + assert.strictEqual(resp.statusCode, 403); + assert.strictEqual(JSON.stringify(resp.body), JSON.stringify({ + error: { + code: 'AccessDenied', + message: 'Access Denied.', + requestid: 'tx439620795cdd41b08c58c-0064186222' + } + })); + assert.strictEqual(JSON.stringify(resp.headers), JSON.stringify(_headers2)); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + assert.strictEqual(nockRequestS3.pendingMocks().length, 0); + done(); + }) + }) + + }); + + }); + + describe('getMetadataTotalBytes', function() { + + it('should count headers metadata size', function(done) { + /** value + "name-1" */ + assert.strictEqual(storage.getMetadataTotalBytes({ + "x-amz-meta-name-1": 
"ehseedwosyevblphjjeqfwhfiuojgznptwtpogzyiqiakqpyfsehfoafciyzjuugmjmtvrwjfgfdhbiocoowyggqpzwmfcogmqvaebcfchaxwkllqspdxdisbaxqnbgexpzkllonbkjjmmtccbosocwjgatjeokarbklcagejpzyypjbrinqzqdbxjgeswyhcmgiuifnwrgqhkpthtfuehseedwosyevblphjjeqfwhfiuojgznptwtpogzyiqiakqpyfsehfoafciyzjuugmjmtvrwjfgfdhbiocoowyggqpzwmfcogmqvaebcfchaxwkllqspdxdisbaxqnbgexpzkllonbkjjmmtccbosocwjgatjeokarbklcagejpzyypjbrinqzqdbxjgeswyhcmgiuifnwrgqhkpthtfuehseedwosyevblphjjeqfwhfiuojgznptwtpogzyiqiakqpyfsehfoafciyzjuugmjmtvrwjfgfdhbiocoowyggqpzwmfcogmqvaebcfchaxwkllqspdxdisbaxqnbgexpzkllonbkjjmmtccbosocwjgatjeokarbklcagejpzyypjbrinqzqdbxjgeswyhcmgiuifnwrgqhkpthtfu" + }), 642) + /** values + "name-1" + "name-2" and ignore "x-amz-metadata" */ + assert.strictEqual(storage.getMetadataTotalBytes({ + "x-amz-metadata": "should not count", + "x-amz-meta-name-1": "123456", + "x-amz-meta-name-2": "789", + }), 21) + /** should ignore if none start with x-amz-meta- */ + assert.strictEqual(storage.getMetadataTotalBytes({ + "x-amz-metadata": "should not count", + "name-1": "123456", + "name-2": "789", + }), 0) + done(); + }) + }) + + describe('setLogFunction', function () { + it('should overload the log function', function (done) { + let i = 0; + + storage.setLogFunction(function (message, level) { + assert.strictEqual(message.length > 0, true) + assert.strictEqual(level.length > 0, true) + i++; + }) + + const nockRequestS1 = nock(url1S3) + .intercept("/bucket", "HEAD") + .reply(500, ""); + const nockRequestS2 = nock(url2S3) + .intercept("/bucket", "HEAD") + .reply(200, ""); + + storage.headBucket('bucket', (err) => { + assert.strictEqual(err, null); + assert.strictEqual(i > 0, true); + assert.strictEqual(nockRequestS1.pendingMocks().length, 0); + assert.strictEqual(nockRequestS2.pendingMocks().length, 0); + done(); + }); + }); + }); + +}); \ No newline at end of file diff --git a/tests/storage.test.js b/tests/swift.test.js similarity index 98% rename from tests/storage.test.js rename to tests/swift.test.js index 879c789..6f3ae67 100644 --- a/tests/storage.test.js +++ b/tests/swift.test.js @@ -1,4 +1,4 @@ -const storageSDK = require('../index.js'); +const storageSDK = require('../swift.js'); const nock = require('nock'); const assert = require('assert'); const fs = require('fs'); @@ -112,7 +112,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { storage.connection((err) => { assert.strictEqual(err, null); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.deepStrictEqual(storage.getConfig().token, tokenAuth); assert.deepStrictEqual(storage.getConfig().endpoints.url, connectionResultSuccessV3.token.catalog[9].endpoints[4].url); assert.strictEqual(firstNock.pendingMocks().length, 0); @@ -144,7 +144,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { storage.connection((err) => { assert.strictEqual(err, null); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.deepStrictEqual(storage.getConfig().token, tokenAuth); assert.deepStrictEqual(storage.getConfig().endpoints.url, connectionResultSuccessV3.token.catalog[9].endpoints[4].url); assert.strictEqual(firstNock.pendingMocks().length, 0); @@ -179,7 +179,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { storage.connection((err) => { assert.strictEqual(err, null); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + 
assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.deepStrictEqual(storage.getConfig().token, tokenAuth); assert.deepStrictEqual(storage.getConfig().endpoints.url, connectionResultSuccessV3.token.catalog[9].endpoints[4].url); assert.strictEqual(firstNock.pendingMocks().length, 0); @@ -211,7 +211,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { storage.connection((err) => { assert.strictEqual(err, null); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.deepStrictEqual(storage.getConfig().token, tokenAuth); assert.deepStrictEqual(storage.getConfig().endpoints.url, connectionResultSuccessV3.token.catalog[9].endpoints[4].url); assert.strictEqual(firstNock.pendingMocks().length, 0); @@ -243,7 +243,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { storage.connection((err) => { assert.strictEqual(err, null); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.deepStrictEqual(storage.getConfig().token, tokenAuth); assert.deepStrictEqual(storage.getConfig().endpoints.url, connectionResultSuccessV3.token.catalog[9].endpoints[4].url); assert.strictEqual(firstNock.pendingMocks().length, 0); @@ -280,7 +280,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(err.message, 'Object Storages are not available'); storage.connection((err) => { assert.strictEqual(err, null); - assert.deepStrictEqual(storage.getConfig().actifStorage, 0); + assert.deepStrictEqual(storage.getConfig().activeStorage, 0); assert.deepStrictEqual(storage.getConfig().token, tokenAuth); assert.deepStrictEqual(storage.getConfig().endpoints.url, connectionResultSuccessV3.token.catalog[9].endpoints[20].url); assert.strictEqual(firstNock.pendingMocks().length, 0); @@ -489,7 +489,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(err.message, 'Object Storages are not available'); assert.strictEqual(body, undefined); assert.strictEqual(firstNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 0); + assert.deepStrictEqual(storage.getConfig().activeStorage, 0); done(); }); }); @@ -573,7 +573,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); }); @@ -608,7 +608,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); }); @@ -647,7 +647,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); 
}); @@ -677,7 +677,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); }); @@ -747,7 +747,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(_listFiles2[0].name.length > 0, true) assert.strictEqual(_listFiles2[0].content_type.length > 0, true) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -839,7 +839,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(_listFiles4[0].name.length > 0, true) assert.strictEqual(_listFiles4[0].content_type.length > 0, true) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -899,7 +899,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(_listFiles2[0].name.length > 0, true) assert.strictEqual(_listFiles2[0].content_type.length > 0, true) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -957,7 +957,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(_listFiles2[0].name.length > 0, true) assert.strictEqual(_listFiles2[0].content_type.length > 0, true) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -1019,7 +1019,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(_listFiles2[0].name.length > 0, true) assert.strictEqual(_listFiles2[0].content_type.length > 0, true) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -1212,7 +1212,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); @@ -1256,7 +1256,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); 
assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); }) @@ -1298,7 +1298,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(headers['date'].length > 0, true); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(thirdNock.pendingMocks().length, 0); done(); }); @@ -1340,7 +1340,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); }); @@ -1414,7 +1414,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }).catch(err => { assert.strictEqual(err, null); @@ -1475,7 +1475,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1].headers['x-openstack-request-id'].length > 0, true); assert.strictEqual(results[1].headers['content-length'].length > 0, true); assert.strictEqual(results[1].headers['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -1537,7 +1537,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1].headers['x-openstack-request-id'].length > 0, true); assert.strictEqual(results[1].headers['content-length'].length > 0, true); assert.strictEqual(results[1].headers['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -1598,7 +1598,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1].headers['x-openstack-request-id'].length > 0, true); assert.strictEqual(results[1].headers['content-length'].length > 0, true); assert.strictEqual(results[1].headers['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -1942,7 +1942,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results.length, 
2); assert.strictEqual(results[0], undefined) assert.strictEqual(results[1], undefined) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -2014,7 +2014,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { let _result3 = await uploadFilePromise(); assert.strictEqual(_result3, undefined); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -2062,7 +2062,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[0], undefined) assert.strictEqual(results[1], undefined) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -2109,7 +2109,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[0], undefined) assert.strictEqual(results[1], undefined) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -2384,7 +2384,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results.length, 2); assert.strictEqual(results[0], undefined) assert.strictEqual(results[1], undefined) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -2443,7 +2443,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { let _result3 = await deleteFilePromise(); assert.strictEqual(_result3, undefined); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -2484,7 +2484,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[0], undefined) assert.strictEqual(results[1], undefined) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -2523,7 +2523,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[0], undefined) assert.strictEqual(results[1], undefined) - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + 
assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -2713,7 +2713,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); @@ -2756,7 +2756,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); }) @@ -2797,7 +2797,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(headers['date'].length > 0, true); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(thirdNock.pendingMocks().length, 0); done(); }); @@ -2838,7 +2838,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }); }); @@ -2910,7 +2910,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); done(); }).catch(err => { assert.strictEqual(err, null); @@ -2969,7 +2969,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1]['x-openstack-request-id'].length > 0, true); assert.strictEqual(results[1]['content-length'].length > 0, true); assert.strictEqual(results[1]['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -3029,7 +3029,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1]['x-openstack-request-id'].length > 0, true); assert.strictEqual(results[1]['content-length'].length > 0, true); assert.strictEqual(results[1]['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ 
-3088,7 +3088,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1]['x-openstack-request-id'].length > 0, true); assert.strictEqual(results[1]['content-length'].length > 0, true); assert.strictEqual(results[1]['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -3483,7 +3483,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1]['x-openstack-request-id'].length > 0, true); assert.strictEqual(results[1]['content-length'] === '0', true); assert.strictEqual(results[1]['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -3567,7 +3567,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(_result3['content-length'] === '0', true); assert.strictEqual(_result3['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -3626,7 +3626,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1]['content-length'] === '0', true); assert.strictEqual(results[1]['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -3684,7 +3684,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1]['content-length'] === '0', true); assert.strictEqual(results[1]['date'].length > 0, true); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -4133,7 +4133,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1]['headers']['content-length'] === '0', true); assert.strictEqual(results[1]['headers']['date'].length > 0, true); assert.strictEqual(results[1]['body'].toString(), 'OK'); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -4221,7 +4221,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(_result3['headers']['date'].length > 0, true); assert.strictEqual(_result3['body'].toString(), 'OK'); - 
assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -4282,7 +4282,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1]['headers']['date'].length > 0, true); assert.strictEqual(results[1]['body'].toString(), 'OK'); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -4342,7 +4342,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results[1]['headers']['date'].length > 0, true); assert.strictEqual(results[1]['body'].toString(), 'OK'); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -4665,7 +4665,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results.length, 2); assert.strictEqual(results[0], 'The platypus, sometimes referred to as the duck-billed platypus, is a semiaquatic, egg-laying mammal endemic to eastern Australia.'); assert.strictEqual(results[1], 'The platypus, sometimes referred to as the duck-billed platypus, is a semiaquatic, egg-laying mammal endemic to eastern Australia.'); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -4733,7 +4733,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { let _result3 = await copyRequestPromise(); assert.strictEqual(_result3, 'The platypus, sometimes referred to as the duck-billed platypus, is a semiaquatic, egg-laying mammal endemic to eastern Australia.'); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -4777,7 +4777,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { assert.strictEqual(results.length, 2); assert.strictEqual(results[0], 'The platypus, sometimes referred to as the duck-billed platypus, is a semiaquatic, egg-laying mammal endemic to eastern Australia.'); assert.strictEqual(results[1], 'The platypus, sometimes referred to as the duck-billed platypus, is a semiaquatic, egg-laying mammal endemic to eastern Australia.'); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); @@ -4821,7 +4821,7 @@ describe('Ovh Object Storage High Availability Node Client', function () { 
assert.strictEqual(results[0], 'The platypus, sometimes referred to as the duck-billed platypus, is a semiaquatic, egg-laying mammal endemic to eastern Australia.'); assert.strictEqual(results[1], 'The platypus, sometimes referred to as the duck-billed platypus, is a semiaquatic, egg-laying mammal endemic to eastern Australia.'); - assert.deepStrictEqual(storage.getConfig().actifStorage, 1); + assert.deepStrictEqual(storage.getConfig().activeStorage, 1); assert.strictEqual(firstNock.pendingMocks().length, 0); assert.strictEqual(secondNock.pendingMocks().length, 0); assert.strictEqual(thirdNock.pendingMocks().length, 0); diff --git a/tests/xmlToJson.test.js b/tests/xmlToJson.test.js new file mode 100644 index 0000000..318a4b3 --- /dev/null +++ b/tests/xmlToJson.test.js @@ -0,0 +1,277 @@ +const xmlToJson = require('../xmlToJson.js') +const assert = require('assert') + +const _assert = (actual, expected) => { + assert.strictEqual(JSON.stringify(actual), JSON.stringify(expected)) +} + +describe('xmlToJson', function () { + it('should not crash with empty/null/undefiend/weird', function () { + _assert(xmlToJson(''), {}) + _assert(xmlToJson(null), {}) + _assert(xmlToJson(undefined), {}) + _assert(xmlToJson({}), {}) + _assert(xmlToJson([]), {}) + _assert(xmlToJson('<</>/><a/>'), {}) + }) + + it('should return simple object', function () { + const _json = xmlToJson('EricBlue') + + const _expected = { + name: 'Eric', + color: 'Blue', + } + + _assert(_json, _expected) + }) + + it('should return simple object, and overwrite an existing field', function () { + const _json = xmlToJson('BlueRed') + + const _expected = { + color: 'Red', + } + + _assert(_json, _expected) + }) + + it('should return simple nested object', function () { + const _json = xmlToJson( + 'EricBlue', + ) + + const _expected = { + bucket: { + name: 'Eric', + color: 'Blue', + }, + } + + _assert(_json, _expected) + }) + + it('should return simple nested object', function () { + const _json = xmlToJson( + 'BlueJohn', + ) + + const _expected = { + bucket: { + color: 'Blue', + }, + name: 'John', + } + + _assert(_json, _expected) + }) + + it('should parse a simple nested object', function () { + let _xml = + 'templatestemplate.odt' + const _json = xmlToJson(_xml) + + const _expected = { + name: 'templates', + contents: { + key: 'template.odt', + }, + } + _assert(_json, _expected) + }) + + it('should parse a simple nested object', function () { + let _xml = + 'templates11000falsetemplate.odt2023-03-02T07:18:55.000Z"fde6d729123cee4db6bfa3606306bc8c"11822STANDARD' + const _json = xmlToJson(_xml) + + const _expected = { + name: 'templates', + keycount: 1, + maxkeys: 1000, + istruncated: false, + contents: { + key: 'template.odt', + lastmodified: '2023-03-02T07:18:55.000Z', + etag: 'fde6d729123cee4db6bfa3606306bc8c', + size: 11822, + storageclass: 'STANDARD', + }, + } + _assert(_json, _expected) + }) + + it('should parse a simple nested object, if the name is the same, it should be overwrited', function () { + let _xml = 'BlueGreen' + const _json = xmlToJson(_xml) + + const _expected = { + colors: { + name: 'Green', + }, + } + _assert(_json, _expected) + }) + + it('should return simple nested object as Array', function () { + const _json = xmlToJson( + 'EricBlueJohnGreen', + ) + + const _expected = { + bucket: [ + { + name: 'Eric', + color: 'Blue', + }, + { + name: 'John', + color: 'Green', + }, + ], + } + + _assert(_json, _expected) + }) + + it('should parse a the response of "ListObjects V2"', function () { + let _xml = + 
+      '<Name>templates</Name><KeyCount>1</KeyCount><MaxKeys>1000</MaxKeys><IsTruncated>false</IsTruncated><Contents><Key>template.odt</Key><LastModified>2023-03-02T07:18:55.000Z</LastModified><ETag>"fde6d729123cee4db6bfa3606306bc8c"</ETag><Size>11822</Size><StorageClass>STANDARD</StorageClass></Contents><Contents><Key>template.docx</Key><LastModified>2024-03-02T07:18:55.000Z</LastModified><ETag>"fde6d729123cee4db6bfa1111306b222"</ETag><Size>85000</Size><StorageClass>STANDARD</StorageClass></Contents>'
+    const _json = xmlToJson(_xml)
+
+    const _expected = {
+      name: 'templates',
+      keycount: 1,
+      maxkeys: 1000,
+      istruncated: false,
+      contents: [
+        {
+          key: 'template.odt',
+          lastmodified: '2023-03-02T07:18:55.000Z',
+          etag: 'fde6d729123cee4db6bfa3606306bc8c',
+          size: 11822,
+          storageclass: 'STANDARD',
+        },
+        {
+          key: 'template.docx',
+          lastmodified: '2024-03-02T07:18:55.000Z',
+          etag: 'fde6d729123cee4db6bfa1111306b222',
+          size: 85000,
+          storageclass: 'STANDARD',
+        },
+      ],
+    }
+    _assert(_json, _expected)
+  })
+
+  it('should parse the response of "ListObjects V2" and return an Array for "contents" even if the Content list has one element.', function () {
+    let _xml =
+      '<Name>templates</Name><KeyCount>1</KeyCount><MaxKeys>1000</MaxKeys><IsTruncated>true</IsTruncated><Contents><Key>template.odt</Key><LastModified>2023-03-02T07:18:55.000Z</LastModified><ETag>"fde6d729123cee4db6bfa3606306bc8c"</ETag><Size>11822</Size><StorageClass>STANDARD</StorageClass></Contents>'
+    const _json = xmlToJson(_xml, { forceArray: ['contents'] })
+
+    const _expected = {
+      name: 'templates',
+      keycount: 1,
+      maxkeys: 1000,
+      istruncated: true,
+      contents: [
+        {
+          key: 'template.odt',
+          lastmodified: '2023-03-02T07:18:55.000Z',
+          etag: 'fde6d729123cee4db6bfa3606306bc8c',
+          size: 11822,
+          storageclass: 'STANDARD',
+        }
+      ],
+    }
+    _assert(_json, _expected)
+  })
+
+  it('should parse a simple nested object and force an element to be an Array (options: forceArray)', function () {
+    let _xml =
+      '<Name>templates</Name><KeyCount>1</KeyCount><MaxKeys>1000</MaxKeys><IsTruncated>false</IsTruncated><Contents><Key>template.odt</Key><LastModified>2023-03-02T07:18:55.000Z</LastModified><ETag>"fde6d729123cee4db6bfa3606306bc8c"</ETag><Size>11822</Size><StorageClass>STANDARD</StorageClass></Contents>'
+    const _json = xmlToJson(_xml, { forceArray: ['contents'] })

+    const _expected = {
+      name: 'templates',
+      keycount: 1,
+      maxkeys: 1000,
+      istruncated: false,
+      contents: [
+        {
+          key: 'template.odt',
+          lastmodified: '2023-03-02T07:18:55.000Z',
+          etag: 'fde6d729123cee4db6bfa3606306bc8c',
+          size: 11822,
+          storageclass: 'STANDARD',
+        },
+      ],
+    }
+    _assert(_json, _expected)
+  })
+
+  it('should parse the response of "ListObjects V2" and skip the 2 depth object', function () {
+    const _xml =
+      '<?xml version="1.0" encoding="UTF-8"?>' +
+      '<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">' +
+      '<Name>bucket</Name>' +
+      '<Prefix/>' +
+      '<KeyCount>205</KeyCount>' +
+      '<MaxKeys>1000</MaxKeys>' +
+      '<IsTruncated>false</IsTruncated>' +
+      '<Contents>' +
+      '<Key>my-image.jpg</Key>' +
+      '<LastModified>2009-10-12T17:50:30.000Z</LastModified>' +
+      '<ETag>"fba9dede5f27731c9771645a39863328"</ETag>' +
+      '<Size>434234</Size>' +
+      '<StorageClass>STANDARD</StorageClass>' +
+      '</Contents>' +
+      '<Contents>' +
+      '<Key>my-image2.jpg</Key>' +
+      '<LastModified>2009-10-12T17:50:30.000Z</LastModified>' +
+      '<ETag>"fba9dede5f27731c9771645a39863328"</ETag>' +
+      '<Size>434234</Size>' +
+      '<StorageClass>STANDARD</StorageClass>' +
+      '</Contents>' +
+      '</ListBucketResult>'
+
+    const _json = xmlToJson(_xml)
+
+    const _expected = {
+      listbucketresult: {
+        name: 'bucket',
+        keycount: 205,
+        maxkeys: 1000,
+        istruncated: false,
+        contents:
+          '<Key>my-image2.jpg</Key><LastModified>2009-10-12T17:50:30.000Z</LastModified><ETag>"fba9dede5f27731c9771645a39863328"</ETag><Size>434234</Size><StorageClass>STANDARD</StorageClass>',
+      },
+    }
+
+    _assert(_json, _expected)
+  })
+
+  it('should return the Error response from S3 as JSON', function () {
+    const _xml =
+      '<?xml version="1.0" encoding="UTF-8"?>' +
+      '<Error>' +
+      '<Code>NoSuchKey</Code>' +
+      '<Message>The resource you requested does not exist</Message>' +
+      '<Resource>/mybucket/myfoto.jpg</Resource>' +
+      '<RequestId>4442587FB7D0A2F9</RequestId>' +
+      '</Error>'
+
+    const _json = xmlToJson(_xml)
+
+    const _expected = {
+      error: {
+        code: 'NoSuchKey',
+        message: 'The resource you requested does not exist',
+        resource: '/mybucket/myfoto.jpg',
+        requestid: '4442587FB7D0A2F9',
+      }
+    }
+
+    _assert(_json, _expected)
+  })
+})
diff --git a/xmltoJson.js b/xmltoJson.js
new file mode 100644
index 0000000..04ea527
--- /dev/null
+++ b/xmltoJson.js
@@ -0,0 +1,106 @@
+/**
+ * Convert XML to JSON. Supports only 1 object depth, such as: "{ child: [], child2: {} }".
+ *
+ * @param {String} xml
+ * @param {Object} options (Optional) accepts "forceArray", a list of tag names whose children must always be returned as lists
+ * @returns {Object} XML as JSON
+ */
+function xmlToJson (xml, options) {
+
+  options = options ?? {};
+
+  /** JSON variables */
+  let root = {};
+  let child = null;
+  let childName = null;
+  let _previousTag = '';
+  let _previousTagFull = '';
+  let _skipObject = null;
+  /** Regex variables */
+  const _xmlTagRegExp = /<([^>]+?)>/g;
+  let _previousLastIndex = 0;
+  let _tagParsed = [];
+  /** Loop through all XML tags */
+  try {
+    while ((_tagParsed = _xmlTagRegExp.exec(xml))) {
+      const _tagStr = _tagParsed[1];
+      const _tagAttributeIndex = _tagStr.indexOf(' '); /** remove attributes from XML tags */
+      const _tagFull = _tagStr.slice(0, _tagAttributeIndex > 0 ? _tagAttributeIndex : _tagStr.length);
+      const _tag = _tagFull.replace('/', '').toLowerCase();
+
+      /** End of skipped elements */
+      if (_skipObject === _tag && _tagFull[0] === '/') {
+        _skipObject = null;
+      }
+
+      if (_tagFull === '?xml' || _tagFull?.[_tagFull.length - 1] === '/' || _skipObject !== null) {
+        continue;
+      }
+
+      /** Create a new child {}/[] if two opening tags are different, such as: <bucket><name>value */
+      if (_tag !== _previousTag && (child === null && _previousTag !== '' && _tagFull[0] !== '/' && _previousTagFull[0] !== '/')) {
+        child = options?.forceArray?.includes(_previousTag) === true ? [{}] : {};
+        childName = _previousTag;
+      } /** If a child already exists and the two tags are equal, the existing element is retrieved from the JSON and transformed into a LIST */
+      else if (_tag === _previousTag && _tagFull[0] !== '/' && _previousTagFull[0] === '/' && child === null && (root[_tag]?.constructor === Object || root[_tag]?.constructor === Array)) {
+        child = root[_tag]?.constructor === Object ? [root[_tag]] : root[_tag];
+        childName = _tag;
+      } /** Skip objects of 2 depth */
+      else if (_tag !== _previousTag && child !== null && childName !== _previousTag && _tagFull[0] !== '/' && _previousTagFull[0] !== '/') {
+        _skipObject = _previousTag;
+        continue;
+      }
+
+      /** When we reach the end of a list of child tags (two closing tags in a row, e.g. </name></bucket>), the child is assigned to the root object */
+      if (_tagFull[0] === '/' && _previousTagFull[0] === '/' && child) {
+        root[childName] = child?.constructor === Array ? [ ...child ] : { ...child };
+        child = null;
+        childName = null;
+      } /** When we reach the end of a value tag such as <color>red</color>, the value is assigned to the child or root object */
+      else if (_tagFull[0] === '/') {
+        const _value = getValue(xml.slice(_previousLastIndex, _tagParsed.index));
+        if (child) {
+          if (child?.constructor === Array) {
+            /** Tag already exists, we must create a new element in the list */
+            if (child[child.length - 1]?.[_tag]) {
+              child.push({});
+            }
+            child[child.length - 1][_tag] = _value;
+          } else {
+            child[_tag] = _value;
+          }
+        } else {
+          root[_tag] = _value;
+        }
+      }
+
+      _previousTag = _tag;
+      _previousTagFull = _tagFull;
+      _previousLastIndex = _xmlTagRegExp.lastIndex;
+    }
+  } catch (_) {
+    return root;
+  }
+  return root;
+}
+
+/**
+ * Convert a string value to its corresponding type (Number, Boolean or String)
+ * @param {String} str value
+ * @returns {String|Boolean|Number}
+ */
+const getValue = (str) => {
+  if (!isNaN(str) && !isNaN(parseFloat(str))) {
+    return parseFloat(str);
+  } else if (str.toLowerCase() === "true") {
+    return true;
+  } else if (str.toLowerCase() === "false") {
+    return false;
+  }
+  /** S3 Storage returns the "MD5" hash wrapped with double quotes, must be removed. */
+  if (str[0] === '"' && str?.[str.length - 1] === '"') {
+    return str.slice(1, str.length - 1);
+  }
+  return str;
+}
+
+module.exports = xmlToJson;
\ No newline at end of file
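
For context, here is a minimal usage sketch of the `xmlToJson` helper added by this diff, based on the behaviour covered by the tests above (lowercased keys, number/boolean conversion, and the `forceArray` option). The require path is an assumption for illustration only; adjust it to wherever the module actually lives in the repository.

```js
// Hypothetical standalone usage of the xmlToJson helper (assumed path, adjust as needed).
const xmlToJson = require('./xmlToJson.js');

// A trimmed "ListObjectsV2"-style body without a wrapping root element:
// tag names are lowercased, numbers and booleans are converted to native types.
const _xml =
  '<Name>bucket</Name>' +
  '<KeyCount>1</KeyCount>' +
  '<IsTruncated>false</IsTruncated>' +
  '<Contents>' +
  '<Key>template.odt</Key>' +
  '<Size>11822</Size>' +
  '</Contents>';

// Without options a single <Contents> element comes back as a plain object;
// forceArray guarantees a list even when only one element is present.
const _result = xmlToJson(_xml, { forceArray: ['contents'] });

console.log(_result.keycount);         // 1 (Number)
console.log(_result.istruncated);      // false (Boolean)
console.log(_result.contents[0].key);  // 'template.odt'
```

Note that if the body keeps its real root element (for example `<ListBucketResult>` around the whole listing), the nested `<Contents>` blocks sit at depth 2 and are skipped rather than parsed, which is exactly what the "skip the 2 depth object" test checks.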