diff --git a/CHANGELOG.md b/CHANGELOG.md index 24204f6..f835be7 100755 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -5,6 +5,20 @@ All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). +## [2.6.0] - 2024-01-18 + +### Added + +- Implemented server-side encryption options for writing objects into the Amazon S3 destination bucket: 'AES256' for AES256 encryption, 'AWS_KMS' for AWS Key Management Service encryption, and 'None' for no encryption. #124 +- Added an optional Amazon S3 bucket to hold the prefix list file. #125, #97 + +### Changed + +- Expanded Finder memory options to include increased capacities of 316 GB and 512 GB. +- Added automatic deletion of the KMS key after the solution pipeline status changes to Stopped. #135 +- Added automatic restart of the DTH-CLI on the Finder instance after an external reboot. +- Added documentation on how to deploy the S3/ECR transfer task using CloudFormation. #128 + ## [2.5.0] - 2023-09-15 ### Added diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 8855141..7c866fa 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -11,7 +11,7 @@ information to effectively respond to your bug report or contribution. We welcome you to use the GitHub issue tracker to report bugs or suggest features. -When filing an issue, please check [existing open](https://github.com/awslabs/data-transfer-hub/issues), or [recently closed](https://github.com/awslabs/data-transfer-hub/issues?q=is%3Aissue+is%3Aclosed), issues to make sure somebody else hasn't already +When filing an issue, please check [existing open](https://github.com/awslabs/data-transfer-hub/issues), or [recently closed](https://github.com/awslabs/data-transfer-hub/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already
Please try to include as much information as you can. Details like these are incredibly useful: * A reproducible test case or series of steps diff --git a/docs/USING_PREFIX_LIST.md b/docs/USING_PREFIX_LIST.md index 41b9237..57e18d3 100644 --- a/docs/USING_PREFIX_LIST.md +++ b/docs/USING_PREFIX_LIST.md @@ -9,15 +9,25 @@ Please write the list of prefixes into a Plain Text format file, with one prefix For example: ![Prefix List File](images/prefix_list_file.png) -## Step 2: Upload the Prefix List File to the source data bucket +## Step 2: Uploading the Prefix List File to Your Bucket +> **Note**: Ensure you enter the precise path of the Prefix List File when specifying its location in Step 3. -You can put the prefix list file in anywhere in your source bucket. -> Note: Please remember to write its actual path when filling in the location of the Prefix List File in the Step 3. +### Option 1: Uploading the Prefix List File to Your Source Bucket +You can store the prefix list file anywhere within your source bucket. ![prefix_list_file_in_s3](images/prefix_list_file_in_s3.png) -## Step 3: Config the Cloudformation Stack template +### Option 2: Uploading the Prefix List File to a Third Bucket within the Same Region and Account as the Data Transfer Hub -Write the path of the Prefix List File into the input box. +You have the flexibility to place the prefix list file in any location within a third bucket. It is essential that this third bucket shares the same region and account as the Data Transfer Hub. +![prefix_list_file_in_third_s3](images/prefix_list_third_s3.png) -![cloudformaiton](images/cloudformation_prefix_list.png) \ No newline at end of file +For those using the Data Transfer Hub portal, simply click the provided link to navigate directly to the third bucket. +![prefix_list_file_from_portal](images/prefix_list_portal.png) + +## Step 3: Configuring the CloudFormation Stack Template + +Enter the path of the Prefix List File in the provided input field. 
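As a minimal sketch of what the file and its path look like (the prefixes, file name, and folder below are hypothetical, and Python is used purely for illustration), the prefix list is a plain-text file with one prefix per line, and the value entered in this step is the file's full key in the bucket:

```python
from pathlib import Path

# Hypothetical prefixes; one prefix per line, nothing else on the line.
prefixes = ["project-a/logs/", "project-b/images/", "shared/backup/"]

path = Path("prefix_list.txt")
path.write_text("\n".join(prefixes) + "\n")

# If the file is uploaded under a "config/" folder in the bucket, the value
# to enter in this step is the full key, e.g. "config/prefix_list.txt".
print(path.read_text(), end="")
```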
+If your Prefix List File is located in the Source Bucket, leave the `Bucket Name for Source Prefix List File` parameter blank. + +![cloudformation](images/cloudformation_prefix_list.png) diff --git a/docs/USING_PREFIX_LIST_CN.md b/docs/USING_PREFIX_LIST_CN.md index 3d370a4..ec705f4 100644 --- a/docs/USING_PREFIX_LIST_CN.md +++ b/docs/USING_PREFIX_LIST_CN.md @@ -1,23 +1,33 @@ [English](./USING_PREFIX_LIST_EN.md) -# 使用前缀列表完成多个指定前缀中数据的传输 +# 使用前缀列表文件过滤数据传输任务 -## Step 1: 创建前缀列表 +## 第1步:创建前缀列表文件 -请将前缀列表写入纯文本格式文件,每行一个前缀。 +请将前缀列表以纯文本格式写入文件,每行一个前缀。 -示例如下: -![Prefix List File](images/prefix_list_file.png) +例如: +![前缀列表文件](images/prefix_list_file.png) -## Step 2: 上传前缀列表文件到源数据桶 +## 第2步:将前缀列表文件上传到您的存储桶 +> **注意**:在第3步指定位置时,请确保输入前缀列表文件的精确路径。 -您可以将前缀列表文件放在源存储桶中的任何位置。 -> 注意: 请记住在步骤3填写Prefix List File的位置时填入它的实际路径。 +### 选项1:将前缀列表文件上传到您的源存储桶 +您可以在源存储桶内的任何位置存储前缀列表文件。 ![prefix_list_file_in_s3](images/prefix_list_file_in_s3.png) -## Step 3: 配置 Cloudformation 的堆栈模板 +### 选项2:将前缀列表文件上传到与数据传输中心同一区域和账户的第三个存储桶 -将Prefix List File的路径写入堆栈模板的指定参数中。 +您可以在第三个存储桶的任何位置放置前缀列表文件。重要的是,这个第三个存储桶必须与Data Transfer Hub处于同一区域和账户。 +![prefix_list_file_in_third_s3](images/prefix_list_third_s3.png) -![cloudformaiton](images/cloudformation_prefix_list.png) \ No newline at end of file +对于使用 Data Transfer Hub 控制台的用户,只需点击提供的链接即可直接导航至第三个存储桶。 +![prefix_list_file_from_portal](images/prefix_list_portal.png) + +## 第3步:配置CloudFormation堆栈模板 + +在提供的输入框中输入前缀列表文件的路径。 +如果您的前缀列表文件位于源存储桶中,请将`Bucket Name for Source Prefix List File`参数留空。 + +![cloudformation](images/cloudformation_prefix_list.png) diff --git a/docs/en-base/deployment/deployment.md b/docs/en-base/deployment/deployment.md index cad72ab..7856dac 100644 --- a/docs/en-base/deployment/deployment.md +++ b/docs/en-base/deployment/deployment.md @@ -58,7 +58,7 @@ In AWS Regions where Amazon Cognito is not yet available, you can use OIDC to pr 7. Save the `App ID` (that is, `client_id`) and `Issuer` to a text file from Endpoint Information, which will be used later. 
[![](../images/OIDC/endpoint-info.png)](../images/OIDC/endpoint-info.png) -8. Update the `Login Callback URL` and `Logout Callback URL` to your IPC recorded domain name. +8. Update the `Login Callback URL` and `Logout Callback URL` to your ICP recorded domain name. [![](../images/OIDC/authentication-configuration.png)](../images/OIDC/authentication-configuration.png) 9. Set the Authorization Configuration. diff --git a/docs/en-base/faq.md b/docs/en-base/faq.md index 590bdfd..3c34b61 100644 --- a/docs/en-base/faq.md +++ b/docs/en-base/faq.md @@ -35,7 +35,7 @@ Not supported currently. For this scenario, we recommend using Amazon S3's [Cros **7. Can I use AWS CLI to create a DTH S3 Transfer Task?**
-Yes. Please refer to the tutorial [Using AWS CLI to launch DTH S3 Transfer task](../user-guide/tutorial-cli-launch). +Yes. Please refer to the tutorial [Using AWS CLI to launch DTH S3 Transfer task](../user-guide/tutorial-s3#using-aws-cli). ## Performance diff --git a/docs/en-base/images/cluster_cn.png b/docs/en-base/images/cluster_cn.png new file mode 100644 index 0000000..bdcd59a Binary files /dev/null and b/docs/en-base/images/cluster_cn.png differ diff --git a/docs/en-base/images/cluster_en.png b/docs/en-base/images/cluster_en.png new file mode 100644 index 0000000..fdb1b13 Binary files /dev/null and b/docs/en-base/images/cluster_en.png differ diff --git a/docs/en-base/images/launch-stack copy.png b/docs/en-base/images/launch-stack copy.png new file mode 100644 index 0000000..2745adf Binary files /dev/null and b/docs/en-base/images/launch-stack copy.png differ diff --git a/docs/en-base/images/launch-stack.svg b/docs/en-base/images/launch-stack.svg new file mode 100644 index 0000000..eb7d015 --- /dev/null +++ b/docs/en-base/images/launch-stack.svg @@ -0,0 +1 @@ +Launch Stack \ No newline at end of file diff --git a/docs/en-base/images/secret_cn.png b/docs/en-base/images/secret_cn.png new file mode 100644 index 0000000..d1854c1 Binary files /dev/null and b/docs/en-base/images/secret_cn.png differ diff --git a/docs/en-base/images/secret_en.png b/docs/en-base/images/secret_en.png new file mode 100644 index 0000000..070f766 Binary files /dev/null and b/docs/en-base/images/secret_en.png differ diff --git a/docs/en-base/images/user.png b/docs/en-base/images/user.png new file mode 100644 index 0000000..e90a48c Binary files /dev/null and b/docs/en-base/images/user.png differ diff --git a/docs/en-base/index.md b/docs/en-base/index.md index 00d953d..6cc1d48 100644 --- a/docs/en-base/index.md +++ b/docs/en-base/index.md @@ -1,4 +1,4 @@ -The Data Transfer Hub solution provides secure, scalable, and trackable data transfer for Amazon Simple Storage Service (Amazon S3) 
objects and Amazon Elastic Container Registry (Amazon ECR) images. This data transfer helps customers expand their businesses globally by easily moving data in and out of Amazon Web Services (AWS) China Regions. +The Data Transfer Hub solution provides secure, scalable, and trackable data transfer for Amazon Simple Storage Service (Amazon S3) objects and Amazon Elastic Container Registry (Amazon ECR) images. This data transfer helps customers easily create and manage different types (Amazon S3 object and Amazon ECR image) of transfer tasks between AWS [partitions](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/partitions.html) (for example, aws, aws-cn, aws-us-gov), and from other cloud providers to AWS. This implementation guide provides an overview of the Data Transfer Hub solution, its reference architecture and components, considerations for planning the deployment, and configuration steps for deploying the Data Transfer Hub solution to the AWS Cloud. diff --git a/docs/en-base/plan-deployment/regions.md index ce21e11..ad8a976 100644 --- a/docs/en-base/plan-deployment/regions.md +++ b/docs/en-base/plan-deployment/regions.md @@ -13,12 +13,15 @@ This solution uses services which may not be currently available in all AWS Regi | Asia Pacific (Seoul) | ap-northeast-2| | Asia Pacific (Singapore) | ap-southeast-1| | Asia Pacific (Sydney) | ap-southeast-2| +| Asia Pacific (Melbourne) | ap-southeast-4| | Canada (Central) | ca-central-1| +| Canada West (Calgary) | ca-west-1| | Europe (Ireland) | eu-west-1| | Europe (London) | eu-west-2| | Europe (Stockholm) | eu-north-1| | Europe (Frankfurt) | eu-central-1| | South America (São Paulo) | sa-east-1| +| Israel (Tel Aviv) | il-central-1| ## Supported regions for deployment in AWS China Regions diff --git a/docs/en-base/revisions.md index d761e0a..2e482d9 100644 --- a/docs/en-base/revisions.md +++ b/docs/en-base/revisions.md @@ -1,9
+1,10 @@ | Date | Description| -|----------|--------| +|----------------|--------| | January 2021 | Initial release of version 1.0 | | July 2021 | Released version 2.0
1. Support general OIDC providers, including Authing, Auth0, okta, etc.
2. Support transferring objects from more Amazon S3 compatible storage services, such as Huawei Cloud OBS.
3. Support setting the access control list (ACL) of the target bucket object
4. Support deployment in account A, and copying data from account B to account C
5. Change to use Graviton 2 instance, and turn on BBR to transfer S3 objects to improve performance and save costs
6. Change to use Secrets Manager to maintain credential information | | December 2021 | Released version 2.1
1. Support custom prefix list to filter transfer tasks
2. Support configuration of single-run file transfer tasks
3. Support configuration of tasks through custom CRON Expression timetable
4. Support manual enabling or disabling of data comparison function | | July 2022 | Released version 2.2
1. Support transfer data through Direct Connect| | March 2023 | Released version 2.3
1. Support embedded dashboard and logs
2. Support S3 Access Key Rotation
3. Enhance One Time Transfer Task monitoring| | April 2023 | Released version 2.4
1. Support payer request S3 object transfer| -| September 2023 | Released version 2.5
1. Added support for transferring ECR assets without tags
2. Optimize stop task operation, add new filter condition to view all history tasks
3. Enhanced transfer performance by utilizing cluster capabilities through parallel multipart upload for large file transfers
4.Added automatic restart functionality for the Worker CLI
5.Enabled IMDSv2 by default for Auto Scaling Groups | \ No newline at end of file +| September 2023 | Released version 2.5
1. Added support for transferring ECR assets without tags
2. Optimized the stop task operation, and added a new filter condition to view all history tasks
3. Enhanced transfer performance by utilizing cluster capabilities through parallel multipart upload for large file transfers
4. Added automatic restart functionality for the Worker CLI
5. Enabled IMDSv2 by default for Auto Scaling Groups | +| January 2024 | Released version 2.6
1. Added support for Amazon S3 destination buckets encrypted with Amazon S3 managed keys
2. Added an optional Amazon S3 bucket to hold the prefix list file
3. Added automatic deletion of the KMS key after the solution pipeline status changes to Stopped
4. Added automatic restart of the DTH-CLI on the Finder instance after an external reboot
5. Increased Finder memory capacity options to 316 GB and 512 GB
6. Added three supported Regions: Asia Pacific (Melbourne), Canada West (Calgary), Israel (Tel Aviv) | \ No newline at end of file diff --git a/docs/en-base/solution-overview/features-and-benefits.md index 003f715..fbd58d3 100644 --- a/docs/en-base/solution-overview/features-and-benefits.md +++ b/docs/en-base/solution-overview/features-and-benefits.md @@ -1,10 +1,11 @@ -The solution’s web console provides an interface for managing the following tasks: +The solution supports the following key features: + +- **Inter-Partition and Cross-Cloud Data Transfer**: to promote seamless transfer capabilities in one place +- **Auto scaling**: to allow rapid response to changes in file transfer traffic +- **High performance of large file transfer (1TB)**: to leverage the strengths of clustering, parallel large file slicing, and automatic retries for robust file transfer +- **Monitoring**: to track data flow, diagnose issues, and ensure the overall health of the data transfer processes +- **Out-of-the-box deployment** -- Transferring Amazon S3 objects between AWS China Regions and AWS Regions -- Transferring data from other cloud providers’ object storage services (including Alibaba Cloud OSS, Tencent COS, and Qiniu Kodo) to Amazon S3 -- Transferring objects from Amazon S3 compatible object storage service to Amazon S3 -- Transferring Amazon ECR images between AWS China Regions and AWS Regions -- Transferring container images from public container registries (for example, Docker Hub, Google gcr.io, Red Hat Quay.io) to Amazon ECR !!! note "Note" diff --git a/docs/en-base/tutorial/IAM-Policy.md new file mode 100644 index 0000000..a7624a2 --- /dev/null +++ b/docs/en-base/tutorial/IAM-Policy.md @@ -0,0 +1,79 @@ + +# Set up credentials for Amazon S3 + +## Step 1: Create an IAM policy + +1. Open AWS Management Console. + +2. Choose IAM > Policy, and choose **Create Policy**. + +3.
Create a policy. You can follow the example below to use an IAM policy statement with minimum permissions, and change the `<your-bucket-name>` in the policy statement accordingly. + +!!! Note "Note" + For S3 buckets in AWS China Regions, make sure you also change to use `arn:aws-cn:s3:::` instead of `arn:aws:s3:::`. + +### Policy for source bucket + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "dth", + "Effect": "Allow", + "Action": [ + "s3:GetObject", + "s3:ListBucket" + ], + "Resource":[ + "arn:aws:s3:::<your-bucket-name>/*", + "arn:aws:s3:::<your-bucket-name>" + ] + } + ] +} +``` + + +### Policy for destination bucket + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "dth", + "Effect": "Allow", + "Action": [ + "s3:PutObject", + "s3:ListBucket", + "s3:PutObjectAcl", + "s3:AbortMultipartUpload", + "s3:ListBucketMultipartUploads", + "s3:ListMultipartUploadParts" + ], + "Resource": [ + "arn:aws:s3:::<your-bucket-name>/*", + "arn:aws:s3:::<your-bucket-name>" + ] + } + ] +} +``` + +To enable S3 Delete Event, you need to add the `"s3:DeleteObject"` permission to the policy. + +Data Transfer Hub natively supports S3 source buckets with SSE-S3 and SSE-KMS enabled. If your source bucket has *SSE-CMK* enabled, please replace the source bucket policy with the policy in the link [for S3 SSE-KMS](./S3-SSE-KMS-Policy.md). + +## Step 2: Create a user + +1. Open AWS Management Console. +1. Choose IAM > User, and choose **Add User** to follow the wizard to create a user with credentials. +1. Specify a user name, for example, *dth-user*. +1. For Access Type, select **Programmatic access** only and choose **Next: Permissions**. +1. Select **Attach existing policies directly**, search for and select the policy created in Step 1, and choose **Next: Tags**. +1. Add tags if needed, and choose **Next: Review**. +1. Review the user details, and choose **Create User**. +1. Make sure you copy and save the credentials, and then choose **Close**.
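The bucket-policy pattern above can also be generated programmatically. The helper below is purely illustrative (the `bucket_policy` function and bucket names are our own, not part of Data Transfer Hub); it renders the same minimum-permission statement for either partition:

```python
import json

def bucket_policy(bucket: str, actions: list[str], china: bool = False) -> str:
    """Render the minimum-permission bucket policy for one bucket.

    Hypothetical helper: pass china=True to emit aws-cn partition ARNs,
    as required for S3 buckets in AWS China Regions.
    """
    partition = "aws-cn" if china else "aws"
    arn = f"arn:{partition}:s3:::{bucket}"
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "dth",
                "Effect": "Allow",
                "Action": actions,
                # Both the objects (/*) and the bucket itself are needed.
                "Resource": [f"{arn}/*", arn],
            }
        ],
    }
    return json.dumps(policy, indent=4)

# Source bucket needs read-only access; for the destination bucket, pass the
# write actions instead (and add "s3:DeleteObject" to enable S3 Delete Event).
print(bucket_policy("my-src-bucket", ["s3:GetObject", "s3:ListBucket"]))
```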
+
+![Create User](../images/user.png)
diff --git a/docs/en-base/tutorial/S3-SSE-KMS-Policy.md new file mode 100644 index 0000000..c407fa1
--- /dev/null
+++ b/docs/en-base/tutorial/S3-SSE-KMS-Policy.md
@@ -0,0 +1,46 @@
+
+# Policy for S3 Source Bucket with SSE-CMK enabled
+
+Data Transfer Hub natively supports data sources using SSE-S3 and SSE-KMS. If your source bucket has *SSE-CMK* enabled, please replace the source bucket policy with the following policy, and change the `<your-bucket-name>` in the policy statement accordingly.
+
+Pay attention to the following:
+
+- Change the `Resource` in the KMS part to your own KMS key's Amazon Resource Name (ARN).
+
+- For S3 buckets in AWS China Regions, make sure to use `arn:aws-cn:s3:::` instead of `arn:aws:s3:::`.
+
+**For Source Bucket with SSE-CMK enabled**
+
+```
+{
+    "Version": "2012-10-17",
+    "Statement": [
+        {
+            "Sid": "dth",
+            "Effect": "Allow",
+            "Action": [
+                "s3:GetObject",
+                "s3:ListBucket"
+            ],
+            "Resource": [
+                "arn:aws:s3:::<your-bucket-name>/*",
+                "arn:aws:s3:::<your-bucket-name>"
+            ]
+        },
+        {
+            "Sid": "VisualEditor0",
+            "Effect": "Allow",
+            "Action": [
+                "kms:Decrypt",
+                "kms:Encrypt",
+                "kms:ReEncrypt*",
+                "kms:GenerateDataKey*",
+                "kms:DescribeKey"
+            ],
+            "Resource": [
+                "arn:aws:kms:us-west-2:111122223333:key/f5cd8cb7-476c-4322-ac9b-0c94a687700d"
+            ]
+        }
+    ]
+}
+```
\ No newline at end of file
diff --git a/docs/en-base/update.md index aff03a3..b09d140 100644
--- a/docs/en-base/update.md
+++ b/docs/en-base/update.md
@@ -6,8 +6,7 @@ Use the following steps to upgrade the solution on AWS console.
 
 * [Step 1. Update the CloudFormation Stack](#step-1-update-the-cloudformation-stack)
 * [Step 2. (Optional) Update the OIDC configuration](#oidc-update)
-* [Step 3. Create an invalidation on CloudFront](#step-3-create-an-invalidation-on-cloudfront)
-* [Step 4. Refresh the web console](#step-4-refresh-the-web-console)
+* [Step 3. Refresh the web console](#step-3-refresh-the-web-console)
 
 ## Step 1.
Update the CloudFormation stack
diff --git a/docs/en-base/user-guide/tutorial-directconnect.md index a7dc149..a3e7b18 100644
--- a/docs/en-base/user-guide/tutorial-directconnect.md
+++ b/docs/en-base/user-guide/tutorial-directconnect.md
@@ -48,7 +48,7 @@ In this scenario, DTH is deployed in the **destination side** and within a VPC w
 
 13. Choose **Create Task**.
 
 ## Use DTH to transfer data via DX in an isolated network
-In this scenario, DTH is deployed in the **destination side** and within a VPC **without public access** (isolated VPC), and the source bucket is also in an isolated network. For details, refer to the [tutorial][https://github.com/awslabs/data-transfer-hub/blob/main/docs/tutorial-directconnect-isolated.md].
+In this scenario, DTH is deployed in the **destination side** and within a VPC **without public access** (isolated VPC), and the source bucket is also in an isolated network. For details, refer to the [tutorial](https://github.com/awslabs/data-transfer-hub/blob/main/docs/tutorial-directconnect-isolated.md).
 
 [![architecture]][architecture]
diff --git a/docs/en-base/user-guide/tutorial-ecr.md index a44258d..e722194 100644
--- a/docs/en-base/user-guide/tutorial-ecr.md
+++ b/docs/en-base/user-guide/tutorial-ecr.md
@@ -1,3 +1,12 @@
+The solution allows you to create an Amazon ECR transfer task in the following ways:
+
+- [using the web console](#using-the-web-console)
+- [using the ECR plugin](#using-the-ecr-plugin)
+- [using AWS CLI](#using-aws-cli)
+
+For a comparison between those options, refer to [Create Amazon S3 transfer task](./tutorial-s3.md).
+
+## Using the web console
 You can use the web console to create an Amazon ECR transfer task. For more information about how to launch the web console, see [deployment](../../deployment/deployment-overview).
 
 1. From the **Create Transfer Task** page, select **Start a New Task**, and then select **Next**.
@@ -28,4 +37,135 @@ You can use the web console to create an Amazon ECR transfer task. For more info
 
 1. Choose **Create Task**.
 
-After the task is created successfully, it will appear on the **Tasks** page.
\ No newline at end of file
+After the task is created successfully, it will appear on the **Tasks** page.
+
+
+## Using the ECR plugin
+
+!!! note "Note"
+
+    This tutorial is a deployment guide for the backend-only version. For more details, please refer to [ECR Plugin Introduction](https://github.com/awslabs/data-transfer-hub/blob/main/docs/ECR_PLUGIN.md).
+
+**Step 1. Prepare VPC (optional)**
+
+This solution can be deployed in both public and private subnets. Using public subnets is recommended.
+
+- If you want to use an existing VPC, please make sure the VPC has at least two subnets, and both subnets must have public internet access (either public subnets with an internet gateway or private subnets with a NAT gateway).
+
+- If you want to create a new default VPC for this solution, please go to Step 2 and make sure you have *Create a new VPC for this cluster* selected when you create the cluster.
+
+
+**Step 2. Set up ECS Cluster**
+
+An ECS Cluster is required for this solution to run the Fargate tasks.
+
+1. Sign in to AWS Management Console, and choose Elastic Container Service (ECS).
+2. From the ECS Cluster home page, click **Create Cluster**.
+3. Select Cluster Template. Choose the **Network Only** type.
+4. Specify a cluster name and click Create to create a cluster. If you also want to create a new VPC (public subnets only), please also check the **Create a new VPC for this cluster** option.
+
+![Create Cluster](../images/cluster_en.png)
+
+**Step 3. Configure credentials**
+
+If the source (or destination) is NOT in the current AWS account, you will need to provide `AccessKeyID` and `SecretAccessKey` (namely `AK/SK`) to pull from or push to Amazon ECR. Secrets Manager is used to store the credentials in a secure manner.
+
+>If the source type is Public, there is no need to provide the source credentials.
+
+Go to AWS Management Console > Secrets Manager. From the Secrets Manager home page, click **Store a new secret**. For secret type, please use **Other type of secrets**. For key/value pairs, please copy and paste the JSON text below into the Plaintext section, and change the values to your AK/SK accordingly.
+
+```
+{
+  "access_key_id": "<Your Access Key ID>",
+  "secret_access_key": "<Your Secret Access Key>"
+}
+```
+
+![Secret](../images/secret_en.png)
+
+Click Next to specify a secret name, and click Create in the last step.
+
+**Step 4. Launch AWS CloudFormation Stack**
+
+Please follow the steps below to deploy this plugin via AWS CloudFormation.
+
+1. Sign in to AWS Management Console, and switch to the Region where you want to deploy the CloudFormation Stack.
+
+1. Click the following button to launch the CloudFormation Stack in that Region.
+
+    - For AWS China Regions
+
+    [![Launch Stack](../images/launch-stack.svg)](https://console.amazonaws.cn/cloudformation/home#/stacks/create/template?stackName=DTHECRStack&templateURL=https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferECRStack.template)
+
+
+    - For AWS Global Regions
+
+    [![Launch Stack](../images/launch-stack.svg)](https://console.aws.amazon.com/cloudformation/home#/stacks/create/template?stackName=DTHECRStack&templateURL=https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferECRStack.template)
+
+1. Click **Next**. Specify values for the parameters accordingly. Change the stack name if required.
+
+1. Click **Next**. Configure additional stack options such as tags (Optional).
+
+1. Click **Next**. Review and confirm acknowledgement, and then click **Create Stack** to start the deployment.
+
+The deployment will take approximately 3 to 5 minutes.
+
+## Using AWS CLI
+You can use the [AWS CLI][aws-cli] to create an Amazon ECR transfer task.
Note that if you have deployed the DTH Portal at the same time, the tasks started through the CLI will not appear in the Task List on your Portal.
+
+1. Create an Amazon VPC with two public subnets or two private subnets with a [NAT gateway][nat].
+
+2. Replace `CLOUDFORMATION_URL` in the command with the template URL shown below.
+    ```
+    https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferECRStack.template
+    ```
+
+3. Go to your terminal and enter the following command. For the parameter details, refer to the Parameters table.
+
+    ```shell
+    aws cloudformation create-stack --stack-name dth-ecr-task --template-url CLOUDFORMATION_URL \
+    --capabilities CAPABILITY_NAMED_IAM \
+    --parameters \
+    ParameterKey=sourceType,ParameterValue=Amazon_ECR \
+    ParameterKey=srcRegion,ParameterValue=us-east-1 \
+    ParameterKey=srcAccountId,ParameterValue=123456789012 \
+    ParameterKey=srcList,ParameterValue=ALL \
+    ParameterKey=includeUntagged,ParameterValue=false \
+    ParameterKey=srcImageList,ParameterValue= \
+    ParameterKey=srcCredential,ParameterValue=dev-us-credential \
+    ParameterKey=destAccountId,ParameterValue= \
+    ParameterKey=destRegion,ParameterValue=us-west-2 \
+    ParameterKey=destCredential,ParameterValue= \
+    ParameterKey=destPrefix,ParameterValue= \
+    ParameterKey=alarmEmail,ParameterValue=your_email@example.com \
+    ParameterKey=ecsVpcId,ParameterValue=vpc-07f56e8e21630a2a0 \
+    ParameterKey=ecsClusterName,ParameterValue=dth-v22-01-TaskCluster-eHzKkHatj0tN \
+    ParameterKey=ecsSubnetA,ParameterValue=subnet-034c58fe0e696eb0b \
+    ParameterKey=ecsSubnetB,ParameterValue=subnet-0487ae5a1d3badde7
+    ```
+**Parameters**
+
+| Parameter Name | Allowed Value | Default Value | Description |
+|-----------------|--------------------------------|---------------|-------------|
+| sourceType | Amazon_ECR
Public | Amazon_ECR | Choose the type of source container registry, for example Amazon_ECR, or Public from Docker Hub, gcr.io, etc. |
+| srcRegion | | | Source Region Name (only required if source type is Amazon ECR), for example, us-west-1 |
+| srcAccountId | | | Source AWS Account ID (only required if source type is Amazon ECR), leave it blank if the source is in the current account |
+| srcList | ALL
SELECTED | ALL | Type of Source Image List, either ALL or SELECTED; for a public registry, please use SELECTED only |
+| srcImageList | | | Selected Image List delimited by commas, for example, ubuntu:latest,alpine:latest..., leave it blank if Type is ALL. For an ECR source, use the ALL_TAGS tag to get all tags. |
+| srcCredential | | | The secret name in Secrets Manager, only needed when using AK/SK credentials to pull images from source Amazon ECR; leave it blank for a public registry |
+| destRegion | | | Destination Region Name, for example, cn-north-1 |
+| destAccountId | | | Destination AWS Account ID, leave it blank if the destination is in the current account |
+| destPrefix | | | Destination Repo Prefix |
+| destCredential | | | The secret name in Secrets Manager, only needed when using AK/SK credentials to push images to destination Amazon ECR |
+| includeUntagged | true
false | true | Whether to include untagged images in the replication |
+| ecsClusterName | | | ECS Cluster Name to run the ECS task (please make sure the cluster exists) |
+| ecsVpcId | | | VPC ID to run the ECS task, e.g. vpc-bef13dc7 |
+| ecsSubnetA | | | First Subnet ID to run the ECS task, e.g. subnet-97bfc4cd |
+| ecsSubnetB | | | Second Subnet ID to run the ECS task, e.g. subnet-7ad7de32 |
+| alarmEmail | | | Alarm Email address to receive notifications in case of any failure |
+
+
+[aws-cli]: https://aws.amazon.com/cli/
+[nat]: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
+
+[iam-role]: https://us-east-1.console.aws.amazon.com/iamv2/home#/roles
\ No newline at end of file
diff --git a/docs/en-base/user-guide/tutorial-s3.md index 46b4da5..4a9f320 100644
--- a/docs/en-base/user-guide/tutorial-s3.md
+++ b/docs/en-base/user-guide/tutorial-s3.md
@@ -1,7 +1,19 @@
-You can use the web console to create an Amazon S3 transfer task. For more information about how to launch the web console, see [deployment](../../deployment/deployment-overview).
+The solution allows you to create an Amazon S3 transfer task in the following ways:
 
-!!! Note "Note"
-    Data Transfer Hub also supports using AWS CLI to create an Amazon S3 transfer task. For details, refer to this [tutorial](./tutorial-cli-launch.md).
+- [using the web console](#using-the-web-console)
+- [using the S3 plugin](#using-the-s3-plugin)
+- [using AWS CLI](#using-aws-cli)
+
+You can make your choice according to your needs.
+
+- The web console provides an intuitive user interface where you can start, clone, or stop a data transfer task with a simple click. The frontend also provides metric monitoring and logging views, so you do not need to switch between different pages.
+
+- The S3 plugin is a standalone CloudFormation template, and you can easily integrate it into your workflows.
Because this option allows deployment without the frontend, it is useful if you want to deploy in AWS China Regions but do not have an ICP licensed domain.
+
+- AWS CLI can quickly initiate data transfer tasks. Select this option if you want to leverage Data Transfer Hub in your automation scripts.
+
+## Using the web console
+You can use the web console to create an Amazon S3 transfer task. For more information about how to launch the web console, see [deployment](../../deployment/deployment-overview).
 
 1. From the **Create Transfer Task** page, select **Start a New Task**, and then select **Next**.
 
@@ -15,7 +27,7 @@ You can use the web console to create an Amazon S3 transfer task. For more infor
    - If you need to achieve real-time incremental data synchronization, please configure whether to enable S3 event notification. Note that this option can only be configured when the program and your data source are deployed in the same area of the same account.
    - If you do not enable S3 event notification, the program will periodically synchronize incremental data according to the scheduling frequency you configure in the future.
    - If the source bucket is not in the same account where Data Transfer Hub was deployed, select **No**, then specify the credentials for the source bucket.
-    - If you choose to synchronize objects with multiple prefixes, please transfer the prefix list file separated by rows to the root directory of the data source bucket, and then fill in the name of the file. For details, please refer to [Multi-Prefix List Configuration Tutorial](https://github.com/awslabs/data-transfer-hub/blob/main/docs/USING_PREFIX_LIST.md)。
+    - If you choose to synchronize objects with multiple prefixes, please upload the prefix list file (one prefix per line) to the root directory of the Data Transfer Hub central bucket, and then fill in the name of the file.
For details, please refer to [Multi-Prefix List Configuration Tutorial](https://github.com/awslabs/data-transfer-hub/blob/main/docs/USING_PREFIX_LIST.md).
 
 5. To create credential information, select [Secrets Manager](https://console.aws.amazon.com/secretsmanager/home) to jump to the AWS Secrets Manager console in the current region.
    - From the left menu, select **Secrets**, then choose **Store a new secret** and select the **other type of secrets** key type.
@@ -60,7 +72,188 @@ You can use the web console to create an Amazon S3 transfer task. For more infor
 
 After the task is created successfully, it will appear on the **Tasks** page.
 
-### How to transfer S3 object from KMS encrypted Amazon S3
+!!! Note "Note"
+    If your destination bucket in Amazon S3 is set to require that all data uploads be encrypted with Amazon S3 managed keys, you can check the tutorial below.
+**Destination bucket encrypted with Amazon S3 managed keys**
+
+Select "SSE-S3 AES256" from the dropdown menu under 'Destination bucket policy check' in the destination's configuration. For more information, refer to this [documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html).
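For reference, a destination bucket policy that enforces SSE-S3 typically denies any `s3:PutObject` request whose `s3:x-amz-server-side-encryption` header is not `AES256`. The following is only a sketch with a hypothetical bucket name:

```
{
    "Version": "2012-10-17",
    "Id": "PutObjectPolicy",
    "Statement": [
        {
            "Sid": "DenyIncorrectEncryptionHeader",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::<your-destination-bucket>/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        }
    ]
}
```

If the destination bucket carries a policy of this shape, choose "SSE-S3 AES256" in the 'Destination bucket policy check' dropdown as described above.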
+ +If your destination bucket is set to require that objects be encrypted using only SSE-KMS (Server-Side Encryption with AWS Key Management Service), as detailed in this [documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/specifying-kms-encryption.html), your policy may look something like the following example: + + +``` + { + "Version": "2012-10-17", + "Id": "PutObjectPolicy", + "Statement": [ + { + "Sid": "DenyIncorrectEncryptionHeader", + "Effect": "Deny", + "Principal": "*", + "Action": "s3:PutObject", + "Resource": "arn:aws-cn:s3:::dth-sse-debug-cn-north-1/*", + "Condition": { + "StringNotEquals": { + "s3:x-amz-server-side-encryption": "aws:kms" + } + } + }, + { + "Sid": "DenyUnencryptedObjectUploads", + "Effect": "Deny", + "Principal": "*", + "Action": "s3:PutObject", + "Resource": "arn:aws-cn:s3:::dth-sse-debug-cn-north-1/*", + "Condition": { + "StringNotEquals": { + "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws-cn:kms:cn-north-1:123456789012:key/7c54749e-eb6a-42cc-894e-93143b32e7c0" + } + } + } + ] + } +``` + +In this case, you should select "SSE-KMS" in the 'Destination bucket policy check' dropdown menu in the destination's configuration. Additionally, you need to provide the KMS Key ID, such as "7c54749e-eb6a-42cc-894e-93143b32e7c0" in the example. + +## Using the S3 plugin + +!!! Note "Note" + This tutorial provides guidance for the backend-only version. For more details, please refer to [S3 Plugin Introduction](https://github.com/awslabs/data-transfer-hub/blob/main/docs/S3_PLUGIN.md). +**Step 1. Prepare VPC** + +This solution can be deployed in both public and private subnets. Using public subnets is recommended. + +- If you want to use an existing VPC, please make sure the VPC has at least 2 subnets, and both subnets must have public internet access (either public subnets with an internet gateway or private subnets with a NAT gateway).
+ +- If you want to create a new default VPC for this solution, please go to Step 2 and make sure you have *Create a new VPC for this cluster* selected when you create the cluster. + +**Step 2. Configure credentials** + +You need to provide `AccessKeyID` and `SecretAccessKey` (namely `AK/SK`) to read from or write to an S3 bucket in another AWS account or another cloud storage service, and the credentials will be stored in AWS Secrets Manager. You DON'T need to create credentials for buckets in the current account where you are deploying the solution. + +Go to AWS Management Console > Secrets Manager. From the Secrets Manager home page, click **Store a new secret**. For secret type, please use **Other type of secrets**. For key/value pairs, please copy and paste the JSON text below into the Plaintext section, and change the values to your AK/SK accordingly. + +``` +{ + "access_key_id": "", + "secret_access_key": "" +} +``` + +![Secret](../images/secret_en.png) + +Click Next to specify a secret name, and click Create in the last step. + + +> If the AK/SK is for the source bucket, **READ** access to the bucket is required; if it is for the destination bucket, **READ** and **WRITE** access to the bucket is required. For Amazon S3, you can refer to [Set up credentials for Amazon S3](../tutorial/IAM-Policy.md). + + +**Step 3. Launch AWS CloudFormation Stack** + +Please follow the steps below to deploy this solution via AWS CloudFormation. + +1. Sign in to the AWS Management Console, and switch to the Region where you want to deploy the CloudFormation Stack. + +1. Click the following button to launch the CloudFormation Stack.
+ + - For AWS China Regions + + [![Launch Stack](../images/launch-stack.svg)](https://console.amazonaws.cn/cloudformation/home#/stacks/create/template?stackName=DTHS3Stack&templateURL=https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferS3Stack.template) + + - For AWS Global Regions + + [![Launch Stack](../images/launch-stack.svg)](https://console.aws.amazon.com/cloudformation/home#/stacks/create/template?stackName=DTHS3Stack&templateURL=https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferS3Stack.template) + +1. Click **Next**. Specify values for the parameters accordingly. Change the stack name if required. + +1. Click **Next**. Configure additional stack options such as tags (Optional). + +1. Click **Next**. Review and confirm the acknowledgement, and then click **Create Stack** to start the deployment. + +The deployment will take approximately 3 to 5 minutes. + + +## Using AWS CLI +You can use the [AWS CLI][aws-cli] to create an Amazon S3 transfer task. Note that if you have deployed the DTH Portal at the same time, the tasks started through the CLI will not appear in the Task List on your Portal. + +1. Create an Amazon VPC with two public subnets or two private subnets with a [NAT gateway][nat]. + +2. Replace `CLOUDFORMATION_URL` with the template URL shown below. + ``` + https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferS3Stack.template + ``` + +3. Go to your terminal and enter the following command. For the parameter details, refer to the Parameters table.
+ + ```shell + aws cloudformation create-stack --stack-name dth-s3-task --template-url CLOUDFORMATION_URL \ + --capabilities CAPABILITY_NAMED_IAM \ + --parameters \ + ParameterKey=alarmEmail,ParameterValue=your_email@example.com \ + ParameterKey=destBucket,ParameterValue=dth-receive-cn-north-1 \ + ParameterKey=destPrefix,ParameterValue=test-prefix \ + ParameterKey=destCredentials,ParameterValue=drh-cn-secret-key \ + ParameterKey=destInCurrentAccount,ParameterValue=false \ + ParameterKey=destRegion,ParameterValue=cn-north-1 \ + ParameterKey=destStorageClass,ParameterValue=STANDARD \ + ParameterKey=destPutObjectSSEType,ParameterValue=None \ + ParameterKey=destPutObjectSSEKmsKeyId,ParameterValue= \ + ParameterKey=srcBucket,ParameterValue=dth-us-west-2 \ + ParameterKey=srcInCurrentAccount,ParameterValue=true \ + ParameterKey=srcCredentials,ParameterValue= \ + ParameterKey=srcRegion,ParameterValue=us-west-2 \ + ParameterKey=srcPrefix,ParameterValue=case1 \ + ParameterKey=srcType,ParameterValue=Amazon_S3 \ + ParameterKey=ec2VpcId,ParameterValue=vpc-040bbab85f0e4e088 \ + ParameterKey=ec2Subnets,ParameterValue=subnet-0d1bf2725ab8e94ee\\,subnet-06d17b2b3286be40e \ + ParameterKey=finderEc2Memory,ParameterValue=8 \ + ParameterKey=ec2CronExpression,ParameterValue="0/60 * * * ? *" \ + ParameterKey=includeMetadata,ParameterValue=false \ + ParameterKey=srcEvent,ParameterValue=No \ + ParameterKey=maxCapacity,ParameterValue=20 \ + ParameterKey=minCapacity,ParameterValue=1 \ + ParameterKey=desiredCapacity,ParameterValue=1 + ``` +**Parameters** + +| Parameter Name | Allowed Value | Default Value | Description | +| --- | --- | --- | --- | +| alarmEmail | | | An email to which errors will be sent +| desiredCapacity | | 1 | Desired capacity for Auto Scaling Group +| destAcl | private
public-read
public-read-write
authenticated-read
aws-exec-read
bucket-owner-read
bucket-owner-full-control | bucket-owner-full-control | Destination access control list +| destBucket | | | Destination bucket name +| destCredentials | | | Secret name in Secrets Manager used to keep AK/SK credentials for destination bucket. Leave it blank if the destination bucket is in the current account +| destInCurrentAccount | true
false | true | Indicates whether the destination bucket is in the current account. If not, you should provide credentials with read and write access
STANDARD_IA
ONEZONE_IA
INTELLIGENT_TIERING | INTELLIGENT_TIERING | Destination storage class, which defaults to INTELLIGENT_TIERING +| destPutObjectSSEType | None
AES256
AWS_KMS | None | Specifies the server-side encryption algorithm used for storing objects in Amazon S3. 'AES256' applies AES256 encryption, 'AWS_KMS' uses AWS Key Management Service encryption, and 'None' indicates that no encryption is applied. +| destPutObjectSSEKmsKeyId | | | Specifies the ID of the symmetric customer managed AWS KMS Customer Master Key (CMK) used for object encryption. This parameter should only be set when destPutObjectSSEType is set to 'AWS_KMS'. If destPutObjectSSEType is set to any value other than 'AWS_KMS', please leave this parameter empty. The default value is not set. +| isPayerRequest | true
false | false | Indicates whether to enable payer request. If true, objects will be retrieved in payer request mode. | +| ec2CronExpression | | 0/60 * * * ? * | Cron expression for EC2 Finder task
"" for one time transfer. | +| finderEc2Memory | 8
16
32
64
128
256 | 8 GB| The amount of memory (in GB) used by the Finder task. +| ec2Subnets | | | Two public subnets or two private subnets with [NAT gateway][nat] | +| ec2VpcId | | | VPC ID to run EC2 task, for example, vpc-bef13dc7 +| finderDepth | | 0 | Depth of sub folders to compare in parallel. 0 means comparing all objects in sequence +| finderNumber | | 1 | The number of finder threads to run in parallel +| includeMetadata | true
false | false | Indicates whether to add replication of object metadata. If true, there will be additional API calls. +| maxCapacity | | 20 | Maximum capacity for Auto Scaling Group +| minCapacity | | 1 | Minimum capacity for Auto Scaling Group +| srcBucket | | | Source bucket name +| srcCredentials | | | Secret name in Secrets Manager used to keep AK/SK credentials for Source Bucket. Leave it blank if source bucket is in the current account or source is open data +| srcEndpoint | | | Source Endpoint URL (Optional). Leave it blank unless you want to provide a custom Endpoint URL +| srcEvent | No
Create
CreateAndDelete | No | Whether to enable S3 Event to trigger the replication. Note that S3Event is only applicable if source is in the current account +| srcInCurrentAccount | true
false | false | Indicates whether the source bucket is in the current account. If not, you should provide a credential with read access +| srcPrefix | | | Source prefix (Optional) +| srcPrefixListBucket | | | Source prefix list file S3 bucket name (Optional). It is used to store the source prefix list file. The specified bucket must be located in the same AWS region and under the same account as the DTH deployment. If your prefix list file is stored in the source bucket, please leave this parameter empty. +| srcPrefixsListFile | | | Source prefix list file S3 path (Optional). It supports the txt type, for example, my_prefix_list.txt, and the maximum number of lines is 10 million +| srcRegion | | | Source region name +| srcSkipCompare | true
false | false | Indicates whether to skip the data comparison in the task-finding process. If true, all data in the source will be sent to the destination
Aliyun_OSS
Qiniu_Kodo
Tencent_COS | Amazon_S3 | If you choose to use the Endpoint mode, please select Amazon_S3. +| workerNumber | 1 ~ 10 | 4 | The number of worker threads to run in one worker node/instance. For small files (size < 1MB), you can increase the number of workers to improve the transfer performance. + +## How to transfer S3 object from KMS encrypted Amazon S3 By default, Data Transfer Hub supports data source bucket using SSE-S3 and SSE-KMS. @@ -93,4 +286,10 @@ Pay attention to the following: } ``` + + + +[aws-cli]: https://aws.amazon.com/cli/ +[nat]: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html + [iam-role]: https://us-east-1.console.aws.amazon.com/iamv2/home#/roles \ No newline at end of file diff --git a/docs/images/cloudformation_prefix_list.png b/docs/images/cloudformation_prefix_list.png index 0510073..b27c095 100644 Binary files a/docs/images/cloudformation_prefix_list.png and b/docs/images/cloudformation_prefix_list.png differ diff --git a/docs/images/prefix_list_portal.png b/docs/images/prefix_list_portal.png new file mode 100644 index 0000000..579e6b6 Binary files /dev/null and b/docs/images/prefix_list_portal.png differ diff --git a/docs/images/prefix_list_third_s3.png b/docs/images/prefix_list_third_s3.png new file mode 100644 index 0000000..a46ac3e Binary files /dev/null and b/docs/images/prefix_list_third_s3.png differ diff --git a/docs/mkdocs.en.yml b/docs/mkdocs.en.yml index 97d5db3..13f9498 100644 --- a/docs/mkdocs.en.yml +++ b/docs/mkdocs.en.yml @@ -30,7 +30,9 @@ nav: - Create Amazon ECR transfer task: user-guide/tutorial-ecr.md - Transfer S3 object from Alibaba Cloud OSS: user-guide/tutorial-oss.md - Transfer S3 object via Direct Connect: user-guide/tutorial-directconnect.md - - Use AWS CLI to create S3 transfer task: user-guide/tutorial-cli-launch.md + - Tutorials: + - Set up credentials for Amazon S3: tutorial/IAM-Policy.md + - Policy for S3 Source Bucket enabled SSE-CMK: tutorial/S3-SSE-KMS-Policy.md - Upgrade the solution: 
update.md - Uninstall the solution: uninstall.md - FAQ: faq.md diff --git a/docs/mkdocs.zh.yml b/docs/mkdocs.zh.yml index 104123d..b60a911 100644 --- a/docs/mkdocs.zh.yml +++ b/docs/mkdocs.zh.yml @@ -31,7 +31,9 @@ nav: - 创建 Amazon ECR 传输任务: user-guide/tutorial-ecr.md - 从阿里云 OSS 传输 S3 对象: user-guide/tutorial-oss.md - 通过 Direct Connect 传输 S3 对象: user-guide/tutorial-directconnect.md - - 使用 AWS CLI 创建 S3 传输任务: user-guide/tutorial-cli-launch.md + - 技术教程: + - 为 Amazon S3 设置凭证: tutorial/IAM-Policy_CN.md + - S3 源存储桶启用 SSE-CMK 的策略: tutorial/S3-SSE-KMS-Policy_CN.md - 更新解决方案: update.md - 卸载解决方案: uninstall.md - 常见问题解答: faq.md diff --git a/docs/zh-base/deployment/deployment.md b/docs/zh-base/deployment/deployment.md index d748422..21961fc 100644 --- a/docs/zh-base/deployment/deployment.md +++ b/docs/zh-base/deployment/deployment.md @@ -64,7 +64,7 @@ 7. 将Endpoint Information中的`App ID`(即`client_id`)和`Issuer`保存到一个文本文件中,以备后面使用。 [![](../images/OIDC/endpoint-info.png)](../images/OIDC/endpoint-info.png) -8. 将`Login Callback URL`和`Logout Callback URL`更新为IPC记录的域名。 +8. 将`Login Callback URL`和`Logout Callback URL`更新为ICP记录的域名。 [![](../images/OIDC/authentication-configuration.png)](../images/OIDC/authentication-configuration.png) 9. 设置以下授权配置。 diff --git a/docs/zh-base/faq.md b/docs/zh-base/faq.md index a8fb864..27239a8 100644 --- a/docs/zh-base/faq.md +++ b/docs/zh-base/faq.md @@ -34,7 +34,7 @@ **7. 能否使用 AWS CLI 创建 DTH S3 传输任务?**
-可以。请参考[使用AWS CLI启动DTH S3 Transfer任务](../user-guide/tutorial-cli-launch)指南。 +可以。请参考[使用AWS CLI启动DTH S3 Transfer任务](../user-guide/tutorial-s3/#aws-cli)指南。 ## 性能相关问题 diff --git a/docs/zh-base/images/cluster_cn.png b/docs/zh-base/images/cluster_cn.png new file mode 100644 index 0000000..bdcd59a Binary files /dev/null and b/docs/zh-base/images/cluster_cn.png differ diff --git a/docs/zh-base/images/cluster_en.png b/docs/zh-base/images/cluster_en.png new file mode 100644 index 0000000..fdb1b13 Binary files /dev/null and b/docs/zh-base/images/cluster_en.png differ diff --git a/docs/zh-base/images/launch-stack copy.png b/docs/zh-base/images/launch-stack copy.png new file mode 100644 index 0000000..2745adf Binary files /dev/null and b/docs/zh-base/images/launch-stack copy.png differ diff --git a/docs/zh-base/images/launch-stack.svg b/docs/zh-base/images/launch-stack.svg new file mode 100644 index 0000000..eb7d015 --- /dev/null +++ b/docs/zh-base/images/launch-stack.svg @@ -0,0 +1 @@ +Launch Stack \ No newline at end of file diff --git a/docs/zh-base/images/secret_cn.png b/docs/zh-base/images/secret_cn.png new file mode 100644 index 0000000..d1854c1 Binary files /dev/null and b/docs/zh-base/images/secret_cn.png differ diff --git a/docs/zh-base/images/secret_en.png b/docs/zh-base/images/secret_en.png new file mode 100644 index 0000000..070f766 Binary files /dev/null and b/docs/zh-base/images/secret_en.png differ diff --git a/docs/zh-base/images/user.png b/docs/zh-base/images/user.png new file mode 100644 index 0000000..e90a48c Binary files /dev/null and b/docs/zh-base/images/user.png differ diff --git a/docs/zh-base/index.md b/docs/zh-base/index.md index 4b00265..5feeffa 100644 --- a/docs/zh-base/index.md +++ b/docs/zh-base/index.md @@ -1,4 +1,4 @@ -**数据传输解决方案**(Data Transfer Hub)为 Amazon Simple Storage Service(Amazon S3)对象和 Amazon Elastic Container Registry(Amazon ECR)镜像提供安全、可扩展和可跟踪的数据传输。该数据传输帮助客户通过轻松地在亚马逊云科技(Amazon Web Services,AWS)中国区域内进出数据来拓展其全球业务。 +**数据传输解决方案**(Data 
Transfer Hub)为 Amazon Simple Storage Service(Amazon S3)对象和 Amazon Elastic Container Registry(Amazon ECR)镜像提供安全、可扩展和可跟踪的数据传输。通过轻松地在亚马逊云科技(Amazon Web Services,AWS)不同[分区](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/partitions.html)之间(例如,aws, aws-cn, aws-us-gov)或者从其它云服务提供商到 AWS 创建并管理多种类型的传输任务,该数据传输帮助客户拓展全球业务。 本实施指南提供了数据传输解决方案的概述、参考架构和组件、规划部署的注意事项以及将数据传输解决方案部署到 AWS 云的配置步骤。 diff --git a/docs/zh-base/plan-deployment/regions.md b/docs/zh-base/plan-deployment/regions.md index 8adb7f1..031c67f 100644 --- a/docs/zh-base/plan-deployment/regions.md +++ b/docs/zh-base/plan-deployment/regions.md @@ -13,12 +13,15 @@ | 亚太地区(首尔)区域 | ap-northeast-2 | 亚太地区(新加坡)区域 | ap-southeast-1 | 亚太地区(悉尼)区域 | ap-southeast-2 +| 亚太地区(墨尔本)区域 | ap-southeast-4 | 加拿大(中部)区域 | ca-central-1 +| 加拿大(卡尔加里)区域 | ca-west-1 | 欧洲(爱尔兰)区域 | eu-west-1 | 欧洲(伦敦)区域 | eu-west-2 | 欧洲(米兰)区域 | eu-north-1 | 欧洲(法兰克福)区域 | eu-central-1 | 南美洲(圣保罗)区域 | sa-east-1 +| 以色列(特拉维夫)区域 | il-central-1 ## 支持部署的中国区域 diff --git a/docs/zh-base/revisions.md b/docs/zh-base/revisions.md index dd13dbd..1b8641d 100644 --- a/docs/zh-base/revisions.md +++ b/docs/zh-base/revisions.md @@ -6,4 +6,5 @@ | 2022年7月 | 发布版本2.2
1. 支持通过 Direct Connect 传输数据| | 2023年3月 | 发布版本2.3
1. 支持嵌入式仪表板和监控日志
2. 支持S3 Access Key 自动轮换
3. 增强一次性传输任务监控| | 2023年4月 | 发布版本2.4
1. 支持请求者付费模式| -| 2023年9月 | 发布版本2.5
1. 通过利用集群的并行能力,提升了Amazon S3大文件传输的性能
2. 启用了未标记的ECR镜像传输
3. 优化了停止任务操作,并添加了新的筛选条件以查看所有历史任务 | \ No newline at end of file +| 2023年9月 | 发布版本2.5
1. 通过利用集群的并行能力,提升了Amazon S3大文件传输的性能
2. 启用了未标记的ECR镜像传输
3. 优化了停止任务操作,并添加了新的筛选条件以查看所有历史任务 | +| 2024年1月 | 发布版本 2.6
1. 增加了对使用Amazon S3托管密钥加密Amazon S3目标存储桶的支持
2. 提供了可选的Amazon S3存储桶来保存前缀列表文件
3. 在DTH管道状态停止后,添加了自动删除KMS密钥的功能
4. 在外部重新启动后,增加了Finder实例自动启用DTH-CLI的功能
5. 增加了Finder的容量至316GB和512GB<br>
6.新增了3个支持的区域:亚太地区(墨尔本),加拿大(卡尔加里),以色列(特拉维夫) | \ No newline at end of file diff --git a/docs/zh-base/solution-overview/features-and-benefits.md b/docs/zh-base/solution-overview/features-and-benefits.md index de886ba..ac4569e 100644 --- a/docs/zh-base/solution-overview/features-and-benefits.md +++ b/docs/zh-base/solution-overview/features-and-benefits.md @@ -1,10 +1,11 @@ -该解决方案的网页控制台提供了以下任务的管理界面: +该解决方案的核心功能: + +- 在AWS中国区域和AWS海外区域之间传输、跨云传输Amazon S3对象:在统一的平台上提供传输能力 +- 自动缩放:快速响应文件传输中流量的变化 +- 高性能的大文件传输(1TB):利用集群、并行的大文件切片和自动重试的优势来实现稳健的文件传输 +- 监控:跟踪数据流、诊断问题并确保数据传输过程的整体健康状况 +- 开箱即用的部署 -- 在AWS中国区域和AWS区域之间传输Amazon S3对象 -- 将数据从其他云服务提供商的对象存储服务(包括阿里云OSS、腾讯COS和七牛Kodo)传输到Amazon S3 -- 从Amazon S3兼容的对象存储服务传输对象到Amazon S3 -- 在AWS中国区域和AWS区域之间传输Amazon ECR镜像 -- 将容器镜像从公共容器镜像仓库(例如Docker Hub、Google gcr.io、Red Hat Quay.io)传输到Amazon ECR !!! note "注意" diff --git a/docs/zh-base/tutorial/IAM-Policy_CN.md b/docs/zh-base/tutorial/IAM-Policy_CN.md new file mode 100644 index 0000000..614202a --- /dev/null +++ b/docs/zh-base/tutorial/IAM-Policy_CN.md @@ -0,0 +1,75 @@ + +# 为 Amazon S3 设置凭证 + +## Step 1: 创建 IAM Policy + +打开 AWS 管理控制台,转到 IAM > 策略,单击 **Create Policy** + +使用以下示例 IAM 策略语句以最低权限创建策略。 请相应地更改策略声明中的 ``。 + +_Note_: 如果是针对中国地区的 S3 存储桶,请确保您更改为使用 `arn:aws-cn:s3::::` 而不是 `arn:aws:s3:::` + +### 对于源存储桶 + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "dth", + "Effect": "Allow", + "Action": [ + "s3:GetObject", + "s3:ListBucket" + ], + "Resource":[ + "arn:aws:s3:::/*", + "arn:aws:s3:::" + ] + } + ] +} +``` + + +### 对于目标存储桶 + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "dth", + "Effect": "Allow", + "Action": [ + "s3:PutObject", + "s3:ListBucket", + "s3:PutObjectAcl", + "s3:AbortMultipartUpload", + "s3:ListBucketMultipartUploads", + "s3:ListMultipartUploadParts" + ], + "Resource": [ + "arn:aws:s3:::/*", + "arn:aws:s3:::" + ] + } + ] +} +``` + +> 请注意,如果要启用 S3 删除事件,则需要向策略添加 `"s3:DeleteObject"` 权限。 + +> Data Transfer Hub 原生支持使用 SSE-S3 和 SSE-KMS 的数据源,但如果您的源存储桶启用了 
*SSE-CMK*,请将源存储桶策略替换为链接 [for S3 SSE-CMK](./S3-SSE-KMS-Policy_CN.md)中的策略。 + +## Step 2: 创建 User + +打开 AWS 管理控制台,转至 IAM > 用户,单击 **添加用户**,按照向导创建具有凭证的用户。 + +1. 指定用户名,例如 *dth-user*。 对于 Access Type,仅选择 **Programmatic access**。 单击**下一步:权限** +1. 选择**直接附加现有策略**,搜索并使用在步骤 1 中创建的策略,然后单击**下一步:标签** +1. 如果需要,添加标签,单击**下一步:Review** +1. 查看用户详细信息,然后单击**创建用户** +1. 确保您复制/保存了凭据,然后单击**关闭** + +![Create User](../images/user.png) diff --git a/docs/zh-base/tutorial/S3-SSE-KMS-Policy_CN.md b/docs/zh-base/tutorial/S3-SSE-KMS-Policy_CN.md new file mode 100644 index 0000000..f14c016 --- /dev/null +++ b/docs/zh-base/tutorial/S3-SSE-KMS-Policy_CN.md @@ -0,0 +1,44 @@ + +# S3 源存储桶启用 SSE-CMK 的策略 + +Data Transfer Hub 原生支持使用 SSE-S3 和 SSE-KMS 的数据源,但如果您的源存储桶启用了 *SSE-CMK*,请将源存储桶策略替换为以下策略,并更改`` 为相应的桶名称。 + +并且请将 kms 部分中的 `Resource` 更改为您自己的 KMS 密钥的 arn。 + +_注意_:如果是针对中国地区的 S3 存储桶,请确保您也更改为使用 `arn:aws-cn:s3:::` 而不是 `arn:aws:s3:::` + +**对于启用SSE-CMK的源存储桶** + +``` +{ + "Version": "2012-10-17", + "Statement": [ + { + "Sid": "dth", + "Effect": "Allow", + "Action": [ + "s3:GetObject", + "s3:ListBucket" + ], + "Resource": [ + "arn:aws:s3:::/*", + "arn:aws:s3:::" + ] + }, + { + "Sid": "VisualEditor0", + "Effect": "Allow", + "Action": [ + "kms:Decrypt", + "kms:Encrypt", + "kms:ReEncrypt*", + "kms:GenerateDataKey*", + "kms:DescribeKey" + ], + "Resource": [ + "arn:aws:kms:us-west-2:111122223333:key/f5cd8cb7-476c-4322-ac9b-0c94a687700d " + ] + } + ] +} +``` \ No newline at end of file diff --git a/docs/zh-base/update.md b/docs/zh-base/update.md index 4a9abe6..576c1fb 100644 --- a/docs/zh-base/update.md +++ b/docs/zh-base/update.md @@ -7,8 +7,7 @@ * [步骤 1. 更新 CloudFormation 堆栈](#1) * [步骤 2. (可选)更新 OIDC 配置](#oidc-update) -* [步骤 3. 在 CloudFront 创建失效](#cloudfront) -* [步骤 4. 刷新网页控制台](#4) +* [步骤 3. 刷新网页控制台](#3) ## 步骤 1. 
更新 CloudFormation 堆栈 diff --git a/docs/zh-base/user-guide/tutorial-ecr.md b/docs/zh-base/user-guide/tutorial-ecr.md index 548397c..507aeaa 100644 --- a/docs/zh-base/user-guide/tutorial-ecr.md +++ b/docs/zh-base/user-guide/tutorial-ecr.md @@ -1,3 +1,12 @@ +该解决方案允许您通过以下方式创建 Amazon ECR 传输任务: + +- 使用控制台传输任务 +- 使用 DTH ECR 插件传输任务 +- 使用 AWS CLI 创建传输任务 + +想要了解这几种方式的更多信息,可参考[创建 Amazon S3 传输任务](./tutorial-s3.md)。 + +## 使用控制台传输任务 您可以在网页控制台创建Amazon ECR数据传输任务。更多信息请参考[部署解决方案](../../deployment/deployment-overview)。 1. 从**创建传输任务**页面,选择**创建新任务**,然后选择**下一步**。 @@ -26,4 +35,138 @@ 10. 选择**创建任务**。 -任务创建成功后,会出现在任务页面。 \ No newline at end of file +任务创建成功后,会出现在任务页面。 + +## 使用DTH ECR 插件传输任务 + + !!! note "注意" + + 本教程是纯后端版本的部署指南。如需了解详情,请参考该[DTH ECR插件介绍](https://github.com/awslabs/data-transfer-hub/blob/main/docs/ECR_PLUGIN_CN.md)。 + + +**1. 准备VPC (可选)** + +此解决方案可以部署在公共和私有子网中。 建议使用公共子网。 + +- 如果您想使用现有的 VPC,请确保 VPC 至少有 2 个子网,并且两个子网都必须具有公网访问权限(带有 Internet 网关的公有子网或带有 NAT 网关的私有子网) + +- 如果您想为此解决方案创建新的默认 VPC,请转到步骤2,并确保您在创建集群时选择了*为此集群创建一个新的 VPC*。 + + +**2. 配置ECS集群** + +此方案需要ECS 集群才能运行Fargate任务。 + +打开AWS 管理控制台 > Elastic Container Service (ECS)。 在 ECS集群首页上,单击 **创建集群** + +步骤1:选择集群模版,确保选择 **仅限联网** 类型。 + +步骤2:配置集群,指定集群名称,点击创建即可。 如果您还想创建一个新的 VCP(仅限公有子网),还请选中**为此集群创建新的 VPC** 选项。 + +![创建集群](../images/cluster_cn.png) + + + +**3. 配置凭据** + +如果源(或目标)不在当前的AWS账户中,则您需要提供`AccessKeyID`和`SecretAccessKey`(即`AK` / `SK`)以从Amazon ECR中拉取或推送镜像。 Amazon Secrets Manager 用于以安全方式存储访问凭证。 + +> 注意:如果源类型为“公共(Public)”,则无需提供源的访问凭证。 + +打开AWS 管理控制台 > Secrets Manager。 在 Secrets Manager 主页上,单击 **存储新的密钥**。 对于密钥类型,请使用**其他类型的秘密**。 对于键/值对,请将下面的 JSON 文本复制并粘贴到明文部分,并相应地将值更改为您的 AK/SK。 + +``` +{ + "access_key_id": "", + "secret_access_key": "" +} +``` + +![密钥](../images/secret_cn.png) + +然后下一步指定密钥名称,最后一步点击创建。 + +**4. 
启动AWS CloudFormation部署** + +请按照以下步骤通过AWS CloudFormation部署此插件。 + +1. 登录到AWS管理控制台,切换到将CloudFormation Stack部署到的区域。 + +1. 单击以下按钮在该区域中启动CloudFormation堆栈。 + + - 部署到AWS中国北京和宁夏区 + + [![Launch Stack](../images/launch-stack.svg)](https://console.amazonaws.cn/cloudformation/home#/stacks/create/template?stackName=DTHECRStack&templateURL=https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferECRStack.template) + + - 部署到AWS海外区 + + [![Launch Stack](../images/launch-stack.svg)](https://console.aws.amazon.com/cloudformation/home#/stacks/create/template?stackName=DTHECRStack&templateURL=https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferECRStack.template) + + +1. 单击**下一步**。 相应地为参数指定值。 如果需要,请更改堆栈名称。 + +1. 单击**下一步**。 配置其他堆栈选项,例如标签(可选)。 + +1. 单击**下一步**。 查看并勾选确认,然后单击“创建堆栈”开始部署。 + +部署预计用时3-5分钟。 + +## 使用AWS CLI创建传输任务 +您可以使用 [AWS CLI][aws-cli] 创建 Amazon ECR传输任务。如果您同时部署了DTH Portal,通过CLI启动的任务将不会出现在您Portal的任务列表界面中。 + +1. 创建一个具有两个公有子网或两个拥有[NAT 网关][nat] 私有子网的Amazon VPC。 + +2. 根据需要替换`CLOUDFORMATION_URL`为`https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferECRStack.template`。 + +3.
转到您的终端并输入以下命令。详情请参考**参数列表**。 + + ```shell + aws cloudformation create-stack --stack-name dth-ecr-task --template-url CLOUDFORMATION_URL \ + --capabilities CAPABILITY_NAMED_IAM \ + --parameters \ + ParameterKey=sourceType,ParameterValue=Amazon_ECR \ + ParameterKey=srcRegion,ParameterValue=us-east-1 \ + ParameterKey=srcAccountId,ParameterValue=123456789012 \ + ParameterKey=srcList,ParameterValue=ALL \ + ParameterKey=includeUntagged,ParameterValue=false \ + ParameterKey=srcImageList,ParameterValue= \ + ParameterKey=srcCredential,ParameterValue=dev-us-credential \ + ParameterKey=destAccountId,ParameterValue= \ + ParameterKey=destRegion,ParameterValue=us-west-2 \ + ParameterKey=destCredential,ParameterValue= \ + ParameterKey=destPrefix,ParameterValue= \ + ParameterKey=alarmEmail,ParameterValue=your_email@example.com \ + ParameterKey=ecsVpcId,ParameterValue=vpc-07f56e8e21630a2a0 \ + ParameterKey=ecsClusterName,ParameterValue=dth-v22-01-TaskCluster-eHzKkHatj0tN \ + ParameterKey=ecsSubnetA,ParameterValue=subnet-034c58fe0e696eb0b \ + ParameterKey=ecsSubnetB,ParameterValue=subnet-0487ae5a1d3badde7 + + ``` + + +**参数列表** + +| 参数名称 | 允许值 | 默认值 | 说明 | +| --- | --- | --- | --- | +| sourceType | Amazon_ECR
Public | Amazon_ECR | 选择源容器注册表的类型,例如 Amazon_ECR 或来自 Docker Hub、gcr.io 等的 Public。 | +| srcRegion | | | 源区域名称(仅当源类型为 Amazon ECR 时才需要),例如 us-west-1 | +| srcAccountId | | | 源 AWS 账户 ID(仅当源类型为 Amazon ECR 时才需要),如果源位于当前账户中,则将其留空 | +| srcList | ALL
SELECTED | ALL | 源镜像列表的类型,全部或选定的,对于公共注册表,请仅使用选定的 | +| srcImageList | | | 所选镜像列表以逗号分隔,例如 ubuntu:latest,alpine:latest...,如果 Type 为 ALL,则将其留空。 对于ECR源,使用ALL_TAGS标签来获取所有标签。| +| srcCredential | | | 仅当使用 AK/SK 凭证从源 Amazon ECR 拉取镜像时,Secrets Manager 中的密钥名称,对于公共注册表将其留空 | +| destRegion | | | 目标区域名称,例如 cn-north-1 | +| destAccountId | | | 目标 AWS 账户 ID,如果目标位于当前账户中,则将其留空 | +| destPrefix | | | 目标仓库前缀 | +| destCredential | | | 仅当使用 AK/SK 凭证将镜像推送到目标 Amazon ECR 时,Secrets Manager 中的密钥名称| +| includeUntagged | true
false | true | 是否在复制中包含未标记的镜像 | +| ecsClusterName | | | 运行ECS任务的ECS集群名称(请确保集群存在)| +| ecsVpcId | | | 用于运行 ECS 任务的 VPC ID,例如 vpc-bef13dc7 | +| ecsSubnetA | | | 运行 ECS 任务的第一个子网 ID,例如 subnet-97bfc4cd | +| ecsSubnetB | | | 运行 ECS 任务的第二个子网 ID,例如 subnet-7ad7de32 | +| alarmEmail | | | 发生任何故障时接收通知的警报电子邮件地址 | + + +[aws-cli]: https://aws.amazon.com/cli/ +[nat]: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html + +[iam-role]: https://us-east-1.console.aws.amazon.com/iamv2/home#/roles \ No newline at end of file diff --git a/docs/zh-base/user-guide/tutorial-s3.md b/docs/zh-base/user-guide/tutorial-s3.md index 4fa4f05..a926b4a 100644 --- a/docs/zh-base/user-guide/tutorial-s3.md +++ b/docs/zh-base/user-guide/tutorial-s3.md @@ -1,7 +1,22 @@ +该解决方案允许您通过以下方式创建 Amazon S3 传输任务: + +- 使用控制台传输任务 +- 使用DTH S3 插件创建传输任务 +- 使用AWS CLI创建传输任务 + +您可以根据您的需要进行选择。 + +- Web 控制台提供直观的用户界面,您只需单击即可启动、克隆或停止数据传输任务。前端还提供指标监控和日志记录视图,因此您无需在不同页面之间切换。 + +- S3 插件是一个独立的 CloudFormation 模板,您可以轻松地将其集成到您的工作流程中。由于此选项允许在没有前端的情况下进行部署,因此如果您想在 AWS 中国区域部署但没有 ICP 备案的域名,则此选项非常有用。 + +- AWS CLI可以快速启动数据传输任务。如果您想在自动化脚本中使用该解决方案,请选择此选项。 + +## 使用控制台传输任务 您可以在网页控制台创建Amazon S3数据传输任务。更多信息请参考[部署解决方案](../../deployment/deployment-overview)。 !!! Note "注意" - Data Transfer Hub 也支持通过 AWS CLI 创建 Amazon S3 的传输任务, 请参考该[教程](./tutorial-cli-launch.md). + Data Transfer Hub 也支持通过 AWS CLI 创建 Amazon S3 的传输任务, 请参考该[教程](#aws-cli). 1. 从**创建传输任务**页面,选择**创建新任务**,然后选择**下一步**。 2. 在**引擎选项**页面的引擎下,选择**Amazon S3**,然后选择**下一步**。 @@ -12,7 +27,7 @@ - 如果您需要实现实时的增量数据同步,请配置是否启用S3事件通知。注意,只有当该方案和您的数据源部署在同一个账户的同一个区域内时,方可配置该选项。 - 如果您不启用S3事件通知,该方案会按照您在后续所配置的调度频率来定期实现增量数据的同步。 - 如果数据源桶不在方案部署的账户中,请选择**No**,然后指定源存储桶的凭证。 - - 如果您选择同步多个前缀的对象,请将以换行为分隔的前缀列表文件传输到数据源桶的根目录下,然后填写该文件的名称。具体可参考[多前缀列表配置教程](https://github.com/awslabs/data-transfer-hub/blob/main/docs/USING_PREFIX_LIST_CN.md)。 + - 如果您选择同步多个前缀的对象,请将以换行为分隔的前缀列表文件传输到Data Transfer Hub存储桶的根目录下,然后填写该文件的名称。具体可参考[多前缀列表配置教程](https://github.com/awslabs/data-transfer-hub/blob/main/docs/USING_PREFIX_LIST_CN.md)。 5.
要创建凭证信息,请选择[Secrets Manager](https://console.aws.amazon.com/secretsmanager/home)以跳转到当前区域的AWS Secrets Manager控制台。 - 从左侧菜单中,选择**密钥**,然后选择**储存新的密钥**并选择**其他类型的密钥**类型。 - 根据下面的格式在Plaintext输入框中填写`access_key_id`和`secret_access_key`信息。有关更多信息,请参阅*IAM用户指南*中的[IAM功能](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html)。选择**下一步**。 @@ -54,7 +69,195 @@ 任务创建成功后,会出现在**任务**页面。 -### 从 KMS 加密的 Amazon S3 传输 S3 对象 +!!! Note "注意" + 如果 Amazon S3 中的目标存储桶设置为要求使用 Amazon S3 托管密钥加密所有上传的数据。 您可以查看下面的教程。 + +**使用 Amazon S3 托管密钥加密的目标存储桶** + +如果您在 Amazon S3 中的目标存储桶设置为要求所有数据上传都使用 Amazon S3 托管密钥进行加密,具体信息请参见 [文档](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html), + +1. 从目标桶配置中“目标存储桶策略检查”下的下拉菜单中选择“SSE-S3 AES256”。 + +2. 如果您的目标存储桶设置为要求仅使用 SSE-KMS(使用 AWS Key Management Service 的服务器端加密)加密对象[详细信息](https://docs.aws.amazon.com/AmazonS3/latest/userguide/specifying-kms-encryption.html), 并且你的策略类似于提供的示例: + + + ``` + { + "Version": "2012-10-17", + "Id": "PutObjectPolicy", + "Statement": [ + { + "Sid": "DenyIncorrectEncryptionHeader", + "Effect": "Deny", + "Principal": "*", + "Action": "s3:PutObject", + "Resource": "arn:aws-cn:s3:::dth-sse-debug-cn-north-1/*", + "Condition": { + "StringNotEquals": { + "s3:x-amz-server-side-encryption": "aws:kms" + } + } + }, + { + "Sid": "DenyUnencryptedObjectUploads", + "Effect": "Deny", + "Principal": "*", + "Action": "s3:PutObject", + "Resource": "arn:aws-cn:s3:::dth-sse-debug-cn-north-1/*", + "Condition": { + "StringNotEquals": { + "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws-cn:kms:cn-north-1:123456789012:key/7c54749e-eb6a-42cc-894e-93143b32e7c0" + } + } + } + ] + } + ``` + + 在这种情况下,您应该在目标桶配置的“目标存储桶策略检查”下拉菜单中选择“SSE-KMS”。 此外,您还需要提供 KMS 密钥 ID,例如示例中的“7c54749e-eb6a-42cc-894e-93143b32e7c0”。 + + + + +## 使用DTH S3 插件创建传输任务 +!!! Note "注意" + 本教程是纯后端版本的部署指南。如需了解详情,请参考该[DTH S3插件介绍](https://github.com/awslabs/data-transfer-hub/blob/main/docs/S3_PLUGIN_CN.md). + +**1. 
准备VPC** + +此解决方案可以部署在公共和私有子网中。 建议使用公共子网。 + +- 如果您想使用现有的 VPC,请确保 VPC 至少有 2 个子网,并且两个子网都必须具有公网访问权限(带有 Internet 网关的公有子网或带有 NAT 网关的私有子网) + +- 如果您想为此解决方案创建新的默认 VPC,请转到步骤2,并确保您在创建集群时选择了*为此集群创建一个新的 VPC*。 + +**2. 配置凭据** + +您需要提供`AccessKeyID`和`SecretAccessKey`(即`AK/SK`)才能从另一个 AWS 账户或其他云存储服务读取或写入 S3中的存储桶,凭证将存储在 AWS Secrets Manager 中。 您**不需要**为此方案部署的当前账户里的存储桶创建凭证。 + +打开AWS 管理控制台 > Secrets Manager。 在 Secrets Manager 主页上,单击 **存储新的密钥**。 对于密钥类型,请使用**其他类型的秘密**。 对于键/值对,请将下面的 JSON 文本复制并粘贴到明文部分,并相应地将值更改为您的 AK/SK。 + +``` +{ + "access_key_id": "", + "secret_access_key": "" +} +``` + +![密钥](../images/secret_cn.png) + +然后下一步指定密钥名称,最后一步点击创建。 + + +> 注意:如果该AK/SK是针对源桶, 则需要具有桶的**读**权限, 如果是针对目标桶, 则需要具有桶的**读与写**权限。 如果是Amazon S3, 可以参考[配置凭据](./IAM-Policy_CN.md) + + +**3. 启动AWS Cloudformation部署** + +请按照以下步骤通过AWS Cloudformation部署此解决方案。 + +1. 登录到AWS管理控制台,切换到将CloudFormation Stack部署到的区域。 + +1. 单击以下按钮在该区域中启动CloudFormation堆栈。 + + - 部署到AWS中国北京和宁夏区 + + [![Launch Stack](../images/launch-stack.svg)](https://console.amazonaws.cn/cloudformation/home#/stacks/create/template?stackName=DTHS3Stack&templateURL=https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferS3Stack.template) + + - 部署到AWS海外区 + + [![Launch Stack](../images/launch-stack.svg)](https://console.aws.amazon.com/cloudformation/home#/stacks/create/template?stackName=DTHS3Stack&templateURL=https://solutions-reference.s3.amazonaws.com/data-transfer-hub/latest/DataTransferS3Stack.template) + + + +1. 单击**下一步**。 相应地为参数指定值。 如果需要,请更改堆栈名称。 + +1. 单击**下一步**。 配置其他堆栈选项,例如标签(可选)。 + +1. 单击**下一步**。 查看并勾选确认,然后单击“创建堆栈”开始部署。 + +部署预计用时3-5分钟 + +## 使用AWS CLI创建传输任务 +您可以使用 [AWS CLI][aws-cli] 创建 Amazon S3 传输任务。如果您同时部署了DTH Portal,通过CLI启动的任务将不会出现在您Portal的任务列表界面中。 + +1. 创建一个具有两个公有子网或两个拥有[NAT 网关][nat] 私有子网的Amazon VPC。 + +2. 根据需要替换``为`https://s3.amazonaws.com/solutions-reference/data-transfer-hub/latest/DataTransferS3Stack-ec2.template`。 + +3. 
转到您的终端并输入以下命令。详情请参考**参数列表**。 + + ```shell + aws cloudformation create-stack --stack-name dth-s3-task --template-url CLOUDFORMATION_URL \ + --capabilities CAPABILITY_NAMED_IAM \ + --parameters \ + ParameterKey=alarmEmail,ParameterValue=your_email@example.com \ + ParameterKey=destBucket,ParameterValue=dth-recive-cn-north-1 \ + ParameterKey=destPrefix,ParameterValue=test-prefix \ + ParameterKey=destCredentials,ParameterValue=drh-cn-secret-key \ + ParameterKey=destInCurrentAccount,ParameterValue=false \ + ParameterKey=destRegion,ParameterValue=cn-north-1 \ + ParameterKey=destStorageClass,ParameterValue=STANDARD \ + ParameterKey=destPutObjectSSEType,ParameterValue=None \ + ParameterKey=destPutObjectSSEKmsKeyId,ParameterValue= \ + ParameterKey=srcBucket,ParameterValue=dth-us-west-2 \ + ParameterKey=srcInCurrentAccount,ParameterValue=true \ + ParameterKey=srcCredentials,ParameterValue= \ + ParameterKey=srcRegion,ParameterValue=us-west-2 \ + ParameterKey=srcPrefix,ParameterValue=case1 \ + ParameterKey=srcType,ParameterValue=Amazon_S3 \ + ParameterKey=ec2VpcId,ParameterValue=vpc-040bbab85f0e4e088 \ + ParameterKey=ec2Subnets,ParameterValue=subnet-0d1bf2725ab8e94ee\\,subnet-06d17b2b3286be40e \ + ParameterKey=finderEc2Memory,ParameterValue=8 \ + ParameterKey=ec2CronExpression,ParameterValue="0/60 * * * ? *" \ + ParameterKey=includeMetadata,ParameterValue=false \ + ParameterKey=srcEvent,ParameterValue=No \ + ParameterKey=maxCapacity,ParameterValue=20 \ + ParameterKey=minCapacity,ParameterValue=1 \ + ParameterKey=desiredCapacity,ParameterValue=1 + ``` + + +**参数列表** + +| 参数名称 | 允许值 | 默认值 | 说明 | +| --- | --- | --- | --- | +| alarmEmail | | | 用于接收错误信息的电子邮件 +| desiredCapacity | | 1 | Auto Scaling 组的所需容量 +| destAcl | private
public-read
public-read-write
authenticated-read
aws-exec-read
bucket-owner-read
bucket-owner-full-control | bucket-owner-full-control | 目标桶访问控制列表
+| destBucket | | | 目标桶名称
+| destCredentials | | | Secrets Manager 中用于保存目标存储桶的 AK/SK 凭证的密钥名称。如果目标存储桶在当前帐户中,则留空
+| destInCurrentAccount | true
false | true | 目标存储桶是否在当前帐户中。如果不在,您应该提供具有读写权限的凭证 +| destPrefix | | | 目标前缀(可选) +| destRegion | | | 目标区域名称 +| destStorageClass | STANDARD
STANDARD_IA
ONEZONE_IA
INTELLIGENT_TIERING | INTELLIGENT_TIERING | 目标存储类。默认值为INTELLIGENT_TIERING +| destPutObjectSSEType | None
AES256
AWS_KMS | None | Specifies the server-side encryption algorithm used for storing objects in Amazon S3. 'AES256' applies AES256 encryption, 'AWS_KMS' uses AWS Key Management Service encryption, and 'None' indicates that no encryption is applied.
+| destPutObjectSSEKmsKeyId | | | Specifies the ID of the symmetric customer managed AWS KMS key (CMK) used for object encryption. Set this parameter only when destPutObjectSSEType is 'AWS_KMS'; otherwise, leave it empty. The default value is not set.
+| isPayerRequest | true
false | false | 是否启用付费者请求模式 | +| ec2CronExpression | | 0/60 * * * ? * | EC2 Finder 任务的 Cron 表达式。
"" 表示一次性转移。| +| finderEc2Memory | 8
16
32
64
128
256
384
512 | 8 GB | Finder 任务使用的内存量(以 GB 为单位)
+| ec2Subnets | | | 两个公共子网或具有 [NAT 网关][nat] 的两个私有子网 |
+| ec2VpcId | | | 运行 EC2 任务的 VPC ID,例如 vpc-bef13dc7
+| finderDepth | | 0 | 要并行比较的子文件夹的深度。0 表示按顺序比较所有对象
+| finderNumber | | 1 | 并行运行的查找器线程数
+| includeMetadata | true
false | false | 添加对象元数据的复制,会有额外的 API 调用 +| maxCapacity | | 20 | Auto Scaling 组的最大容量 +| minCapacity | | 1 | Auto Scaling 组的最小容量 +| srcBucket | | | 源桶名称 +| srcCredentials | | | Secrets Manager 中用于保存 Source Bucket 的 AK/SK 凭证的密钥名称。 如果源存储桶在当前帐户中或源是开源数据,则留空 +| srcEndpoint | | | 源端点 URL(可选),除非您想提供自定义端点 URL,否则留空 +| srcEvent | No
Create
CreateAndDelete | No | 是否启用 S3 Event 触发复制。 请注意,S3Event 仅适用于源位于当前帐户中的情况 +| srcInCurrentAccount | true
false | false | 源存储桶是否在当前帐户中。如果不在,您应该提供具有读取权限的凭证
+| srcPrefix | | | 源前缀(可选)
+| srcPrefixListBucket | | | Source prefix list file S3 bucket name (Optional). It is used to store the Source prefix list file. The specified bucket must be located in the same AWS region and under the same account as the DTH deployment. If your Prefix List File is stored in the Source Bucket, please leave this parameter empty.
+| srcPrefixsListFile | | | Source prefix list file S3 path (Optional). It supports txt type, for example, my_prefix_list.txt, and the maximum number of lines is 10 million
+| srcRegion | | | Source region name
+| srcSkipCompare | true
false | false | Indicates whether to skip the data comparison in the task finding process. If yes, all data in the source will be sent to the destination
+| srcType | Amazon_S3
Aliyun_OSS
Qiniu_Kodo
Tencent_COS | Amazon_S3 | 如果选择使用Endpoint模式,请选择Amazon_S3 +| workerNumber | 1 ~ 10 | 4 | 在一个工作节点/实例中运行的工作线程数。 对于小文件(平均文件大小 < 1MB),您可以增加工作线程数量以提高传输性能。 + + +## 从 KMS 加密的 Amazon S3 传输 S3 对象 Data Transfer Hub 默认支持使用 SSE-S3 和 SSE-KMS 的数据源。 @@ -88,4 +291,8 @@ Data Transfer Hub 默认支持使用 SSE-S3 和 SSE-KMS 的数据源。 } ``` + +[aws-cli]: https://aws.amazon.com/cli/ +[nat]: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html + [iam-role]: https://us-east-1.console.aws.amazon.com/iamv2/home#/roles \ No newline at end of file diff --git a/source/constructs/lambda/api/task/test/test_api_task_v2.py b/source/constructs/lambda/api/task/test/test_api_task_v2.py index 46956cd..cf0d476 100644 --- a/source/constructs/lambda/api/task/test/test_api_task_v2.py +++ b/source/constructs/lambda/api/task/test/test_api_task_v2.py @@ -84,6 +84,112 @@ } +task_info_2 = { + "id": "19c9d25f-90ae-4397-98b6-855d75c38473", + "createdAt": "2024-01-08T06:03:25.680647Z", + "description": "", + "executionArn": "arn:aws:states:us-west-2:123456789012:execution:APICfnWorkflowCfnDeploymentStateMachineFC154A5B-fkxpvfKrg7iD:9990ba90-963e-465d-8799-2162b25279ca", + "parameters": [ + {"ParameterKey": "srcType", "ParameterValue": "Amazon_S3"}, + {"ParameterKey": "srcEndpoint", "ParameterValue": ""}, + {"ParameterKey": "srcBucket", "ParameterValue": "dth-dev-tokyo-kervin-01"}, + {"ParameterKey": "srcPrefix", "ParameterValue": ""}, + {"ParameterKey": "srcPrefixsListFile", "ParameterValue": "prefix_list.txt"}, + {"ParameterKey": "srcPrefixListBucket", "ParameterValue": "prefix_list_bucket"}, + {"ParameterKey": "srcEvent", "ParameterValue": "No"}, + {"ParameterKey": "srcRegion", "ParameterValue": "ap-northeast-1"}, + {"ParameterKey": "srcInCurrentAccount", "ParameterValue": "true"}, + {"ParameterKey": "srcCredentials", "ParameterValue": ""}, + {"ParameterKey": "destBucket", "ParameterValue": "dth-sse-debug-cn-north-1"}, + {"ParameterKey": "destPrefix", "ParameterValue": "v260-dev-e2e-0108-01"}, + 
{"ParameterKey": "destStorageClass", "ParameterValue": "INTELLIGENT_TIERING"}, + {"ParameterKey": "destRegion", "ParameterValue": "cn-north-1"}, + {"ParameterKey": "destInCurrentAccount", "ParameterValue": "false"}, + {"ParameterKey": "destCredentials", "ParameterValue": "dev-account-dth"}, + {"ParameterKey": "includeMetadata", "ParameterValue": "false"}, + {"ParameterKey": "isPayerRequest", "ParameterValue": "false"}, + {"ParameterKey": "destPutObjectSSEType", "ParameterValue": "AES256"}, + {"ParameterKey": "destPutObjectSSEKmsKeyId", "ParameterValue": ""}, + {"ParameterKey": "destAcl", "ParameterValue": "bucket-owner-full-control"}, + {"ParameterKey": "ec2CronExpression", "ParameterValue": "0 */1 ? * * *"}, + {"ParameterKey": "maxCapacity", "ParameterValue": "20"}, + {"ParameterKey": "minCapacity", "ParameterValue": "1"}, + {"ParameterKey": "desiredCapacity", "ParameterValue": "1"}, + {"ParameterKey": "srcSkipCompare", "ParameterValue": "false"}, + {"ParameterKey": "finderDepth", "ParameterValue": "0"}, + {"ParameterKey": "finderNumber", "ParameterValue": "1"}, + {"ParameterKey": "finderEc2Memory", "ParameterValue": "512"}, + {"ParameterKey": "workerNumber", "ParameterValue": "4"}, + {"ParameterKey": "alarmEmail", "ParameterValue": "xxxxxx"}, + {"ParameterKey": "ec2VpcId", "ParameterValue": "vpc-0ecf829216cbb6f25"}, + { + "ParameterKey": "ec2Subnets", + "ParameterValue": "subnet-0f909c9db82df2026,subnet-086fc4c755dfcbac1", + }, + ], + "progress": "IN_PROGRESS", + "scheduleType": "FIXED_RATE", + "stackId": "arn:aws:cloudformation:us-west-2:123456789012:stack/DTH-S3EC2-ab5f8/a379eaa0-adeb-11ee-bd63-0a997cb6160f", + "stackOutputs": [ + { + "Description": "Split Part DynamoDB Table Name", + "OutputKey": "CommonSplitPartTableName68CB1187", + "OutputValue": "DTH-S3EC2-ab5f8-S3SplitPartTable-2PIFDU7JHQ4M", + }, + { + "Description": "SFN ARN", + "OutputKey": "MultiPartStateMachineSfnArnFA9E5135", + "OutputValue": 
"arn:aws:states:us-west-2:123456789012:stateMachine:DTH-S3EC2-ab5f8-MultiPart-ControllerSM", + }, + { + "Description": "Alarm Topic Name", + "OutputKey": "CommonAlarmTopicName54A80B94", + "OutputValue": "DTH-S3EC2-ab5f8-S3TransferAlarmTopic-aTXEi0GbJHtH", + }, + { + "Description": "Stack Name", + "OutputKey": "CommonStackName013B3BAB", + "OutputValue": "DTH-S3EC2-ab5f8", + }, + { + "Description": "Worker ASG Name", + "OutputKey": "EC2WorkerStackWorkerASGName4A04CB6D", + "OutputValue": "DTH-S3EC2-ab5f8-Worker-ASG", + }, + { + "Description": "Queue Name", + "OutputKey": "CommonQueueNameEB26B1B7", + "OutputValue": "DTH-S3EC2-ab5f8-S3TransferQueue-swVYDUd40h1z", + }, + { + "Description": "Dead Letter Queue Name", + "OutputKey": "CommonDLQQueueName98D51C56", + "OutputValue": "DTH-S3EC2-ab5f8-S3TransferQueueDLQ-xTUXojgnYlkB", + }, + { + "Description": "DynamoDB Table Name", + "OutputKey": "CommonTableName4099A6E9", + "OutputValue": "DTH-S3EC2-ab5f8-S3TransferTable-1EAIM9YBKD15G", + }, + { + "Description": "Worker Log Group Name", + "OutputKey": "EC2WorkerStackWorkerLogGroupName752FB4C3", + "OutputValue": "DTH-S3EC2-ab5f8-CommonS3RepWorkerLogGroupE38567D7-4OC3GMr616VT", + }, + { + "Description": "Finder Log Group Name", + "OutputKey": "FinderStackFinderLogGroupNameB966FCFB", + "OutputValue": "DTH-S3EC2-ab5f8-FinderStackFinderLogGroup9408DAAE-SRIf6z1UPORs", + }, + ], + "stackStatus": "CREATE_COMPLETE", + "templateUrl": "https://solutions-features-reference.s3.amazonaws.com/data-transfer-hub/develop/DataTransferS3Stack.template", + "totalObjectCount": "4", + "type": "S3EC2", + "updatedDt": "2024-01-08T06:07:37Z", +} + + @pytest.fixture def ddb_client(): with mock_dynamodb(): @@ -106,7 +212,7 @@ def ddb_client(): "WriteCapacityUnits": 10 }, ) - data_list = [task_info_1] + data_list = [task_info_1, task_info_2] with app_log_config_table.batch_writer() as batch: for data in data_list: batch.put_item(Item=data) @@ -126,17 +232,38 @@ def test_lambda_function(ddb_client): "info": 
{ "fieldName": "listTasksV2", "parentTypeName": "Query", - "variables": { - "page": 1, - "count": 10 - }, + "variables": {"page": 1, "count": 10}, }, }, None, ) # Expect Execute successfully. - assert result["total"] == 1 + assert result["total"] == 2 + tasks = result["items"] + specific_task = next( + ( + task + for task in tasks + if task["id"] == "19c9d25f-90ae-4397-98b6-855d75c38473" + ), + None, + ) + assert specific_task is not None + src_prefix_list_bucket_value = "prefix_list_bucket" + assert any( + param["ParameterKey"] == "srcPrefixListBucket" + and param["ParameterValue"] == src_prefix_list_bucket_value + for param in specific_task["parameters"] + ) + assert any( + param["ParameterKey"] == "srcPrefixListBucket" + for param in specific_task["parameters"] + ) + assert any( + param["ParameterKey"] == "srcPrefixListBucket" + for param in specific_task["parameters"] + ) def test_args_error(ddb_client): import api_task_v2 diff --git a/source/constructs/lib/api-stack.ts b/source/constructs/lib/api-stack.ts index 6d6408f..5cb7fed 100644 --- a/source/constructs/lib/api-stack.ts +++ b/source/constructs/lib/api-stack.ts @@ -140,6 +140,9 @@ import * as appsync from "@aws-cdk/aws-appsync-alpha"; }, ]) const snsKey = new kms.Key(this, 'SNSTopicEncryptionKey', { + removalPolicy: RemovalPolicy.DESTROY, + pendingWindow: Duration.days(7), + description: "Data Transfer Hub KMS-CMK for encrypting the objects in SNS", enableKeyRotation: true, enabled: true, alias: `alias/dth/sns/${Aws.STACK_NAME}`, diff --git a/source/constructs/lib/cfn-step-functions.ts b/source/constructs/lib/cfn-step-functions.ts index d908d0a..2100446 100644 --- a/source/constructs/lib/cfn-step-functions.ts +++ b/source/constructs/lib/cfn-step-functions.ts @@ -328,6 +328,7 @@ export class CloudFormationStateMachine extends Construct { "kms:TagResource", "kms:UntagResource", "kms:UpdateAlias", + "kms:ScheduleKeyDeletion", ], resources: [ `*`, diff --git a/source/constructs/lib/portal-stack.ts 
b/source/constructs/lib/portal-stack.ts index e4a416f..895cd90 100644 --- a/source/constructs/lib/portal-stack.ts +++ b/source/constructs/lib/portal-stack.ts @@ -46,6 +46,47 @@ export class PortalStack extends Construct { constructor(scope: Construct, id: string, props: PortalStackProps) { super(scope, id); + const getDefaultBehavior = () => { + if (props.auth_type === AuthType.COGNITO) { + return { + responseHeadersPolicy: new cloudfront.ResponseHeadersPolicy( + this, + 'ResponseHeadersPolicy', + { + responseHeadersPolicyName: `SecHdr${Aws.REGION}${Aws.STACK_NAME}`, + comment: 'Security Headers Policy', + securityHeadersBehavior: { + contentSecurityPolicy: { + contentSecurityPolicy: `default-src 'self'; upgrade-insecure-requests; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:; connect-src 'self' ${props.aws_appsync_graphqlEndpoint} https://cognito-idp.${Aws.REGION}.amazonaws.com/`, + override: true, + }, + contentTypeOptions: { override: true }, + frameOptions: { + frameOption: cloudfront.HeadersFrameOption.DENY, + override: true, + }, + referrerPolicy: { + referrerPolicy: cloudfront.HeadersReferrerPolicy.NO_REFERRER, + override: true, + }, + strictTransportSecurity: { + accessControlMaxAge: Duration.seconds(600), + includeSubdomains: true, + override: true, + }, + xssProtection: { + protection: true, + modeBlock: true, + override: true, + }, + }, + } + ), + }; + } + return {}; + }; + const website = new CloudFrontToS3(this, 'Web', { bucketProps: { versioned: true, @@ -60,14 +101,15 @@ export class PortalStack extends Construct { minimumProtocolVersion: cloudfront.SecurityPolicyProtocol.TLS_V1_2_2019, enableIpv6: false, enableLogging: true, //Enable access logging for the distribution. 
- comment: 'Data Transfer Hub Portal Distribution', + comment: `Data Transfer Hub Portal Distribution (${Aws.REGION})`, errorResponses: [ { httpStatus: 403, responseHttpStatus: 200, responsePagePath: "/index.html", } - ] + ], + defaultBehavior: getDefaultBehavior(), }, insertHttpSecurityHeaders: false, }); @@ -88,6 +130,7 @@ export class PortalStack extends Construct { ); const websiteBucket = website.s3Bucket as s3.Bucket; + const websiteLoggingBucket = website.s3LoggingBucket as s3.Bucket; const websiteDist = website.cloudFrontWebDistribution.node.defaultChild as cloudfront.CfnDistribution if (props.auth_type === AuthType.OPENID) { @@ -198,6 +241,7 @@ function handler(event) { code: lambda.Code.fromAsset(path.join(__dirname, '../../custom-resource/')), environment: { WEB_BUCKET_NAME: websiteBucket.bucketName, + SRC_PREFIX_LIST_BUCKET_NAME: websiteLoggingBucket.bucketName, API_ENDPOINT: props.aws_appsync_graphqlEndpoint, OIDC_PROVIDER: props.aws_oidc_provider, OIDC_CLIENT_ID: props.aws_oidc_client_id, diff --git a/source/constructs/lib/s3-plugin/common-resources.ts b/source/constructs/lib/s3-plugin/common-resources.ts index 1ba990c..c2eae7b 100644 --- a/source/constructs/lib/s3-plugin/common-resources.ts +++ b/source/constructs/lib/s3-plugin/common-resources.ts @@ -144,6 +144,9 @@ export class CommonStack extends Construct { }); const snsKey = new kms.Key(this, "SNSTopicEncryptionKey", { + removalPolicy: RemovalPolicy.DESTROY, + pendingWindow: Duration.days(7), + description: "Data Transfer Hub KMS-CMK for encrypting the objects in SNS", enableKeyRotation: true, enabled: true, alias: `alias/dth/sns/${Aws.STACK_NAME}`, diff --git a/source/constructs/lib/s3-plugin/ec2-finder-stack.ts b/source/constructs/lib/s3-plugin/ec2-finder-stack.ts index 98f488f..ca837a1 100644 --- a/source/constructs/lib/s3-plugin/ec2-finder-stack.ts +++ b/source/constructs/lib/s3-plugin/ec2-finder-stack.ts @@ -145,7 +145,9 @@ export class Ec2FinderStack extends Construct { "32": { 
instanceType: "r6g.xlarge" }, "64": { instanceType: "r6g.2xlarge" }, "128": { instanceType: "r6g.4xlarge" }, - "256": { instanceType: "r6g.8xlarge" } + "256": { instanceType: "r6g.8xlarge" }, + "384": { instanceType: "r6g.12xlarge" }, + "512": { instanceType: "r6g.16xlarge" }, } }); @@ -238,6 +240,7 @@ export class Ec2FinderStack extends Construct { `echo "export SRC_BUCKET=${props.env.SRC_BUCKET}" >> env.sh`, `echo "export SRC_PREFIX=${props.env.SRC_PREFIX}" >> env.sh`, `echo "export SRC_PREFIX_LIST=${props.env.SRC_PREFIX_LIST}" >> env.sh`, + `echo "export SRC_PREFIX_LIST_BUCKET=${props.env.SRC_PREFIX_LIST_BUCKET}" >> env.sh`, `echo "export SRC_REGION=${props.env.SRC_REGION}" >> env.sh`, `echo "export SRC_ENDPOINT=${props.env.SRC_ENDPOINT}" >> env.sh`, `echo "export SRC_CREDENTIALS=${props.env.SRC_CREDENTIALS}" >> env.sh`, @@ -268,6 +271,8 @@ export class Ec2FinderStack extends Construct { `echo "sleep 61; mycount=0; while (( \\$mycount < 5 )); do aws autoscaling update-auto-scaling-group --region ${Aws.REGION} --auto-scaling-group-name ${Aws.STACK_NAME}-Finder-ASG --desired-capacity 0; sleep 10; ((mycount=\$mycount+1)); done;" >> start-finder.sh`, // change the asg desired-capacity to 0 to stop the finder task "chmod +x start-finder.sh", + // Add to startup items + 'echo "@reboot /home/ec2-user/start-finder.sh" >> /var/spool/cron/root', // Run the script "./start-finder.sh" ); diff --git a/source/constructs/lib/s3-plugin/ec2-worker-stack.ts b/source/constructs/lib/s3-plugin/ec2-worker-stack.ts index 0ac18df..f02b43a 100644 --- a/source/constructs/lib/s3-plugin/ec2-worker-stack.ts +++ b/source/constructs/lib/s3-plugin/ec2-worker-stack.ts @@ -226,6 +226,8 @@ export class Ec2WorkerStack extends Construct { `echo "export DEST_IN_CURRENT_ACCOUNT=${props.env.DEST_IN_CURRENT_ACCOUNT}" >> env.sh`, `echo "export DEST_STORAGE_CLASS=${props.env.DEST_STORAGE_CLASS}" >> env.sh`, `echo "export DEST_ACL=${props.env.DEST_ACL}" >> env.sh`, + `echo "export 
DEST_SSE_TYPE=${props.env.DEST_SSE_TYPE}" >> env.sh`,
+      `echo "export DEST_SSE_KMS_KEY_ID=${props.env.DEST_SSE_KMS_KEY_ID}" >> env.sh`,

       // `echo "export MULTIPART_THRESHOLD=${props.env.MULTIPART_THRESHOLD}" >> env.sh`,
       // `echo "export CHUNK_SIZE=${props.env.CHUNK_SIZE}" >> env.sh`,
diff --git a/source/constructs/lib/s3-plugin/s3-plugin-stack.ts b/source/constructs/lib/s3-plugin/s3-plugin-stack.ts
index b6e20c6..25481a0 100644
--- a/source/constructs/lib/s3-plugin/s3-plugin-stack.ts
+++ b/source/constructs/lib/s3-plugin/s3-plugin-stack.ts
@@ -77,7 +77,7 @@ export class DataTransferS3Stack extends Stack {
     const runType: RunType = this.node.tryGetContext("runType") || RunType.EC2;
-    const cliRelease = "1.3.0";
+    const cliRelease = "1.4.0";
     const srcType = new CfnParameter(this, "srcType", {
       description:
@@ -103,15 +103,26 @@
     const srcPrefixsListFile = new CfnParameter(this, "srcPrefixsListFile", {
       description:
-        "Source Prefixs List File S3 path (Optional), support txt type, the maximum number of lines is 10 millions. e.g. my_prefix_list.txt",
+        "Source Prefix List File S3 path (Optional). It supports txt type; the maximum number of lines is 10 million. e.g. my_prefix_list.txt",
       default: "",
       type: "String"
     });
     this.addToParamLabels(
-      "Source Prefixs List File",
+      "Source Prefix List File",
      srcPrefixsListFile.logicalId
    );
+    const srcPrefixListBucket = new CfnParameter(this, "srcPrefixListBucket", {
+      description:
+        "Source prefix list file S3 bucket name (Optional). It is used to store the Source prefix list file. The specified bucket must be located in the same AWS region and under the same account as the DTH deployment. 
If your PrefixList File is stored in the Source Bucket, please leave this parameter empty.", + default: "", + type: "String" + }); + this.addToParamLabels( + "Bucket Name for Source Prefix List File", + srcPrefixListBucket.logicalId + ); + const srcSkipCompare = new CfnParameter(this, "srcSkipCompare", { description: "Skip the data comparison in task finding process? If yes, all data in the source will be sent to the destination", @@ -241,6 +252,36 @@ export class DataTransferS3Stack extends Stack { }); this.addToParamLabels("Destination Access Control List", destAcl.logicalId); + const destPutObjectSSEType = new CfnParameter( + this, + "destPutObjectSSEType", + { + description: + "Specifies the server-side encryption algorithm used for storing objects in Amazon S3 (PutObjectPolicy).", + default: "None", + type: "String", + allowedValues: ["None", "AES256", "AWS_KMS"] + } + ); + this.addToParamLabels( + "Destination Bucket PutObjectPolicy SSE Type", + destPutObjectSSEType.logicalId + ); + + const destPutObjectSSEKmsKeyId = new CfnParameter( + this, + "destPutObjectSSEKmsKeyId", + { + description: "Destination Bucket SSE KMS Key ID (Optional). This field is required only if the SSE Type in your Destination Bucket's PutObject Policy is set to 'AWS_KMS'.", + default: "", + type: "String" + } + ); + this.addToParamLabels( + "Destination Bucket SSE KMS Key ID", + destPutObjectSSEKmsKeyId.logicalId + ); + const ec2VpcId = new CfnParameter(this, "ec2VpcId", { description: "VPC ID to run EC2 task, e.g. 
vpc-bef13dc7", default: "", @@ -260,7 +301,7 @@ export class DataTransferS3Stack extends Stack { description: "The amount of memory (in GB) used by the Finder task.", default: "8", type: "String", - allowedValues: ["8", "16", "32", "64", "128", "256"] + allowedValues: ["8", "16", "32", "64", "128", "256", "384", "512"] }); this.addToParamLabels("EC2 Finder Memory", finderEc2Memory.logicalId); @@ -323,6 +364,7 @@ export class DataTransferS3Stack extends Stack { srcBucket.logicalId, srcPrefix.logicalId, srcPrefixsListFile.logicalId, + srcPrefixListBucket.logicalId, srcRegion.logicalId, srcEndpoint.logicalId, srcInCurrentAccount.logicalId, @@ -339,7 +381,9 @@ export class DataTransferS3Stack extends Stack { destInCurrentAccount.logicalId, destCredentials.logicalId, destStorageClass.logicalId, - destAcl.logicalId + destAcl.logicalId, + destPutObjectSSEType.logicalId, + destPutObjectSSEKmsKeyId.logicalId ); this.addToParamGroups("Notification Information", alarmEmail.logicalId); this.addToParamGroups( @@ -420,7 +464,12 @@ export class DataTransferS3Stack extends Stack { `DestBucket`, destBucket.valueAsString ); - + const prefixListFileIBucket = s3.Bucket.fromBucketName( + this, + `PrefixListFileBucket`, + srcPrefixListBucket.valueAsString + ); + // Get VPC const vpc = ec2.Vpc.fromVpcAttributes(this, "EC2Vpc", { vpcId: ec2VpcId.valueAsString, @@ -498,6 +547,7 @@ export class DataTransferS3Stack extends Stack { SRC_BUCKET: srcBucket.valueAsString, SRC_PREFIX: srcPrefix.valueAsString, SRC_PREFIX_LIST: srcPrefixsListFile.valueAsString, + SRC_PREFIX_LIST_BUCKET: srcPrefixListBucket.valueAsString, SRC_REGION: srcRegion.valueAsString, SRC_ENDPOINT: srcEndpoint.valueAsString, SRC_CREDENTIALS: srcCredentials.valueAsString, @@ -528,7 +578,10 @@ export class DataTransferS3Stack extends Stack { commonStack.sqsQueue.grantSendMessages(finderStack.finderRole); srcIBucket.grantRead(finderStack.finderRole); destIBucket.grantRead(finderStack.finderRole); - 
multipartStateMachine.multiPartControllerStateMachine.grantRead(finderStack.finderRole); + prefixListFileIBucket.grantRead(finderStack.finderRole); + multipartStateMachine.multiPartControllerStateMachine.grantRead( + finderStack.finderRole + ); const workerEnv = { JOB_TABLE_NAME: commonStack.jobTable.tableName, @@ -554,6 +607,8 @@ export class DataTransferS3Stack extends Stack { DEST_IN_CURRENT_ACCOUNT: destInCurrentAccount.valueAsString, DEST_STORAGE_CLASS: destStorageClass.valueAsString, DEST_ACL: destAcl.valueAsString, + DEST_SSE_TYPE: destPutObjectSSEType.valueAsString, + DEST_SSE_KMS_KEY_ID: destPutObjectSSEKmsKeyId.valueAsString, FINDER_DEPTH: finderDepth.valueAsString, FINDER_NUMBER: finderNumber.valueAsString, @@ -581,7 +636,9 @@ export class DataTransferS3Stack extends Stack { commonStack.sqsQueue.grantSendMessages(ec2Stack.workerAsg.role); srcIBucket.grantRead(ec2Stack.workerAsg.role); destIBucket.grantReadWrite(ec2Stack.workerAsg.role); - multipartStateMachine.multiPartControllerStateMachine.grantStartExecution(ec2Stack.workerAsg.role); + multipartStateMachine.multiPartControllerStateMachine.grantStartExecution( + ec2Stack.workerAsg.role + ); asgName = ec2Stack.workerAsg.autoScalingGroupName; } diff --git a/source/constructs/package.json b/source/constructs/package.json index a55ecc3..52ad2df 100755 --- a/source/constructs/package.json +++ b/source/constructs/package.json @@ -21,24 +21,24 @@ "devDependencies": { "@types/aws-lambda": "^8.10.83", "@types/jest": "29.5.3", - "@types/node": "20.6.0", + "@types/node": "20.11.0", "@types/uuid": "9.0.2", - "aws-cdk": "2.95.0", - "aws-cdk-lib": "2.95.0", + "aws-cdk": "2.121.1", + "aws-cdk-lib": "2.121.1", "jest": "^29.7.0", "ts-jest": "^29.1.1", "ts-node": "^10.2.1", - "typescript": "5.2.2", + "typescript": "5.3.3", "uuid": "9.0.0" }, "dependencies": { - "@aws-solutions-constructs/aws-cloudfront-s3": "2.44.0", - "@aws-solutions-constructs/core": "2.44.0", + "@aws-solutions-constructs/aws-cloudfront-s3": "2.45.0", 
+ "@aws-solutions-constructs/core": "2.45.0", "@aws-cdk/aws-appsync-alpha": "2.59.0-alpha.0", - "aws-cdk": "2.95.0", - "aws-cdk-lib": "2.95.0", - "cdk-nag": "2.27.66", - "constructs": "10.2.55", + "aws-cdk": "2.121.1", + "aws-cdk-lib": "2.121.1", + "cdk-nag": "2.28.3", + "constructs": "10.3.0", "source-map-support": "0.5.21" } } diff --git a/source/custom-resource/lambda_function.py b/source/custom-resource/lambda_function.py index 36f6870..8b115ec 100644 --- a/source/custom-resource/lambda_function.py +++ b/source/custom-resource/lambda_function.py @@ -25,6 +25,7 @@ s3 = boto3.resource("s3", region_name=default_region, config=default_config) bucket_name = os.environ.get("WEB_BUCKET_NAME") +src_prefix_list_bucket = os.environ.get("SRC_PREFIX_LIST_BUCKET_NAME") api_endpoint = os.environ.get("API_ENDPOINT") user_pool_id = os.environ.get("USER_POOL_ID") user_pool_client_id = os.environ.get("USER_POOL_CLIENT_ID") @@ -65,12 +66,12 @@ def upgrade_data(): def get_config_str(): - """ Get config string """ + """Get config string""" export_json = { "taskCluster": { "ecsVpcId": ecs_vpc_id, - "ecsClusterName":ecs_cluster_name, - "ecsSubnets": ecs_subnets.split(",") + "ecsClusterName": ecs_cluster_name, + "ecsSubnets": ecs_subnets.split(","), }, "aws_appsync_region": region, "aws_appsync_authenticationType": authentication_type, @@ -82,14 +83,15 @@ def get_config_str(): "aws_oidc_client_id": client_id, "aws_cloudfront_url": cloudfront_url, "aws_appsync_graphqlEndpoint": api_endpoint, - "aws_user_pools_web_client_id": user_pool_client_id + "aws_user_pools_web_client_id": user_pool_client_id, + "src_prefix_list_bucket": src_prefix_list_bucket, } return json.dumps(export_json) def write_to_s3(config_str): - """ Write config file to S3 """ + """Write config file to S3""" logger.info("Put config file to S3") key_name = "aws-exports.json" s3.Bucket(bucket_name).put_object(Key=key_name, Body=config_str) @@ -97,7 +99,7 @@ def write_to_s3(config_str): def 
cloudfront_invalidate(distribution_id, distribution_paths): - """ Create a CloudFront invalidation request """ + """Create a CloudFront invalidation request""" invalidation_resp = cloudfront.create_invalidation( DistributionId=distribution_id, InvalidationBatch={ diff --git a/source/custom-resource/test/conftest.py b/source/custom-resource/test/conftest.py index 805050d..77c6e0e 100644 --- a/source/custom-resource/test/conftest.py +++ b/source/custom-resource/test/conftest.py @@ -16,6 +16,7 @@ def default_environment_variables(): os.environ["AWS_DEFAULT_REGION"] = "us-east-1" os.environ["WEB_BUCKET_NAME"] = "solution-web-bucket" + os.environ["SRC_PREFIX_LIST_BUCKET_NAME"] = "solution-web-logging-bucket" os.environ["API_ENDPOINT"] = "https:/solution.xxx.amazonaws.com/graphql" os.environ["USER_POOL_ID"] = "abc" os.environ["USER_POOL_CLIENT_ID"] = "abcd" diff --git a/source/portal/mock/styleMock.ts b/source/portal/mock/styleMock.ts new file mode 100644 index 0000000..dc791db --- /dev/null +++ b/source/portal/mock/styleMock.ts @@ -0,0 +1,17 @@ +/* +Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. + +Licensed under the Apache License, Version 2.0 (the "License"). +You may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ +// eslint-disable-next-line no-undef +module.exports = {}; diff --git a/source/portal/package.json b/source/portal/package.json index 48951fd..351cd08 100644 --- a/source/portal/package.json +++ b/source/portal/package.json @@ -9,60 +9,61 @@ }, "private": true, "dependencies": { - "@apollo/client": "^3.8.3", + "@apollo/client": "^3.8.8", "@aws-amplify/ui-components": "^1.9.40", - "@aws-amplify/ui-react": "^5.3.0", + "@aws-amplify/ui-react": "^5.3.3", "@material-ui/core": "^4.12.4", "@material-ui/icons": "^4.11.3", "@material-ui/lab": "4.0.0-alpha.61", - "@testing-library/jest-dom": "^6.1.3", - "@testing-library/react": "^14.0.0", - "@testing-library/user-event": "^14.4.3", + "@testing-library/jest-dom": "^6.2.0", + "@testing-library/react": "^14.1.2", + "@testing-library/user-event": "^14.5.2", "@types/classnames": "^2.3.1", - "@types/jest": "^29.5.4", - "@types/node": "^20.6.0", - "@types/react": "^18.2.21", - "@types/react-copy-to-clipboard": "^5.0.4", - "@types/react-dom": "^18.2.7", + "@types/jest": "^29.5.11", + "@types/node": "^20.11.0", + "@types/react": "^18.2.46", + "@types/react-copy-to-clipboard": "^5.0.7", + "@types/react-dom": "^18.2.18", "@types/react-loader-spinner": "^4.0.0", "@types/react-router-dom": "^5.3.3", - "apexcharts": "^3.42.0", + "apexcharts": "^3.45.1", "apollo-link": "^1.2.14", - "aws-amplify": "^5.3.10", + "aws-amplify": "^5.3.13", "aws-appsync-auth-link": "^3.0.7", "aws-appsync-subscription-link": "^3.1.2", - "axios": "^1.5.0", - "classnames": "^2.3.2", - "date-fns": "^2.30.0", - "i18next": "^23.5.0", - "i18next-browser-languagedetector": "^7.1.0", - "i18next-http-backend": "^2.2.2", + "axios": "^1.6.4", + "classnames": "^2.5.1", + "date-fns": "^3.2.0", + "i18next": "^23.7.16", + "i18next-browser-languagedetector": "^7.2.0", + "i18next-http-backend": "^2.4.2", "lodash.clonedeep": "^4.5.0", - "moment": "^2.29.4", + "moment": "^2.30.1", "node-sass": "^9.0.0", "oidc-client": "^1.11.5", - "oidc-client-ts": "^2.2.5", + 
"oidc-client-ts": "^2.4.0", "react": "^18.2.0", "react-apexcharts": "^1.4.1", "react-copy-to-clipboard": "^5.1.0", "react-dom": "^18.2.0", - "react-i18next": "^13.2.2", - "react-loader-spinner": "^5.4.5", - "react-minimal-datetime-range": "^2.0.9", + "react-i18next": "^14.0.0", + "react-loader-spinner": "^6.1.6", + "react-minimal-datetime-range": "^2.1.0", "react-number-format": "^5.3.1", - "react-oidc-context": "^2.3.0", - "react-router-dom": "^6.15.0", - "redux": "^4.2.1", + "react-oidc-context": "^2.3.1", + "react-router-dom": "^6.21.1", + "redux": "^5.0.1", "redux-react-hook": "^4.0.3", - "sweetalert2": "11.7.27", - "typescript": "^5.2.2" + "sweetalert2": "11.10.2", + "typescript": "^5.3.3" }, "scripts": { "start": "cross-env TSC_WATCHFILE=UseFsEventsWithFallbackDynamicPolling react-scripts start", "win-start": "react-scripts start", "build": "react-scripts build && rimraf build/aws-exports.json", "test": "react-scripts test", - "eject": "react-scripts eject" + "eject": "react-scripts eject", + "test:coverage": "react-scripts test --coverage --watchAll=false --collectCoverageFrom='src/**/*.{ts,tsx}'" }, "eslintConfig": { "extends": "react-app" @@ -80,19 +81,27 @@ ] }, "devDependencies": { - "@types/lodash.clonedeep": "^4.5.7", - "@typescript-eslint/eslint-plugin": "^6.6.0", - "@typescript-eslint/parser": "^6.6.0", - "browserslist": "^4.21.10", + "@types/lodash.clonedeep": "^4.5.9", + "@typescript-eslint/eslint-plugin": "^6.17.0", + "@typescript-eslint/parser": "^6.17.0", + "browserslist": "^4.22.2", "cross-env": "^7.0.3", - "eslint": "^8.48.0", + "eslint": "^8.56.0", "eslint-config-standard": "^17.1.0", - "eslint-plugin-import": "^2.28.1", - "eslint-plugin-n": "^16.0.2", + "eslint-plugin-import": "^2.29.1", + "eslint-plugin-n": "^16.6.1", "eslint-plugin-promise": "^6.1.1", "eslint-plugin-react": "^7.33.2", "react-scripts": "^5.0.1", - "rimraf": "^5.0.1" + "rimraf": "^5.0.5" + }, + "jest": { + "moduleNameMapper": { + "\\.(css|less|scss|sss|styl)$": 
"/mock/styleMock.ts" + }, + "transformIgnorePatterns": [ + "node_modules/(?!@codemirror)/" + ] }, "overrides": { "nth-check": "2.1.1" diff --git a/source/portal/public/locales/en/translation.json b/source/portal/public/locales/en/translation.json index 3ef9b95..bfda3ff 100644 --- a/source/portal/public/locales/en/translation.json +++ b/source/portal/public/locales/en/translation.json @@ -127,10 +127,11 @@ "objectACLDesc3": " Use the default value if you don't want to set the object ACL.", "transferType": "Transfer Type", "nameOfPrefixList": "Name of Prefix List", - "nameOfPrefixListDesc": "Upload the prefix list (.txt file, one prefix per line) in root directory of source bucket. E.g. prefix_list.txt", + "nameOfPrefixListDesc": "Upload the prefix list (.txt file, one prefix per line) in root directory of <0>. E.g. prefix_list.txt", "govCloudTips": "GovCloud data transfer limited in US", "payerRequest": "Enable S3 Payer Request?", - "payerRequestDesc": "S3 Payer Request required the Requestor pay DTO fee" + "payerRequestDesc": "S3 Payer Request required the Requester pay DTO fee", + "solutionLoggingBucket": "Solution logging bucket" }, "dest": { "title": "Destination settings", @@ -143,7 +144,11 @@ "storageClass": "Storage Class", "storageClassDesc": "Objects will be stored with the selected storage class.", "destRegionName": "Region Name", - "destRegionNameDesc": "You can enter region name or region code." 
+ "destRegionNameDesc": "You can enter region name or region code.", + "encryptionType": "Encryption type", + "encryptionTypeDesc": "Check the destination bucket encryption type.", + "encryptionKey": "KMS Key ID", + "encryptionKeyDesc": "Input the KMS Key ID used to access the destination bucket" }, "credential": { "title": "Credentials", @@ -397,7 +402,8 @@ "destPrefixInvalid": "Do not append / at the end of the prefix", "emailRequired": "Alarm Email is required", "emailValidate": "Alarm Email must be validate", - "credentialRequired": "Secret Key is Required" + "credentialRequired": "Secret Key is Required", + "kmsKeyRequired": "Please input the KMS Key ID" } }, "comps": { @@ -448,7 +454,7 @@ }, "engineSettingsEC2": { "name": "Engine settings", - "tip1": "The S3 transfer engine use Amazon EC2 Graviton2 in an Auto Scaling Group to transfer objects. You can adjust the concurrency by setting the minimum, maximum and desired instance number. The transfer engine will auto scale between the minimun value and maximum value.", + "tip1": "The S3 transfer engine uses Amazon EC2 Graviton2 in an Auto Scaling Group to transfer objects. You can adjust the concurrency by setting the minimum, maximum and desired instance number. The transfer engine will auto scale between the minimum value and maximum value.", "tip2": "You can adjust these numbers on Amazon Web Services Console after creating the task.", "linkName": "Adjust the Auto Scaling Group to change the transfer concurrency." }, @@ -467,7 +473,7 @@ "numberName": "Finder Number", "ex1": "Example 1: ", "ex2": "Example 2: ", - "tip1": "If there're 10 subdirectories with over 100k files each, it's recommended to set finderDepth=1 and finderNumber=10", + "tip1": "If there are 10 subdirectories with over 100k files each, it's recommended to set finderDepth=1 and finderNumber=10", "tip2": "If the transferred files are under two tier subdirectories, with 10 tier 1 subdirectories and 15 tier 2 subdirectories incl.
over 100k leaf objects each. It is recommended to set finderDepth=2, finderNumber=10*15=150" }, "finderMemory": { diff --git a/source/portal/public/locales/zh/translation.json b/source/portal/public/locales/zh/translation.json index 984efa5..26726c0 100644 --- a/source/portal/public/locales/zh/translation.json +++ b/source/portal/public/locales/zh/translation.json @@ -127,10 +127,11 @@ "objectACLDesc3": " 如果您不想设置对象ACL,请使用默认值.", "transferType": "传输类型", "nameOfPrefixList": "前缀列表文件名", - "nameOfPrefixListDesc": "请将前缀文件以.txt的格式上传到源存储桶根目录,每行写一个前缀。例如: prefix_list.txt", + "nameOfPrefixListDesc": "请将前缀文件以.txt的格式上传到<0>根目录,每行写一个前缀。例如: prefix_list.txt", "govCloudTips": "GovCloud 数据传输仅限在美国", "payerRequest": "使用申请方付款?", - "payerRequestDesc": "使用申请方付款存储桶时,申请方将支付请求和从存储桶下载数据的费用" + "payerRequestDesc": "使用申请方付款存储桶时,申请方将支付请求和从存储桶下载数据的费用", + "solutionLoggingBucket": "解决方案日志存储桶" }, "dest": { "title": "目标桶设置", @@ -143,7 +144,11 @@ "storageClass": "存储类型", "storageClassDesc": "对象将会传输到您所选择的存储类型.", "destRegionName": "目标区域名称", - "destRegionNameDesc": "您可以输入区域名称或区域代码." 
+ "destRegionNameDesc": "您可以输入区域名称或区域代码.", + "encryptionType": "加密类型", + "encryptionTypeDesc": "检查目标存储桶加密类型。", + "encryptionKey": "KMS 密钥 ID", + "encryptionKeyDesc": "输入访问目标存储桶的 KMS 密钥 ID" }, "credential": { "title": "凭证", @@ -397,7 +402,8 @@ "destPrefixInvalid": "不需要在后缀后面添加‘/’", "emailRequired": "通知邮箱必填", "emailValidate": "请输入合法的通知邮箱地址", - "credentialRequired": "凭证参数必填" + "credentialRequired": "凭证参数必填", + "kmsKeyRequired": "请输入 KMS 秘钥 ID" } }, "comps": { diff --git a/source/portal/src/App.test.tsx b/source/portal/src/App.test.tsx index 2a1b446..28b1f81 100644 --- a/source/portal/src/App.test.tsx +++ b/source/portal/src/App.test.tsx @@ -3,9 +3,43 @@ import React from "react"; import { render } from "@testing-library/react"; import App from "./App"; +import { useDispatch } from "redux-react-hook"; -test("renders learn react link", () => { - const { getByText } = render(); - const linkElement = getByText(/learn react/i); - expect(linkElement).toBeDefined(); +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), +})); + +describe("App", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + }); + test("renders without errors", () => { + const { getByText } = render(); + const linkElement = getByText(/Data Transfer Hub/i); + expect(linkElement).toBeDefined(); + }); }); diff --git a/source/portal/src/assets/config/const.ts b/source/portal/src/assets/config/const.ts index 23a772f..854e08d 100644 --- a/source/portal/src/assets/config/const.ts +++ 
b/source/portal/src/assets/config/const.ts @@ -161,6 +161,10 @@ export const S3_PARAMS_LIST_MAP: any = { en_name: "Destination Bucket Object Prefix", zh_name: "目标数据桶对象前缀", }, + srcPrefixListBucket: { + en_name: "Prefix List Storage Bucket", + zh_name: "前缀文件存储桶", + }, destStorageClass: { en_name: "Destination Object Storage Class", zh_name: "目标S3存储类型", @@ -185,6 +189,14 @@ en_name: "Destination Secret key for Credentials", zh_name: "目标区域的密钥名称", }, + destPutObjectSSEType: { + en_name: "Encryption type", + zh_name: "加密类型", + }, + destPutObjectSSEKmsKeyId: { + en_name: "KMS Key ID", + zh_name: "KMS 密钥 ID", + }, includeMetadata: { en_name: "Include Metadata", zh_name: "是否包含元数据", @@ -343,6 +355,12 @@ INTELLIGENT_TIERING = "INTELLIGENT_TIERING", } +export enum S3_ENCRYPTION_TYPE { + NONE = "None", + AES256 = "AES256", + AWS_KMS = "AWS_KMS", +} + export enum CRON_FIX_UNIT { DAYS = "DAYS", MINUTES = "MINUTES", @@ -380,7 +398,9 @@ export const AWS_REGION_LIST = [ { value: "ap-southeast-1", name: "Singapore" }, { value: "ap-southeast-2", name: "Sydney" }, { value: "ap-northeast-1", name: "Tokyo" }, + { value: "ap-southeast-4", name: "Melbourne" }, { value: "ca-central-1", name: "Central" }, + { value: "ca-west-1", name: "Calgary" }, { value: "eu-central-1", name: "Frankfurt" }, { value: "eu-west-1", name: "Ireland" }, { value: "eu-west-2", name: "London" }, @@ -392,13 +412,14 @@ { value: "me-south-1", name: "Bahrain" }, { value: "me-central-1", name: "UAE" }, { value: "sa-east-1", name: "São Paulo" }, + { value: "il-central-1", name: "Tel Aviv" }, { value: "cn-north-1", name: "Beijing" }, { value: "cn-northwest-1", name: "Ningxia" }, { value: "us-gov-west-1", name: "US-Gov-West" }, { value: "us-gov-east-1", name: "US-Gov-East" }, ]; -// https://www.alibabacloud.com/help/en/basics-for-beginners/latest/regions-and-zones +//
https://help.aliyun.com/document_detail/40654.html export const ALICLOUD_REGION_LIST = [ { value: "cn-qingdao", name: "Qingdao" }, { value: "cn-beijing", name: "Beijing" }, @@ -413,6 +434,7 @@ export const ALICLOUD_REGION_LIST = [ { value: "cn-wulanchabu", name: "Ulanqab" }, { value: "cn-nanjing", name: "Nanjing" }, { value: "cn-fuzhou", name: "Fuzhou" }, + { value: "cn-wuhan-lr", name: "Wuhan" }, { value: "cn-hongkong", name: "Hong Kong" }, { value: "ap-southeast-1", name: "Singapore" }, { value: "ap-southeast-2", name: "Sydney" }, @@ -427,6 +449,7 @@ export const ALICLOUD_REGION_LIST = [ { value: "us-west-1", name: "Silicon Valley" }, { value: "us-east-1", name: "Virginia" }, { value: "me-east-1", name: "Dubai" }, + { value: "me-central-1", name: "Riyadh" }, { value: "ap-south-1", name: "Mumbai" }, ]; @@ -535,6 +558,8 @@ export const EC2_MEMORY_LIST = [ { name: "64GB", value: "64" }, { name: "128GB", value: "128" }, { name: "256GB", value: "256" }, + { name: "384GB", value: "384" }, + { name: "512GB", value: "512" }, ]; export const CRON_TYPE_LIST = [ @@ -573,6 +598,12 @@ export const S3_STORAGE_CLASS_OPTIONS = [ }, ]; +export const S3_ENCRYPTION_OPTIONS = [ + { name: "None", value: S3_ENCRYPTION_TYPE.NONE }, + { name: "AES256", value: S3_ENCRYPTION_TYPE.AES256 }, + { name: "SSE-KMS", value: S3_ENCRYPTION_TYPE.AWS_KMS }, +]; + export const SOURCE_TYPE_OPTIONS_LAMBDA = [ { name: "creation.sourceType.amazonS3Name", value: EnumSourceType.S3 }, { name: "creation.sourceType.aliyunOSSName", value: EnumSourceType.AliOSS }, diff --git a/source/portal/src/assets/types/index.ts b/source/portal/src/assets/types/index.ts index d53abba..b00bde9 100644 --- a/source/portal/src/assets/types/index.ts +++ b/source/portal/src/assets/types/index.ts @@ -32,6 +32,7 @@ export interface AmplifyConfigType { aws_cloudfront_url: string; aws_appsync_graphqlEndpoint: string; aws_user_pools_web_client_id: string; + src_prefix_list_bucket: string; } export interface AmplifyJSONType { diff 
--git a/source/portal/src/assets/utils/utils.ts b/source/portal/src/assets/utils/utils.ts index 89c9eed..74c4d82 100644 --- a/source/portal/src/assets/utils/utils.ts +++ b/source/portal/src/assets/utils/utils.ts @@ -55,3 +55,34 @@ export const buildCloudFormationLink = ( } return `https://${region}.console.aws.amazon.com/cloudformation/home?region=${region}#/stacks/events?filteringStatus=active&filteringText=&viewNested=true&hideStacks=false&stackId=${cfnId}`; }; + +export const getDirPrefixByPrefixStr = (prefix: string) => { + if (prefix && prefix.indexOf("/") >= 0) { + const slashPos = prefix.lastIndexOf("/"); + prefix = prefix.slice(0, slashPos + 1); + } + return prefix; +}; + +export const buildS3Link = ( + region: string, + bucketName: string, + prefix?: string +): string => { + if (region.startsWith("cn")) { + if (prefix) { + const resPrefix = getDirPrefixByPrefixStr(prefix); + if (resPrefix.endsWith("/")) { + return `https://console.amazonaws.cn/s3/buckets/${bucketName}?region=${region}&prefix=${resPrefix}`; + } + } + return `https://console.amazonaws.cn/s3/buckets/${bucketName}`; + } + if (prefix) { + const resPrefix = getDirPrefixByPrefixStr(prefix); + if (resPrefix.endsWith("/")) { + return `https://s3.console.aws.amazon.com/s3/buckets/${bucketName}?region=${region}&prefix=${resPrefix}`; + } + } + return `https://s3.console.aws.amazon.com/s3/buckets/${bucketName}`; +}; diff --git a/source/portal/src/common/comp/form/DrhInput.tsx b/source/portal/src/common/comp/form/DrhInput.tsx index 79f08ee..c6be2fb 100644 --- a/source/portal/src/common/comp/form/DrhInput.tsx +++ b/source/portal/src/common/comp/form/DrhInput.tsx @@ -11,7 +11,7 @@ type DrhInputProps = { inputType?: string; isHidden?: boolean; optionTitle: string; - optionDesc: string; + optionDesc: string | React.ReactElement; isOptional?: boolean; defaultValue?: string; inputValue: string; diff --git a/source/portal/src/common/info/FinderMemory.tsx b/source/portal/src/common/info/FinderMemory.tsx index 
c5f1a40..e1ce708 100644 --- a/source/portal/src/common/info/FinderMemory.tsx +++ b/source/portal/src/common/info/FinderMemory.tsx @@ -49,7 +49,15 @@ const FinderMemory: React.FC = () => { 256GB - >640 M + 960 M + 384GB + + + 1280 M + 512GB + + + >1280 M {t("comps.finderMemory.notSupport")} diff --git a/source/portal/src/graphql/mutations.ts b/source/portal/src/graphql/mutations.ts index b52f180..9031d31 100644 --- a/source/portal/src/graphql/mutations.ts +++ b/source/portal/src/graphql/mutations.ts @@ -1,5 +1,3 @@ -// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. -// SPDX-License-Identifier: Apache-2.0 /* tslint:disable */ /* eslint-disable */ // this is an auto generated file. This will be overwritten diff --git a/source/portal/src/graphql/queries.ts b/source/portal/src/graphql/queries.ts index d098e36..ac17c40 100644 --- a/source/portal/src/graphql/queries.ts +++ b/source/portal/src/graphql/queries.ts @@ -1,5 +1,3 @@ -// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. -// SPDX-License-Identifier: Apache-2.0 /* tslint:disable */ /* eslint-disable */ // this is an auto generated file. 
This will be overwritten diff --git a/source/portal/src/pages/creation/ecr/StepTwoECR.tsx b/source/portal/src/pages/creation/ecr/StepTwoECR.tsx index 9bf9bcc..93a0244 100644 --- a/source/portal/src/pages/creation/ecr/StepTwoECR.tsx +++ b/source/portal/src/pages/creation/ecr/StepTwoECR.tsx @@ -330,6 +330,10 @@ const StepTwoECR: React.FC = () => { useEffect(() => { // Update tmpECRTaskInfo updatetmpECRTaskInfo("sourceInAccount", sourceInAccount); + if (sourceInAccount === YES_NO.YES) { + updatetmpECRTaskInfo("srcAccountId", ""); + updatetmpECRTaskInfo("srcCredential", ""); + } }, [sourceInAccount]); useEffect(() => { @@ -361,6 +365,10 @@ const StepTwoECR: React.FC = () => { useEffect(() => { // Update tmpECRTaskInfo updatetmpECRTaskInfo("destInAccount", destInAccount); + if (destInAccount === YES_NO.YES) { + updatetmpECRTaskInfo("destAccountId", ""); + updatetmpECRTaskInfo("destCredential", ""); + } }, [destInAccount]); useEffect(() => { diff --git a/source/portal/src/pages/creation/ecr/test/StepOneECRTips.test.tsx b/source/portal/src/pages/creation/ecr/test/StepOneECRTips.test.tsx new file mode 100644 index 0000000..b86b953 --- /dev/null +++ b/source/portal/src/pages/creation/ecr/test/StepOneECRTips.test.tsx @@ -0,0 +1,64 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
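Editor's note: the `StepTwoECR.tsx` hunk above clears `srcAccountId`/`srcCredential` (and the `dest*` counterparts) whenever the in-account option is selected, so stale cross-account values never reach the task parameters. The intent can be sketched outside React as a plain function; the `EcrTaskDraft` shape below is a simplified assumption for illustration, not the portal's real type:

```typescript
// Hypothetical, simplified draft shape (the real portal keeps these
// fields inside tmpECRTaskInfo and updates them via updatetmpECRTaskInfo).
interface EcrTaskDraft {
  sourceInAccount: "Yes" | "No";
  srcAccountId: string;
  srcCredential: string;
}

// Mirrors the new useEffect: once "in account" is chosen, wipe any
// previously entered cross-account account ID and credential.
function applyInAccountSelection(draft: EcrTaskDraft): EcrTaskDraft {
  if (draft.sourceInAccount === "Yes") {
    return { ...draft, srcAccountId: "", srcCredential: "" };
  }
  return draft;
}
```

The same reset is applied symmetrically to the destination fields when `destInAccount` is set to yes.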
+// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import StepOneECRTips from "../StepOneECRTips"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("StepOneECRTips", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpECRTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(); + const linkElement = getByText(/creation.ecrPlugin.name/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/creation/ecr/test/StepThreeECR.test.tsx b/source/portal/src/pages/creation/ecr/test/StepThreeECR.test.tsx new file mode 100644 index 0000000..cfd47ec --- /dev/null +++ b/source/portal/src/pages/creation/ecr/test/StepThreeECR.test.tsx @@ -0,0 +1,92 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
+// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import StepThreeECR from "../StepThreeECR"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +beforeEach(() => { + jest.spyOn(console, "error").mockImplementation(jest.fn()); +}); + +describe("StepThreeECR", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + const fakeLocalStorage = { + getItem: jest.fn().mockImplementation((key) => { + if (key === "DTH_CONFIG_JSON") { + return JSON.stringify({ + taskCluster: { + ecsVpcId: "vpc-0ecf829216cbb6f25", + ecsClusterName: + "DataTransferHub-cognito-TaskCluster-67w4Up7IBkg3", + ecsSubnets: [ + "subnet-0f909c9db82df2026", + "subnet-086fc4c755dfcbac1", + ], + }, + }); + } + return null; + }), + }; + + Object.defineProperty(window, "localStorage", { + value: fakeLocalStorage, + }); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpECRTaskInfo: { + parametersObj: { sourceType: "ECR", srcImageList: "" }, + }, + amplifyConfig: { + 
aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(); + const linkElement = getByText(/step.oneTitle/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/creation/ecr/test/StepTwoECR.test.tsx b/source/portal/src/pages/creation/ecr/test/StepTwoECR.test.tsx new file mode 100644 index 0000000..8e8cd67 --- /dev/null +++ b/source/portal/src/pages/creation/ecr/test/StepTwoECR.test.tsx @@ -0,0 +1,68 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. +// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import StepTwoECR from "../StepTwoECR"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +beforeEach(() => { + jest.spyOn(console, "error").mockImplementation(jest.fn()); +}); + +describe("StepTwoECR", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ 
+ tmpECRTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(); + const linkElement = getByText(/step.oneTitle/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/creation/s3/StepThreeS3.tsx b/source/portal/src/pages/creation/s3/StepThreeS3.tsx index 759126a..5984e76 100644 --- a/source/portal/src/pages/creation/s3/StepThreeS3.tsx +++ b/source/portal/src/pages/creation/s3/StepThreeS3.tsx @@ -31,6 +31,7 @@ import { CREATE_USE_LESS_PROPERTY, DRH_CONFIG_JSON_NAME, YES_NO, + S3SourcePrefixType, } from "assets/config/const"; import { ACTION_TYPE, @@ -44,6 +45,7 @@ import { ScheduleType } from "API"; const mapState = (state: IState) => ({ tmpTaskInfo: state.tmpTaskInfo, + amplifyConfig: state.amplifyConfig, }); const JOB_TYPE_MAP: any = { @@ -64,7 +66,7 @@ const StepThreeS3: React.FC = () => { } }, [i18n.language]); - const { tmpTaskInfo } = useMappedState(mapState); + const { tmpTaskInfo, amplifyConfig } = useMappedState(mapState); const [paramShowList, setParamShowList] = useState([]); const [createTaskGQL, setCreateTaskGQL] = useState(); const [isCreating, setIsCreating] = useState(false); @@ -135,6 +137,14 @@ const StepThreeS3: React.FC = () => { ParameterKey: "srcPrefixsListFile", ParameterValue: parametersObj.srcPrefixsListFile, }); + // set prefix bucket + if (parametersObj.srcPrefixType === S3SourcePrefixType.MultiplePrefix) { + taskParamArr.push({ + ParameterKey: "srcPrefixListBucket", + ParameterValue: amplifyConfig.src_prefix_list_bucket, + }); + } + taskParamArr.push({ ParameterKey: "srcEvent", ParameterValue: parametersObj.enableS3Event, @@ -184,6 +194,14 @@ const StepThreeS3: React.FC = () => { ParameterKey: "isPayerRequest", ParameterValue: tmpIsPayerRequest, }); + taskParamArr.push({ + ParameterKey: "destPutObjectSSEType", + ParameterValue: parametersObj.destPutObjectSSEType, + }); + 
taskParamArr.push({ + ParameterKey: "destPutObjectSSEKmsKeyId", + ParameterValue: parametersObj.destPutObjectSSEKmsKeyId, + }); taskParamArr.push({ ParameterKey: "destAcl", ParameterValue: parametersObj.destAcl, @@ -228,6 +246,7 @@ const StepThreeS3: React.FC = () => { ParameterKey: "alarmEmail", ParameterValue: parametersObj.alarmEmail, }); + return taskParamArr; }; diff --git a/source/portal/src/pages/creation/s3/StepTwoS3.tsx b/source/portal/src/pages/creation/s3/StepTwoS3.tsx index 78a3718..5c4235a 100644 --- a/source/portal/src/pages/creation/s3/StepTwoS3.tsx +++ b/source/portal/src/pages/creation/s3/StepTwoS3.tsx @@ -26,6 +26,7 @@ import EC2Config from "./comps/EC2Config"; import OptionSettings from "./comps/MoreSettings"; import { + S3_ENCRYPTION_TYPE, bucketNameIsValid, emailIsValid, urlIsValid, @@ -60,6 +61,7 @@ const StepTwoS3: React.FC = () => { const [alarmEmailFormatError, setAlarmEmailFormatError] = useState(false); const [fixedRateInvalidError, setFixedRateInvalidError] = useState(false); + const [kmsKeyRequiredError, setKmsKeyRequiredError] = useState(false); useEffect(() => { // if the taskInfo has no taskType, redirect to Step one @@ -143,6 +145,16 @@ const StepTwoS3: React.FC = () => { } } } + // check kms key required + if ( + paramsObj?.destPutObjectSSEType === S3_ENCRYPTION_TYPE.AWS_KMS && + !paramsObj?.destPutObjectSSEKmsKeyId?.trim() + ) { + setKmsKeyRequiredError(true); + errorCount++; + } else { + setKmsKeyRequiredError(false); + } } } @@ -188,9 +200,13 @@ const StepTwoS3: React.FC = () => { setFixedRateInvalidError(false); }, [tmpTaskInfo?.parametersObj?.ec2CronExpression]); + useEffect(() => { + setKmsKeyRequiredError(false); + }, [tmpTaskInfo?.parametersObj?.destPutObjectSSEKmsKeyId]); + // END Monitor tmpTaskInfo and hide validation error const goToHomePage = () => { - navigate("/"); + navigate(-1); }; const goToStepOne = () => { @@ -217,6 +233,7 @@ const StepTwoS3: React.FC = () => { setDestPrefixFormatError(false); 
setAlramEmailRequireError(false); setAlarmEmailFormatError(false); + setKmsKeyRequiredError(false); }, [tmpTaskInfo?.parametersObj?.sourceType]); return ( @@ -259,6 +276,7 @@ const StepTwoS3: React.FC = () => { destShowBucketValidError={destBucketFormatError} destShowRegionRequiredError={destRegionRequiredError} destShowPrefixFormatError={destPrefixFormatError} + kmsKeyRequiredError={kmsKeyRequiredError} /> {engine === S3_ENGINE_TYPE.LAMBDA && } {engine === S3_ENGINE_TYPE.EC2 && ( diff --git a/source/portal/src/pages/creation/s3/comps/DestSettings.tsx b/source/portal/src/pages/creation/s3/comps/DestSettings.tsx index 65c1973..37f27f4 100644 --- a/source/portal/src/pages/creation/s3/comps/DestSettings.tsx +++ b/source/portal/src/pages/creation/s3/comps/DestSettings.tsx @@ -24,6 +24,8 @@ import { AWS_REGION_LIST, DEST_OBJECT_ACL_LINK, OBJECT_ACL_LIST, + S3_ENCRYPTION_OPTIONS, + S3_ENCRYPTION_TYPE, } from "assets/config/const"; import { ACTION_TYPE, S3_ENGINE_TYPE } from "assets/types/index"; @@ -39,6 +41,7 @@ interface DestPropType { destShowBucketValidError: boolean; destShowRegionRequiredError: boolean; destShowPrefixFormatError: boolean; + kmsKeyRequiredError: boolean; } const DestSettings: React.FC = (props) => { @@ -51,6 +54,7 @@ const DestSettings: React.FC = (props) => { destShowBucketValidError, destShowRegionRequiredError, destShowPrefixFormatError, + kmsKeyRequiredError, } = props; // Refs const destBucketRef = useRef(null); @@ -63,6 +67,7 @@ const DestSettings: React.FC = (props) => { const [showDestCredential, setShowDestCredential] = useState(false); const [showDestRegion, setShowDestRegion] = useState(false); const [showDestInAccount, setShowDestInAccount] = useState(true); + const [showKMSKeyInput, setShowKMSKeyInput] = useState(false); const [destInAccount, setDestInAccount] = useState( tmpTaskInfo?.parametersObj?.destInAccount || YES_NO.YES @@ -100,6 +105,13 @@ const DestSettings: React.FC = (props) => { tmpTaskInfo?.parametersObj?.destRegionObj || 
null ); + const [destPutObjectSSEType, setDestPutObjectSSEType] = useState( + tmpTaskInfo?.parametersObj?.destPutObjectSSEType || S3_ENCRYPTION_TYPE.NONE + ); + const [destPutObjectSSEKmsKeyId, setDestPutObjectSSEKmsKeyId] = useState( + tmpTaskInfo?.parametersObj?.destPutObjectSSEKmsKeyId || "" + ); + // Show Error useEffect(() => { console.info("destShowBucketRequiredError:", destShowBucketRequiredError); @@ -190,7 +202,7 @@ const DestSettings: React.FC = (props) => { // Monitor SourceInAccount When Lambda Engine useEffect(() => { const srcInAccount = - tmpTaskInfo?.parametersObj?.sourceInAccount === YES_NO.YES ? true : false; + tmpTaskInfo?.parametersObj?.sourceInAccount === YES_NO.YES; if (engineType === S3_ENGINE_TYPE.LAMBDA) { // engine type is lambda // show dest credential and dest region when src in account is true, else hide @@ -239,6 +251,22 @@ const DestSettings: React.FC = (props) => { } }, [tmpTaskInfo?.parametersObj?.srcRegionName]); + // Change destPutObjectSSEType + useEffect(() => { + updateTmpTaskInfo("destPutObjectSSEType", destPutObjectSSEType); + if (destPutObjectSSEType === S3_ENCRYPTION_TYPE.AWS_KMS) { + setShowKMSKeyInput(true); + } else { + setShowKMSKeyInput(false); + setDestPutObjectSSEKmsKeyId(""); + } + }, [destPutObjectSSEType]); + + // Change destPutObjectSSEKmsKeyId + useEffect(() => { + updateTmpTaskInfo("destPutObjectSSEKmsKeyId", destPutObjectSSEKmsKeyId); + }, [destPutObjectSSEKmsKeyId]); + return (
@@ -399,6 +427,41 @@ const DestSettings: React.FC = (props) => { />
)} + +
+ ) => { + setDestPutObjectSSEType(event.target.value); + }} + selectValue={destPutObjectSSEType} + optionList={S3_ENCRYPTION_OPTIONS} + /> +
+ + {showKMSKeyInput && ( +
+ ) => { + setDestPutObjectSSEKmsKeyId(event.target.value); + }} + inputName="destPutObjectSSEKmsKeyId" + inputValue={destPutObjectSSEKmsKeyId} + placeholder="12345678-1234-1234-1234-123456789012" + showRequiredError={kmsKeyRequiredError} + requiredErrorMsg={t("tips.error.kmsKeyRequired")} + /> +
+ )}
)} diff --git a/source/portal/src/pages/creation/s3/comps/SourceSettings.tsx b/source/portal/src/pages/creation/s3/comps/SourceSettings.tsx index 5c6f4c6..29828ec 100644 --- a/source/portal/src/pages/creation/s3/comps/SourceSettings.tsx +++ b/source/portal/src/pages/creation/s3/comps/SourceSettings.tsx @@ -1,7 +1,7 @@ // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 import React, { useState, useEffect, useRef } from "react"; -import { useTranslation } from "react-i18next"; +import { Trans, useTranslation } from "react-i18next"; import { useDispatch, useMappedState } from "redux-react-hook"; // DRH Comp @@ -36,8 +36,10 @@ import { import { IState } from "store/Store"; import Alert from "common/Alert"; +import { buildS3Link } from "assets/utils/utils"; const mapState = (state: IState) => ({ tmpTaskInfo: state.tmpTaskInfo, + amplifyConfig: state.amplifyConfig, }); interface SourcePropType { @@ -58,7 +60,7 @@ const SourceSettings: React.FC = (props) => { } = props; console.info("engineType:", engineType); const { t } = useTranslation(); - const { tmpTaskInfo } = useMappedState(mapState); + const { tmpTaskInfo, amplifyConfig } = useMappedState(mapState); const dispatch = useDispatch(); console.info("tmpTaskInfo:", tmpTaskInfo); @@ -385,9 +387,23 @@ const SourceSettings: React.FC = (props) => { optionTitle={t( "creation.step2.settings.source.nameOfPrefixList" )} - optionDesc={t( - "creation.step2.settings.source.nameOfPrefixListDesc" - )} + optionDesc={ + , + ]} + /> + } onChange={(event: React.ChangeEvent) => { setSrcPrefixsListFile(event.target.value); }} diff --git a/source/portal/src/pages/creation/s3/test/StepThreeS3.test.tsx b/source/portal/src/pages/creation/s3/test/StepThreeS3.test.tsx new file mode 100644 index 0000000..e67489e --- /dev/null +++ b/source/portal/src/pages/creation/s3/test/StepThreeS3.test.tsx @@ -0,0 +1,92 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
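Editor's note: the `StepTwoS3.tsx` validation added above boils down to one predicate: if the destination encryption type is `AWS_KMS`, the KMS Key ID must be non-empty after trimming. A minimal sketch, with the enum values reproduced from `S3_ENCRYPTION_TYPE` in `const.ts`:

```typescript
// Enum values reproduced from const.ts in this diff.
enum S3_ENCRYPTION_TYPE {
  NONE = "None",
  AES256 = "AES256",
  AWS_KMS = "AWS_KMS",
}

// Returns true when the form is invalid: SSE-KMS is selected but no
// KMS Key ID was provided (whitespace-only IDs also fail, matching
// the `!paramsObj?.destPutObjectSSEKmsKeyId?.trim()` check).
function kmsKeyMissing(sseType: string, kmsKeyId?: string): boolean {
  return sseType === S3_ENCRYPTION_TYPE.AWS_KMS && !kmsKeyId?.trim();
}
```

When this predicate fires, the portal sets `kmsKeyRequiredError` and surfaces the `tips.error.kmsKeyRequired` message under the KMS Key ID input.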
+// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import StepThreeS3 from "../StepThreeS3"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("StepThreeS3", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + + const fakeLocalStorage = { + getItem: jest.fn().mockImplementation((key) => { + if (key === "DTH_CONFIG_JSON") { + return JSON.stringify({ + taskCluster: { + ecsVpcId: "vpc-0ecf829216cbb6f25", + ecsClusterName: + "DataTransferHub-cognito-TaskCluster-67w4Up7IBkg3", + ecsSubnets: [ + "subnet-0f909c9db82df2026", + "subnet-086fc4c755dfcbac1", + ], + }, + }); + } + return null; + }), + }; + + Object.defineProperty(window, "localStorage", { + value: fakeLocalStorage, + }); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + taskCluster: { + ecsVpcId: "vpc-xxxxx", + ecsClusterName: "xxx-xxx-xxx-xxx", + ecsSubnets: ["subnet-xxx", "subnet-xxx"], + }, + }, + 
}); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<StepThreeS3 />); + const linkElement = getByText(/step.threeTitle/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/creation/s3/test/StepTwoS3.test.tsx b/source/portal/src/pages/creation/s3/test/StepTwoS3.test.tsx new file mode 100644 index 0000000..f0a1c0e --- /dev/null +++ b/source/portal/src/pages/creation/s3/test/StepTwoS3.test.tsx @@ -0,0 +1,68 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. +// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import StepTwoS3 from "../StepTwoS3"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +beforeEach(() => { + jest.spyOn(console, "error").mockImplementation(jest.fn()); +}); + +describe("StepTwoS3", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: {
aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<StepTwoS3 />); + const linkElement = getByText(/step.oneTitle/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/creation/s3/test/stepOneS3Tips.test.tsx b/source/portal/src/pages/creation/s3/test/stepOneS3Tips.test.tsx new file mode 100644 index 0000000..388c9df --- /dev/null +++ b/source/portal/src/pages/creation/s3/test/stepOneS3Tips.test.tsx @@ -0,0 +1,64 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. +// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import StepOneS3Tips from "../StepOneS3Tips"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("StepOneS3Tips", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1",
+ }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<StepOneS3Tips />); + const linkElement = getByText(/creation.s3plugin.name/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/creation/test/StepOne.test.tsx b/source/portal/src/pages/creation/test/StepOne.test.tsx new file mode 100644 index 0000000..fb9a8ce --- /dev/null +++ b/source/portal/src/pages/creation/test/StepOne.test.tsx @@ -0,0 +1,64 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. +// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import StepOne from "../StepOne"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("StepOne", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine:
"EC2", + }); + const { getByText } = render(<StepOne />); + const linkElement = getByText(/step.oneTitle/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/detail/tabs/test/Details.test.tsx b/source/portal/src/pages/detail/tabs/test/Details.test.tsx new file mode 100644 index 0000000..f83f24f --- /dev/null +++ b/source/portal/src/pages/detail/tabs/test/Details.test.tsx @@ -0,0 +1,73 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. +// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import Details from "../Details"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; +import { mockTaskDetail } from "test/task.mock"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("Details", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); +
const { getByText } = render(<Details curTaskInfo={mockTaskDetail as any} />); + const linkElement = getByText(/taskDetail.details/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/detail/tabs/test/Engine.test.tsx b/source/portal/src/pages/detail/tabs/test/Engine.test.tsx new file mode 100644 index 0000000..16eed27 --- /dev/null +++ b/source/portal/src/pages/detail/tabs/test/Engine.test.tsx @@ -0,0 +1,65 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. +// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import Engine from "../Engine"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; +import { mockTaskDetail } from "test/task.mock"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("Engine", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "S3EC2", + }); + const { getByText } = render(<Engine curTaskInfo={mockTaskDetail as any} />); + const
linkElement = getByText(/taskDetail.engineSettings/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/detail/tabs/test/LogGroup.test.tsx b/source/portal/src/pages/detail/tabs/test/LogGroup.test.tsx new file mode 100644 index 0000000..b02802d --- /dev/null +++ b/source/portal/src/pages/detail/tabs/test/LogGroup.test.tsx @@ -0,0 +1,73 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. +// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import LogGroup from "../LogGroup"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector
jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("LogGroup", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<LogGroup />); + const linkElement = getByText(/logGroup.details/i);
+ expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/detail/tabs/test/Monitor.test.tsx b/source/portal/src/pages/detail/tabs/test/Monitor.test.tsx new file mode 100644 index 0000000..b17e7f3 --- /dev/null +++ b/source/portal/src/pages/detail/tabs/test/Monitor.test.tsx @@ -0,0 +1,81 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. +// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import Monitor from "../Monitor"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; +import { mockTaskDetail } from "test/task.mock"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +jest.mock("react-apexcharts", () => ({ + __esModule: true, + default: () => <div />, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +beforeEach(() => { + jest.spyOn(console, "error").mockImplementation(jest.fn()); +}); + +describe("Monitor", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<Monitor curTaskInfo={mockTaskDetail as any} />); + const linkElement = getByText(/taskDetail.taskMetrics/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/detail/tabs/test/Options.test.tsx b/source/portal/src/pages/detail/tabs/test/Options.test.tsx new file mode 100644 index 0000000..9a2dd9b --- /dev/null +++ b/source/portal/src/pages/detail/tabs/test/Options.test.tsx @@ -0,0 +1,65 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import Options from "../Options"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; +import { mockTaskDetail } from "test/task.mock"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("Options", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<Options curTaskInfo={mockTaskDetail as any} />); + const linkElement = getByText(/taskDetail.option/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/detail/test/DetailECR.test.tsx b/source/portal/src/pages/detail/test/DetailECR.test.tsx new file mode 100644 index 0000000..b3972e8 --- /dev/null +++ b/source/portal/src/pages/detail/test/DetailECR.test.tsx @@ -0,0 +1,64 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import DetailECR from "../DetailECR"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("DetailECR", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<DetailECR />); + const linkElement = getByText(/breadCrumb.tasks/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/detail/test/DetailS3.test.tsx b/source/portal/src/pages/detail/test/DetailS3.test.tsx new file mode 100644 index 0000000..76d1053 --- /dev/null +++ b/source/portal/src/pages/detail/test/DetailS3.test.tsx @@ -0,0 +1,68 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import DetailS3 from "../DetailS3"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +beforeEach(() => { + jest.spyOn(console, "error").mockImplementation(jest.fn()); +}); + +describe("DetailS3", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<DetailS3 />); + const linkElement = getByText(/breadCrumb.tasks/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/home/test/Home.test.tsx b/source/portal/src/pages/home/test/Home.test.tsx new file mode 100644 index 0000000..f57029c --- /dev/null +++ b/source/portal/src/pages/home/test/Home.test.tsx @@ -0,0 +1,64 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import Home from "../Home"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +describe("Home", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<Home />); + const linkElement = getByText(/Getting started/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/pages/list/test/TaskList.test.tsx b/source/portal/src/pages/list/test/TaskList.test.tsx new file mode 100644 index 0000000..384f9cf --- /dev/null +++ b/source/portal/src/pages/list/test/TaskList.test.tsx @@ -0,0 +1,68 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+// SPDX-License-Identifier: Apache-2.0 +import React from "react"; +import { render } from "@testing-library/react"; +import TaskList from "../TaskList"; +import { useDispatch, useMappedState } from "redux-react-hook"; +import { useNavigate, useParams } from "react-router-dom"; + +jest.mock("react-i18next", () => ({ + Trans: ({ children }: any) => children, + useTranslation: () => { + return { + t: (key: any) => key, + i18n: { + changeLanguage: jest.fn(), + }, + }; + }, + initReactI18next: { + type: "3rdParty", + init: jest.fn(), + }, +})); + +// Mock useDispatch and useSelector +jest.mock("redux-react-hook", () => ({ + useDispatch: jest.fn(), + useSelector: jest.fn(), + useMappedState: jest.fn(), +})); + +jest.mock("react-router-dom", () => ({ + ...jest.requireActual("react-router-dom"), + useParams: jest.fn(), + useNavigate: jest.fn(), +})); + +beforeEach(() => { + jest.spyOn(console, "error").mockImplementation(jest.fn()); +}); + +describe("TaskList", () => { + let mockDispatch = jest.fn(); + beforeEach(() => { + mockDispatch = jest.fn(); + (useDispatch as any).mockReturnValue(mockDispatch); + + global.performance.getEntriesByType = jest.fn( + () => [{ type: "reload" }] as any + ); + const navigateMock = jest.fn(); + (useNavigate as jest.Mock).mockReturnValue(navigateMock); + }); + test("renders without errors", () => { + (useMappedState as jest.Mock).mockReturnValue({ + tmpTaskInfo: {}, + amplifyConfig: { + aws_project_region: "us-east-1", + }, + }); + (useParams as jest.Mock).mockReturnValue({ + engine: "EC2", + }); + const { getByText } = render(<TaskList />); + const linkElement = getByText(/taskList.title/i); + expect(linkElement).toBeDefined(); + }); +}); diff --git a/source/portal/src/setupTests.ts b/source/portal/src/setupTests.ts index 922b355..2d90df4 100644 --- a/source/portal/src/setupTests.ts +++ b/source/portal/src/setupTests.ts @@ -4,4 +4,7 @@ // allows you to do things like: // expect(element).toHaveTextContent(/react/i) // learn more:
https://github.com/testing-library/jest-dom -import "@testing-library/jest-dom/extend-expect"; +import "@testing-library/jest-dom"; +import { TextEncoder } from "node:util"; + +global.TextEncoder = TextEncoder; diff --git a/source/portal/src/store/Store.ts b/source/portal/src/store/Store.ts index 6a1ab8f..ca6afc5 100644 --- a/source/portal/src/store/Store.ts +++ b/source/portal/src/store/Store.ts @@ -48,6 +48,8 @@ export interface S3ec2Task { chunkSize?: string; maxThreads?: string; isPayerRequest: string; + destPutObjectSSEType: string; + destPutObjectSSEKmsKeyId: string; }; [key: string]: any; } diff --git a/source/portal/src/test/task.mock.ts b/source/portal/src/test/task.mock.ts new file mode 100644 index 0000000..651aef5 --- /dev/null +++ b/source/portal/src/test/task.mock.ts @@ -0,0 +1,46 @@ +// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. +// SPDX-License-Identifier: Apache-2.0 + +export const mockTaskDetail = { + id: "xxx-xxx-xxx-xxx-xxx", + description: "", + type: "S3EC2", + templateUrl: "https://xxx.com/DataTransferS3Stack.template", + parameters: [ + { + ParameterKey: "srcType", + ParameterValue: "Amazon_S3", + __typename: "TaskParameter", + }, + { + ParameterKey: "srcEndpoint", + ParameterValue: "", + __typename: "TaskParameter", + }, + { + ParameterKey: "srcBucket", + ParameterValue: "xxx-dev-xxx-xxxx-01", + __typename: "TaskParameter", + }, + ], + createdAt: "2024-01-11T02:54:22.144372Z", + stoppedAt: null, + progress: "IN_PROGRESS", + progressInfo: null, + stackId: + "arn:aws:cloudformation:us-west-2:xxxx:stack/DTH-xxx-4724d/xxx-b02c-xxx-xxx-xx", + stackName: null, + stackOutputs: [ + { + Description: "Split Part DynamoDB Table Name", + OutputKey: "xxxxx", + OutputValue: "DTH-xxx-xxx-xxx-xxx", + __typename: "StackOutputs", + }, + ], + stackStatus: "CREATE_COMPLETE", + stackStatusReason: null, + executionArn: "arn:aws:states:us-west-2:xxxxxx:execution:xxx", + scheduleType: "FIXED_RATE", + __typename: "Task", +};