Commit 4.1.376
aws-sdk-dotnet-automation committed Jul 21, 2023
1 parent 536555d commit 2a1d3e7
Showing 12 changed files with 845 additions and 769 deletions.
1,416 changes: 708 additions & 708 deletions Include/sdk/_sdk-versions.json

Large diffs are not rendered by default.

7 changes: 6 additions & 1 deletion changelogs/CHANGELOG.2023.md
@@ -1,4 +1,9 @@
### 4.1.375 (2023-07-20 21:44Z)
### 4.1.376 (2023-07-21 21:41Z)
* AWS Tools for PowerShell now use AWS .NET SDK 3.7.604.0 and leverage its new features and improvements. Please find a description of the changes at https://github.com/aws/aws-sdk-net/blob/master/changelogs/SDK.CHANGELOG.ALL.md.
* Amazon Relational Database Service
* Modified cmdlet New-RDSDBInstance: added parameter DBSystemId.

### 4.1.375 (2023-07-20 21:44Z)
* AWS Tools for PowerShell now use AWS .NET SDK 3.7.603.0 and leverage its new features and improvements. Please find a description of the changes at https://github.com/aws/aws-sdk-net/blob/master/changelogs/SDK.CHANGELOG.ALL.md.
* Amazon CodeCatalyst
* Added cmdlet Get-CCATSourceRepository leveraging the GetSourceRepository service API.
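For context on the New-RDSDBInstance change noted above, a minimal usage sketch follows. Every value is a placeholder, and the engine and SID shown assume an Oracle multitenant (CDB) instance; check Get-Help New-RDSDBInstance for the exact constraints on DBSystemId.

```powershell
# Sketch only: every value below is a placeholder. -DBSystemId is the parameter
# added in 4.1.376; it is assumed here to name the Oracle system identifier (SID/CDB).
New-RDSDBInstance -DBInstanceIdentifier 'example-oracle-cdb' `
                  -Engine 'oracle-ee-cdb' `
                  -DBInstanceClass 'db.m5.large' `
                  -AllocatedStorage 100 `
                  -MasterUsername 'admin' `
                  -MasterUserPassword 'REPLACE_WITH_A_STRONG_PASSWORD' `
                  -DBSystemId 'RDSCDB'
```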
7 changes: 6 additions & 1 deletion changelogs/CHANGELOG.ALL.md
@@ -1,4 +1,9 @@
### 4.1.375 (2023-07-20 21:44Z)
### 4.1.376 (2023-07-21 21:41Z)
* AWS Tools for PowerShell now use AWS .NET SDK 3.7.604.0 and leverage its new features and improvements. Please find a description of the changes at https://github.com/aws/aws-sdk-net/blob/master/changelogs/SDK.CHANGELOG.ALL.md.
* Amazon Relational Database Service
* Modified cmdlet New-RDSDBInstance: added parameter DBSystemId.

### 4.1.375 (2023-07-20 21:44Z)
* AWS Tools for PowerShell now use AWS .NET SDK 3.7.603.0 and leverage its new features and improvements. Please find a description of the changes at https://github.com/aws/aws-sdk-net/blob/master/changelogs/SDK.CHANGELOG.ALL.md.
* Amazon CodeCatalyst
* Added cmdlet Get-CCATSourceRepository leveraging the GetSourceRepository service API.
34 changes: 24 additions & 10 deletions modules/AWSPowerShell/Cmdlets/Glue/Basic/New-GLUEJob-Cmdlet.cs
@@ -389,16 +389,30 @@ public partial class NewGLUEJobCmdlet : AmazonGlueClientCmdlet, IExecutor
/// <summary>
/// <para>
/// <para>The type of predefined worker that is allocated when a job runs. Accepts a value of
/// Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.</para><ul><li><para>For the <code>Standard</code> worker type, each worker provides 4 vCPU, 16 GB of memory
/// and a 50GB disk, and 2 executors per worker.</para></li><li><para>For the <code>G.1X</code> worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of
/// memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker
/// type for memory-intensive jobs.</para></li><li><para>For the <code>G.2X</code> worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of
/// memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker
/// type for memory-intensive jobs.</para></li><li><para>For the <code>G.025X</code> worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB
/// of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker
/// type for low volume streaming jobs. This worker type is only available for Glue version
/// 3.0 streaming jobs.</para></li><li><para>For the <code>Z.2X</code> worker type, each worker maps to 2 M-DPU (8 vCPU, 64 GB of
/// memory, 128 GB disk), and provides up to 8 Ray workers based on the autoscaler.</para></li></ul>
/// G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.</para><ul><li><para>For the <code>G.1X</code> worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of
/// memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker.
/// We recommend this worker type for workloads such as data transforms, joins, and queries,
/// and offers a scalable and cost-effective way to run most jobs.</para></li><li><para>For the <code>G.2X</code> worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of
/// memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker.
/// We recommend this worker type for workloads such as data transforms, joins, and queries,
/// and offers a scalable and cost-effective way to run most jobs.</para></li><li><para>For the <code>G.4X</code> worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB
/// of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per
/// worker. We recommend this worker type for jobs whose workloads contain your most demanding
/// transforms, aggregations, joins, and queries. This worker type is available only for
/// Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions:
/// US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore),
/// Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt),
/// Europe (Ireland), and Europe (Stockholm).</para></li><li><para>For the <code>G.8X</code> worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB
/// of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per
/// worker. We recommend this worker type for jobs whose workloads contain your most demanding
/// transforms, aggregations, joins, and queries. This worker type is available only for
/// Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions
/// as supported for the <code>G.4X</code> worker type.</para></li><li><para>For the <code>G.025X</code> worker type, each worker maps to 0.25 DPU (2 vCPUs, 4
/// GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
/// worker. We recommend this worker type for low volume streaming jobs. This worker type
/// is only available for Glue version 3.0 streaming jobs.</para></li><li><para>For the <code>Z.2X</code> worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB
/// of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers
/// based on the autoscaler.</para></li></ul>
/// </para>
/// </summary>
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
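To make the expanded WorkerType list in New-GLUEJob concrete, here is a minimal PowerShell sketch that creates a Spark ETL job on the G.4X worker type. The job name, role ARN, and script location are placeholders, and the parameter names follow the module's singular convention (for example -NumberOfWorker); per the documentation above, G.4X requires Glue 3.0 or later in one of the listed Regions.

```powershell
# Placeholder names, ARN, and S3 path; adjust for your account.
$command = New-Object Amazon.Glue.Model.JobCommand
$command.Name           = 'glueetl'                                  # Spark ETL job
$command.ScriptLocation = 's3://example-bucket/scripts/transform.py'
$command.PythonVersion  = '3'

New-GLUEJob -Name 'example-large-etl' `
            -Role 'arn:aws:iam::123456789012:role/ExampleGlueJobRole' `
            -Command $command `
            -GlueVersion '4.0' `
            -WorkerType 'G.4X' `
            -NumberOfWorker 10
```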
32 changes: 22 additions & 10 deletions modules/AWSPowerShell/Cmdlets/Glue/Basic/New-GLUESession-Cmdlet.cs
@@ -219,16 +219,28 @@ public partial class NewGLUESessionCmdlet : AmazonGlueClientCmdlet, IExecutor
#region Parameter WorkerType
/// <summary>
/// <para>
/// <para>The type of predefined worker that is allocated to use for the session. Accepts a
/// value of Standard, G.1X, G.2X, or G.025X.</para><ul><li><para>For the <code>Standard</code> worker type, each worker provides 4 vCPU, 16 GB of memory
/// and a 50GB disk, and 2 executors per worker.</para></li><li><para>For the <code>G.1X</code> worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of
/// memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker
/// type for memory-intensive jobs.</para></li><li><para>For the <code>G.2X</code> worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of
/// memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker
/// type for memory-intensive jobs.</para></li><li><para>For the <code>G.025X</code> worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB
/// of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker
/// type for low volume streaming jobs. This worker type is only available for Glue version
/// 3.0 streaming jobs.</para></li></ul>
/// <para>The type of predefined worker that is allocated when a job runs. Accepts a value of
/// G.1X, G.2X, G.4X, or G.8X for Spark jobs. Accepts the value Z.2X for Ray notebooks.</para><ul><li><para>For the <code>G.1X</code> worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of
/// memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker.
/// We recommend this worker type for workloads such as data transforms, joins, and queries,
/// and offers a scalable and cost-effective way to run most jobs.</para></li><li><para>For the <code>G.2X</code> worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of
/// memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker.
/// We recommend this worker type for workloads such as data transforms, joins, and queries,
/// and offers a scalable and cost-effective way to run most jobs.</para></li><li><para>For the <code>G.4X</code> worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB
/// of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per
/// worker. We recommend this worker type for jobs whose workloads contain your most demanding
/// transforms, aggregations, joins, and queries. This worker type is available only for
/// Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions:
/// US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore),
/// Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt),
/// Europe (Ireland), and Europe (Stockholm).</para></li><li><para>For the <code>G.8X</code> worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB
/// of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per
/// worker. We recommend this worker type for jobs whose workloads contain your most demanding
/// transforms, aggregations, joins, and queries. This worker type is available only for
/// Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions
/// as supported for the <code>G.4X</code> worker type.</para></li><li><para>For the <code>Z.2X</code> worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB
/// of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers
/// based on the autoscaler.</para></li></ul>
/// </para>
/// </summary>
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
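A similar sketch for New-GLUESession, requesting an interactive Spark session on G.1X workers. The session id and role ARN are placeholders, and the -Command shape shown (an Amazon.Glue.Model.SessionCommand object) is an assumption; check Get-Help New-GLUESession for the exact parameter set.

```powershell
# Placeholder session id and role ARN. The -Command shape is an assumption;
# verify with Get-Help New-GLUESession -Detailed.
$sessionCommand = New-Object Amazon.Glue.Model.SessionCommand
$sessionCommand.Name          = 'glueetl'   # Spark session
$sessionCommand.PythonVersion = '3'

New-GLUESession -Id 'example-interactive-session' `
                -Role 'arn:aws:iam::123456789012:role/ExampleGlueSessionRole' `
                -Command $sessionCommand `
                -GlueVersion '4.0' `
                -WorkerType 'G.1X' `
                -NumberOfWorker 2
```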
35 changes: 24 additions & 11 deletions modules/AWSPowerShell/Cmdlets/Glue/Basic/Start-GLUEJobRun-Cmdlet.cs
@@ -175,17 +175,30 @@ public partial class StartGLUEJobRunCmdlet : AmazonGlueClientCmdlet, IExecutor
/// <summary>
/// <para>
/// <para>The type of predefined worker that is allocated when a job runs. Accepts a value of
/// Standard, G.1X, G.2X, or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.</para><ul><li><para>For the <code>Standard</code> worker type, each worker provides 4 vCPU, 16 GB of memory
/// and a 50GB disk, and 2 executors per worker.</para></li><li><para>For the <code>G.1X</code> worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of
/// memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker
/// type for memory-intensive jobs.</para></li><li><para>For the <code>G.2X</code> worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of
/// memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker
/// type for memory-intensive jobs.</para></li><li><para>For the <code>G.025X</code> worker type, each worker maps to 0.25 DPU (2 vCPU, 4 GB
/// of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker
/// type for low volume streaming jobs. This worker type is only available for Glue version
/// 3.0 streaming jobs.</para></li><li><para>For the <code>Z.2X</code> worker type, each worker maps to 2 DPU (8 vCPU, 64 GB of
/// memory, 128 GB disk), and provides up to 8 Ray workers (one per vCPU) based on the
/// autoscaler.</para></li></ul>
/// G.1X, G.2X, G.4X, G.8X or G.025X for Spark jobs. Accepts the value Z.2X for Ray jobs.</para><ul><li><para>For the <code>G.1X</code> worker type, each worker maps to 1 DPU (4 vCPUs, 16 GB of
/// memory) with 84GB disk (approximately 34GB free), and provides 1 executor per worker.
/// We recommend this worker type for workloads such as data transforms, joins, and queries,
/// and offers a scalable and cost-effective way to run most jobs.</para></li><li><para>For the <code>G.2X</code> worker type, each worker maps to 2 DPU (8 vCPUs, 32 GB of
/// memory) with 128GB disk (approximately 77GB free), and provides 1 executor per worker.
/// We recommend this worker type for workloads such as data transforms, joins, and queries,
/// and offers a scalable and cost-effective way to run most jobs.</para></li><li><para>For the <code>G.4X</code> worker type, each worker maps to 4 DPU (16 vCPUs, 64 GB
/// of memory) with 256GB disk (approximately 235GB free), and provides 1 executor per
/// worker. We recommend this worker type for jobs whose workloads contain your most demanding
/// transforms, aggregations, joins, and queries. This worker type is available only for
/// Glue version 3.0 or later Spark ETL jobs in the following Amazon Web Services Regions:
/// US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore),
/// Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt),
/// Europe (Ireland), and Europe (Stockholm).</para></li><li><para>For the <code>G.8X</code> worker type, each worker maps to 8 DPU (32 vCPUs, 128 GB
/// of memory) with 512GB disk (approximately 487GB free), and provides 1 executor per
/// worker. We recommend this worker type for jobs whose workloads contain your most demanding
/// transforms, aggregations, joins, and queries. This worker type is available only for
/// Glue version 3.0 or later Spark ETL jobs, in the same Amazon Web Services Regions
/// as supported for the <code>G.4X</code> worker type.</para></li><li><para>For the <code>G.025X</code> worker type, each worker maps to 0.25 DPU (2 vCPUs, 4
/// GB of memory) with 84GB disk (approximately 34GB free), and provides 1 executor per
/// worker. We recommend this worker type for low volume streaming jobs. This worker type
/// is only available for Glue version 3.0 streaming jobs.</para></li><li><para>For the <code>Z.2X</code> worker type, each worker maps to 2 M-DPU (8 vCPUs, 64 GB
/// of memory) with 128 GB disk (approximately 120GB free), and provides up to 8 Ray workers
/// based on the autoscaler.</para></li></ul>
/// </para>
/// </summary>
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
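Because Start-GLUEJobRun accepts the same WorkerType values, the type stored on the job definition can be overridden for a single run. The job name below is a placeholder; per the documentation above, G.025X applies only to Glue 3.0 streaming jobs.

```powershell
# Placeholder job name. -WorkerType and -NumberOfWorker override the values
# stored on the job definition for this run only.
$jobRunId = Start-GLUEJobRun -JobName 'example-streaming-job' `
                             -WorkerType 'G.025X' `
                             -NumberOfWorker 2

# Check the run's status afterwards using the returned run id.
Get-GLUEJobRun -JobName 'example-streaming-job' -RunId $jobRunId
```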
modules/AWSPowerShell/Cmdlets/RDS/Basic/Get-RDSBlueGreenDeployment-Cmdlet.cs
@@ -28,7 +28,7 @@
namespace Amazon.PowerShell.Cmdlets.RDS
{
/// <summary>
/// Returns information about blue/green deployments.
/// Describes one or more blue/green deployments.
///
///
/// <para>
@@ -52,8 +52,9 @@ public partial class GetRDSBlueGreenDeploymentCmdlet : AmazonRDSClientCmdlet, IE
#region Parameter BlueGreenDeploymentIdentifier
/// <summary>
/// <para>
/// <para>The blue/green deployment identifier. If this parameter is specified, information
/// from only the specific blue/green deployment is returned. This parameter isn't case-sensitive.</para><para>Constraints:</para><ul><li><para>If supplied, must match an existing blue/green deployment identifier.</para></li></ul>
/// <para>The blue/green deployment identifier. If you specify this parameter, the response
/// only includes information about the specific blue/green deployment. This parameter
/// isn't case-sensitive.</para><para>Constraints:</para><ul><li><para>Must match an existing blue/green deployment identifier.</para></li></ul>
/// </para>
/// </summary>
[System.Management.Automation.Parameter(Position = 0, ValueFromPipelineByPropertyName = true, ValueFromPipeline = true)]
@@ -63,7 +64,7 @@ public partial class GetRDSBlueGreenDeploymentCmdlet : AmazonRDSClientCmdlet, IE
#region Parameter Filter
/// <summary>
/// <para>
/// <para>A filter that specifies one or more blue/green deployments to describe.</para><para>Supported filters:</para><ul><li><para><code>blue-green-deployment-identifier</code> - Accepts system-generated identifiers
/// <para>A filter that specifies one or more blue/green deployments to describe.</para><para>Valid Values:</para><ul><li><para><code>blue-green-deployment-identifier</code> - Accepts system-generated identifiers
/// for blue/green deployments. The results list only includes information about the blue/green
/// deployments with the specified identifiers.</para></li><li><para><code>blue-green-deployment-name</code> - Accepts user-supplied names for blue/green
/// deployments. The results list only includes information about the blue/green deployments
@@ -83,7 +84,7 @@ public partial class GetRDSBlueGreenDeploymentCmdlet : AmazonRDSClientCmdlet, IE
/// <summary>
/// <para>
/// <para>An optional pagination token provided by a previous <code>DescribeBlueGreenDeployments</code>
/// request. If this parameter is specified, the response includes only records beyond
/// request. If you specify this parameter, the response only includes records beyond
/// the marker, up to the value specified by <code>MaxRecords</code>.</para>
/// </para>
/// <para>
@@ -101,7 +102,7 @@ public partial class GetRDSBlueGreenDeploymentCmdlet : AmazonRDSClientCmdlet, IE
/// <para>
/// <para>The maximum number of records to include in the response. If more records exist than
/// the specified <code>MaxRecords</code> value, a pagination token called a marker is
/// included in the response so you can retrieve the remaining results.</para><para>Default: 100</para><para>Constraints: Minimum 20, maximum 100.</para>
/// included in the response so you can retrieve the remaining results.</para><para>Default: 100</para><para>Constraints:</para><ul><li><para>Must be a minimum of 20.</para></li><li><para>Can't exceed 100.</para></li></ul>
/// </para>
/// </summary>
[System.Management.Automation.Parameter(ValueFromPipelineByPropertyName = true)]
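The Get-RDSBlueGreenDeployment parameters documented above can be exercised either with a specific identifier or with a filter plus the paging limits. The identifier and deployment name below are placeholders.

```powershell
# Placeholder identifier (the bgd-... value assigned when the deployment was created).
Get-RDSBlueGreenDeployment -BlueGreenDeploymentIdentifier 'bgd-0123456789abcdef0'

# Or filter by the user-supplied deployment name, capping the page size at the
# documented maximum of 100 records.
$filter = New-Object Amazon.RDS.Model.Filter
$filter.Name   = 'blue-green-deployment-name'
$filter.Values = @('example-deployment')
Get-RDSBlueGreenDeployment -Filter $filter -MaxRecord 100
```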