# Administrator Guide
The `src` folder contains a nesting of folders that make up two different
git repositories. One of the repositories (bootstrap) lives on your
AWS Management account and is responsible for holding bootstrap AWS
CloudFormation templates for your Organization that are used for setting up
AWS Accounts with a foundation of resources to suit your needs. These
templates are automatically applied to AWS accounts within specific AWS
Organizations Organizational Units.

The (pipelines) repository lives on your deployment account and holds the
various definitions that are used to facilitate deploying your applications
and resources across many AWS Accounts and Regions. These two repositories
will be automatically committed to AWS CodeCommit with the initial starting
content as part of the initial deployment of ADF. From there, you can clone
the repositories and extend the example definitions in them as desired.
The `adfconfig.yml` file resides in the management account's CodeCommit
repository (in `us-east-1`) and defines the general high-level configuration
for the AWS Deployment Framework. The configuration properties are synced
into AWS Systems Manager Parameter Store and are used for certain
orchestration options throughout your Organization.

Below is an example of an `adfconfig.yml` file. When you deploy ADF, the
information entered at the time of deployment will be passed into the
`adfconfig.yml` that is committed to the bootstrap repository as a starting
point. You can always edit it and push it back into the bootstrap repository
to update these values.
```yaml
roles:
  cross-account-access: OrganizationAccountAccessRole

regions:
  deployment-account:
    - eu-central-1
  targets:
    # No need to also include 'eu-central-1' in targets as the
    # deployment-account region is also considered a target region by default.
    - eu-west-1

config:
  main-notification-endpoint:
    - type: email
      target: jane@example.com
  moves:
    - name: to-root
      action: safe
  protected:  # Optional
    - ou-123
  scp:  # Service Control Policy
    keep-default-scp: enabled  # Optional
  scm:  # Source Control Management
    auto-create-repositories: enabled  # Optional
    default-scm-branch: main  # Optional
  org:
    stage: dev  # Optional
```
In the above example the properties are categorized into `roles`, `regions`,
and `config`. These are discussed in the similarly named sections below.
Currently, the only role type specification that the `adfconfig.yml` file
requires is the name of the role you wish to use for cross-account access.
The AWS Deployment Framework requires a role that will be used to initiate
bootstrapping and to allow the management account to assume access in target
accounts to facilitate the bootstrapping and updating processes.

When you create new accounts in your AWS Organization, they all need to be
instantiated with a role of this same name. When creating a new account, the
role you choose as the Organization Access role comes, by default, with
Administrator privileges in IAM.
The regions specification plays an important role in how ADF is laid out.
You should choose a single `deployment-account` region that you would
consider your primary region of choice. Any pipeline that does not configure
a specific target region will default to this primary region only. This
region is also used as the region where the deployment pipelines will be
located.

If you want to deploy to other regions next to the primary region, you
should also define these under `targets`. These target regions will get the
baselines applied and can function as deployment targets for your pipelines.

If you decide to add more regions later, that is also possible. Add a new
region to this list and push the aws-deployment-framework-bootstrap
repository to the management account; the bootstrapping process will then be
applied to all of your existing accounts for the newly added region.

Please note: you cannot have AWS CodePipeline deployment pipelines deploy
into regions that are not part of this list of bootstrapped regions.
In the above example, we want our main region to be the `eu-central-1`
region. This region, along with `eu-west-1`, will be able to have resources
bootstrapped and deployed into via AWS CodePipeline.
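If you were to later add `us-east-1` as an extra target region, the
`regions` section would change as sketched below (the region names here are
illustrative):

```yaml
regions:
  deployment-account:
    - eu-central-1
  targets:
    - eu-west-1
    # Newly added region: pushing this change to the
    # aws-deployment-framework-bootstrap repository bootstraps all
    # existing accounts in us-east-1 as well.
    - us-east-1
```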
Config has the following components: `main-notification-endpoint`, `moves`,
`protected`, `scp`, `scm`, `deployment-maps`, and `org`.
- `main-notification-endpoint` is the main notification endpoint for the
  bootstrapping pipeline and the pipeline-creation pipeline on the
  deployment account. This value should be a valid email address or Slack
  channel that will receive updates about the status (success/failure) of
  the CodePipeline pipelines associated with bootstrapping and the
  creation/updating of all pipelines throughout your organization.
- `moves` is configuration related to moving accounts within your AWS
  Organization. Currently the only configuration option for `moves` is named
  `to-root` and allows either `safe` or `remove_base`.

  If you specify `safe`, you are telling the framework that when an AWS
  Account is moved from whichever OU it is currently in, back into the root
  of the Organization, it will not make any direct changes to the account.
  However, it will update any AWS CodePipeline pipelines that the account
  belonged to, so that it is no longer a valid target.

  If you specify `remove_base` for this option and move an account to the
  root of your organization, it will attempt to remove the base
  CloudFormation stacks (regional and global) from the account and then
  update any associated pipelines.
- `protected` is a configuration that allows you to specify a list of OUs
  that are not configured by the AWS Deployment Framework bootstrapping
  process. You can move accounts into the protected OUs, which will skip the
  standard bootstrapping process. This is useful for migrating existing
  accounts into being managed by ADF.
- `scp` allows the definition of configuration options that relate to
  Service Control Policies. Currently the only option for `scp` is
  `keep-default-scp`, which can either be `enabled` or `disabled`. This
  option determines whether the default `FullAWSAccess` Service Control
  Policy should stay attached to OUs that are managed by an `scp.json`, or
  whether it should be removed to make way for a more specific SCP. By
  default this is `enabled`.

  It is important to understand how SCPs work before setting this option to
  `disabled`. Please read How SCPs work for more information.
- `scm` tracks all source code management configuration.
  - `auto-create-repositories` enables the automatic creation of AWS
    CodeCommit repositories when creating a new pipeline via ADF. This
    option is only relevant if you are using the `source_account_id`
    parameter in your pipeline parameters. If this value is `enabled`, ADF
    will automatically create the AWS CodeCommit repository on the source
    account with the same name as the associated pipeline. If the repository
    already exists on the source account, this process will continue
    silently.

    If you wish to update/alter the CloudFormation template used to create
    this repository, it can be found at
    `${bootstrap_repository}/adf-build/shared/repo_templates/codecommit.yml`.
  - `default-scm-branch` allows you to configure the default branch that
    should be used with all source-code management platforms that ADF
    supports. For any new installation of the AWS Deployment Framework, this
    will default to `main`, as this is the default branch used by
    CodeCommit.

    If you have a pre-existing installation of ADF and did not specifically
    configure this property, it will default to `master` instead for
    backward compatibility. We recommend configuring the default SCM branch
    name to `main`, as new repositories will most likely use this branch
    name as their default branch.
- `deployment-maps` holds configuration related to the deployment maps.
  - `allow-empty-target`, when set to `enabled`, allows you to configure
    deployment maps with empty targets.

    If all targets evaluate to empty, the ADF pipeline is still created
    based on the remaining providers (e.g. source and build); it just does
    not have a deploy stage. This is useful when you need to:

    - target an OU that does not have any AWS Accounts (initially or
      temporarily).
    - target AWS Accounts by tag with no AWS Accounts having that tag
      assigned (yet).
- `org` configures settings in case of staged multi-organization ADF
  deployments.
  - `stage` defines the AWS Organization stage in case of staged
    multi-organization ADF deployments. This is an optional setting. In
    enterprise-grade deployments, it is a common practice to define explicit
    dev, int and prod AWS Organizations, each with its own ADF instance per
    AWS Organization. This approach allows for well-tested and stable prod
    AWS Organization deployments. If set, a matching SSM parameter
    `/adf/org/stage` gets created in the deployment and all target accounts.
    You can reference it within your buildspec files to allow for
    org-specific deployments, without hardcoding the AWS Organization stage
    in your buildspec. If this variable is not set, the SSM parameter
    `/adf/org/stage` defaults to `none`. More information about setting up
    ADF with multiple AWS Organizations can be found in the
    Multi-Organization Guide. A sketch of referencing this parameter from a
    buildspec follows this list.
  - `default-scm-codecommit-account-id` allows you to configure the default
    account id that should be used with all source-code management platforms
    that ADF supports. If not set here, the deployment account id is taken
    as the default value. The CodeCommit account id can still be overwritten
    with an explicit account id in the individual deployment map. The
    CodeCommit section of the providers guide (providers-guide.md) provides
    more details.
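As an illustration, a buildspec can resolve the stage parameter through
CodeBuild's Parameter Store integration. This is a minimal sketch; the
environment variable name `ADF_ORG_STAGE` is an arbitrary choice for this
example, not something ADF defines:

```yaml
version: 0.2
env:
  parameter-store:
    # Resolves to dev, int, prod, or "none", depending on the
    # org stage configured in adfconfig.yml.
    ADF_ORG_STAGE: /adf/org/stage
phases:
  build:
    commands:
      - echo "Building for organization stage ${ADF_ORG_STAGE}"
```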
The management account is the owner of the AWS Organization. This account is
the only account that is allowed to access AWS Organizations and make
changes such as creating accounts or moving accounts within the
Organization. Because of this, it is important that you keep the account
safe and well structured when it comes to IAM access and controls.
The AWS Deployment Framework deploys minimal resources into the management
account to allow processes such as bootstrapping and to take advantage of
changes of structure within your AWS Organization. The CodeCommit repository
on the management account holds the bootstrapping templates and parameters,
along with the `adfconfig.yml` file, which defines how the framework
orchestrates certain tasks.
Any changes to resources such as `adfconfig.yml` or any bootstrapping
template should be done via a pull request, and the repository should apply
strict IAM permissions surrounding the merging of requests. This account is
typically managed by some sort of centrally governed team who can translate
business requirements or policies for Organizational Units into technical
artifacts, in the form of SCPs or CloudFormation templates, that ADF will
apply.
The Deployment Account is the gatekeeper for all deployments throughout an
Organization. Once the baselines have been applied to your accounts via the
bootstrapping process, the Deployment account connects the dots by taking
source code and resources from a repository (e.g. CodeCommit, S3, or
external via AWS CodeConnections or an AWS CodeStar Connection) and
deploying them into the numerous target accounts and regions, as defined in
the deployment map files, via AWS CodePipeline.
The Deployment account holds the `deployment_map.yml` file(s), which define
where, what and how your resources will go from their source to their
destination. In an Organization there should only be a single Deployment
account. This promotes transparency throughout an organization and reduces
duplication of code and resources.

With a single Deployment Account, teams can see the status of other teams'
deployments while still being restricted to making changes to the deployment
map files via pull requests.
The default Deployment account region is the region where the pipelines you
create and their associated stacks will reside. It is also the region that
will host CodeCommit repositories (if you choose to use CodeCommit). You can
think of the Deployment account region as the region that you would consider
your default region of choice when deploying resources in AWS.
ADF enables automated AWS Account creation and management via its Account
Provisioning process. This process runs as part of the bootstrap pipeline
and ensures the existence of AWS Accounts defined in `.yml` files within the
`adf-accounts` directory. For more information on how this works, see the
README.md in the `adf-accounts` directory of the bootstrap repository.
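As a sketch of what such an account definition could look like (the field
values below are illustrative; refer to the `adf-accounts` README for the
authoritative schema):

```yaml
accounts:
  # Creates the account if it does not exist yet and moves it into
  # the given OU path.
  - account_full_name: my-dev-account
    organizational_unit_path: /banking/dev
    email: dev-account@example.com
    allow_billing: false
    delete_default_vpc: true
    tags:
      - created_by: adf
```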
**Note on provisioning AWS Accounts via AWS Control Tower:** ADF is fully
compatible with AWS Control Tower. If you deployed ADF and AWS Control Tower
in your AWS Organization and you opted for vending AWS Accounts via AWS
Control Tower, you can ignore the ADF Account Provisioning feature. Any AWS
Account vended via AWS Control Tower will go through the regular ADF
bootstrap process described below.
The Bootstrapping of an AWS Account is a concept that allows you to specify an AWS CloudFormation template that will automatically be applied on any account that is moved into a specific Organizational Unit in AWS Organizations.
Bootstrapping of AWS accounts is a convenient way to apply a baseline to an account or sub-set of accounts based on the structure of your AWS Organization.
When deploying ADF, a CodeCommit repository titled
`aws-deployment-framework-bootstrap` will be created. This repository acts
as an entry point for bootstrapping templates. The definition of which
templates are applied to which Organizational Unit is defined in the
`adf-bootstrap` folder structure of the `aws-deployment-framework-bootstrap`
repository.
Inside the `adf-bootstrap` folder of the `aws-deployment-framework-bootstrap`
repository, create a folder structure and associated CloudFormation
templates `global-iam.yml` or `regional.yml`, and optional parameters
`global-iam-params.json` or `regional-params.json`, that match your desired
specificity when bootstrapping your AWS Accounts.
Please be aware: you should not create a copy of the `global.yml` file that
lives in the `adf-bootstrap` folder. ADF will automatically navigate up
through the directory structure to find the `global.yml`. If you were to
create a copy of `global.yml`, it would not get updated by ADF
automatically.

Additionally, it is important to note that you should not make changes to
the `global.yml` file. Changes in this file will be overwritten when a new
version of ADF is installed. If you made changes, you would need to
carefully apply them again with every update. Instead, it is suggested to
use the `global-iam.yml` as described above.
Commit and push this repository to the CodeCommit repository titled
`aws-deployment-framework-bootstrap` on the management account.
- `global.yml` in the base of the `adf-bootstrap` folder is required, as it
  functions as the base configuration for ADF. Do not create copies of this
  file. The only two locations where the `global.yml` is allowed to exist
  are `adf-bootstrap/global.yml` and `adf-bootstrap/deployment/global.yml`.
  If you must deploy infrastructure in the global region only, please use
  the `global-iam.yml` file name instead. This file will not get any
  automated updates by ADF either, so changes persist after updates of ADF.
- `regional.yml` is optional; it deploys specific resources in each
  ADF-enabled region.
Pushing to this repository will initiate AWS CodePipeline to run, which will
in turn start AWS CodeBuild to sync the contents of the repository with S3.
Once the files are in S3, moving an AWS Account into a specific
Organizational Unit will trigger AWS Step Functions to run and to apply the
bootstrap template for that specific Organizational Unit to that newly moved
account.

Any changes made to this repository in the future, such as its template
contents or parameter files, will trigger an update to any bootstrap
template applied on accounts throughout the Organization.
You can see these changes in CodePipeline. When you install/update ADF, it
creates a CodePipeline bootstrap pipeline named
`aws-deployment-framework-bootstrap-pipeline`.
When bootstrapping AWS Accounts, you might have a large number of accounts
that are all required to have the same baseline applied to them. For this
reason, we use a recursive search concept to apply base stacks (and
parameters) to accounts. Let's take the following example:
```
adf-bootstrap <------------------ This folder lives in the bootstrap repo within
│                                 the management account.
├───deployment
│   ├───global.yml
│   └───regional.yml
│
├───banking
│   │
│   ├───dev
│   │   ├───regional.yml
│   │   └───global.yml
│   ├───test
│   │   ├───regional.yml
│   │   └───global.yml
│   └───prod
│       ├───regional.yml
│       └───global.yml
│
└───insurance
    │
    ├───dev
    │   ├───regional.yml
    │   └───global.yml
    ├───test
    │   ├───regional.yml
    │   └───global.yml
    └───prod
        ├───regional.yml
        └───global.yml
```
In the above example, we have defined a different global and regional
configuration for each of the OUs under our business units (`insurance` and
`banking`). This means that any account we move into these OUs will apply
the most specific template that it can.

However, if we decided that `dev` and `test` should have the same base
template, we can change the structure to be as follows:
```
adf-bootstrap <-- This folder lives in the bootstrap repo on management account
├───regional.yml <== These will be used when specificity is lowest
├───global.yml
│
├───deployment
│   ├───regional.yml
│   └───global.yml
│
├───banking
│   └───prod
│       ├───regional.yml
│       └───global.yml
│
└───insurance
    └───prod
        ├───regional.yml
        └───global.yml
```
What we have done is removed two folder representations of the
Organizational Units and moved our generic `test` and `dev` base stacks into
the root of our repository (as a single template). This means that any
account moved into an OU that cannot find its `regional.yml` or `global.yml`
will recursively search one level up until it reaches the root. This same
logic is applied for parameter files such as `regional-params.json` and
`global-params.json` (if they exist).
When it comes to bootstrapping accounts with certain resources, you may need
some of these resources to be regionally defined, such as VPCs, subnets, S3
Buckets, etc. The optional `regional.yml` and associated
`regional-params.json` files allow you to define what will be bootstrapped
at a regional level. This means each region you specify in your
`adfconfig.yml` will receive this base stack. The same applies to updating
the base stacks: when a new change comes in for any base template, it will
be updated for each of the regions specified in your `adfconfig.yml`.
The Deployment account has a minimum required `regional.yml` file that is
part of the initial commit of the skeleton content. More can be added to
this template as required; however, the existing resources should not be
removed.
Global bootstrapping is similar to the regional bootstrapping process. In
this case, however, the resources are deployed at a global level. Resources
such as IAM roles and other account-wide resources should be placed in the
`global.yml` and optional associated parameters in `global-params.json` in
order to have them applied to accounts.
Any global stack is deployed in the same region you choose to deploy your deployment account into.
Global stacks are deployed and updated first whenever the bootstrapping or
updating processes occur, to allow for any exports to be defined prior to
regional stacks executing. The Deployment account has a minimum required
`global.yml` file that is included as part of the initial commit. More can
be added to this template as required; however, the default resources should
not be removed.
When you set up the initial configuration for the AWS Deployment Framework,
you define your initial parameters; a subset of these gets placed into the
`adfconfig.yml`. This file defines the regions you will use not only for
bootstrapping, but also which regions will later be used as targets for
deployment pipelines. Be sure to read the section on adfconfig to understand
how this ties in with bootstrapping.
We recommend keeping the bootstrapping templates for your accounts as light
as possible and only including the absolute essentials for a given OU's
baseline. The provided default `global.yml` contains the minimum resources
for the framework's functionality (roles), which can be built upon if
required.
The name you specify in the `deployment_map.yml` (or other map files) will
be automatically linked to a repository of the same name (in the source
account you chose), so be sure to name your pipeline in the map correctly.
The notification endpoint is simply an endpoint on which you will receive
updates (via Slack or email) when this pipeline has state changes.
The `source_account_id` plays an important role by linking this specific
pipeline to a specific account from which it can receive resources. For
example, let's say we are in a team that deploys the AWS CloudFormation
template that contains the base networking and security to the Organization.
In this case, this team may have their own AWS account which is completely
isolated from the team that develops applications for the banking sector of
the company.

The pipeline for this CloudFormation template should only ever be triggered
by changes on the repository in that specific team's account. In this case,
the AWS Account Id of the team that is responsible for this specific
CloudFormation template will be entered in the parameters as the
`source_account_id` value.
When you enter the `source_account_id` in the `deployment_map.yml`, you are
saying that this pipeline can only receive content (trigger a run) from a
change on that specific repository, in that specific account, and on a
specific branch (which defaults to the `config/scm/default-scm-branch` value
in `adfconfig.yml`). This stops other accounts from creating repositories
that might push code down an unintended pipeline, since every pipeline maps
to only one source.

Another common parameter used in pipelines is `restart_execution_on_update`.
When this is set to `True`, it will automatically trigger a run of the
pipeline whenever its structure is updated. This is useful when you want a
pipeline to automatically execute when a new account is moved into an
Organizational Unit that the pipeline deploys into, such as for foundational
resources (e.g. VPC, IAM, CloudFormation resources).
```yaml
pipelines:
  # The CodeCommit repository on the source account would need to have this
  # name.
  - name: vpc
    default_providers:
      source:
        provider: codecommit
        properties:
          # This team's AWS account is the only one able to push into this
          # pipeline.
          account_id: 111111111111
    targets:
      - /security  # Shorthand target example
```
Here is an example of passing a parameter into a pipeline to override the
default branch that is used to trigger the pipeline, this time using an AWS
CodeConnections link to Bitbucket, GitHub, or GitLab as a source (no need
for `source_account_id`).
```yaml
pipelines:
  - name: vpc  # The GitHub repo would have this name
    default_providers:
      source:
        provider: codeconnections
        properties:
          branch: dev/feature
          # Optional, the name property will be used if repository is not
          # specified.
          repository: example-vpc
          owner: example-owner
          # The Code Connection ARN should be stored inside an AWS Systems
          # Manager Parameter Store parameter.
          # The parameter key can have any name, as long as it starts
          # with /adf/. You need to create this parameter manually
          # in the deployment region in the deployment account once.
          #
          # It is recommended to add a tag like CreatedBy with the user that
          # created it, so it is clear this parameter is not managed by ADF
          # itself.
          code_connection_path: /adf/my_aws_codeconnections_param
    targets:
      - /security  # Shorthand example
```
Note: if you find yourself specifying the same set of parameters over and
over throughout the deployment map, consider using YAML anchors and aliases.
Along with pipeline parameters, there can potentially be stage parameters if
required. Take, for example, the following pipeline that performs ETL
processing on workloads as they land in one Amazon S3 Bucket and, after
being transformed, moves them into new Buckets in other AWS Accounts.
```yaml
- name: sample-etl
  default_providers:
    source:
      provider: s3
      properties:
        source_account_id: 111111111111
        bucket_name: banking-etl-bucket-source
        object_key: input.zip
    deploy:
      provider: s3
  tags:
    owner: john
  targets:
    - path: 222222222222
      regions: eu-west-1
      properties:
        bucket_name: account-blah-bucket-etl
        object_key: some_path/output.zip
    - path: 333333333333
      properties:
        bucket_name: business_unit_bucket-etl
        object_key: another/path/output.zip
```
In this example, we want to take our `input.zip` file from the Amazon S3
Bucket `banking-etl-bucket-source` that lives in the account `111111111111`.
When the data lands in the bucket, we want to run our pipeline, which will
run some ETL scripts (see the samples folder) against the data and output
`output.zip`. We then want to deploy this artifact into other Amazon S3
Buckets that live in other AWS Accounts and potentially other AWS Regions.
Since bucket names are globally unique, we need some way to define which
bucket we want to deploy our `output.zip` into at a stage level. The way we
accomplish this is by passing `properties` in the form of key/value pairs
into the stage itself.
Please note: this is the preferred method to set up external sources. If you
have configured an AWS CodeStar Connection before and wonder how to set it
up again, please read the AWS CodeStar Connection steps.

Prerequisite: to enable AWS CodeConnections to be used, the following steps
are required:
- Navigate to the `aws-deployment-framework-bootstrap` repository,
  specifically the `/adf-bootstrap/deployment/` folder (notice the
  `deployment` OU folder at the end).
- There should be a `global-iam.yml` file in that folder. If not, please
  rename or copy the `example-global-iam.yml` file to `global-iam.yml` to
  proceed.
- Inside the `global-iam.yml` file, ensure the CloudFormation resource named
  `CodeConnectionsPolicy` is no longer commented out.
Important note: the `CodeConnectionsPolicy` IAM policy is a sample. Please
make sure you update this policy and scope it properly for the use cases you
want to support.
In order for a pipeline to be connected to Bitbucket, GitHub, or GitLab, you
will need to set up AWS CodeConnections first. Please follow the steps as
described in the AWS Developer Tools documentation on how to set up a new
connection with your code repository.
Once the connection is created you can store the Connection ARN into the Deployment Account with AWS Systems Manager Parameter Store.
Before you proceed, please check the Connection ARN of the connection you
configured. Depending on the method and creation time of the connection, it
might have created a CodeStar Connection instead. If it did, the ARN will
include the `codestar` keyword. If so, please proceed with the steps
described in the AWS CodeStar Connection section first before you continue.
Please use the `/adf/` prefix for this parameter, for example
`/adf/my_source_connection_param`, as ADF has read access to parameters that
start with `/adf/`.
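For example, a minimal sketch of creating this parameter with the AWS CLI,
run once in the deployment account in the deployment region (the parameter
name and connection ARN below are placeholders):

```bash
# Store the Connection ARN so ADF pipelines can look it up at /adf/...
aws ssm put-parameter \
  --name "/adf/my_source_connection_param" \
  --type "String" \
  --value "arn:aws:codeconnections:eu-west-1:111111111111:connection/11111111-2222-3333-4444-555555555555" \
  --tags "Key=CreatedBy,Value=jane"
```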
Once the values are stored, you can create the repository in your external
source provider (Bitbucket, GitHub, or GitLab) as per normal. Once the
repository is ready, no further steps are required on the external source
provider's side; just update your deployment map to use the new source type
and push to the deployment account. Here is an example of a deployment map
with a single pipeline from an external source provider; in this case, the
external repository must be named `vpc`.
```yaml
pipelines:
  - name: vpc
    default_providers:
      source:
        provider: codeconnections
        properties:
          # Optional, the name property will be used if repository is not
          # specified.
          repository: example-vpc
          owner: awslabs
          # The path in AWS Systems Manager Parameter Store that holds the
          # Connection ARN.
          # Please note, by default ADF only has access to read /adf/
          # parameters. You need to create this parameter manually
          # in the deployment region in the deployment account once.
          #
          # It is recommended to add a tag like CreatedBy with the user that
          # created it, so it is clear this parameter is not managed by ADF
          # itself.
          #
          # Example content of the parameter, a plain ARN as a simple string:
          # arn:aws:codeconnections:eu-west-1:111111111111:connection/11111111-2222-3333-4444-555555555555
          codeconnections_param_path: /adf/my_github_connection_arn_param
    targets:
      - /security
```
Please note: only proceed with the steps in this document if you have an
existing AWS CodeStar Connection you would like to maintain. With the
announcement of the AWS CodeStar Connection to AWS CodeConnections name
change, the preferred method to link GitHub, GitLab, Bitbucket, and other
sources is AWS CodeConnections. You do not need to replace the AWS CodeStar
Connection with an AWS CodeConnections resource if you have one already.
According to the service documentation, it will continue to be supported via
the new AWS CodeConnections API without requiring further changes in ADF's
config or the deployment maps.
If you are about to setup a new connection to an external source code provider, please consider following the AWS CodeConnections steps instead.
Prerequisite: to enable an AWS CodeStar Connection to be used, the following
steps are required:
- Navigate to the `aws-deployment-framework-bootstrap` repository,
  specifically the `/adf-bootstrap/deployment/` folder (notice the
  `deployment` OU folder at the end).
- There should be a `global-iam.yml` file in that folder. If not, please
  rename or copy the `example-global-iam.yml` file to `global-iam.yml` to
  proceed.
- Inside the `global-iam.yml` file, ensure the CloudFormation resource named
  `CodeConnectionsPolicy` is no longer commented out.
- Also make sure the CodeStar actions are no longer commented out.
Important note: the `CodeConnectionsPolicy` IAM policy is a sample. Please
make sure you update this policy and scope it properly for the use cases you
want to support. We recommend that you leave this policy name as
`CodeConnectionsPolicy`, even though you are setting up a CodeStar
Connection. This will make it easier to detect required updates if these are
introduced by future ADF versions.
The remaining steps are the same as configuring an AWS CodeConnections setup. So please follow the next steps as documented in the Using AWS CodeConnections for Bitbucket, GitHub, or GitLab section.
**Please note:** while the AWS CodeConnections source provider name is
`codeconnections`, if the configured connection ARN refers to an AWS
CodeStar Connection, it will set that up instead.
Sometimes it is a requirement to chain pipelines together: when one
finishes, you might want another one to start. For this, we have a
`completion_trigger` property which can be added to any pipeline within your
deployment map files. Chaining works by creating a CloudWatch event that
triggers on completion of the pipeline to start one or many others. For
example:
```yaml
pipelines:
  - name: sample-vpc
    default_providers:
      source: &generic_source
        provider: codecommit
        properties:
          account_id: 111111111111
    # When this pipeline finishes, it will automatically start sample-iam
    # and sample-ecs-cluster at the same time.
    completion_trigger:
      pipelines:
        - sample-iam
        - sample-ecs-cluster
    # Using a YAML anchor, *generic_targets will paste the same value as
    # defined in `targets` here.
    targets: &generic_targets
      - /banking/testing
      - approval
      - /banking/production

  - name: sample-iam
    default_providers:
      source: *generic_source  # Using YAML alias
    targets: *generic_targets  # Using YAML alias

  - name: sample-ecs-cluster
    default_providers:
      source: *generic_source  # Using YAML alias
    targets: *generic_targets  # Using YAML alias
```
Service control policies (SCPs) are one type of policy that you can use to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines.
ADF allows SCPs to be applied in a similar fashion as base stacks. You can
define your SCP definition in a file named `scp.json` and place it in a
folder that represents your Organizational Unit (or OU/AccountName path if
you want to apply an account-specific SCP) within the `adf-bootstrap` folder
of the `aws-deployment-framework-bootstrap` repository on the management
account.
For example, if you have an account named `my_banking_account` under the
`banking/dev` OU that needs a specific SCP, and another SCP defined for the
whole `deployment` OU, the folder structure would look like this:
```
adf-bootstrap <-- This folder lives in the aws-deployment-framework-bootstrap
│                 repository on the management account.
│
├───deployment
│   │
│   └───scp.json
│
└───banking
    │
    └───dev
        │
        └───my_banking_account
            │
            └───scp.json
```
The file `adf-bootstrap/deployment/scp.json` applies the defined SCP to the
`deployment` OU, while the file
`adf-bootstrap/banking/dev/my_banking_account/scp.json` applies the defined
SCP to the `my_banking_account` account.
SCPs are available only in an organization that has all features enabled. Once you have enabled all features within your Organization, ADF can manage and automate the application and updating process of the SCPs.
Tag Policies are a feature that allows you to define rules on how tags can be used on AWS resources in your accounts in AWS Organizations. You can use Tag Policies to easily adopt a standardized approach for tagging AWS resources.
You can define your tagging policy definition in a file named
`tagging-policy.json` and place it in a folder that represents your
Organizational Unit within the `adf-bootstrap` folder of the
`aws-deployment-framework-bootstrap` repository on the management account.
Tagging policies can also be applied to a single account, using the same
approach described above for SCPs.
Tag Policies are available only in an organization that has all features
enabled. Once you have enabled all features within your Organization, ADF
can manage and automate the application and updating process of the Tag
Policies. For more information, see the AWS Organizations documentation on
tag policies.
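As an illustration, a minimal `tagging-policy.json` uses the regular AWS
Organizations tag policy syntax; the tag key and values below are arbitrary
examples:

```json
{
  "tags": {
    "CostCenter": {
      "tag_key": {
        "@@assign": "CostCenter"
      },
      "tag_value": {
        "@@assign": ["100", "200"]
      }
    }
  }
}
```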
ADF allows alternate `notification_endpoint` values that can be used to
notify the status of a specific pipeline (in `deployment_map.yml`). You can
specify an email address in the deployment map, and notifications will be
emailed directly to that address.
However, if you specify a Slack channel name (e.g. `team-bugs`) as the
value, the notifications will be forwarded to that channel. In order to set
up this integration, you will need to create a Slack App.
When you create your Slack App, you can create multiple webhook URLs
(incoming webhooks) that are each associated with their own channel. Create
a webhook for each channel you plan on using throughout your Organization.
Once created, copy the webhook URL and create a new secret in Secrets
Manager on the Deployment Account (a CLI alternative is sketched after these
steps):
- In the AWS Console, click *Store a new secret* and select the type 'Other
  type of secrets' (e.g. API key).
- In the *Secret key/value* tab, enter the channel name (e.g. `team-bugs`)
  in the first field and the webhook URL in the second field.
- In the *Select the encryption key* section, choose `aws/secretsmanager` as
  the encryption key. Click *Next*.
- In *Secret Name*, give the secret a name that maps to the channel that the
  webhook is authorized to send messages to. For example, if you had created
  a webhook for a channel called `team-bugs`, this would be stored in
  Secrets Manager as `/adf/slack/team-bugs`.
- Optionally, enter a description for the secret.
- Click *Next*. Ensure *Disable automatic rotation* is selected. Click
  *Next* again.
- Review the data and then click *Store*.
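Alternatively, a sketch of creating the same secret via the AWS CLI in the
deployment account (the webhook URL below is a placeholder):

```bash
# Omitting --kms-key-id makes Secrets Manager use the default
# aws/secretsmanager encryption key, matching the console steps above.
aws secretsmanager create-secret \
  --name "/adf/slack/team-bugs" \
  --secret-string '{"team-bugs": "https://hooks.slack.com/services/T0000/B0000/XXXXXXXX"}'
```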
Once the value is stored as a secret, it can be used like so:
```yaml
pipelines:
  - name: sample-vpc
    default_providers:
      source:
        provider: codecommit
        properties:
          account_id: 111111111111
    params:
      # This channel will receive pipeline events
      # (success/failures/approvals).
      notification_endpoint: team-bugs
      restart_execution_on_update: True
    targets:
      - path: /banking/testing
        name: testing
      - path: /banking/production
        name: omg_production
```
ADF also supports integrating pipeline notifications with Slack via AWS
ChatBot. This allows pipeline notifications to scale and provides a
consistent Slack notification experience across different AWS services.

In order to use AWS ChatBot, you must first configure an AWS ChatBot client
for your desired Slack workspace. Once the client has been created, you will
need to manually create a channel configuration that will be used by ADF.
Currently, dynamically creating channel configurations is not supported. In
the deployment map, you can configure a unique channel via the notification
endpoint parameter for each pipeline separately. Add the `params` section if
it is missing, and add the following configuration to the pipeline:
```yaml
pipelines:
  - name: some-pipeline
    # ...
    params:
      notification_endpoint:
        type: chat_bot
        target: my_channel_config
```
To determine the current version of ADF that you have deployed, go to the
management account in `us-east-1` and check the CloudFormation stack outputs
or tags of the `serverlessrepo-aws-deployment-framework` stack; a CLI
alternative is sketched after the list below.
- In the Outputs tab, it will show the version as the `ADFVersionNumber`.
- In the tags on the CloudFormation stack, it is presented as `ADFVersion`
  or `serverlessrepo:semanticVersion`.
If you want to check which version is the latest one available, go to the Open Source ADF Repository on GitHub to browse through its releases.
- Follow the build and deploy steps, as documented in the installation guide, steps 1 to 3.
The `serverlessrepo-aws-deployment-framework` stack is updated through this
process with the new changes that were included in that release of ADF.
To check the progress in the management account in `us-east-1`, follow these
steps:

- Go to the CloudFormation console and search for the
  `serverlessrepo-aws-deployment-framework` stack.
- Open the stack named `serverlessrepo-aws-deployment-framework`.
- In the overview of the stack, it reports its current state;
  `UPDATE_COMPLETE` with a recent `Updated time` is what you want to see.
- If it is in progress or if it has not applied the update yet, you can go
  to the `Events` tab to see what is happening and whether any error
  occurred. Use the refresh button at the top right of the table to retrieve
  updates on the stack deployment.

Once finished, you need to merge the pull request after reviewing the
changes, if any are present. There might be changes to some of the
foundational aspects of ADF and how it works (e.g. CDK Constructs); these
changes might need to be applied to the files that live within the bootstrap
repository in your AWS management account too.
To ease this process, the AWS CloudFormation stack will run the
InitialCommit custom CloudFormation resource when updating ADF. This
resource will open a pull request against the default branch (i.e. `main`)
on the bootstrap repository with a set of changes that you can optionally
choose to merge. If those changes are merged into the default branch, the
bootstrap pipeline will run to finalize the update to the latest version.
Which branch is used is determined by describing the CodeCommit repository:
ADF will use the default branch of the repository.

If you want to switch from one branch to another, you only need to create a
new branch from the current default branch. Then navigate to the CodeCommit
repository and update the default branch of the repository to the new
branch. Make sure to click the `Save` button underneath the default branch
setting to save it. Alternatively, you can also perform the update using the
AWS CLI.
In the management account in `us-east-1`:

- Go to the Pull requests section of the `aws-deployment-framework-bootstrap`
  CodeCommit repository.
- There might be a pull request if your `aws-deployment-framework-bootstrap`
  repository has to be updated to apply recent changes of ADF. This would
  show up with the version that you deployed recently, for example `v4.0.0`.
- If there is no pull request, nothing to worry about; in that case, no
  changes were required in your repository for this update. Continue to the
  next step. If there is a pull request, open it and review the changes that
  it proposes. Once reviewed, merge the pull request to continue.
Confirm the `aws-deployment-framework-bootstrap` pipeline in the management
account in `us-east-1`:

- Go to the CodePipeline console for the aws-deployment-framework-bootstrap
  pipeline.
- This should progress and turn up as green. If you did not have to merge
  the pull request in the prior step, feel free to 'Release changes' on the
  pipeline to test it.
- If any of these steps fail, you can click on the `Details` link to get
  more insights into the failure. Please report the step where it failed and
  include a copy of the logs.
The `aws-deployment-framework-bootstrap` pipeline will trigger the account
creation and on-boarding process in parallel. These are managed through Step
Functions state machines.

- Navigate to the AWS Step Functions service in the management account in
  `us-east-1`.
- Check the `AccountManagementStateMachine...` state machine; all recent
  invocations since we performed the update should succeed. It could be the
  case that there are no invocations at all. In that case, wait a minute and
  continue to the next step.
- Check the `AccountBootstrappingStateMachine...` state machine; all recent
  invocations since we performed the update should succeed. It could be the
  case that there are no invocations at all. In that case, wait a minute and
  continue to the next step.
Once the `aws-deployment-framework-bootstrap` pipeline is finished, it will
trigger the `aws-deployment-framework-pipelines` pipeline in the deployment
account in your main region:

- Open your deployment account.
- Make sure you are in the main deployment region, where all your pipelines
  are located.
- Go to the CodePipeline console and search for
  `aws-deployment-framework-pipelines`.
- This should progress and turn up as green. If any of these steps fail, it
  could be that one of your pipelines could not be updated. You can click on
  the `Details` link to get more insights into the failure. Please report
  the step where it failed by opening an issue, and include a copy of the
  logs.
The `aws-deployment-framework-pipelines` pipeline will trigger the creation
of the pipelines as defined in the deployment maps. This process is managed
in an AWS Step Functions state machine.

- Navigate to the AWS Step Functions service in the deployment account in
  your main region.
- Check the `adf-pipeline-management` state machine; all recent invocations
  since we performed the update should succeed.
We need to confirm that the pipelines generated by ADF are fully functional.
To test this, please open one of the generated pipelines in the CodePipeline
service in your deployment account, inside your main region. Release a
change and check the progress of the pipeline to validate that it continues
to work.

If it failed, check the previous execution of the pipeline first, before you
report this as an issue. The pipeline itself might have been broken before
ADF was updated. If so, search for another pipeline that was successful
before and release changes in that one instead.
ADF will automatically detect the default branch that is configured for the CodeCommit repository.
If you want to switch from one branch to another, you need to create a new branch from the current default branch.
For example, switching from `master` to `main`: running these commands from
inside the `aws-deployment-framework-bootstrap` or the
`aws-deployment-framework-pipelines` repository that is cloned locally
should do it:

```bash
# Please note: this assumes that the
# aws-deployment-framework-bootstrap/pipelines repository is configured as
# the origin remote in your local git environment.
git fetch origin
git checkout origin/master
git push origin HEAD:main
```
Navigate to the CodeCommit repository and update the default branch of the
repository to the new branch. Make sure to click the `Save` button
underneath the default branch setting to save it.
Alternatively, you can also perform the update using the AWS CLI.
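For example, a minimal sketch of the equivalent AWS CLI call:

```bash
# Make the new branch the default branch of the bootstrap repository.
aws codecommit update-default-branch \
  --repository-name aws-deployment-framework-bootstrap \
  --default-branch-name main
```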
If you wish to remove ADF, you can delete the CloudFormation stack named
`serverlessrepo-aws-deployment-framework` in the management account in the
`us-east-1` region. This will remove most resources created by ADF in the
management account, with the exception of S3 buckets and SSM parameters. If
you bootstrapped ADF into the management account, you need to manually
remove the bootstrap stacks as well.
Feel free to delete the S3 buckets and the SSM parameters that start with
the `/adf` prefix, as well as the other CloudFormation stacks, such as:
- adf-global-base-bootstrap (in the main deployment region)
- adf-global-base-iam (in the main deployment region)
- adf-regional-base-bootstrap (in every other region configured for ADF)
When these stacks are removed, you can switch to the deployment account. We
need to remove the base stack in the deployment account,
`adf-global-base-deployment`, and any associated regional deployment account
base stacks. After you have deleted these stacks, you can manually remove
any base stacks from accounts that were bootstrapped.
Alternatively, prior to removing the initial
`serverlessrepo-aws-deployment-framework` stack, you can set the moves
section of the `adfconfig.yml` file to `remove_base`, which would
automatically clean up the base stack when the account is moved to the root
of the AWS Organization.
One thing to keep in mind if you are planning to re-install ADF is that you
will want to clean up the parameters from SSM Parameter Store. You can
safely remove all `/adf`-prefixed SSM parameters. Most importantly, you need
to remove the `/adf/deployment_account_id` parameter in `us-east-1` on the
management account, as AWS Step Functions uses this parameter to determine
whether ADF already has a deployment account set up. If you re-install ADF
while this parameter is set to a value, ADF will attempt to assume a role
into that account to configure it, which will fail, since that role will not
be on the account at that point.
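For example, a minimal sketch of removing that parameter with the AWS CLI,
run in the management account in `us-east-1`:

```bash
aws ssm delete-parameter --name "/adf/deployment_account_id"
```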
If you are experiencing an issue with ADF, please follow the guide on Updating Between Versions to check if your latest installation was installed successfully before you continue.
When you need to troubleshoot the installation or upgrade of ADF, please set
the `Log Level` parameter of the ADF stack to `DEBUG`.
There are two ways to enable this:

- If you installed/upgraded to the latest version and that failed, you can
  follow the installation docs. When you are about to deploy the latest
  version again, set the `Log Level` to `DEBUG` to get extra logging
  information about the issue you are experiencing.
- If you are running an older version of ADF, please navigate to the
  CloudFormation console in `us-east-1` of the AWS Management account and
  update the stack.
  - For any ADF deployment of `v3.2.0` and later, change the `Log Level`
    parameter and set it to `DEBUG`. Deploy those changes and revert them
    after you have gathered the information required to report or fix the
    issue.
  - If you are running a version prior to `v3.2.0`, you will need to update
    the template using the CloudFormation Designer. Search for `INFO` and
    replace that with `DEBUG`. Deploy the updated version and reverse this
    process after you have found the logging information you needed to
    report or resolve the issue.
Please trace the failed component and dive into/report the debug information.
The main components to look at are:
- In the AWS Management Account in `us-east-1`:
  - The CloudFormation aws-deployment-framework stack.
  - The CloudWatch Logs for the Lambda functions deployed by ADF.
  - Check whether the CodeCommit pull request to install the latest version
    changes of ADF is merged into your default branch of the
    `aws-deployment-framework-bootstrap` (ADF Bootstrap) repository.
  - The CodePipeline execution of the ADF Bootstrap pipeline.
  - Navigate to the AWS Step Functions service in the management account in
    `us-east-1`. Check the state machines named
    `AccountManagementStateMachine...` and
    `AccountBootstrappingStateMachine...`. Look at recent executions only.
    - When you find one that has a failed execution, check the components
      that are marked orange/red in the diagram.
    - If one failed and you want to trigger it again, you can execute it
      with the `New Execution` button in AWS Step Functions. Or, even
      better, to trigger all executions again, release a change in the ADF
      Bootstrap CodePipeline (aws-deployment-framework-bootstrap).
- In the AWS Deployment Account in the deployment region:
  - The CodePipeline execution of the `aws-deployment-framework-pipelines`
    (ADF Pipelines) repository. Important: the link points to `eu-west-1`;
    please change that to your own deployment region.
  - Navigate to the AWS Step Functions service in the deployment account in
    your main region. Please note, the link points to the `eu-west-1`
    region; please update that to your own deployment region. Check the
    state machines named `adf-pipeline-management`,
    `adf-bootstrap-enable-cross-account`, and
    `adf-pipeline-management-delete-outdated`. Look at recent executions
    only.
    - When you find one that has a failed execution, check the components
      that are marked orange/red in the diagram.
    - If one failed and you want to trigger it again, you can execute it
      with the `New Execution` button in AWS Step Functions. Or, even better
      in the case of `adf-pipeline-management`, trigger all executions again
      by releasing a change in the ADF pipeline generation CodePipeline
      (aws-deployment-framework-pipelines).
Important: if you are about to share any debug information through an issue
on the ADF GitHub repository, please replace:

- the account ids with simple account ids, like `111111111111`,
  `222222222222`, etc.
- the organization id with a simple one, e.g. `o-aa111bb222`.
- the organizational unit identifiers and names.
- the email addresses, by hiding them behind
  `--some-notification-email-address--`.
- the Slack channel identifiers and SNS topics configured, with simplified
  ones.
- the cross-account access role with the default
  `OrganizationAccountAccessRole`.
- the S3 buckets, using simplified bucket names like `example-bucket-1`.
- the Amazon Resource Names (ARNs), as these could also expose information.
Always read what you are about to share carefully to make sure any identifiable or sensitive information is removed.