Pipelines to detect and remediate misconfigurations in AWS resources.
The Docker daemon must be installed and running. Please see Install Docker Engine for more information.
Download and install Flowpipe (https://flowpipe.io/downloads) and Steampipe (https://steampipe.io/downloads). Or use Brew:
brew install turbot/tap/flowpipe
brew install turbot/tap/steampipe
Install the AWS plugin with Steampipe:
steampipe plugin install aws
Steampipe will automatically use your default AWS credentials. Optionally, you can set up multiple accounts or customize AWS credentials.
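For example, a minimal multi-account setup in ~/.steampipe/config/aws.spc might look like the following sketch; the connection names and AWS profiles shown here are placeholders, so adjust them to match your own environment:
# Each connection maps to an AWS CLI profile.
connection "aws_prod" {
  plugin  = "aws"
  profile = "prod"
  regions = ["*"]
}

connection "aws_dev" {
  plugin  = "aws"
  profile = "dev"
  regions = ["us-east-1", "us-west-2"]
}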
Create a connection_import resource to import your Steampipe AWS connections:
vi ~/.flowpipe/config/aws.fpc
connection_import "aws" {
source = "~/.steampipe/config/aws.spc"
connections = ["*"]
}
For more information on importing connections, please see Connection Import.
For more information on connections in Flowpipe, please see Managing Connections.
Install the mod:
mkdir aws-compliance
cd aws-compliance
flowpipe mod install github.com/turbot/flowpipe-mod-aws-compliance
To run your first detection, you'll need to ensure your Steampipe server is up and running:
steampipe service start
To find your desired detection, you can filter the pipeline list output:
flowpipe pipeline list | grep "detect_and_correct"
Then run your chosen pipeline:
flowpipe pipeline run aws_compliance.pipeline.detect_and_correct_s3_buckets_with_block_public_access_disabled
This will run the pipeline and, depending on your configured running mode, perform the relevant action(s). There are three running modes:
- Wizard
- Notify
- Automatic
Wizard is the default running mode, allowing for a hands-on approach to approving changes to resources by prompting for input for each detected resource.
While the out-of-the-box default is to prompt directly in the terminal, you can use Flowpipe server and external integrations to prompt via HTTP, Slack, Microsoft Teams, etc.
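As an illustrative sketch only (the integration name, token, and channel below are placeholders, not part of this mod), a Slack integration and notifier in your Flowpipe configuration could look like:
# Define the Slack integration used to send prompts.
integration "slack" "compliance_slack" {
  token = "xoxp-your-slack-token"
}

# A notifier that routes approval prompts to a Slack channel.
notifier "slack_approvers" {
  notify {
    integration = integration.slack.compliance_slack
    channel     = "#aws-compliance"
  }
}
You could then point the mod's notifier and approvers variables at this notifier; check the mod's variable definitions for the exact names and types.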
Notify mode, as the name implies, is used purely to report detections via notifications for each detected resource: either directly to your terminal when running in client mode, or via another configured notifier when running in server mode.
To run in notify mode, you will need to set the approvers variable to an empty list [] and ensure the resource-specific default_action variable is set to notify, either in your flowpipe.fpvars file:
approvers = []
s3_buckets_with_block_public_access_disabled_default_action = "notify"
or pass the approvers and default_action arguments on the command line:
flowpipe pipeline run aws_compliance.pipeline.detect_and_correct_s3_buckets_with_block_public_access_disabled --arg='default_action=notify' --arg='approvers=[]'
Automatic mode allows for a hands-off approach to remediating resources.
To run in automatic mode, you will need to set the approvers variable to an empty list [] and the resource-specific default_action variable to one of the available options in your flowpipe.fpvars file:
approvers = []
s3_buckets_with_block_public_access_disabled_default_action = "block_public_access"
or pass the approvers and default_action arguments on the command line:
flowpipe pipeline run aws_compliance.pipeline.detect_and_correct_s3_buckets_with_block_public_access_disabled --arg='approvers=[]' --arg='default_action=block_public_access'
To further enhance this approach, you can enable the pipeline's corresponding query trigger to run completely hands-off.
Note: Query triggers require Flowpipe running in server mode.
Each detect_and_correct pipeline comes with a corresponding query trigger. These are disabled by default, allowing you to enable and schedule them as desired.
Let's begin by looking at how to set up a query trigger to automatically resolve our S3 buckets that do not block public access.
First, we need to update our flowpipe.fpvars file to add or update the following variables, if we want to run our remediation hourly and automatically apply the corrections:
s3_buckets_with_block_public_access_disabled_trigger_enabled = true
s3_buckets_with_block_public_access_disabled_trigger_schedule = "1h"
s3_buckets_with_block_public_access_disabled_default_action = "block_public_access"
Now we'll need to start up our Flowpipe server:
flowpipe server
The trigger will run every hour, detect S3 buckets that do not block public access, and apply the corrections without further interaction!
Several pipelines have input variables that can be configured to better match your environment and requirements.
Each variable has a default defined in its source file, e.g., s3/s3_buckets_with_default_encryption_disabled.fp (or variables.fp for more generic variables), but these can be overridden in several ways (a sketch of a typical variable definition is shown after the examples below):
The easiest approach is to set up your flowpipe.fpvars file, starting with the sample:
cp flowpipe.fpvars.example flowpipe.fpvars
vi flowpipe.fpvars
flowpipe pipeline run aws_compliance.pipeline.detect_and_correct_s3_buckets_with_block_public_access_disabled
Alternatively, you can pass variables on the command line:
flowpipe pipeline run aws_compliance.pipeline.detect_and_correct_s3_buckets_with_block_public_access_disabled --var notifier=notifier.default
Or through environment variables:
export FP_VAR_notifier="notifier.default"
flowpipe pipeline run aws_compliance.pipeline.detect_and_correct_s3_buckets_with_block_public_access_disabled
For more information, please see Passing Input Variables.
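For reference, a typical variable definition inside the mod's source files follows the standard Flowpipe HCL shape sketched below; the variable name, description, and default are illustrative only, so check the actual .fp files for the real values:
# Illustrative only: an example trigger-schedule variable definition.
variable "s3_buckets_with_block_public_access_disabled_trigger_schedule" {
  type        = string
  description = "If the trigger is enabled, run it on this schedule."
  default     = "15m"
}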
Finally, each detection pipeline has a corresponding query trigger; these are disabled by default, allowing you to configure only those that are required. See the docs for more information.
This repository is published under the Apache 2.0 license. Please see our code of conduct. We look forward to collaborating with you!
Flowpipe and Steampipe are products produced from this open source software, exclusively by Turbot HQ, Inc. They are distributed under our commercial terms. Others are allowed to make their own distribution of the software, but cannot use any of the Turbot trademarks, cloud services, etc. You can learn more in our Open Source FAQ.
Want to help but don't know where to start? Pick up one of the help wanted issues: