@@ -1,17 +1,87 @@
# Grafana Dashboard for AWS ParallelCluster

This is a sample solution based on Grafana for monitoring the various components of an HPC cluster built with AWS ParallelCluster.
There are six dashboards that can be used as they are or customized as you need:

* ParallelCluster Stats - the main dashboard, showing general monitoring info and metrics for the whole cluster, including Slurm metrics and storage performance metrics.
* Master Node Details - detailed metrics for the master node, including CPU, memory, network, and storage usage.
* Compute Node List - the list of available compute nodes; each entry is a link to a more detailed page.
* Compute Node Details - the same metrics as the master node dashboard, shown for each compute node.
* Cluster Logs - all the logs of your HPC cluster; AWS ParallelCluster pushes the logs to Amazon CloudWatch Logs, and they are reported here.
* Cluster Costs (beta / in development) - the costs associated with every AWS service used by your cluster, including EC2, EBS, FSx, S3, and EFS.

## AWS ParallelCluster

**AWS ParallelCluster** is an AWS-supported, open-source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters in the AWS cloud.
It automatically sets up the required compute resources and a shared filesystem, and offers a variety of batch schedulers such as AWS Batch, SGE, Torque, and Slurm.

* More info: https://aws.amazon.com/hpc/parallelcluster/
* Source code on GitHub: https://github.com/aws/aws-parallelcluster
* Official documentation: https://docs.aws.amazon.com/parallelcluster/

## Solution components

This project is built with the following components:

* **Grafana** (https://github.com/grafana/grafana) is an open-source platform for monitoring and observability. Grafana allows you to query, visualize, alert on, and understand your metrics, as well as create, explore, and share dashboards, fostering a data-driven culture.
* **Prometheus** (https://github.com/prometheus/prometheus/) is an open-source project for systems and service monitoring from the Cloud Native Computing Foundation (https://cncf.io/). It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
* **Prometheus Pushgateway** (https://github.com/prometheus/pushgateway/) is an open-source tool that allows ephemeral and batch jobs to expose their metrics to Prometheus.
* **Nginx** (http://nginx.org/) is an HTTP and reverse proxy server, a mail proxy server, and a generic TCP/UDP proxy server.
* **Prometheus-Slurm-Exporter** (https://github.com/vpenso/prometheus-slurm-exporter/) is a Prometheus collector and exporter for metrics extracted from the Slurm (https://slurm.schedmd.com/overview.html) resource scheduling system.
* **Node_exporter** (https://github.com/prometheus/node_exporter) is a Prometheus exporter for hardware and OS metrics exposed by \*NIX kernels, written in Go with pluggable metric collectors.

Note: *while almost all components are under the Apache 2.0 license, **Prometheus-Slurm-Exporter is licensed under GPLv3**; be aware of this and accept the license terms before proceeding and installing this component.*

## Example Dashboards

![ParallelCluster](docs/ParallelCluster.png?raw=true "AWS ParallelCluster")

![Master](docs/Master.png?raw=true "Master Node")

![Compute Node List](docs/List.png?raw=true "Compute Node List")

![Logs](docs/Logs.png?raw=true "AWS ParallelCluster Logs")

![Costs](docs/Costs.png?raw=true "Best - AWS ParallelCluster Costs")

## How to use it

You can use the post-install script in this GitHub (http://link/) repo as it is, or customize it as you need. For instance, you might want to change your Grafana password to something more secure and meaningful for you, or customize some dashboards by adding components to monitor.
The post-install script takes care of installing and configuring everything for you. However, a few additional parameters are needed in the AWS ParallelCluster config file: the post_install_args, additional IAM policies, a security group, and a tag. Please note that, at the moment, the post-install script has only been tested on Amazon Linux 2 (https://aws.amazon.com/amazon-linux-2/).

```
base_os = alinux2
post_install = s3://<my-bucket-name>/grafana-post-install.sh
post_install_args = "https://github.com/aws-samples/aws-parallelcluster-monitoring/archive/main.zip"
additional_iam_policies = arn:aws:iam::aws:policy/CloudWatchFullAccess,arn:aws:iam::aws:policy/AWSPriceListServiceFullAccess,arn:aws:iam::aws:policy/AmazonSSMFullAccess,arn:aws:iam::aws:policy/AWSCloudFormationReadOnlyAccess
tags = {"Grafana" : "true"}
```
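
For reference, uploading the post-install script to the bucket referenced above could look like the following (the bucket name is a placeholder; substitute your own):

```
# copy the post-install script to the S3 bucket referenced by post_install
aws s3 cp grafana-post-install.sh s3://<my-bucket-name>/grafana-post-install.sh
```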

Make sure that ports 80 and 443 of your master node are accessible from the internet (or from your network). You can achieve this by creating the appropriate security group via the AWS Web Console or via the AWS Command Line Interface (https://docs.aws.amazon.com/cli/index.html); see the example below:

```
aws ec2 create-security-group --group-name my-grafana-sg --description "Open Grafana dashboard ports" --vpc-id vpc-1a2b3c4d
aws ec2 authorize-security-group-ingress --group-id sg-12345 --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-12345 --protocol tcp --port 80 --cidr 0.0.0.0/0
```

More information on how to create your security groups is available here: https://docs.aws.amazon.com/cli/latest/userguide/cli-services-ec2-sg.html#creating-a-security-group.
Finally, set the additional_sg parameter in the [vpc] section of your ParallelCluster config file.
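
For instance, the relevant part of a ParallelCluster 2.x config might look like the sketch below (the section name and subnet ID are illustrative; the security group ID is the hypothetical one created above):

```
[vpc public]
vpc_id = vpc-1a2b3c4d
master_subnet_id = subnet-0123456789abcdef0
additional_sg = sg-12345
```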

After your cluster is created, you can open a web browser and connect to https://your_public_ip; a landing page will be presented with links to the Prometheus database service and the Grafana dashboards.

Note: because the compute nodes continuously push metrics to the master node, network traffic is higher than usual. If you expect to run a large-scale cluster (hundreds of instances), we recommend using a slightly bigger instance type for your master node than you had planned.

## Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.

## License

This library is licensed under the MIT-0 License. See the LICENSE file.

@@ -0,0 +1,121 @@
#!/bin/bash

#source the AWS ParallelCluster profile
. /etc/parallelcluster/cfnconfig

export AWS_DEFAULT_REGION=$cfn_region
aws_region_long_name=$(python /usr/local/bin/aws-region.py $cfn_region)

masterInstanceType=$(ec2-metadata -t | awk '{print $2}')
masterInstanceId=$(ec2-metadata -i | awk '{print $2}')
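
# derive the bucket name from the post_install URL (s3://bucket/key -> bucket),
# then compute the bucket's total size in GB from the summed object sizes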
s3_bucket=$(echo $cfn_postinstall | sed "s/s3:\/\///g;s/\/.*//")
s3_size_gb=$(echo "$(aws s3api list-objects --bucket $s3_bucket --output json --query "[sum(Contents[].Size)]" | sed -n 2p | tr -d ' ') / 1024 / 1024 / 1024" | bc)

#select the S3 pricing tier that matches the bucket size
if [[ $s3_size_gb -le 51200 ]]; then
    s3_range=51200
elif [[ $s3_size_gb -le 512000 ]]; then
    s3_range=512000
else
    s3_range="Inf"
fi

####################### S3 #########################

s3_cost_gb_month=$(aws --region us-east-1 pricing get-products \
    --service-code AmazonS3 \
    --filters 'Type=TERM_MATCH,Field=location,Value='"${aws_region_long_name}" \
    'Type=TERM_MATCH,Field=storageClass,Value=General Purpose' \
    --query 'PriceList[0]' --output text \
    | jq -r --arg endRange $s3_range '.terms.OnDemand | to_entries[] | .value.priceDimensions | to_entries[].value | select(.endRange==$endRange).pricePerUnit.USD')
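
# convert the monthly per-GB price into an hourly cost for the whole bucket
# (~720 hours per month) and push it to the Pushgateway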
s3=$(echo "scale=2; $s3_cost_gb_month * $s3_size_gb / 720" | bc)
echo "s3_cost $s3" | curl --data-binary @- http://127.0.0.1:9091/metrics/job/cost

####################### Master #########################
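# query the On-Demand hourly price for the master node's instance type
# (the extra filters narrow the price list to a single matching entry)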
master_node_h_price=$(aws pricing get-products \
    --region us-east-1 \
    --service-code AmazonEC2 \
    --filters 'Type=TERM_MATCH,Field=instanceType,Value='$masterInstanceType \
    'Type=TERM_MATCH,Field=location,Value='"${aws_region_long_name}" \
    'Type=TERM_MATCH,Field=preInstalledSw,Value=NA' \
    'Type=TERM_MATCH,Field=operatingSystem,Value=Linux' \
    'Type=TERM_MATCH,Field=tenancy,Value=Shared' \
    'Type=TERM_MATCH,Field=capacitystatus,Value=UnusedCapacityReservation' \
    --output text \
    --query 'PriceList' \
    | jq -r '.terms.OnDemand | to_entries[] | .value.priceDimensions | to_entries[] | .value.pricePerUnit.USD')

echo "master_node_cost $master_node_h_price" | curl --data-binary @- http://127.0.0.1:9091/metrics/job/cost

####################### FSX #########################
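# FSXOptions is a comma-separated CloudFormation parameter; per the fields read
# below: field 3 = storage capacity (GB), field 9 = deployment type, field 10 = throughput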
fsx_size_gb=$(aws cloudformation describe-stacks --stack-name $stack_name --region $cfn_region \
    | jq -r '.Stacks[0].Parameters | map(select(.ParameterKey == "FSXOptions"))[0].ParameterValue' \
    | awk -F "," '{print $3}')

fsx_type=$(aws cloudformation describe-stacks --stack-name $stack_name --region $cfn_region \
    | jq -r '.Stacks[0].Parameters | map(select(.ParameterKey == "FSXOptions"))[0].ParameterValue' \
    | awk -F "," '{print $9}')

fsx_throughput=$(aws cloudformation describe-stacks --stack-name $stack_name --region $cfn_region \
    | jq -r '.Stacks[0].Parameters | map(select(.ParameterKey == "FSXOptions"))[0].ParameterValue' \
    | awk -F "," '{print $10}')

if [[ $fsx_type = "SCRATCH_2" ]] || [[ $fsx_type = "SCRATCH_1" ]]; then
    fsx_cost_gb_month=$(aws pricing get-products \
        --region us-east-1 \
        --service-code AmazonFSx \
        --filters 'Type=TERM_MATCH,Field=location,Value='"${aws_region_long_name}" \
        'Type=TERM_MATCH,Field=fileSystemType,Value=Lustre' \
        'Type=TERM_MATCH,Field=throughputCapacity,Value=N/A' \
        --output text \
        --query 'PriceList' \
        | jq -r '.terms.OnDemand | to_entries[] | .value.priceDimensions | to_entries[] | .value.pricePerUnit.USD')

elif [[ $fsx_type = "PERSISTENT_1" ]]; then
    fsx_cost_gb_month=$(aws pricing get-products \
        --region us-east-1 \
        --service-code AmazonFSx \
        --filters 'Type=TERM_MATCH,Field=location,Value='"${aws_region_long_name}" \
        'Type=TERM_MATCH,Field=fileSystemType,Value=Lustre' \
        'Type=TERM_MATCH,Field=throughputCapacity,Value='$fsx_throughput \
        --output text \
        --query 'PriceList' \
        | jq -r '.terms.OnDemand | to_entries[] | .value.priceDimensions | to_entries[] | .value.pricePerUnit.USD')

else
    fsx_cost_gb_month=0
fi

fsx=$(echo "scale=2; $fsx_cost_gb_month * $fsx_size_gb / 720" | bc)
echo "fsx_cost $fsx" | curl --data-binary @- http://127.0.0.1:9091/metrics/job/cost

#TODO: parametrize
ebs_volume_total_cost=0
ebs_volume_ids=$(aws ec2 describe-instances --instance-ids $masterInstanceId \
    | jq -r '.Reservations | to_entries[].value | .Instances | to_entries[].value | .BlockDeviceMappings | to_entries[].value | .Ebs.VolumeId')
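
# iterate over every EBS volume attached to the master node and accumulate its hourly cost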
for ebs_volume_id in $ebs_volume_ids
do
    ebs_volume_type=$(aws ec2 describe-volumes --volume-ids $ebs_volume_id | jq -r '.Volumes | to_entries[].value.VolumeType')
    #ebs_volume_iops=$(aws ec2 describe-volumes --volume-ids $ebs_volume_id | jq -r '.Volumes | to_entries[].value.Iops')
    ebs_volume_size=$(aws ec2 describe-volumes --volume-ids $ebs_volume_id | jq -r '.Volumes | to_entries[].value.Size')

    ebs_cost_gb_month=$(aws --region us-east-1 pricing get-products \
        --service-code AmazonEC2 \
        --query 'PriceList' \
        --output text \
        --filters 'Type=TERM_MATCH,Field=location,Value='"${aws_region_long_name}" \
        'Type=TERM_MATCH,Field=productFamily,Value=Storage' \
        'Type=TERM_MATCH,Field=volumeApiName,Value='$ebs_volume_type \
        | jq -r '.terms.OnDemand | to_entries[] | .value.priceDimensions | to_entries[] | .value.pricePerUnit.USD')

    ebs_volume_cost=$(echo "scale=2; $ebs_cost_gb_month * $ebs_volume_size / 720" | bc)
    ebs_volume_total_cost=$(echo "scale=2; $ebs_volume_total_cost + $ebs_volume_cost" | bc)
done

echo "ebs_master_cost $ebs_volume_total_cost" | curl --data-binary @- http://127.0.0.1:9091/metrics/job/cost

@@ -0,0 +1,47 @@
#!/bin/bash

#source the AWS ParallelCluster profile
. /etc/parallelcluster/cfnconfig

export AWS_DEFAULT_REGION=$cfn_region
aws_region_long_name=$(python /usr/local/bin/aws-region.py $cfn_region)
computeInstanceType=$(aws cloudformation describe-stacks --stack-name $stack_name --region $cfn_region | jq -r '.Stacks[0].Parameters | map(select(.ParameterKey == "ComputeInstanceType"))[0].ParameterValue')

compute_node_h_price=$(aws pricing get-products \
    --region us-east-1 \
    --service-code AmazonEC2 \
    --filters 'Type=TERM_MATCH,Field=instanceType,Value='$computeInstanceType \
    'Type=TERM_MATCH,Field=location,Value='"${aws_region_long_name}" \
    'Type=TERM_MATCH,Field=preInstalledSw,Value=NA' \
    'Type=TERM_MATCH,Field=operatingSystem,Value=Linux' \
    'Type=TERM_MATCH,Field=tenancy,Value=Shared' \
    'Type=TERM_MATCH,Field=capacitystatus,Value=UnusedCapacityReservation' \
    --output text \
    --query 'PriceList' \
    | jq -r '.terms.OnDemand | to_entries[] | .value.priceDimensions | to_entries[] | .value.pricePerUnit.USD')

#ebs_volume_id=$(aws ec2 describe-instances --instance-ids $computeInstanceId \
#    | jq -r '.Reservations | to_entries[].value | .Instances | to_entries[].value | .BlockDeviceMappings | to_entries[].value | .Ebs.VolumeId' \
#    | tail -1) #remove this tail

#ebs_volume_type=$(aws ec2 describe-volumes --volume-ids $ebs_volume_id | jq -r '.Volumes | to_entries[].value.VolumeType')
#ebs_volume_iops=$(aws ec2 describe-volumes --volume-ids $ebs_volume_id | jq -r '.Volumes | to_entries[].value.Iops')
ebs_volume_size=$(aws cloudformation describe-stacks --stack-name $stack_name --region $cfn_region | jq -r '.Stacks[0].Parameters | map(select(.ParameterKey == "ComputeRootVolumeSize"))[0].ParameterValue')

#check if volumeApiName can change in the future; for now "gp2" is hardcoded
ebs_cost_gb_month=$(aws --region us-east-1 pricing get-products \
    --service-code AmazonEC2 \
    --query 'PriceList' \
    --output text \
    --filters 'Type=TERM_MATCH,Field=location,Value='"${aws_region_long_name}" \
    'Type=TERM_MATCH,Field=productFamily,Value=Storage' \
    'Type=TERM_MATCH,Field=volumeApiName,Value=gp2' \
    | jq -r '.terms.OnDemand | to_entries[] | .value.priceDimensions | to_entries[] | .value.pricePerUnit.USD')
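
# total hourly cost scales with the node count reported by sinfo:
# per-node EBS root volume cost plus the per-node On-Demand price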
total_num_compute_nodes=$(/opt/slurm/bin/sinfo -O "nodes" --noheader)
compute_ebs_volume_cost=$(echo "scale=2; $ebs_cost_gb_month * $total_num_compute_nodes * $ebs_volume_size / 720" | bc)
compute_nodes_cost=$(echo "scale=2; $total_num_compute_nodes * $compute_node_h_price" | bc)

echo "ebs_compute_cost $compute_ebs_volume_cost" | curl --data-binary @- http://127.0.0.1:9091/metrics/job/cost
echo "compute_nodes_cost $compute_nodes_cost" | curl --data-binary @- http://127.0.0.1:9091/metrics/job/cost

@@ -0,0 +1,17 @@
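# aws-region.py: maps an AWS region code (e.g. "us-east-2") to its long display
# name (e.g. "US East (Ohio)"), as expected by the Pricing API "location" filter,
# using the endpoints.json file bundled with botocore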
import json
import sys

from pkg_resources import resource_filename

region = str(sys.argv[1])

name = None
endpoint_file = resource_filename('botocore', 'data/endpoints.json')
with open(endpoint_file, 'r') as ep_file:
    data = json.load(ep_file)
    for partition in data['partitions']:
        if region in partition['regions']:
            name = partition['regions'][region]['description']
            break

print(name)

@@ -0,0 +1,12 @@
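# compute-node compose file: runs only node_exporter, which exposes host metrics
# (default port 9100) for the Prometheus server on the master node to scrape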
version: '3.8'
services:
  prometheus-node-exporter:
    container_name: node-exporter
    network_mode: host
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
    image: quay.io/prometheus/node-exporter
    command:
      - '--path.rootfs=/host'

@@ -0,0 +1,67 @@
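# master-node compose file: runs the full monitoring stack - Pushgateway (9091),
# Prometheus (9090), Grafana (3000), node_exporter, and Nginx serving HTTPS (443)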
version: '3.8'
services:
  pushgateway:
    container_name: pushgateway
    network_mode: host
    pid: host
    restart: unless-stopped
    ports:
      - '9091:9091'
    image: prom/pushgateway
  prometheus:
    container_name: prometheus
    network_mode: host
    pid: host
    restart: unless-stopped
    ports:
      - '9090:9090'
    volumes:
      - '/home/$cfn_cluster_user/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml'
      - 'prometheus-data:/prometheus'
    image: prom/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
      - '--web.external-url=/prometheus/'
      - '--web.route-prefix=/'
  grafana:
    container_name: grafana
    network_mode: host
    pid: host
    restart: unless-stopped
    ports:
      - '3000:3000'
    environment:
      - 'GF_SECURITY_ADMIN_PASSWORD=Grafana4PC!'
      - 'GF_SERVER_ROOT_URL=http://%(domain)s/grafana/'
    volumes:
      - '/home/$cfn_cluster_user/grafana:/etc/grafana/provisioning'
      - 'grafana-data:/var/lib/grafana'
    image: grafana/grafana
  prometheus-node-exporter:
    container_name: node-exporter
    network_mode: host
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
    image: quay.io/prometheus/node-exporter
    command:
      - '--path.rootfs=/host'
  nginx:
    container_name: nginx
    network_mode: host
    pid: host
    ports:
      - '443:443'
    restart: unless-stopped
    volumes:
      - '/home/$cfn_cluster_user/nginx/conf.d:/etc/nginx/conf.d/'
      - '/home/$cfn_cluster_user/nginx/ssl:/etc/ssl/'
      - '/home/$cfn_cluster_user/www:/usr/share/nginx/html'
    image: nginx
volumes:
  prometheus-data:
  grafana-data: