To collect the data for this analysis, the following steps need to be performed. Make sure:

- the `burl` script is available
- the `curl` command is available
- the `getdap4` command is available
- the `~/.netrc` file is present and contains UAT credentials (see the example entry below)
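
A minimal sketch of a `~/.netrc` entry, assuming the UAT credentials are for an Earthdata Login UAT host (the hostname below is an assumption; substitute whatever host the UAT deployment authenticates against, and make the file private with `chmod 600 ~/.netrc`):

```
machine uat.urs.earthdata.nasa.gov
    login YOUR_UAT_USERNAME
    password YOUR_UAT_PASSWORD
```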
Run the script `OA-performance.sh` to collect the data. You can save the generated files to new directories.
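
A minimal sketch of one collection pass (the directory name `run_3` and the assumption that `OA-performance.sh` writes its CSV output into the current directory are illustrative; adapt to how the script actually behaves):

```sh
# Run one collection pass, then park the generated CSV files in their own directory.
./OA-performance.sh
mkdir -p run_3
mv *.csv run_3/
```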
These directories hold CSV files that contain the total time, the time to connect to the server, and the time to the first byte of data. The timing information was collected by cURL. For the data in `run_1` the client was not using the EBNET VPN, while for `run_2` the client was using the VPN.
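
The three timings correspond to cURL's `--write-out` variables. A hedged example of how such a measurement could be taken (the URL and output format are illustrative, not necessarily what `OA-performance.sh` uses; `-n` tells curl to read credentials from `~/.netrc`):

```sh
# Print total time, connect time, and time-to-first-byte as one CSV row.
curl -n -s -o /dev/null \
     -w '%{time_total},%{time_connect},%{time_starttransfer}\n' \
     'https://opendap.uat.example.org/some/granule.dap'
```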
See the README.md file in that directory. To get the bes.log files from our UAT deployment, use Kion to access the hosts and use the commands in "Driving around on UAT's EC2 instances" (below) to copy the raw logs here. Then see the raw-bes-logs README for more information.
AWS Console --> Systems Manager --> Session Manager.
To get access to a specific host, choose 'Start Session'.
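
If you prefer the CLI over the console, the same session can be opened with the AWS CLI plus the Session Manager plugin (the instance ID below is a placeholder):

```sh
# Requires the AWS CLI and the session-manager-plugin to be installed locally.
aws ssm start-session --target i-0123456789abcdef0
```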
Once there, some useful commands:

- `sudo -s` gets root access
- `docker ps -a` shows the container IDs
- `docker exec -it <container> bash` to access the container with a Bash shell
- `docker cp <container>:/var/log/bes/bes.log .` when run on the EC2 host (not in the container) will copy the container's bes.log to the EC2 host's file system
- `aws s3 cp bes.log s3://opendap.scratch/UAT-logs/bes-2d47.log` shows how to get files off of an EC2 instance in UAT, et cetera; use environment variables for the credentials (see the next section, and the consolidated sketch after this list)
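
Put together, one pass over a single host might look like the sketch below (the container name and the log-file suffix are placeholders; the S3 destination is the one used elsewhere in this README):

```sh
# On the EC2 host, as root:
sudo -s
docker ps -a                                   # find the BES container's ID or name
docker cp <container>:/var/log/bes/bes.log .   # pull the log out of the container

# With the AWS credentials exported (see the next section):
aws s3 cp bes.log s3://opendap.scratch/UAT-logs/bes-<host-suffix>.log
```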
Set the AWS credentials using:

- `[root@ip-10-9-0-37 ~]# export AWS_ACCESS_KEY_ID=******************`
- `[root@ip-10-9-0-37 ~]# export AWS_SECRET_ACCESS_KEY=****************************************`
- `[root@ip-10-9-0-37 ~]# export AWS_DEFAULT_REGION=us-west-2`

Then copy the file to S3:

- `[root@ip-10-9-0-37 ~]# aws s3 cp bes.log s3://opendap.scratch/UAT-logs/bes-in-cloud-72bf.log`
Back on the host with this OA-analysis repo, get the file using:

- `aws s3 cp s3://opendap.scratch/UAT-logs/bes-in-cloud-72bf.log .`
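
To see which logs have already been uploaded before pulling one down, listing the prefix works too (assuming your local credentials can read the opendap.scratch bucket):

```sh
aws s3 ls s3://opendap.scratch/UAT-logs/
```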