Installation
Tested on a CentOS 7 VM with ~4 CPU and 8 GB RAM
e.g. a Linode with 4 CPU, 160 GB storage, 8 GB RAM
[boot VM]
yum install docker
#enable start on boot
systemctl enable docker
#to start docker
systemctl start docker
#this may take a few minutes
docker pull floodspark/elk:v26
docker network create -d bridge backend
#needed by Elasticsearch
nano /etc/sysctl.conf
#add the following line
vm.max_map_count=262144
[save change]
#refresh with the new configuration
sysctl -p
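To confirm the new limit actually took effect, you can read the value back from the kernel (a quick check, assuming a standard Linux host):

```shell
#confirm the kernel picked up the new value (should print 262144 after the sysctl change above)
cat /proc/sys/vm/max_map_count
```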
#ELK takes a few minutes to boot; if the p0f and/or filebeat containers start before the ELK ports they need are reachable, they may fail
docker run -e ES_CONNECT_RETRY=300 -d --name elk --network=backend floodspark/elk:v26
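Rather than relying on ES_CONNECT_RETRY alone, you can explicitly wait for a port to come up before starting the downstream containers. A minimal sketch (the wait_for_port helper is not part of this repo, and the elk hostname/port only resolve from a container on the backend network, so adapt the host/port as needed; uses bash's /dev/tcp):

```shell
#poll a TCP port until it accepts connections or the retry budget runs out
wait_for_port() {
  host=$1; port=$2; retries=$3
  i=0
  while [ "$i" -lt "$retries" ]; do
    #bash's /dev/tcp pseudo-device attempts a TCP connect
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

#example: wait up to 5 minutes for Elasticsearch before starting filebeat
#wait_for_port elk 9200 300 && docker start filebeat
```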
docker pull floodspark/p0f:v2
cd Floodspark/docker/p0f/
#replace the -i flag value with your listening interface's name
docker run --network=host --name p0f -u root -v $(pwd):/tmp/p0f:z -d floodspark/p0f:v2 /opt/p0f/p0f -i [interface here] -o /tmp/p0f/p0f.log
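If you are unsure of the interface name to pass to -i, you can list the host's network interfaces first (a sketch; either command works on most Linux systems):

```shell
#list network interface names known to the kernel
ls /sys/class/net
#or, if the ip utility is installed, with more detail:
command -v ip >/dev/null && ip -o link show || true
```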
docker pull floodspark/filebeat:v6
docker run -d --name=filebeat --network=backend -v [absolute_path_to_p0f_log_dir_from_previous_step]:/tmp/p0f/ floodspark/filebeat:v6 filebeat
docker pull floodspark/resty:v37
#running below may fail if ELK (upstream) container is not running or fully booted
docker run -d --name resty --network=backend -p 80:80 -p 443:443 floodspark/resty:v37
docker pull floodspark/redis:v2
docker run -d --name redis --network=backend floodspark/redis:v2
#ELK passwords setup. Remember these passwords!
docker exec -it elk /bin/bash
/opt/elasticsearch/bin/elasticsearch-setup-passwords interactive
#Next we configure Logstash to use basic authentication
#Log into the Kibana web UI as the elastic user, at your VM's IP address over the HTTP or HTTPS port
- Use the Management > Roles UI in Kibana to create a logstash_writer role.
- For Cluster privileges, add manage_index_templates and monitor.
- For the Indices value, use "*". For its Privileges value, add write, create, delete, and create_index. Save.
- Create a logstash_internal user and assign it the logstash_writer role. You can create users from the Management > Users UI in Kibana.
- Configure Logstash to authenticate as the logstash_internal user you just created. Within the ELK docker container (get a terminal with a command such as "docker exec -it elk /bin/bash"), edit /etc/logstash/conf.d/30-output.conf and replace the password => "qE*7Pj43Or" value with the password you set.
#More information on these steps, if needed: https://www.elastic.co/guide/en/logstash/current/ls-security.html#ls-http-auth-basic
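For reference, the relevant part of 30-output.conf should end up looking something like this (a sketch only; the hosts value is an assumption and your file's other settings should be left as they are):

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user => "logstash_internal"
    password => "[the password you set]"
  }
}
```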
create or move into a directory you will use for the git repo
git clone https://github.com/GSMcNamara/Floodspark.git
modify both references to "floodspark.com" in Floodspark/docker/tanner/snare/Dockerfile to the domain you wish to emulate
cd ..
docker-compose build
screen
docker-compose up
#detach from the screen session: press Ctrl-A then Ctrl-D
#(reattach later with "screen -r")
#next we change resty/nginx's routing mode from block to honeypot
docker exec -it resty /bin/bash
nano /etc/nginx/conf.d/default.conf
change 'set $mode "block";' to 'set $mode "honeypot";'
save changes
nginx -t
#if above test passes, then:
nginx -s reload
Now the snare server may serve a 500 error because Tanner is failing. In the tanner Docker container's /tmp/tanner/tanner.err file you may see an error message such as "PermissionError: [Errno 13] Permission denied: '/var/log/tanner/tanner_report.json'"
If so, you may have to apply this (admittedly crude) workaround:
setenforce 0
docker exec -it -u 0 tanner /bin/sh
touch /var/log/tanner/tanner_report.json
chmod 777 /var/log/tanner/tanner_report.json
#leave the container
exit
Hopefully this is soon fixed: https://github.com/dtag-dev-sec/tpotce/issues/517
go to http://[your ip address]/app/infra#/logs?_g=()
click Change source configuration
set Log indices to *
Click Update Source
Go to /app/kibana#/management/kibana/objects?_g=()
Import the JSON file from https://github.com/GSMcNamara/Floodspark/blob/master/demo/export.ndjson or https://raw.githubusercontent.com/GSMcNamara/Floodspark/master/demo/export.ndjson. You may need to remove the ".txt" extension from the downloaded file's name.
Click Done
To create a dashboard-only user with view-only permission, e.g. for a Floodspark demo:
Go to Management > Security > Roles. Create a role with name "demo_dashboard_read". Under Index privileges add the underlying indices that you want this user to be able to view through the dashboard. Add read to Privileges.
Go to Management > Security > Users > Create user and create the username "dashboard_only", set a password you will remember, and add "kibana_dashboard_only_user" and "demo_dashboard_read" to Roles.
Use this account to demo your dashboard(s)
#You can have resty/nginx authenticate automatically and load the dashboard URL directly, so it is all that user sees. HOWEVER, the downside is that you cannot log into Kibana using any other account (e.g. elastic) when accessed through the modified resty/nginx instance.
docker exec -it resty /bin/bash
modify /etc/nginx/conf.d/default.conf #(nano is installed)
within the "location /" block (e.g. under "proxy_set_header Host $http_host;"), add "proxy_set_header Authorization "Basic ZGFzaGJvYXJkX29ubHk6cUUqN1BqNDNPcg==";", where "ZGFzaGJvYXJkX29ubHk6cUUqN1BqNDNPcg==" is the base64 encoding of the dashboard_only:password combination. You must replace it with the base64 encoding of dashboard_only:[the password you set]
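You can generate the base64 value with the standard base64 utility (shown with a placeholder password; the -n flag keeps a trailing newline out of the encoded value):

```shell
#encode user:password for the Authorization header (placeholder password shown)
echo -n 'dashboard_only:MySecretPass' | base64
```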
#to block people from changing the password of the dashboard_only user, update the following nginx line with "dashboard_only" instead of "floodspark":
location /api/security/v1/users/floodspark/password {
#TODO: to have kibana load a specific dashboard directly
save changes.
#run the following to test nginx config before restarting nginx
nginx -t
#restart nginx if safe
nginx -s reload
#change the following line in /etc/nginx/conf.d/default.conf of the resty docker container:
proxy_pass http://elk:5601/;
#update the domain value in this line:
proxy_set_header Host [your domain here];
#reload the config:
nginx -s reload
#This should not differ much from a standard NGINX installation
#Installation takes place in the resty docker container
#Steps might be similar to:
docker cp private.key resty:/etc/ssl/private/
docker cp ssl-bundle.crt resty:/etc/ssl/certs/
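Before installing the files, it can help to confirm that the private key actually matches the certificate. A minimal sketch (the certs_match helper is not part of this repo; it compares the public key embedded in the certificate with the public half of the private key):

```shell
#identical public-key digests mean the certificate and key are a matching pair
certs_match() {
  cert_pub=$(openssl x509 -in "$1" -pubkey -noout | openssl sha256)
  key_pub=$(openssl pkey -in "$2" -pubout 2>/dev/null | openssl sha256)
  [ "$cert_pub" = "$key_pub" ]
}

#example: certs_match ssl-bundle.crt private.key && echo "certificate and key match"
```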
docker exec -it resty /bin/bash
nano /etc/nginx/conf.d/default.conf
#add your certificate paths and comment out the existing self-signed cert lines, resulting in something like:
ssl_certificate /etc/ssl/certs/ssl-bundle.crt;
ssl_certificate_key /etc/ssl/private/private.key;
# ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
# ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
# ssl_dhparam /etc/ssl/certs/dhparam.pem;
save.
nginx -t
nginx -s reload
#pause Redis to avoid blacklisting yourself
docker stop redis
#use wget, curl, Chrome Incognito mode, the Tor Browser, etc. to generate data
#resume redis when you're ready to start blocking people again
docker start redis
#check status of all containers
docker ps -a
#debug a container
docker exec -it [container name] /bin/bash
#get command used to start container
docker inspect -f "{{.Name}} {{.Config.Cmd}}" $(docker ps -a -q)
#list all images
docker images
#run the redis-cli:
docker run -it --network backend --rm floodspark/redis:v2 redis-cli -h redis
#list all existing ip addresses in blacklist:
KEYS *
#delete all ip addresses from blacklist:
FLUSHALL
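Once inside redis-cli you can also inspect or remove individual entries (a sketch; the key layout of bare IP addresses as keys is an assumption based on the KEYS output above, and 203.0.113.7 is a placeholder address):

```
#show the remaining time-to-live for one blacklisted address
TTL 203.0.113.7
#remove a single address from the blacklist
DEL 203.0.113.7
```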