Table of Contents
- 1 Prepare a build environment
- 2 Deploy the Postgresql database
- 3 Create a PKI
- 4 Deploy the Rabbitmq relay
- 5 Scheduler Configuration
- 6 API configuration
- 7 Build the clients and create an investigator
- 8 Agent Configuration
- 9 Appendix A: Manual RabbitMQ Configuration
- 10 Appendix B: Scheduler configuration reference
- 11 Appendix C: Advanced agent configuration
This document describes the steps to build and configure a MIG platform. MIG has 6 major components. The Postgresql database and RabbitMQ relay are external dependencies, and while this document shows one way of deploying them, you are free to use your own method. All other components (scheduler, api, agents and clients) require specific compilation and configuration steps that are explained in this document.
Due to the fast changing pace of Go, MIG and its third party packages, we do not currently provide binary packages. You will have to compile the components yourself, which is explained below.
A complete environment should be configured in the following order:
- retrieve the source and prepare your build environment
- deploy the postgresql database
- create a PKI
- deploy the rabbitmq relay
- build, configure and deploy the scheduler
- build, configure and deploy the api
- build the clients and create an investigator
- configure and deploy agents
Install Go 1.5 from your package manager, via gvm or from source.
You must use Go 1.5 because MIG uses vendored dependencies, which aren't supported by prior versions.
$ go version
go version go1.5 linux/amd64
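The Makefile exports GO15VENDOREXPERIMENT=1 for you (it is visible in the build output later in this document). If you invoke go build directly instead of going through the Makefile, export the variable yourself first:
$ export GO15VENDOREXPERIMENT=1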
As with any Go setup, make sure your GOPATH is exported, for example by setting it to $HOME/go
$ export GOPATH="$HOME/go"
$ mkdir $GOPATH
Then retrieve MIG's source code using go get:
$ go get mig.ninja/mig
Go get will place MIG under $GOPATH/src/mig.ninja/mig. Change directory to this path and build the components. Note that, if you're on a Debian or Ubuntu box, you can run make deb-server directly which will build the scheduler, api and workers into a single DEB package. Otherwise, use the following make commands:
$ make mig-scheduler
$ make mig-api
$ make worker-agent-intel
$ make worker-compliance-item
Or simply run make, which builds everything and runs the tests as well.
Note: running make will build everything including the mig-console which requires readline to be installed (readline-devel on rhel/fedora or libreadline-dev on debian/ubuntu).
$ make
Install PostgreSQL 9.3+ on a server and copy the scripts database/createlocaldb.sh and database/schema.sql to it. Make sure you have sudo access on the server, then run the script (or run the commands from createlocaldb.sh manually).
$ ./createlocaldb.sh
Created user migadmin with password 'l1bZowe8fy1'
Created user migapi with password 'p4oid18'
Created user migscheduler with password '48Cm12Taodf928wqojdlsa1981'
MIG Database created successfully.
This creates a local database called mig with the relevant admin, api and scheduler users. Make sure you save the passwords generated by the script in a safe location, you'll need them later.
To verify the DB, use the psql command line:
$ sudo su - postgres
postgres@jaffatower:~$ psql
psql (9.4.4)
Type "help" for help.
postgres=# \c mig
You are now connected to database "mig" as user "postgres".
mig=# \d
List of relations
Schema | Name | Type | Owner
--------+----------------------+----------+----------
public | actions | table | migadmin
public | agents | table | migadmin
public | agents_stats | table | postgres
public | agtmodreq | table | migadmin
public | commands | table | migadmin
public | invagtmodperm | table | migadmin
public | investigators | table | migadmin
public | investigators_id_seq | sequence | postgres
public | modules | table | migadmin
public | signatures | table | migadmin
(10 rows)
If you are using a remote database, create a database and an admin user, then modify the variables at the top of database/createremotedb.sh and run it. The script will create the DB schema and output the credentials for the users migscheduler and migapi. These credentials need to be referenced in the MIG scheduler and API configuration files.
Edit the variables in the script createremotedb.sh:
$ vim createremotedb.sh
PGDATABASE='mig'
PGUSER='migadmin'
PGPASS='MYDATABASEPASSWORD'
PGHOST='192.168.0.1'
PGPORT=5432
Then run it against your database server. Make sure that the PostgreSQL command line client psql is installed locally.
$ which psql
/usr/bin/psql
$ bash createremotedb.sh
[... bunch of sql queries ...]
created users:
migscheduler 4NvQFdwdQ8UOU4ekEOgWDWi3gzG5cg2X
migapi xcJyJhLg1cldIp7eXcxv0U-UqV80tMb-
Skip this step if you want to reuse an existing PKI. MIG needs a server certificate for RabbitMQ, and client certificates for agents, schedulers and workers. The PKI is only used to protect connections to the public AMQP endpoint.
Use the script tools/create_mig_ca.sh to generate a new CA and signed certificates for each component.
Create a new directory that will hold the CA, copy the script into it, and run it. The script will prompt for one piece of information: the public DNS name of the rabbitmq relay. It's important to set this to the correct value so that AMQP clients can validate the rabbitmq certificate correctly.
$ mkdir migca
$ cd migca
$ cp $GOPATH/src/mig.ninja/mig/tools/create_mig_ca.sh .
$ bash create_mig_ca.sh
[...]
enter the public dns name of the rabbitmq server agents will connect to> mymigrelay.example.net
[...]
$ ls -l
total 76
-rw-r--r-- 1 julien julien 5163 Sep 9 00:06 agent.crt
-rw-r--r-- 1 julien julien 1033 Sep 9 00:06 agent.csr
-rw-r--r-- 1 julien julien 1704 Sep 9 00:06 agent.key
drwxr-xr-x 3 julien julien 4096 Sep 9 00:06 ca
-rw-r--r-- 1 julien julien 3608 Sep 9 00:06 create_mig_ca.sh
-rw-r--r-- 1 julien julien 2292 Sep 9 00:06 openssl.cnf
-rw-r--r-- 1 julien julien 5161 Sep 9 00:06 rabbitmq.crt
-rw-r--r-- 1 julien julien 1029 Sep 9 00:06 rabbitmq.csr
-rw-r--r-- 1 julien julien 1704 Sep 9 00:06 rabbitmq.key
-rw-r--r-- 1 julien julien 5183 Sep 9 00:06 scheduler.crt
-rw-r--r-- 1 julien julien 1045 Sep 9 00:06 scheduler.csr
-rw-r--r-- 1 julien julien 1704 Sep 9 00:06 scheduler.key
-rw-r--r-- 1 julien julien 5169 Sep 9 00:06 worker.crt
-rw-r--r-- 1 julien julien 1033 Sep 9 00:06 worker.csr
-rw-r--r-- 1 julien julien 1704 Sep 9 00:06 worker.key
These certificates can now be used in each component.
Install the RabbitMQ server from your distribution's packaging system. If your distribution does not provide a RabbitMQ package, install erlang from yum or apt, and then install RabbitMQ using the packages from rabbitmq.com
The script in tools/create_rabbitmq_config.sh can be run against a local instance of rabbitmq to configure the necessary users and permissions.
$ bash create_rabbitmq_config.sh
[ ... ]
[ ok ] Restarting message broker: rabbitmq-server.
rabbitmq configured with the following users:
admin 5IRociqhefiehekjqqhfeq
scheduler MM8972olkjwqashrieygrh
agent p1938oanvdjknxcbveufif
worker 80912lsdkjj718tdfxmlqx
copy ca.crt and rabbitmq.{crt,key} into /etc/rabbitmq/
then run $ service rabbitmq-server restart
Save the credentials in a safe location; we will need them later.
Copy the ca.crt, rabbitmq.key and rabbitmq.crt we generated in the PKI step into /etc/rabbitmq and restart the service. You should then see the beam process listening on port 5671.
$ netstat -taupen|grep 5671
tcp6 0 0 :::5671 :::* LISTEN 110 658831 11467/beam.smp
If you care about the details of RabbitMQ's configuration, read the manual configuration section in the appendix at the end of this document.
If you deploy the scheduler using the package built by the deb-server target, a template configuration will be placed in /etc/mig/scheduler.cfg. Otherwise, you can find one in conf/scheduler.cfg.inc.
If you use deb-server, simply dpkg -i the package and the scheduler will be installed into /opt/mig/bin/mig-scheduler, with its configuration kept in /etc/mig.
If you build your own binary, obtain one by running make mig-scheduler.
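For example, on a Debian or Ubuntu server, the packaged route looks like the following. The package file name below is illustrative; it depends on the build revision.
$ make deb-server
$ sudo dpkg -i mig-server_20150909+556e9c0.dev_amd64.deb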
Start by copying the ca.crt, scheduler.key and scheduler.crt we generated in the PKI into the /etc/mig/ folder.
Then edit the configuration file to replace the DB and RabbitMQ parameters with the ones we obtained in the previous steps. The default values provided for both Postgres and RabbitMQ are deliberately wrong and need to be replaced, otherwise the scheduler will fail to connect. Below is an example configuration that would work with the setup we have prepared.
[agent]
    ; timeout controls the inactivity period after which
    ; agents are marked offline
    timeout = "60m"

    ; heartbeatfreq maps to the agent configuration and helps
    ; the scheduler detect duplicate agents, and some other things
    heartbeatfreq = "5m"

    ; whitelist contains a list of agent queues that are allowed
    ; to send heartbeats and receive commands
    whitelist = "/var/cache/mig/agents_whitelist.txt"

    ; detect endpoints that are running multiple agents
    detectmultiagents = true

    ; issue kill orders to duplicate agents running on the same endpoint
    killdupagents = true

; the collector continuously pulls
; pending messages from the spool
[collector]
    ; frequency at which the collector runs,
    ; default is to run every second
    freq = "1s"

; the periodic runs less often than
; the collector and does cleanup and DB updates
[periodic]
    ; frequency at which the periodic jobs run
    freq = "87s"

    ; delete finished actions, commands and invalids after
    ; this period has passed
    deleteafter = "360h"

    ; run a rabbitmq unused queues cleanup job at this frequency
    ; this is DB & amqp intensive so don't run it too often
    queuescleanupfreq = "24h"

[directories]
    spool = "/var/cache/mig/"
    tmp = "/var/tmp/"

[postgres]
    host = "192.168.1.240"
    port = 5432
    dbname = "mig"
    user = "migscheduler"
    password = "4NvQFdwdQ8UOU4ekEOgWDWi3gzG5cg2X"
    sslmode = "disable"
    maxconn = 10

[mq]
    host = "rabbitmq.mig.example.net"
    port = 5671
    user = "scheduler"
    pass = "MM8972olkjwqashrieygrh"
    vhost = "mig"

    ; TLS options
    usetls  = true
    cacert  = "/etc/mig/ca.crt"
    tlscert = "/etc/mig/scheduler.crt"
    tlskey  = "/etc/mig/scheduler.key"

    ; AMQP options
    ; timeout defaults to 10 minutes
    ; keep this higher than the agent heartbeat value
    timeout = "10m"

[logging]
    mode = "stdout" ; stdout | file | syslog
    level = "debug"

    ; for file logging
    ; file = "mig_scheduler.log"

    ; for syslog, logs go into local3
    ; host = "localhost"
    ; port = 514
    ; protocol = "udp"
The sample above needs to be tweaked further to match your environment. This document explains each section in Appendix B. For now, let's test our setup with this basic conf by running mig-scheduler in foreground, as root.
# /opt/mig/bin/mig-scheduler
Initializing Scheduler context...OK
2015/09/09 04:25:47 - - - [debug] leaving initChannels()
2015/09/09 04:25:47 - - - [debug] leaving initDirectories()
2015/09/09 04:25:47 - - - [info] Database connection opened
2015/09/09 04:25:47 - - - [debug] leaving initDB()
2015/09/09 04:25:47 - - - [info] AMQP connection opened
2015/09/09 04:25:47 - - - [debug] leaving initRelay()
2015/09/09 04:25:47 - - - [debug] leaving makeSecring()
2015/09/09 04:25:47 - - - [info] no key found in database. generating a private key for user migscheduler
2015/09/09 04:25:47 - - - [info] created migscheduler identity with ID %!d(float64=1) and key ID A8E1ED58512FCD9876DBEA4FEA513B95032D9932
2015/09/09 04:25:47 - - - [debug] leaving makeSchedulerInvestigator()
2015/09/09 04:25:47 - - - [debug] loaded scheduler private key from database
2015/09/09 04:25:47 - - - [debug] leaving makeSecring()
2015/09/09 04:25:47 - - - [info] Loaded scheduler investigator with key id A8E1ED58512FCD9876DBEA4FEA513B95032D9932
2015/09/09 04:25:47 - - - [debug] leaving initSecring()
2015/09/09 04:25:47 - - - [info] mig.ProcessLog() routine started
2015/09/09 04:25:47 - - - [info] processNewAction() routine started
2015/09/09 04:25:47 - - - [info] sendCommands() routine started
2015/09/09 04:25:47 - - - [info] terminateCommand() routine started
2015/09/09 04:25:47 - - - [info] updateAction() routine started
2015/09/09 04:25:47 - - - [info] agents heartbeats listener initialized
2015/09/09 04:25:47 - - - [debug] leaving startHeartbeatsListener()
2015/09/09 04:25:47 - - - [info] agents heartbeats listener routine started
2015/09/09 04:25:47 4883372310530 - - [info] agents results listener initialized
2015/09/09 04:25:47 4883372310530 - - [debug] leaving startResultsListener()
2015/09/09 04:25:47 - - - [info] agents results listener routine started
2015/09/09 04:25:47 - - - [info] collector routine started
2015/09/09 04:25:47 - - - [info] periodic routine started
2015/09/09 04:25:47 - - - [info] queue cleanup routine started
2015/09/09 04:25:47 - - - [info] killDupAgents() routine started
2015/09/09 04:25:47 4883372310531 - - [debug] initiating spool inspection
2015/09/09 04:25:47 4883372310532 - - [info] initiating periodic run
2015/09/09 04:25:47 4883372310532 - - [debug] leaving cleanDir()
2015/09/09 04:25:47 4883372310532 - - [debug] leaving cleanDir()
2015/09/09 04:25:47 4883372310531 - - [debug] leaving loadNewActionsFromDB()
2015/09/09 04:25:47 4883372310531 - - [debug] leaving loadNewActionsFromSpool()
2015/09/09 04:25:47 4883372310531 - - [debug] leaving loadReturnedCommands()
2015/09/09 04:25:47 4883372310531 - - [debug] leaving expireCommands()
2015/09/09 04:25:47 4883372310531 - - [debug] leaving spoolInspection()
2015/09/09 04:25:47 4883372310532 - - [debug] leaving markOfflineAgents()
2015/09/09 04:25:47 4883372310533 - - [debug] QueuesCleanup(): found 0 offline endpoints between 2015-09-08 01:25:47.292598629 +0000 UTC and now
2015/09/09 04:25:47 4883372310533 - - [info] QueuesCleanup(): done in 7.389363ms
2015/09/09 04:25:47 4883372310533 - - [debug] leaving QueuesCleanup()
2015/09/09 04:25:47 4883372310532 - - [debug] leaving markIdleAgents()
2015/09/09 04:25:47 4883372310532 - - [debug] CountNewEndpoints() took 7.666476ms to run
2015/09/09 04:25:47 4883372310532 - - [debug] CountIdleEndpoints() took 99.925426ms to run
2015/09/09 04:25:47 4883372310532 - - [debug] SumIdleAgentsByVersion() took 99.972162ms to run
2015/09/09 04:25:47 4883372310532 - - [debug] SumOnlineAgentsByVersion() took 100.037988ms to run
2015/09/09 04:25:47 4883372310532 - - [debug] CountFlappingEndpoints() took 100.134112ms to run
2015/09/09 04:25:47 4883372310532 - - [debug] CountOnlineEndpoints() took 99.976176ms to run
2015/09/09 04:25:47 4883372310532 - - [debug] CountDoubleAgents() took 99.959133ms to run
2015/09/09 04:25:47 4883372310532 - - [debug] CountDisappearedEndpoints() took 99.900215ms to run
2015/09/09 04:25:47 4883372310532 - - [debug] leaving computeAgentsStats()
2015/09/09 04:25:47 4883372310532 - - [debug] leaving detectMultiAgents()
2015/09/09 04:25:47 4883372310532 - - [debug] leaving periodic()
2015/09/09 04:25:47 4883372310532 - - [info] periodic run done in 110.647479ms
Among the debug logs, we can see that the scheduler successfully connected to both PostgreSQL and RabbitMQ. It detected that no scheduler key was present in the database and created one with key ID "A8E1ED58512FCD9876DBEA4FEA513B95032D9932". It then proceeded to wait for work to do, waking up regularly to perform maintenance tasks.
This working scheduler allows us to move on to the next component: the API.
MIG's REST API is the interface between investigators and the rest of the infrastructure. It is also accessed by agents to discover their public IP.
The API needs to be deployed like a normal web application, preferably behind a reverse proxy that handles TLS.
{investigators}-\
                 --> {reverse proxy} -> {api} -> {database} -> {scheduler} -> {rabbitmq} -> {agents}
{agents}--------/
For this documentation, we will assume that the API listens on its local IP, which is 192.168.1.150, on port 51664. The public endpoint of the API is api.mig.example.net. A configuration could be defined as follows:
[authentication]
    # turn this on after initial setup, once you have at least
    # one investigator created
    enabled = off

    # when validating token timestamps, accept a timestamp that is
    # within this duration of the local clock
    tokenduration = 10m

[server]
    # local listening ip
    ip = "192.168.1.150"

    # local listening port
    port = 51664

    # public location of the API endpoint
    host = "https://api.mig.example.net"

    # API base route, all endpoints are below this path
    # ex: http://localhost:12345/api/v1/action/create/
    #     |------<host>--------|<base>|--<endpoint>--|
    baseroute = "/api/v1"

[postgres]
    host = "192.168.1.240"
    port = 5432
    dbname = "mig"
    user = "migapi"
    password = "xcJyJhLg1cldIp7eXcxv0U-UqV80tMb-"
    sslmode = "disable"

[logging]
    mode = "stdout" ; stdout | file | syslog
    level = "debug"

    ; for file logging
    ; file = "mig_api.log"

    ; for syslog, logs go into local3
    ; host = "localhost"
    ; port = 514
    ; protocol = "udp"
Note in the configuration above that authentication is disabled for now.
The Postgres credentials are taken from the user/password we generated for user migapi during the database configuration.
Under the [server] section:
- ip and port define the socket the API will be listening on.
- host is the public URL of the API, which clients will connect to
- baseroute is the location of the base of the API, without the trailing slash.
In this example, to reach the home of the API, we would point our browser to https://api.mig.example.net/api/v1/.
A sample Nginx reverse proxy configuration is shown below:
server {
    listen 443;
    ssl on;
    root /var/www;
    index index.html index.htm;
    server_name api.mig.example.net;
    client_max_body_size 200M;

    # certs sent to the client in SERVER HELLO are concatenated in ssl_certificate
    ssl_certificate /etc/nginx/certs/api.mig.example.net.crt;
    ssl_certificate_key /etc/nginx/certs/api.mig.example.net.key;
    ssl_session_timeout 5m;
    ssl_session_cache shared:SSL:50m;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits
    ssl_dhparam /etc/nginx/certs/dhparam;

    # modern configuration. tweak to your needs.
    ssl_protocols TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
    ssl_prefer_server_ciphers on;

    location /api/v1/ {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://192.168.1.150:51664/api/v1/;
    }
}
If you're going to enable HTTPS in front of the API, make sure to use a trusted certificate: agents will not connect to an endpoint that presents an untrusted certificate. If you can't obtain one, or don't want to for a test environment, don't use HTTPS and configure the API and Nginx to use plain HTTP instead. Credentials are never passed to the API, only PGP tokens, so the worst you could expose is investigation results.
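For a test environment without HTTPS, a minimal plain-HTTP server block could look like the sketch below (same backend address as above; adjust the server name to your setup). Remember to also set host to the http:// URL in the API's [server] section.
server {
    listen 80;
    server_name api.mig.example.net;

    location /api/v1/ {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://192.168.1.150:51664/api/v1/;
    }
}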
You can test that the API works properly by performing a request to the dashboard endpoint. It should return a JSON document with all counters at zero, since we don't have any agent connected yet.
$ curl https://api.mig.example.net/api/v1/dashboard | python -mjson.tool
{
"collection": {
"version": "1.0",
"href": "https://api.mig.example.net/api/v1/dashboard",
"items": [
{
"href": "https://api.mig.example.net/api/v1/dashboard",
"data": [
{
"name": "online agents",
"value": 0
},
{
"name": "online agents by version",
"value": null
},
{
"name": "online endpoints",
"value": 0
},
{
"name": "idle agents",
"value": 0
},
{
"name": "idle agents by version",
"value": null
},
{
"name": "idle endpoints",
"value": 0
},
{
"name": "new endpoints",
"value": 0
},
{
"name": "endpoints running 2 or more agents",
"value": 0
},
{
"name": "disappeared endpoints",
"value": 0
},
{
"name": "flapping endpoints",
"value": 0
}
]
}
],
"template": {},
"error": {}
}
}
MIG has multiple command line clients that can be used to interact with the API, run investigations and view results. The two main clients are mig, a command line tool that runs investigations quickly, and mig-console, a readline console that can also run investigations, browse past investigations and manage investigators. We will use mig-console to create our first investigator.
Here we will assume you already have GnuPG installed and that you have generated a keypair for yourself (see the documentation on gnupg.org). You should be able to display your PGP fingerprint using this command:
$ gpg --fingerprint myinvestigator@example.net
pub   2048R/3B763E8F 2013-04-30
      Key fingerprint = E608 92BB 9BD8 9A69 F759  A1A0 A3D6 5217 3B76 3E8F
uid                  My Investigator <myinvestigator@example.net>
sub   2048R/8026F39F 2013-04-30
Next, create the client configuration file in $HOME/.migrc. Below is a sample you can reuse with your own values.
$ cat ~/.migrc
[api]
    url = "https://api.mig.example.net/api/v1/"
[gpg]
    home = "/home/myuser/.gnupg/"
    keyid = "E60892BB9BD89A69F759A1A0A3D652173B763E8F"
Make sure you have the readline development library installed (readline-devel on rhel/fedora or libreadline-dev on debian/ubuntu), then go get the binary from its source repository:
$ sudo apt-get install libreadline-dev
$ go get mig.ninja/mig/client/mig-console
$ $GOPATH/bin/mig-console

[... MIG console ASCII-art banner ...]

+------
| Agents & Endpoints summary:
| * 0 online agents on 0 endpoints
| * 0 idle agents on 0 endpoints
| * 0 endpoints are running 2 or more agents
| * 0 endpoints appeared over the last 7 days
| * 0 endpoints disappeared over the last 7 days
| * 0 endpoints have been flapping
| Online agents by version:
| Idle agents by version:
|
| Latest Actions:
| ---- ID ---- + ---- Name ---- + -Sent- + ---- Date ---- + ---- Investigators ----
+------

Connected to https://api.mig.example.net/api/v1/. Exit with ctrl+d. Type help for help.
mig>
The console waits for input at the mig> prompt. Enter help if you want to explore all the available functions. For now, we will only create a new investigator in the database.
The investigator will be defined with its public key, so the first thing we need to do is export our public key to a local file that can be given to the console during the creation process.
$ gpg --export -a myinvestigator@example.net > /tmp/myinvestigator_pubkey.asc
Then in the console prompt, enter the following commands:
- create investigator
- enter a name, such as Bob The Investigator
- enter the path to the public key /tmp/myinvestigator_pubkey.asc
- enter y to confirm the creation
The console should display "Investigator 'Bob The Investigator' successfully created with ID 2". We can view the details of this new investigator by entering investigator 2 on the console prompt.
mig> investigator 2
Entering investigator mode. Type exit or press ctrl+d to leave. help may help.
Investigator 2 named 'Bob The Investigator'
inv 2> details
Investigator ID 2
name       Bob The Investigator
status     active
key id     E60892BB9BD89A69F759A1A0A3D652173B763E8F
created    2015-09-09 09:53:28.989481 -0400 EDT
modified   2015-09-09 09:53:28.989481 -0400 EDT
Now that we have an active investigator created, we can enable authentication in the API. Go back to the API server and modify the configuration in /etc/mig/api.cfg.
[authentication]
    # turn this on after initial setup, once you have at least
    # one investigator created
    enabled = on
Reopen the mig-console, and you will see the investigator name in the API logs:
2015/09/09 13:56:09 4885615083520 - - [info] src=192.168.1.243,192.168.1.1 auth=[Bob The Investigator 2] GET HTTP/1.0 /api/v1/dashboard resp_code=200 resp_size=600 user-agent=MIG Client console-20150826+62ea662.dev
The benefit of the PGP token approach is the API never needs access to private keys, and thus a compromise of the API doesn't leak credentials of investigators.
This concludes the configuration of the server side of MIG. Next we need to build agents that can be deployed across our infrastructure.
The MIG agent configuration must be prepared before building. The configuration is compiled into the agent, so no external file is required to run it.
TLS certificates, PGP public keys and configuration variables would normally be stored in external files, which would make installing an agent on an endpoint more complex. Building all of the configuration parameters into the agent means we can ship a single, self-sufficient binary. Go's statically linked binaries also greatly reduce the need for external dependencies. Once the agent is built, ship it to an endpoint, run it, and you're done.
A template of agent configuration is in 'conf/mig-agent-conf.go.inc'. Copy this to 'conf/mig-agent-conf.go' and edit the file. Make sure to respect Go syntax format.
$ go get mig.ninja/mig
$ cd $GOPATH/src/mig.ninja/mig
$ cp conf/mig-agent-conf.go.inc conf/example.net.agents-conf.go
$ vim conf/example.net.agents-conf.go
Later on, when you run 'make mig-agent', the Makefile will copy the agent configuration to the agent source code, and build the binary. If the configuration file is missing, Makefile will alert you. If you have an error in the format of the file, the Go compiler will return a list of compilation errors for you to fix.
TLS support between agents and rabbitmq is optional, but strongly recommended. If you want to use TLS, you need to import the PEM encoded client certificate, client key and CA certificate created in the PKI step above into 'mig-agent-conf.go', as sketched after the list below.
- CACERT must contain the PEM encoded certificate of the Root CA.
- AGENTCERT must contain the PEM encoded client certificate of the agent.
- AGENTKEY must contain the PEM encoded private key of the agent.
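As a sketch, the three variables end up looking like the following in the agent configuration. The []byte type mirrors the sample configuration file; paste the full PEM blocks produced during the PKI step.
// content of ca/ca.crt generated in the PKI step
var CACERT = []byte(`-----BEGIN CERTIFICATE-----
... PEM encoded CA certificate ...
-----END CERTIFICATE-----`)

// content of agent.crt generated in the PKI step
var AGENTCERT = []byte(`-----BEGIN CERTIFICATE-----
... PEM encoded agent certificate ...
-----END CERTIFICATE-----`)

// content of agent.key generated in the PKI step
var AGENTKEY = []byte(`-----BEGIN RSA PRIVATE KEY-----
... PEM encoded agent private key ...
-----END RSA PRIVATE KEY-----`)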
You also need to edit the AMQPBROKER variable to invoke amqps instead of the regular amqp mode. You probably also want to change the port from 5672 (default amqp) to 5671 (default amqps).
In the AMQPBROKER parameter, set the RabbitMQ username and password generated for the agent in the previous steps.
var AMQPBROKER string = "amqps://agent:p1938oanvdjknxcbveufif@rabbitmq.mig.example.net:5671/mig"
Agents need to know the location of the API as it is used to discover their public IP during startup.
var APIURL string = "https://api.mig.example.net/api/v1/"
The agent supports connecting to the relay via a CONNECT proxy. It will attempt a direct connection first and, if that fails, will look for the HTTP_PROXY environment variable to use as a proxy. A list of proxies can also be added manually to the agent configuration in the PROXIES parameter (see the sketch below). These proxies are used if the two previous methods fail.
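A minimal sketch of that parameter, assuming the PROXIES string array found in the sample configuration and using hypothetical proxy hosts:
// CONNECT proxies tried when both the direct connection and HTTP_PROXY fail,
// in "host:port" format (hypothetical hosts, adjust to your environment)
var PROXIES = [...]string{"proxy1.example.net:3128", "proxy2.example.net:3128"}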
An agent using a proxy will reference the name of the proxy in the environment fields of the heartbeat sent to the scheduler.
The agent can establish a listening TCP socket on localhost for management purposes. The list of supported operations can be obtained by sending the keyword help to this socket.
$ nc localhost 51664 <<< help
Welcome to the MIG agent socket. The commands are:
pid returns the PID of the running agent
To obtain the PID of the running agent, use the following command:
$ nc localhost 51664 <<< pid ; echo
9792
Leave the SOCKET configuration variable empty to disable the stat socket.
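For example, a configuration that keeps the stat socket on localhost could look like this (the address is illustrative):
// listening address of the stat socket, leave empty to disable it
var SOCKET = "127.0.0.1:51664"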
The agent can log to stdout, to a file or to the system logging. On Windows, the system logging is the Event log. On POSIX systems, it's syslog.
The LOGGINGCONF parameter is used to configure the proper logging level.
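A sketch of a file-based logging configuration, assuming the mig.Logging type and field names used in the sample configuration file:
// log to a local file at the info level
var LOGGINGCONF = mig.Logging{
    Mode:  "file", // stdout | file | syslog
    Level: "info",
    File:  "/var/log/mig-agent.log",
}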
The details of how access control lists are created and managed are described in the concepts documentation under Access Control Lists. Here, we focus on a basic setup that grants access to all modules for all investigators, and restricts what the scheduler key can do.
ACLs are declared as JSON hardcoded in the AGENTACL variable of the agent configuration. For now, we only create two ACLs: a default one that grants access to all modules to two investigators, and an agentdestroy one that grants access to the agentdestroy module to the scheduler.
The ACLs only reference the fingerprint of each investigator's public key and a weight that describes how much permission each investigator is granted.
// Control modules permissions by PGP keys
var AGENTACL = [...]string{
`{
"default": {
"minimumweight": 2,
"investigators": {
"Bob The Investigator": {
"fingerprint": "E60892BB9BD89A69F759A1A0A3D652173B763E8F",
"weight": 2
},
"Sam Axe": {
"fingerprint": "FA5D79F95F7AF7097C3E83DA26A86D5E5885AC11",
"weight": 2
}
}
}
}`,
`{
"agentdestroy": {
"minimumweight": 1,
"investigators": {
"MIG Scheduler": {
"fingerprint": "A8E1ED58512FCD9876DBEA4FEA513B95032D9932",
"weight": 1
}
}
}
}`,
}
Note that the PGP key of the scheduler was created automatically when we started the scheduler service for the first time. You can access its fingerprint via the mig-console, as follows:
$ mig-console
mig> investigator 1
inv 1> details
Investigator ID 1
name       migscheduler
status     active
key id     A8E1ED58512FCD9876DBEA4FEA513B95032D9932
created    2015-09-09 00:25:47.225086 -0400 EDT
modified   2015-09-09 00:25:47.225086 -0400 EDT
You can also view its public key by entering pubkey in the prompt.
The public keys of all investigators must be listed in the PUBLICPGPKEYS array. Each key is its own entry in the array. Since all investigators must be created via the mig-console to have access to the API, the easiest way to export their public keys is also via the mig-console.
$ mig-console
mig> investigator 2
inv 2> pubkey
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1
mQENBFF/69EBCADe79sqUKJHXTMW3tahbXPdQAnpFWXChjI9tOGbgxmse1eEGjPZ
QPFOPgu3O3iij6UOVh+LOkqccjJ8gZVLYMJzUQC+2RJ3jvXhti8xZ1hs2iEr65Rj
zUklHVZguf2Zv2X9Er8rnlW5xzplsVXNWnVvMDXyzx0ufC00dDbCwahLQnv6Vqq8
etc...
Then insert the whole armored public key, with header and footer, into the array. Each key must be its own entry in the PUBLICPGPKEYS array, enclosed in backticks. The order is irrelevant.
// PGP public key that is authorized to sign actions
var PUBLICPGPKEYS = [...]string{
`-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1 - myinvestigator@example.net
mQENBFF/69EBCADe79sqUKJHXTMW3tahbXPdQAnpFWXChjI9tOGbgxmse1eEGjPZ
=3tGV
-----END PGP PUBLIC KEY BLOCK-----
`,
`
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1. Name: sam.axe@example.net
mQINBE5bjGABEACnT9K6MEbeDFyCty7KalsNnMjXH73kY4B8aJXbE6SSnRA3gWpa
-----END PGP PUBLIC KEY BLOCK-----`}
The agent has many other configuration parameters that you may want to tweak before shipping it. Each of them is documented in the sample configuration file.
Once the agent is properly configured, you can build it using make. The path to the customized configuration must be given in the AGTCONF make variable. You can also set BUILDENV to the environment you're building for; it is set to dev by default.
$ make mig-agent AGTCONF=conf/example.net.agents-conf.go
mkdir -p bin/linux/amd64
echo building mig-agent for linux/amd64
building mig-agent for linux/amd64
if [ ! -r conf/example.net.agents-conf.go ]; then echo "conf/example.net.agents-conf.go configuration file does not exist" ; exit 1; fi
# test if the agent configuration variable contains something different than the default value
# and if so, replace the link to the default configuration with the provided configuration
if [ conf/example.net.agents-conf.go != "conf/mig-agent-conf.go.inc" ]; then rm mig-agent/configuration.go; cp conf/example.net.agents-conf.go mig-agent/configuration.go; fi
GOOS=linux GOARCH=amd64 GO15VENDOREXPERIMENT=1 go build -o bin/linux/amd64/mig-agent-20150909+556e9c0.dev"" -ldflags "-X main.version=20150909+556e9c0.dev" mig.ninja/mig/mig-agent
ln -fs "$(pwd)/bin/linux/amd64/mig-agent-20150909+556e9c0.dev""" "$(pwd)/bin/linux/amd64/mig-agent-latest"
[ -x "bin/linux/amd64/mig-agent-20150909+556e9c0.dev""" ] && echo SUCCESS && exit 0
SUCCESS
Built binaries will be placed in bin/linux/amd64/ (or in a similar directory if you are building on a different platform).
To cross-compile for a different platform, use the ARCH and OS make variables:
$ make mig-agent AGTCONF=conf/example.net.agents-conf.go BUILDENV=prod OS=windows ARCH=amd64
You can test the agent on the command line using the debug flag -d. When run with -d, the agent will stay in foreground and print its activity to stdout.
$ sudo ./bin/linux/amd64/mig-agent-20150909+556e9c0.dev -d
[info] using builtin conf
2015/09/09 10:43:30 - - - [debug] leaving initChannels()
2015/09/09 10:43:30 - - - [debug] Logging routine initialized.
2015/09/09 10:43:30 - - - [debug] leaving findHostname()
2015/09/09 10:43:30 - - - [debug] Ident is Debian testing-updates sid
2015/09/09 10:43:30 - - - [debug] Init is upstart
2015/09/09 10:43:30 - - - [debug] leaving findOSInfo()
2015/09/09 10:43:30 - - - [debug] Found local address 172.21.0.3/20
2015/09/09 10:43:30 - - - [debug] Found local address fe80::3602:86ff:fe2b:6fdd/64
2015/09/09 10:43:30 - - - [debug] Found public ip 172.21.0.3
2015/09/09 10:43:30 - - - [debug] leaving initAgentID()
2015/09/09 10:43:30 - - - [debug] Loading permission named 'default'
2015/09/09 10:43:30 - - - [debug] Loading permission named 'agentdestroy'
2015/09/09 10:43:30 - - - [debug] leaving initACL()
2015/09/09 10:43:30 - - - [debug] AMQP: host=rabbitmq.mig.example.net, port=5671, vhost=mig
2015/09/09 10:43:30 - - - [debug] Loading AMQPS TLS parameters
2015/09/09 10:43:30 - - - [debug] Establishing connection to relay
2015/09/09 10:43:30 - - - [debug] leaving initMQ()
2015/09/09 10:43:30 - - - [debug] leaving initAgent()
2015/09/09 10:43:30 - - - [info] Mozilla InvestiGator version 20150909+556e9c0.dev: started agent gator1
2015/09/09 10:43:30 - - - [debug] heartbeat '{"name":"gator1","queueloc":"linux.gator1.ft8dzivx8zxd1mu966li7fy4jx0v999cgfap4mxhdgj1v0zv","mode":"daemon","version":"20150909+556e9c0.dev","pid":2993,"starttime":"2015-09-09T10:43:30.871448608-04:00","destructiontime":"0001-01-01T00:00:00Z","heartbeatts":"2015-09-09T10:43:30.871448821-04:00","environment":{"init":"upstart","ident":"Debian testing-updates sid","os":"linux","arch":"amd64","isproxied":false,"addresses":["172.21.0.3/20","fe80::3602:86ff:fe2b:6fdd/64"],"publicip":"172.21.0.3"},"tags":{"operator":"example.net"}}'
2015/09/09 10:43:30 - - - [debug] Message published to exchange 'toschedulers' with routing key 'mig.agt.heartbeats' and body '{"name":"gator1","queueloc":"linux.gator1.ft8dzivx8zxd1mu966li7fy4jx0v999cgfap4mxhdgj1v0zv","mode":"daemon","version":"20150909+556e9c0.dev","pid":2993,"starttime":"2015-09-09T10:43:30.871448608-04:00","destructiontime":"0001-01-01T00:00:00Z","heartbeatts":"2015-09-09T10:43:30.871448821-04:00","environment":{"init":"upstart","ident":"Debian testing-updates sid","os":"linux","arch":"amd64","isproxied":false,"addresses":["172.21.0.3/20","fe80::3602:86ff:fe2b:6fdd/64"],"publicip":"172.21.0.3"},"tags":{"operator":"example.net"}}'
2015/09/09 10:43:30 - - - [debug] leaving initSocket()
2015/09/09 10:43:30 - - - [debug] leaving publish()
2015/09/09 10:43:30 - - - [info] Stat socket connected successfully on 127.0.0.1:61664
^C2015/09/09 10:43:39 - - - [emergency] Shutting down agent: 'interrupt'
2015/09/09 10:43:40 - - - [info] closing sendResults channel
2015/09/09 10:43:40 - - - [info] closing parseCommands goroutine
2015/09/09 10:43:40 - - - [info] closing runModule goroutine
The output above indicates that the agent successfully connected to RabbitMQ and sent a heartbeat message. The scheduler will receive this heartbeat and process it, but in order to accept the agent and mark it online, the scheduler must have the agent's queueloc value in its whitelist.
To do so, go back to the scheduler server and add the queueloc to /var/cache/mig/agents_whitelist.txt. There is no need to restart the scheduler; the whitelist is reread automatically.
$ echo 'linux.gator1.ft8dzivx8zxd1mu966li7fy4jx0v999cgfap4mxhdgj1v0zv' >> /var/cache/mig/agents_whitelist.txt
At the next run of the scheduler periodic routine, the agent will be marked as online and show up in the dashboard counters. You can browse these counters using the mig-console.
mig> status
+------
| Agents & Endpoints summary:
| * 1 online agents on 1 endpoints
+------
Get the mig command line from the upstream repository and run a simple investigation that looks for a user in /etc/passwd.
$ go get mig.ninja/mig/client/mig
$ $GOPATH/bin/mig file -path /etc -name "^passwd$" -content "^root"
1 agents will be targeted. ctrl+c to cancel. launching in 5 4 3 2 1 GO
Following action ID 4885615083564.status=inflight.
- 100.0% done in -2m17.141481302s
1 sent, 1 done, 1 succeeded
gator1 /etc/passwd [lastmodified:2015-08-31 16:15:05.547605529 +0000 UTC, mode:-rw-r--r--, size:2251] in search 's1'
1 agent has found results
A single file is found, as expected.
All communications between schedulers and agents rely on RabbitMQ's AMQP protocol. While MIG does not rely on the security of RabbitMQ to pass orders to agents, an attacker that gains control of the message broker would be able to listen to all messages passed between the various components. To prevent this, RabbitMQ must provide a reasonable amount of protection, at several levels:
- All communications on the public internet are authenticated using client and server certificates. Since all agents share a single client certificate, this provides minimal security, and should only be used to make it harder for attackers to establish an AMQP connection with rabbitmq.
- Agents can only listen on their own queue. This is accomplished by randomizing the name of the agent queue.
- Agents can only publish to the toschedulers exchange. This is accomplished using tight Access Control rules to RabbitMQ.
Note that, even if a random agent manages to connect to the relay, the scheduler will accept its registration only if it is present in the scheduler's whitelist.
On the rabbitmq server, create users:
- admin, with the tag 'administrator'
- scheduler, agent and worker, with no tag
All users should have strong passwords. The scheduler password goes into the configuration file conf/mig-scheduler.cfg, in [mq] password. The agent password goes into conf/mig-agent-conf.go, in the agent AMQPBROKER dial string. The admin password is, of course, for yourself.
sudo rabbitmqctl add_user admin SomeRandomPassword
sudo rabbitmqctl set_user_tags admin administrator
sudo rabbitmqctl add_user scheduler SomeRandomPassword
sudo rabbitmqctl add_user agent SomeRandomPassword
sudo rabbitmqctl add_user worker SomeRandomPassword
You can list the users with the following command:
sudo rabbitmqctl list_users
On a fresh installation, rabbitmq comes with a guest user that has the password guest and admin privileges. You may want to delete that account.
sudo rabbitmqctl delete_user guest
- Create a 'mig' virtual host.
sudo rabbitmqctl add_vhost mig
sudo rabbitmqctl list_vhosts
- Create permissions for the scheduler user. The scheduler is allowed to:
- CONFIGURE:
- declare the exchanges toagents, toschedulers and toworkers
- declare and delete queues under mig.agt.*
- WRITE:
- publish into the exchanges toagents and toworkers
- consume from queues mig.agt.heartbeats and mig.agt.results
- READ:
- declare the exchanges toagents, toschedulers and toworkers
- consume from queues mig.agt.heartbeats and mig.agt.results bound to the toschedulers exchange
sudo rabbitmqctl set_permissions -p mig scheduler \
'^(toagents|toschedulers|toworkers|mig\.agt\..*)$' \
'^(toagents|toworkers|mig\.agt\.(heartbeats|results))$' \
'^(toagents|toschedulers|toworkers|mig\.agt\.(heartbeats|results))$'
- Create permissions for the agent user. The agent is allowed to:
- CONFIGURE:
- create any queue under mig.agt.*
- WRITE:
- publish to the toschedulers exchange
- consume from queues under mig.agt.*
- READ:
- consume from queues under mig.agt.* bound to the toagents exchange
sudo rabbitmqctl set_permissions -p mig agent \
'^mig\.agt\..*$' \
'^(toschedulers|mig\.agt\..*)$' \
'^(toagents|mig\.agt\..*)$'
- Create permissions for the event workers. The workers are allowed to:
- CONFIGURE:
- declare queues under migevent.*
- WRITE:
- consume from queues under migevent.*
- READ:
- consume from queues under migevent.* bound to the toworkers exchange
sudo rabbitmqctl set_permissions -p mig worker \
'^migevent\..*$' \
'^migevent(|\..*)$' \
'^(toworkers|migevent\..*)$'
- Start the scheduler, it shouldn't return any ACCESS error. You can also list the permissions with the command:
$ sudo rabbitmqctl list_permissions -p mig | column -t
Listing permissions in vhost "mig" ...
agent ^mig\\.agt\\..*$ ^(toschedulers|mig\\.agt\\..*)$ ^(toagents|mig\\.agt\\..*)$
scheduler ^(toagents|toschedulers|toworkers|mig\\.agt\\..*)$ ^(toagents|toworkers|mig\\.agt\\.(heartbeats|results))$ ^(toagents|toschedulers|toworkers|mig\\.agt\\.(heartbeats|results))$
worker ^migevent\\..*$ ^migevent(|\\..*)$ ^(toworkers|migevent\\..*)$
The documentation from rabbitmq has a thorough explanation of SSL support in rabbit at http://www.rabbitmq.com/ssl.html. Without going into too much detail, we need three things:
- a PKI (and its public cert)
- a server certificate and private key for rabbitmq itself
- a client certificate and private key for the agents
You can obtain these three things on your own, or follow the openssl tutorial from the rabbitmq documentation. Come back here when you have all three.
On the rabbitmq server, place the certificates under /etc/rabbitmq/certs/.
/etc/rabbitmq/certs/
├── cacert.pem
├── migrelay1.example.net.key
└── migrelay1.example.net.pem
Edit (or create) the configuration file of rabbitmq to reference the certificates.
[
  {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [{cacertfile,"/etc/rabbitmq/certs/cacert.pem"},
                   {certfile,"/etc/rabbitmq/certs/migrelay1.example.net.pem"},
                   {keyfile,"/etc/rabbitmq/certs/migrelay1.example.net.key"},
                   {verify,verify_peer},
                   {fail_if_no_peer_cert,true}
                  ]}
  ]}
].
By default, queues within a RabbitMQ cluster are located on a single node (the node on which they were first declared). If that node goes down, the queue will become unavailable. To mirror all MIG queues to all nodes of a rabbitmq cluster, use the following policy:
# rabbitmqctl -p mig set_policy mig-mirror-all "^mig\." '{"ha-mode":"all"}'
Setting policy "mig-mirror-all" for pattern "^mig\\." to "{\"ha-mode\":\"all\"}" with priority "0" ...
...done.
To create a cluster, all rabbitmq nodes must share a secret called erlang cookie. The erlang cookie is located in /var/lib/rabbitmq/.erlang.cookie. Make sure the value of the cookie is identical on all members of the cluster, then tell one node to join another one:
# rabbitmqctl stop_app
Stopping node 'rabbit@ip-172-30-200-73' ...
...done.
# rabbitmqctl join_cluster rabbit@ip-172-30-200-42
Clustering node 'rabbit@ip-172-30-200-73' with 'rabbit@ip-172-30-200-42' ...
...done.
# rabbitmqctl start_app
Starting node 'rabbit@ip-172-30-200-73' ...
...done.
To remove a dead node from the cluster, use the following command from any active node of the running cluster.
# rabbitmqctl forget_cluster_node rabbit@ip-172-30-200-84
If one node of the cluster goes down and agents have trouble reconnecting, they may throw the error NOT_FOUND - no binding mig.agt..... That happens when the binding in question exists but the 'home' node of the (durable) queue is not alive. In the case of a mirrored queue, that would imply that all mirrors are down. Essentially, both the queue and the associated bindings are in a limbo state at that point: they neither exist nor do they not exist (source).
The safest thing to do is to delete all the queues on the cluster, and restart the scheduler. The agents will restart themselves.
# for queue in $(rabbitmqctl list_queues -p mig|grep ^mig|awk '{print $1}')
do
echo curl -i -u admin:adminpassword -H "content-type:application/json" \
-XDELETE http://localhost:15672/api/queues/mig/$queue;
done
(remove the echo in the command above, it's there as a safety for copy/paste people).
If you want to support more than 1024 clients, you may have to increase the maximum number of file descriptors that rabbitmq is allowed to hold. On Linux, increase nofile in /etc/security/limits.conf as follows:
rabbitmq - nofile 102400
Then, make sure that pam_limits.so is included in /etc/pam.d/common-session:
session required pam_limits.so
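After restarting rabbitmq-server, you can verify that the new limit was picked up: the file_descriptors section of rabbitmqctl status should report the higher total_limit.
$ sudo rabbitmqctl status | grep -A3 file_descriptors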
To prevent your agents from getting blocked by firewalls, it may be a good idea to use port 443 for connections between agents and rabbitmq. However, rabbitmq is not designed to run on a privileged port. The solution is to use iptables to redirect the port on the rabbitmq server.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-port 5671 -m comment --comment "Serve RabbitMQ on HTTPS port"
The scheduler keeps copies of work in progress in a set of spool directories. It will take care of creating the spool if it doesn't exist. The spool shouldn't grow beyond a few megabytes because the scheduler does regular housekeeping, but it is still preferable to put it in a large enough location.
sudo chown mig-user /var/cache/mig -R
Agents' queuelocs must be listed in a whitelist file for the scheduler to accept their registrations. The location of the whitelist is configurable, but a good place for it is /var/cache/mig/agents_whitelist.txt. The file contains one queueloc string per line. The agent queueloc is built from the hostname of the endpoint the agent runs on, plus a random value known only to the endpoint and the MIG platform.
linux.agent123.example.net.58b3mndjmbb00
windows.db4.sub.example.com.56b2andxmyb00
If the scheduler receives a heartbeat from an agent that is not present in the whitelist, it will log an error message. An operator can process the logs and add agents to the whitelist manually.
Dec 17 23:39:10 ip-172-30-200-53 mig-scheduler[9181]: - - - [warning] getHeartbeats(): Agent 'linux.somehost.example.net.4vjs8ubqo0100' is not authorized
For environments that are particularly dynamic, it is possible to use regexes in the whitelist. This is done by prepending re: to the whitelist entry.
re:linux.server[0-9]{1,4}.example.net.[a-z0-9]{13}
Keep the list of regexes short. Until MIG implements a better agent validation mechanism, the whitelist is reread for every registration, and the regexes are recompiled every time. On a busy platform, this can happen hundreds of times per second and induce heavy CPU usage.
sslmode
sslmode can take the values disable, require (no cert verification) and verify-full (requires cert verification). A proper installation should use verify-full.
[postgres]
    sslmode = "verify-full"
maxconn
The scheduler has an extra parameter to control the maximum number of database connections it can use at once. It's important to keep that number relatively low and increase it with the size of your infrastructure. The default value is 10, and a good production value is 100.
[postgres]
    maxconn = 10
If the DB insertion rate is lower than the agent heartbeat rate, the scheduler will receive more heartbeats per second than it can insert into the database. When that happens, you will see the insertion lag increase in the query below:
mig=> select NOW() - heartbeattime as "insertion lag"
mig-> from agents order by heartbeattime desc limit 1;
insertion lag
-----------------
00:00:00.212257
(1 row)
A healthy insertion lag should be below one second. If the lag increases, and your DB server still isn't stuck at 100% CPU, try increasing the value of maxconn. It will cause the scheduler to use more insertion threads.
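For example, a busier production scheduler could raise the value in its [postgres] section:
[postgres]
    maxconn = 100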
The scheduler can log to stdout, syslog, or a target file. It will run in foreground if the logging mode is set to 'stdout'. For the scheduler to run as a daemon, set the mode to 'file' or 'syslog'.
[logging]
    ; select a mode between 'stdout', 'file' and 'syslog'
    ; for syslog, logs go into local3
    mode     = "syslog"
    level    = "debug"
    host     = "localhost"
    port     = 514
    protocol = "udp"
TLS support between the scheduler and rabbitmq is optional but strongly recommended. To enable it, generate a client certificate and set the [mq] configuration section of the scheduler as follows:
[mq]
    host  = "relay1.mig.example.net"
    port  = 5671
    user  = "scheduler"
    pass  = "secretrabbitmqpassword"
    vhost = "mig"

    ; TLS options
    usetls  = true
    cacert  = "/etc/mig/scheduler/cacert.pem"
    tlscert = "/etc/mig/scheduler/scheduler-amqps.pem"
    tlskey  = "/etc/mig/scheduler/scheduler-amqps-key.pem"
Make sure to use fully qualified paths, otherwise the scheduler will fail to load them after going into the background.
The Collector is a routine run periodically by the scheduler to inspect the content of its spool. It loads files that may have been missed by the file notification routine, and deletes old files after a grace period.
[collector]
    ; frequency at which the collector runs
    freq = "60s"
Periodic routines are run at the freq interval to do housekeeping and accounting: cleaning up the spool, marking agents that stopped sending heartbeats as idle or offline, computing agent statistics, and detecting hosts running multiple agents.
; the periodic runs less often than
; the collector and does cleanup and DB updates
[periodic]
    ; frequency at which the periodic jobs run
    freq = "87s"

    ; delete finished actions, commands and invalids after
    ; this period has passed
    deleteafter = "360h"

    ; run a rabbitmq unused queues cleanup job at this frequency
    ; this is DB & amqp intensive so don't run it too often
    queuescleanupfreq = "24h"
The scheduler uses a PGP key to issue termination order on hosts that run multiple agents. Due to the limited scope of that key, it is stored in the database to facilitate deployment and provisioning of multiple schedulers.
Upon startup, the scheduler will look for an investigator named migscheduler and retrieve its private key to use it in action signing. If no investigator is found, it generates one and inserts it into the database, such that other schedulers can use it as well.
At this time, the scheduler public key must be manually added to the agent configuration. This will change in the future, when ACLs and investigators can be dynamically distributed to agents.
In the ACL of the agent configuration file conf/mig-agent-conf.go:
var AGENTACL = [...]string{
`{
    "agentdestroy": {
        "minimumweight": 1,
        "investigators": {
            "MIG Scheduler": {
                "fingerprint": "1E644752FB76B77245B1694E556CDD7B07E9D5D6",
                "weight": 1
            }
        }
    }
}`,
}
And add the public PGP key of the scheduler as well:
// PGP public keys that are authorized to sign actions
var PUBLICPGPKEYS = [...]string{
`
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1. Name: MIG Scheduler

mQENBFF/69EBCADe79sqUKJHXTMW3tahbXPdQAnpFWXChjI9tOGbgxmse1eEGjPZ
QPFOPgu3O3iij6UOVh+LOkqccjJ8gZVLYMJzUQC+2RJ3jvXhti8xZ1hs2iEr65Rj
zUklHVZguf2Zv2X9Er8rnlW5xzplsVXNWnVvMDXyzx0ufC00dDbCwahLQnv6Vqq8
BdUCSrvo/r7oAims8SyWE+ZObC+rw7u01Sut0ctnYrvklaM10+zkwGNOTszrduUy
.....
`,
}
It is possible to use a configuration file with the agent. The location of the file can be specified using the -c flag of the agent binary. If no flag is specified, the agent looks for a configuration file at /etc/mig/mig-agent.cfg. If no file is found at that location, the builtin parameters are used.
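For example, to run the agent built earlier against an external configuration file:
$ sudo ./bin/linux/amd64/mig-agent-latest -c /etc/mig/mig-agent.cfg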
The following parameters are not controllable via the configuration file:
- list of investigators public keys in PUBLICPGPKEYS
- list of access control lists in AGENTACL
- list of proxies in PROXIES
All other parameters can be overridden in the configuration file. Check out the sample file mig-agent.cfg.inc in the conf folder.
The Makefile in the MIG repository contains targets that build RPM, DEB, DMG and MSI files to facilitate the distribution of MIG Agents. These targets rely on FPM to build packages, so make sure you have ruby and fpm installed before proceeding.
Note that due to various packaging requirements, it is easier to build a package on the environment it is targeted for: rhel for RPMs, debian for DEBs, macos for DMGs and windows for MSIs.
The make targets are:
- deb-agent to build debian packages
- rpm-agent to build rpm packages
- dmg-agent to build dmg packages
- msi-agent to build msi packages (experimental)
$ make deb-agent AGTCONF=conf/linuxwall-mig-agent-conf.go
mkdir -p bin/linux/amd64
echo building mig-agent for linux/amd64
[...]
fpm -C tmp -n mig-agent --license GPL --vendor mozilla --description "Mozilla InvestiGator Agent" \
-m "Mozilla OpSec" --url http://mig.mozilla.org --architecture x86_64 -v 20150909+556e9c0.dev \
--after-remove tmp/agent_remove.sh --after-install tmp/agent_install.sh \
-s dir -t deb .
Created package {:path=>"mig-agent_20150909+556e9c0.dev_amd64.deb"}
$ ls -al mig-agent_20150909+556e9c0.dev_amd64.deb
-rw-r--r-- 1 ulfr ulfr 3454772 Sep 9 10:55 mig-agent_20150909+556e9c0.dev_amd64.deb