First time setup
You can download SELKS 6 RC1 from the SELKS 6 download page.
Our blog post about the release: https://www.stamus-networks.com/2019/04/16/selks5-the-sorceress/
Virtual machine import note: the recommended minimum setup for SELKS 6 is 2 vCPUs and 8 GB RAM.
After installing the ISO and completing the first time setup (please read below), just point your browser to https://your.selks.IP.here/
Usage and logon credentials (OS/ssh and web management user):

- user: selks-user
- password: selks-user (in Live mode the password is live)
- default root password: StamusNetworks
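For example, logging in over SSH with the default credentials could look like this; the IP address below is a placeholder for your own SELKS machine's address, and changing the default passwords right away is a good idea:

```
# Log in as the default user (replace the IP with your SELKS box's address)
ssh selks-user@192.0.2.10

# Once logged in, change the default passwords
passwd                 # changes selks-user's password
su -                   # become root with the default root password
passwd                 # changes root's password
```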
NOTE: Internet access is needed to complete the first time setup.

From the cmd/shell:

```
selks-first-time-setup_stamus
```

On Desktop versions of SELKS: double-click the "FirstTimeSetup" icon on the desktop.
NOTE: Follow the instructions and answer the setup questions. The first time setup script can take about 2-5 minutes to finish. Logs from the first time setup process and its tasks are located in /opt/selks/log/

Follow the instructions, type in the desired sniffing interface(s), and choose whether or not to enable a Full Packet Capture (FPC) option. You can follow the setup logs from a second terminal as shown below.
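A simple sketch for watching the setup progress; the exact log file names under /opt/selks/log/ are not documented here, so list the directory first:

```
# See which logs the setup produced, then follow them as the script runs
ls -l /opt/selks/log/
tail -f /opt/selks/log/*.log
```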
NOTE: After the first time setup is done, please upgrade to the latest components. Do an upgrade (in cmd/shell):

```
selks-upgrade_stamus
```

Or do the upgrade from the desktop by simply clicking the Upgrade-SELKS desktop icon.
After the script is finished you can access the web management interface and GUI via https://your.selks.IP.here/ which provides a landing page for:

- Scirius - ruleset management and Suricata administration
- Kibana dashboards - providing links/connections to rules and alert event drill-down management, correlation and Full Packet Capture
- EveBox - alert, event and correlation management
- Moloch viewer - for pcap export and packet capture drill-down
- Scirius Hunt interface - once logged in, click in the upper right corner and choose Hunt
To see the status of the critical services:

```
systemctl status suricata elasticsearch logstash kibana evebox molochviewer-selks molochpcapread-selks
supervisorctl status scirius
```

Or just use the built-in health check script:

```
selks-health-check_stamus
```
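As a small convenience sketch (not part of SELKS itself), the same service list can be looped over to restart anything systemd reports as inactive; scirius runs under supervisor and is checked separately:

```
# Restart any SELKS systemd service that is not currently active
for svc in suricata elasticsearch logstash kibana evebox \
           molochviewer-selks molochpcapread-selks; do
  systemctl is-active --quiet "$svc" || systemctl restart "$svc"
done

# scirius is managed by supervisor, not systemd
supervisorctl status scirius
```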
All configs for elasticsearch, kibana, logstash and scirius share their default locations, e.g. /etc/{service_name_here}/

For Moloch:

```
/data/moloch/etc/config.ini
```

For Suricata:

```
/etc/suricata/suricata.yaml
/etc/suricata/selks6-addin.yaml
/etc/suricata/selks6-interfaces-config.yaml
```

selks6-addin.yaml contains the SELKS 6 and Suricata specific setup.

selks6-interfaces-config.yaml contains the auto-generated (by the First Time Setup script) interface configuration for the chosen sniffing interfaces - see the sketch below.
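For illustration only, a minimal af-packet block of the kind such a generated file may contain is sketched below. This is an assumption, not the script's verbatim output, and the interface name is a placeholder:

```
# Hypothetical sketch only - the real file is generated by the first time setup script
af-packet:
  - interface: eth1            # placeholder - your chosen sniffing interface
    cluster-id: 99
    cluster-type: cluster_flow # packets of one flow stay on one thread
    defrag: yes                # reassemble IP fragments before inspection
```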
The first time setup script can be run from the command line:

```
selks-first-time-setup_stamus
```

or from the desktop shortcut (if using the desktop version of SELKS) by double-clicking the FirstTimeSetup icon.
Log locations:

Elasticsearch:

```
/var/log/elasticsearch/elasticsearch.log
```

Logstash:

```
/var/log/logstash/logstash-plain.log
/var/log/logstash/logstash-slowlog-plain.log
```

Suricata:

```
/var/log/suricata/suricata.log
/var/log/suricata/stats.log
```

Moloch:

```
/data/moloch/logs/
```

Scirius:

```
/var/log/scirius-error.log
/var/log/scirius.log
```
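A quick sketch for scanning the logs above for trouble, using only the paths already listed:

```
# Case-insensitive scan for errors/warnings across the main SELKS logs
grep -iE "error|warn" \
  /var/log/suricata/suricata.log \
  /var/log/elasticsearch/elasticsearch.log \
  /var/log/logstash/logstash-plain.log \
  /var/log/scirius-error.log
```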
Full Packet Capture on SELKS 6 is done by Suricata. During the first time setup you will be asked to choose between:

1) FPC - Full Packet Capture. Suricata will rotate and delete the captured pcap files.
2) FPC_Retain - Full Packet Capture with Moloch's pcap retention/rotation. Keeps the pcaps as long as there is space available.
3) None - disable packet capture.
The default settings for SELKS 6 are located in /etc/suricata/selks6-addin.yaml and in terms of FPC are:

```
  - pcap-log:
      enabled: yes
      filename: log.%n.%t.pcap
      #filename: log.pcap
      # File size limit.  Can be specified in kb, mb, gb.  Just a number
      # is parsed as bytes.
      limit: 10mb
      # If set to a value will enable ring buffer mode. Will keep Maximum of "max-files" of size "limit"
      max-files: 20
      mode: multi # normal, multi or sguil.
      # Directory to place pcap files. If not provided the default log
      # directory will be used. Required for "sguil" mode.
      dir: /data/nsm/
```
This means that every Suricata thread (by default as many as the CPUs available) will write its own pcap file. When a pcap reaches 10 MB in size it is closed and a new one is started.

NOTE: When a pcap is closed for writing by Suricata, it is picked up by the Moloch reader and digested into Elasticsearch. This also explains why in certain cases you can see an alert while the FPC is not yet available in the Moloch viewer.
The pcaps are rotated on a per-thread basis once a thread has written 20 pcaps. So with the default config you could theoretically have a maximum of:

20 * number_of_threads * 10MB

For a 4 CPU machine with the default settings this would be:

20 * 4 * 10MB = 800MB of data max.

After that the files get rotated, so the total never goes over 800MB.
If you choose to adjust those default settings you need to restart the service for the changes to take effect:

```
systemctl restart suricata
```
If you choose Option 1 the pcaps will be rotated by Suricata. Pcaps get stored in:

```
/data/nsm/
```

Keeping smaller pcap files ensures the data is digested more often. However, this needs to be tried/tested for each particular deployment, depending on the size of the sniffed traffic.
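To verify the rotation is keeping the storage within the expected bound (800MB in the default 4-CPU example above), something like:

```
# Total size of the Suricata FPC store and a peek at the newest files
du -sh /data/nsm/
ls -lht /data/nsm/ | head
```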
Moloch's config.ini settings are explained here - https://github.com/aol/moloch/wiki/Settings
If you choose Option 2 (FPC_Retain) the pcaps get stored in:

```
/data/nsm/
/data/moloch/raw/
```

This option offers the possibility of setting up a size and time retention policy. The pcap storage is handled by Moloch. In this case Suricata writes the FPC pcaps to /data/nsm/, but once writing is done (for example, a file is closed because it reached the default 10MB limit) they are digested and then immediately deleted from /data/nsm/, while being kept in /data/moloch/raw/. The rotation policy of Moloch is described here - https://github.com/aol/moloch/wiki/FAQ#pcap-deletion. If needed, the settings can be adjusted on SELKS in /data/moloch/etc/config.ini. You need to restart the service for the changes to take effect:

```
systemctl restart molochpcapread-selks
```
Moloch's storage handling is explained here - https://github.com/aol/moloch/wiki/FAQ#data-never-gets-deleted
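freeSpaceG and maxFileSizeG are the standard Moloch settings governing pcap deletion and file sizing; whether SELKS sets them explicitly in its shipped config is an assumption, so check:

```
# Show pcap retention/size settings, if present, in Moloch's config
grep -E "freeSpaceG|maxFileSizeG" /data/moloch/etc/config.ini
```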
If you choose Option 3 (None), no packet capture is done.
The Moloch and Elasticsearch clean up procedures are located in:

```
/etc/crontab
```

With respect to pcap storage: depending on whether you have chosen Option 1 or Option 2, pcap rotation and deletion is handled by either Suricata or Moloch, as explained in the sections above.

To clean up and delete all (wipe everything out) logs and pcaps and flush the Elasticsearch DB you can use:

```
selks-db-logs-cleanup_stamus
```

The pcap storage rotation policy of Moloch (if it is chosen) is described here - https://github.com/aol/moloch/wiki/FAQ#pcap-deletion.
To see the size of the indices in Elasticsearch:

```
curl -X GET "localhost:9200/_cat/indices?v&s=store.size"
```

To check specifically the indices created from Suricata's logs:

```
curl -X GET "localhost:9200/_cat/indices?v&s=store.size" | grep logstash
```
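Should you ever need to reclaim space by hand, individual indices can be removed with the standard Elasticsearch delete API. The index name below is purely illustrative; double-check against the listing above before deleting:

```
# CAUTION: permanently deletes the named index and all its documents
curl -X DELETE "localhost:9200/logstash-2020.01.01"
```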
Handy scripts:

Clean all logs and flush DBs:

```
selks-db-logs-cleanup_stamus
```

First time setup. Can be run multiple times, as many as needed/wanted:

```
selks-first-time-setup_stamus
```

Set up and configure Moloch (already included in the selks-first-time-setup_stamus execution sequence):

```
selks-molochdb-init-setup_stamus
```

Set up the sniffing interface for Suricata (already included in the selks-first-time-setup_stamus execution sequence):

```
selks-setup-ids-interface
```

SELKS upgrade:

```
selks-upgrade_stamus
```

SELKS health status check:

```
selks-health-check_stamus
```
For the purpose of speed, the OS and /data/nsm/ can reside on SSDs. Depending on your FPC needs you can consider mounting /data/moloch/raw/ onto separate partitions/disks; those can be slower but with much bigger volume and cheaper.
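An illustrative /etc/fstab entry for such a split; the device name and filesystem type are assumptions to adapt to your hardware:

```
# Illustrative: big, slower disk dedicated to Moloch's retained pcaps
/dev/sdb1   /data/moloch/raw   ext4   defaults,noatime   0   2
```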
NOTE: Make sure (especially if you have upgraded to Scirius 3.2.0+) that in /etc/scirius/local_settings.py you have the following variable:

```
KIBANA6_DASHBOARDS_PATH = "/opt/selks/kibana6-dashboards/"
```

To reload/reset the dashboards from the cmd/shell:

```
cd /usr/share/python/scirius/ && . bin/activate && python bin/manage.py kibana_reset && deactivate
```
To reload/reset the dashboards from the Scirius GUI: go to System settings (from the Stamus logo drop-down menu in the upper left corner) -> Kibana -> choose the desired action.

Source location:

```
/opt/selks/kibana7-dashboards/
```

Elasticsearch documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/important-settings.html
A quick attempt at performance optimization is to look at /etc/elasticsearch/jvm.options and increase the heap:

```
## JVM configuration

################################################################
## IMPORTANT: JVM heap size
################################################################
##
## You should always set the min and max JVM heap
## size to the same value. For example, to set
## the heap to 4 GB, set:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################

# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g
```
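After changing the heap values, restart the service for them to take effect (the heap-size guide linked above recommends staying at or below half of the machine's RAM):

```
systemctl restart elasticsearch
```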
A more thorough guide for Elasticsearch performance tuning
Logstash documentation:
https://www.elastic.co/guide/en/logstash/current/introduction.html

For a quick try at performance optimization you can increase the heap in /etc/logstash/jvm.options:

```
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space

-Xms1g
-Xmx1g
```

Also in /etc/logstash/logstash.yml:

```
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
```
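As with Elasticsearch, a restart is needed after editing either file (the pipeline.workers and pipeline.batch.size values shown are the commented-out defaults):

```
systemctl restart logstash
```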
A more thorough guide for Performance tuning and troubleshooting
Moloch documentation:
https://github.com/aol/moloch/wiki
Performance tuning:
https://github.com/aol/moloch/wiki/Settings#high-performance-settings
Suricata documentation:
https://suricata.readthedocs.io/en/latest/
Performance tuning (for advanced users):