Virtual Machine README
This page contains details for the SOF-ELK® (Security Operations and Forensics Elasticsearch, Logstash, Kibana) VM. The VM is provided as a community resource but is covered at varying depths in the following SANS courses and workshops:
- SANS FOR572, Advanced Network Forensics and Analysis
- SANS FOR509, Cloud Forensics and Incident Response
- SANS FOR589, Cybercrime Intelligence
- SANS Aviata Cloud Solo Flight Challenge
The VM is also used in the following courseware and/or other instructional materials:
All parsers and dashboards for this VM are now maintained in the project's GitHub repository.
The latest version of the VM itself is available using the links below:
- x86 version for Intel-based processors: https://for572.com/sof-elk-x86-vm
- ARM version for Apple M-series and other ARM processors: https://for572.com/sof-elk-arm-vm
- Basic details on the distribution
  - The VM is an Ubuntu 24.04 base with all OS updates as of 2024-12-17
  - Includes Elastic stack components v8.17.0
  - Configuration files are from the "public/v20241217" branch of this GitHub repository
- Metadata
  - x86 processor version
    - Filename: Public SOF-ELK x86 v20241217.7z
    - File size: 2,611,258,269 bytes
    - MD5: 802a023e5bfa43f38e987ce73f8dab3b
    - SHA256: 1ebe82dd835a85f3bfe24a4eb8c25a4c6d1d3a5c700e9e99365e4ec6ce530b4e
  - ARM processor version
    - Filename: Public SOF-ELK ARM v20241217.7z
    - File size: 2,469,859,147 bytes
    - MD5: 8dd0dec02104977e8f27f1f540a82dc1
    - SHA256: 8f4c274ddd4ca0e6fbd1f07cf190a6c06b1b00521bbbaa3b373c215c5f4a0e6f
- The VM was created with VMware Fusion v13.6.1 and ships with virtual hardware v21.
- If you're using an older version of VMware Workstation/Fusion/Player, you will likely need to convert the VM back to a previous version of the hardware.
- Some VMware software provides this function via the GUI, or you may find the free "VMware vCenter Converter" tool helpful.
- The VM is deployed with the "NAT" network mode enabled.
- Credentials:
  - username: elk_user
  - password: forensics
  - The elk_user account has sudo access to run ALL commands
- Logstash will ingest all files from the following filesystem locations:
  - /logstash/aws/: JSON-formatted Amazon Web Services CloudTrail log files. Use the included aws-cloudtrail2sof-elk.py loader script (see the example after this list).
  - /logstash/azure/: JSON-formatted Microsoft Azure logs. At this time, the following log types are supported: Event Logs, Sign In Logs, Audit Logs, Admin Activity Logs, and Storage Logs.
  - /logstash/gcp/: JSON-formatted Google Cloud Platform logs.
  - /logstash/gws/: JSON-formatted Google Workspace logs extracted using the Google Workspace API.
  - /logstash/httpd/: Apache logs in common, combined, or vhost-combined formats.
  - /logstash/kape/: JSON-format files generated by the KAPE triage collection tool. (See this document for details on which specific output files are currently supported and their required file naming structure.)
  - /logstash/kubernetes/: Kubernetes log files.
  - /logstash/microsoft365/: JSON-formatted Microsoft 365 logs only.
  - /logstash/nfarch/: Archived NetFlow output, formatted as described below.
  - /logstash/passivedns/: Logs from the passivedns utility.
  - /logstash/plaso/: CSV bodyfile-format files generated by the Plaso tool from the log2timeline framework. (See this document for details on creating CSV files in a supported format.)
  - /logstash/syslog/: Syslog-formatted data.
    - NOTICE: Remember that syslog DOES NOT reflect the year of a log entry! Therefore, Logstash has been configured to look for a year value in the path to a file. For example, /logstash/syslog/2015/var/log/messages will assign all entries from that file to the year 2015. If no year is present, the current year will be assumed. This is enabled only for the /logstash/syslog/ directory (see the example after this list).
  - /logstash/zeek/: JSON-formatted logs from the Zeek Network Security Monitoring platform. These must be in decompressed form. The following Zeek logs are supported:
    - conn.log: Treated like NetFlow and stored in the netflow-* indices.
    - dns.log: Treated like other DNS logs and stored in the logstash-* indices.
    - http.log: Treated like other HTTP logs and stored in the httpdlog-* indices.
    - The following logs are stored in the zeek-* indices: files.log, ftp.log, notice.log, ssl.log, weird.log, x509.log
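For example, the syslog year-in-path behavior and the CloudTrail loader mentioned above might be used as sketched below. The evidence paths are placeholders, and the -r/-w options for aws-cloudtrail2sof-elk.py are assumptions that mirror the other loader scripts shown later in this document; check the script's -h output for its actual options.

# Assumed example: stage syslog evidence under a year-named directory so
# Logstash assigns the correct year (2015) to each entry
mkdir -p /logstash/syslog/2015/var/log/
cp /cases/evidence/var/log/messages /logstash/syslog/2015/var/log/

# Assumed example: convert raw CloudTrail JSON for ingestion (options assumed;
# verify with: aws-cloudtrail2sof-elk.py -h)
aws-cloudtrail2sof-elk.py -r /cases/evidence/cloudtrail/ -w /logstash/aws/cloudtrail.json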
Commands to be familiar with:
- /usr/local/sbin/sof-elk_clear.py: DESTROYS contents of the Elasticsearch database. Most frequently used with an index name base (e.g. sof-elk_clear.py -i logstash to delete all data from the Elasticsearch logstash-* indices). Other options are detailed with the -h flag. (See the examples after this list.)
- /usr/local/sbin/sof-elk_update.sh: Updates the SOF-ELK® configuration files from the GitHub repository. (Requires sudo.)
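A few illustrative invocations of these commands, based solely on the options described above:

# Destructive: remove all documents from the logstash-* indices
sof-elk_clear.py -i logstash

# List the other available options
sof-elk_clear.py -h

# Refresh the SOF-ELK configuration files from GitHub
sudo sof-elk_update.sh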
Files to be familiar with:
- /etc/logstash/conf.d/*.conf: Symlinks to GitHub-based configuration files that handle input, preprocessing, parsing, postprocessing, and output of log events.
- /usr/local/sof-elk/*: Clone of the project's GitHub repository, with the public/v* branch corresponding to the virtual machine's release version.
To start using the VM:
- Extract the compressed archive to your host system
- Open and boot the VM
- Log into the VM with the elk_user credentials (see above)
  - Logging in via SSH is recommended, but if using the console login and a non-US keyboard, run sudo loadkeys uk, replacing uk as needed for your local keyboard mapping
- cd to one of the /logstash/*/ directories as appropriate
- Place files in this location (see the example after this list)
- Mind the above warning about the year for syslog data.
- Open the main Kibana dashboard using the Kibana URL shown in the pre-authentication screen, http://<ip_address>:5601
- This dashboard gives a basic overview of what data has been loaded and how many records are present
- Be sure to set the necessary time frame for your data
- There are several stock dashboards included at the link near the top of the Introduction dashboard
- Wait for Logstash to parse the input files, load the appropriate dashboard URL, and start interacting with your data
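As a minimal sketch of this workflow, assuming the VM's address is 192.168.100.50 and the evidence is a single Apache access log (the IP address and filename are placeholders):

# From the host: copy an evidence file into the matching ingest directory over SSH
# (assumes elk_user can write to the /logstash/*/ directories, per the design above)
scp access_log.example elk_user@192.168.100.50:/logstash/httpd/

# Then open Kibana from the host's browser and set the time frame for the data:
#   http://192.168.100.50:5601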
- To ingest existing nfcapd-created NetFlow evidence, it must be parsed into a specific format. The included nfdump2sof-elk.sh script will take care of this.
  - Read from single file: nfdump2sof-elk.sh -r /path/to/netflow/nfcapd.201703190000 -w /logstash/nfarch/inputfile_1.txt
  - Read recursively from directory: nfdump2sof-elk.sh -r /path/to/netflow/ -w /logstash/nfarch/inputfile_2.txt
  - Optionally, you can specify the IP address of the exporter that created the flow data: nfdump2sof-elk.sh -e 10.3.58.1 -r /path/to/netflow/ -w /logstash/nfarch/inputfile_3.txt
- To ingest existing AWS VPC Flow data files in JSON format, use the included aws-vpcflow2sof-elk.sh script.
  - Read recursively from directory: aws-vpcflow2sof-elk.sh -r /path/to/aws-vpcflow/ -w /logstash/nfarch/aws-vpcflow_1.txt
- To ingest existing GCP VPC Flow data files in JSON format, use the included azure-vpcflow2sof-elk.py script.
  - Read from single file: azure-vpcflow2sof-elk.py -r /path/to/gcp-vpcflow/file1.json -w /logstash/nfarch/gcp-vpcflow_1.txt
  - Read recursively from directory: azure-vpcflow2sof-elk.py -r /path/to/gcp-vpcflow/ -w /logstash/nfarch/gcp-vpcflow_1.txt
Sample evidence included in the VM:
- Syslog data in ~elk_user/sample_evidence/lab-2.3_source_evidence.zip
  - Unzip this file first, such as: cd ~elk_user/sample_evidence/ ; unzip lab-2.3_source_evidence.zip
  - Then unzip each of the resulting files into the /logstash/*/ subdirectory indicated below, such as: unzip -d /logstash/syslog/ ~elk_user/sample_evidence/lab-2.3_source_evidence/<file>
    - proxy_squid_logs.zip: /logstash/httpd/
    - proxy_system_logs.zip: /logstash/syslog/
    - commonuse_windows_logs.zip: /logstash/syslog/
    - muse_logs.zip: /logstash/syslog/
    - fw-router_logs.zip: /logstash/syslog/
  - Use the time frame 2013-06-08 15:00:00 to 2013-06-08 23:30:00 to examine this data.
- NetFlow data in ~elk_user/sample_evidence/lab-3.1_source_evidence.zip
  - Unzip this file first, such as: cd ~elk_user/sample_evidence/ ; unzip lab-3.1_source_evidence.zip
  - Then use the nfdump2sof-elk.sh script and write output to the /logstash/nfarch/ directory, such as: cd /home/elk_user/sample_evidence/lab-3.1_source_evidence/ ; nfdump2sof-elk.sh -e 10.3.58.1 -r netflow/ -w /logstash/nfarch/lab-3.1_netflow.txt
  - Use the time frame 2012-04-02 to 2012-04-07 to examine this data.
This project continues in part due to the amazing support from a range of people across the security industry. The valuable and vital contributions from those who have committed content, filed issues, and submitted pull requests are reflected in the GitHub interface for those respective functions. The support from others is just as critical, including generating and/or providing sample data to test new features, contributing documentation, and more. This is not an exhaustive list, but the efforts of the information security community are always an important factor in the success of any open source project.
- Derek B: Cisco ASA parsing/debugging and sample data
- Barry A: Sample data and troubleshooting
- Ryan Johnson: Testing
- Matt Bromiley: Testing
- Mike Pilkington: Testing
- Mark Hallman: Testing
- David Szili: Testing and troubleshooting
- Pierre Lidome: Microsoft 365 assistance, test data, and overall testing of the cloud data parsers
- Josh Lemon: GCP assistance
- David Cowen: AWS assistance
- Megan Roddie: Testing
- Bedang Sen: Documentation regarding building new parsers
- Tony Knutson: Sample data for the KAPE, Snare, and Plaso parsers; Overall testing
- This virtual appliance is provided "as is" with no express or implied warranty for accuracy or accessibility. No support for the functionality the VM provides is offered outside of this document.
- This virtual appliance includes GeoLite2 data created by MaxMind, available from https://www.maxmind.com and subject to the GeoLite2 EULA. The included GeoIP database files are from December 17, 2019 and are covered by the previous MaxMind license that permitted redistribution of these files without collecting user contact information. Installation of updated GeoIP databases should be accomplished by the included /usr/local/sbin/geoip_bootstrap.sh script. This script also optionally enables scheduled automatic updates to the databases for Internet-connected systems. You can learn more about the GeoLite2 databases, as well as sign up for a free MaxMind account, by clicking here.
- SOF-ELK® is a registered trademark of Lewes Technology Consulting, LLC. Content is copyrighted by its respective contributors. The SOF-ELK logo is a wholly owned property of Lewes Technology Consulting, LLC and is used by permission.
All content ©2025 Lewes Technology Consulting, LLC unless otherwise indicated.