Sharingan is the monitoring and blue-team solution for the AniNIX. It is responsible for monitoring and alarming on a wide array of events, from DevOps to cybersecurity.
# Etymology
Sharingan is named after the mythical technique from the Naruto anime series. Sharingan confers deep insight abilities to its user, and our implementation of it will do the same for our administrators' domains.
# Relevant Files and Software
We use Graylog on a dedicated VM to aggregate results. By default, all servers in a datacenter should forward their journald logs via syslog to `sharingan.$datacenter.aninix.net`.
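A quick way to confirm a host can reach the aggregator is to hand-deliver a test message over UDP syslog. This is a sketch only: `dc1` is a hypothetical datacenter name, and the util-linux `logger` is assumed.

```
# Send a one-off test message to the aggregator over 514/udp; it should
# appear in Graylog's search shortly afterward.
logger --udp --server sharingan.dc1.aninix.net --port 514 \
       --tag sharingan-test "connectivity check from $(hostname)"
```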
## Syslog-ng
We use a lot of services in the AniNIX ecosystem -- some create files, some pipe output, etc. Syslog-ng picks these up and forwards them to Graylog over 514/udp (syslog).
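A minimal syslog-ng sketch of that flow follows. The hostname, datacenter name, file path, and config location are illustrative placeholders, not our production values.

```
# Illustrative only -- adjust hostnames, paths, and the include location
# for your distribution's syslog-ng layout.
source s_journal { systemd-journal(); };
source s_files   { file("/var/log/nginx/error.log" follow-freq(1)); };
destination d_graylog {
    network("sharingan.dc1.aninix.net" transport("udp") port(514));
};
log { source(s_journal); source(s_files); destination(d_graylog); };
```

Graylog can then receive these messages on a standard Syslog UDP input listening on 514.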
## Journald
Arch Linux and most other systemd-based Linux distributions use journald to capture system logs.
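A couple of handy checks when verifying what the journal holds before it is shipped off-host; nothing here is specific to our setup.

```
# Follow new journal entries at warning priority or higher
journalctl -f -p warning
# See how much disk the local journal currently consumes
journalctl --disk-usage
```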
## Suricata
Suricata generates a file, [fast.log](file:///var/log/suricata/fast.log), containing alerts about network threats. We run this on the Core web front-end to detect incoming attacks against our applications.
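One way to pull those alerts into the same pipeline is a syslog-ng file source on the front-end. This is a sketch only; `d_graylog` refers to the illustrative destination from the Syslog-ng section above.

```
# fast.log lines are not syslog-formatted, so skip syslog parsing
source s_suricata {
    file("/var/log/suricata/fast.log" follow-freq(1) flags(no-parse));
};
log { source(s_suricata); destination(d_graylog); };
```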
## SSHGuard
## ClamAV
## OSSEC
TODO
## Monit
## Graylog
## Elasticsearch
Elasticsearch acts as Graylog's data backend.

We have seen issues where poor disk I/O or an unplanned shutdown leaves Elasticsearch with index corruption. To recover:
1. Stop the Elasticsearch service.
1. From `/usr/share/elasticsearch/lib`, you can use `java -cp lucene-core*.jar -ea:org.apache.lucene... org.apache.lucene.index.CheckIndex /usr/share/elasticsearch/data/nodes/0/indices/1nJc43t7TGuHmVR3Q5w9PA/1/index -verbose -exorcise` (on the right index) to exorcise the corrupted data.
1. Remove the corruption flags: `rm /usr/share/elasticsearch/data/nodes/0/indices/1nJc43t7TGuHmVR3Q5w9PA/1/index/corrupted_*`
1. Restart the Elasticsearch service.
1. Retry shard allocation:
```
curl -X POST 'http://127.0.0.1:9200/_cluster/reroute?retry_failed=true'
curl -X GET 'http://127.0.0.1:9200/_cluster/allocation/explain?pretty'
```
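Afterward, it is worth confirming the cluster actually recovered; these read-only checks assume the default local Elasticsearch port.

```
# Cluster health should return to green (or at least yellow) once shards reallocate
curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
# List indices with their health column to spot any that stayed red
curl -s 'http://127.0.0.1:9200/_cat/indices?v'
```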
## MongoDB

MongoDB holds the Graylog configuration for us.
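Since streams, dashboards, and users live in this database, it is worth a periodic dump. A minimal sketch, assuming the default `graylog` database name and an illustrative backup path:

```
# Dump the Graylog configuration database for safekeeping
mongodump --db graylog --out /var/backups/graylog-mongo-$(date +%F)
```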
# Available Clients
See [[WebServer#Available Clients|AniNIX::Webserver's client list]].
# Equivalents or Competition
Various monitoring vendors, both self-hosted and SaaS, are available, including Nagios, OP5, and PagerDuty. A variety of paid cybersecurity vendors are also on the market, particularly contract firms. Data aggregation via the Elastic Stack is also commonly used for a number of use cases. We chose Graylog because it unifies these functions for what we care about -- alarming on actionable events, whether they are malicious or accidental.

We use a variety of tools to feed into the Sharingan SIEM; these are described in the sections below.
# Network IDS: Suricata
We use Suricata to scan network data to identify threats.
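A hedged example of running it against a single interface; the interface name and config path are assumptions about the host rather than our documented invocation.

```
# Inspect live traffic on eth0 as a daemon; alerts land in fast.log as
# described in the Suricata section above.
suricata -c /etc/suricata/suricata.yaml -i eth0 -D
```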
## Rules engine: oinkmaster
# Network IPS: sshguard
# WAF: modsecurity
# Vulnerability management: lynis
# Host IDS: rkhunter