Threat and Anomaly Detection Rules

Our Research - Short Story

Our Research and Innovation Team constantly looks for new approaches and techniques that help with threat detection. This is how we came up with the idea of identifying patterns in the logs generated by a component by:

i) Exploiting all the known vulnerabilities

ii) Comparing the patterns in vulnerable versions and the fixed versions

iii) Finally writing a SIGMA rule for it

When we used these rules in our own network, we found them to be very effective at identifying threats. We did not stop there: a month later, one of our researchers came up with a new technique to identify suspicious activity generated by an open-source component used in our network.

We quickly validated the technique and turned it into an Anomaly Detection process for popular open-source components, developing an engine that identifies log messages which can indicate a potential threat, using the knowledge gained from researching thousands of CVEs.

Here again, we are creating SIGMA and Elastic detection rules for organizations to benefit from our research. Why are we doing this? Why Sigma? We love innovation, and we focus on investigating new approaches that enhance an organization's security posture while making detection rules available to everyone. So we chose Sigma rules, which can be converted for any other SIEM solution using tools like Uncoder and Sigmac. We sincerely thank Florian Roth for introducing SIGMA to the community and for providing valuable feedback. These detection rules may help security teams efficiently manage incident response and perform forensic investigation at various points in the network. They also help in preventing further damage and future re-occurrence. You can find more rules on our GitHub.

For detailed information about our research, visit our blogs: A New Approach to Accelerate Threat Detection and Threat Detection with SIGMA Rules.

Using Sigma and Splunk for Threat Detection

Assuming readers are already familiar with the basics, let's start with the first step:

Conversion of SIGMA Rule

The Sigma rules written by our research team can be found here. We have picked “CVE-2010-2266” to guide you through the process.
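For orientation, a Sigma rule of this kind follows the standard title/logsource/detection layout. The sketch below is only illustrative, reconstructed from the generated Splunk query shown further down; the actual published rule's field names, metadata, and structure may differ, and its fields are mapped to Splunk fields via the configuration file used in the next step.

title: Nginx CVE-2010-2266 exploitation attempt   # illustrative sketch, not the published rule
status: experimental
description: Detects encoded directory traversal requests and the related error message for CVE-2010-2266
logsource:
    category: webserver
    product: nginx
detection:
    traversal_request:
        c-uri|contains: '/%c0./%20'
        sc-status: 500
    error_keywords:
        - '1113: No mapping for the Unicode character exists in the target multi-byte code page'
    condition: traversal_request or error_keywords
level: high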

The following command can be used to convert the Sigma rule to a Splunk query using the Sigmac tool:

./sigmac -t splunk -c splunk_ngnix_access.yml CVE-2010-2266.yml

Where,

  1. -t : specifies the target backend, in this case “Splunk”

  2. -c : uses the given configuration file, i.e., splunk_ngnix_access.yml
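If the Sigmac converter is not already installed, it ships with the sigmatools Python package and can be installed via pip (assuming Python 3 is available):

pip install sigmatools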

Generated Splunk Query for CVE-2010-2266:

((uri_path="*/%c0./%20*" response_code="500") OR "1113: No mapping for the Unicode character exists in the target multi-byte code page")

To test the generated rule, the Nginx logs from this link can be used; they can then be indexed into a Splunk instance.
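One convenient way to index the downloaded file is Splunk's one-shot CLI upload; the file path, index, and sourcetype below are examples and should be adapted to your environment:

# one-time indexing of the downloaded sample (path and sourcetype are illustrative)
./splunk add oneshot /tmp/nginx_access.log -index main -sourcetype nginx:access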

After indexing the logs, the Splunk query can be used to detect events related to the vulnerability. The search result would be as follows:

Anomaly Detection with Sigma and Splunk

The Sigma rules can be converted and used in Splunk to create alerts and proactively monitor data in real time using an anomaly feed. Our Research Team generates the anomaly feed based on the log messages available in the source code of widely used components.

The Sigma rules for the generated anomaly feed, organized by severity level, can be found here. For this example, we have used the Critical anomaly rules for detection.

Convert the Sigma rule to a Splunk query
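The conversion uses the same Sigmac invocation shown earlier; the rule filename below is a placeholder for the critical anomaly rule set from our repository:

./sigmac -t splunk -c splunk_ngnix_access.yml nginx_critical_anomalies.yml

Running it yields a keyword-based query such as the one below.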

("http alloc large header buffer" OR "the \"*\" size must be equal to or greater than \"*\"" OR "http large header free:" OR "http large header alloc:" OR "http large header copy:" OR "client sent too long URI" OR "unsafe URI \"*\" was detected" OR "client sent invalid \"Destination\" header:" OR "SSL renegotiation *" OR "\"*\" mp4 atom too large:*" OR "client sent invalid chunked body" OR "state buffer overflow: * bytes required" OR "buffer overflow")

Upon executing the query, the results are as follows:

  • This rule can be saved as an “Alert” by configuring the “Trigger actions”, where we can choose from multiple action events to get notified based on the severity (a configuration-file equivalent is sketched below).

  • Once the search criteria are met, the following alerts will be generated, which can be viewed from the “Activity” tab.

The generated alerts can be viewed at “Activity -> Triggered Alerts”.
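For teams that prefer managing alerts as configuration rather than through the UI, the same alert can also be expressed as a savedsearches.conf stanza. The sketch below is minimal and illustrative: the stanza name, schedule, recipient, and abbreviated search string are placeholders, and the full keyword query from above should be used in practice.

# savedsearches.conf -- illustrative alert definition (search string abbreviated)
[Nginx Critical Anomaly]
search = ("http alloc large header buffer" OR "client sent too long URI" OR "buffer overflow")
cron_schedule = */5 * * * *
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
alert.severity = 5
actions = email
action.email.to = secops@example.com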

Similarly, for high severity anomalies, the Splunk query would be:

("http invalid header:" OR "client sent invalid header:" OR "client sent invalid userid cookie \"*\"" OR "client * sent invalid \"Host\" header \"*\", URL: \"*\"" OR "zero size buf" OR "zero size buf in writer" OR "\"*\" must be less than the size of all \"*\" minus one buffer" OR "client sent invalid \"Host\" header" OR "client sent invalid \"Content-Length\" header" OR "rt signal queue overflow recovered" OR "auth http server sent invalid response" OR "memcached sent invalid key in response \"*\" for key \"*\"" OR "memcached sent invalid trailer" OR "http charset invalid utf *" OR "client sent invalid \"Overwrite\" header:" OR "client sent invalid header line: \"*\"" OR "client sent too large request" OR "upstream sent invalid header" OR "\"*\" mp4 * atom too large" OR "escaped URI: \"*\"" OR "spdy state buffer overflow: * bytes required" OR "client intended to send body data larger than declared" OR "receive buffer overrun" OR "no * for ssl_client_verify" OR "request reference counter overflow while processing" OR "http2 preread buffer overflow" OR "client SSL certificate verify error: (*:*)" OR "client violated connection flow control: received DATA frame length *, available window *" OR "client violated flow control for stream *: received DATA frame length *, available window *" OR "client sent invalid :path header: \"*\"" OR "upstream sent too large http2 frame: *" OR "upstream sent headers frame with invalid length: *" OR "upstream sent invalid http2 table index: *" OR "upstream sent invalid http2 dynamic table size update: *" OR "upstream sent too large http2 header name length" OR "upstream sent too large http2 header value length" OR "header is too large" OR "client sent invalid :scheme header: \"*\"" OR "client sent invalid host in request line" OR "negative size buf in output t:* r:* f:* * *-* * *-*" OR "negative size buf in chain writer t:* r:* f:* * *-* * *-*" OR "negative size buf in writer t:* r:* f:* * *-* * *-*" OR "unexpected \"-\" symbol after \"*\" parameter in \"*\" SSI command" OR "too large mp4 * samples size in \"*\"" OR "too large chunk offset in \"*\"" OR "no OCSP responder URL in certificate" OR "empty host in OCSP responder in certificate")

Upon executing the generated query, the results are as follows:

The triggered alerts for high severity anomalies would be as follows:

We can view the results or edit the search query under the “Actions” column. Based on the alerts generated, further actions can be taken by the concerned security teams.

Anomaly Detection - Elastic Style

Have a look at Introduction to ELK Threat Detection, which is the basis of our research. We will be using the ELK SIEM to see if our detection rules can trigger any signals. For the demonstration, we will be using the Nginx component.

Importing our Rules

  • In Kibana, go to Security > SIEM ( http://localhost:5601/app/siem )

  • Go to Detections

  • Then click “Manage signal detection rules”

  • Now, “Import rule”

  • You can import our rules from here

To test the rules, please download this sample log file from here and index it.

How to Index Sample Log

# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.8/filebeat-module-nginx.html

- module: nginx
  # Access logs
  access:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/access.log*"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/home/cyber/Downloads/nginx_unit_test.log*"]

  # Ingress-nginx controller logs. This is disabled by default. It could be used in Kubernetes environments to parse ingress-nginx logs
  ingress_controller:
    enabled: false

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths: ["/var/log/nginx/error.log*"]
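With this module configuration in place, the logs can be shipped using the standard Filebeat workflow (the exact invocation depends on how Filebeat was installed):

# enable the Nginx module, load the index templates and ingest pipelines, then start shipping logs
filebeat modules enable nginx
filebeat setup
filebeat -e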

Once you have pushed your logs into Elasticsearch through the Filebeat Nginx module, you can make the relevant changes to the rules.

Changes Required in Rules:

  • Our queries work on two fields:

  • event.dataset, which is nginx.error

  • message, which contains a keyword identified by our research

  • By default, the rules work on the filebeat-* index; in case you have multiple indices or other index names, you can change the query as shown below.

The default exported rule:

{"actions":[],"created_at":"2020-08-04T09:41:32.293Z","updated_at":"2020-08-04T09:59:32.580Z","created_by":"elastic","description":"Detecting suspicious error log events which may lead to potential security threats","enabled":false,"false_positives":[],"filters":[],"from":"now-360s","id":"0eed7a60-d914-4024-ac64-99940c356385","immutable":false,"index":["filebeat-*"],"interval":"5m","rule_id":"0779d3e7-9d79-4e37-99ae-c2d815254591","language":"kuery","output_index":".siem-signals-default","max_signals":100,"risk_score":19,"name":"Anomaly Nginx_Low","query":"event.dataset : \"nginx.error\" and message : \"peer started SSL renegotiation\"","references":[],"meta":{"from":"1m","kibana_siem_app_url":"http://localhost:5601/app/siem"},"severity":"low","updated_by":"elastic","tags":["nginx"],"to":"now","type":"query","threat":[],"throttle":"no_actions","version":2} {"exported_count":1,"missing_rules":[],"missing_rules_count":0}

For Multiple Indices:

{"actions":[],"created_at":"2020-08-04T09:41:32.293Z","updated_at":"2020-08-04T09:59:32.580Z","created_by":"elastic","description":"Detecting suspicious error log events which may lead to potential security threats","enabled":false,"false_positives":[],"filters":[],"from":"now-360s","id":"0eed7a60-d914-4024-ac64-99940c356385","immutable":false,"index":["your-index*”, “filebeat-*”],"interval":"5m","rule_id":"0779d3e7-9d79-4e37-99ae-c2d815254591","language":"kuery","output_index":".siem-signals-default","max_signals":100,"risk_score":19,"name":"Anomaly Nginx_Low","query":"event.dataset : \"nginx.error\" and message : \"peer started SSL renegotiation\"","references":[],"meta":{"from":"1m","kibana_siem_app_url":"http://localhost:5601/app/siem"},"severity":"low","updated_by":"elastic","tags":["nginx"],"to":"now","type":"query","threat":[],"throttle":"no_actions","version":2} {"exported_count":1,"missing_rules":[],"missing_rules_count":0}

However, instead of event.dataset you can also use the log.file.path field, which contains the path of your Nginx error log file, for example /var/log/nginx/error.log.
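For instance, the query of the low-severity rule shown above could be rewritten along these lines (adjust the path to match your environment):

log.file.path : "/var/log/nginx/error.log" and message : "peer started SSL renegotiation"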

Once the rule is successfully imported, you can activate it and receive alerts accordingly.

Generated Signals

When the rule is triggered, signals are generated as below

Investigating Signal

On clicking “Investigate in timeline”, we should see the following:

Risk Scores and Severity

Each signal comes with a risk score, a severity level, and other crucial information. To know more details about a signal, please visit http://localhost:5601/app/siem#/overview

The Summary of the Above Signals:

  • They are stacked on the basis of Threat Tactic name, which in the above screenshot is Initial Access.

  • Our risk score for the critical rule set is 76 and for the low rule set it is 19, though the scores may vary at times.

Thus, depending on the generated signals, one can investigate further for possible attacks and safeguard their systems.

Conclusion

This new approach to SIEM Threat Detection dramatically reduces the overhead associated with traditional development of correlation rules and searches. Anomaly detection can be an effective means to discover strange activity in large and complex datasets that are crucial for maintaining smooth and secure operations.
