Implementing Elastic Cloud and using Elastic Security

Whilst I am a big fan of free, open-source solutions, I am going to bend that preference a little here and use the Elastic Cloud solution as a SIEM.

Note: I will be updating this article fairly regularly with new functions and features within Elastic over time. If this is a topic you are interested in, consider subscribing to the mailing list for Elastic-related content.

Elastic offers a cloud-based solution which allows a very modest, lightweight SIEM to be implemented for around $0.05 AUD/hour (60GB of index storage), although this does not include high-availability zones or anything fancy like that (you pay considerably more for those).

This article describes the implementation, deployment and configuration of Elastic SIEM and the usage of Elastic Security (including more advanced functions such as Threat Detection and Webhooks).

Elastic Stack in the Cloud

First of all we are going to create an account at cloud.elastic.co so we can then create an Elastic deployment.

For this deployment I will be deploying a single-zone instance (this is not good practice for disaster recovery purposes).

Start by creating an account on cloud.elastic.co, then select Create Deployment. You will be offered a selection of pre-configured solutions. For mine I will be deploying Elastic Security, with Google as the cloud provider.

Preconfigured Elastic Platform selections

I am not going to be configuring monitoring or IP filtering for this installation, but you should consider them if you are concerned about your infrastructure being attacked or if you require health monitoring to be implemented.

I would also recommend customising your deployment, otherwise you may be up for some rather expensive bills for a test environment. This deployment will be rather small, with the notable elements being:

  • Hot data and Content tier (2GB RAM/60GB Storage)
  • 1x Kibana instance (1GB RAM)
  • 1x Machine Learning instance (1GB RAM)
  • 1x APM Instance (512MB RAM)

Due to how Elastic charges for these items, we are only paying for the Elasticsearch instances (the others are included for free) at a rate of $0.05 AUD/hour.

Summary of charges for Elastic Cloud

Deploying the Platform

Once you have kicked off the deployment, it will take a couple of minutes to create the necessary services, after which you will be able to access Kibana and continue your configuration.

Deploying Elastic Platform

Eventually you will see that the Elastic platform has been deployed successfully, and you will be able to check the status of the nodes within the deployment.

Platform status post-deployment

Configuring Kibana

There is actually not much to configure within Kibana out of the box – most of what I will be doing will focus on the Elastic Agent ingest component which will be discussed next.

Deploying Elastic Agents

Deploying the agents is fairly easy. From the Management menu > Fleet you will see a summary / dashboard for agents. From here we can create policies which we can then assign endpoints to either at the time of enrolment, or we can move them across after they have been initially enrolled.

Elastic Agents Fleet Configuration screen

Adding Agents

Enrolling agents is very simple: it really just involves downloading the Elastic Agent and then enrolling it via the command line. This creates an association between the Elastic Agent and the Elastic Platform, and depending on which integrations have been associated with your policy, those logs will start flowing through shortly thereafter.

Enrolling agents to the Elastic Deployment
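
On a Linux host, the enrolment typically looks something like the snippet below. The agent version, Fleet URL and enrolment token are placeholders; the Fleet UI shows the exact command and token to use when you add an agent.

  # Download and unpack the Elastic Agent (version shown here is illustrative)
  curl -L -O https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.0-linux-x86_64.tar.gz
  tar xzvf elastic-agent-7.17.0-linux-x86_64.tar.gz
  cd elastic-agent-7.17.0-linux-x86_64

  # Install the agent as a service and enrol it against the Fleet server using
  # the enrolment token generated for the target policy in Kibana
  sudo ./elastic-agent install \
    --url=https://<your-fleet-server-endpoint> \
    --enrollment-token=<enrolment-token-from-fleet-ui>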

Creating Policies

Policies are version numbered (much as documents are versioned in Elastic when they are updated), so whenever a change is made to a policy, the version number is incremented. This can be used to determine whether an endpoint agent has fallen out of step with the policy held by the Elastic deployment.

Assigned agents to policies

You can see here that I have created a few policies, and I have assigned an agent to the Dionaea Honeypots policy for the purposes of capturing log data from those assigned assets.

Within those policies, we can further compartmentalise the logging into separate namespaces. This is useful for creating segregated searches and alerting, and also provides an opportunity to apply specific Index Lifecycle Management policies to those namespaces.
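
Because Elastic Agent writes into data streams named after the data type, dataset and namespace (for example logs-<dataset>-<namespace>), it is easy to confirm what a given namespace contains. A minimal check against the Elasticsearch API, assuming a namespace called dionaea, might look like this:

  # List the data streams that agents are writing into the 'dionaea' namespace
  # (the namespace name is an assumption for this example)
  curl -u elastic:<password> \
    "https://<your-elasticsearch-endpoint>/_data_stream/logs-*-dionaea?pretty"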

Policies can also have multiple integrations added to them to harvest logs or metrics from various services, translate them to the Elastic Common Schema, and then transmit them through to the Elastic platform. In my example below for Dionaea Honeypots I have added Elastic Agent, Elastic Endpoint Security, IPTables, Linux Metrics and System Metrics.

Integrations added to a policy

Adding Integrations

For this example I will be adding a custom log integration to the Dionaea policy to capture the dionaea.log file which is being appended to in my honeypots. This allows the endpoint agents registered to this policy to look for the file and begin harvesting its content for indexing in Elasticsearch.

Custom logs configured for Dionaea.log source format

For more information on what Dionaea is, and why this log file is of particular interest to me, head over to the Dionaea page here.

Now that the Elastic Agent is harvesting that log file, its contents are being ingested into an index specific to that log source. This makes writing specific detection queries considerably easier, and also means we can adjust how the log is structured into that index in a more controlled manner (i.e. we can now convert the log into an ECS-compliant format).

The dionaea.log file is now being processed into the Elastic Deployment via Elastic Agent.
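
As a sketch of what that restructuring step can look like, the snippet below creates a simple ingest pipeline that parses a timestamp and message out of each dionaea.log line. The pipeline name, grok pattern and field names are illustrative only and would need to match your actual log format:

  # Illustrative ingest pipeline that reshapes raw dionaea.log lines towards ECS.
  # The grok pattern assumes lines that start with "2021-06-01 10:15:30 ...".
  curl -u elastic:<password> -X PUT \
    "https://<your-elasticsearch-endpoint>/_ingest/pipeline/dionaea-to-ecs" \
    -H "Content-Type: application/json" \
    -d '{
      "description": "Parse dionaea.log lines into ECS-style fields",
      "processors": [
        { "grok": {
            "field": "message",
            "patterns": ["%{TIMESTAMP_ISO8601:dionaea.timestamp} %{GREEDYDATA:dionaea.message}"]
        } },
        { "date": {
            "field": "dionaea.timestamp",
            "target_field": "@timestamp",
            "formats": ["yyyy-MM-dd HH:mm:ss", "ISO8601"]
        } }
      ]
    }'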

Capturing data from other sources

Now we can start looking for logs in other places – this is an area Elastic has put quite a bit of work into.

From a logging perspective there are quite a few more sources which can be ingested into the Elastic platform, with those logs being restructured and enriched to be compatible with the Elastic Common Schema. This makes threat hunting much easier in the subsequent sections.

Logging sources compatible with Elastic platform

In addition, there is a slew of security products which have integrations written for them, allowing their logs to be ingested into Elastic as well.

Collection of security appliances compatible with Elastic Common Schema

Threat Detection Rules

Threat Detection rules are preconfigured or custom developed ‘signatures’ which trigger when their configured conditions are found within your compatible indexes.
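
Rules can be managed through the Kibana UI, but they can also be created programmatically. A minimal sketch of a custom query rule via Kibana's detection engine API is shown below; the index pattern, query, scoring and schedule are purely illustrative:

  # Illustrative: create a custom query detection rule via the Kibana API.
  # Requires a user with the appropriate Kibana / Security privileges.
  curl -u elastic:<password> -X POST \
    "https://<your-kibana-endpoint>/api/detection_engine/rules" \
    -H "kbn-xsrf: true" -H "Content-Type: application/json" \
    -d '{
      "name": "Dionaea - inbound SMB connection observed",
      "description": "Flags SMB connections recorded by the Dionaea honeypot",
      "type": "query",
      "index": ["logs-*-dionaea"],
      "query": "event.dataset : *dionaea* and destination.port : 445",
      "risk_score": 47,
      "severity": "medium",
      "interval": "5m",
      "from": "now-6m",
      "enabled": true
    }'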

These rules are bound to connectors, which are fed the particulars of a detection for further action.

At present, Elastic Cloud allows connectors to be specified for Jira and ServiceNow. However, by using the Elastic native Alerts function, webhooks may also be configured, although this requires slightly more technical effort.

External Connectors

As mentioned before, Detection Rules and Alerts may send their detection information to external connectors. These connectors may be case management systems, security orchestration and automated response (SOAR) systems, or any other automated / orchestrated system which is compatible or appropriate.

Webhooks

Webhooks are configurable from the Alerts section of Kibana. Alerts may be sent as POST or PUT requests, with their respective payloads being relatively configurable as well.

This is where the slightly more technical requirements come in, because you basically need to define what you want in your webhook transmission.

As a rule of thumb, I try to keep my webhook structure relatively basic to make workflows in SOAR platforms a bit more configurable and flexible.
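
As an example of what that definition can look like, below is the sort of JSON body template I would attach to a detection rule's webhook action. The exact mustache variables available depend on the rule type and stack version, so treat the field names and variables here as indicative rather than definitive:

  {
    "rule_name": "{{context.rule.name}}",
    "severity": "{{context.rule.severity}}",
    "risk_score": "{{context.rule.risk_score}}",
    "signal_count": "{{state.signals_count}}",
    "details_link": "{{{context.results_link}}}"
  }

Keeping the structure flat like this makes it much easier for a receiving SOAR platform to map the fields without having to unpick nested objects.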

Jira

Alerts which are transmitted to Jira are fairly user-friendly, and ideally you would send these alerts to Jira Service Desk for handling.

Fortunately you can use Atlassian’s free tier and integrate Elastic’s webhooks with Jira. In the example below, I have created an alert for high CPU usage, the content of which is fairly flexible to configure.

Jira Service Desk alert generated by Elastic (mobile interface)

Index Lifecycle Management

Index Lifecycle Management (ILM) is in effect a set of policies applied to indexes which govern how those indexes are handled in terms of system resources over time.

Documents in indexes usually start out in Hot storage, whereby they live in RAM and on disk to make read / write access faster.

Documents can be consolidated to disk by transitioning them over time to Cold states, and then eventually Frozen states on large and slow disk arrays.

Eventually these indexes could be Deleted as determined by the Delete policy qualifiers.
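
As a concrete sketch, the policy below rolls an index over while it is hot, demotes it to the cold tier after 30 days and deletes it after 90. The policy name and timings are arbitrary examples, and a frozen phase would additionally require a searchable snapshot repository:

  # Illustrative ILM policy: roll over while hot, move to the cold tier after
  # 30 days, delete after 90 days. Name and timings are arbitrary examples.
  curl -u elastic:<password> -X PUT \
    "https://<your-elasticsearch-endpoint>/_ilm/policy/dionaea-logs-policy" \
    -H "Content-Type: application/json" \
    -d '{
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": { "max_primary_shard_size": "10gb", "max_age": "7d" }
            }
          },
          "cold": {
            "min_age": "30d",
            "actions": { "set_priority": { "priority": 0 } }
          },
          "delete": {
            "min_age": "90d",
            "actions": { "delete": {} }
          }
        }
      }
    }'

This policy could then be assigned to the index template or data stream behind the dionaea namespace discussed earlier.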
