I have been an on-and-off user of TheHive for nearly a year now, and it is encouraging to see the development and release of TheHive4 (even if still in pre-release). In this post I will walk through the deployment and configuration of TheHive4 and the migration from TheHive, and cover the improvements that have been implemented in this release.

TheHive and Cortex, if you do not have any experience with them yet, are SOAR (Security Orchestration, Automation and Response) platforms intended to reduce the manual ("mandrolic", if you will, even though that is not really a word) work involved in precursor and indicator analysis.

The previous (and still current) versions of TheHive and Cortex are dependent on Elasticsearch 5.x, which for obvious reasons will become an issue as successive Elasticsearch versions are released.
TheHive functions as the front-end incident analysis and reporting platform, whereas Cortex functions as the analysis backend: it ships with a collection of analysers and responders that perform actions on behalf of the analyst from within TheHive.

Many people will have deployed this in a configuration whereby TheHive and Cortex share the same Elasticsearch instance, which may create problems down the line.

TheHive and Cortex sharing an ES 5.x instance

Obviously, you can see a potential issue with moving from the current TheHive Elasticsearch instance to anything else, especially where cases have already been raised, investigated and closed, and still need to be referred back to.

First of all, let's create a new TheHive4 node, separate from the existing appliance, and deploy TheHive4 to it. The end state configuration should look something like the below:

Separating TheHive4 from Cortex & Elasticsearch 5.x

System Specifications

For this one, I will be deploying TheHive4 to an Ubuntu Server 18.04 Virtual Machine located on the same subnet as an existing TheHive installation. This guide does not take into consideration your own network topology or security risks, so I encourage you to assess and adapt as required.

The end state should be a separate installation of TheHive4 integrating to the existing Cortex instance, with all the previous cases being retained for ongoing use.

TheHive-Project make some recommendations for system resources for TheHive4 (as below). In this case I will be the sole user of this instance, so I will deploy accordingly.

Number of users | CPU | RAM (GB)
< 3             | 2   | 4-8
< 10            | 4   | 8-16
< 20            | 8   | 16-32
Recommended system resources for TheHive4
Allocated system resources for TheHive4 test instance

Installing Cassandra

Let’s get to work on the dependencies for TheHive4:

sudo apt-get install -y openjdk-8-jre-headless
echo JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64" | sudo tee -a /etc/environment
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
curl -fsSL https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
echo "deb http://www.apache.org/dist/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
sudo apt update
sudo apt install cassandra -y
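
Before moving on, it is worth a quick sanity check that the Java 8 runtime is in place and JAVA_HOME resolves (nothing TheHive-specific here):

java -version
echo $JAVA_HOME

Note that the /etc/environment entry only takes effect on a new login session; the export above covers the current shell.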

Run the command cqlsh then update the cluster name:

UPDATE system.local SET cluster_name = 'thp' where key='local';
exit
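
If you want to confirm the rename took effect, cqlsh can also be run non-interactively (the -e flag simply executes the statement and exits):

cqlsh -e "SELECT cluster_name FROM system.local;"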

We then need to run a flush using nodetool so that the change is persisted:

nodetool flush

Update the /etc/cassandra/cassandra.yaml file:

cluster_name: 'thp'
listen_address: 'localhost' # address for nodes
rpc_address: 'localhost' # address for clients
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: 'localhost' # self for the first node
data_file_directories:
  - '/var/lib/cassandra/data'
commitlog_directory: '/var/lib/cassandra/commitlog'
saved_caches_directory: '/var/lib/cassandra/saved_caches'
hints_directory: '/var/lib/cassandra/hints'

Then restart the service so the new configuration takes effect, enable it at boot, and create the directory TheHive4 will use for file storage:

sudo systemctl restart cassandra
sudo systemctl enable cassandra
sudo mkdir -p /opt/thp_data/files/thehive
# the thehive user and group are created by the thehive4 package in the next
# section, so defer this chown until TheHive4 has been installed
sudo chown -R thehive:thehive /opt/thp_data/files/thehive

You should see Cassandra binding to port 7000 (inter-node communication) on the TheHive4 node:

netstat -an | grep 7000
Cassandra is listening on port 7000
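
For a check at the Cassandra level rather than the socket level, the tooling that ships with the cassandra package can confirm the node is healthy (the local node should report as UN, i.e. Up/Normal):

nodetool status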

Installing TheHive4

Let’s start by adding the PGP key, and then adding the repositories for installing from:

curl https://raw.githubusercontent.com/TheHive-Project/TheHive/master/PGP-PUBLIC-KEY | sudo apt-key add -
echo 'deb https://deb.thehive-project.org beta main' | sudo tee -a /etc/apt/sources.list.d/thehive-project.list
sudo apt-get update
sudo apt-get install thehive4 -y
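
If you want to confirm which repository and version the thehive4 package was resolved from, a quick read-only check:

apt-cache policy thehive4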

We now need to tell TheHive4 to use the Cassandra database we just created above, and then define the directory where files associated with case objects will be stored.

Edit /etc/thehive/application.conf to reflect the following:

db.janusgraph {
  storage {
    backend: cql
    hostname: ["127.0.0.1"]
    cql {
      cluster-name: thp
      keyspace: thehive
    }
  }
}

We also need to update the storage section of the application.conf to reflect the local storage option defined for TheHive4:

storage {
  provider: localfs
  localfs.directory: /opt/thp_data/files/thehive
}
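
Since the thehive user and group now exist (the thehive4 package creates them), this is also the point to run the chown that was deferred in the Cassandra section and confirm the storage path is writable; a quick sketch:

sudo chown -R thehive:thehive /opt/thp_data/files/thehive
sudo -u thehive test -w /opt/thp_data/files/thehive && echo "storage path is writable"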

With these modifications complete, we can start TheHive4 and then merge our existing cases across from TheHive.

sudo systemctl start thehive
sudo systemctl enable thehive
TheHive4 has been successfully started
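
TheHive4 can take a short while on first start while it initialises its schema in Cassandra, so it is worth confirming the service has settled and the web UI is bound to its default port 9000 before moving on:

sudo journalctl -u thehive --no-pager | tail -n 20
netstat -an | grep 9000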

Migrating cases from TheHive to TheHive4

Fortunately, TheHive4 comes with a migration tool that will move the Elasticsearch 5.x case data across from ES to Cassandra. This is particularly important where you have live case data, or historical information retained within your existing installation.

There is a small catch, however: TheHive4 uses an email address schema for logging in, and this requirement also affects how your case data is transferred into the new environment.
Essentially, you need to define what the username suffix (domain) will be for the accounts and cases being transferred from Elasticsearch to Cassandra.

However, once the transfer is complete, there are methods available to regain access to these older cases.

To merge the data from TheHive into TheHive4, you need to be able to reach TheHive's Elasticsearch instance from TheHive4's host. You may need to adjust Elasticsearch on the TheHive host to expose port 9200 on a network-reachable interface, and you can confirm connectivity with curl:

curl "http://THEHIVE_IP_ADDR:9200?pretty"
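
If Elasticsearch on the existing TheHive host is only listening on localhost, one way to make it reachable is to set the bind address in its configuration and restart the service. A minimal sketch, assuming the stock Debian/Ubuntu paths for Elasticsearch 5.x; treat this as temporary and only do it on a trusted network:

# /etc/elasticsearch/elasticsearch.yml on the existing TheHive host
network.host: 0.0.0.0   # or the specific interface address reachable from the TheHive4 node

sudo systemctl restart elasticsearch

Be aware that once Elasticsearch 5.x binds to a non-loopback address it enforces its bootstrap checks, so the node may refuse to start until those are satisfied; revert the setting once the migration is complete.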

You will need to update the /etc/thehive/application.conf of the TheHive4 host, to set the default user domain value:

auth.defaultUserDomain: "mydomain.com"

And then run the following command from TheHive4’s host:

/opt/thehive/bin/migration \
  --output /etc/thehive/application.conf \
  --main-organisation OrganisationName \
  --es-uri http://THEHIVE_IP_ADDR:9200

The --main-organisation flag defines the name of the new organisation within TheHive4; all users defined in the old installation will be created under that organisation, with the defaultUserDomain suffixed to their login names.

Note: depending on the amount of data, this could take anywhere from a few minutes to several hours. It is best to avoid writing new data to TheHive or TheHive4 while this process is occurring.
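
Because the run can be lengthy, it is also worth launching it in a way that survives a dropped SSH session. A minimal sketch using nohup with the same arguments as above (the log path is just an example):

nohup /opt/thehive/bin/migration \
  --output /etc/thehive/application.conf \
  --main-organisation OrganisationName \
  --es-uri http://THEHIVE_IP_ADDR:9200 \
  > /tmp/thehive-migration.log 2>&1 &
tail -f /tmp/thehive-migration.log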

Once complete, you can connect to TheHive4 and finish making the imported cases visible to the new installation.

Logging into TheHive4 for the first time

Your first login to TheHive4 should be with the default administrator account; for obvious reasons, its password should be changed on first login:

Default administrator account: admin@thehive.local/secret

Sharing cases with organisations within TheHive4

We can then make the old cases visible to the NewSOC organisation, and the accounts therein, by linking the NewSOC organisation to the OldSOC organisation.

Linking organisations within TheHive4

This should allow the SOC operators within NewSOC to interact with the old cases from TheHive; alternatively, new accounts may be created under the OldSOC organisation and operations resumed from there.

More on customising TheHive4?

I will be writing more on TheHive4 over the coming weeks whilst I integrate it into my test bed environment. In particular, I will have ElastAlert generate alerts in TheHive for an operator to enrich.

Keep an eye out for Part 2 coming in the next few weeks.
