Deploying (and using) TheHive4 [Part 1]

  1. Deploying (and using) TheHive4 [Part 1]
  2. Building TheHive4 (4.0.5) and configuring MISP, Cortex and Webhooks.
  3. Building the Assemblyline Analyzer for TheHive’s Cortex.
  4. TheHive 4.1.0 Deployment and Integration with MISP

I have been an off and on user of TheHive for nearly a year now, and it is encouraging to see the development and release of TheHive4 (even if in pre-release). In this post I will walk through the deployment, configuration and migration of TheHive to TheHive4, and what improvements have been implemented into this release.

Note: I have since written an updated installation procedure for TheHive4, which can be found here. Along with guidance on integration for MISP and Webhooks.

TheHive and Cortex, if you do not have any experience with them yet, are SIRP (Security Incident Response Platform) tools intended to reduce the manual work involved in precursor and indicator analysis.

The previous / current versions of TheHive and Cortex are dependent on Elasticsearch 5.x, which may become an issue as successive Elasticsearch versions are released.
TheHive functions as the front-end incident analysis and reporting platform, whereas Cortex functions as the analysis backend, shipping with a collection of analysers and responders that perform actions on behalf of the analyst from within TheHive.

Most people may have deployed this in a configuration whereby TheHive and Cortex share the same Elasticsearch instance – which may create problems down the line.

TheHive and Cortex sharing an ES 5.x instance

Obviously, you can see a potential issue here with moving from a current TheHive Elasticsearch instance, to anything else – especially where cases have already been raised, closed, investigated etc, and those cases still need to be referred back to.

First of all, let's create a new TheHive4 node, separate from the existing appliance, and deploy TheHive4 to it. The end-state configuration should look something like the below:

Separating TheHive4 from Cortex & Elasticsearch 5.x

System Specifications

For this one, I will be deploying TheHive4 to an Ubuntu Server 18.04 Virtual Machine located on the same subnet as an existing TheHive installation. This guide does not take into consideration your own network topology or security risks, so I encourage you to assess and adapt as required.

The end state should be a separate installation of TheHive4 integrating to the existing Cortex instance, with all the previous cases being retained for ongoing use.

TheHive-Project make some recommendations for system resources for TheHive4 (as below). In this case I will be the sole user of this instance, so I will deploy accordingly.

Number of users   CPU cores   RAM (GB)
< 3               2           4-8
< 10              4           8-16
< 20              8           16-32
Recommended system resources for TheHive4
Allocated system resources for TheHive4 test instance

Installing Cassandra

Let’s get to work on the dependencies for TheHive4:

sudo apt-get install -y openjdk-8-jre-headless
echo 'JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"' | sudo tee -a /etc/environment
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
curl -fsSL | sudo apt-key add -
echo "deb 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
sudo apt update -y
sudo apt install cassandra -y

Run the command cqlsh then update the cluster name:

UPDATE system.local SET cluster_name = 'thp' where key='local';

We then need to run a flush using nodetool:

nodetool flush

Update the /etc/cassandra/cassandra.yaml file:

cluster_name: 'thp'
listen_address: 'localhost' # address for nodes
rpc_address: 'localhost' # address for clients
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: 'localhost' # self for the first node
data_file_directories:
  - '/var/lib/cassandra/data'
commitlog_directory: '/var/lib/cassandra/commitlog'
saved_caches_directory: '/var/lib/cassandra/saved_caches'
hints_directory: '/var/lib/cassandra/hints'
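Before restarting Cassandra, it can be worth sanity-checking the edit. A minimal sketch (the helper name is my own invention):

```shell
# Print the cluster_name line from a cassandra.yaml so you can eyeball
# that the edit above took effect. The function name is illustrative.
check_cassandra_yaml() {
  grep -E '^cluster_name:' "$1"
}

# check_cassandra_yaml /etc/cassandra/cassandra.yaml
# expected: cluster_name: 'thp'
```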

Then start the service:

sudo systemctl start cassandra
sudo systemctl enable cassandra
sudo mkdir -p /opt/thp_data/files/thehive
# run this chown after installing the thehive4 package below, which creates the thehive user:
sudo chown -R thehive:thehive /opt/thp_data/files/thehive

You should see cassandra binding to port 7000 on the TheHive4 node:

netstat -an | grep 7000
Cassandra is operational on port 7000
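If you script this deployment, the netstat check above can be automated by polling the port until Cassandra is up. A minimal bash sketch (the function name is mine; it relies on bash's /dev/tcp feature):

```shell
#!/usr/bin/env bash
# Poll a TCP port until it accepts connections, or give up after a
# number of one-second attempts. Returns 0 once the port is open.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30} i
  for ((i = 0; i < tries; i++)); do
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# wait_for_port localhost 7000 || echo "Cassandra did not come up in time"
```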

Installing TheHive4

Let’s start by adding the PGP key, and then adding the repositories for installing from:

curl | sudo apt-key add -
echo 'deb beta main' | sudo tee -a /etc/apt/sources.list.d/thehive-project.list
sudo apt-get update
sudo apt-get install thehive4 -y

We now need to tell TheHive4 to use the Cassandra database we just created above, and then define the directory where data will be stored when associated with case objects.

Edit /etc/thehive/application.conf to reflect the following:

db.janusgraph {
  storage {
    backend: cql
    hostname: ["127.0.0.1"]
    cql {
      cluster-name: thp
      keyspace: thehive
    }
  }
}

We also need to update the storage section of the application.conf to reflect the local storage option defined for TheHive4:

storage {
  provider: localfs
  localfs {
    location: /opt/thp_data/files/thehive
  }
}

With these modifications complete, we can move to starting TheHive4 and then merging our existing cases from TheHive.

sudo systemctl start thehive
sudo systemctl enable thehive
TheHive4 has been successfully started

Migrating cases from TheHive to TheHive4

Fortunately, TheHive4 comes with a migration tool which will migrate the Elasticsearch 5.x case data from ES to Cassandra. This is particularly important where you have live case data, or historical information retained within your existing installation.

There is a small catch, however: TheHive4 uses an email-address schema for logging in, and this requirement also affects how your case data is transferred into the environment.
Essentially, you need to define what the username suffix will be for cases being transferred from Elasticsearch to Cassandra.

However, once the transfer is complete, there are methods available to regain access to these older cases.

To merge the data from TheHive to TheHive4, you need to be able to access the Elasticsearch instance of TheHive from TheHive4’s host. You may need to adjust TheHive to expose 9200 to your network interface, and you can confirm connectivity through the curl command.

curl http://THEHIVE_IP_ADDR:9200?pretty
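If you want to script this connectivity check, the cluster name can be pulled out of that JSON response without a jq dependency. A minimal sketch (the helper name is my own):

```shell
# Reads the Elasticsearch root-endpoint JSON on stdin and prints the
# cluster_name value, using only grep and cut.
es_cluster_name() {
  grep -o '"cluster_name" *: *"[^"]*"' | head -n1 | cut -d'"' -f4
}

# Usage against the live node (THEHIVE_IP_ADDR is a placeholder):
#   curl -s http://THEHIVE_IP_ADDR:9200 | es_cluster_name
```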

You will need to update the /etc/thehive/application.conf of the TheHive4 host, to set the default user domain value:

auth.defaultUserDomain: ""

And then run the following command from TheHive4’s host:

/opt/thehive/bin/migration \
  --output /etc/thehive/application.conf \
  --main-organisation OrganisationName \
  --es-uri http://THEHIVE_IP_ADDR:9200

The main-organisation flag is used to define what the new organisation within TheHive4 will appear as, and all users defined within the installation will be created under that organisation, with the defaultUserDomain suffixed to their login names.
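To illustrate the suffixing (with a made-up domain and username): with auth.defaultUserDomain set to mydomain.local, a legacy TheHive user jsmith would be imported as jsmith@mydomain.local. In shell terms:

```shell
# Purely illustrative: how a legacy username gains the configured
# defaultUserDomain as its login suffix. Both values are examples.
suffix_user() {
  echo "${1}@${2}"
}

suffix_user jsmith mydomain.local   # prints: jsmith@mydomain.local
```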

Note: Depending on the amount of data, this could take anywhere from a few minutes to several hours, so it is suggested to avoid writing new data to TheHive or TheHive4 while this process is occurring.

Once complete, you can connect to TheHive4 and make the imported cases visible to the new installation.

Logging into TheHive4 for the first time

Your first login to TheHive4 should be through the default administrator’s account – for obvious reasons, this should be changed on your first login:

Default administrator account: admin@thehive.local / secret

Sharing cases with organisations within TheHive4

We can then make the imported cases visible to our new organisation, and the accounts therein, by linking the new organisation to the one created during the migration.

Linking organisations within TheHive4

This should allow the SOC operators within the new organisation to interact with old cases from TheHive; alternatively, new accounts may be created under the imported organisation and operations resumed from there.

More on customising TheHive4?

I will be writing more on TheHive4 over the coming weeks whilst I integrate TheHive4 into my test bed environment. Of note, I will be having ElastAlert generate alerts for TheHive to be enriched by an operator.

Keep an eye out for Part 2 coming in the next few weeks.



Cristian Posted on 4:57 pm - August 26, 2021

Good morning, please could you help me with a question? I installed Cassandra and TheHive 4, and I can log in via the web; my organisation and users are created, but I don't know how to send alerts to TheHive.
I want to know in which folder I could copy logs for TheHive to use and process, until it can be integrated with the mail.
My idea is to inject logs to see how TheHive indexes them and, after I understand that, try to integrate the mail into TheHive.

    McHughSecurity Posted on 12:47 pm - September 2, 2021

    Hi Cristian,

    TheHive is not a log aggregator or a SIEM. Instead, alerts are created within TheHive for an analyst to perform a triage assessment.

    Alerts are created through TheHive’s API endpoint. This can be done through Curl or something of the like, or through one of the many abstractions available to the API.
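    As a minimal, hypothetical sketch (the helper name, alert values and the API key are all placeholders), an alert can be raised against TheHive4's /api/alert endpoint with curl like so:

```shell
# Build the minimal alert payload TheHive expects: title, description,
# type, source and a unique sourceRef. All values here are examples.
make_alert_json() {
  printf '{"title":"%s","description":"%s","type":"external","source":"logs","sourceRef":"%s"}' \
    "$1" "$2" "$3"
}

make_alert_json "Suspicious login" "Multiple failed logins from 10.0.0.5" "ref-0001"

# Usage against a live instance (THEHIVE_URL and API_KEY are placeholders):
#   make_alert_json "Suspicious login" "Multiple failed logins from 10.0.0.5" "ref-0001" |
#     curl -s -XPOST "$THEHIVE_URL/api/alert" \
#       -H "Authorization: Bearer $API_KEY" \
#       -H "Content-Type: application/json" \
#       -d @-
```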

    A good example of that integration would be TheHive4py which would allow you to programmatically generate alerts based on conditions detected within your logs.
    See this link for an example of how to use TheHive4py to do this.

    I hope this helps you out.

