Building TheHive4 (4.0.5) and configuring MISP, Cortex and Webhooks.

  1. Deploying (and using) TheHive4 [Part 1]
  2. Building TheHive4 (4.0.5) and configuring MISP, Cortex and Webhooks.
  3. Building the Assemblyline Analyzer for TheHive’s Cortex.
  4. TheHive 4.1.0 Deployment and Integration with MISP

Since my last write-up on TheHive, there have been some significant changes and updates to the platform. In this post I will walk through the installation and deployment of TheHive4 (4.0.5), connecting it to MISP and Cortex, and enabling Webhooks.

Warning!

If you use the instructions below, you will be installing TheHive 4.1.0+, which comes with additional requirements and significant improvements in index management. Please refer to the updated post here for up-to-date instructions on installing TheHive 4.1.0+.

Virtual Machine Resources

For this installation I have deployed a similar profile to my previous build. Resources should be increased for larger user bases, as database interactions will consume more RAM under concurrent usage.

Attribute  Value
vCPU       4
RAM        8 GB
Disk       32 GB

Installation Procedure

The below procedure is intended to be followed in sequence. The ordering of the Cassandra configuration and TheHive installation steps is intentional; you could rearrange them, but be aware of the requirement to restart services appropriately.

Install Java Virtual Machine

sudo apt-get install -y openjdk-8-jre-headless
echo JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64" | sudo tee -a /etc/environment
export JAVA_HOME="/usr/lib/jvm/java-8-openjdk-amd64"
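
Once Java is installed, you can confirm the version and environment variable before moving on:

java -version
echo $JAVA_HOME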

Install Cassandra

curl -fsSL https://www.apache.org/dist/cassandra/KEYS | sudo apt-key add -
echo "deb http://www.apache.org/dist/cassandra/debian 311x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
sudo apt update
sudo apt install cassandra -y
Once installed, Cassandra will start automatically, listening on localhost:7000 (inter-node) and localhost:9042 (CQL).
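
You can verify that the service is up before continuing:

nodetool status
sudo ss -tlnp | grep -E '7000|9042'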

Configure Cassandra Storage for Local Filesystem

cqlsh localhost 9042
UPDATE system.local SET cluster_name = 'thp' WHERE key = 'local';
exit;
nodetool flush
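
To confirm the change took effect, query the cluster name back:

cqlsh localhost 9042 -e "SELECT cluster_name FROM system.local;"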

Edit /etc/cassandra/cassandra.yaml and update your configuration file to read as below:

# content from /etc/cassandra/cassandra.yaml

cluster_name: 'thp'
listen_address: localhost
rpc_address: localhost
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          # Ex: "<ip1>,<ip2>,<ip3>"
          - seeds: '127.0.0.1' # self for the first node
data_file_directories:
  - '/var/lib/cassandra/data'
commitlog_directory: '/var/lib/cassandra/commitlog'
saved_caches_directory: '/var/lib/cassandra/saved_caches'
hints_directory: '/var/lib/cassandra/hints'

Save and exit the editor, then restart Cassandra:

sudo service cassandra restart
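
After the restart, nodetool should reflect the new cluster name:

nodetool describecluster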

Install TheHive

curl https://raw.githubusercontent.com/TheHive-Project/TheHive/master/PGP-PUBLIC-KEY | sudo apt-key add -
echo 'deb https://deb.thehive-project.org release main' | sudo tee -a /etc/apt/sources.list.d/thehive-project.list
sudo apt-get update -y
sudo apt-get install thehive4 -y

Since we will be configuring Local Storage for TheHive, run the below to create a storage directory and assign ‘thehive’ as the owner.

sudo mkdir -p /opt/thp_data/files/thehive
sudo chown -R thehive:thehive /opt/thp_data/files/thehive
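
You can confirm the ownership before moving on:

ls -ld /opt/thp_data/files/thehive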

With Cassandra configured and running from the earlier steps, we can now move on to configuring TheHive itself.

Configure TheHive

Edit /etc/thehive/application.conf and update its contents to read as below:

db {
  provider: janusgraph
  janusgraph {
    storage {
      backend: cql
      hostname: [
        "127.0.0.1"
      ] # seed node ip addresses

      #username: "<cassandra_username>"       # login to connect to database (if configured in Cassandra)
      #password: "<cassandra_password>"

      cql {
        cluster-name: thp       # cluster name
        keyspace: thehive           # name of the keyspace
        local-datacenter: datacenter1   # name of the datacenter where TheHive runs (relevant only on multi datacenter setup)
        # replication-factor: 2 # number of replica
        read-consistency-level: ONE
        write-consistency-level: ONE
      }
    }
  }
}

storage {
  provider: localfs
  localfs.location: /opt/thp_data/files/thehive
}

Save the file, then run the below to start TheHive4:

sudo systemctl start thehive
TheHive is now listening on port 9000
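
TheHive can take a minute or two to come up (a Java start-up artefact, similar to Cortex). You can follow the logs and confirm the listener:

sudo tail -f /var/log/thehive/application.log
sudo ss -tlnp | grep 9000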

For your first login to TheHive, you will need to use the default administrator account. Navigate to your instance at http://hostname:9000/ and provide the initial username and password.

admin@thehive.local/secret

From here it would be a good idea to create your Organisation for TheHive, and in this case I have opted to create MySOC as my first Organisation.

Because TheHive is a multi-tenant platform, multiple units within your organisation may be given access while remaining isolated from each other, unless sharing has been permitted for cases.

Drilling down into the Organisation, users can now be created. Account names take the form of an email address.

Create TheHive accounts for your organisation

Configure TheHive for MISP

You will need to have a MISP instance configured for this part of configuration to be completed. I have written a guide on this part over here, but there are also more advanced articles here for most things MISP related.

You will need to edit the /etc/thehive/application.conf file and update the following blocks with particulars from your MISP instance. I would suggest creating a user specific to this integration, and applying appropriate permissions to that account to limit potential negative effects.

play.modules.enabled += org.thp.thehive.connector.misp.MispModule
misp {
  interval: 1 hour
  servers: [
    {
      name = "local"            # MISP name
      url = "http://localhost/" # URL or MISP
      auth {
        type = key
        key = "***"             # MISP API key
      }
      wsConfig {}               # HTTP client configuration (SSL and proxy)
    }
  ]
}

Once you have configured the above components, save and exit the editor, and then restart TheHive.

sudo systemctl restart thehive
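
If the connector does not come up, you can sanity-check the MISP API key directly with curl; the hostname and key below are placeholders for your own values:

curl -sk -H "Authorization: <misp_api_key>" -H "Accept: application/json" https://<misp_host>/servers/getVersion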

Once TheHive has successfully restarted, you should notice an additional status icon in the footer of the interface, highlighted green. This means the module is enabled and a connection has been made to the MISP API.

In this case, my MISP instance is not responding correctly due to its self-signed TLS certificate. However, that can be remedied relatively easily, as I have described over here.

Configure TheHive for Cortex

You will need a working build of Cortex to integrate this component. Previously, TheHive and Cortex could be installed alongside each other; however, I would suggest separating them. This is mostly because an eventual TheHive update may break Cortex, but also because Cortex can be integrated with other platforms such as MISP and Shuffler, so it really should stand on its own.

I have some writeups on Cortex deployment here, but there are also some writeups on Cortex-Analyzers which will help extend capabilities a bit further, over here.

You will need to edit the /etc/thehive/application.conf file and update the following blocks with particulars from your Cortex instance.

play.modules.enabled += org.thp.thehive.connector.cortex.CortexModule
cortex {
  servers: [
    {
      name: "local"                # Cortex name
      url: "http://localhost:9001" # URL of Cortex instance
      auth {
        type: "bearer"
        key: "***"                 # Cortex API key
      }
      wsConfig {}                  # HTTP client configuration (SSL and proxy)
    }
  ]
}

Once you have configured the above components, save and exit the editor, and then restart TheHive.

sudo systemctl restart thehive
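
As with MISP, you can verify the Cortex API key independently of TheHive; the key below is a placeholder for your own value:

curl -s -H "Authorization: Bearer <cortex_api_key>" http://localhost:9001/api/analyzer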

Once TheHive has successfully restarted, you should notice an additional status icon in the footer of the interface, highlighted green. This means the module is enabled and a connection has been made to the Cortex API.

From here you should be able to run the analyzers installed and activated in your Cortex instance against observables in your Cases. There is more on that in my other writeups here.

Configure TheHive for Webhooks

Webhooks are relatively easy to configure; the only complicating factor is TLS-enabled webhooks, but even that is relatively simple to implement.

Edit the /etc/thehive/application.conf file and add the following to the bottom of the configuration file.

webhooks {
  SOARPlatform {
    url = "http://soar.local/webhook"
  }
}

Save the configuration file and restart TheHive to put this change into effect.

sudo systemctl restart thehive

Once TheHive has restarted, events from within TheHive will start being transmitted to the webhook destination. Those events can then be acted on within that platform.

You can also specify multiple webhook destinations by adding more named blocks under webhooks, as shown in the sketch below.
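
A minimal sketch with a hypothetical second destination (the names and URLs are placeholders for your own endpoints):

webhooks {
  SOARPlatform {
    url = "http://soar.local/webhook"
  }
  SIEMForwarder {
    url = "http://siem.local:8080/webhook"
  }
}

Remember to restart TheHive after modifying the webhook configuration.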

Conclusion

You should now have a basic working installation of TheHive4, successfully integrated with MISP and Cortex, and sending Webhooks to your preferred SOAR platform.

10 Comments

Deploying (and using) TheHive4 [Part 1] – McHugh Security Posted on 10:04 pm - March 3, 2021

[…] I have since written an updated installation procedure for TheHive4, which can be found here. Along with guidance on integration for MISP and […]

Kevin Lee Posted on 3:56 pm - March 13, 2021

Thanks for all your hard work. I put together a YouTube video giving credit to TheHive developers and to your work on this blog. Thanks. https://youtu.be/kNdpFv-ebzY

    admin Posted on 5:54 am - March 13, 2021

    Thank you. I’ve just watched the video, and I think you’ve given me a few pointers to adjust on the blog.

    From memory, Cassandra listens on 127.0.0.1:7000, so I’m not sure adding 7000 to UFW is required. If you were to do so, I’d suggest hardening Cassandra’s installation beyond what I discuss in the article.

    The awkward delay when starting TheHive is actually a Java performance artefact. Similar delays are seen when starting TheHive’s analysis platform Cortex.

    With regards to Java 8, I am not entirely certain as to why it has not been updated to 11/12. This might be something to test.

zerotwo Posted on 4:26 am - June 22, 2021

Dear,
First of all, thank you for your very helpful video tutorial.

Also ask your kindness to help me with the installation.
I have reached the step of “sudo systemctl start thehive”, and there I am stuck.

Port 9000 is not listening and therefore I cannot open “thehive” in the browser.

    McHughSecurity Posted on 7:04 pm - June 21, 2021

    Hi, when you start thehive, run tail -f /var/log/thehive/application.log

    Look for error statements and reply here with whatever error is occurring.

    Also, check to make sure you have Java installed by running java -version

      zerotwo Posted on 7:07 pm - June 22, 2021

      Dear, these are the logs:

      java -version
      openjdk version "1.8.0_292"
      OpenJDK Runtime Environment (build 1.8.0_292-8u292-b10-0ubuntu1~20.04-b10)
      OpenJDK 64-Bit Server VM (build 25.292-b10, mixed mode)
      ———————————————————————————-

      tail -f /var/log/thehive/application.log
      at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:175)
      Caused by: java.lang.ClassNotFoundException: org.janusgraph.diskstorage.inmemory.InMemoryStoreManager
      at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
      at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
      at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
      at java.lang.Class.forName0(Native Method)
      at java.lang.Class.forName(Class.java:264)
      at org.janusgraph.util.system.ConfigurationUtil.instantiate(ConfigurationUtil.java:56)
      … 35 common frames omitted

JulianK Posted on 5:01 pm - July 20, 2021

Hello,

How do you resolve “MISP instance is not responding correctly due to the self-generated TLS certificate”? You wrote that it can be remedied relatively easily and described it “over here”, but I don’t see any link. I have really tried but I haven’t found a solution.

    McHughSecurity Posted on 1:41 am - July 21, 2021

    Good pickup there – I missed that link (I’ll update it shortly).

    There are two potential answers here.
    1. The best answer – you will need to uncomment the fullchain SSL line within the apache2 configuration for MISP. You will also need a validated SSL certificate to be referenced as part of that fullchain parameter.
    An option to achieve this would be to use certbot (LetsEncrypt) and then link the generated certs in your configuration (I will be writing something up soon for MISP hardening – otherwise you can adapt the process from this TheHive and Cortex hardening post https://mchughsecurity.com/2021/06/18/hardening-thehive4-and-cortex-for-public-deployment/#Install_Certbot)

    2. Not recommended – you can modify the HTTP configuration file for MISP to respond on port 80 and stop redirections to SSL. I DO NOT ADVISE DOING THIS ON AN INTERNET-CONNECTED SYSTEM!
    If you are doing air-gapped or internet denied deployments, then this may be a palatable option, however you have the obvious downsides of credentials being transmitted in cleartext.

    Highly suggest you go with option 1, or purchase certificates if you cannot validate them online.

      McHughSecurity Posted on 1:47 am - July 21, 2021

      I will make a correction to my last comment here:

      Here is a snippet from my live MISP installation where I have certbot generating certificates for me automatically.

      Edit /etc/apache2/sites-enabled/misp-ssl.conf

      Find the below:

      # enable HTTP/2, if available
      Protocols h2 http/1.1

      SSLCertificateFile /etc/letsencrypt/live/misp.redacted.com/fullchain.pem
      SSLCertificateKeyFile /etc/letsencrypt/live/misp.redacted.com/privkey.pem
      # SSLCertificateChainFile /etc/letsencrypt/live/misp.redacted.com/fullchain.pem

      Then restart apache2
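
      For example, on a systemd-based host:

      sudo systemctl restart apache2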

        JulianK Posted on 3:40 am - July 21, 2021

        Thanks, I understand; you really are a master in this field. Can I do this with nginx? I think the best option is to generate a certificate. Does my domain need to point to the public IP or not? Another question: if I don’t publish the website, do you have an idea for setting up a local CA, given that I don’t have a public domain?
