According to the MISP taxonomies listing for Estimative Language, this taxonomy is used to describe the quality and credibility of the underlying information sources, data, and methodologies, as described under Intelligence Community Directive 203 (ICD 203) and JP 2-0. In this article I will describe how these tags may be applied, either by an intelligence originator or when the information is polled from a known credible source, to convey likelihood.
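As a sketch of what applying these tags looks like in practice, the snippet below maps the ICD 203 likelihood terms onto MISP machine tags. The taxonomy namespace and predicate follow my reading of the estimative-language taxonomy; verify the exact values against the taxonomy list on your own instance.

```python
# Map ICD 203 likelihood terms to MISP estimative-language tags.
# Taxonomy namespace/predicate assumed from the MISP taxonomies repo;
# confirm against your instance before tagging.

ICD203_LIKELIHOOD = [
    "almost-no-chance", "very-unlikely", "unlikely",
    "roughly-even-chance", "likely", "very-likely", "almost-certain",
]

def likelihood_tag(term: str) -> str:
    """Build a MISP tag string conveying likelihood per ICD 203."""
    if term not in ICD203_LIKELIHOOD:
        raise ValueError(f"not an ICD 203 likelihood term: {term}")
    return f'estimative-language:likelihood-probability="{term}"'

print(likelihood_tag("likely"))
# With PyMISP the tag would then be applied to an event or attribute,
# e.g.:  misp.tag(event_uuid, likelihood_tag("likely"))
```

The string form (`namespace:predicate="value"`) is MISP's standard machine-tag convention, so the same helper pattern works for any taxonomy.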
Data classification is broadly defined as the process of organising data by relevant categories so that it may be used and protected more efficiently. On a basic level, the classification process makes data easier to locate and retrieve.
In this article, I will be discussing the usage of the data-classification taxonomy for MISP events and the attributes within those events. The intent of this taxonomy is to categorise the value of data, providing additional context about the information or asset being affected.
One of the great aspects of MISP is the use of tags to give an indication of what needs to be done with an indicator within an event. Whole events may be assigned tags, but in this article I am going to talk about marking specific indicators with a Course of Action, which implies a response when/if that indicator has been encountered.
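A small sketch of the idea: map a desired response to a course-of-action tag and apply it per attribute rather than per event. The passive/active predicates and their values ("discover", "deny", and so on) are my reading of the MISP course-of-action taxonomy; verify them on your instance.

```python
# Map a desired response to a MISP course-of-action tag.
# Predicate/value pairs assumed from the course-of-action taxonomy
# (Courses of Action matrix); confirm before use.

COA_TAGS = {
    "monitor": 'course-of-action:passive="discover"',
    "block":   'course-of-action:active="deny"',
}

def coa_for(action: str) -> str:
    """Return the course-of-action tag implying the given response."""
    return COA_TAGS[action]

print(coa_for("block"))
# With PyMISP, applied to a single indicator rather than the event:
#   misp.tag(attribute_uuid, coa_for("block"))
```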
Intelligence is pretty much everywhere in unstructured formats: informal blog posts, tweets, and even FBI or US Treasury documents. In this article, I am going to describe how to build a transferable STIX object from the FBI’s Most Wanted website.
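To give a feel for the target format, here is a hand-built minimal STIX 2.1 Identity object for a wanted person, using only the standard library (the `stix2` Python package would normally generate the `id` and timestamps for you). All field values are placeholders.

```python
# Hand-build a minimal STIX 2.1 Identity SDO as a plain dict.
import json
import uuid
from datetime import datetime, timezone

def make_identity(name: str) -> dict:
    """Return a STIX 2.1 Identity object for an individual."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "identity",
        "spec_version": "2.1",
        "id": f"identity--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": name,
        "identity_class": "individual",
    }

obj = make_identity("EXAMPLE WANTED PERSON")  # placeholder name
print(json.dumps(obj, indent=2))
```

Because the result is plain JSON conforming to the STIX shape, it can be bundled and shared with any STIX-aware tooling, which is what makes the object "transferable".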
MISP works really well in an Internet-connected environment for gathering data and creating correlations. However, in air-gapped environments the ability to query MISP for indicators is still incredibly useful, except that an air-gapped environment doesn’t ordinarily have an Internet connection.
In this article I describe how MISP may be used in an Internet-denied environment by leveraging an existing Internet-connected instance.
MISP was certainly not intended to be used like this; however, with some creativity and some technical effort, the MISP Threat Intelligence Platform could be utilised as a missing-persons intelligence database.
In this post, I will discuss a methodology for using MISP as the intelligence platform, alongside more traditional applications such as Maltego (and some custom transforms), to collect, enrich, and manage your intelligence.
Lately I have been playing with MISP as the intelligence-sharing platform for a number of business intelligence functions. However, the main issue with MISP (from a user’s perspective) is the interface, and how a less technical person would generate information for the platform.
This is where pairing MISP with Maltego works really well, and even results in less technical people being able to generate technical data for incorporation into intelligence operations.
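At the wire level, a custom Maltego transform just returns XML entities to the client. The sketch below builds that response shape with the standard library; a real transform would populate the results by querying MISP (e.g. a PyMISP search) instead of the hard-coded pair, and the entity type name follows Maltego's built-in `maltego.*` namespace.

```python
# Build a MaltegoTransformResponseMessage from (type, value) pairs.
import xml.etree.ElementTree as ET

def transform_response(entities: list[tuple[str, str]]) -> str:
    """Serialise transform results as Maltego response XML."""
    msg = ET.Element("MaltegoMessage")
    resp = ET.SubElement(msg, "MaltegoTransformResponseMessage")
    ents = ET.SubElement(resp, "Entities")
    for etype, value in entities:
        ent = ET.SubElement(ents, "Entity", Type=etype)
        ET.SubElement(ent, "Value").text = value
    return ET.tostring(msg, encoding="unicode")

# Hard-coded result standing in for a MISP attribute lookup:
xml_out = transform_response([("maltego.IPv4Address", "198.51.100.7")])
print(xml_out)
```

In practice you would wrap this in a small web endpoint (or use the maltego-trx library, which handles the serialisation for you) so the Maltego client can call it.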
I have posted before about participating in other TraceLabs events (such as the Australian Federal Police Missing Persons Hackathon), so here is a brief recounting of my experiences with a US missing-persons event.
Some time ago I participated in an event run by TraceLabs in conjunction with the Australian Federal Police to locate pieces of information about missing persons across Australia. The twist on this event was that it was gamified: competing teams raced to beat each other and amass the most points under a points-award system.
I will now be competing in the Missing Persons CTF on the 11th of April 2020, and in the lead-up to this now-virtual CTF I will be building some more capable infrastructure and tooling to support the challenge.
So, for those just starting out, what do you need as a bare minimum to start digging and submitting indicators?
As part of my final Master’s degree research component, I have been collecting data from honeypots which I have seeded around the globe. The objective is to distil this data into organisational threat data based on a fictitious business.
Part of the complication I am going to start facing is how to use Elasticsearch and Kibana to find specific information for me within this live data set.
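As a taste of what those lookups involve, here is an Elasticsearch query-DSL body for pulling specific records out of honeypot data. The index pattern and field names (`src_ip`, `@timestamp`) are assumptions about the honeypot mapping, not fixed values, so adjust them to your own schema.

```python
# Elasticsearch query-DSL body: filter honeypot hits by source IP
# over the last seven days. Field names are assumed, not canonical.

query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"src_ip": "198.51.100.7"}},
                {"range": {"@timestamp": {"gte": "now-7d"}}},
            ]
        }
    },
    "size": 100,
}

# With the elasticsearch Python client this would run as:
#   es.search(index="honeypot-*", body=query)
# and the same body pastes straight into Kibana's Dev Tools console.
print(query)
```

Using `filter` rather than `must` skips relevance scoring, which is usually what you want when carving exact matches out of a large live data set.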
Previously I have mentioned a data set produced by the Canadian Institute for Cybersecurity, called IDS 2018, which contains Windows Event Logs and PCAP files relating to a set of simulated attacks, generated for the purpose of teaching people how to hunt within similar datasets.
Here I will be discussing the deployment, configuration, and interaction with this data set to achieve the required outcome.