If you are like me and deploy lots of small VM instances all over the place for various functions, you will find that applying updates to them all consistently and promptly becomes a logistical challenge. Fortunately, Ubuntu has an auto-update facility (unattended-upgrades) which can be configured in a few minutes.
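As a minimal sketch of what that setup involves on a Debian/Ubuntu host (the apt commands assume sudo access; the config fragment is what the stock dpkg-reconfigure step writes, shown here against a scratch path so the snippet runs without root):

```shell
# Install the auto-update package and enable it (run on a real host):
#   sudo apt install unattended-upgrades
#   sudo dpkg-reconfigure --priority=low unattended-upgrades
#
# That writes /etc/apt/apt.conf.d/20auto-upgrades with the two
# switches that turn on daily list refreshes and upgrades:
cat > /tmp/20auto-upgrades <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
grep -c 'APT::Periodic' /tmp/20auto-upgrades   # prints 2
```

Which packages are actually upgraded (security-only vs. all updates) is controlled separately in `50unattended-upgrades` in the same directory.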
I have been playing with CIRCL’s AIL Framework recently (which I will be writing about in another blog post), and I have an interest in monitoring Telegram channels for Threat Intelligence and Data Breach indicators.
AIL has a capable framework for detecting indicators within processed information, using a comprehensive suite of YARA rules – but unless you want to copy and paste Telegram messages into AIL all day, some level of automation is required. This is where the feeders come into play!
You will have seen the advertisements as you browse the Internet, and the vendors at various conferences and trade shows, spruiking Threat Intelligence as the way to detect the bad guys in your environment, or their product/service as delivering highly enriched intelligence relevant to your organisation. But what is Threat Intelligence really? And just how well refined does it need to be?
I have found myself deploying MISP on very small instances lately, mostly to function as a clearinghouse for intelligence I have been generating. This raises the question – does MISP run on small DigitalOcean or Vultr instances?
This is post 1 of 1 in the series “Malware Analysis with AssemblyLine”. System Requirements: For this build, I will be deploying AssemblyLine on my bare-metal hypervisor, exposed to the Internet. This is not always a good idea; however, my build will be further hardened by additional controls, which I will explain in subsequent articles…
Deploying an incident response platform on the open internet is not always a good idea. For whatever reason you choose to do so, there are some things you need to do before going live with TheHive and Cortex.
In this post, I talk about hardening TheHive and Cortex for an Internet-accessible deployment. This includes enforcing TLS v1.2+ and configuring multi-factor authentication. Cortex can be further hardened through IP whitelisting, and even a walled garden implemented through Cloudflare.
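As a sketch of the TLS piece: a common pattern is terminating TLS at an nginx reverse proxy in front of TheHive. This is illustrative, not the post's exact configuration – the hostname and certificate paths are placeholders, and port 9000 is TheHive's default bind; adapt to your deployment.

```shell
# Hypothetical nginx site config enforcing TLS v1.2+ in front of TheHive.
# Written to /tmp for illustration; on a real host this would live in
# /etc/nginx/sites-available/ (with a symlink in sites-enabled/).
cat > /tmp/thehive-tls.conf <<'EOF'
server {
    listen 443 ssl;
    server_name hive.example.com;            # placeholder hostname

    ssl_protocols TLSv1.2 TLSv1.3;           # refuse TLS 1.0/1.1 clients
    ssl_certificate     /etc/ssl/certs/hive.example.com.pem;
    ssl_certificate_key /etc/ssl/private/hive.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:9000;    # TheHive's default port
    }
}
EOF
grep -c 'ssl_protocols' /tmp/thehive-tls.conf   # prints 1
```

The same pattern works for Cortex (default port 9001), and the proxy is also a convenient place to apply IP allow-lists before requests ever reach the application.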
- [Part 1] Building a Threat Integration and Testing Lab
- [Part 2] Building a Threat Integration and Testing Lab – Elastic Cloud Enterprise (On-Premises)
- [Part 3] Building a Threat Integration and Testing Lab – Splunk Enterprise
- [Part 4] Building a Threat Integration and Testing Lab – MISP Threat Intelligence Sharing Platform
MISP is a threat intelligence platform for sharing, storing and correlating Indicators of Compromise of targeted attacks, threat intelligence, financial fraud information, vulnerability information or even counter-terrorism information.
Within a well-structured SIEM environment, a Threat Intelligence Platform may allow an organisation to generate new intelligence relevant to the organisation, and it may allow for the ingestion of external intelligence sources.
In the context of MISP, intelligence handling usually requires that information pass through a set of stages to be handled effectively. This can be addressed procedurally through a workflow.
Understanding how a taxonomy may be implemented in MISP to assist this process is handy.
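As a hedged sketch of what this looks like in practice: the published workflow taxonomy provides machine tags such as workflow:state="incomplete", which can be attached to an event over MISP's REST API. The URL, API key, and event UUID below are placeholders, and you should verify the endpoint and tag spellings against your own instance.

```shell
# Placeholders – set these for your own MISP instance:
MISP_URL="https://misp.example.com"
MISP_KEY="YOUR_API_KEY"
EVENT_UUID="00000000-0000-0000-0000-000000000000"

# JSON body for MISP's tag-attachment endpoint; the workflow taxonomy's
# "state" predicate records where the event sits in your handling process.
PAYLOAD=$(cat <<EOF
{"uuid": "$EVENT_UUID", "tag": "workflow:state=\"incomplete\""}
EOF
)
echo "$PAYLOAD"

# On a live instance, the tag would be attached with something like:
#   curl -s -X POST "$MISP_URL/tags/attachTagToObject" \
#        -H "Authorization: $MISP_KEY" \
#        -H "Accept: application/json" -H "Content-Type: application/json" \
#        -d "$PAYLOAD"
```

When the event moves through the workflow, the state tag is swapped (for example to workflow:state="complete"), giving analysts an at-a-glance view of what still needs handling.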
According to the MISP taxonomies listing for Estimative Language, this taxonomy is used to describe the quality and credibility of underlying information sources, data, and methodologies, as described under Intelligence Community Directive 203 (ICD 203) and JP 2-0. In this article, I will describe how these tags may be applied to convey likelihood, either by an intelligence originator or when information is polled from a known credible source.
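For illustration, here are a few of the machine tags the estimative-language taxonomy provides, carrying the ICD 203 likelihood terms (spellings per the published taxonomy at the time of writing – verify against the taxonomy list in your own MISP instance before applying them):

```shell
# Example estimative-language machine tags; the likelihood-probability
# predicate maps directly onto the ICD 203 likelihood scale.
TAGS='estimative-language:likelihood-probability="very-unlikely"
estimative-language:likelihood-probability="roughly-even-chance"
estimative-language:likelihood-probability="likely"
estimative-language:likelihood-probability="almost-certain"'
echo "$TAGS"
```

Tagging an event with one of these conveys the originator's assessed likelihood in a machine-readable way, so downstream consumers can filter or weight intelligence accordingly.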
Data classification is broadly defined as the process of organising data by relevant categories so that it may be used and protected more efficiently. On a basic level, the classification process makes data easier to locate and retrieve.
In this article, I will be discussing the usage of the data-classification taxonomy for MISP events and the attributes within those events. The intent of this taxonomy is to categorise the value of data, providing additional context about the information or asset being affected.