You will have seen the advertisements as you browse the Internet, and the vendors at various conferences and trade shows spruiking Threat Intelligence as the way to detect the bad guys in your environment, or their product/service as delivering highly enriched intelligence relevant to your organisation. But what is Threat Intelligence really? And just how well refined does it need to be?
The Intelligence Lifecycle
Before we get too far into defining Threat Intelligence, we need to understand what constitutes Intelligence, and there are a number of definitions out there. The one I like to use when comparing the quality of what I see within a Threat Intelligence feed goes a bit like this:
Data is raw, individual and unarguable.
Information is the combination of data in a form which can answer a question.
Intelligence is the combination of information which helps to make a decision.
— Daniel Miessler, 2020
So by the above definition, Intelligence should tell a story and be able to assist in making a decision on a specific topic. Now we need to understand how to adapt the Intelligence Lifecycle to inform decision making about threats. To do this, we need to understand which characteristics are required to define Threat Intelligence in a way that supports those decisions.
Determining the characteristics of Threat Intelligence
Trustworthiness: Much like asking a friend or family member for advice, the source of the intelligence you are consuming needs to be considered trustworthy, and in some cases (Government etc.) this may mean the intelligence source needs to be assessed, accredited or certified in some manner to prove that trustworthiness.
Relevant: The advice provided by the friend also needs to be relevant to your topic. It would simply not be appropriate to ask a friend which Dell or HP laptop you should choose when that friend has only ever used, or is only familiar with, Apple products. Why does it need to be relevant? If the intelligence provided to you was developed to inform risk for something which does not affect you, then is that intelligence really relevant to you?
Contains Context: The advice on the topic also needs to provide enough insight to support making a decision, and sometimes this might mean your friend needs to describe the topic quite succinctly to support your decision making. In terms of intelligence, this might require the “who, what, where, when and how” to be provided.
A course of Action: Considering the above, the last requirement is for the intelligence to offer choices for a course of action that can support the decision-making process. If a friend recommended you purchase a specific Dell laptop because it meets your very specific requirements and is available within your budget, that advice would be considered relevant, and it has offered a course of action.
Constituent components of Threat Intelligence
Threat Intelligence that meets one or more of the aforementioned criteria (being trustworthy, relevant, contextualised and actionable) can come in various formats and exhibit those criteria to varying degrees. Those formats include Indicators; Tactics, Techniques, and Procedures (TTPs); Security Alerts; Threat Intelligence Reports; and many more.
Indicators are generally technical observables related to immediately actionable data points. These might be IP addresses, domain names (DNS), or URLs which relate directly to the malicious activity relevant at the time of the report. Given the types of data points which comprise an indicator, these are largely considered short-lived in relevance and actionability: IP addresses can be changed, DNS entries can be removed, URLs can be modified, and hashes can be altered by changing a single character. Technical indicators of this nature are largely considered high-value during Incident Response; however, when it comes to Threat Intelligence, the information can become stale and unusable very quickly.
Tactics, Techniques and Procedures are used to describe the behaviours of whoever is involved in the malicious activity. Tactics are generally high-level descriptors of the actor’s behaviours, techniques provide further context of those behaviours within each tactic, and procedures describe the lower-level details of how the techniques are implemented.
An example of this might be a threat actor who targets a particular type of business, with one team that conducts initial access and another that performs actions on the target. They use a combination of public exploits and valid user accounts gained through spearphishing campaigns, and once access has been established the “actions on” team begins collating data to extract from the compromised environment via reverse shells to hosts located within the target country.
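Behaviours like these are often recorded as a mapping to a framework such as MITRE ATT&CK. A minimal sketch of what that might look like for the hypothetical actor above (the technique selection is my own illustrative assumption, not taken from any real report):

```python
# Illustrative mapping of the hypothetical actor's behaviour to MITRE
# ATT&CK technique IDs. The selection below is an assumption made for
# demonstration purposes, not sourced from a real intelligence report.
actor_ttps = {
    "initial_access": [
        ("T1190", "Exploit Public-Facing Application"),  # public exploits
        ("T1566", "Phishing"),                           # spearphishing campaigns
        ("T1078", "Valid Accounts"),                     # credentials gained via phishing
    ],
    "actions_on_objectives": [
        ("T1005", "Data from Local System"),             # collating data to extract
        ("T1041", "Exfiltration Over C2 Channel"),       # reverse shells out of the environment
    ],
}

for tactic, techniques in actor_ttps.items():
    for technique_id, name in techniques:
        print(f"{tactic}: {technique_id} {name}")
```

Structuring TTPs this way makes the intelligence comparable across reports, rather than leaving the behaviours buried in prose.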
Security Alerts can be known as advisories, bulletins, or even vulnerability disclosures, and they are generally brief, made to be digested easily and are usually released by an authoritative body such as a government agency, security vendor, NIST etc. Depending on the context of the security advisory the information within it may be immediately relevant to an observed threat within a sector, or even a whole economy. So the information in these security alerts should be considered a high priority for analysis and inclusion within any threat hunting or vulnerability management programs.
Threat Intelligence Reports are generally high-quality reports from a well-reputed source that describe the Tactics, Techniques, and Procedures surrounding a threat, and should also describe other threat information such as the targeted sectors and industries. Any indicators within a Threat Intelligence Report should be of high quality, with all known false-positive indications and ambiguous observables removed.
How long does threat intelligence remain relevant?
To answer this question, we need to understand how the volatility of indicators is affected over time. And in some regards, this may help to answer “How long do I need to block an IOC for?”.
If you are familiar with the “pyramid of pain”, you will understand there are a number of indicators that reside at the very bottom of the pyramid. Those are indicators that are relatively easy to detect, but also have a very short lifespan in terms of usability. Indicators of this nature can be changed easily by a threat actor (e.g. changing a payload file hash by altering a character or padding its contents).
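To see just how cheaply a hash indicator is invalidated, here is a small sketch: appending a single byte to a (pretend) payload produces a completely different SHA-256 digest.

```python
import hashlib

# A stand-in for a malicious binary; not real malware.
payload = b"MZ\x90\x00 pretend this is a malicious binary"
padded = payload + b"\x00"  # the actor pads the file with one extra byte

h_original = hashlib.sha256(payload).hexdigest()
h_padded = hashlib.sha256(padded).hexdigest()

print(h_original)
print(h_padded)
# The two digests share nothing useful: any blocklist entry for
# h_original no longer matches the padded payload.
print(h_original != h_padded)
```

One trivial edit, and every hash-based control built on the original indicator silently stops matching.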
For an incident responder or a digital forensics team member, finding a hash is relatively easy to do, and there are a multitude of data sources to achieve this objective (e.g. SIEMs, process logging, specialised tooling). Once an incident responder has a hash for a known-bad file, the process of finding and eradicating that file is pretty straightforward. However, what happens when the threat actor changes their payload, or chooses to tamper with it because of a disclosure of their toolset? The hash value becomes meaningless and may actually provide the organisation with a false sense of assurance that their Threat Intelligence is relevant.
If a hash has been flagged as malicious, then that hash needs to be associated only with something that will always be malicious. What is meant by this is that hashes should only be used to flag files that are, wholly on their own, malicious. An example might be providing the known hash of a binary the threat actor created for use in a particular attack, but this does not mean the hash for cmd.exe should be included within the Threat Intelligence report (because blocking and removing cmd.exe would result in lots of bad things happening to legitimate operations).
Assuming a hash included within a Threat Intelligence report remains relevant, the hash could be blocked for an indefinite period. But as previously discussed, it is trivial for a threat actor to create a new payload, and very soon you will end up with huge quantities of hashes to block.
IP Addresses are the next level up the pyramid, and these may be related to wholly malicious infrastructure being used by threat actors, but also could be ephemeral or temporary in nature. Take for example Cloud Hosting providers who will let you spin up a Virtual Machine for $2.50/month.
The IP address used in an attack could be relevant for literally minutes or hours, and may serve no useful preventative blocking purpose once the service is discontinued. From a detection perspective, to determine whether a threat actor was interacting with one of your assets, the IP address would certainly be useful, but there is no point in blocking that IP address a year later when a dozen other legitimate services have since been hosted on it.
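One way to avoid exactly this stale-blocking problem is to attach a time-to-live to each IP indicator and periodically purge anything past it. A minimal sketch, where the 90-day TTL is an assumed policy choice rather than any standard, and the IP addresses are documentation-range placeholders:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical block list: IP address -> when it was last seen as malicious.
BLOCKLIST = {
    "203.0.113.7": datetime.now(timezone.utc) - timedelta(days=400),  # year-old VPS sighting
    "198.51.100.9": datetime.now(timezone.utc) - timedelta(hours=6),  # fresh sighting
}

# Assumed organisational policy: stop blocking IPs 90 days after last sighting.
IP_INDICATOR_TTL = timedelta(days=90)

def purge_stale_ips(blocklist: dict, ttl: timedelta) -> dict:
    """Return only the IP indicators still inside their time-to-live."""
    now = datetime.now(timezone.utc)
    return {ip: seen for ip, seen in blocklist.items() if now - seen <= ttl}

active = purge_stale_ips(BLOCKLIST, IP_INDICATOR_TTL)
print(sorted(active))  # only the recent sighting survives
```

The same pattern extends naturally to other indicator types, with longer TTLs for domains than for IP addresses.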
Domain Names are the next level up, and usually require a threat actor to perform some level of configuration; they are also more likely to be reused in future campaigns whilst the domain is still registered. Once a domain has been flagged as malicious, that domain could be expected to remain accessible to the threat actor for at least a year (assuming they have only paid for a year of registration), or until a takedown or domain registrar process causes the threat actor to lose the domain.
Legitimate domains can also be used in attacks, such as those launched from compromised websites. Post-incident, that website may no longer be a threat, but without timing out the indicator it is possible for the legitimate website operator to suffer considerable impacts from being associated with an attack.
Network / Host Artifacts
This level of the pyramid deals with artifacts within Threat Actor hosts and networks which are quite a bit harder for them to alter, and may actually be put in place to make their mode of operations easier to perform. Examples of these kinds of artifacts could be in the choice of Operating System for their wholly controlled C2 infrastructure, or even in the repeated use of specific certificates for Certificate-based Authentication to SSH.
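Certificate or key reuse of this kind is typically tracked by fingerprint. A minimal sketch, assuming OpenSSH-format public key lines and a hypothetical set of known-bad fingerprints (the dummy key below is not a real key):

```python
import base64
import hashlib

def ssh_key_fingerprint(pubkey_line: str) -> str:
    """SHA-256 fingerprint of an OpenSSH public key line
    ("<type> <base64-blob> [comment]"), in the 'SHA256:...' form
    printed by `ssh-keygen -lf` (base64, '=' padding stripped)."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical fingerprints previously tied to the actor's SSH infrastructure.
KNOWN_BAD_FINGERPRINTS = {"SHA256:ExampleKnownBadFingerprintPlaceholder"}

def is_reused_actor_key(pubkey_line: str) -> bool:
    """True if this key's fingerprint matches a known actor key."""
    return ssh_key_fingerprint(pubkey_line) in KNOWN_BAD_FINGERPRINTS

# Example with a dummy key blob (all zero bytes, purely illustrative).
sample = "ssh-ed25519 " + base64.b64encode(b"\x00" * 51).decode() + " test"
print(ssh_key_fingerprint(sample))
print(is_reused_actor_key(sample))
```

Because regenerating keys across all infrastructure is operational friction for the actor, a matched fingerprint tends to stay useful far longer than an IP address.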
Whilst detecting and blocking these types of indicators is a bit harder for incident responders, successfully doing so forces the Threat Actor to use other tools or techniques to achieve their objective, which may increase the likelihood of detection (depending on what alternative methods are available).
At the Tools level of the pyramid, we are getting into an area that is very hard for a defender to detect and protect against. Similar to the way some enumeration tools estimate which Operating System a host is running based on behavioural quirks in its responses, this level of the pyramid relies on determining which tools an attacker is using based on their behaviours.
An example of this could be identifying web shells based on how User-Agent strings are hard-coded within the exploit, or on the reuse of Proof of Concept code in a victim’s environment. There are also potential behavioural detections through the frequency of requests to the web shell, which might imply a programmatic control mechanism.
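As a toy illustration of the request-frequency idea, assuming a simplified access log and an arbitrary threshold of my own choosing: a scripted client hammering one path stands out against human browsing.

```python
from collections import Counter
from datetime import datetime

# Simplified access-log records: (timestamp, source IP, requested path).
# One source hits a single path every 2 seconds (programmatic behaviour);
# another browses two pages like a human would.
LOG = [
    (datetime(2024, 1, 1, 10, 0, i), "203.0.113.7", "/uploads/shell.aspx")
    for i in range(0, 60, 2)
] + [
    (datetime(2024, 1, 1, 10, 0, 5), "192.0.2.10", "/index.html"),
    (datetime(2024, 1, 1, 10, 0, 40), "192.0.2.10", "/about.html"),
]

def suspicious_sources(log, threshold=20):
    """Flag (ip, path) pairs with an unusually high request count —
    a crude proxy for scripted interaction with a web shell. A fuller
    version would window the counts by timestamp."""
    hits = Counter((ip, path) for ts, ip, path in log)
    return {key for key, count in hits.items() if count >= threshold}

flagged = suspicious_sources(LOG)
print(flagged)
```

Real detections would be far more nuanced (timing regularity, response sizes, parameter entropy), but the principle of keying on behaviour rather than a static indicator is the same.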
Tactics, Techniques and Procedures
TTPs are by far the hardest to detect, but once detected and mitigated against may force a Threat Actor to reinvent their operations before commencing their activities again. An example of this can be seen in how Cosmic Lynx (a reportedly Russian BEC threat actor) used custom domain names to spoof victim organisations into sending urgent payments for business acquisitions. Once Agari exposed their operations, the group appeared to disappear for a period of time, but eventually resurfaced; instead of using custom domains, they used cloud-based mail systems and exploited deficiencies in victims’ spoofing protections.
Threat Intelligence is not just IP addresses and hashes; its content needs to assist in business decision making, provide the information Cybersecurity folk need to make recommendations to their business and its customers, and assist in developing improvement actions for their security programs.
In the series for which this article is the vanguard, I will be delving into the depths of implementing a Cyber Threat Intelligence program, including some major features that can be achieved in a home-lab type deployment at relatively little cost.