Can one analyst with zero budget start a Cyber Threat Intelligence (CTI) program?
Yes! In fact, you may already have started a small threat intelligence program without even realising it.
In this interview with Cosive CTO and renowned CTI expert Chris Horsley, we explore how analysts and teams can start a threat intelligence practice with limited resources.
Subscribe to the podcast for more educational SecOps and Threat Intelligence discussions like this one.
Summarised transcript:
Tash: Hi, I'm Tash and I'm the Marketing Manager at Cosive.
Chris: Chris Horsley, I'm the CTO at Cosive and one of the co-founders.
Tash: Let's imagine you're working in an organisation, you're a security analyst, and your role is very much operational — so we're talking about detecting threats, deploying security controls, and all of those things. And you keep hearing about CTI a lot and start to wonder: what should I be doing? Everyone seems to be doing this CTI thing — what should I be doing?
Chris: That really leads to that first question about what is it, and how do we define it? I think a nice way to define it is that it's knowing what's going on — being the eyes and the ears of the organisation, and helping people in the organisation make decisions with intelligence information in their hands to help them do that in an informed way. That's really what it is at its core. To me, it's how do we make decisions, and we're factoring in what's going on out there in the world. We're not making them blind or in an uninformed way.
Chris: Sometimes when people approach CTI for the first time, they see how the military might do threat intelligence — and a lot of that lingo has leaked through into cyber threat intelligence. It really does have a lot of basis in military threat intelligence, from thousands and thousands of years ago. That's the army knowing what's going on — or a military force — the eyes and ears of the force. Like what's the enemy's strength, where are they now, what armaments do they have, what are the likely tactics they'll be using against us?
Chris: That really takes us fast forward to now and cyber threat intelligence — and at its core, there's a lot of similarities. But I think sometimes people can be put off by the language, the frameworks, the fact that a lot of organisations spend a lot of money on tools and feeds and analysts and all of those things.
Chris: So if we go back to our example — you're in the thick of it day-to-day, you're putting out all the fires, you're running all the security tools — what can we do to get from zero to something approaching a CTI program? And the good news is, you are probably already doing something. You may not even realise that what you're doing could already be classified as CTI.
Tash: Can any analyst do CTI?
Chris: I would agree with some opinions that some people are very predisposed to being good threat intelligence analysts — and a lot of it comes down to mindset. I've always subscribed to this idea that threat intelligence — a lot of it is about opinions. Sometimes we get to deal with hard facts, but let’s take the example of attribution, where we're deciding who carried out a particular attack or who is behind a particular threat.
Chris: We have to gather evidence, assess that evidence, and decide if there is a proper relationship or correlation between maybe previous things we've seen from a threat actor and what we're seeing in our case. Is that a true correlation or is it coincidental? Do they just happen to be using the same IP address one year apart?
Chris: So we have a range of evidence, but then we have a bunch of opinions — theories that we have to test before we come up with a level of confidence that we can assert. What makes a good threat intel analyst is having a level of scepticism, objectiveness, and also creativity — an ability to think about new hypotheses and how to test them.
Chris: A good threat intelligence analyst should be prepared to review and question their own conclusions at any time and do that in a somewhat dispassionate way. Some people, when proven wrong, just dig in harder. That will not make for a good threat intelligence analyst.
Tash: You mentioned that security analysts might already be doing threat intelligence without even realising it. Can you touch on that a little bit more?
Chris: When we're running threat intelligence in an organisation, you're the eyes and ears of that organisation. So if you're already doing things like reading blog posts, participating in communities — maybe you're in a Slack channel, Discord channel, you're on Twitter — and you're following threats, malware families, ransomware attacks, phishing crews, nation-state activity, criminal gangs...
Chris: You're absorbing all this information about techniques and tactics, threat landscape, threat actors — what you're doing there is intelligence gathering. But the intelligence might all be living in your brain, and that makes it hard to share with the rest of your team and get to outcomes.
Chris: So if you're switched on, you're engaged, and you're not just going into the office and ploughing through alert tickets without thinking — that's the distinction between someone who's just an operator and someone who's primed to be a threat intelligence analyst. You’ve got that curiosity. You're already gathering information. Now it's just about turning that into contextualised intelligence for your organisation.
Tash: If an analyst wants to start sharing some of the threat intelligence and research that they’re doing, what are some ways that they could start doing that in a more structured way?
Chris: I’ve seen many levels of maturity over the years and I won’t say any of them are incorrect at all — because it’s going to depend on the number of people you have, your budget, and what tools you already have at hand.
Chris: I’m a big believer in when we’re thinking about how to shape up a program of work — security operations or threat intelligence — sometimes someone comes in and says, “Here’s where you’re at, here’s where you need to get to,” and they recommend this massive budgetary spend. Let’s get the best-of-breed tools, a team of 20 analysts… and for a lot of organisations that’s impossible.
Chris: For the next couple of years, you have to prove the worth and the concept. So let’s talk about the constraints — people, budget, tooling — and work with those things. It might be that there’s two of you and you use Slack and a wiki. That’s what you have to work with, and you can start with those things.
Chris: Even if you’re sharing blog posts on Slack in a channel with your coworker, I would argue that’s the beginning of threat intelligence sharing. That’s technically what we’re doing — taking stuff inside our head and putting it into a repository of information. And you could do this in the wiki just as well.
Chris: Where it goes from “I’m just pasting links from the internet into a Slack channel” to something we could genuinely describe as intelligence is that contextualised understanding. I’m reading this blog post, thinking about what we do in our organisation, and writing some analysis. It might just be a few bullet points.
Chris: For example: we run this software, this is a critical vulnerability, this is our crown jewel system — so we need to take action on this immediately. That’s our assessment of what is described in the blog post. Now we’ve related it from “what a security company has done” down to “what do we need to do in our organisation about this.”
Tash: Who should analysts think about sharing threat intelligence with in their organisation? I guess the obvious one is other people in their team — are there others?
Chris: Absolutely. You’ve nailed one of the most important questions. If we’re going to start a formal CTI program, the first question is: why are we doing it, and for whose benefit in the organisation?
Chris: One of the first things we need to do is identify stakeholders. Typically those are going to be executives who need to make decisions. Remember, CTI is about decision making and taking actions.
Chris: So at the executive level, it’s about “What sort of threats is our organisation facing?” A classic one would be ransomware. For the executive, they’re thinking: how likely is it to happen, is our sector being targeted, what are the initial access methods? What are we doing? What budget do we need to allocate? Is our security team well-funded enough?
Chris: Then you’ve got the business side. If we were ransomed, what do we do? Do we have cyber insurance? What sort of support can they give us? What’s our business continuity plan? If our whole workstation fleet got taken out, how do we fall back to paper and pen?
Chris: These are business-level questions. So executives making business-level decisions — that’s one major stakeholder.
Chris: Second one is going to be our other technical teams. For example, we’re monitoring for exploits against new vulnerabilities. Ideally we know what assets and infrastructure we are running — and that’s not a given in many organisations. That’s the first challenge.
Chris: So we need to know: what systems do we run, how exposed are they, what security controls do we have, would they even work against these threats?
Chris: Those technical teams probably aren’t in the security team. They’re going to be platform teams, infrastructure teams — other capabilities in the organisation. They’re stakeholders too.
Chris: We may not even have the job title “threat intelligence analyst” but it’s kind of de facto part of our role because we’re doing all this reading and sharing stuff on Slack. How are we going to tell them?
Chris: Sometimes that might be formal, sometimes an informal heads-up. Maybe we find a vulnerability and know we run that equipment — we immediately need to tell that team: there’s a new vulnerability, it’s being exploited, we need to patch this ASAP.
Chris: This is threat landscape type intelligence — both timely and relevant. Once there’s a VPN concentrator vulnerability, it is very quickly exploited. It’s the gateway into a lot of organisations, and attackers move through from there, achieving objectives like data exfiltration or ransomware.
Tash: You hear the phrase “intelligence products” used quite a bit when talking about threat intelligence. How do you understand that, and what does that mean to you?
Chris: I think that, along with not identifying and engaging with stakeholders at the beginning — asking how they do their job and what they need — not defining your intelligence products is usually the number one way a CTI program fails.
Chris: You can have all the gathering, the expensive tools, the feeds — but if we don’t know what the outcomes are meant to be, I’ve seen CTI programs fail on that basis even after spending all the money.
Chris: To me, it’s like software development. We need to know what the finished product looks like and work backwards. Same with CTI programs.
Chris: Intelligence products are how we deliver what stakeholders need. For executives, they want briefings. We’re reading a blog post, contextualising it to our organisation, explaining how it’s a risk, what to do about it — and writing that up as a briefing.
Chris: Executives don’t want a 15-page blog post. They want: how important is this, what’s the impact, what’s the recommended course of action? That’s an intelligence product — a succinct threat briefing intended for human consumption.
Chris: Another intelligence product could be for the SOC — the security operations centre — who are responsible for alerting. So they’re looking for bad domain names, techniques being used by threat actors.
Chris: That could be a feed of indicators of compromise, or a set of detection rules — maybe Sigma rules translated into Splunk logic. They want something they can deploy into their SIEM.
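As a rough illustration of what "Sigma rules translated into Splunk logic" can look like, here is a minimal sketch. It assumes the open-source pySigma library and its Splunk backend are installed, and the rule itself is a made-up example rather than anything referenced in the episode.

```python
# Minimal sketch: convert a Sigma rule into a Splunk search string.
# Assumes the pySigma packages are installed, e.g.:
#   pip install pysigma pysigma-backend-splunk
from sigma.collection import SigmaCollection
from sigma.backends.splunk import SplunkBackend

# A hypothetical Sigma rule watching proxy logs for a suspicious URI pattern.
rule_yaml = """
title: Suspicious raw pastebin download
status: experimental
logsource:
    category: proxy
detection:
    selection:
        c-uri|contains: 'pastebin.com/raw'
    condition: selection
"""

rules = SigmaCollection.from_yaml(rule_yaml)
backend = SplunkBackend()

# convert() returns one query string per rule, ready to hand to the SOC
# to paste into Splunk or schedule as a saved search.
for query in backend.convert(rules):
    print(query)
```

The resulting query string is the kind of artefact a detection team can deploy directly into their SIEM.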
Chris: I like to break it down into human-level intelligence products and technical, automated or semi-automated ones.
Tash: Would you recommend that analysts start with the human-level research — like reading blog posts — and then later look at potentially consuming an automated feed? Or do both at the same time? Or the opposite order?
Chris: I think most analysts will probably start naturally — if they’ve got that inclination — with that human gathering. Like we were talking about earlier: using your professional networks, blog posts, an RSS reader, all of those things.
Chris: Later we can start to use some of those feeds and we start to need tooling here, which you might start with very simple scripts. Could be a Python script pulling RSS feeds or pulling data from CSVs that are getting published every day.
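For a sense of what that kind of "very simple script" might look like, here is a minimal sketch. The feed URLs are placeholders and the feedparser and requests libraries are assumed to be installed; treat it as an illustration rather than a recommended toolchain.

```python
# Minimal sketch: pull a handful of vendor/CERT RSS feeds and a published
# indicator CSV each morning, and dump new entries somewhere the team can
# see them (here, just stdout).
# Assumes: pip install feedparser requests
import csv
import io

import feedparser
import requests

# Placeholder sources: swap in the blogs and published CSVs you actually follow.
RSS_FEEDS = [
    "https://example-vendor.com/blog/rss.xml",
    "https://example-cert.gov/advisories/feed",
]
CSV_FEED = "https://example-feed.org/daily-indicators.csv"

def pull_rss():
    for url in RSS_FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries[:10]:
            print(f"[{feed.feed.get('title', url)}] {entry.title} -> {entry.link}")

def pull_csv():
    resp = requests.get(CSV_FEED, timeout=30)
    resp.raise_for_status()
    for row in csv.DictReader(io.StringIO(resp.text)):
        # Column names depend entirely on the publisher; this is illustrative.
        print(row)

if __name__ == "__main__":
    pull_rss()
    pull_csv()
```

Running something like this on a schedule and posting the output to a Slack channel or wiki page is already a small collection pipeline.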
Chris: Or we start to introduce formal tools — which could be MISP, could be a threat intelligence platform that speaks STIX and TAXII. But personally, I think we would always do both of those things.
Chris: One continual gripe is that a lot of those blog posts don’t have machine-readable representations. We humans can go and read a post and understand how it hangs together — the techniques, the IOCs, the description of how the threat unfolds — but while we have MISP and STIX to describe these at a machine level, many vendors don’t publish them that way. Maybe it takes too much time, or maybe it’s just not a priority for them.
Chris: There are tools that can do this sort of human-readable analysis to machine-readable CTI package — which is a whole other topic. But yeah, our human gathering activities can also feed these systems and turn them into structured data that we can use — for example, to push out that Splunk IOC feed, so that we can do detections based on what we’re reading and what is relevant to the organisation.
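Dedicated extraction tools do this far better, but the core idea can be sketched with the standard library alone. The report text, the indicator values, and the context label below are all invented for illustration.

```python
# Minimal sketch: turn a human-readable report into a crude machine-readable
# indicator list. Real extractors (and analyst review) handle far more cases,
# such as defanged indicators, false matches, and richer context tagging.
import csv
import re

report_text = """
The actor hosted the payload at 203.0.113.57 and used
update.example-malicious.com for command and control.
Dropped file SHA-256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
"""

patterns = {
    "ip": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
    "domain": r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b",
    "sha256": r"\b[a-f0-9]{64}\b",
}

rows = []
for ioc_type, pattern in patterns.items():
    for value in set(re.findall(pattern, report_text, re.IGNORECASE)):
        rows.append({
            "indicator": value,
            "type": ioc_type,
            "context": "Placeholder: ransomware campaign report, vendor blog",
        })

# Write a lookup-style CSV that could be loaded into a SIEM for matching.
with open("ioc_lookup.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["indicator", "type", "context"])
    writer.writeheader()
    writer.writerows(rows)
```

A lookup file like this is one very simple form of the machine-readable "Splunk IOC feed" Chris mentions, with a context column so a hit is never just a bare IP or hash.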
Tash: Are there any feeds in particular that would be good for analysts to start with as their first feed? Or any tips on how to look for a feed that might be most appropriate for the organisation?
Chris: There are a number of public feeds out there. One I can give a shoutout to is CIRCL — one of the national cybersecurity incident response teams in Luxembourg. They have a very good MISP feed. It’s really well-structured, well-analysed.
Chris: If you want to look at an example of a well-structured public feed, you can go and have a look at their feed. They also run a MISP community as well, which has other sorts of sharing going on too.
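If you want a feel for the feed before standing up a full MISP instance, a few lines of Python can list what is in it. This sketch assumes the standard MISP feed layout (a manifest.json listing events, with each event stored as its own JSON file) and uses the commonly published CIRCL OSINT feed URL; check CIRCL's own documentation for the current location.

```python
# Minimal sketch: list recent events from a public MISP feed.
# Assumes the standard MISP feed layout: <base>/manifest.json plus one
# <uuid>.json file per event. Verify the base URL against CIRCL's docs.
import requests

FEED_BASE = "https://www.circl.lu/doc/misp/feed-osint/"

manifest = requests.get(FEED_BASE + "manifest.json", timeout=60).json()

# The manifest maps event UUIDs to summary metadata (info string, date, tags).
for uuid, meta in list(manifest.items())[:5]:
    print(meta.get("date"), "-", meta.get("info"))
    event = requests.get(f"{FEED_BASE}{uuid}.json", timeout=60).json()
    attributes = event.get("Event", {}).get("Attribute", [])
    print(f"  {len(attributes)} attributes in this event")
```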
Chris: There are quite a few public feeds like this, though they’re mixed in quality. Some of them are just piping malware hashes through a sandbox and generating a feed out of that. That’s good for high automation, but quite often there’s very little context provided.
Chris: And context is a word we use a lot in threat intelligence — because if I just have an IP address or just a hash, and I don’t know what threat it’s related to… if I was to find that IP, if I was to find that hash, what do I do now?
Chris: I’ve got it in a feed — I’m assuming it’s bad — but how bad is it? Was there a phishing site hosted here? Or is it the command-and-control of a ransomware crew? And if I see that in my organisation, does it mean the machines are getting encrypted as we speak?
Chris: Without that really important context of the nature of the threat connected to the IOC, we’re in trouble.
Chris: A lot of the free feeds — some are excellent, like the CIRCL feed. Others are very high volume and noisy, but they do lack that context at times.
Chris: Then we can start to talk about commercial feeds. There are a lot of great commercial feed providers — I won’t mention any by name here — but they can be very expensive. It costs a lot to gather this sort of intelligence and to have the right analysts, systems, and quality control.
Chris: One threat intelligence report could take weeks or months of analysis to get to the point where we’re satisfied — for example, if we’re about to call out a nation-state as being behind a certain malware campaign. That could take multiple analysts a long time. So those top-level commercial feeds are expensive, and you need the organisation to be committed to a CTI program before you’re at that point.
Tash: Let’s say an analyst is starting to do threat intelligence — starting to share it with the organisation, maybe tapping into an automated feed, getting that to Splunk. How can they start to measure the success of their early efforts?
Chris: We can come up with maybe KPIs — metrics that talk about, well, how many briefings are we going to ship per week, per month?
Chris: Where I’ve seen it get into very walking-on-thin-ice territory — a bit dangerous — is when we’re saying something like, “We’re going to ship a thousand indicators of compromise a day.” That’s kind of getting things backwards, in my opinion.
Chris: We’re really interested in outcomes. Was the threat intelligence that we sent relevant? Was it actionable? What did the stakeholder think about it? Could they use it? Was it in a format that they understood?
Chris: Was the threat intelligence couched in a way, and with context, that they could understand? If it was automated, have they actually turned it into operational usage?
Chris: There’s nothing worse than going to all this effort and generating feeds, and it just ends up in a black hole. I’ve seen that happen plenty of times — where the threat intelligence team is working really hard, gathering, vetting feeds, doing triage, sending it to the detection team… and the detection team is getting a lot of false positives.
Chris: Classic case — we still have detection rules for an IP address from a year ago. We see a hit on that IP address today, but that IP now belongs to a benign service. A year ago it was malicious. Today it’s fine.
Chris: So all our detection rules fire. We get false positives. The rational decision from the detection team is: let’s turn off that feed. It’s flooding us. It’s hurting trust in the detections. Alarm fatigue. It does more harm than good.
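One common way teams handle this, not something prescribed in the conversation but a widely used pattern, is to record when each indicator was last seen and expire it after a window that depends on its type. A rough sketch:

```python
# Minimal sketch: age out indicators before they reach the detection team.
# The windows below are arbitrary placeholders; in practice expiry depends on
# indicator type (hashes age slowly, IPs very quickly) and the source's guidance.
from datetime import datetime, timedelta, timezone

MAX_AGE = {
    "ip": timedelta(days=30),
    "domain": timedelta(days=90),
    "sha256": timedelta(days=365),
}

indicators = [
    {"value": "198.51.100.23", "type": "ip", "last_seen": "2023-05-01T00:00:00+00:00"},
    {"value": "evil.example.com", "type": "domain", "last_seen": "2024-06-10T00:00:00+00:00"},
]

def still_fresh(indicator, now=None):
    """Keep an indicator only while it is within its type's freshness window."""
    now = now or datetime.now(timezone.utc)
    last_seen = datetime.fromisoformat(indicator["last_seen"])
    return now - last_seen <= MAX_AGE.get(indicator["type"], timedelta(days=30))

fresh = [i for i in indicators if still_fresh(i)]
print(f"Pushing {len(fresh)} of {len(indicators)} indicators to detections")
```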
Chris: What the threat intelligence team needs to be doing is continually coming back to the stakeholders and saying, “How are you going with this? Is it accurate? Are you doing something with it? Is it helping with your decision-making?”
Chris: To me, before you start thinking about metrics — because we all want to measure — when we’re talking about that entry level, de facto CTI team, communication and feedback are the most important thing. Then review and iterate on how we’re producing our intelligence products.
Tash: So it sounds like it’s far better to produce a small amount of useful, actionable, insightful threat intelligence that people can go and apply — and take action on — than it is to generate a high volume of threat intelligence that ends up being ignored or isn’t actionable.
Chris: Yeah, exactly. If we are very noisy as a threat intelligence team, and we’ve got all this low-grade stuff, when we send through that report that’s really important — the one we need to act on right now — we have the problem that it could get lost in the noise.
Chris: If we don’t have a way of signalling that “this is not like all the other things we produce — this is really important,” chances are other teams will just stop looking. Just like anyone does with email — when your inbox is flooded every day, and there’s one must-read buried in there — much less chance people are going to pay attention to it.
Tash: One thing that could be cool is giving — let’s say an analyst is listening to this discussion — giving them one small next step they could take to get started with CTI. They’re listening on their way home from work, they’re going to go into work tomorrow… what’s one small thing they could do?
Chris: I would recommend maybe two things.
Chris: First, as of the time we’re recording this, a new cyber threat intelligence maturity model has just been published. It’s called CTI-CMM — the Cyber Threat Intelligence Capability Maturity Model.
Chris: I’ve read my way through it and it’s a really good roadmap for the different domains of threat intelligence — how do we get from level zero, that very ad hoc way of working, to the point where we have a very well-defined, mature, well-funded, well-resourced capability?
Chris: Don’t expect to get to the top level — that takes years — but it will paint the picture of what that whole spectrum looks like.
Chris: Second thing I’d recommend: getting involved in some of those professional communities. Slack, Discord, LinkedIn — despite all the noise and thought leadership on there — sometimes it’s an excellent resource when people are talking about new emerging threats, if you follow the right people.
Chris: Use those networks. Your colleagues in your industry, in your country, in your company make excellent filters in their own right. The more people you see talking about a thing, the stronger the indication that it should be a priority.
Chris: Naturally we have to be thinking about what our organisation does too, but if you are very time poor and you’re doing operational-level work alongside this ad hoc CTI work, that’s a really good place to start — use your networks to get things on your radar.