February 21, 2024

In this interview, Cosive’s Managing Director Kayne Naughton shares what he’s learned about threat intelligence programs across a career spanning vulnerability development, systems administration, and threat intel work in the financial sector.

Kayne is one of the co-founders of Cosive. Founded in 2015, Cosive specialises in trying to solve the difficult problems in security for Australian and New Zealand organisations.

What are the benefits of having a threat intel program?

Unfortunately a lot of organisations approach their security strategy like a shopping list. They focus on the tools they’re going to buy, or use, but that’s not the same thing as having a strategy or a plan.

You need to know what you're planning on defending against and how good your current defences are. You also need to understand the delta between those things. Unless you know what's going on out there in the world you’ll end up being led by sales people or marketing on where you should be focused. And that's not necessarily accurate.

Threat intel was a bit of a thing that was pushed by everyone not that long ago. It was a trend, similar to machine learning or doing security on the blockchain. You end up chasing fads. And quite often it's product focused. Whereas if you go back to fundamentals you can look at the risks to organisations like your own. You can learn about things that have happened to organisations similar to yours, whether you compare them on geography, sector, or size. If you can find out what happened to those organisations, what worked, what didn't work, then you can ask: how well suited would we be to defend against this? And quite often those kinds of exercises are cheap in terms of expenditure, but expensive in terms of manual effort.

For example, you could decide to get rid of every externally facing remote desktop login from all of your Windows servers. You won’t have any of them facing the internet. That doesn't require buying a blinky light machine but it’s going to involve a lot of pain in terms of finding all the right areas and making them change to a better business practice. But it also massively reduces your chance of getting ransomwared if someone has a bad password.
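
As a rough illustration, a check like that doesn't need any special tooling. Here's a minimal sketch, assuming you keep a list of your external IP addresses (the addresses below are hypothetical documentation-range placeholders, not from the interview):

```python
# Minimal sketch: flag any externally reachable host still answering on RDP (TCP 3389).
# EXTERNAL_HOSTS is a placeholder; in practice feed in your organisation's real external ranges.
import socket

EXTERNAL_HOSTS = ["203.0.113.10", "203.0.113.11"]
RDP_PORT = 3389


def rdp_exposed(host: str, timeout: float = 2.0) -> bool:
    """Return True if the host accepts a TCP connection on the RDP port."""
    try:
        with socket.create_connection((host, RDP_PORT), timeout=timeout):
            return True
    except OSError:
        return False


for host in EXTERNAL_HOSTS:
    status = "RDP reachable from the internet - follow up" if rdp_exposed(host) else "no RDP exposure detected"
    print(f"{host}: {status}")
```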

Another example is getting rid of local administrator rights. If you look at the commonalities between various attacks, the attackers ultimately get onto a machine somehow and then escalate their privileges. If people are running as a local administrator, that's really easy. You can help inform all of these decisions with threat intelligence. It means you're actually focusing on what’s going to matter to you.
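
A sketch of how you might start getting visibility on that, assuming Windows endpoints where the built-in `net` command is available (in practice you would gather this through your endpoint management tooling rather than ad-hoc scripts):

```python
# Minimal sketch: list members of the local Administrators group on one Windows machine
# via the built-in "net localgroup" command, to see which day-to-day accounts appear there.
import subprocess

result = subprocess.run(
    ["net", "localgroup", "Administrators"],
    capture_output=True,
    text=True,
    check=False,
)

print(result.stdout)
```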

It's not an art, it's not a science, it's somewhere in between - it’s called tradecraft in intelligence communities. It's looking at what's happening over the horizon to try to prepare yourself for what you're seeing now.

I tend to be of a mind to treat intelligence strategically, but I think a lot of people focus on finding very precise, but also what I call brittle, things like IP addresses. If you get a report from six months ago about an attack against someone else, you go, “OK, here's what we've learned about it: this IP address was used by the attacker.” By all means, look back at your logs from six months ago, but if that's what you're relying on for current detection it’s very brittle. Why would the attacker be using the same IP address they used six months ago? It's very unlikely.

It's much cheaper and easier for the attacker to just use another system at that point. You might see overlap if you're caught up in the same campaign as somebody else was at a previous time, but realistically, the TTPs - the tactics, techniques, and procedures that they use - are the things you can learn from. A lot of people drill down that little bit too close and look at the exact tools and the exact versions that an attacker used. They need to zoom out a little bit.


Where do threat intel programs typically go wrong?

There are a few traps that people fall into. The first one is priorities. If you've got multiple priorities, you don't really have any priorities.

Threat intel is an expensive activity to go into, which is something we often warn people about. It usually involves your generalist staff who can do a bit of everything. They tend to be the ones who get drawn into whatever is deemed to be a high priority at the time. I think quite often intel doesn't get done because those folks are busy scrambling to deal with something that the board wants to know about, or an investigation, or a SOC uplift.

Another common problem I see is organisations collecting low-grade indicators like hashes and IP addresses and so on, and then keeping them forever. They have a giant list of every phishing site the world's ever seen somewhere that they’re matching against. They have an alert fire whenever someone goes to a domain that hosted a phishing site four years ago, but is now a florist website or something. Those types of things tend to wear people down. You need to be able to downgrade the reliability or applicability of different types of indicators over time. Maybe you get an alert about a phishing site two weeks after it was used for phishing. You might want to know that it matches just so you can exclude it, or it might provide some context, but you don't want to be freaking out and sounding an alarm. You want to be careful what you provide to people and have the ability to age things out.

Just generally give context, particularly when you're dealing with technical indicators. This is a trap I see all the time from centralised government or police bodies, or whatever else. When people put out advisories, they'll have a description of a bunch of things that have been going on, and then an appendix, which is a giant list of files or IP addresses or other odds and ends. The context of those particular things is not always clear. And nearly universally there'll be some sort of public VPN endpoint included in there. It's the context that makes something an indicator of badness, depending on a period of time and a couple of other things. It might be bad. It might be perfectly normal. It's going to make people freak out if they don't understand that. You need to provide context rather than just a list of addresses. You need to give a time window or a reason why, because whether something is good or bad always depends on time.
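
One way to picture this is to treat an indicator as a record with an explicit validity window, some context, and a confidence that decays as it ages, rather than a bare IP address or domain. A minimal sketch follows; the field names, severity labels and decay period are illustrative, not taken from any particular standard:

```python
# Minimal sketch: an indicator carries its own context, validity window and a confidence
# that ages out, so a four-year-old phishing domain doesn't fire a top-priority alert today.
from dataclasses import dataclass
from datetime import date


@dataclass
class Indicator:
    value: str          # e.g. a domain or IP address
    kind: str           # "domain", "ip", "hash", ...
    context: str        # why it was considered bad, and for whom
    severity: str       # e.g. "janitorial", "investigate", "call-the-CISO"
    valid_from: date
    valid_until: date

    def confidence(self, today: date) -> float:
        """1.0 inside the validity window, decaying towards 0 afterwards."""
        if today <= self.valid_until:
            return 1.0
        age = (today - self.valid_until).days
        return max(0.0, 1.0 - age / 90)  # illustrative 90-day decay


phish = Indicator(
    value="florist-example.com",
    kind="domain",
    context="Hosted a credential phishing page targeting bank customers",
    severity="investigate",
    valid_from=date(2020, 3, 1),
    valid_until=date(2020, 4, 1),
)

# Long past its window, so it can still match for context without sounding an alarm.
print(phish.confidence(date.today()))
```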

There are also degrees of severity. Is this just annoying? Is it janitorial - a matter of cleaning stuff up, computer hygiene - or is it at the level of panicking and calling the CISO? If you're providing threat intel to people without that information, people end up spinning their wheels, particularly if you're a trusted entity. You might have classed something as a low-fidelity thing that you think people should look for, but because you’re a trusted entity they're going to panic if they see it. That's especially a risk for teams publishing externally, and more so with technical indicators.

The other big one where you can totally blow your credibility is if you go all Chicken Little, stamping your feet and saying that the organisation must patch this within 24 hours or the sky will fall, we’ll be doomed, we'll be bankrupt, we’ll be levelled to the car park. If nothing happens once, you can get over that. But if it happens again your credibility will be shot. You can't throw your weight around and make demands.

One way to build credibility is to separate your opinions from your facts. You can say things like: this was reported by this organisation and we have a medium level of confidence that it's accurate. In many cases you can add a lot of value by highlighting things that the organisation probably doesn’t need to worry about because they’re unlikely to be accurate. You’ve got to be clear about what is fact and what is your opinion and be precise about the difference in your writing. You want to talk about things like likelihood. Very rarely is there anything that you know to be true. It might have a high probability of being accurate, but especially if it's coming from another party it’s not guaranteed.

Things like double reporting happen a lot. You might hear from four different people about this particular type of attack so you start panicking. But it turns out all four of those people heard it from the same person and then shared it on. They were trying to be helpful, but it caused an outsized response. You have to know what you know and know what you don't know, to paraphrase Rumsfeld.

What should threat intel programs focus on?

Let’s assume you’ve got a SOC at a medium to large sized organisation. You've got 3,000 alerts coming in each day from your SIEM tools or whatever you're using for alerting. How are you prioritising them? If you've got, say, five analysts working through 2,000 or 3,000 alerts, your staff can't do that many. It doesn't work that way. You can't do 400 alerts a day with any sort of thinking involved. So how do you prioritise those alerts? Maybe you’re doing it chronologically, alphabetically, or just whatever the person happens to pick off the top. Either way, you're going to end up with hundreds if not thousands left at the end of the day.

A potentially high value focus is to use your threat intelligence program to do some sort of enrichment, so that you can actually identify things that are critical and categorise things by level of urgency. You want to cluster things together, so that instead of looking at 3,000 individual alerts you’re looking at 10 classes of alerts, potentially grouped by what the problem is. Maybe problems to do with your password policy, or something that's mistuned that you can go back and deal with later. Or things that might be valuable but where you don't have the telemetry you need to actually do something about them. Being able to uplift and enrich the data that teams are working with is extremely valuable.
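
As a minimal sketch of that kind of grouping, assume the alerts arrive as records with a rule name and a severity (the field names and sample data below are made up for illustration):

```python
# Minimal sketch: collapse thousands of individual alerts into a handful of classes
# so analysts triage a short list of problems instead of 3,000 rows.
from collections import defaultdict

alerts = [
    {"rule": "weak_password", "severity": "low", "host": "ws-014"},
    {"rule": "weak_password", "severity": "low", "host": "ws-207"},
    {"rule": "rdp_brute_force", "severity": "high", "host": "dmz-rdp-01"},
    {"rule": "mistuned_ids_signature", "severity": "info", "host": "ids-02"},
    # ... thousands more in practice
]

clusters = defaultdict(list)
for alert in alerts:
    clusters[alert["rule"]].append(alert)

severity_rank = {"high": 0, "medium": 1, "low": 2, "info": 3}


def worst(items):
    """Most severe alert in a cluster, by rank."""
    return min(items, key=lambda a: severity_rank[a["severity"]])


# Triage by class, most severe class first.
for rule, items in sorted(clusters.items(), key=lambda kv: severity_rank[worst(kv[1])["severity"]]):
    print(f"{rule}: {len(items)} alerts, worst severity {worst(items)['severity']}")
```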

The biggest mistake people fall into in regards to focus is that they end up chasing APTs (Advanced Persistent Threats). They're looking at the apex predators, the Fancy Bears and Cozy Bears, all these sorts of glamorous attackers. It’s like learning self defence by watching videos of Mike Tyson fights. You’re much more likely to be attacked by a 16-year-old Moldovan kid trying well-known passwords than an apex predator. You need to get the basics right before you fixate on top tier actors.

You mentioned the challenges of not being able to find staff, or of key staff being pulled away on other urgent work. How do you overcome that?

Keeping key staff on the program is difficult. You need good support from management. They need to be willing to bear with the pain of not being able to pull those people away from the threat intel program. Ultimately, it takes tenacity and political capital from the people who are driving the threat intel program to make sure that those staff aren't stolen here and there and having their time and focus eroded.

Finding staff for threat intel programs is difficult in itself. Quite often the people who are the best fit for a threat intel program are the ones who know how things in your organisation work, as well as having the hard technical cyber skills or previous intel experience. If they’re already internal, those types of people usually have to be taken from somewhere else. For external people, recruiting in general is hard. Trying to recruit someone who can actually do intel work broadly is very difficult.

Even inside governments most people deal with a segment of intelligence. They might be on the collection side or on the analysis side. Someone who's in a small team and doing it all is a bit of a change for a lot of those people. To be honest, we're not like America, where they have large bodies with thousands of people who are doing this day in and day out and providing a pool of talent. Australia is relatively small in this regard. Sometimes you’ll need to grow someone who’s already inside your security team into someone who can do intel rather than being able to hire someone directly off the market.

Is planning and direction a problem when you’re setting up the threat intel program?

Absolutely. There are two ways that people tend to go with it, both of which in isolation are a bit of a mistake.

There's either being extremely executive-focused, where you're just chasing the news and saying, here's what we've read about today, look at this. Alternatively, they're curating a bunch of IP addresses and hashes and so on and giving them to the security team and the SOC. In both cases, you've got one set of stakeholders that is left on their own and isn't getting any value from the program.

The other big one is being actionable. That's a key part of any intelligence program: your output has to be actionable. If the response to everything is, well, “that's nice” or “thanks”, then you can't really demonstrate any value. You're just letting people know what a problem is, but they don't really have any ability to do something about it. That's a real challenge that most people face.

How should threat intel programs manage their direction?

When we first talk to people about doing threat intel, we tell them to close their eyes and imagine they’ve reached Nirvana. You've got the ultimate threat intel program. What outcomes are you getting from it? And then work your way backwards. You’ve got an outcome; what is that outcome enabling you to do? In order to do that, what type of delivery do you need to be doing? Does it need to be short form? Does it need to be quick? Who does it go to? What sort of analysis resources do you need? What sort of data curation or tooling do you have? What sort of collection have you got? And then who can set that tasking?

Generally speaking, people need to start at what their destination should be and work their way back. One example is keeping executives engaged in what the security team is dealing with. In a lot of organisations Security is a team that rocks up every couple of years cap in hand and says, “Hi, can I have $3 million please?” The next thing executives hear about them is when there's some terrible incident that’s befallen the organisation.

It's useful being able to undertake PR-style activities where you’re sharing updates with executives and the broader team around the threats you’re seeing and how you’re dealing with them. It demonstrates that the organisation is getting value for money and keeps security front of mind. It gives Security a seat at the table. You can't always be doom and gloom. That definitely helps with people setting their program and bringing everyone along for the ride.

You can be a great resource for getting security out there and helping different teams talk to each other, but you need to be able to work with your colleagues. You can't be all ‘Secret Squirrel’ about it and keep things to yourself and not share. In some cases you need to keep things confidential, but you also need to be able to collaborate with other teams. Without this, you’re not going to have the support you need internally.


Who are the people and teams in an organisation that threat intel programs should be communicating with?

Security architects are typically a great starting point. They’re often very experienced in security, but in some cases their knowledge comes from times when they were hands-on with the tools, and they may not be hands-on currently. Keeping them updated with what you're seeing out there and what's happening is often appreciated. They're the ones who are designing the security controls for the systems that are coming in soon, or as part of uplifts. On the detection side, by all means it's useful to work with the SOC team, but they don't have the ability to impact the design. And ultimately most of these things are better factored in at design time. It's quite hard to retrofit.

Say you decide that mobile phone porting is a big problem that you're seeing at the moment.  That means you can get the security architects to go and fight for push notifications for second factor auth instead of using SMS, for example. That makes it a design decision, it's no extra effort, it just happens early on. That’s a lot better than trying to change the course of something that's already live.

It’s also important to keep executives and risk people involved. You’re not expecting them to do something about a particular vulnerability, but it’s helpful for them to have an awareness of the threat groups and get them thinking about the types of things that threat actors care about.

It’s also important to remember that what threat actors care about is not necessarily what you care about. You might be, say, doing medical imaging and have a bunch of confidential health records for people with cancer. A threat actor may just want to encrypt all your files and demand $20,000 to get them back. They're not necessarily looking to capitalise on the private information, even though that's what you care about. They don’t care what the data is, just that it’s valuable to you. A lot of people fall into the trap of thinking that nobody would bother attacking them. You have to remind people that it can be anyone.

How do you assess the effectiveness of a threat intel program?

It’s tough working out how effective you are because often you can’t see the value you provide because the thing you were protecting against never happened. If you're doing a great job it means that you didn't have to save the day because of the work you did beforehand.

I think in terms of metrics it’s really hard. One thing I think people can tune is the way they deliver their intelligence. Say you're a company that builds software and you're providing intel on the sorts of vulnerabilities that organisations like yours have been introducing, and you're trying to feed that back into your dev team so they don’t make those errors. If you write a fancy PDF with images and whatnot in it, and then someone has to go and copy and paste those bits into Jira to become a user story that gets implemented, that's a waste of time. You could just put the information directly into Jira and that'd be good.
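
As a sketch of what "directly into Jira" can look like, here's a rough example using the Jira Cloud REST API; the base URL, project key, credentials and issue content below are placeholders, not anything from the interview:

```python
# Minimal sketch: file an intel finding straight into the dev team's backlog via Jira's
# REST API instead of burying it in a PDF. URL, project key and credentials are placeholders.
import requests

JIRA_BASE = "https://example.atlassian.net"    # hypothetical instance
AUTH = ("intel-bot@example.com", "api-token")  # Jira Cloud uses email + API token

issue = {
    "fields": {
        "project": {"key": "DEV"},             # hypothetical project key
        "issuetype": {"name": "Task"},
        "summary": "Harden session handling against token replay",
        "description": (
            "Peer organisations in our sector have reported attacks abusing long-lived "
            "session tokens. Suggested action: shorten token lifetime and bind tokens to device."
        ),
    }
}

resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=issue, auth=AUTH, timeout=30)
resp.raise_for_status()
print("Created", resp.json()["key"])
```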

I certainly got the feedback when working in banking (and maybe this ages me a little) that when writing briefings for executives, the full lede needs to be visible on the first page of a BlackBerry when someone opens the email. If it required clicking through to a portal, the executives probably weren’t going to click through. Realistically, they're reading briefings on their way between meetings. You want a paragraph that gives context on what it's about, roughly where you’re positioned, and what needs to happen next, or a call to action. You can't write a giant treatise and expect an executive to read through it. You need to hit the key points within the first paragraph, much like a news story. Those aspects of delivery are relatively easy to tune.

Also try to get champions in the right areas who are willing to engage you to help you do this stuff better. People won't tell you if something is useless, usually you just won't hear from them. So you need to seek that feedback and be willing to incorporate that feedback. Don’t be too hard-headed about it. In terms of actual metrics, it's really tough. It's a lot more subjective I think.

What advice would you give to a lone person in an organisation who wants to spearhead a threat intel program?

They need to try to find someone who can be a champion internally.

If you are a financial institution you can join FS-ISAC (the Financial Services Information Sharing and Analysis Center) for a relatively modest amount, and then you’ll get a bunch of material from your peers elsewhere. That's a great way to start getting input, because data feeds are expensive and they don't cover everything. You might also be able to get data feeds from your existing vendors.

I'd say that generally collection is a problem, because you don't want to be spending all day every day reading blogs or Twitter to follow what's going on. You need some way of structuring data so that you can find things and correlate things. I've done it before using basic tooling and trying to hold it all in my head. It's no way to live. You want to be able to curate those things a little bit. A lot of people use MISP (and we do a bit of work with cloud-hosted MISP), some people use Yeti. There are a lot of commercial tools but they tend to be focused more at the top end of town. If nothing else you can use a wiki.
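
For instance, with MISP that curation can be scripted through its Python client, PyMISP. Here's a rough sketch, assuming you have an instance URL and an API key; the URL, key, tag and indicator values are made-up placeholders:

```python
# Minimal sketch: record a curated observation in MISP with context attached, using PyMISP.
# The MISP URL, API key, tag names and indicator values are placeholders for illustration.
from pymisp import PyMISP, MISPEvent

misp = PyMISP("https://misp.example.internal", "YOUR_API_KEY", ssl=True)

event = MISPEvent()
event.info = "Phishing campaign spoofing payroll portal - reported by peer organisation"
event.add_tag("tlp:amber")
event.add_attribute(
    "domain",
    "payroll-login-example.com",
    comment="Credential harvesting landing page; context only at this stage",
    to_ids=False,  # not yet an automatic blocking indicator
)

misp.add_event(event)
```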

Don't overdo the feeds. You want to be able to work out what your objectives are and then go from there rather than focusing so much on collecting all the data you can and trying to do something with it. It’s easy to drown trying to drink from the fire hose. You need to work with data that you can actually derive some meaning from, data that you can store sensibly rather than having an inbox full of a hundred thousand emails that are all unstructured.

What are the characteristics of successful threat intel programs?

I think they help everyone. They're a force multiplier. You don't need to be the hero that saves the day, but you're the one that people know they can rely on and can confide in.

I've seen plenty of threat intel programs, for instance inside financial organisations, that have a great relationship with the help desk or the incoming call centre, because they're the ones that people start calling when they’ve seen something a bit odd. If the help desk is engaged, they’re more likely to let the threat intel people know about it. The type of interaction that they can help drive can be absolutely fantastic for an organisation.

I've had situations where I told people about something and they were like, “Why didn't we get briefed on this?” and I was able to say: you were briefed about it three weeks ago. When you're that far ahead you can potentially do something about attacks before they happen to you. That's fantastic.


A useful concept here is the information pyramid. You’ve got Data near the bottom: it's just bits and bytes. Then you go up a level to Information, where it's actually got some sort of structure or meaning to it. Then you go up towards Knowledge, where we actually know things like the particular tool that a threat group is using. Wisdom is what lets you actually predict what's going to happen: you can see a pattern and say with some degree of confidence that you’re likely to be targeted by a particular threat group in the next three months.

You're never going to have a crystal ball and it's foolish to try to reach that level of certainty. But when you set an expectation with people on what is likely to happen, it’s very powerful.

You also need to communicate what you expect to happen. If you predict something but don’t tell anyone, does it matter? You need to be telling people ahead of time. That's a great way of providing value and keeping everyone engaged.

The biggest problem in threat intel is secrecy. Everyone thinks that they’re a spook, but being overly secretive can be harmful. It’s really important to work with other organisations. You might be competitors in business, but allies when it comes to security. Having good relationships is incredibly important.

A big part of the value that I’ve provided to threat intel programs is that I’ve helped out other organisations. When the time came that we needed something knocked on the head, people were willing to help us, even if it was a Friday night, because I’d done a favour for them previously - maybe I'd looked into something or alerted them to something they didn’t know about yet, even on a weekend. You can call in the favours. And that's a great spot for intelligence to be in: having a network of peers you can work with to make things happen. If you’re not helping others, then when you need help it’s essentially a cold call, and that’s difficult.


Any final words of advice for people running threat intelligence programs?

It’s a tough gig, trying to do threat intel. There are no definite ways of doing it. It depends a lot on the organisation and what the organisation does. A lot of it, honestly, is about being personable and being able to have good relationships with the right people in the right places in order to actually have an impact. It doesn't matter if you're right if everyone thinks you're an a*****e and ignores you.

You need to be able to maintain those relationships and also understand that other people have different priorities. The software team is all about hitting that next release without defects. They aren’t necessarily going to be worried about changing all their cipher suites because the threat intel team is really into quantum security at the moment. You've got to be aware of what matters to other people and temper your advice with that lens rather than alienating people. An unfortunate trend in IT, InfoSec and especially intel is alienating people by acting like you're always right. You need to be humble and work with other people as peers rather than trying to boss them around.

Written by Kayne Naughton and Tash Postolovski. Cover photo by Etienne Girardet. This is a slightly edited and condensed version of this video interview.