7 MISP Best Practices: Lessons from Effective Threat Intel Teams
February 21, 2024

MISP is a powerful open source threat intelligence and sharing platform used by countless SOC teams around the world.

Getting a barebones MISP instance up and running is well within the skill set of most SOC teams. Download MISP, run it on a VM, and log in to the MISP admin console using default credentials… all within about 10 minutes.

That part is easy. Now for the hard part: how do you get from a barebones MISP install to actually using MISP to solve real-world cybersecurity problems? Making that leap can be much more complex and challenging than it may seem on the surface.

What we see over and over again is that organisations get to the point of running MISP, but struggle to get it running reliably. That becomes a big challenge, because the SOC team is already busy doing a thousand other things and may not have time to give their MISP instance the attention it needs.

I’ve yet to meet the person on a cybersecurity team who has a lot of time on their hands.

Maintaining and babysitting a system is generally not a thing you want to be doing on a SOC team. That’s where the idea for CloudMISP came from.

In many ways, we founded Cosive because we didn’t want to be stuck at the coalface of incident response. We wanted to be able to focus on engineering and building tools.

CloudMISP fits perfectly into our mission. We’ve engineered a system to make it easier to use MISP effectively. But what are the best practices that help teams get maximum value from MISP, and how do they help?

MISP Best Practices

1. Carefully upgrade MISP to the latest release as soon as possible

MISP has a very high throughput of releases, about 12 to 15 per year. As with any piece of software, those updates can include security fixes. It’s a story as old as time at this point: serious data breaches can happen when systems don’t get patched.


Equifax allegedly failed to patch one of their web application libraries and suffered a massive data breach as a result. Patching isn’t exciting or glamorous work, but keeping your applications up to date with security fixes is really important.

The second reason to keep MISP in line with current releases is that they often contain important bug fixes, improving the reliability of your operations and automations.

The third benefit is that MISP upgrades often contain useful new features. One really interesting new feature that we’re looking forward to digging into more is MISP workflows.

Previously we’d develop our own scripts and use tools like ZeroMQ to intercept activity happening within MISP and take other actions based on triggers. With MISP workflows, analysts can go into the platform and design threat intel workflows themselves. For example, if we know that threat intel from a particular organisation is very reliable, we could create a workflow to take that and push it straight out to our security applications.
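As a rough illustration of that older script-based approach, here is a minimal sketch of a listener on MISP’s ZeroMQ feed, assuming the ZMQ plugin is enabled on its default port (50000). The hostname, the trusted organisation name, and the downstream action are all hypothetical placeholders.

```python
# Minimal sketch: subscribe to MISP's ZeroMQ feed and react to new events
# from a trusted organisation. Assumes the ZMQ plugin is enabled on the
# default port 50000; "Trusted Org" and the MISP host are placeholders.
import json
import zmq

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://misp.example.org:50000")              # hypothetical MISP host
socket.setsockopt_string(zmq.SUBSCRIBE, "misp_json_event")  # event notifications only

while True:
    # Messages arrive as "<topic> <json payload>"
    topic, _, payload = socket.recv_string().partition(" ")
    event = json.loads(payload).get("Event", {})
    if event.get("Orgc", {}).get("name") == "Trusted Org":
        # Here you would push indicators straight to downstream security
        # tooling, e.g. a blocklist or SIEM ingestion endpoint (not shown).
        print(f"New event from trusted org: {event.get('info')}")
```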

Another recent feature with useful applications is TAXII 2.1 server push integration. Following recent collaboration between CISA and MITRE, MISP can now be configured to push threat intel to TAXII 2.1 servers.
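MISP’s own TAXII push is configured inside the platform rather than scripted by hand, but to make the concept concrete, here is a minimal sketch of what a TAXII 2.1 push looks like on the wire, using the OASIS taxii2-client library. The server URL, credentials, and indicator are hypothetical placeholders.

```python
# Minimal sketch: push a single STIX 2.1 indicator to a TAXII 2.1 collection.
# Everything specific here (server, credentials, collection ID, IP) is a
# placeholder used purely for illustration.
import uuid
from datetime import datetime, timezone

from taxii2client.v21 import Collection

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "valid_from": now,
    "pattern": "[ipv4-addr:value = '203.0.113.10']",  # documentation-range IP
    "pattern_type": "stix",
}

collection = Collection(
    "https://taxii.example.org/api1/collections/<collection-id>/",  # hypothetical
    user="analyst",
    password="changeme",
)
collection.add_objects({"objects": [indicator]})
```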

Keeping up to date with the latest releases of MISP means you can take advantage of these better workflows within the platform, and give these tools to your analyst team so they can process this incoming data more effectively, efficiently and reliably.

So… why doesn’t every team run the latest version of MISP?

For SOC teams, upgrading MISP isn’t always as simple as pressing a button. For all their benefits, upgrades can introduce new behaviour and changes that may impact existing workflows.

Even though MISP’s admin console includes a button that updates the platform automatically, for CloudMISP we decided to use a managed update process instead, to make upgrades safer and more predictable.

We have control over the componentry and versions running on each of our CloudMISP instances. We do updates in a non-production environment first, then run them through both automated and manual hands-on tests to understand any new capabilities. We also examine the exact code that was added between the previous release and the latest release and assess the likely effects.
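A minimal sketch of the first step of that review, assuming a hypothetical MISP URL and API key: check the version a running instance reports against the latest tagged release on GitHub, and produce the compare link for reviewing the code that changed.

```python
# Minimal sketch: compare a running MISP instance's version against the
# latest tagged release on GitHub before planning an upgrade.
import requests
from pymisp import PyMISP

MISP_URL = "https://misp.example.org"   # hypothetical instance
MISP_KEY = "YOUR_API_KEY"               # hypothetical API key

misp = PyMISP(MISP_URL, MISP_KEY, ssl=True)
running = misp.misp_instance_version["version"]

latest = requests.get(
    "https://api.github.com/repos/MISP/MISP/releases/latest", timeout=10
).json()["tag_name"].lstrip("v")

if running != latest:
    print(f"Upgrade candidate: {running} -> {latest}. Review the diff first:")
    print(f"https://github.com/MISP/MISP/compare/v{running}...v{latest}")
else:
    print(f"Already on the latest release ({running}).")
```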


We ask questions like:

Will this break an integration that was relying on functionality in the previous release?

Is this a new capability that we can start to use in our threat intel team to get better throughput, or gain a better way of describing things, or build more automation?

Teams could run through this testing and analysis process themselves, but they often don’t have a few days spare to analyse every single new MISP release as it ships. We do this as standard for all of our CloudMISP customers: we share release notes, our analysis of the new release, and the advantages and potential risks of updating.

Ultimately, we’ll typically recommend staying in line with the latest MISP versions. This process means the upgrade can be carried out as carefully and methodically as possible.


2. Implement robust monitoring and maintenance

In the wild, you’ll often come across SOC teams running an unmaintained MISP instance on a VM under someone’s desk.

Eventually that instance is going to run out of disk space and memory as the database grows. That’s an operational and monitoring problem.

If an organisation wants to run MISP in-house, they need the capability to monitor and maintain the instance or it’s going to be unreliable. It needs things like logging, disaster recovery, perhaps even high availability.

The best threat intel teams run MISP like they’d run any important production system: on a scaffold of monitoring and regular maintenance.
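To make the idea concrete, here is a minimal sketch of the kind of basic health check you might schedule against a self-hosted MISP VM. The paths, thresholds, and alerting hook are placeholders; in practice you would wire checks like this into whatever monitoring stack you already run (Nagios, Prometheus, CloudWatch, and so on).

```python
# Minimal sketch: warn when a MISP VM is running low on disk or memory.
import shutil

DISK_PATHS = ["/", "/var/lib/mysql"]   # app and database volumes (adjust to your layout)
DISK_WARN_PCT = 85
MEM_WARN_PCT = 90

def disk_used_pct(path: str) -> float:
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def mem_used_pct() -> float:
    # Parse /proc/meminfo (Linux); values are reported in kB.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])
    available = info.get("MemAvailable", info["MemFree"])
    return 100.0 * (info["MemTotal"] - available) / info["MemTotal"]

alerts = []
for path in DISK_PATHS:
    pct = disk_used_pct(path)
    if pct > DISK_WARN_PCT:
        alerts.append(f"Disk on {path} is {pct:.0f}% full")
if mem_used_pct() > MEM_WARN_PCT:
    alerts.append(f"Memory usage is above {MEM_WARN_PCT}%")

for alert in alerts:
    print(f"WARNING: {alert}")   # replace with your alerting integration
```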

3. Avoid the volume trap by focusing on high-quality data

Some SOC teams fall into the trap of thinking a high volume of intel is inherently good. It’s easy to assume that more threat intel equals more visibility over threats, but a lot of that threat intelligence may be of poor quality for detection purposes.


If I pass a huge volume of threat intelligence into my detection systems, it’s guaranteed to create a lot of false positives and incorrect detections. A classic case would be Google’s public DNS, 8.8.8.8. In and of itself, it’s not malicious, but it could be used by malicious software. If I go out and detect 8.8.8.8 in my logs, I’ll probably find it. But do I want my SOC to be spending all their time triaging alerts for things that aren’t malicious?
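MISP’s built-in warninglists cover exactly this problem. As a minimal sketch, assuming a hypothetical MISP URL and API key, the restSearch API can be asked to filter out known-benign infrastructure (such as 8.8.8.8) server-side before anything reaches your detection systems.

```python
# Minimal sketch: pull only actionable, detection-ready indicators from MISP,
# with warninglist hits (Google DNS, well-known benign domains, etc.) excluded.
from pymisp import PyMISP

misp = PyMISP("https://misp.example.org", "YOUR_API_KEY", ssl=True)  # placeholders

result = misp.search(
    controller="attributes",
    to_ids=True,                  # only attributes flagged for detection
    publish_timestamp="7d",       # published within the last 7 days
    enforce_warninglist=True,     # drop indicators matching MISP warninglists
    pythonify=True,
)

for attribute in result:
    print(attribute.type, attribute.value)
```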

There’s a very common idea in cybersecurity called alert fatigue. When everything is an alert, the alerts that are of critical importance can easily get lost in the noise. Tapping into the right data matters more than the volume of data you process.

Every organisation needs to be selective about which feeds are best for the kinds of threats they typically face. Alerting on the right data is even more crucial. We only want to be responding to things that we can do something about and that actually have an impact. That’s often the piece of this puzzle that takes lots of time, tuning, patience and expertise to solve.

4. Remember that the tools ultimately exist to serve analysts

Sometimes, too much emphasis is placed on tooling being able to solve all the problems in cybersecurity. We dream of having magic black boxes that find all the badness and pass it through to our block lists and log detection systems, so that our problems are solved.


Machine learning is good at training against historical datasets, but it’s still not as good as a human at determining what the attacks of tomorrow are going to look like. Analyst intuition and wisdom are still the most effective weapons we have in working out where attacks are coming from and where they’ll be coming from next.

How might an attacker think? What should we be looking for already, and defending against, before we’ve actually seen it? There’s an element of creativity and looking forward that we still rely on analysts to do.

A lot of organisations are putting money into tooling. Being able to house, process, and automate a lot of this data processing is important. But it has to be done in the service of analysts.

5. Have a clear triage process for your analysts

Some MISP workflows can be fully automated. For others, you need to have an analyst in the loop to look through the report and apply their own wisdom about whether it’s relevant and whether detecting the indicators could have side effects like blocking legitimate activity.

Threat intel reports are generated by teams around the world, by different people in different industries, with different levels of experience. Every analyst has different thresholds for what they include in these reports and what they leave out. You may mostly agree with their opinion, but any difference in opinion on what you need to care about is going to matter.

Again, that’s where you need a team of analysts providing oversight on what automated systems are doing.

Having a clear triage process in place can help you decide what can be totally automated, and what requires analyst oversight.
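As a rough sketch of what such a triage rule might look like in code, the snippet below routes events from a vetted source organisation to automated processing and tags everything else for analyst review. The organisation name, review tag, and MISP connection details are hypothetical placeholders; use whatever triage tag your team standardises on.

```python
# Minimal sketch: auto-process events from trusted orgs, tag the rest for review.
from pymisp import PyMISP

misp = PyMISP("https://misp.example.org", "YOUR_API_KEY", ssl=True)  # placeholders
TRUSTED_ORGS = {"Trusted Sharing Partner"}        # hypothetical vetted source org
REVIEW_TAG = 'workflow:state="incomplete"'        # placeholder triage tag

for event in misp.search(controller="events", published=True,
                         publish_timestamp="1d", pythonify=True):
    if event.orgc.name in TRUSTED_ORGS:
        # Safe to hand straight to the automated distribution pipeline.
        print(f"Auto-process: {event.info}")
    else:
        # Flag for a human analyst to assess relevance and side effects.
        misp.tag(event.uuid, REVIEW_TAG)
        print(f"Queued for analyst review: {event.info}")
```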

6. Define your threat intel products

Threat intel products are use cases for threat intel within your organisation that produce a beneficial business outcome.

Here’s an example:

One of the consumers of threat intelligence within an organisation might be the network detection team. One of their tasks is to look for malicious activity on the network. To do this well, they need to be armed with information about threats that are being seen in the wild. They want a machine-to-machine feed of things that have been verified by the threat intelligence team as credible threats, with as few false positives for benign infrastructure, files, domain names, and URLs as we can possibly manage.

In this case, arming the network detection team with high-quality threat intelligence makes them more effective at detecting threats.


Another consumer of threat intelligence products could be the executive team within an organisation. They may want a more strategic, 50,000-foot view of the threats facing their industry, such as data breaches.

If threat intelligence tells us that data breaches are a credible threat, that raises the question: as an organisation, where should we be spending our money to avoid a data breach, and what would be the most effective controls based on how these threats unfold?

Using threat intel to inform strategic decisions is another potential threat intel product.

If you think about the products you want to generate out of threat intelligence and out of MISP, you can then figure out how they integrate with all the other processes within your organisation. Sometimes, that bit is quite tricky. You often have to negotiate with other teams. You’ll tell them that you’re going to start collecting all these threat intelligence feeds - but how does this produce beneficial outcomes for them? For threat intel to be useful to the SOC, the executives, and other consumers of the information, you need to understand what they do on their side of the fence to make the organisation safer.

7. Automate as much as possible with machine-to-machine feeds

There are vanishingly few experienced threat intel analysts in Australia and New Zealand. There are some excellent people in that field, but there just aren’t enough of them to go around. If those people are losing time copying threat intel into emails and then CCing a whole bunch of people, that’s not efficient.

We need to configure machine-to-machine feeds of information wherever we can, so that once intel has been verified to a high level of confidence, whether by software or by the wisdom of an analyst, we can ship the data around the organisation in a timely fashion.
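As a minimal sketch of one such feed, the snippet below exports verified, detection-ready indicators from MISP as a plain-text blocklist that downstream systems can poll. The tag name, output path, and connection details are hypothetical; restSearch also supports output formats such as csv, suricata, and snort if your tooling prefers those.

```python
# Minimal sketch: publish vetted indicators as a machine-readable blocklist.
from pymisp import PyMISP

misp = PyMISP("https://misp.example.org", "YOUR_API_KEY", ssl=True)  # placeholders

attributes = misp.search(
    controller="attributes",
    tags=["verified-by-cti"],                    # hypothetical "analyst vetted" tag
    to_ids=True,                                 # detection-ready indicators only
    type_attribute=["ip-dst", "domain", "url"],
    enforce_warninglist=True,                    # keep known-benign values out
    pythonify=True,
)

with open("/var/www/feeds/blocklist.txt", "w") as feed:   # hypothetical path
    for attribute in attributes:
        feed.write(f"{attribute.value}\n")

print(f"Wrote {len(attributes)} indicators to the machine-readable feed.")
```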

Timeliness is one of the key criteria of threat intelligence. Threat intel about how a hack happened six months ago has some value, but far less than intel that lets us get defences in place within a day or two of the attack.

Final Words

Spinning up open source software like MISP is a little like buying an axe.

Buying an axe is easy, but the tool will quickly become blunt and unreliable without ongoing maintenance and care.

In many ways, the quality and usefulness of a tool is defined by the effort put into maintaining it.

Many teams can follow the MISP best practices outlined in this article, but not all teams will have the time, experience, and resources to do so.

If you’re part of one of those teams, consider leveraging our managed MISP offering, CloudMISP.

We’ll keep your MISP instance running in peak condition so your analysts can focus on using MISP to solve genuine security problems.