Tash: I’m Tash Postolovski and I’m the Technical Marketing Manager at Cosive.
Emily: Hi, and I’m Emily Etchell. I’m a Security Consultant at Cosive.
Tash: Although today Emily is a Security Consultant at Cosive, like many in the field her journey didn’t start in cybersecurity; it started elsewhere. So, your career began in biomedical engineering. Can you share a bit about your background as a biomedical engineer and what inspired you to transition into the field of medical device security?
Emily: So, I started my training as a biomedical engineer. My research project was on muscles and tissue stiffness and things like that, so almost on the medical science, biomed side of things.
At one point, after working for manufacturers, I started working at the TGA (Therapeutic Goods Administration) in Canberra, which regulates medical devices. I was lucky enough to find myself in the laboratories.
Until that point, my view had been more on the manufacturing side of designing medical devices and what problem we’re trying to solve. It was there that my supervisor mentioned cybersecurity. It hadn’t been on my radar; I’d always been really focused on the medical side of things. I’d always been really interested in that, in how the body works and how we’re going to design a device to interface with it. That mention sparked a whole waterfall of realising how much cybersecurity is a part of healthcare, and specifically medical devices, and how little, at the time, it was being focused on. And given that it’s an industry where there’s so much to think of–there’s the patients, there’s the doctors, there’s so much going on–I felt like I was at the crest of a wave: that medical device cybersecurity was about to take off. It was just really interesting, because at the TGA we were starting to look into it. There was a lot of discovery and realising what things mean when applied to a healthcare setting. So I just followed that down, and it took me further and further into cybersecurity, until it took me wholly into cybersecurity and a little bit away from medical devices, but it’s always been a big interest of mine.
Tash: What are some examples of medical devices that people in the field of medical device security tend to think about the most often?
Emily: I think one that people often get scared about is pacemakers and things like that, because it’s implanted. If something was to go wrong with that, it’s a big thing that could happen. Another big area to focus on is devices that are really commonly used in the hospital. While there might be three MRI machines in a hospital, there might be hundreds of infusion pumps or observation monitors: things that are used day to day by doctors and nurses, with many per ward. Think about how many times you might need to take someone’s heart rate throughout the day, the number of devices there are and how often they’re used, and the fact that they also form a baseline, because that’s the everyday monitoring and everyday medication supply to a patient. There’s just so much exposure to those devices that if something was to happen to them, it would be on a much larger scale.
Tash: I have this idea that there are a lot of medical devices that people may not even realise are medical devices, or could be vulnerable to attacks, for example. A lot of people probably think about pacemakers, as you mentioned, but infusion pumps are something I had never thought about as being potentially vulnerable to attacks. Are there other types of medical devices that people who aren’t familiar with the field might not realise could be a security concern?
Emily: Sometimes people don’t realise that even software can be a medical device, if it falls under the definition of being used in the diagnosis, monitoring, or prevention of a disease. From memory, besides infusion pumps, you’ve also got medication dispensing, and you’ve got at-home devices. A lot more devices are being developed now to help people have their healthcare at home. That might be a falls monitor, or even at-home vitals monitoring. These things don’t seem so dramatic–they’re not the surgical robots–but they’re really relied upon by people to have a good understanding of their health condition. Even if something didn’t have a physical impact, because an incorrect measurement was verified by a doctor later on or was caught in some way before it caused physical harm, if a medical device gave false readings and a person thought there was something wrong with them when there wasn’t, there’s a psychological stress that really is so important and can have such an impact during that journey. You really don’t want to be stressing someone out more than necessary; it does have that flow-on effect.
Tash: Another question that comes to mind is the potential motives of attackers who would be targeting medical devices. Obviously there’s disrupting the function of the device, and all the challenges and negative outcomes that can cause. Are there other things that sometimes motivate attackers to target medical devices?
Emily: I think at the moment a lot of the focus has been on personal health data, which I think is linked to the increase in ransomware attacks targeting health data, and that data being worth more because you can’t change your health data. You can cancel a credit card, but if someone knows your health history, that sticks with you. There’s also the idea that medical devices are often part of a network. That’s within a hospital, where there are all these other devices, and often you’re trying to centralise and stream that data together. All that data is collected per patient, but it’s only useful if it’s visible to doctors. Sometimes there’s a push to move it outside the hospital so that you can get second opinions from people outside who aren’t there on that day, and that transfer of information does, in a way, mean that someone could use a medical device to gain access to that wider network, and that might be used for wider intents.
Tash: On that note, what are the technical characteristics and challenges you’ve found when thinking about medical device security that might be different from securing other types of systems?
Emily: Yeah, I think unique to medical devices, remembering the context of how they’re used–in a hospital setting, a home setting, a healthcare setting of some form–there’s a bigger problem of legacy devices than in perhaps other areas. While our phones might last two years or five years before we expect to get a new one, legacy devices can continue to be used as long as they provide the medical function that they’re there for. That could be much longer than we expect. We then have the issue that there are a lot of medical devices out there that can’t be secured; their age makes it infeasible to secure them. So when thinking about security for medical devices, it really needs to be based around how to give them that longevity: if you look ten years into the future, are we able to maintain patches and software updates in a way that stands the test of time? At the same time, all the medical devices we have in Australia are thoroughly tested, reviewed, validated and verified to be doing what they’re supposed to and functioning safely. So when we start to bring in things like patches and software updates, and understanding things like when a medical device should be recalled, there’s the consideration of designing them in a way that accounts for the fact that these devices might be in use 24/7, and might not have a downtime window overnight where a patch can be installed.
There’s a lot of thought in the design process around how these are distinct from other software products, but at the same time, they’re vulnerable to the same kinds of attacks that other industries see. They’re similar to other industries, but with the added layer of the patient always being at the forefront of the mind of everyone involved in that industry.
Tash: Are medical devices typically connected to the public internet?
Emily: Yeah, it definitely varies per device. There’s definitely a push toward… or it seems like an increasing number of features that allow the user to have a bit more control over their health data and visibility of what’s happening with their device. With that comes connectivity over Wi-Fi and Bluetooth so they can look at it from their phone, from an app. While the device itself might not need that connectivity, it’s more about giving people the ability to monitor their own devices. Say you have a glucose monitor on a child with type 1 diabetes: there’s the function to allow the parent to have a bit more oversight if they’re able to monitor the data over Bluetooth or Wi-Fi on their own device. So, there’s definitely increased network connectivity. It might not be internet access, but even if it’s just Bluetooth, that border is extended beyond the device itself.
Tash: It sounds like that classic tension between usability, making data and functionality available, versus locking systems down as much as possible, and where you find the right balance.
Emily: Yeah, definitely. What I always find interesting is that, in the same way that you have proprietary software and then a group develops an open source version of it with similar functionality, that also happens in medical devices, which I remember finding really interesting.
There are different groups. Say, the artificial pancreas has a few groups; one of them is OpenAPS. What they do is try to provide an artificial pancreas for people who are interested in that: people with type 1 diabetes who want continual monitoring of their blood glucose levels. They saw a gap in what was available to them, and they created an open source version. The idea is that it’s open source, transparent, and available for others, and it falls into the same area of balancing the benefits with the risk. Because it’s open source, maybe it will have more eyes on it, more people who can provide security feedback. But on the other side, it becomes difficult when you look at the regulations that medical devices go through, and the things manufacturers are required to do to keep people safe. If you start amending those devices, where does that sit? So there are a lot of interesting spaces, I think, with how medical devices are just another device.
Tash: You mentioned patching medical devices earlier. In general, is it easy to patch most medical devices? Are some unpatchable once they’re shipped and in use? What does that landscape look like in the industry?
Emily: Yeah, it’s interesting because the TGA has medical device cybersecurity guidance–I think the latest edition is from 2022–and that’s constantly being updated to stay at the forefront of what’s expected in medical device cybersecurity. One thing that’s expected is that manufacturers will keep a software bill of materials, so they can list out and have an idea of all the components they should be keeping secure, and, from that, have a patch program, so they can show they have ways to update and respond to any security vulnerabilities that might pop up. Devices that aren’t patchable fall under legacy devices. For devices going forward, patchability is such a necessary part of keeping a device supported security-wise, and all medical devices should have that considered in their design process.
Tash: To wrap up, I’m curious to know how your time thinking about medical device security influences how you think about cybersecurity today. Obviously, your role today at Cosive is much broader and you don’t work directly on medical device security anymore, but have you taken some lessons with you or certain perspectives on the industry that you apply to your work today?
Emily: Yeah, I really love the idea of looking beyond the security problem in front of you to the context of how it’s used. If you’re pentesting and the goal is to gain access somewhere, look beyond that and put everything in the context of how it’s used. If I’m pentesting something, I might find a few small vulnerabilities, but really placing those in context matters: if this is a website that doesn’t have banking functions, that’s really just purely informational, that can really change how the end user responds to any findings you may have. It’s always so important to have the context. So yeah, it’s definitely helped me realise that things are never in isolation, and you really need to consider where what you’re looking at sits in the wider space.