WE'RE IN!

Demystifying OT Cybersecurity with Danielle Jablanski

Episode Summary

The operational technology (OT) computer networks that support life as we know it are increasingly coming under threat. But despite the proliferation of malware aimed at critical infrastructure, Danielle Jablanski isn’t running for the hills. As an OT cybersecurity strategist for Nozomi Networks, Danielle helps critical infrastructure organizations understand and prioritize digital risks, whether they stem from a lack of visibility into industrial environments or a sophisticated cyberattack from a foreign nation-state.

Episode Notes

The operational technology (OT) computer networks that support life as we know it are increasingly coming under threat. But despite the proliferation of malware aimed at critical infrastructure, Danielle Jablanski isn’t running for the hills. As an OT cybersecurity strategist for Nozomi Networks, Danielle helps critical infrastructure organizations understand and prioritize digital risks, whether they stem from a lack of visibility into industrial environments or a sophisticated cyberattack from a foreign nation-state. 

Don’t miss the latest episode of WE’RE IN! to hear Danielle’s insights into industrial control systems (ICS) risk management, including the recently disclosed COSMICENERGY ICS-focused cyberthreat. 

----------

Listen to learn more about: 

* What makes the ICS security field “niche but not nebulous”

* How Danielle’s background in nuclear weapons policy informs her approach to cyber incident planning

* Why so few critical infrastructure operators know where equipment with known vulnerabilities may exist on their networks

* Hacking satellites in space

Episode Transcription

[00:00:13] Blake Sobczak: Hello and welcome to We're In!, a podcast that gets inside the brightest minds in cybersecurity. I'm your host, Blake Sobczak, and I'm excited to welcome self-described industrial control systems obsessive Danielle Jablanski to the show today. She's an operational technology cybersecurity strategist for Nozomi Networks, as well as a non-resident fellow at the Atlantic Council's Cyber Statecraft Initiative. I expect we'll cover a lot of ground today, from nuclear security to the recently uncovered COSMICENERGY ICS-focused malware, so stay tuned. But first, let's hear a word from our sponsor. 

[00:00:49] Sponsor: Attackers scan your systems daily. You just don't get the report. Synack's Security Testing Platform stands out by drawing on a trusted network of global security researchers. From web apps to headless APIs, our platform helps you find and fix gaps in your security posture. Learn more at synack.com. That's S Y N A C K dot com. 

[00:01:12] Blake Sobczak: So welcome to We're In!, Danielle. Thank you so much for joining us. Really appreciate having you on the program here. I wanted to jump right into a, a bit of a juicy topic: malware that's specifically targeting energy systems and other critical infrastructure. Now, I know it's rare, but it's a really interesting subject, and one that tends to capture the imagination, I feel like, of the cybersecurity community and even the general public. So I know Mandiant, I guess, uh, which is now part of Google Cloud, not too long ago issued this new report about COSMICENERGY malware. I don't know how they come up with these nicknames, but I guess it's evidently originating in Russia. What can you tell me about that? Should we be heading for the hills?

[00:01:50] Danielle Jablanski: Yeah. So, it's interesting that you mentioned that there's not so many of these that often. Uh, that also means that there's kind of like a race within the field to be the first to comment or the first to publish. This one is actually tied to some known exploitation in the past. I don't think it's a run-for-the-hills type scenario. A lot of really, really sophisticated researchers, more technical than me, have come out about the need for continued monitoring and pattern recognition and signature recognition. But this isn't any type of, you know, zero-day exploit, uh, affecting the servers that are involved. It speaks to kind of a broader, I would call it not an issue in OT and ICS security, but a perspective, that recognition maybe isn't good enough, right? Pattern recognition with so few incidents means that we can't do the exact type of security management that IT can do, where we just copy and paste exploits and we know and recognize the patterns because they are copied and pasted between different infrastructures. That's not the case in OT. So the real concern is: how do we develop intuition into these signatures, these patterns, this recognition, in a way that actually does proactive and preventative security, which is so difficult in this space? That's really the conversation I like to have. I don't think that this SQL issue is one to panic over, but, you know, the nature of reactive security in OT is something we should be concerned about. 

[00:03:06] Blake Sobczak: I was gonna say, you know, how significant is this sort of news for the work that you do? Right? I know you're joining us from Nozomi Networks, where, as I understand it, your, your title is OT Cybersecurity Strategist.

Do I have that right? Yes. Perfect. And OT, for listeners unfamiliar with this, uh, rarefied world, refers to the operational technology that keeps the lights on and the water flowing and everybody happy and working in a functioning society here. So yeah, talking about this ICS-specific malware, how significant is it for the work that you do?

[00:03:35] Danielle Jablanski: It's very significant, and it's under-researched, I think, and underrepresented in the broader cybersecurity community. The real concern for OT assets and equipment is that the vendor systems themselves may not have security features built in or enabled, and/or those systems or equipment might not be serviced by the original vendor, the OEM, and they also might not support things like antivirus or agent-based security.

And so, when we think of these vulnerabilities, we think, oh my gosh, if that's true for legacy systems, then every exploit's gonna have this outsized impact on that technology. And that's not necessarily the case, because there are so many different types of hardware and so many different software configurations on these different systems.

It's really difficult to understand the impact of any vulnerability. So this one? Not a huge impact. You know, some vulnerabilities impact a small number of assets and devices. Some affect, you know, a single historical, unpatched system, right? So if it's older than a certain firmware update or something like that. Some only apply to a specific interface or install base used in legacy systems.

And then, like I said before, others are difficult to automate. So even if it's a high risk, it requires the victim or end user to interact with that exploit mechanism, which, again, doesn't make it not worrisome, but it makes it less likely to be widespread, right? It's not gonna be a botnet attack if somebody has to interact with it, an individual, or a human error.

So it just gets really complex and convoluted. Um, but like I said, a lot of great research, a lot of experts that, that look at these things. COSMICENERGY: not, not something to ignore. Definitely read up on it, but also not something to, I think, raise to maybe like board-level awareness at this point.

[00:05:04] Blake Sobczak: Got it. No, that makes a lot of sense. And you know, your mention of the interfaces and the, you know, configurations, almost, of some of these OT setups. I feel like, compared to the IT world, where, yeah, you find a vulnerability in an iPhone and everybody's got a patch update, and it's just a huge pain, and it affects millions, if not billions, of people potentially, some of these blockbuster vulnerabilities and whatnot.

In the OT world, it sounds like maybe things are a little different, that you're talking more about specific pieces of equipment, specific configurations. How does that contrast, and why is that important to think of when we're looking at, you know, thinking from the adversary's perspective of targeting these systems?

[00:05:40] Danielle Jablanski: It's really important because, you know, in the solution space, you hear this anecdote that visibility is king, right? It's, we need more visibility. And that's really true, because, I love your iPhone reference, today you can tell me the penetration of iPhones. You can tell me how many iPhones are sold. You can tell me the updates that have been produced, the software versions that we're on.

That kind of penetration, or global inventory, of OEM vendor systems used in OT and ICS networks and environments doesn't exist. There's chain of custody, there's integrators; different people buy things, different people update things and work on them, uh, maintain them, and those configurations matter.

And so that lack of visibility, it's not just within one single network, in terms of understanding the communications, the assets that you own and operate. It's broader. It's, it's global. We have no direct understanding of how many pieces of each vendor's equipment, with known and unknown vulnerabilities, exist in the world today, what versions they're on, how they're operated, what kind of other security controls are built around them, if any. You know, convoluted is really the best word for it.

It really grows kind of exponentially worrisome as you start to consider all of these aspects.

[00:06:45] Blake Sobczak: Is there a way to get a handle on the scope of installed OT equipment, for instance? I mean, I feel like that would necessitate the cooperation of the actual end critical infrastructure user to some extent, right?

Since so many of these systems are nominally isolated from the internet.

[00:06:58] Danielle Jablanski: Yeah, there are a couple ways you could do it. You could work with the OEMs, and I know there's several programs and vulnerability working groups and things like that where the OEMs are doing their best to mention that. But then again, there's this chain of custody thing, right?

So who they sell to, where that goes, is not really always up to the original vendor. So there's this provenance issue that exists. The ability to get a handle on it will differ by country. So in the US, you know, we could do some type of census activity, state by state or sector by sector, that might be able to get a good headcount on things.

And then, you know, in X amount of years, we could maybe map known vulnerabilities to that penetration. Or you can go the monitoring way, right? So that's the space that I sit in, and that's a specific industry within cybersecurity now, where we'll have more understanding of more assets the more monitoring is deployed across those sectors in OT networks.

Right? So that increases visibility, and we can do more preventative, you know, security in that sense.

[00:07:51] Blake Sobczak: So, I'm curious, in the discovery space, do these OT, you know, these operational technology companies with all this equipment... I hear so often in IT, it's like, you don't even know some laptop is running somewhere, and you don't even know it's connected to your corporate network or whatever.

You, you just, you have trouble managing all the different assets that are connected, especially if you're a big organization. Does that translate to the OT world with all the, you know, I picture just gears and levers and, like, steampunk. You watch that, uh, that new sci-fi show, Silo? You know, that's kind of my mind's eye image of OT: it's like this big humming engine in the basement somewhere, or...

[00:08:24] Danielle Jablanski: The control room full of HMIs.

[00:08:26] Blake Sobczak: Right, right. The control room full of the HMIs, referring to the, uh, human machine interfaces. Do people have a handle on what's actually running in their environments?

[00:08:33] Danielle Jablanski: No. A lot of the times the answer is no. When you ask owners and operators if they've been impacted by certain events and incidents, if they're not doing that type of monitoring and visibility, they can't really tell you.

So it makes it really difficult to do threat hunting, right? If you don't have a solution that's purpose-built to understand industrial and proprietary protocols, then you might not know what's existing. And also, if you don't have any type of dynamic security, whether it's monitoring or another type of solution, within your OT environment, even your best kind of operational status could be so outdated that it would actually be unhelpful, right? Given an incident, a software supply chain type of, you know, vulnerability, to go and hunt for things. If you're not keeping logs and your actual inventory is an Excel spreadsheet from X amount of years ago, or it's a person who's moved on, and that chain of custody, like I said, change management, all of those kinds of controls were not thought of, then it's very likely that you have something that you don't know that you're operating that's connected to your network, and/or it might have some internet inbound or outbound traffic, or, um, maybe a vendor remote access point, which is something we see a lot in the OT world, that the owners and operators may not know existed before they do this kind of automated mapping and deep packet inspection of the communications protocols traveling throughout their networks. And so a lot of the times Nozomi will actually ask somebody, you know, how many assets do you think you have, and how much communications traffic do you think you have?

[00:09:54] Blake Sobczak: I like even just the framing.

[00:09:56] Danielle Jablanski: Yeah. They'll give us a number, and then we'll pull up the inventory we've pulled, and they say, oh, you know, I didn't know about these things. And we can identify, you know, did you also not know that there's X amount of clear-text passwords on these different assets, and things like that? And it becomes this journey to visibility, right? So that word actually does mean something different to everyone. We think of it as being very straightforward, but it could be, uh, understanding the vulnerabilities in your network. It also could just be understanding what talks to what and why. Yeah. 
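The kind of automated mapping Danielle describes can be sketched, very loosely, in a few lines: a passive monitor watches flow records and gradually builds up a picture of which assets exist and what talks to what. This is a toy illustration only, not how any particular product works; the flow-record format and the `build_inventory` helper are invented for the example.

```python
from collections import defaultdict

def build_inventory(flows):
    """Aggregate passively observed flow records into a per-asset view.

    Each flow is a (src_ip, dst_ip, protocol) tuple, as might be extracted
    from deep packet inspection of traffic on a mirrored network tap.
    """
    inventory = defaultdict(lambda: {"talks_to": set(), "protocols": set()})
    for src, dst, proto in flows:
        inventory[src]["talks_to"].add(dst)
        inventory[src]["protocols"].add(proto)
        # The destination is also an asset we now know exists.
        inventory[dst]["protocols"].add(proto)
    return dict(inventory)

# Example: three observed flows reveal four assets, including one the
# operator may not have known about, plus unexpected IT-to-OT traffic.
flows = [
    ("10.0.0.5", "10.0.0.9", "modbus"),
    ("10.0.0.5", "10.0.0.12", "modbus"),
    ("192.168.1.3", "10.0.0.9", "https"),  # unexpected IT-side talker
]
inv = build_inventory(flows)
print(len(inv))                           # number of assets discovered
print(sorted(inv["10.0.0.5"]["talks_to"]))
```

The point of the sketch is the asymmetry Danielle highlights: the monitor's inventory comes from what actually crosses the wire, so it routinely surfaces more assets than the owner's spreadsheet lists.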

[00:10:20] Blake Sobczak: Just don't forget the smart coffee maker in your control room.

Right. That's speaking out somewhere on the public-facing internet.

[00:10:25] Danielle Jablanski: And there's some historic example where, in an OT network, somebody couldn't figure out some bandwidth issue, and it ended up being somebody who was running crypto mining from an asset in that environment. Oh, no. Yeah, so it's not a vulnerability. There's no threat actor in there, but, you know...

[00:10:39] Blake Sobczak: You know, we couldn't talk about operational technology for too long without mentioning Stuxnet. So here's the thing: it's been well over a decade since the Stuxnet worm, whatever you want to call it, infected Iranian nuclear centrifuges and eventually made waves in the press as this first big landmark example of malware causing physical impacts to the world and to industrial control systems.

And so I'm curious from your perspective, how has the world of operational technology and you know, industrial control system cybersecurity evolved since then? Where have we made progress? And I guess where do we still have to go? 

[00:11:13] Danielle Jablanski: I don't know if you know this, but I actually started my career as a policy analyst looking at nuclear weapons command and control doctrine and use, and some of the modernization program in the US, and then vulnerabilities, like I said, to command and control, and the way that the nuclear management looks at monitoring, analysis, and supply chain issues and things like that.

So, uh, Stuxnet was a big part of my career growth trajectory, et cetera. And what I think is interesting is, even though when it happened, folks were not necessarily surprised about the capabilities from a nation-state level, they talked about it as like this dawn of a new era of, you know, if everything is critical, nothing is off limits, and if we have no red lines in cyber, then where does this kind of escalation theory end?

There really hasn't been a good consensus to that problem. When I moved over to the private sector space, though, a thing I've kind of said repeatedly in a couple of talks, and especially when I speak with asset owners, is that Stuxnet, in my opinion, created this kind of cognitive dissonance. Actually, it didn't really help as a scare tactic across critical infrastructure, because what it did was put a lot of distance between me and my operation as, say, a water treatment facility in a, in a state in the US versus the Iranian nuclear program. Right. And so I think that that targeting-versus-likelihood conversation actually wasn't very present at the beginning when Stuxnet was, um, was uncovered. I think it really led to a lot of academic and policy dialogue that was useful.

Um, when I was in academia and, and looking strictly at, at policy and doctrine, there was this separate disconnect between the academic side and some of the global international cybersecurity legislation, right, at the UN level, or the GGEs, or even the Tallinn Manual, and what those escalation dynamics looked like and what was off limits.

And I think that there was this lack of understanding of the industry. So, back to visibility, penetration, networks, logs, monitoring, et cetera. There's a bunch of security controls and events that happen day to day that lead to these big incidents. And I think focusing on something so prolific as that very dedicated, well-funded, well-resourced, well-thought-out long game that Stuxnet was didn't really help to articulate to the owners and operators of all of the rest of critical infrastructure exactly how they could be victims of not-similar incidents, right?

How they could be victims of remote access or, you know, other kinds of physical security concerns that weren't as elaborate. And I think we've closed that gap. So I wanna spin this into a positive, right? I think it's absolutely critical for the rest of industry to understand the nuances of cybersecurity, the nuances of what happens in these networks that is not as high caliber but is still just as important to understand and build your risk frameworks around.

And then I think that that's the net positive of Stuxnet: it started this national conversation and this capabilities understanding from a nation-state level. And slowly but surely, we've worked to now level the playing field and say, these are all of the other concerns that you need to take into account when you're prioritizing what might happen in your industry.

[00:14:17] Blake Sobczak: That makes a lot of sense. It's easy to look at, you know, thinking back, look at the Stuxnet example and be like, wow, this took, like, multiple intelligence agencies, years of careful espionage and planning to execute and whatnot. Eh, it can't happen to me. It's not likely to happen to me. What do I really need to worry about?

And we beat that drum about generalist IT vulnerabilities all the time. It doesn't take the flashy zero day to take down your network; if you're not guarding against the basics, you could land in hot water. You know, it's just, it's possible.

[00:14:44] Danielle Jablanski: Or if your human operators don't understand some configuration changes or misconfiguration errors, they could actually cause a worse day.

It was really illuminating to me when I was asked to review a capstone project for a university course on industrial cybersecurity, and they were doing, uh, an assessment for a hypothetical, I think it was a plastics manufacturer in aerospace, so they were like a supply chain manufacturing environment.

And, um, it was me and a couple of other quote-unquote experts. I say that because, who's really an expert, you know? And the students all were saying, I think that the risk to this manufacturing setting is a physical USB attack. And when you pressed them on why, it was the Stuxnet effect. It wasn't necessarily the most likely scenario for this environment.

And if you look statistically, there have only been two physical-access industrial cybersecurity examples in the world. So it's like, is it high profile? Is it probabilistic? Can you do likelihood based off of how few ICS-specific attacks there have been? And it really changes the conversation around risk management and building security controls around these legacy systems, in a way that, you know, I think physical access is important.

Especially from an insider threat perspective, but it's not the only concern.

[00:15:56] Blake Sobczak: So it sounds like, from what I'm hearing, you're telling me that most of those USB sticks dropped off in parking lots are safe. I'm quoting you that way and saying that they don't turn out to house secret ICS malware.

[00:16:05] Danielle Jablanski: Unless it's in a, well, it doesn't not happen. But that was actually in the Mr. Robot, uh, episode, the prison episode, that I used as the basis for my Atlantic Council research. Oh, that's awesome. 

[00:16:15] Blake Sobczak: And actually, speaking of your Atlantic Council research, I wanted to highlight a really interesting recent paper that you contributed to, highlighting the importance of distinguishing systemically important critical infrastructure from just critical infrastructure.

And actually, I'll stop there, because I'm sure you can describe this a lot better than I could, uh, in framing this question. But I think it speaks to: how can organizations and government go about prioritizing which critical infrastructure or assets to protect, or that matter?

[00:16:42] Danielle Jablanski: I'm happy that you've asked about that.

There's so many ways to go about this. The one thing I really wanted people to understand from that paper is that critical infrastructure is not monolithic. A lot of the times, when we get into these conversations about scenario planning, it starts to break down between what I've called in the paper the assets-versus-functions debate. But for systemically critical infrastructure, there's this priority between assessing threats to systems, which we talk about in terms of vulnerabilities and potential exploits of equipment and devices, versus threats from threat actors, the specific capabilities and demonstrated TTPs that we've seen in the wild that could impact any given environment. But those two things aren't congruent, right?

So sometimes you have these threat actor capabilities, but they've never demonstrated that they can exploit the technology that you own and operate, and vice versa: there are all of these known vulnerabilities to the technologies you own and operate, but they might be very difficult to access from an enterprise standpoint or from remote access. And so that overlap is really difficult to assess. What this is really creating, though, if, if I could kind of tangent from that research, because that paper's out there to read, is that I think we're seeing a couple of new gaps that could potentially be created in this field. So there's a ton of new focus on industrial security and on critical infrastructure cybersecurity and cyber-physical attacks.

But there's a couple of things that worry me, and, and I'll just mention two of those. So a couple of weeks ago, there was a congressional testimony on closing the gap between, you know, coal-powered electric generation and renewable energy, and how, even if we wanna get to a specific point in the future, um, if we shut down too much generation today, we're gonna create this gap where we're not gonna have enough, uh, generation.

And I think at the same time, you're seeing this cybersecurity gap of focusing on legacy and not a ton of greenfield, um, investment. And so that conversation around investing in security turns to a security-by-design investment. And that, too, is X amount of years away. So there's another gap there. If we focus wholly on security by design, we may leave out these legacy systems that will still be in operation for 30 more years if they're not gonna rip and replace them, which most asset owners can't afford to do.

I love all of the new energy and focus and attention in this space. I wouldn't be here today if that weren't the case, but I think we need to be really careful about that, and also careful about a lot of information sharing and the way that we go to tackle these issues. If Stuxnet is one reference, people also love to use the cyber 9/11 reference.

I also hate that reference, just because people focus on the impact, which is statistically too significant for analysis, right? It's actually such an outlier that, if you're looking at terrorism databases, you have to take it out in terms of lives lost. But what's really the most interesting part of that is the box cutter, and people forget about that component.

Right. The box cutter is what was the leveraging point for that attack, and what does that look like in a cybersecurity scenario? Not just the impact. But my whole point about 9/11 is that we're now creating multiple different silos of information. And it reminds me of the fact that things that were reported became sole sources of truth in that commission and that investigation of the information we had before 9/11 that wasn't put together.

Yes. Right. And you're starting to see that now, with so many bodies of information, work, collaboration, industry groups, et cetera, wanting to focus on this, that you might be creating kind of too many sole sources of truth without a lot of collection and corroboration of, you know, are we sharing indicators? Are we sharing information? Are we doing prevention? Are we addressing security by design in a way that actually meets the inventory question we were just talking about, in terms of where this technology is actually deployed? Or are we just focusing on vendors that have the highest revenue? How do you cut across these multiple problem sets without creating new gaps?

Trying to solve a horizon problem and get ahead of things, while also trying to backtrack with so many legacy systems and brownfield versus greenfield projects, right? 

[00:20:35] Blake Sobczak: That notion of information proliferation in the ICS space, and, you know, silos, that's such an interesting contrast to what I feel like historically has been the sense that OT cybersecurity, you know, the operational world, is this really niche space that has very few experts or practitioners. And, and so I'm curious if you've seen that evolve, and what would you say to somebody who's looking at this from the outside and maybe, you know, worrying about the acronym soup? And we've even tossed around a few here, you know, SCADA and HMIs and whatnot.

How would you characterize the OT cybersecurity community today?

[00:21:09] Danielle Jablanski: So the community itself is very niche, but it doesn't have to be nebulous. The basics really have remained the same for a long time, and the basics are foundational for understanding all of the different components: the acronyms that we've mentioned, the types of things that are characterized as OT or ICS. Learning the basics really is the, the primary kind of starting point. In terms of the broader market or field, there's a couple different trends we just touched on that I'll say really impact our ability to be successful in this, in this area.

To be successful in this, in this area. So a couple of trends are, there's been this broad realization that operations that tolerate no physical downtime are lucrative targets. That's typically from a ransomware perspective, which we think of as an IT incident. But there are all sorts of ways that the IT incident will turn into an OT incident, right?

We've understood that for, for many years now. Colonial Pipeline, sorry. Yeah, Colonial Pipeline. It did and didn't, right? So it didn't actually touch the OT networks, but it did cause a disruption to a physical, you know, ability, not to produce, but to provide that service. So there's that, and there's information sharing, like we were just talking about.

There's known detections and, and fully baked intelligence that we can build detections on when we use tools to go and hunt for, uh, known signatures, right? Snort rules or YARA rules and things like that. There's really no type of early warning indicator for that. And then there are these different silos for that information, which become either commercial or private sector or government agency specific.

And it's creating these sources of intelligence without the ability for corroboration today. And then the other issue is there's all these hypothetical scenarios and possibilities that don't really have shared evidence or indicators from a cyber perspective. So there's, like, the ability to blow up a plant, but then you have to actually work with that plant to understand the steps that a cyber scenario would create to actually cause that effect.

So there's a lot of hypothetical scenarios that don't have the cause and effect mapped out from a cyber perspective. And the issue is these four categories kind of detract from the sole emphasis of this entire field, which is finding single points of dependence and failure across equipment, cybersecurity, and business and operations for critical infrastructure asset owners.
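The Snort and YARA rules Danielle mentions are, at heart, signature matchers: detection fires only when bytes already known from past incidents show up in traffic or files. A stripped-down sketch of that idea follows; the rule names and byte patterns are invented for illustration and are not real indicators of compromise.

```python
# Toy signature matcher in the spirit of Snort/YARA rules: each "rule"
# is just a name plus a byte pattern known from past incidents.
SIGNATURES = {
    "demo-iec104-start": b"\x68\x0e",       # hypothetical pattern, not a real IOC
    "demo-suspicious-cmd": b"DROP TABLE",   # hypothetical pattern, not a real IOC
}

def match_signatures(payload: bytes):
    """Return the names of all signatures found in a payload.

    Purely reactive: a pattern must already be known before it can be
    detected, which is the lack-of-early-warning problem described above.
    """
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

hits = match_signatures(b"\x68\x0e\x00\x00 DROP TABLE users;")
print(hits)
```

The limitation is visible in the design: anything not in `SIGNATURES` passes silently, which is why signature matching alone cannot provide proactive or preventative security in OT.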

Right. So, like I said, it's niche, but it's not nebulous. You have to understand the basics. I think people that wanna get into this field need to be solution-oriented, where there are a lot of people that just like to be problem-oriented. They wanna complain about CISA, they wanna complain about the lack of investment.

They wanna complain about people not knowing what they're doing or operating. They want to, you know, just complain. And I think that's a problem-oriented way to look at the problem set. And I think we need to be solution-oriented. And I think you have to be motivated by the mission, not by the prestige, or by being an industry expert, or by being on Blake's podcast, even though you're amazing. You really have to be focused on: how do I fly out to an energy provider in the middle of the country and talk to them about the difference between an intrusion detection system in IT and an intrusion detection system in OT, or the differences in the way that threat actors might target things and why, and what their objectives might be?

So we definitely need more people. I don't think it's an option to have less, and we need more analysts that wanna work at asset owners. We need more training and academic programs that wanna focus on industrial. We need some subsidization, to be quite honest, in several sectors, and we need more government cooperation, more champions, and more people that really take these things seriously and try to create progress, create change.

[00:24:18] Blake Sobczak: Stepping back for a second, you mentioned single points of failure and the importance of sort of honing in on those. I'm reminded of a friend of the podcast, Andy Bochman, and some of the work he's done for Idaho National Lab around this concept of consequence-driven, cyber-informed engineering, he calls it. And a sort of shorthand way to describe that is, you know, finding your crown jewels.

Considering the things that would be truly devastating, maybe lead to that complete plant shutdown or explosion or whatever dramatic impact, and consider just disconnecting them altogether. I'd be curious to hear your thoughts on that. And then also, bringing back your, your background, which you mentioned earlier, at the Stanley Center for Peace and Security, kind of evaluating nuclear weapons policy and use.

I feel like that's something that could apply to that sort of sector, but I'll stop there, and I'm curious to hear what you make of that. I think it's, uh, called CCE for short, to add another acronym.

[00:25:05] Danielle Jablanski: Yeah, cyber-informed engineering. Uh, very great work. So I've never gotten to publicly thank Andy. I've privately thanked him.

Andy's one of the reasons I'm part of this community. He's one of my early mentors, one of the first interviews that I had as an analyst looking at intrusion detection systems for industrial control systems, and somebody who proposed my name for my Atlantic Council fellowship. So I'm super grateful to Andy.

And when he talks about cyber-informed engineering, I thought of, like you mentioned, my nuclear background. When I go to asset owners or help people in IT try to understand the OT impacts, I talk about effects-based rather than means-based approaches to scenario planning and security management.

So it's a similar aspect, but effects-based over means-based is actually borrowed from military theory and thinking about the law of proportionality and retaliation for cyber incidents. So back to kind of the assets-versus-functions debate: if I poison a water system, I shouldn't focus on the means, the malware that was used to access that and actually manipulate the system.

I should focus on the population impacted by that. So that's an effects-based rather than a means-based scenario, right? Or when I'm thinking about proportionality or retaliation, it doesn't matter how it happened, it matters what happened. And so when you think about scenario planning for doing cyber-informed engineering, or doing security by design, or even doing scenario planning for a tabletop exercise for an asset owner, we would love to see people start with effects-based rather than means-based, right?

You can't just think about, like I said earlier, the known exploitation of some systems, because it might not be as relevant for your worst-case scenario as you think it is, and you actually might spend way too much money, you know, trying to patch or build controls around those vulnerabilities that might not be the most impactful to your organization.
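The contrast Danielle draws can be sketched in a few lines of code. This is a toy illustration with entirely invented scenario names and numbers, not any vendor's actual methodology: a means-based view ranks by known vulnerabilities, while an effects-based view ranks by consequence, and the two orderings can disagree completely.

```python
# Toy sketch (all names and numbers invented) of effects-based vs.
# means-based prioritization: rank scenarios by consequence, not by
# how many known vulnerabilities (the "means") are involved.
scenarios = [
    {"name": "historian server ransomware", "known_cves": 12, "impact": 3},
    {"name": "safety system manipulation",  "known_cves": 1,  "impact": 10},
    {"name": "HMI defacement",              "known_cves": 7,  "impact": 2},
]

# Means-based: most known CVEs first.
means_based = sorted(scenarios, key=lambda s: s["known_cves"], reverse=True)
# Effects-based: worst consequence for the population served first.
effects_based = sorted(scenarios, key=lambda s: s["impact"], reverse=True)

print(means_based[0]["name"])    # historian server ransomware
print(effects_based[0]["name"])  # safety system manipulation
```

The scenario with only one known CVE tops the effects-based list, which is exactly the case where patching by CVE count would misdirect the budget.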

And at the same time, you can't just focus on thwarting a specific ransomware family from a specific threat actor if there's the ability for somebody else, or for a different set of tactics, to actually impact different assets that you haven't looked at. So there's this difficulty in understanding, again, that threat-to-systems versus threat-from-specific-actors distinction, and how to marry good scenario planning.

And I think cyber informed engineering is one way to do it. Effects based scenario planning is another way to do it, but it really has to be dynamic, like we talked about in terms of visibility and log monitoring and knowing what's what and what vulnerabilities really have significant impact in your environment.

Given that Lego-block building of hardware you maintain, and that Rubik's cube of potential permutations of complex configurations, right? So it's not easy, but you do have to think of what's the worst-case scenario. And then if you take that a step further, you have to focus on that complex interdependence piece that we just talked about, which, with this kind of, like you said, expansion of information in the industrial space, is in my opinion getting a little bit harder to understand.

But at the end of the day, if you're a security leader, you have to focus on reducing the severity of any security event, not just preventing the worst-case incident. So you really do have to kind of start from that effects-based, you know, worst-case scenario, but you also have to exercise to failure. And that's the part that I also wrote in the Atlantic Council piece, which is, you know, for electric operations, they think of that as islanding, but it's also manual operations.

It's pen and paper. It's the ability to find, you know, patient records on paper in a hospital scenario, or having to divert patients to another hospital, right? So exercising to failure is not just understanding how to prevent and build scaffolding around certain vulnerabilities to those systems, but also the backup plans for how you run your organization and prevent worst-case scenarios.

You know, hour one, hour three, hour seven of that disruption, or of not having access to something critical. And that kind of failure mode and exercising to failure is something else we can learn from cyber-informed engineering as well.

[00:28:42] Blake Sobczak: Yeah, that's a really good point. And it's come up on the podcast before as well.

For some of the more notable ICS-focused cybersecurity events, including the cyberattacks on Ukraine's power grid, there was some restoration and recovery available because of some of that planning and manual backup, which may be lacking in some industries now with everything getting digitized. And speaking of the digital transformation, I am curious.

Say you're doing really well as an organization, right? You've got a good handle on where your operational technology assets are, you're getting all this data streaming through, you've got it properly segmented off. How do you really analyze security data to reduce risk, right? There are a lot of tools to gather it, but how do you really do something with that?

[00:29:27] Danielle Jablanski: I love that question. I was on the Industrial Defender podcast last week and I told Erin on that podcast that everyone is obsessed with efficiency, given data transformation. They wanna do more with less when it comes to cybersecurity. My question is, why wouldn't you wanna do more with more? You're already gathering so much data for these solutions, and if you are just buying products off a shelf to use them kind of as is or quote unquote out of the box, you're probably not utilizing them to their full potential.

And so I think that's something that security leaders need to regroup on and understand, Hey, do we have enough personnel in terms of analysts that can actually look at this stuff? If not, that's a challenge, right? That's a talent gap and you need more people, and if not, you're probably outsourcing. But if you do have these kind of multiple different software solutions that you're using, are you maximizing the inputs and outputs of those?

Are you doing predictive analytics? Are these investments, you know, really categorizing the threat and building understanding in a way that's useful to the operator? Or are they just shiny dashboards? When we look at the most sophisticated end users in our customer set, they're looking for anomaly detection that informs situational awareness in real time.

That's not the same as somebody starting at the very basic journey that we were just talking about, of effects-based rather than means-based assessments. Looking beyond the CTI and saying, how do I take the ICS Advisory Project dashboard and associate it with the technology and vendor systems I have?

Right? That's square one. And using the available information out there of vulnerabilities and threats to assess your network is not easy, right? That takes a lot of data awareness and context. So that's really the space we're in right now, that context. But people that are ready for the next step really want intuition, and this is where the machine learning and the algorithms that do kind of that continuous baselining and learning of environments can do both. They can say, hey, this vulnerability exists because we know how to look for it, and that's called recognition. And also, based on patterns and recognition in your network: this anomaly is out of pattern, it's out of the norm, look into that. And that's intuition. That takes context.
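The continuous-baselining idea can be illustrated with a minimal sketch. This is a toy statistical detector, not Nozomi's actual algorithm: it keeps a sliding window of recent sensor readings, learns a mean and standard deviation as the baseline, and flags a new value as "out of pattern" when it deviates too far from that learned norm.

```python
from collections import deque

class BaselineDetector:
    """Toy continuous-baselining anomaly detector (illustrative only).
    Learns the recent norm from a sliding window and flags values
    that deviate too far from it."""

    def __init__(self, window=100, threshold=3.0):
        self.window = deque(maxlen=window)  # recent observations
        self.threshold = threshold          # tolerated std deviations from the mean

    def observe(self, value):
        """Record `value`; return True if it is anomalous vs. the baseline."""
        anomalous = False
        if len(self.window) >= 10:  # need some history before judging
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = var ** 0.5
            anomalous = std > 0 and abs(value - mean) > self.threshold * std
        self.window.append(value)
        return anomalous

detector = BaselineDetector()
# A steady sensor reading around 50-52 establishes the baseline...
flags = [detector.observe(50 + (i % 3)) for i in range(60)]
# ...so a sudden jump stands out as "out of pattern".
print(detector.observe(90))  # True: far outside the learned norm
```

Real industrial detectors layer protocol parsing and asset context on top of this kind of statistical core, which is where the "intuition takes context" point comes in.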

And so that's really where this is going, even when there are fewer companies that do that and fewer experts that do that. But that's really where we're trending. And the real issue there is that everyone's gonna come out and say that they do this now, right? AI and machine learning and things like that for industrial.

They have to have that purpose-built, like I said, industrial protocol and proprietary understanding, because again, it's niche but not nebulous. There are certain people and companies that have spent 10-plus years actively building their database for this problem set, and so I would really implore all of the asset owners and people out there to ask the hard questions. Whether it's a cloud project that they're undertaking, and understanding the relationship between who owns the risk and who owns the security in your cloud deployment, right, that shared responsibility model. Or it's a vendor saying, we do this increased situational awareness based on machine learning. You know, ask for the receipts, ask to look under the hood, right?

What does that actually mean? Is it all real time? Is it parsed? What does that investment look like? Because it's not necessarily clear up front. And so we need to be asking more of our vendors and working more in partnership with them. I love telling people when I go for customer visits from Nozomi, like, we work for you, right?

There's this kind of lore with cybersecurity companies that we're all just, like, fancy people that sit on panels at RSA, and that's not true. Like, I love to go to the sticks and hang out with people, boots on the ground, and tell them, I work for you, right? Like, I don't work for a software company in San Francisco, and that's a big difference in this space.

[00:32:58] Blake Sobczak: I always get so excited when I get the chance to meet some members of our SRT, our red team of hackers, you know, cuz they're the ones actually, you know, looking for these vulnerabilities and finding things, really in the weeds and, you know, proverbially out there. Cuz of course we don't, you know, do too much of the onsite work, but it is a really eye-opening thing.

Now, I did wanna ask: we've talked a lot about critical infrastructure on this planet. What can you tell me about the final frontier of OT cybersecurity? And yes, I mean space. I saw that there were some headlines around a satellite being launched into orbit for DEF CON hackers to actually take a crack at, and, you know, actually do, like, a live hacking setup from Earth to space.

Uh, I guess, what do people need to know about this? Is this like, what's happening there? 

[00:33:38] Danielle Jablanski: Yeah, satellites was something I also looked at in my nuclear background. Even though the nuclear command and control satellites are not in low earth orbit, they're not in ieo where all the, not all of, but most of the satellites for, you know, geospatial intelligence and things like that are, are being, and for satellite internet are being deployed.

A couple of kind of generic attack vectors for satellites include ground stations and ground communication, man-in-the-middle attacks, data poisoning, and then command-and-control aspects, like I said, in different orbits. So command-and-control takeover is not as easy as you'd think, um, but it's not impossible either.

[00:34:11] Blake Sobczak: So that would be like actually maneuvering the satellite? Okay. Yeah, that sounds hard.

[00:34:16] Danielle Jablanski: Yes. And it's not as applicable for the nuclear command and control, which is in a different orbit, like I mentioned. But there's also this issue of debris and collision, so that's another thing to consider.

There's tons of space debris, and there are different analytics tools used by satellite operators to predict their potential for impact or collision with another system. So a few years ago, I wrote a paper in Via Satellite. As of 2019, when I was doing some research on this, the communications to avoid collisions between commercial assets, like between SpaceX and the European Space Agency, were still taking place via email.

So somebody would email somebody else and say, my analytics tool says that there's a 62% chance, I'm making these numbers up, a 62% chance that I might collide with your system. And SpaceX has a different system, and if their threshold for communication is a 90% chance, and my threshold for communication is a 60% chance, and you miss my email.

There's a lot of chances for collision that are not automated. It's not, you know, the worst-case issue out there, but it's something to take into consideration, 'cause we like to focus on these sexy, Stuxnet-type, world-ending scenarios. And at the end of the day, there are still these kind of low-hanging-fruit, communications-via-email challenges that we need to get over as well.
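The threshold-mismatch failure Danielle describes reduces to a few lines of logic. This sketch uses her invented 62%/60%/90% numbers and hypothetical operator names; it just shows how two reasonable per-operator notification policies, with no shared standard, leave a gap where neither side reliably warns the other.

```python
def should_email(collision_prob: float, notify_threshold: float) -> bool:
    """Each operator emails the other only when its own tool's estimated
    collision probability crosses its own notification threshold."""
    return collision_prob >= notify_threshold

# Hypothetical operators with mismatched thresholds, echoing the
# 60% / 90% example from the conversation.
OPERATOR_A_THRESHOLD = 0.60
OPERATOR_B_THRESHOLD = 0.90

# Both tools estimate a 62% chance the same two satellites collide.
prob = 0.62
a_emails = should_email(prob, OPERATOR_A_THRESHOLD)  # A sends a warning
b_emails = should_email(prob, OPERATOR_B_THRESHOLD)  # B stays silent

# If A's single email is missed, there is no automated backstop:
# the event falls through the gap between the two policies.
alert_reaches_both = a_emails and b_emails
print(alert_reaches_both)  # False
```

The fix is less about better math than about an agreed, automated exchange so a missed email can't be the single point of failure.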

Also, in May 2021, in that article, I mentioned that a DHS advisor had actively confirmed what we'd all kind of speculated for years and years and still continue to: that all of the 50 national critical functions, according to US national security, depend one way or another on space-based assets.

One of my favorite stories from this space is from when I just dipped my toe in the satellite cybersecurity space. I'm not an expert by any means, but I was talking to a lot of people and doing some research on this front. And I talked to a consultant who was asked to come do a risk assessment for a satellite company.

And they got to this company and they said, great, where's your system? And the company pointed into the sky. They had already put the satellite in orbit and then asked the cybersecurity company to come in after the fact and do an assessment. And so this actually reveals the same kind of compliance and checkbox issues that you see reverberating around NERC CIP.

Like, people love to point fingers at folks that just say, yeah, I've done the thing. I bought the monitoring tool, it sits in a box over there, but I pay for the subscription fee every year, which means I'm checking compliance and I've done my job. That exists in other sectors too.

[00:36:32] Blake Sobczak: That's NERC CIP, referring to the critical infrastructure protection standards set by the North American Electric Reliability Corporation. Basically the grid overseers, the grid rules that everybody's gotta follow.

[00:36:40] Danielle Jablanski: Exactly. So that's one of my favorite anecdotes for the satellite community. But, uh, back to NERC CIP: you know, people harp on this compliance-does-not-equal-security aspect. And I think the same is true for any kind of, I would say satellites aren't new, but evolving, frameworks for security.

So for NERC CIP, one of my other favorite stories is, I knew of somebody doing a NERC CIP audit. And they went to an asset owner and said, I see that you bought this tool, and you validated that you've purchased this tool. Where is it? And they literally kicked a box and said, oh, it's right there. And they had never deployed it or implemented it, but just paying for that subscription to maintain the product that they bought, actually, technically, was verification of that standard. So, this is a couple of years ago, but, um.

[00:37:24] Blake Sobczak: Oh no. So they checked the box. They checked the box with, like, just a box that was not even unpacked.

[00:37:29] Danielle Jablanski: I love that. They were compliant. They wouldn't be fined.

[00:37:32] Blake Sobczak: We're compliant! Open the box. In fact, we could still return it if, you know, the rules change.

[00:37:36] Danielle Jablanski: Oh no. And it was somebody else. It happened twice at the same place. It was, yeah, we bought it, it's right there. It'd be like buying a zombie product and saying, yep, we do monitoring, and it's a Guardian sensor that sits in a storage unit.

[00:37:46] Blake Sobczak: Oh no, it's like pointing your home Nest camera at a closet or something, or not even taking it out of the box.

Right. Just being like, yeah, we're protected against burglary. Yep. Well, thanks so much for joining us here and sharing some of your insights. I did wanna ask a question that we pose to all of our guests on the show, which is the fun fact question, if you will: what's something that we wouldn't know about Danielle, looking at her LinkedIn profile?

[00:38:10] Danielle Jablanski: Yeah, so you wouldn't know. I graduated undergrad when I was 20 years old. That's not the fun fact. The fun fact is I had a few months between undergrad and grad school because I graduated early. So my first job technically out of college was actually in social work. I was a youth care specialist working with foster kids at a facility in the state of Missouri between undergrad and going off to grad school at the University of Denver, where I actually studied human rights, civil conflict, and genocide.

So the evolution into cybersecurity is really not apparent on my LinkedIn, but I've had a lot of different stints and experiences. And then I'll tell you a fun one, which you might wanna use in the future for two truths and a lie. One of my truths is always, I used to sing on a cruise ship in high school, and that's the one that always catches people off guard.

There are a lot of very talented people, you find out, after two truths and a lie. There are a lot of, uh, musicians over here. It's kind of ironic.

[00:39:00] Blake Sobczak: Uh, noted for the next big karaoke bash and your social work sounds impressive as well. So thank you for those and thanks again for joining us. Really insightful conversation.

Looking forward to seeing where those hacked satellites turn up at DEF CON and beyond. If you liked what you heard today, I hope you'll give us a five-star rating and review. It's a big help. And please share this episode if you know anyone who could appreciate a little infosec wisdom on their morning commute. We have a whole catalog of episodes well worth a listen,

so you may wanna check out past interviews as well. Finally, if you know someone who might be a good fit to appear on the podcast, or if you have any comments or feedback, drop us a line at wereinpodcast@synack.com. That's S-Y-N-A-C-K dot com. Until next time!

[00:39:48] Sponsor: We're In! is brought to you by Synack. If you're looking for on-demand, continuous access to the world's most skilled and trusted security researchers, you can learn more at synack.com. Synack recently launched its Empower Partner program so that partner organizations can more easily offer the Synack pen testing platform to their own customers. This approach helps optimize Synack partners' technical competencies and allows them to better integrate Synack into their portfolios. It's a way that partners can win new business by adding continuous, best-in-class solutions to cybersecurity, cloud, and DevSecOps offerings. Synack partners with organizations around the world to make them safer, more resistant to cyberattacks, and more capable of finding and fixing dangerous vulnerabilities before attackers are able to exploit them. Learn more at synack.com. That's S-Y-N-A-C-K dot com.