WE'RE IN!

Sarah Armstrong-Smith on understanding the attacker mindset

Episode Summary

Sarah Armstrong-Smith, Chief Security Advisor at Microsoft and a cyber security author, discusses her role in improving cyber postures and staying ahead of threats. She explains how Microsoft uses machine learning in their threat intelligence and what's next with the onset of generative AI. She also highlights the importance of understanding the risks and consequences of AI technology, as well as the need for CISOs to embrace new technologies while ensuring accountability.

Episode Notes

Season 3 Episode 3 

Sarah Armstrong-Smith on understanding the attacker mindset

Sarah Armstrong-Smith, Chief Security Advisor at Microsoft and a cyber security author, discusses her role in improving cyber postures and staying ahead of threats. She explains how Microsoft uses machine learning in their threat intelligence and what's next with the onset of generative AI. She also highlights the importance of understanding the risks and consequences of AI technology, as well as the need for CISOs to embrace new technologies while ensuring accountability. 

In this WE’RE IN! episode, Sarah emphasizes the significance of diversity in the cybersecurity workforce and the need for organizations to foster a culture that encourages diverse perspectives. 

Listen to hear more about:

Understanding and addressing the unique cyber challenges of different sectors and countries 

Balancing the threat landscape with available resources

The human aspect of security and understanding the motivations of attackers

Links: 

Find Sarah on LinkedIn

Find Blake on LinkedIn

Episode Transcription

Hello, and welcome to We're In, a podcast that gets inside the brightest minds in cybersecurity. I'm your host, Blake Thompson Hoyer, and we have a very special guest today, Sarah Armstrong Smith.

[00:00:26] Chief Security Advisor at Microsoft and a cyber security author. Sarah routinely works with board members and CISOs seeking to improve their cyber postures and stay ahead of the next threat looming around the corner. Sarah has also mentored chief technology officers in Ukraine and the run up to Russia's invasion.

[00:00:43] There's a lot to dive into with Sarah, but first, let's hear a quick word from our sponsor, Cynac. 

[00:00:49] Sarah Armstrong-Smith: Confusion in the marketplace about which is the best method of security testing is so real. Bug bounty, pen testing, pen testing as a service. What do they all mean? We break it down in our latest playbook, Navigating the Security Testing Landscape, to demonstrate the strategic value of third party security testing.

[00:01:09] We cherry pick the best elements across the security testing market to incorporate into a strategic, comprehensive pen testing solution, the SYNAC platform. Learn more at synac. com forward slash playbook. That's S Y N A C K. Dot com slash playbook. 

[00:01:32] Blake Thompson Heuer: Welcome, Sarah. Thanks so much for joining us on the program.

[00:01:35] Really great to 

[00:01:35] Sarah Armstrong-Smith: be here. Thank you. 

[00:01:36] Blake Thompson Heuer: So you're currently Chief Security Advisor for Microsoft. That is a heck of a title, if I do say so myself. Tell me a little bit about what that role entails and how it fits into what could reasonably be described as one of the largest security companies in the world.

[00:01:52] Sarah Armstrong-Smith: Yeah, so in essence, my role is to liaise with Microsoft's largest enterprise customers across Europe. So I work multi sector and multi country, tends to be CISO, CIO, kind of C suite, kind of security leadership level. But in essence, it's to decouple the conversational I'm talking about products per se and really getting into the heart of what their challenges are.

[00:02:20] So we think back over the last three years, we've had a global pandemic, we've had the Russian invasion of Ukraine, we've had the war with Israel and Hamas. And that's kind of just from a geopolitical standpoint. So when you add in all the other social issues, technology issues, myriad of cyber attacks, regulatory changes, all of those things combined, it's a really, as you say, getting to the heart of what their challenges are, and ultimately how Microsoft can help with all of those things combined.

[00:02:52] You just listed 

[00:02:53] Blake Thompson Heuer: a ton of potential, both geopolitical flashpoints and cyber challenges. In these conversations, I guess, how do you go about distilling what's most important? How do you prioritize amid this deluge of various threats you could even consider? 

[00:03:08] Sarah Armstrong-Smith: Well, it's really driven by the customers and a lot of that is dependent, I'd say, whether what country they're in, what sector they're in.

[00:03:15] So obviously sometimes I'm talking to critical infrastructure, so that could be energy, financial services, then we might flip into sort of retail, manufacturing, and I think the interesting thing is each one is quite unique in terms of their individual challenges. So you could argue all companies are the same, more or less.

[00:03:36] In terms of you've all got infrastructure, you've got people, you've got technology, you're all being bombarded with cyber attacks left, right and centre, but it's then really trying to understand through all of that what really is the priority and what's really keeping people up at night. And I think you then have to balance the threat with the reality of the world as well, which is people do not have an infinite pot of money to fix all of these different things and fix all these risks.

[00:04:03] So in essence, I think it's about not just understanding the risk here and now, but also how that might evolve over the next 6 months, 12 months, 18 months, you know, plus. Because what we don't want is for that customer to be in a perpetual cycle of transformation. So as soon as they've bought a product, it might take them a year, 12 months, no 18 months to roll out.

[00:04:28] And by that time, 18 months down the road. the challenge, the risk has moved on. And so you kind of think with the pace of change that we're talking about, the level of threat that we're talking about, it really is about that longevity and thinking way beyond the here and now, and kind of anticipating what that fright may continue to look like.

[00:04:47] Blake Thompson Heuer: So you mentioned you often work with, you know, C suite, potentially board members in your capacity as Chief Security Advisor.

[00:04:53] When it comes to board members in particular, this is something that's come up quite often here in the U. S. How do you get non technical business leaders to really wrap their heads around concepts of cyber risk and the importance of good security practices? I think first and 

[00:05:08] Sarah Armstrong-Smith: foremost it's making sure that we talk about cyber security from a business perspective and again I think historically cyber security has been thought about from an IT perspective so cyber security you've got CISO, CIO, CTOs and kind of the board may say oh I've delegated that to the IT part of the organization or to the kind of the CISO but ultimately when we think about this scale of some of the things that we're talking about.

[00:05:36] We've already mentioned about geopolitical threats, uh, huge scale cyber attacks on critical infrastructure. The impact of that is huge. So it goes way beyond an IT incident. And so the knock on effect, when we're thinking about what's the impact of not providing services, critical infrastructure, we've got a knock on effect to Consumers, our people, the media, they all kind of latch on to all of these things.

[00:06:07] So first and foremost, we want to think about cybersecurity in the realm of enterprise risk. So even if the board are not from an IT background, they're not from a security background, they really do understand. So we need to understand enterprise risk and obviously there are multiple risks from an enterprise perspective.

[00:06:26] So the second thing that we then have to do is make sure that when we're communicating about cyber security, so either that's from a what is the threat that we're dealing with, what are the vulnerabilities that we have, and ultimately kind of what's our plan and what are we asking the board for. So we're asking for help, we're asking for resources, we're asking to change the priority maybe on that kind of risk register.

[00:06:49] So we have to put it into a language. It's going to resonate with the board people. So we want to put it in plain English and make it as simple to understand as possible. So when we are communicating these risks and the threat landscape and all of these things, and because of the rapid pace of change that we're talking about, we also need to be, again, very cognizant about what is it you're expecting the board to do.

[00:07:18] As a result of this information, because sometimes you've got risk registered audit reports, test reports, you know, whatever the case may be and they've got that multiple times over from different areas of the business. So, you know, am I giving you stuff for information purposes? Am I asking you to make a decision?

[00:07:36] Am I asking you to accept the risk or transfer the risk? And so it's really important that whatever we're reporting upwards, A, that the board understand, and B, that they know what they're supposed to do with that information. Because ultimately, You know, if it does hit the fan and we talk from a Microsoft, we say you have to have an assumed compromised mindset.

[00:07:58] So for the best will in the world, technology, training, you know, whatever the case may be, you still have to assume that an adversary is in your network. They're trying to exfiltrate data. It may or may not go into the public domain and therefore that being the case, you then need to show that kind of audit.

[00:08:16] record, if you like. Um, particularly if you're a regulated business and you're dependent on the size of the incident, dare I say, you might get hauled up in front of a select committee or the government to answer for all of these things. So actually having that record of who said what and when, what decisions were made, and that might have been the right decision at the time because of the other risks that were going on, because of the different landscape, because of all of these things combined.

[00:08:43] Um, it might be that we decided to Prioritize this area over this one. We put this particular project or plan on the back burner because we have other critical things. So that audit trail and decision making trail is absolutely critical on the lead up to incident as well as during an incident 

[00:09:01] Blake Thompson Heuer: as well.

[00:09:01] I can definitely see how having that sort of documentation would really resonate with the board members who have to be worried about getting, uh, drawn up, as you mentioned, in front of parliament or in front of U. S. Congress and shaken by the collar a little bit in the wake of a breach and, you know, being able to point to, Hey, okay, we listened to the risk.

[00:09:18] We heated it. We treated it as part of the business as a core element. And did this and X, Y, Z, or X, Y, Z, I should say, and really accomplished that. Now, I imagine that also with some of those conversations, the steady drumbeat of absolutely impressive hacking headlines that fall regularly throughout the world somewhat help move things along.

[00:09:40] You know, here in the U. S., we've unfortunately seen some water utilities recently in the news suffering cyber attacks, you know, reportedly linked to Iranian, the Iranian government's Islamic Revolutionary Guard Corps. Dusting off your crisis response hat, what would you say to these small utilities who maybe don't have a board of directors who can point them, you know, to direct resources a certain way, don't even have those resources, who might find themselves in the crosshairs of these nation state hackers?

[00:10:04] Yeah, well, 

[00:10:05] Sarah Armstrong-Smith: as I said, because my background is in business continuity, disaster recovery, crisis management, and cyber, and all of those. So two years ago, I made the decision to actually write my first book, which is Effective Crisis Management. So it's looking back over all of that kind of the last 20 years, looking at all of the incidents I've been personally involved in, but also looking at some of those major, major incidents, kind of how did it happen?

[00:10:31] So you're kind of looking at 9 11, Deepwater Horizon, the Colonial Pipeline disaster, and a lot of these have very common things in play. And in hindsight, you can look back with kind of 20 20 vision, everything like, but actually there's always a pattern that I really wanted to reflect on. And sometimes when you have public inquiries and you have incidents of such big magnitude, as to be saying, sometimes You have select committees, you get called up.

[00:10:59] There's going to be post incident reviews, all of these things. But these are such valuable reports and so valuable to take into consideration. Normally what you have, therefore, is a kind of like being able to look backwards and kind of look at where there was a history here, a failed warning signed.

[00:11:19] failing to act on the risk, test reports, audit reports. So it really comes into play after what I was just saying a minute ago about having that audit record and being able to kind of look backwards. But a lot of that comes down to the culture of the organization. And really, again, have you got a good history that when people come to you with reds and big issues?

[00:11:44] And what are you going to do about it? Do you just put it on the risk register, hope that someone somewhere is going to look at it, or do you kind of put it under the carpet, hope no one notices, and then move on? But your culture of your organization has a massive bearing on what comes next. So either how you've planned for an incident, Or how you're going to deal with an incident.

[00:12:06] So from a crisis management perspective, your strategy has to be proactive and reactive. So from a proactive standpoint, it really is doing that due diligence that we were talking about, understanding the vulnerabilities, understanding how those vulnerabilities are going to. impact, the crown jewels, and then being able to articulate where we are today.

[00:12:29] The reactive part then is having that defined incident response and plan. And again, making sure that everybody understands their role and understanding as well how the attackers themselves are trying to manipulate the situation. So again, if we think about active ransomware attack. The idea is they want it to be in the public domain, they want to have the media attention, they want you to have as much pressure on you from consumers, regulators, again the government, it's all part of their game plan.

[00:13:03] Because the more pressure that they put onto you and they may even give you a timeline. If you don't respond within X timeframe, I'm going to release this. I'm going to do all these various different things. And so part of that is then making sure that even with all of those things going on, you're still thinking rationally and you're still able to make effective decisions, you kind of almost have to take yourself.

[00:13:27] out of the situation. So again, one of the things that we'll talk about is having completely separate, isolated communications, hands off keyboards, because of that assumed compromise mindset that we were talking about. You have to assume your attacker is watching you, and they're waiting and pre empting your next move.

[00:13:45] So if they're in your network and you're talking on Teams, or you're talking on email or whatever, and you're kind of, you're coming across very confident. Yeah, I have a backup. Absolutely not going to pay. Oh, really? You're not going to pay, let me just delete all those backups then. And so therefore it's kind of really important as we're sort of say, if you take yourself out of the situation and be able to think rationally about your decision, but on that decision as well, we also need to make sure that we understand that.

[00:14:13] Every decision has a ripple effect. And so once we've kind of started on this path, it might be very difficult to kind of come off that path. And so we're not just thinking about the decision that we have to make now. We need to kind of think what's the, what's the consequence of that decision. Again, kind of thinking one step, two step, three steps ahead.

[00:14:34] So that again, we understand by making this decision now at this minute, we understand the consequence of that and what's going to come next. And therefore we're prepared, maybe we're prepared for the fallout that's coming from whoever, you know, our stakeholders are or whatever else. But we're thinking that one step ahead all the time.

[00:14:53] And so it's kind of, as I sort of say, it's that balance between the reactive and the proactive, where you really do have to make sure that you've got an effective crisis management strategy. 

[00:15:04] Blake Thompson Heuer: That proactive level of planning, it sounds so important in the field of cybersecurity, but it's also, it has to be so enormously challenging, especially you mentioned ransomware attackers.

[00:15:14] These attackers are pulling out all the stops and just taking a no holds barred approach. I remember quite cheekily, and I won't even dignify naming the ransomware group, they turned around and reported a company that they had hacked. to the US Securities and Exchange Commission for failing to divulge the fact that they were hacked.

[00:15:31] And I couldn't believe it. I said, you've got to be kidding me. I mean, how do you anticipate something like that? And you have actually have a book on the way titled, Understand the Cyber Attacker Mindset. Please tell us because I just, I can't imagine it's so nefarious the way that these groups are behaving.

[00:15:46] Well, I've always been 

[00:15:47] Sarah Armstrong-Smith: interested in the human aspects of security. So when I was a kid, my dad used to be in the Royal Air Force, and we were based in Germany, so he was working in psychiatry. And when we were living in Germany, in sort of 1982, getting to show my age, the Falklands War. was underway, and this is the first time since the Second World War that the entirety of the British military had been in a kind of active war situation.

[00:16:17] And soldiers were coming back from the Falklands War, and I really saw the difference. So even though I was quite young, I really saw the difference between how people were treated. If they came back with physical injuries versus mental injuries, PTSD is a post traumatic stress disorder, was still relatively new at that point of time as well, and I think it's really similar to cybercrime and fraud because we don't see it.

[00:16:43] And when we're thinking about how we, the victims of cybercrime and fraud, we treat them very differently to if they'd been a victim of a violent crime or a robbery. And again, it's kind of that really kind of understanding and really, so one of the aspects I've looked at from the, from the book's perspective, explore some of the issues when it comes to victim blaming and scapegoating.

[00:17:07] And again, you know, we're thinking about what we've just been talking about from a crisis management perspective. It's really easy just to sit there and say, who are we going to blame today? So are we hanging the CISO out to dry, everything about SolarWinds, they blamed it on the intern, and actually sitting there putting blame out and having a scapegoat doesn't actually change.

[00:17:29] So I look at some of the aspects of that side of things about how we think about the victim themselves, but also them moving us into the attacker. I'm really, what I'm trying to do is remove the Some of the attackers, who they are, but also I think we also have this image that we have been perpetuated by the media, this kind of faceless hooded figure in the dark, crouching over a laptop.

[00:17:58] So what I'm trying to do then is actually put a human behind that attack. So I'm using actual names I'm speaking to ex criminals, law enforcement, military, but also academics as well. And really what I'm trying to do is understand the why. So there are a multitude of hows, and I think from a cyber security, we're always focused on the how.

[00:18:24] So if you kind of think about the how could be, they're using malware, they're using a zero day ransomware, phishing, and it's a million and one ways. in which they could try and get access to your network, or try and get access to the data, or try and get access to the people. But really understand why, why are they trying to do that?

[00:18:45] And I think, again, it's a kind of a misconception that a lot of the cyber attackers are motivated by money and financial gain, and I think a lot of them are. But actually, I also interviewed an ex member of Anonymous, for example, and they said, it was money never came into it. So they were motivated by this need to have social justice and to stick up for all of those people who didn't have a voice and couldn't stick up for themselves.

[00:19:14] And I found that really interesting to get, really get to the bottom of some of those whys. And then I was digging around and understand, as I say, understanding some of these ex criminals, how they've reformed, because I was really conscious that I didn't want to give a platform to current criminals.

[00:19:31] Because they could take that as a bit of kudos and showing off and whatever else. So some of the people I've interviewed, I say are ex criminals, they've reformed, they're sharing their why, sharing why they changed as well. But ultimately what I really kind of got to the bottom of is Deep rooted human emotions that a lot of attackers take advantage of when we're talking about social engineering.

[00:19:58] And this can be victim or attacker, and so can we think about kind of the seven deadly sins, cardinal sins, and this is kind of going right the way back like hundreds of years ago to the Roman Catholic Church, and they were kind of talking about how people can be motivated or tempted by greed. Gluttony, pride, lust, envy, wrath and sloth in essence.

[00:20:23] And I was really then taken back that these are kind of really kind of core to us as being a human. And those are kind of the core motivators, if you like, that kind of lead us that one step further. If you understand the human, you understand their motivations, how far they're prepared to go in some of these situations, what are we going to do about it? And that's where I'm trying to get to the human centric part of security. So again, you know, we can talk about different technologies and how we're going to block all these malware, ransomware, etc.

[00:20:54] But what I challenge people on is again, something we've already reflected on is the organizational culture and having a really good look at the inside of your organization. And a good way of doing that is to really reflect on the level of shadow IT. And the level of shadow processes, in essence, all the ways in which people in the business have found a way of bypassing all your controls, ignoring the policies, whether they're doing it maliciously or they just don't understand.

[00:21:29] The fact is, if we don't understand our business and we don't understand our people, how can we protect it? 

[00:21:36] Blake Thompson Heuer: Interesting. Interesting. Yeah, I know. I'm looking forward to reading, Understand the Cyber Attacker Mindset. Speaking of various motivations, we've seen a lot of hacktivists come out of Ukraine. And I noticed on your LinkedIn that you volunteered as a CTO mentor in Ukraine through the USAID right before the onset of the War with Russia there.

[00:21:56] What can you share about that experience? And how would you gauge the impact of that war and that conflict on global cybersecurity so 

[00:22:03] Sarah Armstrong-Smith: far? Yeah, I mean, I think it was a very humbling experience. And I think, as you sort of say, it was kind of the few three months, four months leading up to the actual invasion.

[00:22:15] And they didn't know when they were going to be invaded. They just knew that their troops were starting to amass on the border. They were getting heavily hit with cyber attacks. And I think the real testament that was really making the difference at the time is that Ukraine Was utilizing a lot of old Soviet infrastructure, so Russia understood a lot of the infrastructure where we're talking about healthcare systems or energy, all of those kind of different things combined.

[00:22:46] And what they were trying to do is really trying to look at how do I, I, I kind of got all the massive big legacy estate, my adversaries, no. Exactly what I've got, they know exactly how to attack it, they know what my vulnerabilities are, what am I going to do? And so in essence part of this program was to have security leaders from across the US and also Europe.

[00:23:11] To basically mentor the CTOs of some of these organizations. So these people, they own the technology. They are trying to modernize the technology at the same time, but they're having to deal with this real threat at the same point. So some of the things that we were talking about. was first of all, really kind of understanding the state of play with some of the critical infrastructure and also then thinking about what's their strategy.

[00:23:39] It really comes back to exactly what we just discussed. Having that incident response plan, understanding the vulnerabilities, understanding how the adversary could exploit those vulnerabilities. So let's say that they're able to get straight into a nuclear power station, they're able to get straight into hospitals, they're able to get into education, central government, what can they do?

[00:24:06] With those systems, with that data, with that information. So it was really trying to just get them into the right mindset. It was, there wasn't enough time, you know, this wasn't about rolling out new infrastructure or projects or anything like that, but it was really about preparing them from what was to come and a real field for them.

[00:24:26] Because a part of that was a real challenge that they had to keep that critical infrastructure running and kind of no matter what's hitting at them. And so they have that mix between, so the cyber attacks, but also the kinetic, the actual missiles as well. But I think from that program, one of the things I'm really proud that Microsoft was able to do, kind of on that run up, was actually to help Ukraine government move a lot of their data to the cloud and move it very, very quickly.

[00:24:58] And so again, you know, we think about a lot of government across the globe. So this is not unique to the Ukraine. They have many requirements for data sovereignty, data residency. Your data cannot leave the country. It must stay in where it is. Well, I think that's great in peacetime. To have your data as close to you as possible.

[00:25:19] But actually, if all your data is in data centers and the adversary knows exactly where those data centers are and are prepared to basically blow them up, what are you gonna do? And so the Ukraine government actually change the law probably at one week before the physical invasion, which then enabled them to move their data.

[00:25:41] And so not only did they move their data to the cloud, but they also moved it. out of Ukraine as well. So we were able to kind of copy that data, if you like, across multiple European data centers. And so from, I think, from a Russian perspective, they did actually blow up some of the Ukrainian government data centers, but the data itself wasn't there.

[00:26:03] And so I think from an adversarial perspective, it's kind of in the cloud and inverted commas. I don't know where it is, and I don't know where to attack. And I think that's really important from that perspective. But we were also able to provide free security technology as well. So we deployed a number of security software.

[00:26:22] So, um, I don't want to be a promotion for, for Microsoft per se. But we deployed Defender for Endpoint, but we also deployed a brand new technology that we just acquired from a company called RiskIQ. Now, one of the things that RiskIQ does is actually to map the internet, which, uh, which sounds like a kind of mammoth task.

[00:26:42] But if I just explain the rationale for this. Why we deployed this technology. So if you're thinking about malware on an endpoint, and you're thinking about it's pinging something in the external world, and I don't know what it's pinging. Um, and so this is what this technology does. It's kind of reverse engineer, if you like, and to pinpoint those command and control servers.

[00:27:04] And so we can therefore then identify that not only are you being pinged. But that company, that company, that company, and that company is also being attacked exactly the same time. So by us deploying that infrastructure throughout Ukraine's critical environment, we were able to actually give advance warnings, um, to different organisations.

[00:27:26] So they weren't necessarily Microsoft customers, it was just the fact that I can actively see across the network that you are being targeted by this new strain of malware, wiper malware, ransomware, whatever the case may be. And therefore by deploying the Defender for Endpoint, we were able to then block that malware as well.

[00:27:47] So that was hugely successful. and really great opportunity because what we identified with Russia in particular, that they created seven brand new strains of wiper malware. We see more active attacks in the first four months than the previous eight years. And so actually, as soon as you would play your hand, if you like, so if you're going to develop a new strain of malware, whatever the case may be, Once we've identified it, we can create a detection for it, we can block it, and then having that deployed as quickly as possible, not just into Ukraine, but into other countries, really has made a difference, I think, in terms of Russia's ability to be able to affect Ukraine's critical infrastructure.

[00:28:36] I think from their perspective, they thought the war was probably going to end very quickly because I hit the critical infrastructure. With all of this kind of like wiper malware, destroy hard drives, destroy all the data. Then I hit you with actual missiles. You're probably going to surrender very quickly. That isn't actually what happened. 

[00:28:57] Blake Thompson Heuer: Well, I think, you know, to your point earlier about not wanting to sound too promotional of Microsoft. Any neutral observer of the Ukraine war and the Russia's invasion there, and of course, as we mentioned, Russian president Vladimir Putin had another thing coming with what he initially thought would happen.

[00:29:12] But I think any unbiased observer would really have to recognize the role that Microsoft played in, like you mentioned, Disseminating some of the information, getting threat information out in public very quickly. And this was certainly the hybrid aspect of this war on the cyber IT side. To many outside third party observers has been underwhelming on Russia's part, but that's in large part, I think, a testament to the defenses that went in and both Ukraine and its third party international partners.

[00:29:41] ability to thwart some of these and some of the planning that went into it. So no, I don't think it's, uh, I don't think it's promotional at all. I think it's just a reflection of the reality of what played out on the ground there from that cyber IT defensive perspective. Now I do have to switch gears here for a second and give us kudos because we're well into this podcast recording.

[00:29:57] And to my knowledge, we can check the transcript. Neither of us has mentioned artificial intelligence. And that's quite a feat with the way that AI has been the buzzword of the day. I would be curious to hear, you know, just When it comes to the security space, what does AI mean? How are these large language models, this generative AI technology, how is it shaping the future of cybersecurity?

[00:30:18] A small question, I know. 

[00:30:20] Sarah Armstrong-Smith: I think the interesting thing is even when we talk about AI in its general term, AI is nothing new. So it's been talked about since the 1950s, even way back when scientists and technicians were thinking about, you know, what if one day we could have machines that were as intelligent as humans, then it kind of all got a little bit hard and too difficult.

[00:30:43] Nobody could really figure it out until the kind of the nineties, late nineties. Then we kind of had machine learning. And when I think when we talk about. AI in the modern era, really what we are talking about is the machine learning that many companies have adopted, like kind of looking at huge data sets, being able to kind of find the patterns in the data sets.

[00:31:02] That in essence is how Microsoft utilizes a lot of machine learning. In our threat intelligence, so look for those patterns, what's new, what's different, how things are evolving. So even machine learning in a Microsoft security context has been utilized for a fair number of years. I think what's changed, in essence, is kind of been this onset of generative AI, which again is a subset of the machine learning, sort of deep learning kind of networks.

[00:31:31] And I think that's probably brought to life in the last year when we're thinking about chat GPT. But all of that, so really what we're talking about is the building of those large language models. And again, because why now? If we've been talking about this since the 1950s, well one of those things, again if I talk about Microsoft for a little bit, is the relationship Microsoft has had with OpenAI, who is the builders, if you like, of ChatGPT.

[00:31:58] But our relationship with OpenAI has also helped with that a little bit. So Microsoft has built the supercomputer that runs the OpenAI technology, if you like. So imagine that you've now got a supercomputer in the cloud, you've got the computational power of the cloud, as well as all the data that's needed to kind of build these large language models.

[00:32:25] And that's really been the differentiator. I think we've had to get to now almost to have the level of computational power. and data and have the know how, how to bring it all together. So I think when we think about therefore, you know, as a chat GPT is just one of many technologies that we're now kind of seeing a Google and various others are also building this capability.

[00:32:47] But in essence, I think that there is kind of two camps. One camp is looking at the art of the possible. How do I embrace this new technology? Lots of different case studies. We've seen huge number of businesses who are Standing up R& D and innovation centers, all very cool. Thinking about what does the future look like?

[00:33:09] That being said, when I talked about some of the conversations that I'm having with CISOs and other technology leaders, there's a little bit of a nervousness really, about do we really understand this technology? And even if we kind of think about in the last year, we've talked about what is biases, you know, it's hallucinating, it's make stuff up, um, we don't really don't really understand, The consequences of how this technology is being utilized.

[00:33:36] Blake Thompson Heuer: Even introducing AI specific vulnerabilities as well. Honestly, there are new AI specific vulnerabilities that have 

[00:33:41] Sarah Armstrong-Smith: emerged. Exactly, exactly. And so I think what we then have to kind of consider is the CISOs. Don't want to be the one that says no. And again, historically, as soon as it gets to security, they're, they always the ones that put the brakes on.

[00:33:55] They come at us with all of these policies and processes. They're the kind of the naysayers, don't let us do anything. And I see that CISOs are really trying to change that perception. They really need to be part of the business. So it's not about saying no, it's about understanding, as you said, this.

[00:34:12] scale of the risk and what those risks are. It's about getting the balance, right? It's not about saying no, not to use the technology, but actually do it for the right reasons. I've been accountable for whatever comes next. So if you don't fully understand it, you haven't tested it, you've put it into the wild and it does something.

[00:34:35] Bit weird, let's say. What are you going to do about it? Again, it comes back to that crisis management. But I would say there's three things to think about. One of which I've already spoken about a little bit, which is in how AI is being embedded into security. So again, if I think about, okay, we've been utilizing machine learning for a very long time.

[00:34:52] What's, how's a large language model helping? So, you may have heard of Copilot from a Microsoft perspective. Copilot's being embedded into every bit of technology. It's GitHub, Office, but we're also having Copilot, but into security. And the idea is it's to augment the human, not to replace the human. But if you think about it from a security analyst perspective, working security operations.

[00:35:16] And you see something a bit weird that you don't really know what it is. It could be a zero day, so in essence there are no detections, there's no nothing. The idea really is to be able to ask a question in plain English and get an answer back without having to know how to do active threat hunting, how to do these really complex queries.

[00:35:37] This is one of the game changers that we're kind of seeing from that perspective, and the idea is, is to spread it. Speed up the response, but also speed up the knowledge, so how quickly we can train people to be an analyst, you know, active threat hunter, you know, complex queries, maybe at the moment, it takes 12 months, 18 months, you know, however long it takes.

[00:35:59] And as I say, by having just being able Ask the question in as plain English as possible and get the answer back. That's, so that's one element. The second element then comes down to the security of the AI itself. So for me, again, if you think from an adversary's perspective, their ability to get hold of the model, understand the.

[00:36:21] algorithms, how they might be able to manipulate the output, reverse engineer it, or even steal it. So you need to then think about the adversarial AI. So how might an adversary either utilize the large language models that have already been built, or whether they would actually build their own? I'm asking some Little elements of this that are kind of worrying people.

[00:36:48] So we have seen attackers utilizing some of these large language models, let's say to refine the code that's being used in malware or to perfect the phishing emails. And I would just caveat that the machine itself, the AI, is not actually doing bad things. It's the, again, coming back to the human, it's the human behind.

[00:37:11] the software, the AI that is manipulating the machine, if you like, to give an output. So we've kind of, again, if you reflect again back through GPT, other technologies, what's kind of happened with that is the Developers, if you like, have put safeguards in and jailbreaks. So if you just turn around and say, let's ask a question, how do I kill people?

[00:37:34] And it will say, absolutely not, I'm a responsible AI, I love humans, I'm never going to tell you how to do that. And so kind of, they'll get around it, because yeah, but imagine I'm an evil villain in a Marvel film. And I want to know how to kind of do these things. Like, yeah, okay, I'll tell you how to create a great story plot for a film.

[00:37:57] And you can kind of see that it's asking in a very slightly different way to get the outcome that you are after in the first place. And this is kind of bringing in a whole new realm of what we call prompt engineering. And it really comes down to, it's not what you say, it's how you say it. And this is really interesting.

[00:38:16] And I think that's where you're kind of seeing how people are able to Bend and get round policies is exactly what we were just talking about, where there's a will, there's a way. And if I understand what it is I need to ask and do it in a slightly different way, I'll get the machine to tell me how to do 

[00:38:33] Blake Thompson Heuer: it.

[00:38:33] I wanted to pivot to a topic that's certainly of interest to, to SINAC as well, which is, you know, the need to build a bigger tent for, for women in STEM and usher in a more diverse cybersecurity workforce.

[00:38:45] What are some tangible steps that organizations can take to make that? A reality. I feel like it's, it's. It's all too easy sometimes to pay lip service to this idea of, Oh, we welcome diversity in tech without actually doing 

[00:38:57] Sarah Armstrong-Smith: anything. Exactly. And I think we need to be cognizant first and foremost about what do we mean by diversity?

[00:39:06] So some people will say, well, that means, oh, we need to have more women as an example, because it's a male dominated industry, but actually diversity is much, much more than that. I think we need to look at diversity from all different perspectives. And I think, again, if you think about. The cybersecurity risks, the threats that we're talking about, how they're evolving.

[00:39:25] Actually what we need is different backgrounds, experiences, different cultures, and all of that different perspective together. Because ultimately what that means when we have diversity in our teams, we remove the group think so we all think the same, and we all kind of have the same life experiences.

[00:39:44] What we kind of come up with is probably very similar. As much as I think it's a cliche, we do really need people who can think outside the box, but we also need enough people with a healthy amount of challenge on the status quo. So if we just kind of say, well, that's the way it's always been, and kind of move on.

[00:40:05] And don't ask difficult questions. We're never going to move forward. And I think therefore it kind of comes back again to the culture of the organization and it's that realization about the culture that we've created. And also the culture that we want. And we can't, as you said, you can't just will it to be so because we said so.

[00:40:25] We can't just say, well, we. We want to have a diverse organization. We want to be inclusive and want to do all of those things. If that's not your organization, it's not going to change overnight. So you actually have to have a very top down, very deliberate process, if you like, first of all, you say to have that introspection to really kind of understand where we are today.

[00:40:49] Is it giving us the right balance, the right outputs, the right outcomes that we're expecting? And if not, A, what's the impact of that? And B, what are we going to do about it? But it's not an easy task. As you sort of said, I think people sometimes play lip service to it a little bit by having quotas. And if we have X percent of people from this culture or the X number of women or, you know, whatever the case may be, then we've met our diversity quota.

[00:41:17] And actually it's much, much more than that. But if we get it right, it can have really, really beneficial consequences for the entirety of the business. Not just in terms of how we help people in the organization, how we attract new talent, but also how we manage risk. In totality, because people are scared, and if they feel like if I put my head above the parapet, I'm going to get shot down, if I've done anything wrong, maybe I did click on that phishing link, but if I tell anyone, I'm going to get chastised, I'm going to get disciplined, so you know what, I'm not going to bother, I'll just, I'll just put it on the carpet, like I said with everything else.

[00:41:53] And you kind of, you kind of see, as we sort of said about some of these major incidents. There's a pattern there. And so these little things that seem like great, it's not so bad. You know, it's just the way we've always done it. No, no one cares. Yeah. Just break a few rules here and there. And it kind of, these things build up to big things.

[00:42:12] And that's why I said that that's where, you know, sometimes then for within how to look backwards, how did he get here? There's a pattern, and it really is, and how do you break the pattern? And it really is a conscious decision to do it, not just something you will. Well, finally, 

[00:42:29] Blake Thompson Heuer: there's one question that we ask of all of our guests on the podcast, which is the dreaded fun fact question, if you will.

[00:42:36] What's something that we wouldn't know about you just by looking at your very illustrious, by the way, LinkedIn profile? 

[00:42:42] Sarah Armstrong-Smith: Well, you know, I think I'm a pretty open book. Literally speaking, I mean, I've talked about my first book, Effective Crisis Management, but I really laid it all to bear in there as well.

[00:42:52] So I talked about a lot of the incidents, I talked about, from my perspective, having lived and breathed from it, I talked about real life experience of trauma. as well. And I think that was kind of important to have that out there. But then you've kind of like my, the other thing that I do, it sounds very odd, but I actually use my dogs and my dogs make a special appearance on LinkedIn as well to promote cyber security awareness.

[00:43:17] It sounds very weird, but actually it kind of raises a smile. And I have a bit of a cult following now. So I'm like, yeah, yeah, Microsoft, yeah, whatever. Where's the dogs? I haven't seen you post about your dogs for a while. But I think it's whatever, it's whatever works, but, but ultimately, as I said, there's not a lot that's secret, but I think it's actually important, as we've been talking about all the way through today's podcast, it's really about understanding the human.

[00:43:43] We are all human, ultimately, no matter what job we do. 

[00:43:46] Blake Thompson Heuer: Well, thanks so much for joining us. Uh, really fascinating insights and you've inspired me and now I need to figure out how to get my ginger tabby to say something meaningful about cyber security or pose him in a certain way. I don't know what we'll find out, but thanks again for joining us.

[00:43:59] Really, really great discussion and look forward to checking out your understand the cyber attacker mindset book. 

[00:44:04] Sarah Armstrong-Smith: Oh, you're very welcome. Thanks for inviting me on the show. 

[00:44:08] Blake Thompson Heuer: If you liked what you heard today, I hope you'll give us a five star rating and review. It's a big help. And please share this episode if you know anyone who could appreciate a little InfoSec wisdom on their morning commute.

[00:44:19] We have a whole catalog of episodes well worth a listen, so you may want to check out past interviews as well. Finally, if you know someone who might be a good fit to appear on the podcast, or have any comments or feedback, drop us a line at we'reinpodcastatsynack. com. That's S Y N A C K dot com. Until next time.

[00:44:39] Narrator: We're In is brought to you by Cinec. If you're looking for on demand, continuous access to the world's most skilled and trusted security researchers, you can learn more at Cinec. com. SYNAC recently launched its Empower Partner Program so that partner organizations can more easily offer the SYNAC pen testing platform to their own customers.

[00:44:58] This approach helps optimize SYNAC partners technical competencies and allows them to better integrate SYNAC into their portfolios. It's a way that partners can win new business by adding continuous, best in class solutions to cybersecurity, cloud, and DevSecOps offerings. SYNAC partners with organizations around the world to make them safer.

[00:45:16] More resistant to cyberattacks and more capable of finding and fixing dangerous vulnerabilities before attackers are able to exploit them. Learn more at synack. com. That's S Y N A C K dot com.