Paul Mote, Vice President, Solutions Architects at Synack, discusses if we're ready to embrace AI in a world of ever-evolving threats. Who will AI help more, attackers or defenders?
Paul Mote, Vice President, Solutions Architects at Synack, discusses if we're ready to embrace AI in a world of ever-evolving threats. Who will AI help more, attackers or defenders?
TIMESTAMPS:
[00:00:00] Blake: Hello and welcome to We're In, a podcast that gets inside the brightest minds in cybersecurity. I'm your host, Blake Thompson Heuer. And joining me today is a, a fellow Synack-er! Paul Mote, Vice President of Solutions Architects here. Uh, Paul. Welcome. Glad to have you here.
[00:00:14] Paul: Uh, it's an honor to be here, Blake, you know, longtime listener, first time caller. I really appreciate the, uh, the slot this week.
[00:00:20] Blake: Now, now, Paul, I have to admit, when I first joined Synack, uh, coming up on four years ago now, actually, I didn't totally understand what a solutions architect did. I mean, it sounded good, right? You're, you're solving things, you're building things to solve things, I don't know. But what does it actually look like in, in practice at an organization like Synack?
[00:00:38] Paul: Yeah, it's interesting. I think it's, um, a little bit of a Swiss army knife, right? Kind of consultant and sales engineering merged into one. I think our technical alignment is like the technical sales group, but I think we spend most of our time, you know, talking to clients, understanding. What are they trying to solve with their security testing program?
[00:00:52] Why are they trying to do it this way? What, what are the real business drivers behind the testing that they're doing? Not just the fact of doing security and then making sure that whatever we, we ultimately put in front of them maps into what they need to do to, to guarantee the outcomes they're looking for, and make sure that they're gonna be set up for success.
[00:01:08] Blake: Given that you're on the technical side of the house here a little bit with, with some of, um, how you navigate the challenges that customers are facing, I, I'd be curious to hear your thoughts on AI and, and normally I do. Wait and cover a couple questions before jumping right into the AI buzzword, but, um, I guess listeners can check that off their Bingo card right away because it, it really is important and, and it, it's especially fascinating how generative AI technology and just the rapid advancements we've seen in the last couple years, how has that changed the way that you approach your day-to-day work?
[00:01:36] Paul: Yeah, it's funny. I think you nailed it, right? I think you can't have any conversation without at least talking about AI for a good chunk of it. I think it, AI is the, the technology that's gonna change the, the, the spaces we know it. For everything that we do. So. I think for me, you know, I, I look at it and realized very early on, it's a great companion for brainstorming.
[00:01:55] It's, um, great for research. It helps you kind of fill in the gaps in areas that you may have in terms of your own skillset, as long as you're honest with what your gaps and your skillset are. Um, I actually had a friend that said, I wanna learn ai. What, what should I do? And I said, well, we should ask AI how to learn AI and have it build you a plan.
[00:02:10] And they're like, ah. Didn't even think of doing that. And I think that's the fundamental shift of it's just becoming like the new Google, right? You can't, rather than Googling it, usually you're gonna leverage AI to go help you do the thing and figure out the thing. Um, and naturally with the introduction of like agentic workflows and everything else, everything kind of changes in terms of how to leverage that capability. So I think it just, it's part of your daily life for, for everything.
[00:02:34] Blake: We will, we'll get to the age agentic piece soon, but I, I did wanna flip to the adversarial side of the house. How do you see US foes using or potentially using AI to retool their offensive operations? Right. I, I feel like we hear a lot about the AI enabled attack, and I've even had some conversations with other podcasts, gus, this season on that, and I sometimes in practice, I feel like it's, we're not always seeing generative AI leveraged and sure it might be improving phishing email quality and whatnot, but where does the threat sit as we speak and where might it be evolving?
[00:03:07] Paul: Yeah, I think you know, with everything good, there's always the opportunity to use that same good for bad. So when you're sitting in your chair and you realize you get answers faster, you can build these things faster. You can have the AI look at your code and help you fill in the areas that you're weak on your code or write better code, integrate solutions that you didn't think could be integrated.
[00:03:25] 'cause AI can help you do that. All of those same capabilities earn the adversary seat, right? They have the same subscriptions that we all have. They have the same capabilities we saw with the Chinese, uh, version of chat GPT that came outright, be able to very rapidly spin up and synthesize data and be able to, to execute more quickly in terms of, uh, learning, learning from larger data sets.
[00:03:46] Adversaries are gonna ultimately be able to leverage the same, same capability. So, uh, researching organizations to understand what they're doing, looking at their attack surface, analyzing what is in their attack surface, being able to figure out how to pair the right attacks against that attack surface.
[00:03:59] It's gonna help exponentially increase their capabilities and we have to be ready for the fact that they're gonna be able to, to be better at finding our holes faster. And ultimately that, that rapid escalation of of them being able to look at information and get to us faster and be successful is something we're gonna have to come to terms with and really have a major shift in terms of how organizations handle what their vulnerability and attack surface exposures look like.
[00:04:22] Blake: No, that's, that's definitely a, a sobering thought, the potential for these adversaries to really scale up their operations and leverage ai. I mean, we see, you mentioned in your day-to-day work, I certainly use AI as well, right? It is, it does help with that scale piece of being able to produce more accomplish more. Are agents really up to the task though of finding those gaps that you describe and those holes in people's networks are, are, are they really finding and exploiting critical vulnerabilities? As things stand, perhaps they're a little bit better at carrying out these more discrete concrete one specific task type operations.
[00:04:53] And, you know, some of the vulnerabilities certainly that you deal with Paul, are, are, can be really complex pathways that take multiple hops and steps and almost logical leaps to, to, to arrive at. Are we gonna see AG agentic AI better or grow better at picking off some of those more complex ones? Are they gonna stick to the low hanging fruit?
[00:05:14] Paul: I think, I think eventually we, we will see AI start to be more successful in the things that we think are too complicated for the AI agents to handle today. I do think the reality of our current state is, uh, adversaries are getting better at automating the boring stuff. So just like we should be automating the boring stuff, they're getting better. The larger the data, data pool that they can learn from, the more successful they're gonna be in it, right? So the more bespoke and complex it is, it requires a lot more logic to be inserted and a lot more autonomy of the AI to make its own decisions, and it just doesn't have that, right? So if I'm looking for SSH ports that are exposed that are using an old version so that I can then pair a known CVE to it, I would expect AI to be pretty quick and be able to do that. If an AI agent is focused on SSH and that SSH exploitation, it'll probably do pretty good. But if you said, look at a website and look for a note and look for a function that's not referenced by any API call on the web app that is publicly exposed and exploited for remote code execution, it's not gonna be able to make that leap.
[00:06:11] A human will make it pretty easily when they're going through it going, oh, that's weird. And thought process research. Oh, that's weird. Thought process research. Oh, I found this thing. Now I'm in. I think there's a big gap between AI being able to do that compared to where it is currently today.
[00:06:24] Blake: I'm glad you mentioned CVEs. 'cause I feel like that also plays into some of the discussion around, around scanning technology and automation in general. There's a big difference obviously, between known CVEs and unknown CVEs. Um, where does that factor into the AI discussion, right? Fi figuring out are ai, are you gonna see them dropping new CVEs, uh, easily.
[00:06:48] Paul: I think there might be some low hanging fruit stuff that ultimately feeds into that. I think the challenge of like when you think of CVE, right? CVE stands for common vulnerability exposure. So that's the, just in case folks aren't tracking with CVEs, that's what it stands for. And really that process is, I found a problem in a piece of software and I've reported to a naming authority and they assign a value to it so that the intelligence associated with it can be shared with everyone from a defensive and a scanning perspective. So how do you patch it? Uh, how do you scan for it? How do you look for all those things? Um, I think if we get to a point where AI is finding and submitting what it thinks a CVE will be, uh, it'll just make the pile of vulnerabilities, which is already going up to the right, astronomically uncontrollable.
[00:07:29] I don't think that's really a good, a good use case. I think it would get outta control very quickly. I would think that aI outputs for low hanging fruit techniques, like looking for exposed vial types on web apps, like information disclosures. I think that will, that should help with creating some, um, additional value in terms of finding vulnerabilities, uh, at scale.
[00:07:51] Blake: I'm glad you mentioned that noise point, right, of our AI systems and, and I guess now we're flipping a little bit to the defensive side here. Are they gonna be so good at finding potential things that they just deluge defensive teams with a bunch of junk, right? And these AI reports can be often really well written too.
[00:08:11] I'm there are gonna be very polished looking, but then if you look under the hood and realize, okay, this vulnerability isn't actually exploitable, or the risk is, you know, pretty acceptable for what's happening here. And suddenly you have hundreds and hundreds of these reports to sift through, is that really helping anybody or are people just gonna be drowning in AI vulnerability, slop, I'll call it.
[00:08:29] Paul: Yeah, I think that the, there was a open source program that talked about a program they were running and they were getting, since AI was released, they're getting a, a massive flood of vulnerability or, reports through their bug bounty program, and they said it's all slop. It's a hundred percent low quality unactionable.
[00:08:46] The AI outputs are hallucinating. They're pulling in snippets from Reddit, um, that aren't actually inside of the, the, like in the POCs that they're giving us, or not even available on our website. And it's, I think that is the, that is a major risk and it, and it's a risk that amplifies the existing problem, which is our VUL management teams are already underwater.
[00:09:03] They simply can't handle the stack of known vulnerabilities they have today. They don't know how to prioritize 'em, but they also don't know how to get through that stack 'cause it just keeps getting higher and higher and higher. Change management processes are difficult. So I, I think it could definitely perpetuate a lot of the noise and make it even harder for those teams to figure out what the signal is like, what do I actually need to take action on to keep us out of the news, to protect our customers, to protect our shareholders? Um, I think that's a, it's a major threat to that, to that in space.
[00:09:28] Blake: we hear a lot about the time it takes adversaries to exploit vulnerabilities in the wild, right? I love that phrase. In the wild. It sounds like David Attenborough is narrating advanced persistent threat activities, but so out in the wild, we're seeing this time to exploit go down. Maybe it's kind of leisurely gone from weeks to days. What will that time to exploit look like, a year from now in the AI era.
[00:09:52] Paul: Yeah, I think it's, it's probably one of the biggest threats that organizations face today is, is how fast that that window is shrinking. Volek actually had an amazing report that they released in Q1 of of this year. They highlighted that 159 vulnerabilities, meaning that there had CVEs associated with them, were exploited in the wild.
[00:10:10] And for those that aren't tracking what in the wild means, it means that an adversary, somebody who's not authorized to attack an organization, is using it for some level of gain. It could be nation state activists, could be something else, but they create a breach, right? They break into organization using those vulnerabilities.
[00:10:24] The interesting thing of those, those vulnerabilities is that 28% of them were automated and weaponized within 24 hours of public disclosure. So you kinda sit there and imagine, you know, Blake, you and I are running an organization. We have a thousand systems that we're managing, hundreds of people that are working on this stuff, and all of a sudden the vulnerability drops in the news.
[00:10:44] And somehow we have to see that that vulnerability drops, be able to ingest that information and figure out, do we have an exposure? And then figure out how to solve it. Less than 24 hours because if we wait beyond 24 hours, an adversary may already be in the network. And you take that and say, well, how do I get through change management?
[00:11:03] How do I make sure that a patch, if it's, if it's even available, is going to not break my system and cause downtime as I'm trying to defend my network? All of those things are gonna change our, our landscape dramatically.
[00:11:15] Blake: Well, and you've been really good, Paul, about defining some of the acronyms that we like to toss around in the cybersecurity industry, right, CVE. And we have another one that we like to use at sync the meantime to remediate, or MTTR. Uh, I don't know exactly offhand what the, what the number is, what the average is, but you know, you're not seeing many folks patch with the kind of speed you just described, right?
[00:11:36] That, that the 24 hour window, uh, that perhaps is only shortening from here on out. Are we falling behind?
[00:11:43] Paul: I think as an industry that is a, it is a core problem, um, even in our, our outputs, right? So we have the benefit, we have the Synack Red Team. When we give a report to a client, we give it to 'em as we find it, and it includes the steps we produce and recommend fix action. All of it is included in there with a full exploit chain of how we did it.
[00:12:00] So they get it real time during the test. Even then, organizations still take a while to get through their change management processes to review that information so they can say it's real, it's actionable. I need to fix this. 'cause a human being one, and it's confirmed, it still takes days, weeks to get through those internal processes.
[00:12:16] And so knowing that that is already happening today, you're already getting that level of high fidelity intelligence, it's definitely gonna require a lot of reworking so that organizations can get intel and act on that intel a lot faster, right? They're gonna need, how do I get the, the signal faster? And then how do I act on that signal faster?
[00:12:31] 'cause I won't have the, the benefit of it taking a long time. Right. Otherwise, I'm just gonna have to deal with the fallout of whatever comes, if I'm, if I'm late to the, the party.
[00:12:41] Blake: And presumably that's gonna necessitate leveraging AI as a defender to some extent, to speed some of those processes up, I would imagine. But there's an interesting cultural element to this too, right? We, you know, you talked about using AI in your own workflows. I think there's certainly a lot of pressure, uh, and good reason to turn to some of these AI tools.
[00:12:59] But, uh, certainly in my space, is it, you know, in the communications front and, um, former journalist, I've seen there's a lot of skepticism, a lot of reticence for the uptake of AI tooling, right? People don't necessarily trust that the AI is gonna deliver accuracy or, uh, performance that they need. So you still have potentially, you know, a good chunk of people not turning to these tools that could accelerate some of these, you know, time it takes to remediate vulnerabilities and whatnot.
[00:13:25] I guess, what are you observing working with a lot of, a lot of clients at Synack,, you know, prospects on that cultural side. Are people ready to embrace AI and apply it across their securities function, or is there still a lot of re reticence? What have you been noticing?
[00:13:40] Paul: Yeah, I think it's interesting. There's, there's a weird divergence in terms of how people feel about AI.. A lot of the security teams are now having to respond to the fact that AI is already in their products that they provide to their customers, and they may not even realize that it was there, but it's there, right?
[00:13:53] If any application that had a chat bot, it's probably run on AI now on the backend, and that one of the major challenges on the defensive side is that the vulnerability classes of AI are different. They're not the traditional vulnerabilities that you would experience. It could be things like, you know, could I get, um, you know, bias testing?
[00:14:11] Like, will, will the application say something offensive to my customers? Um, I need to make sure that the user who's who, who's attacking that chat bot I. Since I don't really get insight into what it's doing and what decisions it's making and how it's making those decisions, I need to make sure it doesn't tell the, the end user how to do something illegal.
[00:14:28] Right? I don't, I don't need to be the source of how they got the information, how to do something illegal, right? Which we, I think we had a blog post we wrote about somebody who was able to jailbreak, uh, one of the AI chat bots that we were testing and ultimately get it to feed them some very interesting information.
[00:14:40] Um, so there's that. And on the defender side, there's still an inherent need to have a human in the loop, right? I think that's the number one thing that folks were kind of looking for. It goes back to like the security orchestration days where they were trying to say, well, if an email comes in as it has an attachment, automatic, do they do all these things all the way to quarantine?
[00:14:57] And everyone said, no, no, no, no. Like we can't, we can't trust computers to go do that for us. Um, so I think the. In terms of where to integrate ai, how much to trust ai, that's still a, an area that the security teams are having to, to look into and ultimately figure out what they're comfortable doing. But you see it everywhere, whether it be, you know, Microsoft Sentinel, right?
[00:15:16] Integrating AI for co-pilot all the way to Splunk, integrating co-pilot, uh, even Atlassian now includes co-pilot some kind of AI for their developer workflows. I mean, AI is, is everywhere part of every tool that you're working. So how do you integrate, how do you, how do you adapt it?
[00:15:32] Blake: Well just doubling down on that, I guess it's no surprise you hear one of the top AI companies anthropic their security leader, uh, CISO, Jason Clinton told Axios in the spring that he expects AI powered virtual employees to start roaming around corporate networks, starting really by next year at the latest.
[00:15:52] That's, you know, 2026 right around the corner. Are we gonna start to see AI powered virtual pen testers?
[00:16:03] Paul: Yeah, I, I think every, every industry, if it's technical. Has has an element of how an AI agent would fit in and do some level of function. We've kind of seen it on the developer side already, where they're using AI to be most of the junior developers and having senior developers review the, the code of junior developers, which are really just AI agents, um, that's all gonna play into, you know, how do we secure the network whenever the majority of traffic across the network and across all of our tool bases are not humans.
[00:16:33] And we don't necessarily have insight into all of the pieces and decisions that they're making and how they're making those decisions. So there's a, there's a lot of interesting. Dynamics and, and challenges associated with that. I wouldn't be surprised if we're able to do that. In terms of going back to the philanthropic comment, um, I do think organizations will probably be a little bit slower on critical functions. They probably won't just implement that right away.
[00:16:56] Blake: Right. That speaks to the importance of the point that you raised human in the loop to kind of trust but verify a little bit the AI outputs. That makes a lot of sense to me because I'm even just thinking, you know, Synack fought this battle years ago, right? Of. Demonstrating to organizations that they can and should trust, ethical, vetted hackers from around the world to, you know, test their networks and give a crack at it.
[00:17:18] That was quite a radical concept at some point. Right. And now it's, it's pretty well accepted that that's the best security practice getting some of that adversarial perspective. It's not a, it's not an uphill battle culturally to have those conversations with CISOs and you know, security leaders. But I imagine if you're talking about just throwing a bunch of AI agents at critical assets.
[00:17:38] When you know, even the AI companies themselves admit that there are some black box issues, they're not necessarily gonna be be able to dissect with granularity what these AI products are doing with their decision making, like how they're arriving at certain decisions. I don't know. That would make me squirm a little bit. As a security, as a security leader, how do you win their trust?
[00:17:57] Paul: Yeah, absolutely. And that plays in the offensive security space too, right? They're, I've seen a lot of, of fluff out there in terms of like, you know, this agent can go do all these things, and you kind of dig into it a little bit and you're like, I don't, uh, exactly sure. I, I, I would a hundred percent buy into what they're doing.
[00:18:14] I think there's, there's still a element on the offensive security space of the, the low-hanging fruit stuff should be, can be automated and even, even, we're seeing a ton of success there, right? In terms of like, what an AI can do and look for. But I still think for a lot of these, you need a human validation point, otherwise the noise factor becomes exponential and you just bury teams and stuff and content that they don't trust.
[00:18:36] And the security industry had numerous cases over the last 20 years of burying people with information they don't trust. And all that does is create a culture where people don't take action on the things they need to, or they miss the the signals that they need to be looking at. And have horrible outcomes.
[00:18:50] Right. And I, I think like the target breach was a good example of that where they had signal but their team wasn't looking at the signal 'cause they were buried in the noise. And so had they seen the signal earlier, it probably would've been a, still would've been a breach, but it probably wouldn't have been as bad. Um, I think that's one of the challenges of those, the AI outputs.
[00:19:06] Blake: You mentioned, you know, obviously there's an element of auto autonomous AI function here, right? The, the AI kind of doing things on their own, making certain decisions, f following certain attack paths. Is that the same as like to play Gartner here for a second and make some definitions? Is that the same as automated pen testing?
[00:19:24] Is there, is that the same thing as scanning? Like where, how do we define this? How do we categorize this and are there certain things that are better or worse when you're really looking at, you know, which tool, which pen testing tools might be worth their salt?
[00:19:38] Paul: Yeah, I think it, it's interesting. I think there's a lot of, there's a lot of, uh, misinformation, confusion around the, the notion of automated pen testing. I think if you, if you talk to any pen tester worth their salt and say, Hey, automate this pen, test flow, they'll go, uh, I can automate the scanning. And the scanning can look for the known knowns.
[00:19:54] But I don't trust any of the outputs. It's all suspected. Until I can take action on it, I gotta prove that it's actionable. Otherwise, if I send this all to the end customer, they'll go, yeah, no, no, we've already got all these scan results from Tenable, Qualys, rapid seven. You know? Yeah, yeah, yeah. You just kinda go down that list. So I think from the automated pen testing side, usually what, what I've seen in, in my exposure to it from organizations, we've leveraged it, it's usually looking at known CVEs tied to like infrastructure and kind of leaving the rest of the entire tax surface out of the picture. So it's just. It's kind of like a finer tuned VUL scan compared to like a real pen testing capability.
[00:20:28] Blake: So fundamentally that can only, it's one piece, piece of the puzzle is what you're describing. It's not giving that kind of holistic, top to bottom, you know, pen testing all your potentially exposed assets. That makes, that makes sense. And I think you're gonna see, I mean, certainly we're seeing it with ai, right?
[00:20:42] Like the claims flying around everywhere. It's difficult to sort out what's real and what's not. When, when, it seems like we're in an era where AI can do anything and everything. How do you get to the bottom of what's actually, uh, what it's actually capable of?
[00:20:56] Paul: It creates a lot of confusion, right? Um, it'd be like calling an Nmap scan with a plugin, you know, a V scan. It's like, nope, it's not. And it's a support scan. It's a smarter support scan, and then it's like, can you go up, it's like a V scan is a vol scan. An automated pen test is like a vol scan with like a little more little more intelligence into it, and then you get pen testing and red teaming and everything.
[00:21:13] It goes just well beyond those capabilities.
[00:21:15] Blake: One question I like to ask, just again, centered on so much that we've talked about with ai, and I love hearing your thoughts on this. It's, it's, it's a really important area and one I'm sure a lot of our listeners are. Keenly engaged in is just high level. Do you see AI helping defenders or attackers more say in the next year? I, I know it's hard to predict with these sorts of things, and you know, uh, predictions are tough, as they say famously, but where do you see AI helping more?
[00:21:42] Paul: I think it's on the attack side. Um, 'cause I think on the defense side, it's not just about the technical capability, it's also about the internal process and the culture. There's a lot to overcome beyond technology for the defenders. Whereas the adversaries, they've always had the mantra of like, we just have to be right once. Right? You have to be right a hundred percent of the time. We just have to find one opening and they we're in, and so yeah, the um, yeah, pun intended.
[00:22:05] Blake: Cue the cue, the music we're in. Welcome to, we're in.
[00:22:08] Paul: I love it. Right? Um, and so I think their, their capabilities are growing exponentially and they're gonna be able to tack targets faster and they don't have the internal bureaucracy or politics or, you know, any of those internal, uh, processes they're gonna have to go through to take action on it.
[00:22:22] And so I think it'll benefit the attacker drastically more over the next 24 months compared to the defenders.
[00:22:30] Blake: Finally, for somebody who's really interested in this and listening to this podcast say, and is just feeling a little overwhelmed, we tossed around a lot of fancy terms and, you know, pen testing, uh, processes. If, if somebody were just looking to get started right in AI or in in pen testing even, what resources would you recommend? Where, where should they go?
[00:22:49] Paul: Uh, great. So if you're on the, on the pen testing side, there's a lot of materials out there now. It's kind of, it's very popular to get into the offensive security space. You're always learning, there's always niches to go after and be the expert in like that one niche.
[00:23:01] Um, you could do like hack the box. There's a lot of free online resources from, uh, offensive security. Um, and the, uh, the folks over that create Burt Port Wigger, right? They have a bunch of, um, like acade things you can go through to learn how to hack various things. There's various, like VUL Hub has a bunch of, uh, vulnerable VMs that you can attack and learn in your own home environments.
[00:23:21] There's a lot of cool capabilities to kinda get in and figure out, like, is this for me? And like, how deep down the rabbit hole I want to go. But if you're gonna go in the, the pen testing space, you know that you've gotta go deep into how everything works, right? You gotta understand. How systems work, how protocols work, all of that fun stuff.
[00:23:34] Um, I would dig in there, uh, on the AI side, um, I'm actually a huge fan of Anthropics content, so, you know, personal subscriber to Anthropic. I love their, their age agentic workflow articles out there to really define like how an age agentic workflow can be and all the different variations and flavors and how each agent can be defined.
[00:23:52] Um, I think that's a great place to start to, to kind of open up the i your ideas of like, how could AI. Automate an end-to-end workflow of anything you're doing, security or otherwise. Um, how would you build agents? How would you go through it? How do you think about the architectures that can be implemented?
[00:24:07] And then from an offensive standpoint, I look at that same thing and go, well, where can I break that chain depending on how you've decided to implement your, your AI workflow because there's potential for escape down that process. So, uh, those are the two places. Those are the areas I'd probably start.
[00:24:21] Blake: No, that makes a lot of sense. And I, I've seen, you know, anthropics big on the transparency piece, so they really lay it all out there. I've seen some of their blogs and content that's like, extremely, almost damaging to the AI industry. When you look at it like AI trying to blackmail people and things, and they lay it all out there.
[00:24:36] 'cause I think they realize that it's important to reckon with the ups and downs of AI technology and uh, it's something that certainly the conversation will continue. And, uh, and Paul, I really appreciate you joining for this conversation. I think it's an important one and, and certainly an interesting one.
[00:24:51] Now, finally, before we, uh, we part ways here and get back to our day jobs, uh, there is a question we like to ask of all our guests. Which is, what's something we wouldn't know about you just by looking at your LinkedIn profile?
[00:25:04] Paul: Yeah, that's a good one. Um, what can I say that wouldn't be used against me in social engineering campaigns in the future?
[00:25:10] Blake: Thinking about that.
[00:25:11] Paul: Of course, of course. You gotta, you gotta think about it that way. Um, I think most, most people who who know me know that I actually have a, a huge gaming background. And so I grew up being an avid gamer, um, fueled by A DHD, just, you know, naturally latch onto it for hours and hours and hours.
[00:25:26] But, um, that's probably one thing you, you wouldn't find is that I'm a, I'm a huge gamer, but it also plays out, as you can probably tell in a lot of the ways I, I think of strategies and, uh, getting creative when you run into problems and whatnot. So that's, uh. It's a, a skill and passion and addiction that has been very helpful to me throughout my career.
[00:25:43] Blake: Well, tell, tell me more. If I'm gonna log on to steam later, where am I gonna see you? Where am I gonna see a plane in the lo? Which lobbies are you popping into?
[00:25:50] Paul: Oh, I, I gotta be secret. I gotta be
[00:25:52] Blake: Okay. All right. That's, I won. I'll, I'll look out for Paul underscore Mote 77 somewhere, uh, and, and I'll know it's you. Um, all right, thanks Paul.
[00:26:01] Great to have you on the podcast.
[00:26:02] Paul: Appreciate it, Blake. Thank you.