WE'RE IN!

Ads Dawson on developing the OWASP Top 10 for Large Language Models

Episode Summary

Ads Dawson, release lead and founding member for the Open Web Application Security Project (OWASP) Top 10 for Large Language Model Applications project, has no shortage of opinions on securing generative artificial intelligence (GenAI) and LLMs. With rapid adoption across the tech industry, GenAI and LLMs are dominating the conversation in the infosec community. But Ads says the security approach is similar to other attack vectors like APIs. First, you need to understand the context of AI-related vulnerabilities and how an attacker might approach hacking a particular AI model.

Episode Notes

Ads Dawson, release lead and founding member for the Open Web Application Security Project (OWASP) Top 10 for Large Language Model Applications project, has no shortage of opinions on securing generative artificial intelligence (GenAI) and LLMs. With rapid adoption across the tech industry, GenAI and LLMs are dominating the conversation in the infosec community. But Ads says the security approach is similar to other attack vectors like APIs. First, you need to understand the context of AI-related vulnerabilities and how an attacker might approach hacking a particular AI model. 

In the latest episode of WE’RE IN!, Ads talks about including threat modeling from the design phase when integrating GenAI into applications, and how he uses AI in his red teaming and application security work. 

Listen to hear more about: 

The misuse of AI, such as creating deep fakes for financial gain or manipulating powerful systems like the stock market 

The role of governments in securing the AI space and the concept of “safe” AI

How the infosec community can contribute to OWASP frameworks

Episode Transcription

[00:00:00] Blake: Thanks so much for joining me on the podcast ads.

[00:00:02] Ads: very much. It's a pleasure to be here and thanks for the awesome content.

[00:00:06] Blake: it's, uh, really appreciate it. And it's, uh, I'm really excited to dive into the conversation.

[00:00:10] We've got a, uh, a lot to talk about related to OWASP, but, uh, for the uninitiated, what exactly is OWASP and, and how'd you come to be involved?

[00:00:20] Ads: OWASP is a, is a, is a bunch of open source frameworks predominantly around, um, application security. They're probably most famous for their traditional OWASP top 10 API risks, which, uh, kind of used by organizations as well as other frameworks and guidelines about influencing best security practices.

[00:00:40] They also have some great extensive documentation. Predominantly the majority is contributor based, but they have a lot of great other content, such as like the ASVS, which kind of like delves a bit deeper outside of just like checklists and give developers that kind of like almost like a Bible kind of approach of how to, how to securely deploy feature function. Etc.

[00:01:02] Blake: I like that comparison. The Security Bible. Trust OWASP. That's a, that's a good tagline. Now, you're, you're, you're known for, uh, for leading some work on this new OWASP Top 10 for Large Language Models. I guess I shouldn't call it new. It's, it's been around for at least a little bit here. But what, what's the process like for getting a new Top 10 for OWASP?

[00:01:22] As you mentioned, of course, I think many folks are familiar with some of the more mothership main oas top tens out there. But, , how'd this come to be for LLMs

[00:01:31] Ads: Yeah. So it's a great question. I actually, so I actually kind of got involved in the actual project. So I've always been an OWASP child. Um, I mean, throughout my career, I've kind of dealt through network security and self location security. So OWASP was like a natural guidance for me. I've used it a lot of times and, I actually contribute to the, uh, well, I was in the Vancouver chapter, but then until I moved recently, kind of doing stuff with the Toronto guys.

[00:01:58] But yeah, in terms of like a new project, it was actually Steve who is the project manager, he had already set that up to be completely honest and kind of got through the approvals of that. I'm pretty sure they have a very straightforward process, but, I think I saw like a LinkedIn post or something, and it was like this new us project.

[00:02:17] And at the time I started a new role in AI and I was like, you know what, like, like if I can better my skills and contribute to OS, like this could be like a pretty cool initiative. We actually had our first birthday the other day, which I, thank you, I cannot remember what the date is, but it was recent.

[00:02:34] In the past few weeks, maybe.

[00:02:38] Blake: Well, on to bigger and better things. Uh, , I've got a,

[00:02:41] Ads: Absolutely.

[00:02:42] Blake: You've got a one one year and one year going strong. And I know, I understand. There are actually some updates coming. I guess you're gearing up to release V two of the O os top 10 for LLMs in October. Based on the, you know, official timeline, I guess there's some voting that goes on, the community kind of looks at reviews, the existing entries and starts contributing.

[00:03:01] But what's the scoop based on the preliminary actions and what you've seen so far? What do you think is going to change for V2?

[00:03:07] Ads: Yeah. So to be honest, V2 is definitely challenging. So the current phase we're in is kind of, we're accepting new entries or submissions or ideas, for example, merging entries, we've, I've even seen someone suggest like a top 20, for example. So, I mean, it's literally like, If you want to submit it, you go for it.

[00:03:26] Um, so, that, like, important of, like, data collection and analysis is going to be, like, critical to us to, like, produce something that is viable and is useful, individuals, organizations. In terms of, like, where I see it going? I honestly, at one point, I was hard set on like LLM applications, which the latter part of that, of that kind of structure really does make a huge difference because LLMs themselves inherit their own unique security risks, but there are overlaps between that and also traditional application security.

[00:04:02] So Like when we started out the project, we were pretty much very much focused around like LM's within applications. However, you know, obviously we had like kind of crossovers. So if I just take like a very simple example, something could be like resource consumption, or like model theft or shadow model theft, things like that, which can Be predominantly mitigated with traditional concepts, which is rate limiting, which is, you know, unrestricted resource consumption is in like the traditional API, OSP API list.

[00:04:33] So there was some kind of overlap in there. There's been a lot of talk about different kinds of models, even outside of large language models. The project is generally catered to work more towards generative AI as a terminology. Uh, rather than try and be sandboxed towards like LMs specifically, um, when we started out the project, LMs were just like the driving force behind NLP, in this current way we're seeing.

[00:05:00] So it kind of made sense to have like their own top 10.

[00:05:03] Blake: We'll stay tuned. It's coming soon. And it sounds like there are a couple of avenues that folks could decide to take with these. And it is interesting. You mentioned, you know, app security. It's, you know, I wonder how balanced it is in terms of, you obviously have your, your AI specific vulnerabilities, and then you also have kind of traditional app security problems that can just be.

[00:05:28] Introduced by virtue of organizations rushing to apply these AI tools in their environments. I guess, how do you navigate that? How do you distinguish, you know, looking at a vulnerability saying, ah, that's LLM versus that's just traditional app security?

[00:05:41] Ads: That's a really good question. So, first of all, part of the OWASP project, one thing that I would really suggest for, you know, individuals and organizations is to not stop there. We've seen like a great amount of adoption from the industry and, you know, this has been like mentioned in events, we spoke at RSA this year, which I never liked for us.

[00:06:02] For seeing myself doing so that was that was pretty cool. As much as it is great that it's been adopted It's definitely not limited to so first of all, like you should not be using that as a stopgap Same with all like traditional OS things, right? I'm a big fan of like building internal frameworks and your own taxonomies Which you can then to identify that.

[00:06:22] And then you can get collaboration from, you know, multiple stakeholders across teams and stuff. When it comes to like analyzing risks, the best part is, is threat modeling, which like, you know, depending on the company and how you're integrating or how you're creating generative AI, then you need to be making sure that you're kind of incorporating threat modeling into both like the traditional SDLC and also the machine learning.

[00:06:47] Machine learning development life cycle as well. It's really a requirement to kind of integrate and plug into those. When you're building taxonomies, you'll find that a lot of. You know, there is a lot of overlap, so you can apply, like security and layers, which is a great approach to, to these kind of new attack vectors.

[00:07:05] Blake: Let's talk about threat modeling for a second. With the caveat that, um, any large enterprise is listening, ads is not meant as a, uh, solution for your threat modeling, whatever his comments are here. Probably should be doing that yourself, but, just say, you know, laying out the stakes for AI.

[00:07:19] Say you are a large enterprise with, a thousand plus employees and you, you are so excited. You want to take advantage of everything AI tech has to offer. You want a chat bot to help visitors navigate your site. You want to boost your own workers productivity. Maybe even rolling out AI generated code to help speed up your own software development life cycle.

[00:07:37] How should you be thinking about threat modeling? And where, by extension, where does this OWASP top 10 for LLMs potentially fit in?

[00:07:44] Ads: Yeah, that's, that's a great call. So in terms of like threat modeling should be definitely from the design phase, in my opinion. I think really there's honestly something to be, you know, And I completely agree. NLP is a beautiful technology and it's fantastic. But a lot of times people are kind of get hazy eyes over the fact that what you're actually adding to the stack.

[00:08:06] So if you're doing something like you're integrating generative AI into like a, you know, a chatbot application, predominantly like A lot of your, a lot of your application still revolves around, or still kind of is, is, um, required for like traditional network security infrastructure controls, right? Like, what exactly are you adding?

[00:08:27] Is it just another layer to that, which you can then start to then apply? different kind of mitigations. So threat modeling from the design phase, integrating and working with other collaborators, getting developers involved with threat modeling techniques. Um, and they're also top 10, like I said, should be used predominantly as a, not a stop gap, potential vulnerabilities you should be aware of.

[00:08:50] And some of those may not even be relevant to you as well, right, which is the whole point of threat modeling to identify and, you know, quantify risks. 

[00:08:57] Blake: Maybe if you're not developing the AI model yourself, you don't care about model theft as much, I mean.

[00:09:01] Ads: Yeah, exactly. Right. And that's kind of the one thing, I guess for the OWASP project, which I've always thought about is that a lot of the vulnerabilities in there are, you know, we have like, for example, model theft. You know, if you just like plugged in like an open source model into your application, then you're not so much positive about that as you are.

[00:09:22] Like, I don't know if you. Got some kind of rag with, you know, internal database or something like that. So yeah, a lot of these things aren't, um, as contextual as I would like. 

[00:09:34] Blake: Right. To the earlier point, Ads is not speaking for every, what every organization should do. No, we, we have seen a, we have seen a few, a few hype cycles over the last couple of years. You know, we've had the blockchain boom, Web3, the metaverse. Love or hate AI, I think most would agree that we're in some stage of a hype cycle here.

[00:09:56] I'd be curious to hear what you think is perhaps overhyped about AI, and then the flip side, an angle to AI that maybe people should be considering or talking about a little more.

[00:10:07] Ads: So AI and security, I feel recently has only just become a thing. But that's mainly because I think like the industry shifted so quickly towards it. I think AI is a fantastic tool. Um, there are things to be said by that, which, you know, in terms of where we are with progress on it, like when we're talking about things like over reliance, that's definitely, like as an individual you should be aware of.

[00:10:32] But in terms of like generative AI and, you know, generate, a lot of this has been around for quite a while. It's with the, you know, like with the rapid acceleration, we've really seen an increase of efficiency, which has been remarkable, but like how I think my, like my second cell phone had predictive text in it, for example.

[00:10:50] In terms of where it's going, I'm very impressed by it and there'll be continuous. I think with that, it's kind of a double ended sword because I feel like the attack vector, um, and that spectrum just gets wider as all this kind of blows up.

[00:11:05] So, uh, just making sure you're aware of the, you know, you don't have to be an expert per se, but trying to keep up and understand of how that works and you can identify ways of how that could, you know, identify, introduce vulnerabilities, threats into your existing stack, I think really.

[00:11:22] Blake: And how did you find yourself working in AI and large language model and machine learning security?

[00:11:30] Ads: Um, so I was, I was incredibly lucky. I kind of fell on my feet on it, to be fair. I was kind of in between roles, from my last, uh, tenure. So I was, I was looking for a job and, I managed to find one at the current company I'm at and it was, one of the best decisions I've ever made.

[00:11:46] So I was honestly just incredibly lucky. When I got that opportunity, that's when I kind of got plugged into the, like the OWASP project, cause I felt like I wanted to kind of hone my skills and, and learn more. So just kind of chucked myself in the deep end, so to speak.

[00:12:03] Blake: Right, right. Well, you've certainly produced a lot it seems like in your, in your tenure with OWASP. And as I understand it, this, OWASP, you know, working group that the top 10 for LLM applications working group, released a new charter somewhat recently that expands the focus to include quote, influencing government policy and collaborating with international standards bodies to ensure the secure, safe, and ethical use of LLMs, and generative AI, end quote, interesting government shout there. I'd be curious to hear where those efforts stand and what you think broadly if governments are doing enough to secure the space.

[00:12:43] Ads: So, it's actually a great initiative. It's not one that I'm like predominantly part of, it's part of our working team, but there are a few individuals in the group who have made some great documentation and frameworks for aligning more with the privacy side of things. Like I mentioned earlier, the OWASP project itself has seen, like, great results. Adoption from, from lots of companies. And I think because of how quickly the landscape shifted, then it was only kind of being referenced elsewhere. And I feel we've kind of got into this, um, into this kind of scenario, which is great for collaboration, where, you know, Elasper aligning with like Mida and NIST and all these other frameworks.

[00:13:27] I've seen stuff from like the U. S. government, for example, as well, and I think each taxonomy is trying to align with each other. Because there is kind of such a broad spectrum here to cater for, so yeah, that's an ongoing effort, which has been really successful so far.

[00:13:45] Blake: I'd be curious to hear what like safe AI means to you. I feel like we hear these terms bandied around sometimes of like trustworthy AI, reliable, safe, secure, ethical, is there something that's just all encompassing? Like, AI needs to do this, or are those still up for debate as to where these, uh, should be defined? What constitutes a safe AI?

[00:14:10] Ads: Yeah, so that's actually a really good question. And it's not one that I really found myself in at the early phases of my career, but more so that I was starting to get involved in now per se. So when we talk about safe, we talk about going outside of like ethical guardrails. How is a model trained?

[00:14:29] What is the preamble? Is it being, you know, circumvented? Is it being engineered in a certain way? Are there certain prompts which are able to, you know, to bypass those guardrails? Again, all kind of contextual and the thing that gets really kind of tricky with like safe AI, I don't think there'll ever be anything like, like a global per se, because you've got different states, different countries, different backgrounds, every kind of, when you talk about an opinion or something like that, then they all are contextual and based on, based on who you're talking to.

[00:15:01] So it's a very interesting space and you've seen. But it's kind of like a big kind of shift going between where those like policy makers are, who has the authority to actually kind of implement those policies as well, right?

[00:15:15] Blake: Right. It's easy to imagine an AI system deployed in China that's rightly citing information about the Tiananmen Square protests would be considered unsafe, but the same, you know, AI, you'd want those answers to be clear cut in the, in the US, right? And actually be accurate and whatnot.

[00:15:29] And that would be considered totally safe. It is, I guess there is some, somewhat in the eye of the beholder there that, which makes sense. Almost threat modeling, but, attending to what you're, you know, Safety considerations are and what that, what that means. Now, the working group that you're part of has over a thousand members. Has requested data from outside contributors.

[00:15:47] I know a lot of organizations are thinking about this, wanting to weigh in and kind of shape the direction of some of these efforts. How do you handle that many people and organizations and are, are you using Gen AI to help wrangle some of the data? That would be interesting.

[00:16:01] Ads: I may be using, generate for some kind of meeting notes and stuff. I'm a big, so first of all, I'm a big fan of dog fooding. Um, when it comes to things like red teaming, using like risk reward and using generator AI to, you know, actually red team, another model. I, I'm, I'm totally fan of that.

[00:16:19] So, there's nothing really wacky going on in the project. I can completely assure you. Um, . We have a, we have a fantastic community. It is very large, like I say, um, the best way that we kind of delve it up or the way that we've approached it so far, which isn't necessarily the right way, but we, we have like a main channel where, you know, people can post thoughts, questions, anything of that matter.

[00:16:42] And then we have like sub channels. So we actually have like channels per vulnerability, if that makes sense, where people can really like ask. In depth questions from like people who are more interested or even experts in that field who may be relevant of interest where they can get answers or they can say, you know, hey, the current vulnerability is, I don't think this is right with it.

[00:17:05] So that's kind of the way that we've structured it. It's not necessarily the correct way. Um, I think the, the, the gift of feedback is, is, is awesome. Like it truly is a gift. So it's been great to get so much collaborative input. And all of this, like I said, we'll be using for data gathering and methodology to try and produce a v2.

[00:17:25] Blake: I was going to say for any listeners who might want to get involved. So this is just an open project. Anybody can just go to OWASP and start to contribute or what's the,

[00:17:33] Ads: yeah, yeah, absolutely. Totally open. I think sometimes there is a bit of confusion, so I will just shout it out. When you actually join the, so it's hosted on Slack, and Slack has a concept of workspaces, which are like tenants. if you actually join that, you don't actually need to have an OOS subscription.

[00:17:50] Not that I am, like, telling you to do that, but I'm telling you that you can still join. Um, because I know sometimes people, That's the impression that I got from a few of the people when I spoke to them. So, if you find the, if you just look for the, the project, you'd find our, like, OWASP page and our GitHub repo, and it's got links to the, to the Slack channels, and then we've got kind of like a true structure going.

[00:18:12] Blake: Yeah. And you mentioned in that tree structure, it kind of branches off into the specific vulnerabilities. Do you have like a personal favorite vulnerability and, you know, obviously nobody loves vulnerabilities, a favorite in quotes there, or one that, was the most fun to kind of research and put together.

[00:18:27] Ads: It's a good call. So we actually have other ones as well, which are to shout out, you know, we have like, uh, privacy and data and things like that as well. So, not just limited to vulnerabilities, but we have one for like, categories that, you know, other initiatives. We have, I think we have one for like, content and diagrams and stuff, for example.

[00:18:44] So, there's a lot of, there's a lot of interesting ways to get involved. Because I'm like a, what we call like a vulnerability expert, which means that I submitted, a few of the vulnerabilities that went into the top 10. So it was like, I'm like, naturally you get assigned to like manage those in terms of upkeep and, you know, adoption and stuff like that.

[00:19:04] So I definitely obviously, you know, monitor those data. You know, data model poisoning for me is always super interesting. I love learning and I feel like I learn a lot of stuff, every day, especially when it comes to like the machine learning kind of red teaming aspects. A lot of new concepts I've never even thought about or really like give me brain freeze.

[00:19:26] So we have one for prompt injection, right? Which is like everyone's favorite. And there's been some incredible conversations in there, and a lot of debates, and to be honest, I had to mute the Slack channel, because I had to do some work, so I got to a point where I just like, I just gotta mute guys, sorry, like, I I will dip in my toe every now and again to see what's cooking, but like, yeah, it's way too much.

[00:19:50] Blake: So, so prompt injection for just to kind of wrap our heads around that a little bit for that's like, that's like the, who was it that ex Twitter exec who went and tried to like buy a Chevy Tahoe for a dollar with a chatbot or something? Is that like an example of prompt injection? how can we think about that?

[00:20:05] Ads: I mean, there's, there's, there's multiple ways to, there's multiple ways of prompt injection. It's built on the concept called prompt engineering. We're effectively using unstructured natural language as inputs to a model to try and get it to, to do X, Y, Z function. So when we talk about. If you were to compare that in like a traditional AppSec background, we'd be talking about like structured input, like SQL injection or something like that, which makes like prompt injection so interesting because there are so many different abstracted layers, a way of like natural language, right?

[00:20:40] It's free flowing text. So there's so many different ways of circumventing. And performing prompt injection similar to the R with like, you know, SQL injection and things like that, but we, I kind of sometimes relate it to fuzzing almost, if you were to fuzz a bunch of SQL inputs, but, um, yeah, hopefully I explained that well.

[00:21:01] Blake: That? No, that, that makes sense. So, you know, it's like fuzzing, but with actual Language inputs, and you can kind of do whatever you want. I think there were some Google researchers, if I remember correctly, had the word poem, poem, poem input a bajillion times, and then all of a sudden the model starts spitting out all this potentially not ideal outputs.

[00:21:21] I wanted to circle back to data poisoning. Can you walk us through what that entails? Cause that sounded like that was another area of interest for you.

[00:21:29] Ads: Yeah, absolutely. So data poisoning. So large language models are trained on, on data sets, which are like huge corpus of data. So talking about Common Crawl is like a very, very common one where it's basically an archive of the internet as a, as a snapshot at a certain phase. I actually looked how big Common Crawl was the other day, but I, I don't want to quote it, but it's huge.

[00:21:53] And it, it, basically you, you train them, you train a model on like, let's say Common Crawl. Within that, you're effectively training a model based on data. So like content on web pages with that type. Forums, blogs, wikis, articles.

[00:22:09] Blake: com.

[00:22:10] Ads: Yes. Yeah, exactly. Right. As always been super intriguing when I first kind of like got involved and I was learning about data, I was like, why don't I just like throw the dark web at it?

[00:22:19] But yeah, so basically like data poisoning. If we're talking about like poisoning training data would be for a malicious actor to somehow like poison that data. And what we mean by poison is like, there's many different ways, but like a simple example is taint it. So if I, you know, I tell you, I, I somehow get some data and tell your model about how something very important is incorrect.

[00:22:45] Or it's, I'm giving you false information or bias and you're training the model on that. And he doesn't know else. Or else, then it might be returning that to the user. So if I'm giving you something like, you know, hateful speech, that could be returned to the client input.

[00:23:01] Blake: Hmm. Could it, maybe like faulty code that is like, suffers from

[00:23:06] Ads: yeah, faulty code, right? 100%.

[00:23:10] Blake: using the AI model to help them code, then it's introducing vulnerabilities that, that it thinks are correct. Interesting.

[00:23:17] Ads: So recently I, uh, I did a talk at the OWASP Toronto chapter. Shout out to my OWASP Toronto crew. 

[00:23:23] And yeah, you're absolutely right. We see a lot of common data sets, including things like GitHub. As great as there is some fantastic code on GitHub, we all know that there are a lot of, there is a lot of malware in there and a lot of reverse shells. So, yeah, absolutely. Like, you could, um, inherit, bad code from, you know, accidental training or just because you don't, the problem with data poisoning on training data with large corpus is that, well, it's great that you're getting, you know, a lot of data.

[00:23:54] The problem is you don't have ownership of it, so you can't, you can't verify the integrity or that you can't really trust it.

[00:24:01] AI is clearly a really fast moving space in the technology sector. Things are changing seemingly by the hour. are you keeping an eye on from a security standpoint in, let's say, the near to medium term, maybe the second half of the year, 2024 year and beyond?

[00:24:17] Ads: first of all, I'm always trying to bolster my adversarial arsenal. So I'm really trying to personally learn about. Uh, learn about AI and how it can enhance my work as a red teamer, as an application security engineer. So I've really been looking at ways of enhancing things like, even like threat modeling, there's a great contribution on actually like using GPT to actually perform like stride based threat modeling.

[00:24:42] So that's one thing that I'm really, uh, really involved in right now. I've really tried to accelerate and adopt, um, some kind of growth on my side, especially when it comes to the red teaming. Jelly to AI is a great tool, why not use it? In the context of, of producing benefits in terms of other things.

[00:24:58] So when I first got involved this, I heard about this thing called archive, which for those who don't know, it's a, it's, um, it's a research papers in, in forms of citations. I try to read as much as I would like of that by, I tried keeping a reading tabs, you know, to keep up to date with machine learning kind of threats and stuff.

[00:25:20] And that reading list is Insanely huge right now. So I have a lot of readings. So then I was like, oh, why don't I start asking Gen3RI for TLDRs and it's been pretty successful so far. As long as I trust the output, right? 

[00:25:34] Blake: Right, right. I guess you got to kind of take it with a grain of salt sometimes if it's, uh, if it's telling you certain things.

[00:25:39] Ads: I'm always a, I've always been a tinfoil hot kind of guy, so yeah. I'm always su suspicious of everything

[00:25:45] Blake: Well, I think you're in the right community for that. You're certainly not alone on the tinfoil hat department. And, you know, one thing, speaking of tinfoil hats, now, I'm really diverting a step. No, you talked about red teaming and potentially researching how to use AI. point about ethical AI, this is something that I've thought about recently, in the context of, you know, leveraging the power of AI to, for instance, find vulnerabilities or help find vulnerabilities.

[00:26:13] But then, you know, a lot of these AI models are going to be like, wait a minute, you're telling me to hack something. You're telling me to do something bad. That's outside my guardrails. I don't want to do that. Is that something that you've encountered? And how should companies be thinking about this? Who might want to use that?

[00:26:27] But they're doing it for good, but the AI model might not know that.

[00:26:30] Ads: Yeah, I think, I think, like I say, all contextual, choosing the right model as well, right, and the right data sets. So, what are you training on? what model are you using? How is it integrated into your, into your, like, into your organization? Sometimes as well, one thing that's really to be said is dog fooding, right?

[00:26:49] So, if you're doing things like checks in CICD, why not try utilize some kind of, um, use some kind of genitive AI where you can, you know, For basic code reviews, obviously it's not going to be like, you know, said and done, but at least it's a start, maintaining that corpus of data is like pivotal and making sure that you're getting quality and I guess, good breadth of data.

[00:27:13] So it's very much a toss up of both, um, in, in terms of like, in terms of trying to get a model to produce malware or something like that, regardless of what the guardrails are right now. It's possible. You just have to ask the right thing. There's a lot of, there's a lot of caveats to that.

[00:27:33] I'm just talking, you know, generally, because you've got things like hallucinations and stuff. Just being able to tune your problems, engineering skills, I'm pretty sure you'll see a bit more success.

[00:27:43] Blake: Interesting. Interesting. Yeah, and I guess that's, That then raises questions about the attacker side, because you mentioned earlier they're using these tools too, and it's not just for defense. Do you feel like AI is going to help attackers or defenders more on balance over the next year?

[00:27:58] Ads: OpenAI actually published a great article, um, probably quite a while ago now about how they saw a lot of, uh, basically it was like a threat intel report about usage of their API, across like their threat, uh, threat actors, like APTs and nation states. It was really insightful because it kind of gives you like a, It gives you like a, you know, just a bit of a hot snap of what they're seeing from different territories and different APTs that could utilizing their, their tools already.

[00:28:30] So if you're in that situation, you know, where you, you are a model provider and you are seeing that, then the best thing I can Suggest is use that and kind of look, you know, kind of flip on the other side and be like, is this something that we can use it for on our side, but then also you can use that to improve your detection.

[00:28:47] Right? So it can be like, kind of like a mouse game, but that's, I guess, with like traditional security controls. There's always going to be abstracts and there's always going to be a toss up between, you know, functionality and security. it's been used a lot, like, you know, in, in blue team, for example, for a long time, like, you know, IDS systems, which are, you know, IPS, IDS, which aren't the signature base, you got anomaly detection, web application firewall.

[00:29:13] So it has been utilized recently, but I think. In terms of like the adversarial, probably ahead, just based on like how incredible the technology is right now, if you're talking like image generation or, you know, text to speech, things like that, it's a lot of stuff around, um, deep fakes and things like that, which are extremely kind of intriguing, but also very.

[00:29:42] Blake: Yeah, we'll see probably a lot more of that as election season heats up here in the US. I know you're based in Canada, but I'm sure you'll be following along some of the

[00:29:50] Ads: 100 percent yeah, I will definitely have my popcorn. It's super interesting and it's very insightful. It just goes to show really, but yeah, the magnitude is crazy. I've seen a lot of stuff. I do actually have Twitter, but I saw a report on Twitter about how a text to image generation had created a government, U. S. government building image of it, like on fire.

[00:30:18] Inclining that it had been, that it had been attacked or whatever, um, that had actually then forced like a, like, um, a automatic Twitter, Twitter algorithm, which had then populated low downstream data, increased stock prices, reduced stock prices, and effectively TLDR, someone had been able to manipulate that and take advantage of that, knowing that like basically making this like fake image going viral would do that.

[00:30:46] Does that make sense?

[00:30:47] Blake: No, it does. It reminds me of, uh, an old hack of the Associated Press's Twitter account, I believe, about 10 years ago, where somebody tweeted from the AP account, explosion at the White House, and this was during Barack Obama's, I think, second term, and like Obama is injured, and stock prices plummeted and reacted to that.

[00:31:05] So, based on, in large part, on automated algorithms that aren't necessarily tuned enough to realize that there may have been a hack, um, I don't know. Really alarming to consider that with AI because that is going to get even more convincing because you can add imagery to it too and be like, here's a, here's the building, it's on fire.

[00:31:22] You can see it with your own two eyes and only a very trained AI, you know, maybe the columns are slightly off or maybe something, you know, it's a little wonky. The clouds aren't perfect. It's going to be hard to tell that that's fake.

[00:31:34] Ads: it's, it's incredible how realistic it is right now. So that's like, this is an example of how it's been used for like, you know, financial gain by like, you know, an APT, but it's not necessarily something that it's like targeted someone as well. It's just been able to basically identify a vulnerability in a system or a process.

[00:31:53] I mean, defcon's cancelled, everyone sells the tickets, you buy them really cheap, and then you have like, like millions of Defcon tickets, you know what I mean? It's

[00:32:01] Blake: You heard it here first. Defcon is canceled. That's it. It is. Yeah. Sad.

[00:32:08] Ads: but yeah, you get the idea. It's, uh, when you think about it and kind of step back, it's Kind of simple, not taking away the technicalities.

[00:32:16] It's like, it's very involved. I'm sure it was very well tacted, but yeah, it's, it's, it's crazy.

[00:32:24] Blake: No, really, really fascinating stuff. I've really enjoyed the conversation. A lot of food for thought, a lot of kind of theoretical emerging trends here that are becoming more and more practical and, uh, and potentially risky by the day. Now we do have one question that we ask all of our guests in the podcast, which is, what's something ads that we wouldn't know about you just by looking at your LinkedIn profile?

[00:32:46] Ads: Oh, that's a really good question. I guess something personal. So, I'm a huge kind of outdoors guy, so I really, really love my, my job. My career has been a passion for me. I, I've been really lucky. Like, I did fall in love with my job and I actually didn't have any education or anything in it. So I, I'm like forever grateful where I've ended up, but, when I am not working, I like to ride anything that is a board, surfboard, snowboard, you name it. I would say I'm a bit of a adrenaline junkie, and yeah, I honestly, I, any minute that I can get away from a, from a keyboard, and just love to be outdoors.

[00:33:25] Blake: You said you're based in Toronto? I'm trying to think of the surf spots near Toronto. It's a

[00:33:30] Ads: yeah, Toronto is not great for surfing. Um, but no,

[00:33:36] Blake: Artificial waves,

[00:33:37] Ads: recently, recently lived in Vancouver.

[00:33:39] Blake: That's, that's, now we're

[00:33:40] Ads: yeah, like, yeah, some, some hikes were, were pretty special and I got in trouble a lot of times,

[00:33:46] Blake: Oh boy, oh boy. Well, that, that definitely, uh, that sounds like a lot of fun. And, uh, thanks again for, for joining me on the, on the program, man.

[00:33:54] Ads: Yeah, thanks so much for having me. It's been a pleasure and I, uh, I really appreciate, um, all the great content you guys put out.