WE'RE IN!

Mark Kuhr on AI pentesting and the Synack Red Team

Episode Summary

In this episode of WE’RE IN!, Mark explains how he recruited a community of global top hackers to join the burgeoning Synack Red Team – and what’s at stake as AI capabilities ramp up for attackers and defenders alike.

Episode Notes

Dr. Mark Kuhr, a former National Security Agency employee, faced a host of challenges when he co-founded Synack with CEO Jay Kaplan in 2013. As CTO for the security testing company, Mark has led Synack through dramatic growth while working to shift the mindset of some cybersecurity practitioners. For instance, the Synack platform, featuring access to security researchers around the globe, initially faced skepticism—a group of essentially strangers pentesting enterprise networks? Not the most convincing argument for CISOs. But through a trust-but-verify approach, Synack’s take on security testing has risen to prominence in the industry. 

In this episode of WE’RE IN!, Mark explains how he recruited a community of global top hackers to join the burgeoning Synack Red Team – and what’s at stake as AI capabilities ramp up for attackers and defenders alike. 

Listen to hear more about:

Episode Transcription

[00:00:00] Blake: Well, Mark, thanks so much for joining me on the We're In podcast here. I really appreciate the time.

[00:00:03] Mark: Yeah. Glad to be here. Thanks a lot for having me, Blake.

[00:00:06] Blake: Of course, of course. So, so as Synack co founder alongside Jay Kaplan, you were really in the, in the front lines of founder types who actually came from the government space and really brought a lot of their experience to bear. I'd be curious with your background at the National Security Agency, how did that inform your early experience standing up Synack?

[00:00:28] Mark: Yeah. So Jay and I were at the National Security Agency doing offensive cyber stuff. We can't really go into much detail beyond, beyond that. But one of the things you realize doing that work is that it's incredibly simple to get into large enterprise networks because people don't patch. They don't follow the basics.

[00:00:46] you look at how they're defending their networks and the types of penetration tests that they're doing, and it's just woefully insufficient compared to what the adversary is able to bring to bear against those, those targets. so, you know, Synack is founded to really solve that problem at scale and give you an offensive force for good.

[00:01:06] it's a way to, to harness an offensive, uh, platform that, We'll highlight vulnerabilities ahead of the adversary. And we do that, of course, with our, Red team. And we go out and recruit the best hackers from all over the world, bring them onto the platform and incentivize them to find these exploitable vulnerabilities.

[00:01:23] And the goal is to accelerate that discovery time to be as, as fast as humanly possible. Uh, and of course, they'd be aided by, you know, automation and AI as well.

[00:01:34] Blake: Now, back in 2013, so Synack launches, you started this exciting company, have a great, you know, early investment, you have, uh, then you need to attract these hackers. This Synack Red team you just alluded to, I, I would imagine that that presented something of a, uh, a chicken and egg problem. I mean, you, you obviously want to have the hackers, these, you know, ethical security researchers vetted ready to go to, to help customers, but then how are you going to attract them when you might not have as many customers. And how are you going to attract customers? If you don't, you know, you can see it going back and forth. How did you deal with that?

[00:02:04] Mark: Yeah, very much so. we landed a few customers initially without a single researcher on the platform, and then we scrambled at the last minute to find a few researchers. And we actually went out on what at the time was Elance and ODesk, and we interviewed a variety of penetration testers and security researchers and found, found a few there that we were able to recruit.

[00:02:24] We also hired, some people to scour the internet and find the top bug bounty, uh, players, and we did LinkedIn outreach. We did email, we did a variety of social media outreach. We, we basically, marketed to these researchers that we had an opportunity available, and so it quickly became the case that we had more researchers.

[00:02:43] Then work for them to, uh, to, to do. And so that, that created the other problems. How do you get enough work to satisfy the community of researchers that you have? So that's something we're always evaluating and, and, you know, we're in 11 years of running this business now. And we're always looking at that balance between the researcher community size and the number of customers, and particularly the number of assets that are under management, whether that be, you know, host infrastructure or applications that need testing. And we're always finding that that balance. Um, and if we need to recruit more researchers, we'll do that. And if we need to, you know, dial, dial up activity on certain assessments and tests, we'll do that too.

[00:03:23] Blake: I imagine in those days, too, it was, the conversation was very different. I mean, I remember even in my early days covering cybersecurity in 2014, the notion of an ethical hacker was still somewhat new. And, uh, you know, companies were wary of this, like, letting researchers just, you know, try to find these really sensitive vulnerabilities. Now, I feel like it's a, it's a pretty accepted facet of a, of a solid penetration testing red teaming program that you want to, that that's a, that's a desirable thing, but you, uh, you were probably on the front lines. What was dealing with that culture, that sort of more conservative corporate culture and looking at what Synack brought to bear.

[00:03:56] Mark: It absolutely is a, is a culture shift over the last decade, blank. I mean, when we initially pitched this idea, I recall many investors telling us that that's a crazy idea. You're never going to be able to do that. Who would trust these researchers with, with testing their enterprise assets, and their customers data?

[00:04:14] It was very much a negative feeling on, on the sentiment of involving a crowd. Um, and, and so what we had to overcome was a way to build a technology platform that allowed the crowd to be harnessed, but also provide the insights and the analytics. So people started to trust what was happening with that crowd testing.

[00:04:33] And so we, early on, we introduced, uh, what we call launch point. And LaunchPoint allows us to capture the traffic and do analytics around what researchers are testing and how they're testing and, and making sure that, that people are staying inside the rules of engagement as the tests are going on. Uh, that, that trust, but verify approach has been key to our growth, um, and key to, to trusting, uh, these resources.

[00:04:57] Uh, the researchers, you know, also come from a variety of backgrounds, but, you know, to get that trust level up, we also implemented background checks and interviews and, and really try to. Get to know the researchers as best we can. Um, and you know, working with large enterprises, it does require you build that trust as a trusted process to do testing for someone.

[00:05:19] And that trust once eroded, it's hard to get back. So we try to try to bring that trust factor forward from the initial conversation all the way through the testing process, um, with our researchers. And, and it's ingrained in that culture of, of the SYNAC Red team to, you know, Always stay within the confines of that rules of engagement, um, so that we maintain that trust with the customers.

[00:05:40] Blake: do you see the Synack red team going next?

[00:05:43] Mark: Uh, Synack RED team is growing into a variety of, uh, variety of areas. I mean, lately it's been on the AI front testing, uh, LLM based applications. You know, the, the boon in the last year with the chat GPT's launches, every, every large company and even small companies are launching. Um, chat GPT based apps or LLM based apps with a lot of chat interfaces and the chat interfaces is not as simple as, um, something that's just a well defined API interface.

[00:06:12] So, you know, what we're finding is that researchers are testing these LLM apps and finding ways around. The prompts, uh, and, and finding ways around the protections that, that people are putting in their apps. And essentially the protections are, you know, prompt engineering to make sure that the prompts aren't, uh, are that the, the LLMs are not giving up information that they shouldn't be giving up.

[00:06:35] Um, so that's a, it's a really interesting cat and mouse game that's going on right now with the developers of the applications and the researchers. What we're finding is, you know, there's a lot of edge cases that are not being handled yet as people are very new at developing applications around, uh, these, these AI ecosystems.

[00:06:54] Blake: It definitely seems like a greenfield area for, I remember reading there was some research, I think some Google researchers released with like prompting one of these large language models with the word poem over and over again. And then all of a sudden it yields data that it shouldn't. And I'm just like, how does that happen? Like, what is going on here?

[00:07:11] Mark: and we've got a, we've got a myriad of those cases that are similar to that, uh, that I, that I can't go into here, but it's, there's a lot of prompt manipulation and prompt escapes, uh, that are happening on these chat interfaces. Um, and it's. It's really interesting stuff because the underlying models are also very different. Um, so you've got a variety of models from Anthropic, OpenAPI, OpenAI, um, and Google's Gemini that all behave somewhat differently under the hood. Um, and, and then there's the open source models as well. You've got LOM out there. You've got, of course, Mistral and a variety of, of models on the Hugging Face ecosystem too.

[00:07:49] So it's, it's a really interesting time as people, you know, build, build. Uh, agents, uh, that are chat interfaces, but also interface to multiple models on the back end as well. So if you use a framework like LangChain to develop out your AI agent, you can, you can have it include a variety of models. Uh, you can have it go down different paths based upon the prompts.

[00:08:11] Um, it's, it's really an interesting time, uh, but the complexity is such that, uh, we are finding ways around, uh, the protections in some cases.

[00:08:20] Blake: Speaking of AI agents, do you think that AI will grow powerful enough to carry out some effective red teaming pen testing activities. And what timeline would that play out? And what might that look like?

[00:08:36] Mark: Yeah, it's getting to that point. I, I'd say we're, we're probably already there. And you know, if you're not using this for defense, you should be, you should be using AI for defense, especially because the offensive side of the house is going to start cranking up the use of AI for offensive operations. And that could be. You know, selecting the best target, going through a lot of data, on an organization's assets and finding the best target and the optimal, you know, entry points, the optimal application of, of exploits, um, the optimal way to, to, to come at an organization and, and exploit it quickly. And then the AI agents, you know, can also.

[00:09:16] Get to the point of using tools. So they're also using tools to throw exploits, to do recon, um, and, and gain access. And then there, if you have a multi agent framework, you can even start to get these agents talking to each other, um, and iterating on their techniques for gaining access. Uh, so it's, it's certainly an exciting, uh, research angle.

[00:09:36] And if you read the latest research on this, there have been papers published, um, on, you know, agents, AI agents, finding vulnerabilities and breaking into applications. This is just the start. , but I have a feeling, you know, we're going to see this explode in the next couple of years.

[00:09:51] Blake: I wonder too, what can be accomplished with like a Centaur chess type model? I don't know if you've heard of Centaur chess, where you have the top chess players pair up with the AI engines, which of course have been able to beat humans for many years now. Uh, and it makes actually, Basically the strongest chess player out there because you have the long term strategic vision of the human paired with the tactical genius of the AI.

[00:10:14] Do you see that something similar evolving in the pen testing space where you have SYNAC Red Team members, for instance, leveraging AI to become more effective?

[00:10:22] Mark: Absolutely. Absolutely. We're seeing that we're seeing people use, use that AI agent. It's, I call it human in the loop AI. Um, and, and that lets you get the best of both worlds as you're, as you're talking about, Blake, um, but yeah, that's, that's super important and that's where that safety mechanism is going to come in as well, human in the loops, uh, making those critical decisions around, should I go forward and, and throw a certain exploit, should I do this or that, uh, making ethical decisions, if you will.

[00:10:51] But yeah, this is, this is certainly an evolving space. And I think you're, you're going to see a lot of, uh, offensive tech come, come forward very quickly, uh, as it, as it makes it more efficient to carry out attacks at scale, if you're just automating it. And so we've, we've got a long history of the usage of worms, uh, and automated worms that propagate themselves and exploit, and spread, right.

[00:11:14] All on their own. Uh, that coupled with AI though, can yield a very powerful weapon in the cyber domain that that is more intelligent than what we've seen in the past and makes decisions that are on the fly, , very, very smart, whereas in the past with these worms, they were very dumb decisions. They would just, you know, they could access that scan, they'd jump to the next hop if it was there, um, are going to get much, much smarter, especially as we look at, you know, how agent, uh, and AI embedding. Uh, tech comes along so you don't have to carry the whole model with you and you can start to embed pieces and logical, logical pieces with it so that your, your malware is bringing along, uh, all that intelligence along with it.

[00:11:58] So you've essentially got what is, what is pretty close, going to be pretty close to a human operator, uh, along the way, but at scale, uh, probably doing more, more effective exploitation than we've seen in the past.

[00:12:09] Blake: So as, as attackers leverage AI to carry out some of their. Breaches, you know, you're CTO of Synack you're, the buck falls with you for the defense of a lot of these networks as well. How do you prepare a platform, you know, against these threats, especially a platform that's designed specifically for essentially hackers to operate on?

[00:12:30] So not only do you have, uh, I know, I know SYNAC Red Team also constantly probes our own network for vulnerabilities. So.

[00:12:36] Mark: Yeah, I think there's, there's two sides of this for us, right? So we want to enable those offensive workflows. We're, we're leveraging a crowd of offensive security researchers that are out there to find vulnerabilities ahead of the adversaries for our customers. And we want to enable them. to operate at scale using AI technology, where they need to, but do it in a safe manner. So we're looking at how that can be applied and what frameworks and toolkits we can provide that allow them to build upon that and operate within our ecosystem. So we can leverage AI in a safe manner to accelerate that discovery time.

[00:13:10] And then on the flip side, on the defensive side of just, you know, you're, you're going to see AI embedded into all the SIMs. Um, so your, your security, uh, incident and event monitoring systems are going to be embedded, uh, with AI technology that allows them to mimic what a SOC analyst would do.

[00:13:28] So when an alert comes in, or they see a certain pattern in the logs, the AI is going to take certain actions for them. And, and so what you're seeing is that the acceleration of the pace, um, and that is going to put pressure. On to organizations to patch faster, mitigate faster. Uh, just everything in the sock is going to accelerate.

[00:13:48] Um, and so it'll, it'll, it'll eventually get to a place where, um, AI will be taking the first action on most things and then escalating to a human, if needed.

[00:13:58] Blake: So to switch gears here for a second, Synack recently got FedRAMP approval at the moderate authorized status. Now, for those unfamiliar with the FedRAMP process, it might not sound like such a big deal, FedRAMP, moderate authorized, but it really is quite intensive. And I would be interested to hear your frontline thoughts about the work that went into getting that approval. What had to happen?

[00:14:23] Mark: Yeah, well, the FedRAMP process is the process by which companies go through to get their software accredited for deployment at certain levels within the U. S. government. And that is designed to have a standard level of care, you know, from a security perspective. So there's a lot of checklists, there's a lot of compliance questions that are asked, there's a lot of, Documentation around your internal processes, uh, for software development changes and your security ecosystem, you know, how are you keeping your products secure?

[00:14:56] And so, you know, it's, it's a complicated process that are actually going through a modernization to make it more streamlined, but the current process is highly manual, a lot of paperwork, um, and I'd say for any company going through that, you've really got to have experts on staff that know that inside and out.

[00:15:14] And can work with the government to get you accredited. It's, it's quite a lengthy process. As you mentioned, it took, it took 2 years. Um, and it costs, you know, I'd say easily over a million dollars. Um, and, and so, you know, I think there's 2 sides to that coin. 1, it's effective. Is it effective, you know, to put companies through this so they can work with the government?

[00:15:34] Two, is it too burdensome and too costly for small business? Um, and so I think there's a, there's a happy medium there between that. We could probably spend an hour on FedRAMP modernization, but there does need to be some way for for companies small who can't afford that to work with the U. S. government and do that in a safe way.

[00:15:53] And I think that that that's going to evolve and change over the next couple of years. Uh, but for any company right now going through the process, I would just say, uh, strap in, get prepared, hire the right experts, get the right audit firm. And, uh, you know, make sure your ducks are in a row.

[00:16:09] Blake: And why did Synack decide to run through the FedRAMP gauntlet?

[00:16:13] Mark: We were asked by a customer to go through it. We're deployed in about 25 different federal agencies today. We do penetration testing at scale for a lot of, a lot of government agencies. And one of our customers said, Hey, it'd be great if you guys could get fed ramped.

[00:16:26] So that you're more compliant with the policies that are being put down from the administration to use more FedRAMP certified products. And if you go to the FedRAMP Marketplace, you'll see it's a very small list of products that are actually certified. And that's because the process is very arduous and it's very difficult to get accredited and maintain that accreditation.

[00:16:46] Part of this modernization process, though, is going to be, you know, how do we do this in a more automated fashion? How do we use the right frameworks deployed to the right clouds to have the right compliance regime? Because if you're in a public cloud environment, you're going to end up inheriting a lot of controls from that public cloud, and you're going to share that responsibility of meeting your security obligations with your public cloud provider.

[00:17:08] So it's really important to choose the right tools. The right partner on the journey. Um, so we, we should probably do a breakout on that whole, that whole federative topic.

[00:17:17] Blake: More to come, sequel on the FedRAMP situation here coming soon to a podcast near you. I did want to share a quote from Miriam Grace McIntyre. She's Executive Director of the U. S. National Counterintelligence Security Center. And at a conference in Texas recently, she said, quote, The People's Republic of China today represents the broadest, most active, and persistent cyber espionage threat to the U. S. The PRC also remains the top threat to U. S. technology competitiveness. It was a bit surprising to me given all the buzz around the threat posed by Russia and, you know, the hybrid war going on and Putin's invasion of Ukraine. What do you see as the biggest nation state cyber threat facing U. S. interests right now?

[00:18:02] Mark: I mean, they're, they're both substantial. I think Russia and China both are substantial, you know, adversaries in the cyberspace. China, you know, and Russia both have unique capabilities. But I think the important thing here is that they're in cyber, you have to pre position a lot of your, uh, capabilities, right?

[00:18:22] So if you were going to. You know, invade a country, you know, and you wanted to shut off their internet. You can't just do that. You can't, you're not going to just, just shut it off, uh, um, without some pre positioning activities, right? So, you know, what you're seeing here from China and a lot of the alerts is that they're pre positioning, uh, capabilities, malware, if you will, into our networks, uh, at scale, uh, finding ways to hide in the noise.

[00:18:49] Um, and, and they're prepared to take action if, if they need to. Um, so I wouldn't say that, you know, there's an immediate, immediate threat. Uh, but there's certainly pre positioning and being prepared to take actions against, you know, critical infrastructure, whether it be power grids, water treatment facilities, those kinds of things.

[00:19:08] And, and they're not the only actor that's doing that, right. There's a lot of actors in this space. Um, but yeah, we're seeing a lot of those, Those activities. And it's, it's important that, you know, companies start to look at, you know, how they're, how they're securing all this infrastructure from the ground up, you know, whether it be supply chain attacks on the hardware, the software, uh, or it be, you know, uh, running the latest, latest software from the infrastructure, your routers, your firewalls, all the way up to your apps, adversaries are looking, they're probing all the time.

[00:19:38] And, and China is no different. They have a very, both China and Russia have very professional, uh, offensive cyber units, very similar to, um, you know, the United States. And there are people that are paid to to break into these, these organizations. And, um, that's their full time job. And I think a lot of people underestimate the power of that when you're, you're putting thousands of people in employ to do this every day, that's, that's their mission.

[00:20:06] There's dedicated resources at the nation state level of many countries out around the world now to do this type of work. and that raises the stakes, uh, for, for commercial companies, uh, who have to have limited budgets compared to nation states and limited resources. And so, you know, I think the question remains, like, how do you keep up with a nation state?

[00:20:25] And I'm not sure that that's actually possible, but you can make the best effort. And the important thing there is to do, do, do the basics. But also, do the basics really well and really fast. And that's, that's what we're seeing. These targets of opportunity are popping up and it allows the adversary to get in.

[00:20:42] For example, you know, you've got the Avanti, uh, firewall VPN issue where, you know, you've got CISA releasing guidance that this should all be, urgently. They didn't patch it fast enough themselves and they became a victim. So I think there's, there's some lessons learned there on the speed. Uh, and I think that's, that's really the takeaway, uh, for me as, as we introduce more AI into this whole cyber realm, is that the speed and the pace of operations is going to increase dramatically.

[00:21:14] Blake: It's interesting too that CISA, you mentioned the Cybersecurity and Infrastructure Security Agency, they have also been sounding the alarms recently about this particular PRC sponsored cyber espionage group, Volt Typhoon, doing exactly what you just described about pre positioning, preparation for the battlefield, trying to lurk in critical infrastructure networks, getting ready to pivot to some of those operational assets in the event of You know, for whatever reason, maybe China invades Taiwan, maybe the U. S. relations sour, uh, to such an extent that that needs to be triggered. How do, you know, you alluded to this somewhat, but how do critical infrastructure organizations, how can they respond to that?

[00:21:53] Mark: There's a lot of people focusing on, you know, the, uh, collective defense and, and IOT space. So you've got, you've got to share, you've got a lot of sharing agreements going on between the government and the private sector. I think that's a good thing. As much as we can get out to the commercial sector saying, Hey, this, these are the threats we're seeing. These are what the techniques and the trade craft of the actors. That's going to be critically important. So government sharing with. What they can with the ISACs and the different sharing groups out there. That's that public private partnership is going to be critically important because they're going to see things from, from the global Intel system that commercial companies don't have a chance to see.

[00:22:34] Beyond that, you know, I think hygiene, it's comes down to hygiene, you know, patching, getting your, getting your, um, infrastructure as tight as possible and you're monitoring really tuned up, um, and introducing, you know, And accelerating faster patch cycles. Um, for example, like I was just talking to a friend who works at a big bank.

[00:22:55] This week, and, you know, we were talking about some of the ways that, uh, the adversaries are prepositioning in some of the infrastructure pieces, the routers, the firewalls, um, and it, and it comes up that, you know, some of these, some of these devices, they can't be rebooted easily without an outage and they can't be patched, uh, easily either, uh, without risk of downtime.

[00:23:19] And this becomes a substantial blocker to, to people patching and staying current on the latest software across that critical infrastructure. I mean, if you're, if you're the IT manager for, you know, a certain set of the data center, do you want to take the risk that you cause an outage when you reboot a firewall to apply a patch?

[00:23:35] How do you verify that you're running the right firmware version? How do you verify that that firmware hasn't been modified? How do you verify that, that, you know, all of this traffic that's flowing through the network that looks innocuous is actually innocuous, um, when the adversaries are known to hide in the noise, uh, it's a very hard problem.

[00:23:53] This is, this is very tricky and this is why, you know, that we have the problems we have of like just out of date software, you know, a lot of, a lot of that, uh, but also a lot of trepidation. With changing the way that we operate these networks, because we're trying to avoid downtime and these systems that manage banking, water, uh, power, you know, it's, it's a hard problem in IOT, uh, cause you end up with, with a lot of critical infrastructure that is not designed to be, uh, rapidly patched, um, and not designed to be patched without downtime.

[00:24:27] Um, so we have a, we have a challenge.

[00:24:30] Blake: And we saw how costly an outage can be a couple years back with the Colonial Pipeline cyber incident. And, you know, in that case, obviously it was a much noisier threat actor, uh, you know, going in there with some of the ransomware angles, but it was, uh, I mean, gosh, the amount of contracts that just went belly up from that with, with oil and gas deliveries and well, particularly fuel.

[00:24:49] Jet fuel and gasoline deliveries that were just all along the East coast, I think it would supplied something like 50% of the East coast fuel needs, which is just absolutely, uh, ouchie on a, on a, on a financial dashboard there. Um, not to harp on China too much, but, uh, they are investing billions of dollars in Beijing into, you know, quantum research and development.

[00:25:10] And right now China holds twice as many quantum related patents to the extent that that is a, is a, is a good metric, maybe not, but as the US and that. That, you know, includes quite a bit of military applications, but also has some security implications. How are you thinking about the quantum race, and how big of a deal is it really for cybersecurity?

[00:25:30] Mark: Yeah, I think the quantum, quantum threat comes in two, two areas. One, people are thinking that, you know, quantum computing can help with decryption, right? So if we can, we can have more compute power, we can record data now and decrypt it later. Right. So there's a lot of people moving to quantum safe cryptography. And I think that's probably generally a good move. Um, you need to, you need to move to, you know, safer algorithms. And NIST actually has worked with NSA to come up with a couple that they can, they can broadcast and you can use. And that, so there is a path to using quantum safe crypto already. And a lot of large organizations are moving to that.

[00:26:10] And I think it's a good move for everybody. Um, You know, in general, though, you know, I think the bigger issue is quantum cryptography is going to enable compute on a far more capable level. Um, and if you couple the quantum computing, uh, with AI, you end up with very powerful AI agents that can handle a lot more, uh, processing than we're doing now.

[00:26:33] And so you're going to end up with. I think AGI, Artificial General Intelligence, on top of quantum computing, uh, in the next 10 years, that, that could change the landscape of, of how, uh, capable the AI agents actually can be in real world scenarios. Um, I think we can also see that, you know, the basic GPT 4 models today are certainly, certainly smarter than any human given the data access that they have, and they can, they can conjure up.

[00:27:03] But if you couple that with the ability to reason, At a large scale and multiple agents conversing and quantum computing, uh, to, to help process all this data and analyze, you know, questions that come up, you know, to do deep analysis on data sets. Um, that's going to be really powerful and I think the military applications of it are really interesting too.

[00:27:25] I mean, you're already seeing some of this deployed. Uh, today with, with, uh, the NGA and some of the spatial, spatial data that we have. So, for example, from a targeting perspective, if you have satellites, you have spy planes, you have different things taking imagery over, uh, over a country, you know, it's, it's, it used to take human analysts with, with, Magnifying glasses to identify tanks and planes and count them and all this stuff.

[00:27:50] And all that now has been, has been automated. So now all that goes to image recognition software to pick out the various pieces. And the next leg of that is, you know, after you select the targets, you're going to, you're going to send them off to, to some operator and he's going to click, accept for a kill mission.

[00:28:09] Um, and you're going to be able to tie that right into the command and control systems of the drones. And say, Hey, you know, we've got identified, you know, 10 targets and there's going to be some, some military officer is going to click, accept, accept, accept on all those targets. And the drones will go action it all through the command and control system at, at, you know, really, really fast pace, really fast speeds.

[00:28:31] This is the next stage and it's going to be to the point where I think with China and everybody thinking about what happens with Taiwan. You're going to get to the point where the battle space is such, such a fast paced battle space with AI driving target selection, target identification, prioritization, weapon selection, um, you know, identification of, of who's the best drone in position to, to take the strike.

[00:28:56] You're going to get to the point where that battle space is so fast paced and so dangerous. where the action is happening, that humans can't operate in that space. So you're going to push the human operators further out. So you're going to end up with a bunch of standoff, you know, human planes driving drones.

[00:29:15] And if you look at, look at, look at the way this plays out, that pace of operations is going to accelerate dramatically. Just like I was talking about in cyberspace, this is going to happen in warfare as well. And then combining the effects. Of not only the kinetic warfare, but also the cyber warfare, you end up with a very, very strange place where you're attacking targets, not only that are infrastructure targets, but military targets and also space assets at the same time to take out the communications that are driving all this intelligence at the same time.

[00:29:48] So it's going to be a very interesting time. I do see a place where, uh, most of this is, is automated to a high degree, except for, you know, the point of clicking accept and taking a shot. Uh, I think we'll reserve that for human operators and human decision makers. However, the AI is going to get really good at recommending targets and priorities.

[00:30:07] And so I think we're, we're not far off from models that are, that are pretty high confidence on things that we should do. And if the pace accelerates to the point where we can't tolerate that human in the loop. I can see us pushing the envelope in certain cases to automate the target selection to target kill decisions, uh, and make that loop really tight.

[00:30:30] Blake: May you live in interesting times, as the old saying goes, right? And there are a number of things that both fascinate and concern me about what you just said, not least of which is the notion of already we're struggling with things. AI black box scenarios where you don't necessarily, you can't necessarily audit or probe why a machine did what it did, or at least you can't with as much effectiveness as you may have once been able to.

[00:30:53] When you're talking about automating that entire chain and throwing quantum into the mix, I mean, that's just, the black box problem becomes seemingly insurmountable because no human is going to be able to understand an AI operating on that kind of level.

[00:31:07] Mark: Well, well, yeah, we're going to need right now, there's way too much non deterministic flow, uh, call flows in these agents, right? So if you, you ask ChatGPT a question, you know, and then you ask it, Another question, a few minutes later, that's the same content mostly, it could take a wildly different path, right?

[00:31:28] And it could give you a very different answer. You end up with hallucinations as well. Um, so there's a lot of randomness that comes out of these models. Um, that's going to have to get teased out, right? So you're going to have to eliminate that. You're going to have to put guardrails around it. Either with prompts or challenges or develop new models that are more, that have the controls baked into it.

[00:31:49] So I can see us developing out, a lot more, uh, controls and wrappers around these, these core models that make it a deterministic outcome. Uh, very predictable, something we can test. Um, and then I think you put guardrails around, you know, obviously, kinetic action. Any, anything involving human lives, you're going to, you're going to want to keep humans in the loop for a long period of time.

[00:32:12] Um, but, you know, if the pace, if the pace accelerates, I can see us setting our, you know, cyber defenses on auto autopilot, for example, like, Hey, you know, Sim, go ahead. Uh, take whatever action you think is best and then let me know how it goes, you know, I, I think there's, we're getting close to that point with, uh, some of the co pilot operations that are out there.

[00:32:34] And then I think in the battle space, that's going to be a fast evolving place. Defense tech is getting a lot of investment right now. Um, and, you know, the pace at which drones and drone swarms and, and all the signals need to be correlated, combined, synthesized, and actioned, uh, especially if you think about the, the speed of, of hypersonic weapons going at Mach 20, um, this is, this is going to have to be automated pretty quick and humans will not be able to stay in the loop forever.

[00:33:04] So it'll be very important for us to design software systems that are AI based, that are predictable, that stay within the rules, um, and, and do what we, uh, do what we want them to do, not, not unilaterally make their own decisions.

[00:33:16] Blake: Well, I think we learned we could talk about FedRAMP or AI at some considerable length here, but I do want to move on to the final question that we ask of all of our podcast guests, which is what's something we wouldn't know about you by looking at your LinkedIn profile?

[00:33:33] Mark: Well, my LinkedIn profile is pretty sparse, Blake, um, but

[00:33:37] Blake: Then you got, you got a lot of options.

[00:33:39] Mark: I don't have brewmaster on there. I do brew my own beer. And, uh, I, I make a mean, uh, Belgian, Belgian blonde with some, with some lavender scents. 

[00:33:50] Blake: Lavender, Belgian blonde ale. That sounds quite tasty. I think I might need to hit you up for some of those, uh, homebrews at some point here.

[00:34:00] Mark: Yeah. I'll have to do a batch and can it. We'll send it out.

[00:34:03] Blake: Yeah, that is, I'm sold, sold. Well, thanks so much for joining me on the podcast here. Really fascinating discussion, I think, certainly a lot, uh, a lot going on in your world, so appreciate the time as well.

[00:34:16] Mark: Thanks a lot, Blake. Appreciate it.