The Impact Team Gulf

LLMs and AI Governance

The Impact Team Gulf Season 10 Episode 4

Today we're joined by Darren Wray and Robert Westmacott as we dive into one of the most pressing topics in modern technology: large language models. In this episode, we explore how organizations are navigating the delicate balance between the transformational benefits of LLMs and the risks they introduce.

SPEAKER_00:

Welcome to another episode of the Impact Team Gulf podcast. I'm Mark Rothwell Brooks. Today I'm joined by Darren Wray and Robert Westmacott as we dive into one of the most pressing topics in modern technology: large language models. They offer extraordinary potential, from unlocking major productivity gains for your teams to giving organizations a real competitive edge. They're becoming powerful knowledge augmentation tools across just about every industry you can think of. But with that promise comes a serious challenge: security. In this episode, we explore how organizations are navigating the delicate balance between the transformational benefits of LLMs and the risks they introduce. Enjoy. So Darren, Robert, welcome. Let's start broad. Employees love it, boards want the productivity, but everyone's terrified of the data leak horror story. So let's start off by asking: where are we right now in the adoption versus risk cycle?

SPEAKER_02:

That's such a great question. The way I think about this, and I've said this to several people, Rob and I have spoken in these terms, is by comparing the adoption of AI with the take-up of the internet in the mid-to-late 90s. If you look at it that way, we're still at about 1996 or 1997 in internet terms. At that point, most organizations were starting to look at the impact of the internet on them. E-commerce wasn't such a big thing yet, but data breaches were occurring, perhaps only small ones, but they were starting to occur. People were suddenly realizing that having all our computers connected and accessible from the internet was introducing a whole heap of new risk factors we hadn't considered. And that's very much where I think we are right now. People are suddenly realizing: hold on, there are risk factors here, data is getting breached, and organizations are oversharing with their AI, with Gen AI and other AIs as well, but particularly with Gen AI. That's where I'd say we are.

SPEAKER_00:

Okay, interesting. It's both an attacker and a defender, I think. I remember having this conversation with a load of CISOs in the insurance space about a year ago, where they started to look at how it could be introduced from both an attack and a defence perspective. Almost a year on, as the year is coming to a close, what emerging trends have you seen in 2025, and where do you think it's going in 2026?

SPEAKER_02:

Yeah, 2025 has definitely seen an increase in the use of AI as an attacker and as a broadening attack surface too; both aspects have increased. On the attacker side, let's talk about that for a minute. Hackers, ne'er-do-wells, bad actors, they're called many different things, but these are individuals whose motivations are contrary to your organization's motivations, to put it as simply as that. Those who want to stop you, steal your data, or in some way subvert your business are using Gen AI in a number of different ways. Vibe coding is a term that's probably past its peak now, but hackers and bad actors are using vibe-coding approaches to develop attacks that would previously have taken much longer. And there have been a number of cases of using CVE reports, the mechanism for describing a bug in a system, as a brief to a Gen AI system, which then creates an attack on an organization. We've seen that done several times this year, and those are just the ones that have come into the public realm. So that's the attack side. The attack surface I've really touched on already: the more data that goes into Gen AI systems, even despite tokenization, the greater the chance of data being subverted or taken away as part of pre-loading, and we've seen some of that as well. On the defence side, defence is reactive; it's always playing catch-up. That's why the approach Rob and I have taken in what we're doing is proactive rather than reactive: you're actively doing something before it becomes a problem rather than trying to respond. So on the defensive side, what we're seeing is organizations constantly trying to play catch-up, and new defensive tools being developed.

SPEAKER_01:

I think that's right. I was just going to say it's definitely a race on both sides. Defenders are using AI to triage alerts, hunt threats, and simulate attacks at machine speed, and attackers are using AI to customize phishing and social engineering, amongst other things. I think the key shift is really scale and iteration at speed. Where a human might test dozens of ideas, an AI-assisted attacker can test thousands in seconds. So it's scale and speed.

SPEAKER_00:

One of the things that's going on in the Gulf with G42's Core42 is the on-demand capabilities they're launching, and they've made some predictions about the sheer number of LLMs that are going to be put out into the marketplace. Everyone at the moment thinks of LLMs as a Copilot, a ChatGPT, a Grok, and so on, but they're predicting many, many more of these LLMs will be launched. So it begs the question: do you think the LLM will eventually replace the traditional search engine, or is that just a load of old hype?

SPEAKER_01:

I don't think it's a replacement, at least not yet. It's more of a fragmentation. LLMs handle queries requiring synthesis, "summarise these three conflicting regulations", while traditional search still dominates retrieval, "find a restaurant near me". The tipping point where AI search overtakes Google is probably two to three years away. Right now it's only about five to ten percent of the volume, but it's high-value volume: the complex questions knowledge workers ask.

SPEAKER_00:

Okay, so you guys are working on a firewall product for Gen AI, and it's called AI Data Firewall. So what is that in plain English, for the non-technical people listening to this podcast?

SPEAKER_02:

Yeah, there's a challenge for you, Darren.

unknown:

Yeah.

SPEAKER_02:

Okay, so in the most simplistic terms, it's a firewall: it prevents information from going to Gen AI, or any other kind of AI, but Gen AI specifically, when it doesn't need to. We're guarding against personal, sensitive, and commercially sensitive information being leaked into Gen AI where it doesn't need to be. As part of our protection process, we use a technique called pseudonymization, which is effectively a type of encryption, or replacement if you like, but using in-context data. So "Darren" gets changed from Darren into John or Fred or some other random male name as part of that process. It's a simple concept to understand, but through this process of pseudonymization, the Gen AI retains context. You're not saying "person one" or "person two" with some kind of token replacement, and you're not just blocking out the names or other personal information. The Gen AI has the context, so you can still ask those kinds of questions: "tell me what role Darren plays in this document", et cetera. Those can be passed through to a Gen AI, and you still get a sensible answer back. Whereas with other forms of protection from people who say they do a similar thing to what we do, that's not the case. You lose that context; you lose the ability to drill into the details, because the information just isn't there for the Gen AI to provide the answer. Interesting.
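To make that concrete, here is a minimal sketch of context-preserving pseudonymization as Darren describes it. The stand-in name pool, the simple regex matching, and the call_llm placeholder are illustrative assumptions for this episode's notes, not Contextal's actual implementation.

```python
import re

# Stable mapping so "Darren" always becomes the same stand-in within a session.
REPLACEMENTS: dict[str, str] = {}
POOL = iter(["John", "Fred", "Alice", "Priya"])  # hypothetical stand-in names

def pseudonymize(text: str, names: list[str]) -> str:
    """Swap each detected name for a consistent, realistic stand-in,
    so the LLM keeps grammatical and relational context."""
    for name in names:
        REPLACEMENTS.setdefault(name, next(POOL))
        text = re.sub(rf"\b{re.escape(name)}\b", REPLACEMENTS[name], text)
    return text

def depseudonymize(text: str) -> str:
    """Reverse the mapping on the LLM's answer before showing the user."""
    for original, alias in REPLACEMENTS.items():
        text = re.sub(rf"\b{re.escape(alias)}\b", original, text)
    return text

prompt = pseudonymize("What role does Darren play in this document?", ["Darren"])
print(prompt)  # "What role does John play in this document?" (safe to send out)
# answer = call_llm(prompt)       # hypothetical external Gen AI call
# print(depseudonymize(answer))   # "Darren ..." restored locally
```

Because the replacement is a realistic name rather than a redaction token, the model can still answer relational questions about "John" and the mapping is reversed locally on the way back.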

SPEAKER_01:

I think that's right. The other thing to mention here is the juxtaposition with traditional firewalls. They look at where your data is going, IP addresses, and so on. What we're doing is looking, in real time, at what the data actually is, and dealing with it there and then. And what I think is really cool about that, although I'm trying to rein in my bias, is that by dealing with the data there and then, you adhere to the regulatory regime. What we're finding is that, because of the way we've set this up and the way we're doing it, we're very aligned to current GDPR principles and, in effect, the EU AI Act, which is starting to gain some recognition, albeit it's subject to quite a lot of change at the moment. I like to think of it as an auto-regulatory data management system in real time.

SPEAKER_02:

Yeah, and keep in mind that it's data loss prevention; it's not just about privacy. Even if you're in an organization that isn't concerned with privacy regulation, this is about breach prevention, data loss prevention, and good information security and hygiene. It goes back to those very simple principles of ensuring you're only sharing the data that needs to be shared at any particular point in time.

SPEAKER_01:

And it's important to note at this point that traditional DLP tools don't really work in an AI world, right? They're not fit for purpose. So it's important to raise that to the top.

SPEAKER_00:

Okay. You guys are based in the UK, but I understand Contextal has just entered the Gulf market, so congratulations on that. Interestingly enough, I'm obviously in Dubai, and when I got up this morning it was very foggy; it reminded me of London. So there you are. But what's different about the Middle East conversation around AI governance compared to the conversation you're having in Northern Europe? Or is there a difference?

SPEAKER_02:

Yeah, there definitely is a difference, and I would summarize it as the Gulf region being very much AI-forward and AI-aware. Many regions are, that's not unique to the Gulf, but I think there's a real focus on pushing AI forward, and on local models. You've already mentioned G42 and Core42 in the conversation; the ability to do that in-region is really powerful messaging coming out of the Gulf. So there's a lot of awareness, and there's also a heightened awareness around DLP, particularly with some of the regulated organizations we're jointly having conversations with.

SPEAKER_01:

I think that's absolutely right. I'd also add that what's interesting about the region is that they're very focused, or seemingly very focused, on building almost a sovereign AI infrastructure.

SPEAKER_00:

Yeah, that's true.

SPEAKER_01:

They're so focused on doing that to reduce the reliance on Western tech, and it's really good to see. And purely from a business side of things, sales cycles seem to be in days and weeks rather than months and years. If you compare it to how enterprises engage in the UK and the EU, the speed and iteration at which the Middle East is moving in this space is just phenomenal.

SPEAKER_00:

Yeah, it is different, for sure. So going back to the risk aspect, which is what a lot of people are worrying about: do you have a view of what the top three Gen AI risks are that boards are actually losing sleep over?

SPEAKER_02:

Yeah, obviously from where we're coming from, and the conversations we're having with members of C-suites and boards, there is an increasing awareness of the DLP and cybersecurity aspect, so I'd definitely put that in the top three. But I'd also include some of the more regular impact-assessment-type risks, if you like: what is AI today, what's it going to be tomorrow, how does it impact my business, how does it affect our competitiveness? There are sectors that have already been quite damaged by AI as it appears at the moment, proofreading and things like that, as simple use cases. Those kinds of risks are something boards are still grappling with today. I would also put governance and oversight, separate to the DLP side. It's now quite easy for groups of people to go off, have a conversation with a Gen AI "expert", in inverted commas, and come back with seemingly very credible answers, responses, and documents that are not nearly as professional, well-founded, or founded in truth as they might otherwise be. That's something organizations are looking at, and obviously AI acts like the EU one Rob already mentioned play into that, because organizations have a responsibility for the information they're producing and the advice they're giving to others. So the ability to look at that from a governance, risk, and compliance perspective is really important too, and that's an area we work in as well. I think it dawned on a lot of people when some of the lawyers got really caught out. There were a couple of famous instances where lawyers had used ChatGPT and it had made up case law and examples. I think that really proved it to some organisations: my goodness, even lawyers are being caught out by this. Where does this leave us? We're setting all these people loose and encouraging them to use Gen AI on an increasing basis. How do we actually govern that? How do we defend ourselves against the wrong information being used?

SPEAKER_00:

Yeah, I think you're right; the GRC thing for me is quite a big one. There are quite a lot of debates here with some of the central banks in the region about how AI is governed from a regulatory perspective, and some are struggling to get their heads around what guidance to provide. I've used this analogy before: in the absence of permission, organizations are just going to ask for forgiveness a bit later on, once they've implemented AI within the context of their organisation. Going back to those banks, we're seeing that they're trialling a whole raft of AI in their organisations; there are as many use cases as you can think of, and it's a bit of a feeding frenzy at the moment. So yeah, that definitely resonates.

SPEAKER_01:

Can I add one thing to that list? I wouldn't disagree with anything anyone said, but the one thing, and I spoke to a CEO yesterday about this, the one thing he was actually worried about was the unauthorised, unsanctioned use of AI in his organisation. The shadow AI aspect of all this, I think, is the thing that keeps management teams up: what's already left the building? What are people using that hasn't been authorized by the security people? When you look at the stats Gartner is producing at the moment, well over 85% of people globally are now using some form of AI, either sanctioned or unsanctioned. Management teams are trying to work out how to get that down, and really one of the only ways to do that is to work with organizations, I suppose, like ours, which make AI usage safe. If you've got technology in there that makes the interactions with LLMs safe, that has the double effect of letting people do what they want to do in terms of prompts and uploading files, while also reducing people's reliance on unauthorised tools. Just adding that.

SPEAKER_00:

Yeah, that's a nice segue into the dilemma, isn't it? As we said before, organizations want the productivity gains. Employees are using these tools in the privacy of their own homes, and they want to use them at work. So how are those organizations actually responding to employees demanding these sorts of tools? Because I think there are a couple of ways of answering that question, from "we're not using it" to free range, off you go, and everything in between. Do you want to comment a little on that dilemma?

SPEAKER_01:

As we see it, there are three categories. There's the "ban it" category, which doesn't last that long, because organizations can't really afford to just outright ban it; it's a knee-jerk reaction to the risk. Then there's the second category, which is limited access governed through policies and procedures, which is about as useful as a chocolate fireguard. Try telling somebody in finance, busy trying to do his work, that what he's typing into a prompt isn't acceptable for whatever reason. People just don't know; they're not aware; there's no real training, certainly as I've seen it, being uniformly rolled out in this area. And then you've got open, free access, where it's not locked down and people are trusting the likes of Microsoft's Copilot implementation, Gemini, and others, and there are quite significant risks there as well. We've stopped talking about cost savings and started talking about risk avoidance and velocity.

SPEAKER_00:

Yeah. Where does the AI Data Firewall sit, then, in the modern security architecture, Darren? Is it zero trust, least privilege? How does it map?

SPEAKER_02:

Yeah, absolutely. Well, it maps to all of those. The old models used to be like a medieval castle: you had a perimeter, beyond which were the bad guys on the outside, and anyone inside the walls was a good guy, until they weren't, right? Until the bad guys got in. That was very much how organizations had arranged themselves from a security posture, up until, I guess, the early 2000s, when that really started to break down. People were seeing that it no longer held, because once you were on the inside you had free rein; you were trusted. So zero trust came about as a response to that: if we can't trust everyone who's inside the building, how do we establish trust? And some people on the outside of the building we want to trust equally, because they're trusted partners, however that may work. That's where zero trust came in. Now, LLMs and Gen AI have broken that model quite flagrantly, really. When we're working with clients and partners, I see documents being sent out to Gen AI that I know people would have been fired for sending out five or ten years ago; they wouldn't have been allowed to send those documents out. But suddenly they're sending them out to Gen AI, they're being encouraged to use this stuff, and suddenly it's "safe". But it's not safe; everyone's just burying their head in the sand about it. So all the Gen AI stuff breaks the zero trust model and the data minimization model; they just don't adhere at all. What we're doing adheres fully. We're taking a zero trust approach, and we're helping organizations maintain their zero trust stance. On data minimization, we're making sure that personal information and other kinds of data are not being exfiltrated outside the organization, or to an LLM, which should be considered an untrusted third party. So we sit very much within that model and comply with it.

SPEAKER_00:

Walk us through a real-life example then. You mentioned the law firms earlier, so take a law firm: they're using ChatGPT because they've been allowed to, and they're reviewing a load of NDAs. What happens with and without the AI Data Firewall, in those two scenarios? First of all, is that a real-life scenario, have you seen it? And can you walk us through the implications of doing that within a law firm, without any protection and with it?

SPEAKER_02:

Yeah, sure. Well, let me twist that example just a little bit, but bear with me; I think you'll understand where I'm coming from. Rob and I had a conversation just the other day with a law firm who said, look, explain these risks to me. One of the examples is that right now there's a big court case going on in the US where OpenAI is being sued by the New York Times. People have perhaps heard about this; it's a big, important case. One of the things the judge has said is that any prompts, any files, any content submitted to OpenAI has to be retained for legal discovery. Now, imagine you're a law firm pushing documents, content, and prompts into ChatGPT. Suddenly that content is going to be reviewed by a third-party law firm. That's fine, they're reviewing the case against the New York Times, but that information is now going to be shared with third parties and third-party law firms, potentially ones you're sitting on the other side of in some of those cases. It's not an ideal situation. Now, how would the AI Data Firewall change that situation? Well, the names of the corporations, the names of the individuals, and other details can be pseudonymized and changed as part of the submission process, or even removed. Or the documents can be prevented from going out entirely, because they're determined to be too sensitive to be sent to an external Gen AI. So as part of that process you've got that extra layer of obfuscation and pseudonymization, but also of data loss prevention all round. In that kind of instance, a document has been pseudonymized: "the names have been changed to protect the innocent", to use that old phrase from 70s cop shows. The names have been changed, the details of the organizations have been changed, perhaps even the amounts have been changed, so the commercial sensitivity has disappeared completely. And those risks aren't just there when an organization is getting sued, as in the New York Times suing OpenAI. This information is going out regularly, out of your organization and into the hands of another organization that's going to use it in ways you don't necessarily know about at the moment. And the contracts: we very often hear organizations say, "oh, the contract says they can't do it", but those contracts are getting changed with increasing frequency right now. We're seeing this all over the place. One that happened just the other day: LinkedIn, not necessarily a document-sharing location, has now changed its rules to say that any content, your profile, any posts you put up, any slides you share, and so on, is going to be read by LinkedIn and used to train their AI.

The long and the short of it is that using the AI Data Firewall protects organizations and ensures that information follows those data minimization requirements and zero trust requirements, and also adheres to privacy regulation, whether in the Gulf, the EU, or the US. All of those things flow together, and you've got that one additional layer of protection and adherence.
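As a rough illustration of the decision layer Darren describes, pseudonymize, redact, or block outright, here is a hedged sketch. The categories, keyword watch-list, and threshold are invented for the example; a real product would use classifiers and policy, not a keyword count.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    action: str   # "allow", "pseudonymize", or "block"
    reason: str

# Illustrative watch-list only; stands in for real sensitive-data detection.
SENSITIVE_TERMS = {"settlement", "nda", "confidential", "salary"}

def inspect_outbound(document: str) -> Verdict:
    """Decide what happens to a document before it reaches an external Gen AI."""
    hits = {t for t in SENSITIVE_TERMS if t in document.lower()}
    if len(hits) >= 3:
        return Verdict("block", f"too sensitive to leave the org: {sorted(hits)}")
    if hits:
        return Verdict("pseudonymize", f"sensitive terms found: {sorted(hits)}")
    return Verdict("allow", "no sensitive content detected")

print(inspect_outbound("Draft NDA: settlement amount and salary are confidential."))
# -> Verdict(action='block', reason="too sensitive to leave the org: [...]")
```

The point of the sketch is the three-way outcome: most content flows through transformed, and only the genuinely dangerous material is stopped at the perimeter.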

SPEAKER_00:

Right, interesting. Let's talk about the return-on-investment conversation you're having. You're obviously speaking to CISOs and CFOs right now. What is the ROI question you get? Is it a risk-mitigation ROI?

SPEAKER_01:

Partly, yes: this is what we're putting in the way in order to understand the kind of data that is leaving the building, and what we propose doing to stop it. And we track all sorts of things: the prompts, what's in the prompts, what files are being uploaded. There's a fairly comprehensive audit log of what we're doing. And Mark, you and I talked about this not that long ago, about running a sort of gap analysis. It's something we're considering doing, which is basically: look, we can come in, plug ourselves into your environment, and in a relatively short space of time we can pretty much see, at a network level, the kinds of information that's leaving the building. And that's quite surprising in some cases. Some of the things we've seen before are really quite surprising, what people put into prompts and what people upload in documents. It's validated our use case entirely, in our view.
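For a sense of what such an audit trail might capture per intercepted prompt or upload, here is an illustrative record shape. The field names and values are assumptions for the example, not the product's actual schema.

```python
import json
from datetime import datetime, timezone

# One hypothetical audit entry per intercepted prompt or file upload.
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "user": "j.smith",                                     # who sent it
    "destination": "chat.openai.com",                      # which Gen AI endpoint
    "prompt_findings": ["client_name", "contract_value"],  # sensitive items detected
    "files_uploaded": ["draft_nda.docx"],
    "action_taken": "pseudonymized",                       # or "blocked" / "allowed"
}
print(json.dumps(record, indent=2))
```

Records like this are what make the gap-analysis exercise possible: aggregate them over a couple of weeks and you can show a board exactly what has been leaving the building.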

SPEAKER_00:

Yeah, I think that could be quite powerful, because you don't know what you don't know, right? Like all risk, it's usually managed with a series of layered controls, and this is just another layer to address the changing risk posture of an organization now that they're using these tools. So yeah, I see that. Where do you see Gen AI governance going in the next five years? Do you have any predictions from a governance perspective?

SPEAKER_01:

So I'll be a little bit controversial here: I think it's going to become completely invisible. Just as HTTPS is baked into the web, I think AI governance will be baked into the network. I'm not sure everyone will agree with me, but I just don't think you'll end up buying a separate AI firewall; your entire network will natively understand and sanitize the semantic data that's moving around the system. But for the next five years, because I think that's how far out it is, you're going to need a specialist layer. That's just me in a dark room with a flannel on my head, posturing about the future. People will, I'm sure, disagree with me; Darren, I'm sure, will be one of them.

SPEAKER_02:

Well, look, I'd love that to be the case. I think five years is probably a little short a horizon to redesign the entire network layer; bear in mind that HTTPS took many, many years, and it's still controversial in some circles too. And you're still going to need governance; that's the first thing. This doesn't become a governance-free zone, some kind of free pass. AI acts in the US, the EU, and other places too are coming to bear on these kinds of things, so from that perspective you're going to have to do more. And the more you use it, and the more it replaces people, which it will, the more governance you're going to have to have around it. If you're not managing an AI at least to the same level you'd manage a person, and providing oversight and governance at least to the same level as for a person, then you're going to be out of whack. There'd be a governance reduction, and that isn't where the world is heading. So I definitely think there'll be aspects to this.

SPEAKER_01:

I agree with that. I guess the dichotomy, though, is that the speed at which AI is moving forward from an innovation standpoint, vis-à-vis the speed at which regulation moves, means the gap is just getting wider, right? So having an enforcement layer, which is really what we are, said slightly differently, having an enforcement layer in there to make those kinds of decisions in real time, is a real benefit. We perhaps don't make enough of that, but that's why I'm so bullish about what we're doing.

SPEAKER_00:

Okay. What's the biggest misconception about Gen AI, from a risk point of view, that you hear from execs?

SPEAKER_02:

Okay, well, I'll leap in with one, and there are two parts to it. The biggest misconception is "you can just send in whatever you like to it, it doesn't matter", and the other side of that is "the contract protects us". I would say those are the greatest misconceptions; they're the emperor's-new-clothes misconceptions of our age. No security professional would consider a contract to be the sole protection, and neither should executives, board members, or security people consider that around Gen AI. And too many of them do right now. Some of them are getting bitten by it already, and more will be. Thankfully our customers aren't amongst them.

SPEAKER_01:

I heard one the other day which made me laugh: "our people will follow the policy". They won't. Not because they're negligent, but because AI is just too useful, and policy without technical control is fantasy.

SPEAKER_00:

Well, that supports the increase in shadow AI, because they won't. They see the efficiency gains these tools can deliver, and they want that at the desk, nine to five, not just in the evenings when they're scrolling through the internet.

SPEAKER_01:

Here's another contrarian one, which I think people often miss: that a human in the loop effectively solves the security problem. It doesn't. Humans are tired, bored, and click approve too easily.

SPEAKER_00:

Don't get me started on that. With my IAM head on, and the approval-for-entitlement debate, I know exactly what human behaviour looks like in that regard: "I really can't be bothered, so I'm just going to approve". Exactly. Final question then from me. One of those execs is listening to this and has been putting it off for a while. What do you recommend they do in the first 30 days?

SPEAKER_02:

Look, first 30 days: if they're already using Gen AI, and let's assume most execs, certainly those listening to such a smart podcast, will be, then get to know and understand the risks. You've listened to what we've been saying here, so come and chat with us. The first thing you should do, certainly if you're in the Gulf, is reach out to Mark, or to Rob or myself, and we can take you through a full, tailored plan. If you're not using Gen AI, or you haven't looked at this stuff, start looking at it now, because your organization is going to be left behind by your competitors, who I guarantee are looking at it if they're not already using it.

SPEAKER_01:

I couldn't agree more with that, and I'll leave you with one more thought: you cannot govern what you can't see. I would encourage anyone to install a passive monitor like ours; we can be put in in audit mode, doing a gap analysis as we talked about earlier on. Are you curious to know what's leaving the building? We can tell you within 10 business days: a full audit report, which is really useful, plus recommendations on what to do next. The results might terrify you, but that report is the only budget justification you'll need.

SPEAKER_00:

Yeah, we should expand on that in the podcast notes, so we'll make sure we get that across. Gents, really appreciate your time; it's been a great conversation as ever. Thankfully we didn't get onto football, which is another great relief considering the team that I support. No, no, it's okay, we'll save that for another occasion. Guys, thanks a lot. Speak to you soon.