From vulnerabilities associated with autonomous decision-making to AI-powered attacks, rapid advances in AI agents can pose novel threats to organizations and society.
What strategies must businesses adopt to leverage AI agents responsibly and effectively while minimizing emerging cybersecurity risks?
This is the full audio from a discussion hosted at the World Economic Forum in Geneva on 13 November, 2024. The video is available to Forum digital subscribers here: https://toplink.weforum.org/event-mode/a0PTG0000004wNC2AY/sessions/a0WTG000000TESb2AO/defending-the-digital-mind-the-emerging-challenges-of-ai-agents
Grant Waterfall, Partner, Europe, Middle East and Africa and Germany; Leader, Cybersecurity and Privacy, PwC
Hoda Al Khzaimi, Director, Centre for Cybersecurity, New York University Abu Dhabi
Matan Getz, Chief Executive Officer, Aim Security
Nic Chavez, Chief Information Security Officer, DataStax
Check out all our podcasts on wef.ch/podcasts:
Podcast transcript
This transcript has been generated using speech recognition software and may contain errors. Please check its accuracy against the audio.
Grant Waterfall: Good morning and welcome to this session entitled "Defending the Digital Mind: The Emerging Challenges of AI Agents."
A couple of weeks ago, I was in Berlin at Microsoft's AI Tour, where I attended a presentation by Satya Nadella. He spoke about this exciting, transformative new world of automated and personalized agents, and how it can be democratized so that all of us can create agents using Copilot Studio.
This was really the big theme of the whole conference. He also drew an analogy to the world of 30 years ago, or even today, where we have thousands and thousands of spreadsheets in our organizations, and how, in some ways, agents would take over that role.
I thought it was a great analogy because, in a way, it shows how AI agents and personalized automation will become democratized in the end-user context and therefore massively transformative for us.
But at the same time, we all remember what a mess having thousands of spreadsheets in an organization was, and is, and how we spent years trying to get rid of them and move them into more controlled systems.
So I think it opens up a world of opportunity but also a whole lot of risk around how we control this new world that will be unleashed upon us.
And my prediction is that this agentic world will become quite mainstream, so everybody will be talking about AI agents soon. At the moment it's a hype area, but the hype is mostly confined to the tech space.
So I do think that in a year or two, Microsoft and the other hyperscalers will democratize this phraseology. And of course it's not just Microsoft; all the big tech companies are doing it, along with a bunch of smaller tech companies as well, and we've got a couple of them here who can comment.
So we're going to explore two parts of this agenda today. The first is the organizational adoption of agents: what it means more broadly for an organization to adopt lots of agents, automate lots of things and use personal agents, but also what it means for security and how we might get better at securing them.
Then we'll touch on the offensive side: the bad guys, attackers, using agents and being able to very easily create AI agents in an offensive context. Let me introduce my panel very quickly.
So I have a really great panel here today. First of all, let me introduce Hoda Al Khzaimi, who is the Director of the Centre for Cybersecurity at New York University Abu Dhabi. Welcome, Hoda.
Matan Getz who is the chief executive officer of Aim Security and Aim is an innovative start-up, which is actually looking at protecting the use of AI in organizations. And Nic Chavez, who's chief information security officer [CISO] at DataStax, which is an AI data platform.
So, I'm sure you'll agree there's a lot of great experience in this group. I'm going to kick off with an opening question to all of you, and that is: for each of you, in your world, what do AI agents mean to you, and what are the implications and considerations that businesses need to think about in adopting them? I'll start with you, Hoda.
Hoda Al Khzaimi: I think AI agents are not something that is quite new to the field, I would say. I mean, look at all of us. When was the last time, or the first time, that we used Siri or Amazon Echo or these kinds of devices? Those are considered personalized assistants, and that is also a model of AI agents.
When was the last time we heard of automated car systems, where you want an automated driving experience? That is also, ultimately, a model of an AI agent.
So the ultimate capacity of an AI agent is the ability to have that. At the moment, I think it's very restricted by being a piece of software, but in the future it will be a capability of software and hardware that can analyze big data structures around the organization and make automated decisions without the intervention of a human in the middle.
And it's quite a critical exercise at the moment, because you use a lot of reinforcement learning and other machine learning techniques that go with analyzing and learning from huge datasets about environments.
So the accuracy of those models is quite important. I'm not sure how many of our audience are working with manufacturing units that have these kinds of autopilots automating the processes around factories, but this is also a good example of an AI agent.
Grant Waterfall: Some more examples there. Thank you very much. And Matan, over to you next.
Matan Getz: Thank you for the opportunity to be here this morning. For me, to keep it very simple, AI agents are just applications that act on your behalf. And similarly to what Hoda said, Alexa and Siri have been here for a long time. The change is exactly what you described: I believe the change is the ability now to create your own AI agent very, very easily.
You don't need to be a data scientist. You don't need to be a software engineer. With some of the tools that are out there, you can just make a few clicks in a UI, or maybe give a natural-language description and instructions, and you have your own agent.
So the change is the democratization you described. And now, with everyone able to create so many AI agents, this is where all the security concerns come in, because the Alexas and the Siris of the world used to be owned by the big giants. Not anymore. Now they give us the ability to do this ourselves.
Grant Waterfall: To create our own, right. Excellent. Thank you, Matan. Nic.
Nic Chavez: So to your point, it's incredibly simple to create an AI agent. We have something called Langflow that we use to do that. And so given my role as CISO of an AI data platform company, I often meet with other CISOs to help them understand what it looks like.
And so I guess the way that I most frequently describe it is: Imagine your most valuable employee or the colleague that you just enjoy working with the most, right? And that's an AI agent.
And really, there are three things that characterize what an agent is good at. The first one is that they're autonomous, right?
After you give them the initial instructions, they go off on their own with very little input from you. The second thing is that they're adaptable. You tell it what to do, it will try to figure out how to do it, and if it can't, it will try a different way.
And the third thing is that it's very goal-oriented. After you've given it the description of what you'd like it to do and how you think it should do it, it will keep going until it achieves that goal.
Now, we think of a colleague as one person, right? What if you had ten of those? What if you had 100, or 1,000, or a million? To go easy on our human brains, let's just say it's a thousand colleagues doing exactly what you say all of the time until they achieve the goal, at which point they can be deployed to do something else.
And so that's agentic AI: whatever you point it at, it will, along with its vectorized database store and the LLM that powers it, accomplish your goal.
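To make those three characteristics concrete, here is a minimal, illustrative sketch of an agent loop in Python. The class and method names are hypothetical and not drawn from any product mentioned on the panel; the point is only the shape of the behaviour Nic describes: autonomous (it runs without further input), adaptable (it re-plans on failure) and goal-oriented (it keeps going until the goal is met).

```python
# Minimal, illustrative agent loop (hypothetical names, not a real framework).
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    max_attempts: int = 5
    history: list = field(default_factory=list)

    def plan(self) -> str:
        """Ask the underlying model for the next action (stubbed here)."""
        return f"attempt-{len(self.history) + 1} towards: {self.goal}"

    def execute(self, action: str) -> bool:
        """Run the action against tools/APIs; return True on success (stubbed)."""
        self.history.append(action)
        return len(self.history) >= 3  # pretend the third attempt succeeds

    def run(self) -> bool:
        # Autonomous: once started, it needs no further human input.
        for _ in range(self.max_attempts):
            action = self.plan()        # adaptable: a fresh plan each iteration
            if self.execute(action):    # goal-oriented: stop only once the goal is met
                return True
        return False

if __name__ == "__main__":
    print(Agent(goal="summarize this quarter's incident reports").run())
```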
Grant Waterfall: So massive opportunity there.
Matan Getz: I would love to build on Nic's beautiful analogy, because I love that analogy between humans and agents; this is the ultimate goal.
But think about what expectations you have of humans. You expect them to behave politely, to behave the right way, and sometimes those expectations are unspoken. We had an expectation that we would all get here on time, right? That we would dress a certain way, that we would speak a certain way.
With machines, all of these expectations need to be explicitly described. These are the guardrails that everyone is talking about, and this is the complicated part of the opportunity you describe.
Nic Chavez: Yeah, I agree with that. I think that, as an IT professional or a stakeholder in the business, you really have to think about what the goal is, and whether you can leverage the cloud, which has some trust and safety built into it by the large language model providers.
Or do you need to go with an on-prem, almost small language model tied to an AI agent, with some sort of vectorized database to vectorize all the information?
In doing so, you can then deploy with your own trust and safety guidelines, which may be very different from the cloud-based environment.
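As a rough illustration of that deployment choice, the sketch below contrasts the two options Nic outlines. The field names and the decision rule are hypothetical, not a DataStax or provider API; it only captures the trade-off between a hosted model with the provider's trust-and-safety layer and an on-prem small model with the organization's own guardrails.

```python
# Illustrative sketch of the cloud-versus-on-prem deployment decision (hypothetical names).
from dataclasses import dataclass

@dataclass
class AgentDeployment:
    model: str        # hosted frontier LLM or a local small language model
    data_store: str   # where the vectorized data lives
    guardrails: str   # who enforces trust and safety

CLOUD = AgentDeployment(
    model="hosted-llm",
    data_store="managed-vector-db",
    guardrails="provider trust-and-safety defaults",
)

ON_PREM = AgentDeployment(
    model="local-slm",                      # small language model inside the perimeter
    data_store="self-hosted-vector-db",
    guardrails="organization-defined policies",
)

def choose(data_is_sensitive: bool) -> AgentDeployment:
    """A crude decision rule: sensitive data pushes towards on-prem control."""
    return ON_PREM if data_is_sensitive else CLOUD
```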
Grant Waterfall: So we're starting to go down this route of AI agent adoption within organizations. And maybe Matan, I'll start with you. Let's just dig a little bit deeper into some of the risks which exist around this kind of ability for everybody to be creating AI agents and what that might mean in organizations.
So I know your company is quite focused on solving these problems. So where do you see the risks?
Matan Getz: Yeah. So first, before describing the risks, I want to say that these are early days, with everyone setting and deciding their own policies, and unfortunately most companies act the way they acted when ChatGPT was released to the world. They take a yes-or-no policy. I think that's the wrong way.
Usually, companies either block the use of ChatGPT or AI agents, and then they introduce friction; they limit the business value that makes this such an amazing proposition. Or, if they say yes, they tend to just say yes and have a completely open playground where anyone can do everything.
I think there are three parts here that are very risky with AI agents. The first is their actionability, the autonomous part. If an agent has the ability to run financial transactions, for example, or to make decisions on your behalf with no human confirmation needed, that is a risk. So that is the first part.
Second, AI agents are very good when they are doing what they were designed to do. But when you start manipulating them, intentionally or unintentionally, small data drifts can make them behave in ways you did not want them to behave.
This is where attackers can leverage that kind of unpredictability, or employees and other people can trigger it unintentionally. So I believe this is the second part.
And the third part, which applies mainly to organizations that embrace AI agents: an agent gets very effective when it is connected to or trained on sensitive data. But when AI touches sensitive data you need to be very careful, because you have to make sure the right guardrails are deployed to prevent attacks and data leaks. So that is the third part.
Grant Waterfall: Right. So, in theory, we could be increasing the attack surface of the organization by deploying these agents without control.
Matan Getz: Exactly.
Hoda Al Khzaimi: Okay. And to your point, Matan, I think what we have here is a dilemma in trying to understand the efficacy of applying these agents within a platform. While you are trying to improve the autonomous decision-making structures within the system to gain operational efficiency, you are also introducing a threat factor.
You are possibly introducing a mass capability for surveillance and analysis of your systems. And even if you have applied it within limited structures, you don't know whether a malicious actor could get into the AI agent's configuration and apply something like a generative adversarial network approach, where adversarial models are downloaded onto the agents in order to tweak them and attack the system.
So this kind of intentional attack space exists one way or another. At the same time, you mentioned the merging of AI agents with LLMs. Agents don't strictly work only with LLMs; an LLM is just one of the different machine learning and automation structures they work with, but it does open up this kind of vulnerable space.
LLMs are not really a perfect space for reliability, because they are very prone to false positives and false negatives. So what would happen if you introduced a critical decision-making capability into your own automated manufacturing unit, and a small tweak introduced a false positive or a false negative into the whole organization?
Instead of solving a problem, you introduce that kind of phenomenon at scale into the organization, and dealing with that kind of mass failure originating from a single point is very important.
I think the second risk we should highlight here is the fact that AI agents tend to require a specific level of privacy protocol to be applied on the infrastructure. Most of those privacy protocols are not actually designed to be used at scale, nor to be used in IoT sensor networks, and most of those agents will work with IoT sensor networks.
I'm talking about federated learning, for example, homomorphic encryption, differential privacy kinds of protocols. They do well in a limited structure, but if you want to apply them at scale they have issues of, I would say, speed and computational complexity.
So how would you deal with those? What kinds of new protocols might you need to introduce into the environment in order to preserve privacy and other properties within the wider map of things?
Grant Waterfall: There's a lot in there.
Matan Getz: And I can highlight a very common mistake that I see lots of companies making. Usually what happens is that there is a policy where the creator of an AI agent can actually give the AI agent all of their own permissions.
Well, I'm not sure this is the right policy. When there is a human in the loop, maybe you can access sensitive data; when it's autonomous and running at scale, I'm not sure the same permissions should be inherited by the AI agent just because it was created by the same person. I know it's a very simple use case, but a very common one.
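A minimal sketch of the least-privilege idea Matan raises, assuming a hypothetical permission model: the agent is granted only the scopes it explicitly requests, never more than its creator holds, and sensitive write or approval scopes are stripped unless a human sign-off is in place.

```python
# Illustrative least-privilege scoping for an agent (hypothetical permission names).
HUMAN_PERMISSIONS = {"read:crm", "read:finance", "write:finance", "approve:payments"}

def scope_agent_permissions(creator_permissions: set[str],
                            requested: set[str],
                            allow_sensitive: bool = False) -> set[str]:
    """Grant only what the agent explicitly requests, never more than its creator has,
    and strip sensitive write/approve scopes unless a human sign-off flag is set."""
    granted = requested & creator_permissions
    if not allow_sensitive:
        granted = {p for p in granted if not p.startswith(("write:", "approve:"))}
    return granted

# The agent asks for broad access but, without human sign-off, keeps only read scopes.
print(scope_agent_permissions(HUMAN_PERMISSIONS,
                              {"read:finance", "write:finance", "approve:payments"}))
# -> {'read:finance'}
```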
Grant Waterfall: Yeah. I mean, that's an interesting concept. You would sort of generally think that the human permissions would be inherited but when you start to think about it, it's a different type of processing.
Nic, you advise your customers on this kind of thing. So what sort of questions are you getting and from a risk perspective on AI agents?
Nic Chavez: The most common question I'm asked about AI agents, because it is drag-and-drop and what you see is what you get with our product, is: Is my data secure? Is my data still secure at rest and secure in transit? Right?
Are the APIs secured? How are they secured? It's an important question, because on the surface it's very simple. As you dig a little deeper into it, data poisoning is a real thing. I think that's what you were both referring to, and that implies some sort of malicious actor. Right.
But there's something else you could be exposed to, depending on how your agent is configured to learn: if it's at all self-referential, there is the probability or possibility that it could hallucinate when generating information.
You now have an imperfect dataset, and you're creating vectors within your database that tie perfect information to imperfect information. Right. Then you systematize it and maybe you scale it. And so we always say you don't want to scale a bad process, but it's certainly possible.
Matan Getz: Yeah. Yeah.
Hoda Al Khzaimi: Imagine if the whole system around you could say anything about critical data and critical infrastructure performance. So I think there should be limitations, and there are only limited applications of AI agents within those circumstances. But how can we make sure of that as well?
I think you mentioned at the beginning of the conversation that those AI agents are not there just for functionality. They're also there to interact with humans.
So if they interact with humans, then we have to bring in the other dimension of the operation, which relates more to understanding the human intelligence in the room, the emotional intelligence in the room, being careful about people's well-being and all of the other ethical considerations across the map, especially if they're going to be used on healthcare data at scale.
Matan Getz: Being very practical, I would love to suggest four characteristics. If we truly understand them and define them for any AI agent, I'm not sure it will be fully secure, but you're on the right path to making sure it is. The first one, which I see most companies struggling with, is the purpose of the AI agent.
The AI agent needs to create value, and the value needs to be very specific and well defined. I'll give you an example: an online shopping assistant. Well, what do you mean, to shop for clothes, or also for chemicals? The purpose of the AI agent should be very narrow and very specific. And I believe it's the same as with software.
We want to build software out of very specific components, with microservices. It's the same with AI agents: we want agents to be as specific as possible. So purpose is the first thing.
The second thing is accountability, and accountability doesn't end on the day of creation; it's the opposite, it starts on the day of creation. Third, discoverability: everything needs to be auditable. And the fourth one, and I think we are all talking about that part, is that it needs to be trustworthy. And I'm sure we are going to discuss that.
Grant Waterfall: It needs to be, sorry?
Matan Getz: Trustworthy.
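Those four characteristics could be captured in something like an agent "manifest". The sketch below is purely illustrative, with hypothetical field names rather than any real standard: a narrow purpose, a named owner for accountability, an audit trail for discoverability and explicit guardrails for trustworthiness.

```python
# Illustrative agent "manifest" for the four characteristics (hypothetical fields).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentManifest:
    purpose: str                                     # narrow, well-defined value it creates
    owner: str                                       # accountability starts on day one
    audit_log: list = field(default_factory=list)    # discoverability: auditable actions
    guardrails: tuple = ()                           # trustworthiness: limits it must respect

    def record(self, action: str) -> None:
        """Append a timestamped entry so every action stays discoverable."""
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), action))

shopping_assistant = AgentManifest(
    purpose="help customers shop for clothing only",
    owner="retail-ops@example.com",
    guardrails=("no chemicals or restricted goods", "no payment execution without approval"),
)
shopping_assistant.record("recommended three jackets")
```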
Grant Waterfall: Okay. Right. That is great advice. Let's park that; we'll come back to the control side when we talk more about the use of AI agents in cybersecurity. But before we do that, I'd like to go to Nic.
And what's your perspective, Nic, on the offensive side here?
So, from the perspective of attackers using the same technology, which, as you say, is now drag-and-drop, click-and-go, it becomes democratized on the other side as well. What's your prediction on how this starts to evolve, and the threat to organizations?
Nic Chavez: Maybe prediction would be too strong a word because I hope it doesn't happen.
Grant Waterfall: Right.
Nic Chavez: Yes. Drag-and-drop, right. At least the script kiddies back in the day needed to know a little bit of Linux and a little bit of Bash, right, in order to attack an organization. Let's go back to the example of the diligent colleague or diligent employee that you'd love to have that is autonomous and adaptable and goal-oriented.
We all want those things. Now, what we don't want is a hacker who is capable enough to drag and drop an associated vector, like a certain dataset, into a vectorized database and then deploy that agentic AI once, 10 times, 100 times, 1,000 times, a million times.
Now, all of a sudden, you have this very intelligent, self-referring and very, very adaptive, basically DDoS, tool, right? It's attacking your firewalls 24 hours a day and never rests and never sleeps, and so on and so forth. Right.
Once it has penetrated, it can go in and do lateral movement, because it's been trained to do that. Anything you can train a human to do with a keyboard and an interface, you can train an AI agent to do.
Now, if you have a million of those entities all coming at you all the time, there's a very low cost of implementing that. There is a very low barrier to entry. So, not only will you have nation states that are controlling millions of potential attackers, you might have someone down the street doing the same thing.
And so we as an industry need to come up with some better identification and better tools to address that before it happens. Because when it happens, that's going to be a huge fire to fight.
Grant Waterfall: Okay. Any other thoughts on that one?
Hoda Al Khzaimi: I think we should put in place design guidelines that insist on reliable design and on secure and resilient design when we are talking about any type of technology that does mass analysis and automation at the same time.
Mass analysis and automated decision-making, because the human is not in the middle to catch any kind of malicious intent in the system.
Following on from what you just explained, Nic, I think having that kind of self-regulated resiliency programmed into AI agents is quite important from the get-go, where an agent would be able to detect if it is being tweaked for a malicious purpose, stop that immediately, and then self-evolve or self-heal if there is any type of attack happening against it in the system.
And I think that's the missing element: the ability to stop and hedge any type of risk introduced into the system, either by functional errors or by intended, malicious human acts.
That's what we are missing today in the design of mass technology relating to automated systems and AI: how can we make sure, inherently and by design, that those technologies are responsible and resilient and self-regulate when it comes to those capabilities?
It's interesting that we want to automate them for functionality, but we should also automate them for security. That's the purpose: we should automate them to be secure and resilient and able to detect the surrounding threat map.
Nic Chavez: That's a great point, I think. You know, I have a lot of grey hair now, so I'm old enough to have gone through several decamillion-dollar ERP implementations.
Now, designing an ERP implementation secure-first, secure by design: great. Right. The thing with these AI agents is that they're so simple to spin up that you could literally create something and then have a combative agent to counter that agent spun up two minutes later. Right.
So there's a certain amount of systemic framework that we need to agree upon in order to prevent that from being callously or casually used, right? Because not every person creating an AI agent will have the personal expertise to know to do those things right.
So they have to be systemically implemented, and that has to be decided between all of us in industry, in concert with the regulators and the standards organizations.
Matan Getz: And I would love to highlight not just the urgency but also the opportunity here. To what you both said, I believe security used to be very reactive.
To your question, what usually happens is that I see security leaders waiting: let's see what value we will get out of AI agents, let's first understand the adoption of AI agents and then decide if it's on our prioritization list, because it's always about prioritization; risk is everywhere.
"I'm not sure this is the right time to start working on protection and security guidelines and policies for agents, because I don't know how widely they will be adopted." I think we should think about it the opposite way.
There is an opportunity here for security leaders not to be reactive but to be proactive: set the guidelines first, set the right design first, and only then actually be one of the leaders, maybe even of the adoption, and show their own use cases for adopting AI agents.
I think this is a failure that I'm seeing a lot of the time, and AI in general, and AI agents specifically, present a new opportunity here.
Hoda Al Khzaimi: Yeah. And to your point, Matan, I think we tend to design and release technology into the actual, situational environment as soon as possible. What we need to do here is allow an incubation process, or a sandboxing process, where you have the freedom to design AI agents.
But at the same time, the risks are well hedged within that environment. Then you can later decide to scale it up, scale it down, or not use it at all if it introduces specific, magnified risks to the platform.
Grant Waterfall: It's interesting that you also gave the ERP example, because I worked in this area for many years as well. And, you know, in the end it was partly the responsibility of the software provider to ensure a controlled and secure system.
But actually, most of the effort fell to the company itself to configure, implement and put the monitoring and control around it.
So I guess it is going to be a shared responsibility. But to your point, Matan, security leaders need to get ahead of this one before it bites us, because I do remember all the assurances that nothing could go wrong with ERP, and things certainly did. Right. So, good. But let's move on, then, to touch on the defensive side.
So, how can AI agents potentially be used to make companies more secure? That's in detection and response, and in the automation of response as well. Obviously there's huge opportunity there if we start to use them.
There's also some risk in the balance between automation and control, some of which you've already mentioned. I'll leave it to whoever would like to start: how can we use this in cybersecurity?
Nic Chavez: My first computer had a lock on it so you couldn't turn it on. And that was kind of like, "Hey, nobody's getting around this, right?" And so as any industry progresses, the subtlety of the attacks and the defence becomes more sophisticated.
And I think right now, as CISOs, we need to be thinking about this: we have network traffic anomaly detection, right? These agentic AIs can be programmed to generate and simulate network traffic that is similar to the existing traffic in the system they're trying to target, right?
So now we as defenders have to build an AI agent to detect an AI agent that's simulating the existing traffic on the network in order to penetrate the network, right?
When you look at how meta that is, and the level of specialization that the people developing these AI agents will need to have, you realize the amount of power the AI agent has and the amount of trust you must place in the person or people who will be developing the agentic defence to the agentic attacks.
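As a toy illustration of the baseline anomaly detection Nic refers to, the sketch below flags traffic volumes that deviate sharply from a learned baseline; the feature and threshold are hypothetical and far simpler than production systems. It also shows why an agent that closely mimics normal traffic slips past such a detector, which is the arms race he describes.

```python
# Toy baseline anomaly detector for network traffic volume (hypothetical numbers).
from statistics import mean, stdev

def is_anomalous(baseline_bytes_per_min: list[float],
                 observed: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag traffic whose volume deviates more than z_threshold standard deviations
    from the historical baseline."""
    mu, sigma = mean(baseline_bytes_per_min), stdev(baseline_bytes_per_min)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > z_threshold

baseline = [980.0, 1010.0, 995.0, 1005.0, 990.0, 1000.0]
print(is_anomalous(baseline, 5200.0))   # crude flood: flagged -> True
print(is_anomalous(baseline, 1002.0))   # agent mimicking the baseline: not flagged -> False
```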
Grant Waterfall: Wow. That means a lot of new skills and thinking differently. Hoda?
Hoda Al Khzaimi: And we should also be mindful about the loss of information, or fragmented information, within AI agents, because in some scenarios they work in a decentralized manner in order to analyze and detect anomalies. That's how we work in cybersecurity, right?
We need the fingerprint of the attack spectrum's effects in order to analyze any of these. But what would happen if the agents also had the capability to scale up, to predict the next cycle of attack and maybe to mimic that next cycle of attack?
Would that still be considered just a defence-layer approach, or is it going to introduce a different level of complexity, attacking and defending at the same time within the network?
And the computational power of those resources is something we should discuss, because on a normal day you have specific boundaries on edge computing devices, and most of the time, I would assume, you will not be able to run all of them in the cloud at a given point in time. Right?
So you would have to push this out to the edge sensors or edge computing devices and allow them to compute and learn on the device. But how many of our devices have the capability to do wide, mass monitoring of those structures and anomaly detection across such a wide surface?
So my concern here is about the ability to scale up beyond the defence scenarios we have available at the moment.
Grant Waterfall: Ah. OK.
Matan Getz: To your question, we all have beautiful examples of the advantages of AI agents working in favour of cybersecurity. Two weeks ago, Google's Project Zero published that, for the first time, an AI agent had found a zero-day vulnerability.
This is a beautiful example, and I think it is just the beginning of a new wave of implementations that we will see of AI agents supporting and working for cybersecurity.
Grant Waterfall: Huge opportunities there. I think we've got about 10 minutes to go, so I'm going to pause here and see if there are any questions from the audience. Hopefully, we haven't baffled you all with AI agent speak here. I don't have any questions coming. I think there's a question. We've got two actually.
Audience member 1: Hi, my name is Carrie-Anne. I'm just wondering, based on the discussion about the use of AI and how these systems can learn from each other and even mount attacks against each other by learning from each other.
Do you see a world where the cat is already out of the box, or out of the bag, pretty much? You're speaking of regulation, you're speaking of frameworks, but the technology is already being used, and frameworks, just like other cyber tools, would only be followed by responsible, ethical users.
Do you see a world where we would end up having companies on the web, for example, stepping up to be defenders against those who refuse to use the technology responsibly?
Just listening to you, it seems as if we're ramping up to the point where we're going to have people having to do exactly what you've said: train an AI to find the bad AI, to stop the bad AI. Yeah.
So I just want to ask: do you see a world where it gets to the point where we have to go beyond the business model of "I'm first to market" to "I now have to defend the market"?
Grant Waterfall: It's a good question. Yeah, Matan. And then, yeah.
Matan Getz: So first I want to say that good always wins; the good guys are always winning, OK? So I don't like to sell fear. OK.
Grant Waterfall: The cat isn't out of the box.
Matan Getz: Yes. Yes. So we are good. I do believe that external regulations and those kinds of practices are behind. I'm surprised to see that what's leading the innovation, and the adoption, of secure AI is what companies are doing for themselves. They have self-governance, they have self-regulation, they have their own policies. Now, is it easy to define these kinds of policies?
No, it's difficult; it takes time. Is it easy to enforce those kinds of policies? No, it's very difficult. But this is where I see lots of financial institutions and manufacturing companies, giants, taking the lead in actually deciding what the secure way to deploy these kinds of technologies will be.
And they are not waiting for any external frameworks or regulations to come, because they understand that those forces are all behind, and if they waited for them, they would be behind too.
Grant Waterfall: Yeah. That's very reassuring.
Hoda Al Khzaimi: That's an amazing question, and I think an amazing visualization of a battlefield, a field of agents built for security. And I think we've been running away from our demons in the cybersecurity field for a very long time.
Our demons are the lack of governance structures amid all of the heterogeneous networks and platforms we're building in multiple sectors. We have to face up to governance structures for these kinds of new innovations, including agents and many other emerging technology concepts, and, very soon, the lack of agile regulation for devices.
I mean, it now takes us three years to build a regulation that kicks into the field, while devices are being introduced on a monthly basis, maybe with different variations and versions of updates within a few weeks. So, in no time at all.
So I think we need to build agile regulations that would also allow those agents to self-regulate, because you need to get them to learn about regulation structures and hedging structures, so that we don't have the kind of chaotic battlefield we have in mind at the moment.
Unfortunately, I think the mass of risks we see today exists because we don't have security scientists, cyber scientists and developers involved in designing the agents and other emerging technology tools.
You have amazing scientists and amazing developers racing to bring technology to market for profitability, but we also need to bring responsible technology to market for ethical and responsible use across the board.
Grant Waterfall: Very, very good point. Did you have something to add, Nic?
Nic Chavez: Yes, briefly. I think that the hyperscalers will necessarily be the on-ramps for this. Right?
Because it's easiest to say, look, I've got a budget of 10,000 or 100,000, I'd like to deploy a database and start building agentic AI. So, if the regulators were to focus anywhere, and I'll probably take some flak for saying this, they might want to think about working with the hyperscalers first.
Now, to Matan's point, companies will need to build new muscles, right? And the new muscle they're going to need to build is assessing what their information is valued at and then calculating how much they're willing to spend to protect it, because the adversary is doing that very same calculation.
So if someone has the ability to go and spin up an AI agent, do it on a hyperscaler or maybe one of the lesser-known cloud platforms, build a Unix or Linux box and get it ready to rock and roll, maybe they'll spend 20,000 to make 13 million, right, through that agent.
So corporations really need to have an agile process to identify their information, how it is being protected, how it's being deployed agentically and what they're willing to do to protect it, right, on the investment side. And they need to do that on an almost continuous basis.
It's like a change-process meeting, right? OK, let's talk about it for this month, or this week, or this period of time, because someone else is always evaluating the value of your information and how much it's going to cost them to take it.
Hoda Al Khzaimi: Fantastic. I would like to add to what Nic just said that I think it's quite important to advocate for open-source platforms here, because you want to see what's under the hood, you want to see what's being developed, and to allow for transparency, so you can cross-analyze and prove the security status of the GenAI product you have in mind.
Nic Chavez: True open source.
Hoda Al Khzaimi: Right. True open source. I agree with you.
Grant Waterfall: Let me just check with my panel. Have we got enough time for a question? One more. Okay, good. I think so. Yes.
Audience member 2: [Unclear] I have a question which builds on what you discussed briefly. You know, running and operating agents is difficult. If you look at the history of guiding agents, it has not been free of surprises, and we see this in cyberspace now as well.
Intelligent agents do not always stick to the rules as expected. But that alignment with the original intent with which they were built is the prerequisite for trust, and for giving them more important business tasks as well.
As validation and verification of AI lie far in the future, perhaps we need something in between to monitor and react to unproductive or unwanted behaviour of agents earlier.
So what would be your take on safeguarding that the digital mind does what is expected of it?
Matan Getz: Yeah, well, this is exactly what we do, so thank you for the question. There is a need for in-line, deep, real-time guardrails that limit what AI agents can and cannot do. These are the expectations I was talking about.
So, if it's a loan underwriting assistant, as an example, we need to decide what kinds of topics should be discussed with this kind of assistant and what topics should not be discussed.
The fact that we can describe everything in natural language is an advantage, because now we can set the expectations in natural language. But it's not that easy to define it strictly and still leave enough room for the AI agent to do whatever is needed.
At the same time, it has to be limited enough to stay within the boundaries, and that requires expertise. But I agree with you that there is a need for new technology that will monitor and safeguard AI agents, and will do that in real time. The existing security stack is not capable of doing so.
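As a toy illustration of the in-line topic guardrail Matan describes, here is a sketch for a hypothetical loan underwriting assistant. It uses simple keyword matching; a real guardrail would classify intent with a model and run on both prompts and responses in real time.

```python
# Toy topic guardrail for a hypothetical loan underwriting assistant.
ALLOWED_TOPICS = {"loan terms", "interest rate", "repayment schedule", "income verification"}
BLOCKED_TOPICS = {"medical history", "political views", "investment advice"}

def check_request(user_message: str) -> str:
    """Return 'allow', 'block' or 'escalate' before the message ever reaches the agent."""
    text = user_message.lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "block"                      # hard stop: outside the agent's purpose
    if any(topic in text for topic in ALLOWED_TOPICS):
        return "allow"
    return "escalate"                       # unknown topic: route to a human reviewer

print(check_request("What repayment schedule can you offer me?"))   # allow
print(check_request("Can you give me investment advice instead?"))  # block
```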
Grant Waterfall: With that, I'm going to have to close this panel; we're out of time. I want to say thank you very, very much to our panellists. I think that was a fantastic panel, with a great diversity of perspectives.
And honestly, I think this is a huge emerging topic for us, and hopefully that's been clear to everybody. You heard it here first. So thank you, everyone. Let's close the panel. Thanks to everyone for attending, and to those of you joining online.
Sean Doyle and Natalia Umansky
2 December 2024