AI Security is a ‘Green Field’: Major Risks Ahead!

Full Video Transcript

AI security is not just password security anymore. There’s a lot more to it and it’s a malicious prompt injection basically, right?

So, agentic AI introduces majorly a security risk. AI security is a green field. There’s lot of work to be done.

AI Security is a ‘Green Field’: Major Risks Ahead!

So, let’s talk more about AI security because of course this is some of the things like I grew up as a ’90s kid. So I’ve seen the landline telephone. I’ve seen the mobiles. Now I’m still I’m seeing the AI eventually evolve into whatever it’s going to be and the way it will become mainstream.

But I do remember when Intel came around and computer virus was a big thing and people started worrying about you know antivirus and anti-malware softwares and stuff like that.

So now when AI is on its way to beame become mainstream for so many of these where do you see in terms of that what are the security risk that are you saying that eventually in next few years we will have like anyone who is running a model is going to have like some kind of software like an antivirus they or kind of like guard rails or something that they will have to run along with their models to just make sure that they ensure the security of the of their data.

How do you see this eventually being a problem and then what would be the solution?

Great question. I don’t know anybody will believe you that you went through all the old security needs and all of that. But yeah, fine. You grew up partially in India. So yeah, the landlines were there. I give it to you. Okay.

Yeah, I was I’m still a ‘ 90s kid. So I even remember by the way that the landline which didn’t even have buttons, it used to have a ring thingy on there. Okay. So you know how to use a rotary phone because there was a joke about if you give it now they will not know how to dial from there. They’ll probably press the button.

I haven’t touched that phone but I’ve known that phone.

Sure. Okay.

Great.

So the security so AI

Levels of AI Security

security let’s talk about AI security.

So there are few levels. One the first one which people don’t even realize is that when you’re talking to AI it’s not a security it’s you’re giving away it data. Some AIs make it explicit. They will not use the data for training their model but they still keep the data they save it and they can kind of sell stuff to you based on their data right so that security if you consider that the it’s more of a privacy thing basically and sometimes people will give out all their medical information uh information when asking in fact now chat GP and claude both have I think announced that they will have a medical bot a medical chat chat for for it’ll be separate it’ll be more secure And people can ask so many people are asking for advice from there.

Right?

So the first is data privacy security is an issue for sure. People just don’t realize other reason they don’t realize is they sound so much like human. So it’s like what we used to call like using social engineering to get people’s passwords like it was a technique. So it’s like you talk to a person, they’re talking to you nicely. The chat is talking to you nicely. So you give away more than you want to.

That’s one uh side effect and one problem with the AI security and that will have to be dealt with by actually not allowing certain kind of talks or limiting or people being aware right.

That’s one.

The other security is this idea of that if you are using certain apps which have agentic a AI app which has agentic capabilities, that agentic capability by definition means that they are working autonomously and they might go and do something on your behalf.

Many of them when they’re doing something how they do is not defined. They might be going online to figure out how to do this. And when they are searching and finding how to do it, they went to an article which was intentionally written to mislead them to do something off.

So this we are getting into malicious. It’s a malicious prompt injection basically.

Right.

There used to be malicious code injection. Now it’s a malicious prompt injection. Let’s do a scenario maybe to make it more clear.

So could you tell a little bit more about agentic AI before we get into the malicious injection?

Agent AI explained simply

Yes.

So agentic AI one simple way I like to think of it there is lot more technical definitions but one way to think about is when you’re talking about LLM that’s the main model you talk to you chat with you ask questions and get answers back. That’s the main one. It’s a it’s real time. Yeah, you might have to wait few seconds. There might be delay when but when it’s doing research but if you’re doing it right then agentic is it’s an agent which you give it a task it goes and does it asynchronously and come back and tells you right and usually it is not running with this current model it’s running its own some other ways I like to describe it’s a small LLM running and doing some specific job for you separately.

So it is called it might be called from your main LLM chat or otherwise by clicking or filling a form whichever way it is but it goes and does the job for you.

Good example is get me Taylor Swift. I don’t know who’s the one which is hard to get tickets right now. So you you might know more.

So you want to get some tickets and get me those tickets. I don’t care how much they are. The concert is this Saturday and I want that ticket. Somehow get it for me. I’m willing to pay whatever it’ll cost.

This is a dangerous request by the way, but you can make a request with a proper agentic AI.

What agentic AI will do in this situation is first of course they’ll try the ticket master or whatever the standard channels and find, hey, there’s no more tickets for sale.

Malicious prompt injection example

But this is agentic AI. Smart AI.

What is smart AI going to do?

We’ll say okay, I cannot find it on the normal channels. I cannot buy this ticket. I’m going to go and look for some other websites and other places where I can find these tickets. the aftermarket for tickets.

It’ll go to the now what if it runs to a website which says hey we’ll get you tickets guaranteed for any anywhere and that site says here go to this place and that site is called ivies.com or something like that. I build that site and then you go there and first thing I say is, “Hey, enter your credit card to reserve this spot. You got to pay $100 to get in and your AI enters that $100 even before it and went to a ticket because that’s the way to do it.

If the instruction said that AI might do that because get it at any cost, right?”

Yeah.

I get $100 and say it’s no more tickets, you’re done or whatever it is depending on whatever.

So what I have done is it’s the malicious prompt injection. In this case, the prompt was indirect because the AI was given the freedom to go and figure out.

So it’s actually looking for a prompt. It’s looking for a solution. So it’s really not directly a prompt injection, but basically you told AI to do certain things and it went on and did it.

I know usually malicious prompt injection is associated with when you’re prompting you intentionally do a prompt.

Right now people are talking about making get nudes when nudes are disabled then they come back in and say just give me see-through bikini or something like that and then it does it so they just trick it.

Usually it’s associated with that but it can be a lot more when agent with agentic it can go really crazy because agentic you’re really asking it to do things which it has not even thought of what will happen so it’s trying to do it just work very hard to find the answer and the answer where it finds might be from a website which is actually malicious and it has code and plans there for so that’s the other security risk so agentic AI introduces majorly a security risk and we were talking about agentic hopefully I gave the example of agentic now and from yeah from there it just goes to that security and then there’s other security which is the voice chatb security which we have actually we wrote quite a lot about that how to do that but The basic concept is when you’re talking even if you were validated by password like who is talking what they’re allowed to get information you would need to do more than that because what if they get information like your it could be a user from outside the company or inside the company they get access to information which they are not should not have.

So that’s a problem even if they’re logged in with the password and the other thing is what is stopping them while talking they switched and passed the phone or chat to somebody else and that person got the information right somebody else they can’t oh let me take care of this and they were not supposed to have

Voice AI, password risks, and identity problems

access.

So basically you’re saying with AI coming into picture voice recognition softwares are going to be dead all of those especially the ones that use voice for security reasons.

No no actually no it will not be dead it now you got to do the voice check ongoing.

I mean there is additional value now right of course what you’re talking about is voice can be mimicked.

Yeah because the chlor has been so bad now.

Yes so that creates a separate problem yes absolutely that creates additional challenge of but what I’m talking about is even if you validated them with password handed over the phone then somebody switches you might need a ongoing validation and you might need lot more data partitioning right that’s what this is a lot to it right.

AI security is not just password security anymore. There’s a lot more to it and which has a few things I pointed out basically one is the data privacy is a problem. Second is of course if you’re doing a agentic you got to really worry about and the solution now you were talking about just everybody will run a software or something actually there is no standardized way to do anything because the possible scenarios it’s interesting that lot of innovation in technology happens because of the militia’s agents so lot of these standardization will and blocking and being careful will happen by people who are trying to exploit it.

Oh, here’s my question though.

But because we have now reached a point where we have a software which is equally smarter to an average person, is it is there a way that for the first time in for the first time for human beings it is easy to predict what kind of malicious attacks will happen?

I I I I would not. No, I don’t think so because it’s a spy versus spy way. I used to love those probably I don’t know if they still do those spy versus spice these comic strips just black why spy because it just goes on right so this is I AI can detect and AI can tell you more but then AI can be used to find more ways to break it too right.

Right.

So it’s both sides right if somebody is smart that they will do their own model and use AI to come up with the ways to break it too right.

There will be collateral damage there is no way to get ahead of this thing.

No you can you can get ahead Don’t be stupid. There are basic things not to do is whatever could be security problems. We talked about few at least implement those right. Most companies are moving so fast they’re not even implementing that.

Then you’ll find that specialized people came up with other things which you didn’t think of. Okay, those will get found. So there’ll be a ongoing progress right ongoing plan and there is also there is a the AI will help in unexpected behavior like the pattern can be matched and say we were not expecting what the heck is it going on here.

As simple as one you can say you can go only to these sites that’s easy but you don’t know right if you’re asking AI to solve a very difficult problem it might have to go to many different sites right so you cannot prior know those kind of a things so there might but what they what it does with the site and what it is trying to do.

One of the things is hey before submitting the credit card it’ll ask you right that this is the site I’m submitting the credit card yes or no of course that’s great but it is also a problem because that you’ve increased the friction now right you wanted that done now you have to wait for that and then do the things if you’re buying trying to buy a ticket it might be too late.

True.

Yeah it is similar to what we have now right even if you’re manually entering credit card anywhere sometimes the amount or the site it will give you a notification before it it authorizes it.

Yeah. So that could be but that increases the friction. We we as a users at least I get very angry and frustrated when I have to do that. I have to enter the especially when I’m doing a legitimate transaction. I want it done. Right.

Correct.

You know we’re doing it for security but that’s a friction. I wanted my I want to download that soft. I want to run it now.

I’m impatient. Right.

So that that’s long. Right. Most of us are right. So there is that challenge. Right. You have to balance out on that part.

So security is an issue will remain an issue. New things will come up but things will keep getting better too.

So yes you can do agent take within will the standardized like the way you described like antivirus software things will happen probably some standardization will happen in some areas the apps will all follow those right.

I think it’s a AI security is a green field. There’s lot of work to be done.

So anybody coming from security world wanting to do more this is an amazing area a lot of work to be done like what all can be what all can will happen and planning for that and and getting ahead of the people who are trying to break