Jon Scheele (00:00)
So I'm really pleased to welcome back Buu Lam to Singapore. The last time I saw you was a few months ago at GovWare. We're here at F5's AppWorld event and really, really pleased to welcome you. Perhaps you could introduce why you came to Singapore and what you do at F5.
Buu Lam (00:20)
Yeah, well, actually since we last spoke my title changed, so I'm now the Director of Community Evangelism at F5. Our community is the F5 DevCentral community. It's been around for over 20 years at this point, and it was really born out of the programmability of our BigIP platform: giving people a space to collaborate on our data plane programmability as well as our control plane programmability, including APIs. There's a REST API for BigIP that you can work with, and we have a declarative interface as well. As one of the community evangelists, as well as being the director of community evangelism, I go out to events, speak to the community, and gather those insights.
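To make the control-plane programmability Buu mentions a bit more concrete, here is a minimal Python sketch of preparing a call to BigIP's iControl REST API. The hostname, credentials, and virtual-server listing are hypothetical examples; the `/mgmt/tm` path convention is the one assumption carried over from F5's documented REST interface.

```python
# Hedged sketch: building an iControl REST request for a BigIP device.
# Host and credentials below are placeholders, not real endpoints.
import base64

def icontrol_request(host: str, user: str, password: str, path: str) -> dict:
    """Build the URL and headers for an iControl REST GET request."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "url": f"https://{host}/mgmt/tm{path}",
        "headers": {
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
    }

# List LTM virtual servers (send with any HTTP client, e.g. requests.get):
req = icontrol_request("bigip.example.com", "admin", "secret", "/ltm/virtual")
print(req["url"])  # https://bigip.example.com/mgmt/tm/ltm/virtual
```

In practice you would pass `req["url"]` and `req["headers"]` to an HTTP client; the declarative interface Buu refers to works similarly, but with a single JSON document describing the desired configuration rather than imperative calls.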
Jon Scheele (01:01)
So in our last conversation we talked a bit about API observability, and I think that's still a very relevant topic. We had apidays Singapore last month, and F5 was a great supporter of that. Chuck Herrin from F5 came to speak, and what is really clear about F5's priorities is that you not only want to help organizations integrate, but to integrate securely. And Chuck's experience around security is pretty clear.
What I also noticed today from the sessions is that F5's AI Gateway is now generally available. You've had an API gateway for a long time, and now you've extended your offering to your customers with an AI gateway. Can you tell us a little about the key things you see happening with AI gateways, and where that plays into what organizations should really be thinking about in terms of security?
Buu Lam (02:05)
That's a great question. People have asked before: you have an API gateway already, so why did we go out and build an AI gateway? It's actually a different architecture from our traditional approach of sitting in front of APIs and doing API security.
And the key thing is that when you look at the BigIP platform, we have programmability on the data plane through scripting, and it's event-based. With AI Gateway, what we're doing is we have a processor model, and it's actually running WebAssembly; the processors are compiled into WebAssembly. So now you have the ability to actually build an application and take AI flows and run them through the processors.
Traditionally, our method of injecting and adding extra features has been through a Tcl scripting language: based on an event, perform these actions. Instead of that flow, with a processor you can just write whatever you want.
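The contrast Buu draws, event-based hooks versus a chain of processors, can be sketched conceptually. This is not F5's implementation; it is a minimal Python illustration of a processor pipeline, where each processor is an arbitrary function that can inspect, modify, or short-circuit an AI flow, and the processor names are invented for the example.

```python
# Conceptual sketch of a processor pipeline for AI flows. Each processor
# receives the whole flow and may transform it or block it outright.
from typing import Callable

Processor = Callable[[dict], dict]

def run_pipeline(flow: dict, processors: list[Processor]) -> dict:
    """Run an AI flow through each processor in order."""
    for proc in processors:
        flow = proc(flow)
        if flow.get("blocked"):
            break  # a processor can short-circuit the rest of the chain
    return flow

def tag_language(flow: dict) -> dict:
    flow["lang"] = "en"  # placeholder for real language detection
    return flow

def block_long_prompts(flow: dict) -> dict:
    if len(flow["prompt"]) > 4096:
        flow["blocked"] = True
    return flow

result = run_pipeline({"prompt": "Summarise our Q3 report"},
                      [tag_language, block_long_prompts])
```

The key design difference from an event model is that the pipeline owns the entire request, so each stage sees the full flow rather than reacting to a fixed hook point.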
Jon Scheele (03:02)
Right. So one of the things mentioned in the session was that API gateways were really designed initially to manage internal traffic. There may be traffic that comes from outside, but once it gets into the environment, the API gateway is there to make sure that internal communication is protected. But an AI gateway needs to be aware of both the request and the response, because there are bad actors trying to get at an organization's sensitive data, and you have to be careful about all the sorts of OWASP attacks that have been prevalent in application security and API security. There are a few more in LLMs that require you not just to look at the type of request coming in, which, being text-based, can often be very nuanced and quite sneaky in how bad actors try to get around the security features, but also to look at the response created by the AI model or application and ensure that it is not breaking any rules about data exposure, particularly sensitive data exposure. So where do you see this heading? It's one thing to have a set of rules, but when the bad actors are constantly evolving and innovating, it's hard to keep updating all the rules. Where do you see this space evolving?
Buu Lam (04:43)
I think it's interesting in that we're now dealing with language and semantics, where before we would have signature-based rules on a web application firewall. We had pretty defined ways of working with an application: as long as you knew what good looked like, you could implement a rule to say, just do the good things, and if it doesn't do one of those things, then it's a bad thing. Now we are dealing with language.
Just in the last couple of weeks somebody showed me a site, I can't remember what it was, but it's basically a prompt injection game. It has different levels, and you're playing with prompt injections to get around system prompts. And it's amazing to see that simple adjectives will let you totally change the behavior and scoot around system prompts.
And I think, from an inbound and an outbound perspective, having something like AI Gateway that can process in that manner means it can understand: maybe they made this request, and the way they worded it, yes, it made sense to make that request.
And then if we look at the response as well, based on the semantics of some check that we're writing, we can understand that what we're seeing is actually not what we want to be showing to anybody, in addition to being able to do a regulatory check against PII-type data. I think that's the more interesting part: it brings in what would normally be a human element, understanding the intent of a prompt and of a response, and then being able to catch that on both the inbound and the outbound.
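An outbound check of the kind described, scanning a model response for sensitive data before it leaves, might be sketched as follows. The regexes here are deliberately crude, illustrative placeholders; a real gateway would use far richer detection, including the semantic checks Buu describes.

```python
# Hedged sketch of an outbound PII check on an LLM response.
# The patterns are toy examples, not production-grade detectors.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_response(text: str) -> list[str]:
    """Return the PII categories detected in a model response."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def redact_or_pass(text: str) -> str:
    """Withhold the response entirely if any PII category matches."""
    return "[response withheld: possible PII]" if scan_response(text) else text
```

The same shape works on the inbound side: run the prompt through a scanner before it ever reaches the model, and route anything suspicious to a stricter policy.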
Jon Scheele (06:20)
Maybe we need an agent watching for the different types of new signatures that are going to appear, able to respond to them. I think, also from the presentation, there's still a need currently to have a human in the loop for certain things, because the AI Gateway may identify something that looks like sensitive information when there is a valid reason for sharing it with a particular requester. Throwing up an alert for a human to make a decision may also have a place.
Buu Lam (06:59)
Well, think about the word agentic, or agency: giving something the agency to do something. In this case, are we giving agency to a security function that has the ability to stop a traffic flow where millions of dollars are being transacted? How comfortable are we with that level of agency, and when do we need to invoke a human? A human is able to make that call: okay, a million dollars, if I shut this down right now because I feel concerned, let's turn this off and check this out for a second, or whatever that reaction might be. So I think, yeah, it comes down to the level of agency that you trust within the tools that you have.
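The trade-off Buu describes, letting the tool act on its own agency versus escalating to a human, can be sketched as a simple policy. The value threshold and review queue below are illustrative assumptions, not anything from F5's product.

```python
# Sketch of a human-in-the-loop decision point: flagged low-value flows are
# auto-blocked, while flagged high-value flows are queued for human review.
review_queue: list[dict] = []

def decide(flow: dict, auto_block_below: float = 10_000.0) -> str:
    """Return 'allow', 'block', or 'pending_review' for a traffic flow."""
    if not flow.get("flagged"):
        return "allow"
    if flow.get("value_usd", 0) < auto_block_below:
        return "block"  # cheap enough to stop automatically
    review_queue.append(flow)  # alert a human to make the call
    return "pending_review"
```

The point of the threshold is exactly the comfort level Buu raises: below it, we trust the tool's agency; above it, the cost of a wrong automatic decision justifies waiting for a person.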
Jon Scheele (07:39)
So that reminds me: if humans want to stay in the loop, if we want to add value to the organization and use AI to augment ourselves rather than be replaced by it, what advice would you give technology professionals about how to best position themselves for the opportunities, while also recognizing that some parts of their role might go away in time?
Buu Lam (08:08)
This will be controversial, but I will say that everybody should be working to replace themselves right now. Everything that you do right now, figure out how you would replace yourself. And if you are a person who is constantly learning and trying to get better, through that activity, I believe you will figure out the next iteration of yourself as a professional.
But if you don't push yourself to replace yourself with AI, again, this sounds terrible. But if you don't push yourself to do that, somebody else might be doing that already with their new startup or whatever it is. They're building some sort of agentic flow. So you can do it now. And through that, you figure out, OK, now that I've built this, I actually realize that there's some extra things that I can do, which I still need to do. Or maybe I've replaced a bunch of my functions, and now I can add other value in this other thing I'm exploring.
But I think there's this uncomfortable ladder that you have to climb and you have to shed some of your existing skin in order to reveal that next iteration of yourself.
Jon Scheele (09:08)
Thanks, that's a great insight, Buu, and I think it's a great principle to live and work by. I really appreciate our conversation and look forward to catching up again soon.
Buu Lam (09:20)
Thank you, Jon.