AI + a16z

How to Think About Foundation Models for Cybersecurity

Episode Summary

a16z's Zane Lackey and Joel de la Garza discuss the state of the cybersecurity market vis-à-vis generative AI, foundation models, and large language models — and explain why 2024 could be a watershed year for security teams.

Episode Notes

In this episode of the AI + a16z podcast, a16z General Partner Zane Lackey and a16z Partner Joel de la Garza sit down with Derrick Harris to discuss how generative AI — LLMs, in particular — and foundation models could effect profound change in cybersecurity. After years of AI-washing by security vendors, they explain why the hype is legitimate this time, as AI presents a real opportunity to help security teams cut through the noise and automate away the types of drudgery that lead to mistakes.

"Often when you're running a security team, you're not only drowning in noise, but you're drowning in just the volume of things going on," Zane explains. "And so I think a lot of security teams are excited about, 'Can we utilize AI and LLMs to really take at least some of that off of our plate?'

"I think it's still very much an open question of how far they go in helping us, but even taking some meaningful percentage off of our plate in terms of overall work is going to really help security teams overall."

Follow everyone:

Zane Lackey

Joel de la Garza

Derrick Harris