In this episode of the AI + a16z podcast, Command Zero cofounder and CTO Dean de Beer joins a16z's Joel de la Garza and Derrick Harris to discuss the benefits of training large language models on security data, as well as the myriad factors product teams need to consider when building on LLMs.
Here's an excerpt of Dean discussing the challenges and concerns around scaling up LLMs:
"Scaling out infrastructure has a lot of limitations: the APIs you're using, tokens, inbound and outbound, the cost associated with that — the nuances of the models, if you will. And not all models are created equal, and they oftentimes are very good for specific use cases and they might not be appropriate for your use case, which is why we tend to use a lot of different models for our use cases . . .
"So your use cases will heavily determine the models that you're going to use. Very quickly, you'll find that you'll be spending more time on the adjacent technologies or infrastructure. So, memory management for models. How do you go beyond the context window for a model? How do you maintain the context of the data, when given back to the model? How do you do entity extraction so that the model understands that there are certain entities that it needs to prioritize when looking at new data? How do you leverage semantic search as something to augment the capabilities of the model and the data that you're ingesting?
"That's where we have found that we spend a lot more of our time today than on the models themselves. We have found a good combination of models that run our use cases; we augment them with those adjacent technologies."