Application security engineers are constantly asked to do more with less. We're expected to be good engineering partners, strong software engineers, architects, and even the occasional source of comedic relief. What if there were a way to manage all these demands effectively while still leaving room to breathe? Large language models (LLMs) and generative AI offer powerful tools for streamlining research, analysis, and documentation, freeing engineers to focus on strategic security decisions. This talk explores how these technologies can be leveraged to gain actionable insights, identify key areas of focus, and enhance AppSec workflows.
Viewers will learn how to:
Create clear, relevant, and accurate security documentation
Leverage AI insights to pinpoint critical areas of focus within applications
We also delve into the role of existing and emerging AI-powered tools in the industry. Some particularly interesting areas include:
Automated vulnerability remediation
API fuzzing and input/output validation
Automated context-aware threat modeling
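As one concrete illustration of the last item, context-aware threat modeling can start with something as simple as turning a structured architecture description into an LLM prompt. The component/data-flow schema, function name, and prompt wording below are a hypothetical sketch, not any particular product's approach:

```python
# Hypothetical sketch: build a context-aware threat-modeling prompt
# from a lightweight architecture description. The schema and wording
# are illustrative assumptions, not a specific tool's API.

def build_threat_model_prompt(app_name, components, data_flows):
    """Turn components and data flows into an LLM prompt asking for
    STRIDE-style threats and mitigations per data flow."""
    lines = [
        f"You are a security architect. Threat model the application '{app_name}'.",
        "Components:",
    ]
    for name, desc in components.items():
        lines.append(f"- {name}: {desc}")
    lines.append("Data flows:")
    for src, dst, data in data_flows:
        lines.append(f"- {src} -> {dst}: {data}")
    lines.append(
        "For each data flow, list STRIDE threats, likely impact, "
        "and a concrete mitigation."
    )
    return "\n".join(lines)

prompt = build_threat_model_prompt(
    "payments-api",
    {"gateway": "public REST gateway", "db": "Postgres storing card tokens"},
    [("gateway", "db", "tokenized card data")],
)
print(prompt)
```

The resulting prompt would then be sent to whichever model the team uses; the value is that the application's real context (components and data flows) rides along with the request instead of a generic question.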