Scott and Mark learn responsible AI

Thursday, November 21
5:00 PM - 5:45 PM Greenwich Mean Time
Duration 45 minutes
BRK329
On Demand
Breakout
In Chicago + Online

Join Mark Russinovich & Scott Hanselman to explore the landscape of generative AI security, focusing on large language models. They cover the three primary risks in LLMs: Hallucination, indirect prompt injection and jailbreaks (or direct prompt injection). We'll explore each of these three key risks in depth, examining their origins, potential impacts, and strategies for mitigation and how to work towards harnessing the immense potential of LLMs while responsibly managing their inherent risks.

Profile picture of Mark Russinovich

Mark Russinovich

CTO, Deputy CISO, and Technical Fellow, Microsoft Azure
Mark Russinovich is Chief Technology Officer and Technical Fellow for Microsoft Azure, Microsoft’s global enterprise-grade cloud platform. A widely recognized expert in distributed systems, operating systems and cybersecurity, Mark earned a Ph.D. in computer engineering from Carnegie Mellon University.
Profile picture of Scott Hanselman

Scott Hanselman

VP Developer Community
Programmer.