Yair Netzer

Microsoft (Israel)

Responsible AI – what it is and how to test for it

The introduction of AI, and specifically of LLMs, into our lives has sparked innovation and a rush to include AI capabilities in new features.
However, deploying new AI-powered features is not without obstacles. LLM responses are sometimes rude, argumentative, or simply wrong.
This has led to the understanding that, for all their current power, LLMs often cannot be used "as-is": we need to build safeguards into our feature development and test for them.
The term "Responsible AI" (RAI) was born out of the need to identify the main risks that working with LLMs can create and to address them.
In this talk we will explain RAI, give examples of LLM faults, and provide suggestions for testing.
We will also connect this new topic to security and show how the RAI process can benefit from classic security practices such as 'design review'.

Yair is a Principal Security Research Manager at Microsoft, with over 15 years of experience in security research.

As part of his role, he oversees AI red-team activities, in which AI models are tested for RAI faults as well as for potential security issues.