Fooling AI Into Believing Turtles Are Weapons
Grazer Linuxtage 2022

In recent years, AI systems have shown remarkable abilities: playing chess, driving cars, recognizing speech, diagnosing cancer, and identifying all kinds of objects. Yet there are cases in which AIs fail in strange ways. In this talk, we will explore adversarial attacks on neural networks. The goal is to craft images that look innocuous to humans but trick an AI into believing it sees something entirely different. We will show how it is possible to make neural networks believe that turtles look like weapons, or like any other kind of object. This talk will cover:

- What kinds of adversarial attacks are there?
- How do they work?
- What are the consequences for security and safety in AI technology?
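To make the idea concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest adversarial attacks. The model choice, the helper name fgsm_attack, and the target class are illustrative assumptions, not material from the talk itself:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Any differentiable classifier works; a pretrained ImageNet model is used here.
# (ImageNet preprocessing/normalization is omitted for brevity.)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, target_class, epsilon=0.01):
    """Targeted FGSM: nudge every pixel by at most epsilon in the
    direction that makes the model favor `target_class`."""
    # image: (1, 3, 224, 224) float tensor with values in [0, 1]
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step *against* the gradient to decrease the loss for the
    # target class, pushing the prediction toward it.
    adversarial = image - epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage: 763 is ImageNet's "revolver" class, so this
# would try to make a turtle photo classify as a weapon.
# adversarial = fgsm_attack(turtle_image, target_class=763)
```

A single FGSM step with a small epsilon rarely produces a reliable targeted misclassification; attacks used in practice apply this step iteratively (e.g. projected gradient descent), but the per-step logic is the same.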

Speaker: Johannes