ARMEDIA seminar presented by Prof. Martin Klimo, 29 May 2024, at 2:00 p.m. in room 4A467, Palaiseau

When: Wednesday 29 May 2024, at 2:00 p.m.

Where: Room 4A467, Palaiseau, or on Zoom via the following link: https://zoom.us/j/96601150911

Biography:

Martin Klimo received his diploma in telecommunication engineering from the University of Transport and Communication Zilina in 1973. From 1990 he worked as an Associate Professor, and in 2003 he was appointed full professor of Applied Informatics. His research interests include communication theory, queuing theory, machine learning, and the implementation of fuzzy logic with nanotechnology. He has focused on these disciplines mainly from a speech perspective: packet network performance for voice transmission (VoIP quality), text-to-speech systems, and speech recognition. His current interests include memristor-based fuzzy computing, anomaly detection, and explainable artificial intelligence, applied mainly to image recognition and IP networks.

Abstract: Explainable pattern recognition

Humans have rich experience applying linear models and logical thinking, but only experts understand the behaviour of non-linear systems. However, deep neural network (DNN) implementations of non-linear systems outperform optimal linear models. Feed-forward DNNs (the pattern recognition systems considered in this presentation) therefore draw attention to the need to interpret the results they produce. To preserve the high performance of the DNN, we focus on post hoc explanation; this approach means building an explainable model for the decision obtained by the black box. To avoid interpreting a set of millions of non-linear functions, we divide the DNN into two parts, the feature extractor and the classifier, and argue for a specific interpretation of each. While for classifiers we have several suitable explainable models (we settled on the fuzzy logical function), we believe that feature interpretation is a creative scientific activity comparable to ordinary research. The presentation shows a tool that helps researchers and users understand extracted features that are not necessarily known in the specific application domain. Explaining these new features is a way of learning from computers. To avoid trying to explain the inexplicable, pattern recognition must include the detection of anomalies, i.e., patterns that differ substantially from those in the training set. The talk will also show results obtained for anomaly detection with GAN-based networks.
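As a rough illustration of the split described in the abstract, the sketch below (plain Python; all feature names, membership values, and the rule itself are hypothetical, not the speaker's actual model) treats the outputs of a black-box feature extractor as fuzzy membership degrees and feeds them to an explainable fuzzy logical function, with min playing the role of AND and max the role of OR:

```python
# Illustrative sketch: post-hoc explanation of a two-stage recogniser.
# The "feature extractor" stands in for the first, opaque part of a DNN;
# its outputs are read as fuzzy membership degrees in [0, 1].
# The classifier is a fuzzy logical function (min = AND, max = OR),
# which is easy to read and explain, unlike the extractor itself.

def feature_extractor(x):
    """Black-box stand-in: maps a raw input to fuzzy feature degrees."""
    # Hypothetical features, e.g. "round shape", "textured surface", ...
    return {
        "round": min(1.0, x[0]),
        "textured": min(1.0, x[1]),
        "elongated": min(1.0, x[2]),
    }

def fuzzy_classifier(f):
    """Explainable fuzzy rule: class = (round AND textured) OR elongated."""
    return max(min(f["round"], f["textured"]), f["elongated"])

degrees = feature_extractor([0.9, 0.7, 0.2])
score = fuzzy_classifier(degrees)
print(score)  # 0.7: the decision is dominated by (round AND textured)
```

Because the classifier is a readable logical expression over named features, the explanation of a decision reduces to stating which rule fired and which feature degrees drove it; the remaining interpretive work, giving meaning to the extracted features themselves, is the creative step the abstract points to.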