{"id":5416,"date":"2022-12-05T12:23:12","date_gmt":"2022-12-05T11:23:12","guid":{"rendered":"https:\/\/samovar.telecom-sudparis.eu\/?p=5416"},"modified":"2022-12-06T16:18:15","modified_gmt":"2022-12-06T15:18:15","slug":"avis-de-soutenance-de-monsieur-nathan-hubens","status":"publish","type":"post","link":"https:\/\/samovar.telecom-sudparis.eu\/index.php\/2022\/12\/05\/avis-de-soutenance-de-monsieur-nathan-hubens\/","title":{"rendered":"AVIS DE SOUTENANCE de Monsieur Nathan HUBENS"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">L&rsquo;Ecole doctorale : Ecole Doctorale de l&rsquo;Institut Polytechnique de Paris<br>et le Laboratoire de recherche SAMOVAR &#8211; Services r\u00e9partis, Architectures, MOd\u00e9lisation, Validation, Administration des R\u00e9seaux<\/h2>\n\n\n\n<p>pr\u00e9sentent<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">l\u2019AVIS DE SOUTENANCE de Monsieur Nathan HUBENS<\/h2>\n\n\n\n<p>Autoris\u00e9 \u00e0 pr\u00e9senter ses travaux en vue de l\u2019obtention du Doctorat de l&rsquo;Institut Polytechnique de Paris, pr\u00e9par\u00e9 \u00e0 T\u00e9l\u00e9com SudParis en :<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Signal, Images, Automatique et robotique<\/h2>\n\n\n\n<h1 class=\"wp-block-heading\">\u00ab Compression et acc\u00e9l\u00e9ration de r\u00e9seaux de neurones profonds par \u00e9lagage synaptique \u00bb<\/h1>\n\n\n\n<p>le mercredi 7 d\u00e9cembre 2022 \u00e0 14h00<\/p>\n\n\n\n<p>Salle Maquet<br>31, Boulevard Dolez 7000 Mons &#8211; Belgique<\/p>\n\n\n\n<p>ou via le lien teams suivant :\u00a0<\/p>\n\n\n\n<p><a 
href=\"https:\/\/teams.microsoft.com\/l\/meetup-join\/19%3ameeting_YmM4NzEzMzItZjU4Yy00NzM0LTg3NmYtNTVlMzJhYjA3YjFh%40thread.v2\/0?context=%7b%22Tid%22%3a%22488bed9d-d6a7-48d5-ba1f-ebec3823b357%22%2c%22Oid%22%3a%22a7c0263c-c5ad-43e5-b854-ec7e51d37a79%22%7d\">https:\/\/teams.microsoft.com\/l\/meetup-join\/19%3ameeting_YmM4NzEzMzItZjU4Yy00NzM0LTg3NmYtNTVlMzJhYjA3YjFh%40thread.v2\/0?context=%7b%22Tid%22%3a%22488bed9d-d6a7-48d5-ba1f-ebec3823b357%22%2c%22Oid%22%3a%22a7c0263c-c5ad-43e5-b854-ec7e51d37a79%22%7d<\/a><\/p>\n\n\n\n<p><strong>Membres du jury :<\/strong><\/p>\n\n\n\n<p><strong>M. Titus&nbsp;ZAHARIA<\/strong>, Professeur, T\u00e9l\u00e9com SudParis, FRANCE &#8211; CoDirecteur de th\u00e8se<br><strong>M. Bernard&nbsp;GOSSELIN<\/strong>, Professeur, Universit\u00e9 de Mons, BELGIQUE &#8211; CoDirecteur de th\u00e8se<br><strong>M. Ioan&nbsp;TABUS<\/strong>, Professeur, Universit\u00e9 de Tampere, FINLANDE &#8211; Rapporteur<br><strong>M. Bruno&nbsp;GRILH\u00e8RES<\/strong>, Docteur, Airbus, FRANCE &#8211; Examinateur<br><strong>M. John&nbsp;LEE<\/strong>, Professeur, Universit\u00e9 Catholique de Louvain, BELGIQUE &#8211; Rapporteur<br><strong>Mme V\u00e9ronique&nbsp;MOEYAERT<\/strong>, Professeure, Universit\u00e9 de Mons, BELGIQUE &#8211; Examinatrice<br><strong>M. Thierry&nbsp;DUTOIT<\/strong>, Professeur, Universit\u00e9 de Mons , BELGIQUE &#8211; Examinateur<\/p>\n\n\n\n<p><strong>R\u00e9sum\u00e9 :<\/strong><\/p>\n\n\n\n<p>Depuis leur r\u00e9surgence en 2012, les r\u00e9seaux de neurones profonds sont devenus omnipr\u00e9sents dans la plupart des disciplines de l&rsquo;intelligence artificielle, comme la reconnaissance d&rsquo;images, le traitement de la parole et le traitement du langage naturel. Cependant, au cours des derni\u00e8res ann\u00e9es, les r\u00e9seaux de neurones sont devenus exponentiellement profonds, faisant intervenir de plus en plus de param\u00e8tres. 
Aujourd&rsquo;hui, il n&rsquo;est pas rare de rencontrer des architectures impliquant plusieurs milliards de param\u00e8tres, alors qu&rsquo;elles en contenaient le plus souvent des milliers il y a moins de dix ans. Cette augmentation g\u00e9n\u00e9ralis\u00e9e du nombre de param\u00e8tres rend ces grands mod\u00e8les gourmands en ressources informatiques et essentiellement inefficaces sur le plan \u00e9nerg\u00e9tique. Cela rend les mod\u00e8les d\u00e9ploy\u00e9s co\u00fbteux \u00e0 maintenir, mais aussi leur utilisation dans des environnements limit\u00e9s en ressources tr\u00e8s difficile. Pour ces raisons, de nombreuses recherches ont \u00e9t\u00e9 men\u00e9es pour proposer des techniques permettant de r\u00e9duire la quantit\u00e9 de stockage et de calcul requise par les r\u00e9seaux neuronaux. Parmi ces techniques, l&rsquo;\u00e9lagage synaptique, consistant \u00e0 cr\u00e9er des mod\u00e8les r\u00e9duits, a r\u00e9cemment \u00e9t\u00e9 mis en \u00e9vidence. Cependant, bien que l&rsquo;\u00e9lagage soit une technique de compression courante, il n&rsquo;existe actuellement aucune m\u00e9thode standard pour mettre en \u0153uvre ou \u00e9valuer les nouvelles m\u00e9thodes, rendant la comparaison avec les recherches pr\u00e9c\u00e9dentes difficile. Notre premi\u00e8re contribution concerne donc une description in\u00e9dite des techniques d&rsquo;\u00e9lagage, d\u00e9velopp\u00e9e selon quatre axes, et permettant de d\u00e9finir de mani\u00e8re univoque et compl\u00e8te les m\u00e9thodes existantes. Ces composantes sont : la granularit\u00e9, le contexte, les crit\u00e8res et le programme. Cette nouvelle d\u00e9finition du probl\u00e8me de l&rsquo;\u00e9lagage nous permet de le subdiviser en quatre sous-probl\u00e8mes ind\u00e9pendants et de mieux d\u00e9terminer les axes de recherche potentiels. 
De plus, les m\u00e9thodes d&rsquo;\u00e9lagage en sont encore \u00e0 un stade de d\u00e9veloppement pr\u00e9coce et principalement destin\u00e9es aux chercheurs, rendant difficile pour les novices d&rsquo;appliquer ces techniques. Pour combler cette lacune, nous avons propos\u00e9 l&rsquo;outil FasterAI, destin\u00e9 aux chercheurs, d\u00e9sireux de cr\u00e9er et d&rsquo;exp\u00e9rimenter diff\u00e9rentes techniques de compression, mais aussi aux nouveaux venus, souhaitant compresser leurs mod\u00e8les pour des applications concr\u00e8tes. Cet outil a de plus \u00e9t\u00e9 construit selon les quatre composantes pr\u00e9c\u00e9demment d\u00e9finis, permettant une correspondance ais\u00e9e entre les id\u00e9es de recherche et leur mise en \u0153uvre. Nous proposons ensuite quatre contributions th\u00e9oriques, chacune visant \u00e0 fournir de nouvelles perspectives et \u00e0 am\u00e9liorer les m\u00e9thodes actuelles dans chacun des quatre axes de description identifi\u00e9s. De plus, ces contributions ont \u00e9t\u00e9 r\u00e9alis\u00e9es en utilisant l&rsquo;outil pr\u00e9c\u00e9demment d\u00e9velopp\u00e9, validant ainsi son utilit\u00e9 scientifique. Enfin, afin de d\u00e9montrer que l&rsquo;outil d\u00e9velopp\u00e9, ainsi que les diff\u00e9rentes contributions scientifiques propos\u00e9es, peuvent \u00eatre applicables \u00e0 un probl\u00e8me complexe et r\u00e9el, nous avons s\u00e9lectionn\u00e9 un cas d&rsquo;utilisation : la d\u00e9tection de la manipulation faciale, \u00e9galement appel\u00e9e d\u00e9tection de DeepFakes. Cette derni\u00e8re contribution est accompagn\u00e9e d&rsquo;une application de preuve de concept, permettant \u00e0 quiconque de r\u00e9aliser la d\u00e9tection sur une image ou une vid\u00e9o de son choix. L&rsquo;\u00e8re actuelle du Deep Learning a \u00e9merg\u00e9 gr\u00e2ce aux am\u00e9liorations consid\u00e9rables des puissances de calcul et \u00e0 l&rsquo;acc\u00e8s \u00e0 une grande quantit\u00e9 de donn\u00e9es. 
Cependant, depuis le d\u00e9clin de la loi de Moore, les experts sugg\u00e8rent que nous pourrions observer un changement dans la fa\u00e7on dont nous concevons les ressources de calcul, conduisant ainsi \u00e0 une nouvelle \u00e8re de collaboration entre les communaut\u00e9s du logiciel, du mat\u00e9riel et de l&rsquo;apprentissage automatique. Cette nouvelle qu\u00eate de plus d&rsquo;efficacit\u00e9 passera donc ind\u00e9niablement par les diff\u00e9rentes techniques de compression des r\u00e9seaux neuronaux, et notamment les techniques d&rsquo;\u00e9lagage.<\/p>\n\n\n\n<p><br><strong>Abstract : \u00ab\u00a0Towards Lighter and Faster Deep Neural Networks with Parameter Pruning\u00a0\u00bb<\/strong><\/p>\n\n\n\n<p>Since their resurgence in 2012, Deep Neural Networks have become ubiquitous in most disciplines of Artificial Intelligence, such as image recognition, speech processing, and Natural Language Processing. However, over the last few years, neural networks have grown exponentially deeper, involving more and more parameters. Nowadays, it is not unusual to encounter architectures involving several billions of parameters, while they mostly contained thousands less than ten years ago. This generalized increase in the number of parameters makes such large models compute-intensive and essentially energy inefficient. This makes deployed models costly to maintain but also their use in resource-constrained environments very challenging. For these reasons, much research has been conducted to provide techniques reducing the amount of storage and computing required by neural networks. Among those techniques, neural network pruning, consisting in creating sparsely connected models, has been recently at the forefront of research. However, although pruning is a prevalent compression technique, there is currently no standard way of implementing or evaluating novel pruning techniques, making the comparison with previous research challenging. 
Our first contribution thus concerns a novel description of pruning techniques, developed along four axes, which allows us to define existing pruning techniques unequivocally and completely. Those components are: the granularity, the context, the criteria, and the schedule. Defining the pruning problem according to these components allows us to subdivide it into four mostly independent subproblems and to better identify potential research directions. Moreover, pruning methods are still at an early stage of development and are primarily designed for the research community. Indeed, most pruning works are implemented in a self-contained and sophisticated way, making it troublesome for non-researchers to apply such techniques without learning all the intricacies of the field. To fill this gap, we proposed the FasterAI toolbox, intended both for researchers eager to create and experiment with different compression techniques, and for newcomers who wish to compress their neural networks for concrete applications. In particular, the sparsification capabilities of FasterAI have been built around the previously defined pruning components, allowing a seamless mapping between research ideas and their implementation. We then propose four theoretical contributions, each aiming to provide new insights and to improve on state-of-the-art methods along one of the four identified description axes. These contributions have also been realized using the previously developed toolbox, thus validating its scientific utility. Finally, to validate the applicative character of the pruning techniques, we selected a use case: the detection of facial manipulation, also called DeepFake detection. The goal is to demonstrate that the developed tool, as well as the different proposed scientific contributions, is applicable to a complex, real-world problem. 
This last contribution is accompanied by a proof-of-concept application providing DeepFake detection capabilities in a web-based environment, allowing anyone to perform detection on an image or video of their choice. The current Deep Learning era emerged thanks to considerable improvements in high-performance hardware and to access to large amounts of data. However, with the decline of Moore’s law, experts suggest that we may observe a shift in how we conceptualize hardware, going from task-agnostic to domain-specialized computation, leading to a new era of collaboration between the software, hardware, and machine learning communities. This new quest for efficiency will thus undeniably go through neural network compression techniques, and in particular sparse computations.</p>