Author:
Shalbafan, Pouyan
Title:
Robustifying Deep Neural Networks Against Adversarial Examples Using Random Feature Expansions for Gaussian Processes
Degree:
M.Sc.
Field of Study:
Artificial Intelligence and Robotics
Place of Study:
Isfahan: Isfahan University of Technology
Pagination:
xv, [75] pp.: illustrated, tables, diagrams
Supervisor:
Mehran Safayani
Advisor:
Abdolreza Mirzaei
Descriptors:
Deep neural networks, Gaussian processes, adversarial examples, scalable Gaussian processes
Examiner:
Mohammadreza Ahmadzadeh
Date of Record Entry:
1398/06/11
Faculty:
Electrical and Computer Engineering
Date of Record Revision:
1398/06/11
English Abstract:
Robustifying Deep Neural Networks Against Adversarial Examples Using Random Feature Expansions for Gaussian Processes
Pouyan Shalbafan, p.shalbafan@ec.iut.ac.ir, June 2019
Department of Electrical and Computer Engineering, Isfahan University of Technology, Isfahan 84156-83111, Iran
Degree: M.Sc. — Language: Farsi
Supervisor: Prof. Mehran Safayani, safayani@cc.iut.ac.ir
Advisor: Prof. Abdolreza Mirzaei, mirzaei@cc.iut.ac.ir
Abstract: In recent years, the vulnerability of machine-learning models has become a well-known concern: almost all learning models, parametric and non-parametric alike, have been shown to be vulnerable. The best-known of these vulnerabilities, or attacks, is the injection of adversarial examples into the learning model. Models based on artificial neural networks, and deep neural networks in particular, exhibit the highest degree of vulnerability to adversarial examples. An adversarial example adds noise to the input of the target network in such a way that the input appears essentially unchanged to a human observer, yet the network misclassifies it. As mentioned, learning models divide into parametric and non-parametric families. In parametric models, different parameter values yield different functions; by placing a probability distribution over the parameters, Bayesian methods indirectly define a distribution over functions. Gaussian processes, by contrast, make it possible to define a probability distribution directly on functions. On the other hand, the computational cost of Gaussian processes is high; to reduce complexity and training time under practical resource limits, scalable Gaussian processes are used. Recent research suggests that Gaussian-process-based models are more resistant to adversarial examples, but in many cases they are impractical due to the high volume of computation. In this study, therefore, first a model based on scalable Gaussian processes using random features was analyzed for its resistance to adversarial attacks. Second, the idea of automatic relevance determination was applied in the base model to improve the accuracy of the proposed model. Third, using the uncertainty estimates available in probabilistic models, together with ensemble-learning-based models, a hybrid model capable of detecting attacks was introduced. Finally, a deep-kernel model was introduced that, in addition to its high clean accuracy, was more resistant to adversarial examples than the base models: under an attack with a perturbation coefficient of 0.3, only 6 percent of the initial accuracy was lost.
Keywords: Adversarial Examples, Gaussian Process, Neural Networks, Deep Learning, Probabilistic Model
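The scalable Gaussian processes in the abstract rest on random feature expansions: a finite random feature map whose inner products approximate a kernel, so GP-style models can be trained at near-linear cost. As a minimal sketch (not the thesis's implementation), the following approximates an RBF kernel with random Fourier features in the style of Rahimi and Recht; the bandwidth `gamma` and feature count `D` are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Exact RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def random_fourier_features(X, D=2000, gamma=0.5, seed=0):
    # Map each input to D random cosine features; the inner product
    # of two feature vectors approximates the RBF kernel value.
    # Frequencies are drawn from the kernel's spectral density,
    # a Gaussian with standard deviation sqrt(2 * gamma).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(0.0, np.sqrt(2 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))

Z = random_fourier_features(X)      # (5, D) feature matrix
K_exact = rbf_kernel(X, X)          # (5, 5) exact Gram matrix
K_approx = Z @ Z.T                  # (5, 5) random-feature estimate
err = np.abs(K_exact - K_approx).max()
```

Working in the feature space `Z` replaces the O(n^3) cost of exact GP inference over the n-by-n Gram matrix with linear models over D features, which is the scalability trade-off the abstract refers to.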
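The "perturbation coefficient of 0.3" quoted in the abstract is characteristic of sign-based gradient attacks such as FGSM, where each input dimension is shifted by eps in the direction that increases the loss. The record does not specify which attack the thesis used, so the following is a hypothetical sketch on a toy logistic classifier; the names (`fgsm_perturb`, `w`, `b`) and the weights are illustrative, not from the thesis.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.3):
    # FGSM step: shift each dimension by eps in the sign of the
    # loss gradient, then clip back to the valid input range [0, 1].
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic classifier p(y=1 | x) = sigmoid(w . x + b)
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([0.6, 0.4, 0.8])
y = 1  # true label

p = sigmoid(w @ x + b)           # clean prediction, correct (> 0.5)
grad = (p - y) * w               # d(cross-entropy)/dx for logistic model
x_adv = fgsm_perturb(x, grad, eps=0.3)
p_adv = sigmoid(w @ x_adv + b)   # prediction on the perturbed input
```

Here the perturbed input stays within 0.3 of the original in every coordinate, yet the predicted probability for the true class drops below 0.5, which is the misclassification effect the abstract describes.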