Author:
Jafari, Behzad
Title:
Design and Simulation of Artificial Neural Network Training as Analog VLSI Circuits for the Implementation of Digital Functions
Degree:
Master of Science (M.Sc.)
Institution:
Isfahan: Isfahan University of Technology, Department of Electrical and Computer Engineering
Pagination:
ten, 89, [II] pp.: illustrations, tables, charts
Supervisor:
Mohammadreza Ahmadzadeh
Glossary:
Persian to English
Descriptors:
signal processing, image information, analog computation, translinear circuits, floating gate, dynamic method, weight perturbation, node perturbation, synapse, neuron, storage elements, inverter block
Examiners:
Shadrokh Samavi, Saeed Sadri
Record entry date:
1396/09/07 (Solar Hijri)
Field of study:
Electrical and Computer Engineering
Faculty:
Electrical and Computer Engineering
Persian abstract:
In Persian and English: available in the digital version
English abstract:
Nature is composed of highly advanced systems capable of performing complex computation, adaptation, and learning using analog components. Although digital systems have significantly surpassed analog systems in precision, speed, and mathematical computation, they cannot outperform analog systems in terms of power consumption. In this thesis, analog VLSI circuits are presented for performing arithmetic functions and for implementing neural networks. These circuits draw on the strength of analog building blocks to perform low-power, parallel computation. Circuits for squaring, square root, and multiplication/division are shown. A vector-normalization circuit, built by cascading the preceding circuits, demonstrates the ease with which simpler circuits can be combined to obtain more complicated functions. Two feedforward neural-network implementations are also presented. The first uses analog synapses and neurons with a digital serial weight bus; the network is trained in the loop, with a computer performing control and weight updates. In the second network, the weights are implemented digitally and are updated by counters, using a parallel perturbative weight-update algorithm in which multiple pseudorandom bit streams perturb all of the weights simultaneously. Some conventional architectures lack the characteristics needed for high-speed operation and require excessive area for chip implementation. This thesis therefore proposes modifications to the basic blocks of the designed neural networks that minimize chip area, increase learning speed, and reduce cost. In addition, a new mechanism based on two LFSRs and one XOR network is presented, which combines outputs from different taps to obtain uncorrelated noise. Simulations show that both networks successfully learn digital functions such as AND and XOR.
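The parallel perturbative training described in the abstract can be sketched in software. The following Python sketch is purely illustrative and is not the thesis's circuit-level implementation: the LFSR tap positions, seeds, step size, and iteration count are all assumptions. It draws ±δ perturbation signs for every weight of a single sigmoid neuron from two XOR-combined LFSR bit streams, perturbs all weights at once, and keeps a perturbation only when the output error decreases, which suffices to learn the AND function.

```python
import math

def lfsr_stream(seed, taps, nbits=16):
    """Fibonacci LFSR bit stream: the feedback bit is the XOR of the
    tapped state bits (tap positions here are illustrative)."""
    state = seed
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        yield state & 1

def combined_stream(seed_a, seed_b):
    """XOR two LFSR outputs, loosely mirroring the two-LFSR + XOR
    mechanism mentioned in the abstract, to decorrelate the bits."""
    a = lfsr_stream(seed_a, (15, 13, 12, 10))
    b = lfsr_stream(seed_b, (15, 14, 12, 3))
    while True:
        yield next(a) ^ next(b)

def train_and_gate(iters=4000, delta=0.05):
    """Parallel perturbative training of one sigmoid neuron on AND:
    perturb every weight simultaneously by +/-delta and keep the move
    only if the summed squared error drops."""
    bits = combined_stream(0xACE1, 0x1D0B)
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0, 0.0]          # two input weights and a bias

    def error(wv):
        e = 0.0
        for (x1, x2), t in data:
            y = 1.0 / (1.0 + math.exp(-(wv[0] * x1 + wv[1] * x2 + wv[2])))
            e += (y - t) ** 2
        return e

    best = error(w)
    for _ in range(iters):
        # One pseudorandom sign per weight; all weights move in parallel.
        cand = [wi + (delta if next(bits) else -delta) for wi in w]
        e = error(cand)
        if e < best:              # accept only error-reducing perturbations
            w, best = cand, e
    return w, best
```

Here the accept-if-better check stands in for measuring the sign of the error change in the analog network; in hardware, the same ±δ bit streams would drive the on-chip weight counters.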