Document number:
19483
Call number:
16847
Author:
Khan Ahmadian, Atefeh
Title:

Complex-step derivative approximation by multilayer feedforward neural networks

Degree:
Master of Science
Specialization:
Numerical Analysis
Place of study:
Isfahan: Isfahan University of Technology
Year of defense:
1402
Pagination:
ix, 97 p.: illustrated, tables, charts
Descriptors:
Perceptron neural networks , feedforward neural network , function approximation , complex-step derivative approximation
Data entry date:
1403/03/22
Bibliography:
Bibliography
Field of study:
Applied Mathematics
Faculty:
Mathematics
Data revision date:
1403/03/23
Irandoc code:
23040984
Persian abstract:
The importance of function approximation in science and engineering is beyond dispute. This thesis introduces artificial neural networks and employs them as an efficient tool for function approximation. Among the advantages of this tool is the ability to find approximations, particularly in high dimensions, which has long been a major challenge in approximation theory. The insensitivity of neural networks to floating-point arithmetic and their lack of need to solve linear systems with very large condition numbers, in contrast with classical methods, are also noted. The challenges of approximating functions by classical methods are examined, and examples demonstrate the effectiveness of neural networks in approximating such functions. Finally, the problem of numerical differentiation is addressed and the instability of finite-difference formulas is discussed. To overcome this problem, the complex-step derivative approximation method is presented and implemented with neural networks.
English abstract:
Artificial neural networks (ANNs) are among the main pillars of artificial intelligence. These computational structures, inspired by the structure of the human brain, can solve complex problems and perform complex calculations. Artificial neural networks can learn from experimental data and carry out intelligent activities inspired by the functioning of the brain. In artificial intelligence, neural networks play a pivotal role. These structures can model and approximate complicated functions and have wide applications in fields such as machine vision, natural language processing, and pattern recognition. Multilayer perceptron neural networks are widely used models for approximating functions. Using optimization methods such as backpropagation, these networks can learn from training data and extract useful features. In our case study, multilayer perceptron neural networks are used to approximate some benchmark functions. These networks are capable of modeling and learning complex patterns in data and have various applications across different fields of artificial intelligence. Approximation of functions plays an important role in many fields of science and engineering. Many engineering and scientific problems require approximating functions; this is particularly important when dealing with nonlinear functions. We are often faced with complex functions for which finding the exact function is very difficult and sometimes impossible. In these cases, approximate methods are used to obtain an easily computable solution. Numerical methods are efficient tools in function approximation. A family of methods that has been widely considered in recent years is based on neural networks. Neural networks are powerful tools for the approximation of functions. They have many advantages that have made them an attractive area for researchers and engineers.
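The multilayer perceptron training described above can be sketched as follows. This is a minimal illustrative example, not the thesis code: a one-hidden-layer tanh network trained by gradient descent with backpropagation to approximate sin(x). The network width, learning rate, iteration count, and target function are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: points on [-pi, pi] with targets f(x) = sin(x)
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
y = np.sin(x)

# Parameters of a 1-16-1 perceptron with tanh hidden activation
W1 = rng.normal(scale=0.5, size=(1, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))
b2 = np.zeros(1)

lr = 0.1
losses = []
for _ in range(5000):
    # Forward pass
    h = np.tanh(x @ W1 + b1)          # hidden layer
    pred = h @ W2 + b2                # linear output layer
    err = pred - y
    losses.append(float(np.mean(err ** 2)))

    # Backpropagation of the mean-squared-error gradient
    g_pred = 2 * err / len(x)
    gW2 = h.T @ g_pred
    gb2 = g_pred.sum(axis=0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ g_h
    gb1 = g_h.sum(axis=0)

    # Gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The loss should drop substantially over training; in practice one would use a framework such as PyTorch and a stochastic optimizer, but the explicit gradients above show what backpropagation computes.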
One of the most important advantages of neural networks in the approximation of functions is their ability to find an approximation in higher dimensions. This issue has always been considered one of the main challenges in approximation theory. Artificial neural networks can easily obtain multi-dimensional approximations through their structure. In addition, neural networks are very robust in floating-point arithmetic systems: a neural network can be trained with fast, stable, and high-accuracy calculations. Also, neural networks do not require solving linear systems with large condition numbers, which makes them much more effective than classical methods. In the sequel, we point out some of the challenges of approximating functions with classical methods, and then present the performance of neural networks in approximating these problems. Neural networks can easily approximate such functions by exploiting their rich structure. In many engineering problems, the derivative of a function must be evaluated numerically. Finite-difference formulas may lead to problems such as instability; this is particularly important when the step sizes are too small. To overcome this problem, the complex-step method is implemented for derivative approximation, with the function approximation performed by neural networks. In general, function approximation using neural networks, as a powerful and efficient tool, plays an important role in many fields of science and engineering. This approach can serve as an appropriate replacement for classical methods, with advantages such as high-dimensional approximation ability, robustness in floating-point arithmetic, and no need to solve linear systems with large condition numbers.
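The complex-step idea mentioned above can be sketched in a few lines. For an analytic real function f, the approximation is f'(x) ≈ Im(f(x + ih))/h; unlike finite differences there is no subtractive cancellation, so h can be taken extremely small. The test function below is an illustrative choice, not one of the thesis benchmarks.

```python
import numpy as np

def f(x):
    # An analytic test function (assumption for this sketch)
    return np.exp(x) * np.sin(x)

def f_prime_exact(x):
    # Its exact derivative, for comparison
    return np.exp(x) * (np.sin(x) + np.cos(x))

x = 1.0

# Complex-step approximation: stable even for a tiny step h
h = 1e-200
cs = np.imag(f(x + 1j * h)) / h

# Central finite difference: suffers cancellation when h is too small
h_fd = 1e-12
fd = (f(x + h_fd) - f(x - h_fd)) / (2 * h_fd)

print(abs(cs - f_prime_exact(x)))  # near machine precision
print(abs(fd - f_prime_exact(x)))  # noticeably larger error
```

The comparison illustrates the instability the abstract refers to: shrinking the finite-difference step past a threshold amplifies rounding error, while the complex step has no such limit.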
Supervisor:
Mehdi Tatari Varnosfaderani
Advisor:
Marzieh Kamali
Examiners:
Amir Hashemi , Mohsen Mojiri Forooshani
Link to this document:
