Descriptors:
Perceptron neural networks, feedforward neural networks, function approximation, complex-step derivative approximation
Persian abstract:
The importance of function approximation in science and engineering is evident to all.
In this thesis, artificial neural networks are introduced and used as an efficient tool for function approximation. Among the advantages of this tool is the ability to find approximations, especially in high dimensions, which has long been an important challenge in approximation theory. The insensitivity of neural networks to computations in floating-point arithmetic and, in comparison with classical methods, the lack of need to solve linear systems with very large condition numbers are also pointed out.
The challenges of approximating functions with classical methods are examined, and through a number of examples the effectiveness of neural networks in approximating such functions is demonstrated. Finally, the problem of numerical differentiation is addressed and the instability of finite difference formulas is described. To overcome this problem, the complex-step derivative approximation method is presented and implemented with neural networks.
English abstract:
Artificial neural networks (ANNs) are among the main pillars of artificial intelligence. These computational structures, inspired by the structure of the human brain, can solve complex problems and perform sophisticated computations. They can learn from experimental data and carry out intelligent tasks modeled on the way the brain operates.
In artificial intelligence, neural networks play a pivotal role. These structures can model and approximate complicated functions and have wide applications in fields such as machine vision, natural language processing, and pattern recognition. Multilayer perceptron (MLP) networks are widely used models for function approximation. Trained with gradient-based optimization via backpropagation, these networks can learn from training data and extract useful features.
In our case study, multilayer perceptron neural networks are used to approximate several benchmark functions. These networks are capable of modeling and learning complex patterns in data and are applied across many areas of artificial intelligence.
Approximation of functions plays an important role in many fields of science and engineering, and many scientific and engineering problems require it. This is particularly true for nonlinear functions: in practice one often faces complicated functions for which an exact closed-form representation is very difficult, and sometimes impossible, to obtain. In such cases, approximation methods are used to obtain a solution that is easy to evaluate, and numerical methods are efficient tools for this task.
A family of methods that has received wide attention in recent years is based on neural networks. Neural networks are powerful tools for function approximation and offer many advantages that have made them an attractive topic for researchers and engineers.
One of the most important advantages of neural networks in function approximation is their ability to find approximations in high dimensions, an issue that has always been one of the main challenges of approximation theory. Owing to their structure, artificial neural networks can obtain multi-dimensional approximations with relative ease. In addition, neural networks are robust in floating-point arithmetic: they can be trained quickly, stably, and with high accuracy. Moreover, training a neural network does not require solving linear systems with large condition numbers, which makes the approach considerably more effective than classical methods.
In the sequel, we review some of the challenges of function approximation with classical methods and then demonstrate the performance of neural networks on these problems, showing that their flexible structure allows them to approximate such functions with ease.
In many engineering problems, the derivative of a function must be evaluated numerically. Finite difference formulas can suffer from instability, particularly when the step size becomes very small and subtractive cancellation dominates. To overcome this problem, the complex-step method is implemented for derivative approximation, with the underlying function approximation performed by neural networks.
In general, function approximation with neural networks, as a powerful and efficient tool, plays an important role in many fields of science and engineering. With advantages such as the ability to approximate in high dimensions, robustness in floating-point arithmetic, and no need to solve linear systems with large condition numbers, this approach can serve as a suitable replacement for classical methods.