MIRZO ULUG‘BEK NOMIDAGI O‘ZBEKISTON MILLIY UNIVERSITETI JIZZAX FILIALI
“AXBOROT TIZIMLARI VA TEXNOLOGIYALARI” KAFEDRASI
ARTIFICIAL INTELLIGENCE (SUN’IY INTELLEKT) COURSE
S.B. Ergashev
s.b.ergashev@gmail.com
An artificial neural network (ANN) is a computational model that mimics the way nerve cells work in the human brain. ANNs use learning algorithms that adjust themselves independently (learn, in a sense) as they receive new input.
The document discusses several neural-network learning rules:
1. The error-correction rule (delta rule) adapts the weights based on the error between the actual and the desired output.
2. Memory-based learning stores all training examples and classifies new inputs by their similarity to nearby examples (e.g. k-nearest neighbors).
3. Hebbian learning strengthens the connections between simultaneously active neurons and weakens others, letting patterns emerge over time from correlations in the inputs.
4. Competitive learning (winner-take-all) adapts the weights of the neuron most active for a given input, enabling unsupervised clustering of similar inputs across neurons.
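The delta rule in item 1 can be made concrete with a minimal single-neuron sketch. The toy dataset, learning rate, and epoch count below are made up for illustration, not taken from the document:

```python
# Delta rule: w_i <- w_i + eta * (target - output) * x_i
def delta_rule_train(samples, eta=0.1, epochs=200):
    """samples: list of (inputs, target) pairs; returns the learned weights."""
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, target in samples:
            output = sum(wi * xi for wi, xi in zip(w, x))   # actual output
            error = target - output                          # desired minus actual
            for i in range(n):
                w[i] += eta * error * x[i]                   # delta-rule weight update
    return w

# Toy task: learn y = 2*x + 1, with the bias folded in as a constant input of 1.0
data = [([x, 1.0], 2 * x + 1) for x in (0.0, 1.0, 2.0, 3.0)]
weights = delta_rule_train(data)
```

Because the target function is exactly representable by the neuron, the error shrinks toward zero and the weights settle near (2, 1).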
2. ARTIFICIAL NEURAL NETWORKS (ANN)
Artificial neural networks (ANNs) are machine-learning algorithms modeled on the human brain. An ANN consists of neurons connected to one another in layers and is used to solve complex problems. ANNs are applied to tasks such as image recognition, natural language processing, and robotics; they can learn from data and make predictions based on the patterns they find. ANNs are powerful tools for building more accurate and capable models across a wide range of applications.
3. TRAINING A NEURAL NETWORK
An artificial neural network is trained and tested by splitting the dataset in two: a training set (80%) used to optimize the network's parameters, and a test set (20%) used to measure the model's performance on unseen data. This is like studying for an exam where the exam questions are not identical to the study material.
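The 80/20 split described above can be sketched in a few lines of Python. The dataset, seed, and function name are illustrative assumptions, not from the slides:

```python
import random

def train_test_split(data, train_pct=80, seed=42):
    """Shuffle a dataset and split it into a training set and a test set."""
    rng = random.Random(seed)                # fixed seed for a reproducible split
    shuffled = data[:]                       # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = len(shuffled) * train_pct // 100   # index separating train from test
    return shuffled[:cut], shuffled[cut:]

dataset = list(range(100))                   # stand-in for 100 labeled examples
train_set, test_set = train_test_split(dataset)
```

Shuffling before splitting matters: it prevents any ordering in the collected data (e.g. all examples of one class first) from leaking into the split.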
4. LEARNING ALGORITHMS
Learning algorithms are a branch of computer algorithms that improve themselves through experience gained from large amounts of data. A learning algorithm builds a model from the given data; once built, the model can make predictions or decisions without being explicitly programmed for them. Learning algorithms come in three kinds: supervised learning, unsupervised learning, and reinforcement learning.
5. SUPERVISED LEARNING
In supervised learning the data is first divided into data to learn from and data for evaluating how well the learning went.
1. The training data is divided into types or categories.
2. The model is trained on the training data and then checked against the test data.
3. While the model is being trained, the input fed to it is checked against the model's output; errors are measured and the weights are updated. For example, if the model is given a picture of an apple and outputs the label "apple", the error is zero and the weights do not change.
4. Depending on how accurately the model makes decisions, training is either continued or stopped.
5. Examples of supervised learning applications include face recognition, object recognition, and voice recognition.
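The train-check-update loop in steps 2 and 3 can be illustrated with a minimal perceptron, where a correct prediction gives zero error and leaves the weights unchanged. The AND task and all names here are made-up examples, not from the slides:

```python
# A single perceptron trained on labeled examples: feed an input, compare the
# output with the label, and update the weights only when the prediction is
# wrong (zero error means the weights stay the same).

def perceptron_train(samples, eta=1.0, epochs=20):
    n = len(samples[0][0])
    w = [0.0] * (n + 1)                          # last weight acts as the bias
    for _ in range(epochs):
        for x, label in samples:
            xs = list(x) + [1.0]                 # append constant bias input
            out = 1 if sum(wi * xi for wi, xi in zip(w, xs)) > 0 else 0
            err = label - out                    # 0 when the prediction is correct
            if err:
                w = [wi + eta * err * xi for wi, xi in zip(w, xs)]
    return w

def perceptron_predict(w, x):
    xs = list(x) + [1.0]
    return 1 if sum(wi * xi for wi, xi in zip(w, xs)) > 0 else 0

# Toy supervised task: learn the logical AND function from labeled pairs
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = perceptron_train(data)
```

After training, `perceptron_predict(w, (1, 1))` returns 1 and the other three inputs return 0, matching the labels, so further passes leave the weights untouched.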
7. UNSUPERVISED LEARNING
Unsupervised learning algorithms take the input data, discover its structure, and group or cluster it accordingly; the training data is divided into groups, types, or categories without labels. Unsupervised learning identifies what the data items have in common and then checks whether new data shares those common features.
8. HOW AN UNSUPERVISED LEARNING ALGORITHM WORKS
The input is uncategorized data. This unlabeled input is fed to the learning model, which interprets it to find hidden patterns, applying algorithms such as k-means clustering, decision trees, and the like. Once the chosen algorithm has been applied, it divides the data objects into groups according to the similarities and differences between them.
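The k-means clustering mentioned above can be sketched in pure Python. The toy points and the simple deterministic initialisation are illustrative assumptions; real implementations typically initialise randomly and use several restarts:

```python
def squared_distance(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=20):
    """Cluster points by repeatedly assigning each one to its nearest centroid
    and moving each centroid to the mean of its cluster (k-means)."""
    centroids = list(points[:k])        # simple init: the first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                # assignment step: nearest centroid wins
            i = min(range(k), key=lambda c: squared_distance(p, centroids[c]))
            clusters[i].append(p)
        for i, cluster in enumerate(clusters):
            if cluster:                 # update step: move centroid to the mean
                centroids[i] = tuple(sum(coord) / len(cluster)
                                     for coord in zip(*cluster))
    return centroids, clusters

# Two well-separated blobs of unlabeled 2-D points (made up for illustration)
points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
          (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]
centroids, clusters = kmeans(points, k=2)
```

With no labels anywhere in the input, the algorithm still recovers the two blobs, with centroids near (0.1, 0.1) and (5.1, 5.1): grouping purely by similarity, as the slide describes.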
9. REINFORCEMENT LEARNING
The components of reinforcement learning:
- Decision making / the learning agent
- Environment: how the environment interacts with the agent
- Actions: what the agent does
11. REINFORCEMENT LEARNING
Before an ANN can make predictions about data, it must be trained: a neural network is built on previously collected data and trained on it. Before training, the collected data is divided into several sets, most often into three main ones: a training set, a validation set, and a test set.
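The three-way split into training, validation, and test sets can be sketched as follows. The 70/15/15 proportions, seed, and names are illustrative; the slide does not fix them:

```python
import random

def three_way_split(data, train_pct=70, val_pct=15, seed=7):
    """Shuffle a dataset and split it into training, validation and test sets."""
    rng = random.Random(seed)                   # fixed seed: reproducible split
    shuffled = data[:]
    rng.shuffle(shuffled)
    a = len(shuffled) * train_pct // 100        # end of the training slice
    b = a + len(shuffled) * val_pct // 100      # end of the validation slice
    return shuffled[:a], shuffled[a:b], shuffled[b:]

dataset = list(range(200))                      # stand-in for 200 collected examples
train, val, test = three_way_split(dataset)
```

The validation set steers choices made during training (when to stop, which settings to keep), while the test set is held back until the very end to measure performance on unseen data.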
13. REINFORCEMENT LEARNING
Reinforcement learning can be viewed as learning from mistakes. Placed in any environment, a reinforcement learning algorithm makes a great many mistakes at first. When we provide an incentive that ties good behaviors to a positive signal and bad behaviors to a negative one, we can see the algorithm reinforced to prefer good behaviors over bad ones. Over time, the learning algorithm learns to make fewer mistakes than it did before.
15. REINFORCEMENT LEARNING
Reinforcement learning is an agent's ability to interact with its environment and find the best outcome. The agent's goal is to collect the most reward points and improve its performance. The agent is rewarded with points for finding a correct answer, or points are taken away as a penalty for a wrong one. On the basis of these positive reward points, the model acquires the ability to train itself and learns to make predictions about new data presented to it.
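The reward-and-penalty loop described in these slides can be sketched with a tiny two-action example: the agent's estimate of the rewarded action rises while the penalized one falls. The environment, the epsilon-greedy action choice, and all names are made up for illustration:

```python
import random

def train_agent(episodes=500, eta=0.1, epsilon=0.1, seed=1):
    """Learn action-value estimates from +1 / -1 reward signals."""
    rng = random.Random(seed)
    values = {"left": 0.0, "right": 0.0}     # the agent's estimate of each action
    for _ in range(episodes):
        if rng.random() < epsilon:           # occasionally explore at random
            action = rng.choice(["left", "right"])
        else:                                # otherwise exploit the best estimate
            action = max(values, key=values.get)
        # Toy environment: "right" is the good behavior, "left" the bad one
        reward = 1.0 if action == "right" else -1.0
        # Nudge the chosen action's estimate toward the reward it produced
        values[action] += eta * (reward - values[action])
    return values

values = train_agent()
```

Early episodes include plenty of mistakes, but as the estimate for the penalized action drops below the rewarded one, the agent picks the good action almost every time, exactly the "fewer mistakes over time" behavior the slides describe.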