Solving Poisson Equation using Conjugate Gradient Method and its implementation
3. From the Basics, Ax=b
Linear Systems
𝐴𝑥 = 𝑏
Goal of this presentation
What have you learned?
• Direct Methods
• Gauss Elimination
• Thomas Algorithm (TDMA, for tridiagonal matrices only)
• Iterative Methods
• Jacobi method
• SOR method
• Conjugate Gradient method
• Red-Black Jacobi method
9. Preconditioned System
𝑀⁻¹𝐴𝑥 = 𝑀⁻¹𝑏, with preconditioner 𝑀
• Jacobi: 𝑀 = 𝐷
• Gauss-Seidel: 𝑀 = 𝐷 − 𝐸
• SOR: 𝑀 = (1/𝜔)(𝐷 − 𝜔𝐸)
• SSOR: 𝑀 = 1/(𝜔(2 − 𝜔)) · (𝐷 − 𝜔𝐸) 𝐷⁻¹ (𝐷 − 𝜔𝐹)
𝑀⁻¹𝐴 may not be "SPARSE" due to the inverse 𝑀⁻¹. How to compute 𝑤 = 𝑀⁻¹𝐴𝑣?
• Naively: 𝑟 = 𝐴𝑣, then solve 𝑀𝑤 = 𝑟. But 𝐴𝑣 might be expensive. Much better:
• 𝑤 = 𝑀⁻¹𝐴𝑣 = 𝑀⁻¹(𝑀 − 𝑁)𝑣 = (𝐼 − 𝑀⁻¹𝑁)𝑣
• So compute 𝑟 = 𝑁𝑣, solve 𝑀𝑤 = 𝑟, and set 𝑤 ≔ 𝑣 − 𝑤
• 𝑁 may be sparser than 𝐴, and 𝑁𝑣 less expensive than 𝐴𝑣
10. Minimization Problem
Forget about 𝐴𝑥 = 𝑏 temporarily; think about some quadratic function 𝑓.
Scalar form: 𝑓(𝑥) = (1/2)𝐴𝑥² − 𝑏𝑥 + 𝑐
Matrix form: 𝑓(𝑥) = (1/2)𝑥ᵀ𝐴𝑥 − 𝑏ᵀ𝑥 + 𝑐
Scalar derivative: 𝑓′(𝑥) = 𝐴𝑥 − 𝑏
Matrix gradient: 𝑓′(𝑥) = (1/2)𝐴ᵀ𝑥 + (1/2)𝐴𝑥 − 𝑏
If matrix 𝐴 is symmetric, 𝐴ᵀ = 𝐴, then
𝒇′(𝒙) = 𝑨𝒙 − 𝒃
Setting the gradient to zero, we get the linear system we wish to solve.
Our original GOAL!!
11. Minimization Problem
Quadratic forms (figure):
(a) Quadratic form for a positive definite matrix
(b) Quadratic form for a negative definite matrix
(c) Singular (and positive-indefinite) matrix; a line that runs through the bottom of the valley is the set of solutions
(d) For an indefinite matrix: a saddle point
For a Symmetric and Positive Definite matrix, minimizing
𝑓(𝑥) = (1/2)𝑥ᵀ𝐴𝑥 − 𝑏ᵀ𝑥 + 𝑐
reduces to our solution.
12. Steepest Descent Method
Choose the direction in which 𝑓 decreases most quickly, which is the direction opposite to 𝑓′(𝑥(𝑖)):
−𝑓′(𝑥(𝑖)) = 𝑟(𝑖) = 𝑏 − 𝐴𝑥(𝑖)
Take a step along that direction:
𝑥(1) = 𝑥(0) + 𝛼𝑟(0)
To find 𝛼, set (𝑑/𝑑𝛼)𝑓(𝑥(1)) = 0:
(𝑑/𝑑𝛼)𝑓(𝑥(1)) = 𝑓′(𝑥(1))ᵀ (𝑑/𝑑𝛼)𝑥(1) = 𝑓′(𝑥(1))ᵀ 𝑟(0) = 0
So 𝑓′(𝑥(𝑖+1)) and 𝑟(𝑖) are orthogonal! Since −𝑓′(𝑥(𝑖+1)) = 𝑟(𝑖+1):
𝑓′(𝑥(𝑖+1))ᵀ 𝑟(𝑖) = 0  ⇒  𝑟(𝑖+1)ᵀ 𝑟(𝑖) = 0
which gives
𝜶 = 𝒓(𝒊)ᵀ𝒓(𝒊) / 𝒓(𝒊)ᵀ𝑨𝒓(𝒊)
13. Conjugate Gradient Method
The steepest descent method does not always converge well.
Worst case of the steepest descent method:
• Solid lines: worst convergence line
• Dashed line: steps toward convergence
Why doesn't it go directly along the line for fast convergence? → Related to the eigenvalue problem.
Introducing the Conjugate Gradient method.
14. Conjugate Gradient Method
What is the meaning of conjugate?
• Definition: a binomial formed by negating the second term of a binomial
• 𝑥 + 𝑦 ← conjugate → 𝑥 − 𝑦
Then, what is the meaning of conjugate gradient?
• The steepest descent method often finds itself taking steps in the same direction.
• Wouldn't it be better if we got it right at every step?
• Here is a step: error 𝑒(𝑖) = 𝑥(𝑖) − 𝑥, residual 𝑟(𝑖) = 𝑏 − 𝐴𝑥(𝑖), and 𝑑(𝑖) a set of orthogonal search directions.
• For each step, we choose a point 𝑥(𝑖+1) = 𝑥(𝑖) + 𝛼(𝑖)𝑑(𝑖).
• To find 𝛼, 𝑒(𝑖+1) should be orthogonal to 𝑑(𝑖) (using 𝑒(𝑖+1) = 𝑒(𝑖) + 𝛼(𝑖)𝑑(𝑖)):
𝑑(𝑖)ᵀ𝑒(𝑖+1) = 0
𝑑(𝑖)ᵀ(𝑒(𝑖) + 𝛼(𝑖)𝑑(𝑖)) = 0
𝛼(𝑖) = −𝑑(𝑖)ᵀ𝑒(𝑖) / 𝑑(𝑖)ᵀ𝑑(𝑖)
But we don't know anything about 𝑒(𝑖); if we knew 𝑒(𝑖), we would already know the answer.
15. Conjugate Gradient Method
Instead of orthogonality, introduce 𝐴-orthogonality:
𝒅(𝒊)ᵀ𝑨𝒅(𝒋) = 𝟎 if 𝑑(𝑖) and 𝑑(𝑗) are 𝐴-orthogonal, or conjugate.
Require that 𝒆(𝒊+𝟏) is 𝑨-orthogonal to 𝒅(𝒊). This condition is equivalent to finding the minimum point along the search direction 𝑑(𝑖), as in the steepest descent method: 𝛼 minimizes 𝑓 when the directional derivative is zero.
(𝑑/𝑑𝛼)𝑓(𝑥(𝑖+1)) = 0
𝑓′(𝑥(𝑖+1))ᵀ (𝑑/𝑑𝛼)𝑥(𝑖+1) = 0 (chain rule, with 𝑥(𝑖+1) = 𝑥(𝑖) + 𝛼(𝑖)𝑑(𝑖))
−𝑟(𝑖+1)ᵀ𝑑(𝑖) = 0 (using 𝑓′(𝑥(𝑖+1)) = 𝐴𝑥(𝑖+1) − 𝑏 and 𝑟(𝑖) = 𝑏 − 𝐴𝑥(𝑖))
How can this be the same as the orthogonality condition 𝑑(𝑖)ᵀ𝐴𝑒(𝑖+1) = 0 used above?
𝑥(𝑖+1)ᵀ𝐴ᵀ𝑑(𝑖) − 𝑏ᵀ𝑑(𝑖) = 0
𝑥(𝑖+1)ᵀ𝐴ᵀ𝑑(𝑖) − 𝑥ᵀ𝐴ᵀ𝑑(𝑖) = 0 (substituting 𝑏 = 𝐴𝑥)
𝑒(𝑖+1)ᵀ𝐴ᵀ𝑑(𝑖) = 0; transposing again gives 𝑑(𝑖)ᵀ𝐴𝑒(𝑖+1) = 0.
20. Implementation Issue
• For the 3D case, matrix 𝐴 would be huge: for a 128 × 128 × 128 grid, the dense 𝐴 matrix has (128 × 128 × 128) × (128 × 128 × 128) double-precision entries = 32 TB (for 2D it takes only 2 GB).
• However, almost all entries of 𝐴 are zero for the Poisson equation ⇒ Sparse Matrix!
How to represent a Sparse Matrix?
• Simplest thing: store each nonzero value together with its row and column index (Coordinate Format, COO).
• Drawback: too much duplication among the stored row indices.
21. Sparse Matrix Format
Compressed Sparse Row (CSR)
• Stores only the non-zero values
• Uses three (or four) arrays: values, column indices, and row pointers
• Not easy to construct algorithms such as the ILU or IC preconditioner on top of it
22. Use MKL (Intel Math Kernel Library)
MKL?
• A library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math. The routines in MKL are hand-optimized specifically for Intel processors.
• For my problem, I usually use BLAS and the fast Fourier transforms (for the Poisson equation solver with Neumann, periodic, or Dirichlet BCs).
BLAS?
• A specified set of low-level subroutines that perform common linear algebra operations; widely used, even in MATLAB!
• Usually used for vector or matrix multiplication and dot-product-like operations.
• Level 1: vector–vector operations
• Level 2: matrix–vector operations
• Level 3: matrix–matrix operations
• Parallelized internally by Intel; just turn on the option.
• Reference manual: https://software.intel.com/en-us/mkl_11.1_ref
23. How to Use the Library
For MKL:
• For compiling (when compiling the .c files in your makefile):
• -i8 -openmp -I$(MKLROOT)/include
• For linking (when creating the executable with the -o option):
• -L$(MKLROOT)/lib/intel64 -lmkl_core -lmkl_intel_thread -lpthread -lm
• https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor
Library Linking Process
• Compile: the -I option indicates where the header files (.h) are, specifying the include path.
• Linking: the -L option indicates where the library files (.lib, .dll, .a, .so) are, specifying the linking path; the -l option indicates the library name.
24. Reference
• Shewchuk, Jonathan Richard. "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain." (1994).
• Deepak Chandan, "Using Sparse Matrix and Solver Routines from Intel MKL", SciNet User Group Meeting (2013).
• Saad, Yousef. Iterative Methods for Sparse Linear Systems. SIAM, 2003.
• Akhunov, R. R., et al. "Optimization of the ILU(0) factorization algorithm with the use of compressed sparse row format." Zapiski Nauchnykh Seminarov POMI 405 (2012): 40–53.