This document introduces basic concepts in optimization, including:
- Local and global optima: a local optimum is a feasible point near which no feasible point has a lower objective value; a global optimum is a feasible point that no other feasible point improves upon.
- Numerical methods, which find optima by improving search: iteratively advancing along feasible, improving directions from a starting point.
- Convex and concave functions and sets, whose properties have important implications for optimization.
Basic Concepts in Optimization – Part I
Benoît Chachuat <benoit@mcmaster.ca>
McMaster University
Department of Chemical Engineering
ChE 4G03: Optimization in Chemical Engineering
Benoît Chachuat (McMaster University) Basic Concepts in Optimization – Part I 4G03 1 / 23
Outline
1 Local and Global Optima
2 Numerical Methods: Improving Search
3 Notions of Convexity
Local Optima
Neighborhood
The neighborhood Nδ(x◦) of a point x◦ consists of all nearby points; that is, all points within a small distance δ > 0 of x◦:

Nδ(x◦) ≜ {x : ‖x − x◦‖ < δ}
Local Optimum
A point x∗ is a [strict] local minimum for the function f : ℝⁿ → ℝ on the set S if it is feasible (x∗ ∈ S) and if sufficiently small neighborhoods surrounding it contain no points that are both feasible and [strictly] lower in objective value:

∃δ > 0 : f(x∗) ≤ f(x), ∀x ∈ S ∩ Nδ(x∗)
[ ∃δ > 0 : f(x∗) < f(x), ∀x ∈ S ∩ Nδ(x∗) \ {x∗} ]
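The definition above can be checked numerically on a toy problem. The sketch below (the objective, the interval S, and the sampling scheme are all illustrative assumptions, not from the slides) samples points in Nδ(x∗) ∩ S and tests f(x∗) ≤ f(x):

```python
import random

def is_local_min(f, x_star, lo, hi, delta=0.1, samples=1000):
    """Sample the neighborhood N_delta(x*) intersected with S = [lo, hi]
    and check that f(x*) <= f(x) at every sampled feasible point."""
    random.seed(0)  # deterministic sampling for reproducibility
    for _ in range(samples):
        x = x_star + random.uniform(-delta, delta)  # x in N_delta(x*)
        if lo <= x <= hi and f(x) < f(x_star):      # feasible and lower
            return False
    return True

f = lambda x: (x - 2.0) ** 2  # illustrative objective, minimum at x = 2

print(is_local_min(f, 2.0, 0.0, 5.0))  # True: no nearby feasible point is lower
print(is_local_min(f, 3.0, 0.0, 5.0))  # False: points just left of 3 are lower
```

Sampling can only refute local optimality, never prove it; it is a plausibility check on the δ-neighborhood definition, not a certificate.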
Illustration of a (Strict) Local Minimum, x∗
[Figure: f(x) over the set S, with the neighborhood Nδ(x∗) around x∗ marked; within it, f(x∗) < f(x), ∀x ∈ S ∩ Nδ(x∗) \ {x∗}]
Global Optima
Global Optimum
A point x∗ is a [strict] global minimum for the function f : ℝⁿ → ℝ on the set S if it is feasible (x∗ ∈ S) and if no other feasible solution has [strictly] lower objective value:

f(x∗) ≤ f(x), ∀x ∈ S
[ f(x∗) < f(x), ∀x ∈ S \ {x∗} ]
Remarks:
1 Global minima are always local minima
2 Local minima may not be global minima
3 Analogous definitions hold for local/global maxima in maximization problems
Illustration of a (Strict) Global Minimum, x∗
[Figure: f(x) over the set S with the global minimizer x∗ marked; f(x∗) < f(x), ∀x ∈ S \ {x∗}]
Global vs. Local Optima
Class Exercise: Identify the various types of minima and maxima for f on S ≜ [xmin, xmax]
[Figure: graph of f on [xmin, xmax] with candidate points x1, x2, x3, x4, x5]
How to Find Optima?
Review: Three Methods for Optimization
1 Graphical Solutions
Great display + see multiple optima
But impractical for nearly all practical problems
2 Analytical Solutions (e.g., Newton, Euler, etc.)
Exact solution + easy analysis for changes in (uncertain) parameters
But not possible for most practical problems
3 Numerical Solutions
The only practical method for complex models!
But only guarantees local optima + challenges in assessing the effects of (uncertain) parameters
Numerical Optimization: The Dilemma!
Consider the optimization problem:

min f(x1, x2) ≜ 1/(1 + (x1 − 1)² + (x2 − 1)²) + 0.5/(1 + (x1 − 4)² + (x2 − 3)²)
s.t. 0 ≤ x1, x2 ≤ 5

[Figure: surface plot of f(x1, x2) over 0 ≤ x1, x2 ≤ 5]
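The dilemma can be made concrete by evaluating this objective directly. The two quadratic terms peak near (1, 1) and (4, 3) (a reading of the formula, not stated on the slide), so the surface has two stationary points of different height and a local method may stall on either one:

```python
def f(x1, x2):
    # Objective from the slide: two "bumps" of different height
    return (1.0 / (1.0 + (x1 - 1.0) ** 2 + (x2 - 1.0) ** 2)
            + 0.5 / (1.0 + (x1 - 4.0) ** 2 + (x2 - 3.0) ** 2))

# Heights at the two bump centers differ, which is the whole dilemma:
print(round(f(1.0, 1.0), 4))  # 1.0357
print(round(f(4.0, 3.0), 4))  # 0.5714
```

A search that only sees local information cannot tell, from either point alone, that the other stationary point exists.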
Numerical Optimization: The Dilemma!
Typically, only some local information about the objective function is known, at a current point x◦ = (x1◦, x2◦)!
Question: Which move do I make next?
[Figure: the same surface plot of f(x1, x2), with the current point marked]
Numerical Optimization: The Basic Approach
Improving Search
Improving search methods are numerical algorithms that begin at a feasible solution to a given optimization model, and advance along a search path of feasible points with ever-improving objective function value
[Figure: surface plot of f(x1, x2) with a search path of successively improving points]
Direction-Step Paradigm
At the current point x(k), how do I decide:
the direction of change
the magnitude of change
whether further improvement is possible?

The Basic Equation
Improving search advances from current point x(k) to new point x(k+1) as:

x(k+1) = x(k) + α∆x

that is, componentwise, xi(k+1) = xi(k) + α∆xi for i = 1, . . . , n

where:
∆x defines a move direction of solution change at x(k) (‖∆x‖ = 1)
α > 0 determines a move magnitude, how far to pursue this direction
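The basic equation is a one-line componentwise update. A minimal sketch (the helper name and the example vectors are illustrative, not from the slides):

```python
def improving_search_step(x, dx, alpha):
    """One move of improving search: x(k+1) = x(k) + alpha * dx,
    applied componentwise to the n-vector x."""
    return [xi + alpha * dxi for xi, dxi in zip(x, dx)]

# Unit-length direction (0.6, 0.8) with step magnitude alpha = 0.5:
x_next = improving_search_step([0.0, 0.0], [0.6, 0.8], 0.5)
print(x_next)  # [0.3, 0.4]
```

Note the division of labor: ∆x carries only the direction (hence the normalization ‖∆x‖ = 1), while α alone controls how far the iterate moves.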
Direction of Change, ∆x
Improving Directions
Vector ∆x ∈ ℝⁿ is an improving direction at current point x(k) if the objective function value at x(k) + α∆x is superior to that of x(k), for all α > 0 sufficiently small:

(maximization problem) ∃ᾱ > 0 : f(x(k) + α∆x) > f(x(k)), ∀α ∈ (0, ᾱ]

[Figure: surface plot of f(x1, x2) with the current point and an improving direction ∆x]
Direction of Change, ∆x
[Figure: iterates x(k) and x(k+1) in the (x1, x2)-plane, showing the set of improving directions at each of the two points]
Direction of Change, ∆x (cont’d)
Feasible Directions
Vector ∆x ∈ ℝⁿ is a feasible direction at current point x(k) if the point x(k) + α∆x violates no model constraint for all α > 0 sufficiently small:

∃ᾱ > 0 : x(k) + α∆x ∈ S, ∀α ∈ (0, ᾱ]

[Figure: the set of feasible directions at a boundary point x(k) of S in the (x1, x2)-plane]
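Both definitions can be probed numerically at a single small α. The sketch below (the toy objective, the box set S, and the choice of α are illustrative assumptions; strictly, the definitions quantify over all sufficiently small α, which one sample cannot verify):

```python
def is_improving(f, x, dx, alpha=1e-4, minimize=True):
    """Check the improving-direction definition at one small alpha:
    does f get better at x + alpha*dx than at x?"""
    x_new = [xi + alpha * di for xi, di in zip(x, dx)]
    return f(x_new) < f(x) if minimize else f(x_new) > f(x)

def is_feasible_direction(in_S, x, dx, alpha=1e-4):
    """Check the feasible-direction definition at one small alpha:
    does x + alpha*dx stay inside S?"""
    x_new = [xi + alpha * di for xi, di in zip(x, dx)]
    return in_S(x_new)

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2          # toy objective
in_S = lambda x: 0.0 <= x[0] <= 5.0 and 0.0 <= x[1] <= 5.0   # box constraints

print(is_improving(f, [0.0, 0.0], [1.0, 1.0]))               # True: heads toward (1, 1)
print(is_feasible_direction(in_S, [0.0, 0.0], [-1.0, 0.0]))  # False: leaves the box
```

At an interior point every direction is feasible; only on the boundary of S, as in the corner point above, does the feasible-direction set shrink.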
Optimality Criterion
Necessary Condition of Optimality (NCO)
No optimization model solution at which an improving feasible direction is available can be a local optimum
[Figure: at x∗, the set of feasible directions and the set of improving directions in the (x1, x2)-plane do not intersect]
Continuous Improving Search Algorithm
Step 0: Initialization. Choose any starting feasible point x(0) and let index k ← 0.
Step 1: Move Direction. If no improving feasible direction ∆x exists at the current point x(k), stop. Otherwise, construct an improving feasible direction at x(k) as ∆x(k+1).
Step 2: Step Size. If there is no limit on step sizes for which direction ∆x(k+1) continues to both improve the objective function and retain feasibility, stop: the model is unbounded. Otherwise, choose the largest step size α(k+1).
Step 3: Update. Set x(k+1) ← x(k) + α(k+1)∆x(k+1), increment index k ← k + 1, and return to Step 1.

Remarks:
This basic algorithm may terminate at a suboptimal point
Moreover, it does not distinguish between local and global optima
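The Step 0-3 loop can be sketched concretely. The slides leave the choice of improving direction and step size open; the sketch below assumes the negative gradient as the direction and a simple halving (backtracking) rule for α, with clipping to keep iterates in a box S (all of these are common choices, not the slides' prescription):

```python
def improving_search(f, grad, x, lo, hi, tol=1e-8, max_iter=10_000):
    """Improving search on S = [lo, hi]^n: negative-gradient direction,
    backtracking step size, stop when no improving direction remains."""
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:   # Step 1: no improving direction
            break
        dx = [-gi for gi in g]                      # Step 1: move direction
        alpha = 1.0                                 # Step 2: step size
        while True:
            x_new = [min(max(xi + alpha * di, lo), hi)  # clip to stay in S
                     for xi, di in zip(x, dx)]
            if f(x_new) < f(x) or alpha < 1e-12:
                break
            alpha *= 0.5                            # backtrack
        if alpha < 1e-12:
            break                                   # no further improvement found
        x = x_new                                   # Step 3: update
    return x

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 3.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 3.0)]
x_star = improving_search(f, grad, [5.0, 0.0], 0.0, 5.0)
print([round(v, 3) for v in x_star])  # [1.0, 3.0]
```

Consistent with the remarks above, this loop only stops where it finds no improving move; on a nonconvex objective it would happily return whichever local optimum its starting point leads to.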
A Word of Caution!
Caution: A point at which no improving feasible direction is available may not be a local optimum!
[Figure: the sets of feasible and improving directions at a point x∗ that satisfies the NCO yet is not a local optimum]
Finding out Optima!
Class Exercise: Determine whether each of the following points is apparently a local/global minimum, a local/global maximum, or neither.
[Figure: contour plot over (x1, x2) with contour levels labeled from 10 to 100 and several candidate points marked]
Convex Sets
A set S ⊂ ℝⁿ is said to be convex if every point on the line connecting any two points x, y in S is itself in S:

γx + (1 − γ)y ∈ S, ∀γ ∈ (0, 1)

Nonconvex Set: Some points on the line connecting x, y do not lie in S
[Figure: a convex set and a nonconvex set, each with the line segment between two points x, y drawn]
Nonconnected sets are nonconvex; e.g., the discrete set {0, 1, 2, . . .}²
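The definition can be tested on concrete sets by sampling γ on (0, 1). A sketch (the disk, the ring, and the member points are illustrative assumptions; sampling can only expose nonconvexity, not prove convexity):

```python
def convex_combination(x, y, gamma):
    # the point gamma*x + (1 - gamma)*y on the segment between x and y
    return [gamma * xi + (1.0 - gamma) * yi for xi, yi in zip(x, y)]

in_disk = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0           # convex set
in_ring = lambda p: 0.25 <= p[0] ** 2 + p[1] ** 2 <= 1.0   # nonconvex (has a hole)

def looks_convex(in_S, points, n_gamma=50):
    """Check gamma*x + (1-gamma)*y in S on a grid of gamma values
    for every pair of given member points (a sampled check only)."""
    gammas = [k / n_gamma for k in range(1, n_gamma)]
    return all(in_S(convex_combination(x, y, g))
               for x in points for y in points for g in gammas)

members_disk = [[0.0, 0.9], [0.9, 0.0], [-0.5, -0.5]]
members_ring = [[0.0, 0.9], [0.0, -0.9]]   # midpoint falls into the hole

print(looks_convex(in_disk, members_disk))  # True
print(looks_convex(in_ring, members_ring))  # False
```

The ring fails exactly as the slide's nonconvex picture suggests: the segment between two members passes through points outside S.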
Convex and Concave Functions
Convex Functions
A function f : S → ℝ, defined on a convex set S ⊂ ℝⁿ, is said to be convex on S if the line segment connecting f(x) and f(y) at any two points x, y ∈ S lies above the function between x and y:

f(γx + (1 − γ)y) ≤ γf(x) + (1 − γ)f(y), ∀x, y ∈ S, ∀γ ∈ (0, 1)

Strict convexity:
f(γx + (1 − γ)y) < γf(x) + (1 − γ)f(y), ∀x, y ∈ S with x ≠ y, ∀γ ∈ (0, 1)

Concave Functions
f is said to be [strictly] concave on S if (−f) is [strictly] convex on S:
f(γx + (1 − γ)y) ≥ [>] γf(x) + (1 − γ)f(y), ∀x, y ∈ S, ∀γ ∈ (0, 1)
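The defining inequality can be checked pointwise by computing the gap between the chord and the function. A sketch (the two test functions and evaluation points are illustrative assumptions):

```python
def convexity_gap(f, x, y, gamma):
    """gamma*f(x) + (1-gamma)*f(y) - f(gamma*x + (1-gamma)*y);
    nonnegative for every x, y, gamma when f is convex."""
    return gamma * f(x) + (1.0 - gamma) * f(y) - f(gamma * x + (1.0 - gamma) * y)

f_convex = lambda x: x * x       # strictly convex on all of R
f_nonconvex = lambda x: x ** 3   # nonconvex on R (though convex on x >= 0)

print(convexity_gap(f_convex, -2.0, 2.0, 0.5))     # 4.0: the chord lies above f
print(convexity_gap(f_nonconvex, -2.0, 0.0, 0.5))  # -3.0: the definition fails
```

The second result mirrors the slide's point that convexity is relative to the set: x³ violates the inequality on pairs straddling zero, yet satisfies it whenever both points lie in the convex set S′ = [0, ∞).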
Convex and Concave Functions (cont’d)
[Figure: left, a strictly convex function on the convex set S, with the chord value γf(x1) + (1 − γ)f(x2) lying above f(γx1 + (1 − γ)x2); right, a nonconvex function on S that is nevertheless convex on the convex subset S′]
Sets Defined by Constraints
Define the set S ≜ {x ∈ ℝⁿ : g(x) ≤ 0}, with g a convex function on ℝⁿ. Then, S is a convex set.
[Figure: the region S bounded by the curve g(x) = 0]
Why?
Consider any two points x, y ∈ S. By the convexity of g,
g(γx + (1 − γ)y) ≤ γg(x) + (1 − γ)g(y), ∀γ ∈ (0, 1)
Since g(x) ≤ 0 and g(y) ≤ 0,
γg(x) + (1 − γ)g(y) ≤ 0, ∀γ ∈ (0, 1)
Therefore, γx + (1 − γ)y ∈ S for every γ ∈ (0, 1); i.e., S is convex.

Class Exercise: Give a condition on g for the following set to be convex:
S ≜ {x ∈ ℝⁿ : g(x) ≥ 0}
Sets Defined by Constraints (cont’d)
What is the condition on h for the following set to be convex:
S ≜ {x ∈ ℝⁿ : h(x) = 0}
The set S is convex if and only if h is affine
[Figure: a curved set h(x) = 0 containing points x, y whose connecting segment leaves S]

Convex Sets Defined by Constraints
Consider the set
S ≜ {x ∈ ℝⁿ : g1(x) ≤ 0, . . . , gm(x) ≤ 0, h1(x) = 0, . . . , hp(x) = 0}
Then, S is convex if:
g1, . . . , gm are convex on ℝⁿ
h1, . . . , hp are affine
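The affine-versus-nonaffine contrast is easy to see numerically: with an affine h, convex combinations of points in S stay on h = 0, while with a curved h they do not. A sketch (the two constraint functions and test points are illustrative assumptions):

```python
def combo(x, y, gamma):
    # convex combination gamma*x + (1 - gamma)*y of two n-vectors
    return [gamma * xi + (1.0 - gamma) * yi for xi, yi in zip(x, y)]

h_affine = lambda x: x[0] + x[1] - 1.0             # affine: S is a line
h_circle = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0   # nonaffine: S is a circle

x, y = [1.0, 0.0], [0.0, 1.0]   # both satisfy h(x) = 0 for either h
mid = combo(x, y, 0.5)          # [0.5, 0.5]

print(abs(h_affine(mid)) < 1e-12)  # True: the midpoint stays on the line
print(abs(h_circle(mid)) < 1e-12)  # False: h_circle(mid) = -0.5, off the circle
```

This is why, in the combined result above, inequality constraints only need convexity but equality constraints need the stronger affine property.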
Convexity and Global Optimality
Consider the constrained program:

min over x of f(x)
s.t. gj(x) ≤ 0, j = 1, . . . , m
     hj(x) = 0, j = 1, . . . , p

If f and g1, . . . , gm are convex on ℝⁿ, and h1, . . . , hp are affine, then this program is said to be a convex program

Sufficient Condition for Global Optimality
A [strict] local minimum to a convex program is also a [strict] global minimum
On the other hand, a nonconvex program may or may not have local optima that are not global optima
Benoît Chachuat (McMaster University) Basic Concepts in Optimization – Part I 4G03 23 / 23
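A quick numerical illustration of the sufficient condition, using plain fixed-step gradient descent (a simple local method, not the slides' generic improving-search scheme; the two objectives and starting points are illustrative assumptions): on a convex objective every start reaches the same global minimum, while a nonconvex objective with two basins sends different starts to different local minima.

```python
def descend(f, grad, x0, alpha=0.1, iters=2000):
    # plain fixed-step gradient descent from a given start (1-D)
    x = x0
    for _ in range(iters):
        x = x - alpha * grad(x)
    return x

# Convex objective: a single basin, so local = global
f_cvx = lambda x: (x - 2.0) ** 2
g_cvx = lambda x: 2.0 * (x - 2.0)

# Nonconvex objective with two basins at x = -1 and x = +1
f_ncvx = lambda x: (x * x - 1.0) ** 2
g_ncvx = lambda x: 4.0 * x * (x * x - 1.0)

print(round(descend(f_cvx, g_cvx, -5.0), 3), round(descend(f_cvx, g_cvx, 5.0), 3))
print(round(descend(f_ncvx, g_ncvx, -0.5), 3), round(descend(f_ncvx, g_ncvx, 0.5), 3))
```

Here both nonconvex minima happen to share the same objective value, so neither is "not global"; in general, though, a nonconvex program offers no such guarantee, which is exactly the value of recognizing a convex program.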