In this talk, I am going to discuss logistic regression, a topic that has been (and still is) quite heavily used as a solution to many supervised learning problems, in several different domains.
I will be focussing on three fundamental and general aspects related to any supervised learning problem: i) the model, ii) the error measure (or cost function), and iii) the learning algorithm. Then, I will cast each of those aspects into the specific case of logistic regression.
Discussion: Termination
• When does the algorithm stop?
• Intuitively, when θ(t+1) = θ(t) ⇒ −η∇Ein(θ(t)) = 0 ⇒ ∇Ein(θ(t)) = 0
• If the function is convex we are guaranteed to reach the global
minimum when ∇Ein(θ(t)) = 0
– i.e. every local minimum is also a global minimum (and for a strictly
convex function that minimum is unique)
• In general we don’t know whether ∇Ein(θ(t)) will eventually reach 0,
therefore we can use several termination criteria, e.g.:
– stop whenever the difference between two iterations is “small enough” →
may converge “prematurely”
– stop when the error drops below ε → may never terminate if the target
error is not achievable
– stop after T iterations
– in practice, a combination of the above works well…
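The combined stopping rule above can be sketched as a simple gradient-descent loop. This is an illustrative sketch, not code from the talk: the toy objective E(θ) = ‖θ‖², the step size η = 0.1, and the tolerance values are all assumptions chosen for the example.

```python
import numpy as np

def gradient_descent(grad, E, theta0, eta=0.1,
                     eps_step=1e-8, eps_err=1e-6, T=10_000):
    """Minimize E(theta) by gradient descent, stopping when ANY of the
    three criteria from the slide fires:
      1. the change between two iterations is "small enough" (eps_step),
      2. the error drops below a target eps_err,
      3. a hard cap of T iterations is reached.
    """
    theta = np.asarray(theta0, dtype=float)
    for t in range(T):
        theta_new = theta - eta * grad(theta)  # theta(t+1) = theta(t) - eta * grad Ein
        if np.linalg.norm(theta_new - theta) < eps_step:
            return theta_new, t + 1, "small step"        # criterion 1
        theta = theta_new
        if E(theta) < eps_err:
            return theta, t + 1, "target error reached"  # criterion 2
    return theta, T, "max iterations"                    # criterion 3

# Toy convex objective (hypothetical stand-in for Ein): E(theta) = ||theta||^2,
# whose gradient is 2*theta and whose unique global minimum is theta = 0.
theta, t, reason = gradient_descent(lambda th: 2 * th, lambda th: th @ th,
                                    theta0=np.array([1.0, -2.0]))
print(reason, t, theta)
```

Because the toy objective is strictly convex, the run terminates via one of the criteria rather than looping forever; here the target-error criterion fires first, since the step-size threshold is much tighter than the error threshold.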