1.
Algorithm-Independent Machine Learning Shyh-Kang Jeng Department of Electrical Engineering/ Graduate Institute of Communication/ Graduate Institute of Networking and Multimedia, National Taiwan University
No matter how clever we are in choosing a “good” algorithm and a “bad” algorithm, if all target functions are equally likely the “good” algorithm will not outperform the “bad” one on average
There is at least one target function for which random guessing is a better algorithm
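A tiny simulation illustrates the averaging argument. The setup is an assumed toy problem (8 binary input points, the first 4 used for training, all 2^8 target functions equally likely); the "good" majority-vote predictor and the deliberately contrary "bad" predictor are hypothetical choices for illustration, not the book's algorithms:

```python
from itertools import product

TRAIN, TEST = range(4), range(4, 8)  # assumed split of 8 input points

def majority(labels):
    # Majority class of the training labels (ties broken toward 1).
    return int(sum(labels) * 2 >= len(labels))

def ots_error(predict, target):
    # Off-training-set error: fraction of the held-out points mispredicted.
    train = [target[i] for i in TRAIN]
    return sum(predict(train) != target[i] for i in TEST) / len(TEST)

good = majority                       # "good": predict the majority class
bad = lambda tr: 1 - majority(tr)     # "bad": predict the opposite class

targets = list(product([0, 1], repeat=8))  # all 2^8 possible target functions
avg = lambda alg: sum(ots_error(alg, t) for t in targets) / len(targets)
print(avg(good), avg(bad))  # both 0.5
```

Averaged over all equally likely targets, both predictors have off-training-set error exactly 0.5, as the no-free-lunch argument predicts.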
20.
Total Number of Predicates in Absence of Constraints
Let d be the number of regions in the Venn diagram (i.e., the number of distinctive patterns, or the number of possible values determined by combinations of the features); since a predicate corresponds to a subset of these d patterns, the total number of predicates in the absence of constraints is 2^d
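The count 2^d can be checked by direct enumeration: each predicate is identified with the subset of distinctive patterns for which it holds. A sketch with a hypothetical d = 3:

```python
from itertools import chain, combinations

# Hypothetical example: d = 3 distinctive patterns (regions in the Venn diagram).
patterns = ['x1', 'x2', 'x3']
d = len(patterns)

# A predicate = a subset of the patterns it is true for,
# so with no constraints there are 2^d predicates (including the empty one).
predicates = list(chain.from_iterable(
    combinations(patterns, r) for r in range(d + 1)))
print(len(predicates), 2 ** d)  # 8 8
```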
21.
A Measure of Similarity in Absence of Prior Information
Number of features or attributes shared by two patterns
Conceptual difficulties
e.g., with features blind_in_right_eye and blind_in_left_eye , (1,0) is more similar to (1,1) and to (0,0) (one shared attribute each) than to (0,1) (no shared attributes)
There are always multiple ways to represent vectors of attributes
e.g. blind_in_right_eye and same_in_both_eyes
No principled reason to prefer one of these representations over another
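The representation dependence above can be sketched in code. The four example patterns and the re-encoding (blind_in_right_eye, same_in_both_eyes) carry exactly the same information as (blind_in_right_eye, blind_in_left_eye), yet the attribute-sharing similarities change; the person labels A–D are assumptions for illustration:

```python
def shared_attrs(a, b):
    """Similarity = number of attribute values the two patterns share."""
    return sum(x == y for x, y in zip(a, b))

# Representation 1: (blind_in_right_eye, blind_in_left_eye)
people_v1 = {'A': (1, 0), 'B': (1, 1), 'C': (0, 0), 'D': (0, 1)}

# Representation 2: (blind_in_right_eye, same_in_both_eyes) --
# logically equivalent information, merely re-encoded.
def recode(p):
    right, left = p
    return (right, int(right == left))

people_v2 = {k: recode(v) for k, v in people_v1.items()}

for other in 'BCD':
    print(other,
          shared_attrs(people_v1['A'], people_v1[other]),
          shared_attrs(people_v2['A'], people_v2[other]))
```

Under the first encoding A shares an attribute with B and with C but none with D; under the second, A shares an attribute with B and with D but none with C. The similarity ranking flips with the representation, which is exactly why no representation can be preferred on principle.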
22.
A Plausible Measure of Similarity in Absence of Prior Information
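If the plausible measure intended here is the number of predicates that hold for both patterns (the measure analyzed by the Ugly Duckling theorem), then enumeration shows why it fails to discriminate: with d distinctive patterns, every pair of distinct patterns shares exactly 2^(d-2) of the 2^d predicates, so all pairs come out equally similar. A sketch with a hypothetical d = 4:

```python
from itertools import chain, combinations

patterns = ['x1', 'x2', 'x3', 'x4']  # hypothetical d = 4 distinctive patterns
d = len(patterns)

# All 2^d predicates, each identified with the subset of patterns it is true for.
predicates = list(chain.from_iterable(
    combinations(patterns, r) for r in range(d + 1)))

def shared(a, b):
    # Similarity = number of predicates true of both patterns.
    return sum(1 for p in predicates if a in p and b in p)

counts = {pair: shared(*pair) for pair in combinations(patterns, 2)}
print(counts)  # every pair shares 2**(d-2) = 4 predicates
```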
111.
Sum and Difference of Test and Training Error
112.
Fraction of Dichotomies of n Points in d Dimensions That are Linear
113.
One-Dimensional Case
f(n = 4, d = 1) = 8/16 = 0.5

Labels  Linearly separable?
1111    X
0111    X
1110    X
0110
1101
0101
1100    X
0100
1011
0011    X
1010
0010
1001
0001    X
1000    X
0000    X
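The fraction here follows the counting formula f(n, d) = 1 for n ≤ d + 1 and f(n, d) = 2^(1-n) Σ_{i=0}^{d} C(n-1, i) otherwise. The sketch below checks the formula against a brute-force enumeration of the slide's n = 4, d = 1 case; the point coordinates and candidate thresholds are arbitrary choices for illustration:

```python
from itertools import product
from math import comb

def f(n, d):
    """Fraction of the 2^n dichotomies of n points in general position
    in d dimensions that are linearly separable."""
    if n <= d + 1:
        return 1.0
    return 2 * sum(comb(n - 1, i) for i in range(d + 1)) / 2 ** n

# Brute-force check: 4 distinct points on a line.
points = [0.0, 1.0, 2.0, 3.0]

def separable_1d(labels):
    # In 1D a dichotomy is linear iff some threshold (in either
    # orientation) puts all the 1s on one side and all the 0s on the other.
    for t in (-0.5, 0.5, 1.5, 2.5, 3.5):
        for sign in (1, -1):
            if all((sign * (x - t) > 0) == bool(y)
                   for x, y in zip(points, labels)):
                return True
    return False

count = sum(separable_1d(lab) for lab in product([0, 1], repeat=4))
print(count / 16, f(4, 1))  # 0.5 0.5 -- 8 of the 16 labelings are separable
```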