A Sufficient Condition for Learnability of Unbounded Unions of Languages with Refinement Operators

Tomohiko OKAYAMA, Ryo YOSHINAKA, Keisuke OTAKI, Akihiro YAMAMOTO
Kyoto University, Japan
January 6, 2013
Contribution

We present a condition on a class of languages under which unbounded unions of languages can be learned in the Gold style from positive data with refinement operators.

Unbounded unions of languages: the class of all finite unions of languages drawn from a given class of languages.

Refinement operator: an operator with which a learner hypothesizes an appropriate representation of a language.

[Figure: a language is a set of objects; a union of languages is covered by several hypotheses h1, h2, h3, h4.]
Learning Model: the Gold Style

A representation r ∈ R (a parameter) denotes a language L(r); the class of languages is {L(r) | r ∈ R}.

A hidden target language L(t) is presented to the learning machine as a stream of positive examples.
  Input: e1, e2, ...    Output: r1, r2, ...
Each output rn denotes a hypothesized language L(rn).

Learnability of the class:
  ∀t ∈ R, ∃N ∈ ℕ, ∀n ≥ N: rn = rN ∧ L(rN) = L(t)

[Figure: the learning machine receives e1, e2, ... drawn from the hidden target language and outputs hypotheses r1, r2, ...]
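To make the protocol concrete, here is a minimal sketch (my own framing, not code from the paper) of the learning loop; identification in the limit means the recorded hypothesis sequence eventually becomes constant and denotes the target.

```python
# A minimal sketch of the Gold-style protocol (illustrative framing, not code
# from the paper): feed positive examples to a learner one by one and record
# the hypothesis it emits after each example.
from typing import Callable, Iterable, List, Optional, TypeVar

E = TypeVar("E")   # type of objects (positive examples)
R = TypeVar("R")   # type of representations (hypotheses)

def run_learning(stream: Iterable[E],
                 update: Callable[[Optional[R], E], R]) -> List[R]:
    """Return the sequence of hypotheses r1, r2, ... produced on `stream`."""
    outputs: List[R] = []
    hypothesis: Optional[R] = None
    for example in stream:
        hypothesis = update(hypothesis, example)
        outputs.append(hypothesis)
    return outputs

# The class is learnable (identified in the limit) if, for every target and
# every presentation of it, this output sequence eventually becomes constant
# and the final hypothesis denotes exactly the target language.
```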
Example

The class {L(n) | n ∈ ℕ⁺}, where L(n) = {n, 2n, ..., mn, ...} (the positive multiples of n).

Suppose the hidden target language is L(12).
  Input: 24, 72, ...    Output: 24, 24, ...

The learner computes the greatest common divisor of the inputs received so far. Once the output reaches 12, it will not change, so the class is learnable.
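The learner described above is easy to sketch (a minimal sketch; the continuation 36, 60 of the input stream is my own illustrative choice):

```python
# A minimal sketch (not from the paper) of the Gold-style learner for the
# class {L(n) | n in N+}, L(n) = {n, 2n, 3n, ...}: the hypothesis after each
# positive example is the gcd of all examples seen so far.
from math import gcd

def gcd_learner(stream):
    """Yield a hypothesis n (representing L(n)) after each positive example."""
    hypothesis = None
    for example in stream:
        hypothesis = example if hypothesis is None else gcd(hypothesis, example)
        yield hypothesis

# With target L(12) and a presentation starting 24, 72, 36, 60, ... the
# hypotheses are 24, 24, 12, 12, ... and once 12 is reached they never change.
print(list(gcd_learner([24, 72, 36, 60])))  # [24, 24, 12, 12]
```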
Learnable Base Class implies Learnable Union Class?

Base class {L(r) | r ∈ R}: the data come from one language of the class.
  Input: e1, e2, ...    Output: r1, r2, ... ∈ R

Union class: the data come from a finite set of languages of {L(r) | r ∈ R}.
  Input: e1, e2, ...    Output: R1, R2, ... ⊆ R
Bounded and Unbounded Unions of Languages

Learning bounded unions of languages:
The learner knows that the data are drawn from a finite number of languages of the class, and also knows an upper bound on that number. (Many positive results, e.g. [K. Wright 1989], [H. Arimura et al. 1995], [T. Shinohara et al. 1995].)

Learning unbounded unions of languages:
The learner only knows that the data are drawn from a finite number of languages of the class. (Few positive results.)
Concept Class

Object set X = {e1, e2, ...}
Representation set R = {r1, r2, ...}
Language mapping L : R → 2^X such that for all e ∈ X, whether e ∈ L(r) is decidable

Base class C = {L(r) ⊆ X | r ∈ R}
Base concept class (C, R, L)

Example:
  Rex = {⟨m, n⟩ | m, n ∈ ℕ⁺}
  Lex(⟨m, n⟩) = {(x, y) ∈ ℕ × ℕ | x ≥ m, y ≥ n}
  Cex = {Lex(⟨m, n⟩) | ⟨m, n⟩ ∈ Rex}

[Figure: Lex(⟨m, n⟩) is the quarter plane whose corner is at (m, n).]
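A quick sketch of the example class (illustrative; the function and type names are my own) shows why membership in Lex(⟨m, n⟩) is decidable:

```python
# Membership in the quarter-plane language Lex(<m, n>) is decided by two
# comparisons, so the language mapping is decidable as required.
from typing import Tuple

Rep = Tuple[int, int]   # a representation <m, n> with m, n >= 1
Obj = Tuple[int, int]   # an object (x, y)

def member(e: Obj, r: Rep) -> bool:
    """Decide whether e = (x, y) belongs to Lex(<m, n>) = {(x, y) | x >= m, y >= n}."""
    (x, y), (m, n) = e, r
    return x >= m and y >= n

print(member((3, 4), (2, 2)))  # True: (3, 4) lies in the quarter plane of <2, 2>
print(member((1, 4), (2, 2)))  # False: the x-coordinate is below m = 2
```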
Refinement Operators

A refinement operator ρ on R
  Input: a representation r
  Output: a finite set of representations {r1, ..., rn}
  such that ∀r ∈ R, ∀ri ∈ {r1, ..., rn}: L(ri) ⊆ L(r)

[Figure: refining r into r1, ..., ri, ..., rn gives languages L(r1), ..., L(rn) inside L(r); a chain of such steps is a refinement path.]
Refinement Operators

ρ^0(r), ρ^1(r), ρ^2(r), ..., ρ^k(r): the representations obtained by applying ρ to r k times.

A chain r, q1, q2, ..., p in which each representation is a refinement of the previous one is the refinement path between r and p.

[Figure: the levels ρ^0(r), ρ^1(r), ρ^2(r), ..., ρ^k(r) and a refinement path from r down to p.]
Refinement Operators

ρ*(r) = ρ^0(r) ∪ ρ^1(r) ∪ ··· ∪ ρ^k(r) ∪ ···
ρ⁺(r) = ρ^1(r) ∪ ··· ∪ ρ^k(r) ∪ ···

[Figure: r refined into r1, ..., ri, ..., rn with L(r1), ..., L(rn) ⊆ L(r).]
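For the quarter-plane operator used in the examples, ρ^k and a depth-bounded approximation of ρ* can be sketched as follows (illustrative code; the depth cut-off is mine, since ρ*(r) itself is an infinite set, and ρ⁺(r) is the same union without the k = 0 level):

```python
# A sketch of rho^k and a depth-bounded rho* for the quarter-plane operator
# rho(<m, n>) = {<m+1, n>, <m, n+1>}.
from typing import Set, Tuple

Rep = Tuple[int, int]

def rho(r: Rep) -> Set[Rep]:
    m, n = r
    return {(m + 1, n), (m, n + 1)}

def rho_k(r: Rep, k: int) -> Set[Rep]:
    """rho^k(r): representations reached by exactly k applications of rho."""
    level = {r}
    for _ in range(k):
        level = {q for p in level for q in rho(p)}
    return level

def rho_star_up_to(r: Rep, depth: int) -> Set[Rep]:
    """Finite approximation of rho*(r) = rho^0(r) ∪ rho^1(r) ∪ ··· up to `depth`."""
    return {q for k in range(depth + 1) for q in rho_k(r, k)}

print(sorted(rho_k((1, 1), 2)))           # [(1, 3), (2, 2), (3, 1)]
print(sorted(rho_star_up_to((1, 1), 2)))  # the six pairs <m, n> with m, n >= 1 and m + n <= 4
```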
Example

Xex = ℕ⁺ × ℕ⁺
Rex = {⟨m, n⟩ | m, n ∈ ℕ⁺},  ρ(⟨m, n⟩) = {⟨m+1, n⟩, ⟨m, n+1⟩}
Lex(⟨m, n⟩) = {(x, y) ∈ ℕ × ℕ | x ≥ m, y ≥ n}

Given the positive data S = {(3, 4), (2, 5), (4, 2)}, the learner starts from ⟨1, 1⟩ ∈ T and repeatedly applies ρ, exploring refinements such as ⟨2, 1⟩, ⟨1, 2⟩, ⟨2, 2⟩, ⟨3, 1⟩, ⟨3, 2⟩, ⟨2, 3⟩, ... For this S, the refinement ⟨2, 2⟩ is the last one whose quarter plane still contains all of S, so Lex(⟨2, 2⟩) is the minimal language of the class containing S.

[Figure: a sequence of animation builds (merged here) showing the refinement tree rooted at ⟨1, 1⟩ and the quarter planes shrinking toward the data points of S.]
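One way to realize this search (an assumption on my part, in the spirit of the MINL strategy of Ouchi & Yamamoto) is to keep refining as long as the refined quarter plane still contains all of the positive data:

```python
# A sketch of a refinement search for the quarter-plane base class: starting
# from <1, 1>, follow refinements whose languages still contain all of S.
from typing import Set, Tuple

Rep = Tuple[int, int]
Obj = Tuple[int, int]

def rho(r: Rep) -> Set[Rep]:
    m, n = r
    return {(m + 1, n), (m, n + 1)}

def contains_all(r: Rep, S: Set[Obj]) -> bool:
    m, n = r
    return all(x >= m and y >= n for (x, y) in S)

def refine_to_minimal(start: Rep, S: Set[Obj]) -> Rep:
    """Follow consistent refinements until no refinement covers S any more."""
    current = start
    while True:
        candidates = [q for q in rho(current) if contains_all(q, S)]
        if not candidates:
            return current
        current = candidates[0]

S = {(3, 4), (2, 5), (4, 2)}
print(refine_to_minimal((1, 1), S))  # (2, 2): the minimal quarter plane containing S
```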
Learnability of a Concept Space with Refinement Operators

Theorem 1 (Ouchi & Yamamoto). (C, R, L) is learnable from positive data if (C, R, L) admits a refinement operator ρ satisfying [A-1] to [A-3].

[A-1] For any p, r ∈ R, if L(p) ⊊ L(r), then there exists a representation q ∈ R such that q ∈ ρ⁺(r) and L(p) = L(q).
[A-2] There is a finite T ⊆ R (the initial representation set) such that for any L(p) ∈ C, there exist t ∈ T and q ∈ ρ*(t) satisfying L(p) = L(q).
[A-3] There is no infinite sequence r1, r2, ... ∈ R such that ri+1 ∈ ρ(ri) and L(r1) = L(r2) = ··· for all i ∈ ℕ⁺.

[Figure: starting from some t ∈ T, a refinement path leads through r to a q with L(q) = L(p) ⊊ L(r) ⊆ L(t).]
Example

Xex = ℕ⁺ × ℕ⁺
Rex = {⟨m, n⟩ | m, n ∈ ℕ⁺},  ρ(⟨m, n⟩) = {⟨m+1, n⟩, ⟨m, n+1⟩}
Lex(⟨m, n⟩) = {(x, y) ∈ ℕ × ℕ | x ≥ m, y ≥ n}

A union of quarter planes: L(⟨1, 3⟩) ∪ L(⟨2, 2⟩) ∪ L(⟨4, 1⟩)

[Figure: the staircase-shaped union of the three quarter planes with corners (1, 3), (2, 2), (4, 1).]
Union Concept Class

Union representation set R* = {{r1, ..., rk} | r1, ..., rk ∈ R, k ∈ ℕ⁺}
Language mapping L({r1, ..., rk}) = L(r1) ∪ ··· ∪ L(rk)
Union class C* = {L(R) | R ∈ R*}
Union concept class (C*, R*, L)

Example of a union of languages (quarter planes):
  R = {⟨1, 3⟩, ⟨2, 2⟩, ⟨4, 1⟩},  L(R) = L(⟨1, 3⟩) ∪ L(⟨2, 2⟩) ∪ L(⟨4, 1⟩)

[Figure: the corner points (1, 3), (2, 2), (4, 1) in the (m, n)-plane and the union of the corresponding quarter planes in the (x, y)-plane.]
Learnability of Unbounded Unions

Theorem 1 (Ouchi & Yamamoto). (C, R, L) is learnable from positive data if (C, R, L) admits a refinement operator ρ satisfying [A-1] to [A-3].

Theorem 2 (Our Contribution). (C*, R*, L) is learnable from positive data if (C, R, L) admits a refinement operator ρ satisfying [A-1] to [A-3] and satisfies [C-1] and [C-2].

[A-1] For any p, r ∈ R, if L(p) ⊊ L(r), then there exists a representation q ∈ R such that q ∈ ρ⁺(r) and L(p) = L(q).
[A-2] There is a finite T ⊆ R such that for any L(p) ∈ C, there exist t ∈ T and q ∈ ρ*(t) satisfying L(p) = L(q).
[A-3] There is no infinite sequence r1, r2, ... ∈ R such that ri+1 ∈ ρ(ri) and L(r1) = L(r2) = ··· for all i ∈ ℕ⁺.

[C-1] For any p, r ∈ R, L(r) = L(p) ⇒ r = p.
[C-2] For any n ∈ ℕ⁺ and any r, r1, ..., rn ∈ R:
  L(r) ⊆ L(r1) ∪ ··· ∪ L(rn)  ⇔  ∃ri ∈ {r1, ..., rn}: L(r) ⊆ L(ri)

[Figure: L(r) is contained in the union L(r1) ∪ L(r2) ∪ L(r3) iff it is contained in one of the L(ri).]

Lemma 4 (Our Contribution). ρ̃ satisfies [A-1] to [A-3] if (C, R, L) admits a refinement operator ρ satisfying [A-1] to [A-3] and satisfies [C-1] and [C-2].
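For the quarter-plane example class, [C-1] is immediate (distinct corners give distinct languages) and [C-2] can be checked mechanically on a finite window of the plane; the following sketch is illustrative (the window size and names are mine):

```python
# Containment of one quarter plane in another has the closed form
# L(<m, n>) ⊆ L(<m', n'>) iff m >= m' and n >= n'.  The check below compares a
# brute-force containment test on a finite window with the [C-2] right-hand side.
from itertools import product
from typing import List, Tuple

Rep = Tuple[int, int]
BOUND = 10   # examine points (x, y) with 1 <= x, y <= BOUND

def member(x: int, y: int, r: Rep) -> bool:
    m, n = r
    return x >= m and y >= n

def subset(r: Rep, p: Rep) -> bool:
    """Closed form for L(r) ⊆ L(p)."""
    (m, n), (mp, np_) = r, p
    return m >= mp and n >= np_

def subset_of_union_on_window(r: Rep, rs: List[Rep]) -> bool:
    """Check L(r) ⊆ L(r1) ∪ ... ∪ L(rn) for all points in the finite window."""
    return all(any(member(x, y, ri) for ri in rs)
               for x, y in product(range(1, BOUND + 1), repeat=2)
               if member(x, y, r))

# [C-2] for unions of two languages: containment in the union coincides with
# containment in a single member (on the tested window).
reps = list(product(range(1, 5), repeat=2))
for r, r1, r2 in product(reps, repeat=3):
    assert subset_of_union_on_window(r, [r1, r2]) == (subset(r, r1) or subset(r, r2))
print("[C-2] agrees with single-member containment on the tested window")
```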
A Refinement Operator for Unions of Languages

Given ρ(r) = {r1, r2, ..., rn}, define the operator ρ̃ on R* by

  ρ̃(P) = ⋃_{r ∈ P} {P \ {r}}  ∪  ⋃_{maximal r ∈ P} {P ∪ ρ(r) \ {r}}

That is, Q ∈ ρ̃(P) iff
(1) Q = P \ {r} for some r ∈ P, or
(2) Q = P ∪ ρ(r) \ {r} for some r ∈ P such that L(r) is not strictly contained in L(p) for any p ∈ P (such an r is maximal in P).

Lemma 3 (holds by [C-1] and [C-2]). For every Q ∈ ρ̃(P), L(Q) ⊊ L(P).

[Figure: L(P) drawn as a union over p1, p2, p3, p4; in case (2) a maximal r is replaced by its refinements, and the resulting L(Q) is strictly contained in L(P).]
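Under my reading of this definition (in particular, that P ∪ ρ(r) \ {r} means (P \ {r}) ∪ ρ(r), which agrees with the worked example on the next slide), ρ̃ can be sketched for the quarter-plane class; the four candidate sets printed at the end are exactly the ones that appear on the next slide:

```python
# A sketch of the union refinement operator rho-tilde for the quarter-plane
# class: from a set P, either drop one element, or replace a maximal element r
# (one whose language is not strictly contained in that of another element) by
# its refinements rho(r).
from typing import FrozenSet, Set, Tuple

Rep = Tuple[int, int]   # quarter-plane representation <m, n>

def rho(r: Rep) -> Set[Rep]:
    m, n = r
    return {(m + 1, n), (m, n + 1)}

def strictly_contained(r: Rep, p: Rep) -> bool:
    """L(<m, n>) is strictly contained in L(<m', n'>) iff m >= m', n >= n', (m, n) != (m', n')."""
    (m, n), (mp, np_) = r, p
    return m >= mp and n >= np_ and (m, n) != (mp, np_)

def rho_tilde(P: FrozenSet[Rep]) -> Set[FrozenSet[Rep]]:
    results: Set[FrozenSet[Rep]] = set()
    for r in P:
        results.add(P - {r})                                  # case (1): drop r
    for r in P:
        if not any(strictly_contained(r, p) for p in P):      # r is maximal in P
            results.add((P - {r}) | rho(r))                   # case (2): refine r
    return results

P = frozenset({(2, 1), (1, 2)})
for Q in sorted(rho_tilde(P), key=sorted):
    print(sorted(Q))
# Prints the four candidates {<1,2>}, {<1,2>,<2,2>,<3,1>}, {<1,3>,<2,1>,<2,2>}, {<2,1>}.
```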
Example

Xex = ℕ⁺ × ℕ⁺
Rex = {⟨m, n⟩ | m, n ∈ ℕ⁺},  ρ(⟨m, n⟩) = {⟨m+1, n⟩, ⟨m, n+1⟩}
Lex(⟨m, n⟩) = {(x, y) ∈ ℕ × ℕ | x ≥ m, y ≥ n}

R*ex = {{⟨m1, n1⟩, ..., ⟨mk, nk⟩} | ⟨mi, ni⟩ ∈ Rex, i ∈ {1, ..., k}}
L({⟨m1, n1⟩, ..., ⟨mk, nk⟩}) = L(⟨m1, n1⟩) ∪ ··· ∪ L(⟨mk, nk⟩)

Given the positive data S = {(3, 1), (2, 2), (1, 3)}, the learner starts from {⟨1, 1⟩} ∈ T̃ and applies ρ̃:

  {⟨1, 1⟩}
  → {⟨1, 1⟩} ∪ ρ(⟨1, 1⟩) \ {⟨1, 1⟩} = {⟨2, 1⟩, ⟨1, 2⟩}
  → among ρ̃({⟨2, 1⟩, ⟨1, 2⟩}): {⟨2, 1⟩}, {⟨1, 2⟩}, {⟨2, 1⟩, ⟨2, 2⟩, ⟨1, 3⟩}, {⟨3, 1⟩, ⟨2, 2⟩, ⟨1, 2⟩}
  → ... → {⟨3, 1⟩, ⟨2, 2⟩, ⟨1, 3⟩}

The language of the final hypothesis set, L(⟨3, 1⟩) ∪ L(⟨2, 2⟩) ∪ L(⟨1, 3⟩), is the minimal union of quarter planes containing S.

[Figure: animation builds (merged here) showing the hypothesis sets and the staircase-shaped unions shrinking toward the data points of S.]
Lemma 4 (Our Contribution). ρ̃ satisfies [A-1] to [A-3] if (C, R, L) admits a refinement operator ρ satisfying [A-1] to [A-3] and satisfies [C-1] and [C-2].

Instantiated for the union concept class (C*, R*, L), the conditions read:
[A-1] For any P, Q ∈ R*, if L(Q) ⊆ L(P), then there exists a set of hypotheses Q' ∈ R* such that L(Q) = L(Q') and Q' ∈ ρ̃*(P).
[A-2] There is a finite T̃ ⊆ R* such that for any L(P) ∈ C*, there exist S ∈ T̃ and Q ∈ ρ̃*(S) satisfying L(P) = L(Q).
[A-3] There is no infinite sequence P1, P2, ... ∈ R* such that L(P1) = L(P2) = ··· and Pi+1 ∈ ρ̃(Pi) for all i ∈ ℕ⁺.
Theorem 1 (Ouchi & Yamamoto). (C, R, L) is learnable from positive data if (C, R, L) admits a refinement operator ρ satisfying [A-1] to [A-3].

Lemma 4 (Our Contribution). ρ̃ satisfies [A-1] to [A-3] if (C, R, L) admits a refinement operator ρ satisfying [A-1] to [A-3] and satisfies [C-1] and [C-2].

Theorem 2 (Our Contribution). (C*, R*, L) is learnable from positive data if (C, R, L) admits a refinement operator ρ satisfying [A-1] to [A-3] and satisfies [C-1] and [C-2].
Concluding Remarks

Conclusion:
We have proposed a non-trivial sufficient condition ([A-1] to [A-3] and [C-1] to [C-2]) for learning a union concept class from positive data, by introducing a refinement operator for the union class.

Future work:
There are base classes that are learnable with a refinement operator. We will check whether the union classes of these base classes are also learnable from positive data using our result.
Reference

Ouchi, S. and Yamamoto, A. 2010. Learning from positive data based on the MINL strategy with refinement operators. In Nakakoji, K., Murakami, Y., and McCready, E., eds., New Frontiers in Artificial Intelligence, volume 6284 of Lecture Notes in Computer Science, 345-357. Springer Berlin Heidelberg.
Formal Definition of Refinement Operators

Ouchi & Yamamoto (2010):
Let (C, H, L) be a concept class. A mapping ρ : H → 2^H is called a refinement operator on the class if it satisfies the following three conditions:
[R-1] For every h ∈ H, ρ(h) is recursively enumerable.
[R-2] For every h ∈ H, g ∈ ρ(h) ⇒ L(g) ⊆ L(h).
[R-3] There is no sequence h1, h2, ..., hn of hypotheses such that h1 = hn and hi+1 ∈ ρ(hi) for 1 ≤ i ≤ n-1 (no loops).

[Figure: g ∈ ρ(h) with L(g) ⊆ L(h); refinement produces no loops.]
Difference Between Bounded and Unbounded

Representation set R = {⟨0⟩, ⟨1⟩, ...} ∪ {⟨*⟩}
Language mapping:
  L(⟨n⟩) = {n}  if ⟨n⟩ ∈ {⟨0⟩, ⟨1⟩, ...}
  L(⟨*⟩) = ℕ

The class of languages C = {L(⟨n⟩) | ⟨n⟩ ∈ R} = {{0}, {1}, ...} ∪ {ℕ}

2-bounded unions of languages:
  C^2 = {L({⟨g⟩, ⟨h⟩}) | ⟨g⟩, ⟨h⟩ ∈ R} = {L(⟨g⟩) ∪ L(⟨h⟩) | ⟨g⟩, ⟨h⟩ ∈ R} ⊇ C

k-bounded unions of languages:
  C^k = {S | S ⊆ ℕ, |S| ≤ k} ∪ {ℕ} ⊇ C^{k-1} ⊇ ··· ⊇ C
Difference Between Bounded and Unbounded

For the 2-bounded union class C^2 above, a learner can output sets of hypotheses representing a minimal language that includes all inputs:

  Input:  1, 4, 5, 9, 7, ...
  Output: {⟨1⟩}, {⟨1⟩, ⟨4⟩}, {⟨*⟩}, {⟨*⟩}, {⟨*⟩}, ...

After receiving 3 or more distinct data, the learner outputs {⟨*⟩}, so C^2 is learnable.

For the unbounded union class C* = {S | S ⊆ ℕ, S is finite} ∪ {ℕ}, the same minimal-language strategy gives

  Input:  1, 4, 5, 9, 7, ...
  Output: {⟨1⟩}, {⟨1⟩, ⟨4⟩}, {⟨1⟩, ⟨4⟩, ⟨5⟩}, {⟨1⟩, ⟨4⟩, ⟨5⟩, ⟨9⟩}, ...

The learner never outputs {⟨*⟩}, so it fails on the target ℕ: this unbounded union class is NOT learnable.
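The contrast can be sketched directly (my own illustration of the slide's argument; the two functions implement the minimal-hypothesis strategies the slide describes):

```python
# Minimal-language learners for R = {<0>, <1>, ...} ∪ {<*>}, where L(<n>) = {n}
# and L(<*>) = N, in the 2-bounded and the unbounded union class.
from typing import List, Set, Union

Hypothesis = Union[Set[int], str]   # a finite set of singletons, or "*" for N

def bounded_2_learner(data: List[int]) -> Hypothesis:
    """Minimal hypothesis within the 2-bounded union class."""
    seen = set(data)
    return seen if len(seen) <= 2 else "*"   # forced to generalize to N

def unbounded_learner(data: List[int]) -> Hypothesis:
    """Minimal hypothesis within the unbounded union class: never generalizes."""
    return set(data)

stream = [1, 4, 5, 9, 7]
for k in range(1, len(stream) + 1):
    prefix = stream[:k]
    print(prefix, bounded_2_learner(prefix), unbounded_learner(prefix))
# The 2-bounded learner converges to "*" (i.e. N) after three distinct inputs,
# while the unbounded learner keeps outputting ever larger finite sets and so
# never identifies the target N.
```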
