KERAS
2019.3.10
nyh
ARTIFICIAL INTELLIGENCE
• The effort to automate intellectual tasks normally performed by humans
• Symbolic AI: handling knowledge by writing a sufficiently large set of explicit rules
MACHINE LEARNING
• Conventional programming: rules + data → answers
• Machine learning: data + answers → rules
LEARN REPRESENTATION FROM DATA
• 3 prerequisites (speech recognition example)
• Input data: recorded sound files
• Answer: transcriptions of the input sound files
• Measure of performance: the difference between the answer and the output, used as a feedback signal for learning
• Learning is searching, within a predefined hypothesis space and guided by the feedback signal, for useful transformations of the input data
KERAS
โ€ข Keras in Tensorflow
DATASET
NETWORK
• Layer: the core building block of a neural network / a data-processing filter
• Sequential model
• 2 layers
• Hidden activation function: relu
• Output activation function: softmax
NETWORK COMPILE
• Three things specified in the compile step
• Loss function: measures the network's performance
• Optimizer: the mechanism through which the network updates itself
• A metric to monitor during training and testing: accuracy (the fraction of correctly classified images) for this example!
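The two-layer Sequential model and its compile step can be sketched in code as follows (a minimal sketch; the 512 hidden units and 10 output classes are assumptions matching the MNIST example, since the slide does not give layer sizes):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Two-layer Sequential model: a relu hidden layer and a softmax output layer.
# 512 units and 10 classes are assumed values for the MNIST example.
model = keras.Sequential([
    keras.Input(shape=(28 * 28,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(10, activation="softmax"),
])

# Compile step: loss function, optimizer, and a metric to monitor.
model.compile(optimizer="rmsprop",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```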
PREPROCESS
โ€ข Data preprocess
โ€ข Shape: (60000, 28, 28) => (60000, 28*28)
โ€ข Value: [0,255] uint8 => [0,1] float32
One-hot encoding ( one-dimensional labels => (n, k) n: number of data, k: number of classes)
โ€ข Train ์‹œ์— Keras ์—์„œ๋Š” fit ์‚ฌ์šฉ
โ€ข Test ์‹œ์—๋Š” evaluate ์‚ฌ์šฉ
DATA
• Tensor: a generalization of matrices to an arbitrary number of dimensions
• Scalar
• 0D tensor
• Just a value
• Vector
• 1D tensor
• Array
• Ex. [1,2,3,4,5]
• Ambiguous!! 5D vector / 5D tensor
โ€ข Matrix
โ€ข 2 axises
โ€ข Row(ํ–‰) / Column(์—ด)
โ€ข A = np.array([[1,2,3,4,5], [6,7,8,9,10], [0,0,0,0,0]]) / A.shape => (3,5)
โ€ข 3D tensor
โ€ข Array of Matrix
โ€ข ์ง์œก๋ฉด์ฒด
โ€ข Key Attribute
โ€ข Axis(์ถ•, rank): ndim
โ€ข Shape: ํ…์„œ์˜ ๊ฐ ์ถ•์„ ๋”ฐ๋ผ ์–ผ๋งˆ๋‚˜ ๋งŽ์€ ์ฐจ์›(dimension)์ด ์žˆ๋Š”์ง€ ๋‚˜ํƒ€๋‚ด๋Š”
tuple. Length
โ€ข Ex. [1,2,3,4,5] => (5,)
โ€ข Data type: ํ…์„œ์— ๋‹ด๊ธด ๋ฐ์ดํ„ฐ์˜ ํƒ€์ž…. Float32, uint8, float64
โ€ข ์‚ฌ์ „์— ํ• ๋‹น๋œ ๋ฉ”๋ชจ๋ฆฌ(๋™์ ํ• ๋‹น X)์ด๋ฏ€๋กœ ๊ฐ€๋ณ€๊ธธ์ด ์ง€์› X
SLICING
• Tensor slicing: selecting specific samples or sub-tensors along the axes of a tensor
BATCH
• Deep-learning models don't process an entire dataset at once; they split the data into small batches for training
• The first axis of every data tensor used in deep learning is called the sample axis => the number of data samples
TENSOR WITH EXAMPLES
• Vector data: the most common form of data
• A 2D tensor of shape (samples, features)
• Ex. rows: number of samples; columns: age, ZIP code, income, etc.
• Sequence data or timeseries data
• A 3D tensor of shape (samples, timesteps, features)
• A timestep can mean actual time, or simply an ordering
• Ex. stock-price dataset (time): recording the current price and the last minute's high/low every minute, one trading day can be stored as a (390, 3) 2D tensor; 250 days of data: (250, 390, 3)
• Ex. tweet dataset (sequence): with an alphabet of 128 characters and sequences of 280 characters, a dataset of 1 million tweets is (1000000, 280, 128). cf. each character is one-hot encoded as a 128-D binary vector
TENSOR WITH EXAMPLES
• Image: a 4D tensor of shape (samples, height, width, channels) or (samples, channels, height, width)
• MNIST dataset (batch, height, width, channels) → (N, 28, 28, 1)
• Video: a 5D tensor of shape (samples, frames, height, width, channels) or (samples, frames, channels, height, width)
• A 60-second 144×256 YouTube clip sampled at 4 frames per second has 240 frames; a batch of 4 such clips is stored in a (4, 240, 144, 256, 3) tensor
TENSOR OPERATION
โ€ข keras
ELEMENT-WISE OPERATION
• Element-wise operation: an operation applied independently to each element of a tensor
• NumPy: uses BLAS (C/Fortran). So fast!!
BROADCASTING
• Of two tensors A and B, the smaller tensor is stretched to match the shape of the larger one
• Axes are added to the smaller tensor to match the rank (number of axes) of the larger tensor
• The smaller tensor is repeated along the new axes to match the full shape of the larger tensor
This is only what happens conceptually; in practice no new 2D tensor is actually created
TENSOR DOT
• Called matmul in TensorFlow, but dot in Keras (and also in NumPy)
• The most useful tensor operation (also called the "tensor product")
• Unlike element-wise operations, it combines elements of the input tensors
• The dot of two vectors is a scalar => only vectors with the same number of elements can be dotted
• Matrix · vector => the matrix's column count must equal the vector's element count
• i.e., matrix.shape[1] == vector.shape[0]
• The return value is a vector of size matrix.shape[0]
TENSOR DOT
• Matrix · matrix => the number of columns of A must equal the number of rows of B
• i.e., A.shape[1] == B.shape[0]
• The return value is a matrix of shape (A.shape[0], B.shape[1])
โ€ข (a, b) โˆ™ (b, c) โ†’ (a, c)
โ€ข (a, b, c, d) โˆ™ (d,) โ†’ (a, b, c)
โ€ข (a, b, c, d) โˆ™ (d, e) โ†’ (a, b, c, e)
RESHAPING
• Rearranging rows and columns to match a target shape
• Use NumPy's .reshape() to change the shape
• Most commonly used special case: transposition: exchanging rows and columns
ํ…์„œ ์—ฐ์‚ฐ์˜ ๊ธฐํ•˜ํ•™์  ํ•ด์„
โ€ข ์•„ํ•€๋ณ€ํ™˜(affine transformation) ํšŒ์ „, ์Šค์ผ€์ผ๋ง(scaling) ๋“ฑ ๊ธฐ๋ณธ์ ์ธ ๊ธฐ
ํ•˜ํ•™์  ์—ฐ์‚ฐ์„ ํ‘œํ˜„
โ€ข Affine transformation: ์ , ์ง์„ , ํ‰๋ฉด์„ ๋ณด์กดํ•˜๋Š” ์•„ํ•€ ๊ณต๊ฐ„์œผ๋กœ์˜ ๋ณ€ํ™˜
โ€ข ๊ฑฐ๋ฆฌ์˜ ๋น„์œจ๊ณผ ์ง์„ ์˜ ํ‰ํ–‰์„ ์œ ์ง€ํ•˜๋Š” ์ด๋™, ์Šค์ผ€์ผ๋ง, ํšŒ์ „
โ€ข Affine Space : ์•„ํ•€๊ณต๊ฐ„์€ ๋ฒกํ„ฐ๊ณต๊ฐ„์„ ํ‰ํ–‰์ด๋™ํ•œ ๊ฒƒ
โ€ข ๋”ฐ๋ผ์„œ, Fully-Connected layer๋ฅผ Affine layer๋ผ๊ณ ๋„ ํ•จ



Editor's Notes

  1. Programming -> Learning (training)
  2. In machine-learning classification problems, the categories to classify are called "classes", each data point is called a "sample", and the class of a sample is called its "label".
  3. This vector's dimension: 1 / ndim: 1. Number of elements: 5 -> it is called a 5-dimensional vector, but don't confuse a 5D vector with a 5D tensor! "Dimension" can mean either the number of elements along a particular axis or the number of axes of a tensor (5 axes), which is the source of the confusion. The latter is also called a rank-5 tensor.