We share the latest techniques, trends, and examples of AI-based media art. In particular, we look at techniques for art that use deep learning and examine related works. This seminar was organized as part of the 2019 Open Media Art exhibition seminar held at the KEPCO Art Center (February 10, 2 p.m.).
Exhibition link - https://vmspace.com/news/news_view.html?base_seq=NDM5
AI - Media Art. Artificial Intelligence and Media Art
1. Media Art with AI
AI-based media art techniques and examples
2019.2
A.DAT - Open Media Art exhibition seminar
Taewook Kang, Ph.D. in Engineering
laputa99999@gmail.com
sites.google.com/site/bimprinciple
3. AI
A.DAT Open Media Art
Evolution of interest in the Google search query "deep learning". Obtained via Google Trends (https://trends.google.com/trends/).
17. DL
import keras
import numpy as np
from keras.models import Sequential
from keras.applications import vgg16, inception_v3, resnet50, mobilenet
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.applications.imagenet_utils import decode_predictions
import matplotlib.pyplot as plt

# Load the VGG16 model with ImageNet weights
vgg_model = vgg16.VGG16(weights='imagenet')

filename = '/home/ktw/tensorflow/door1.jpg'
# Load the image in PIL format
original = load_img(filename, target_size=(224, 224))
print('PIL image size', original.size)
plt.imshow(original)
plt.show()

# Convert the PIL image to a numpy array of shape (height, width, channel)
numpy_image = img_to_array(original)
plt.imshow(np.uint8(numpy_image))
plt.show()
print('numpy array size', numpy_image.shape)
18. DL
# Convert the image to batch format: add a dimension on a new axis
# so the network input shape becomes (batch_size, height, width, channels)
image_batch = np.expand_dims(numpy_image, axis=0)
print('image batch size', image_batch.shape)
plt.imshow(np.uint8(image_batch[0]))

# Preprocess the input for the VGG16 model
processed_image = vgg16.preprocess_input(image_batch.copy())

# Predict the probability that the image belongs to each class
predictions = vgg_model.predict(processed_image)

# Convert the predicted probabilities to class labels; show the top 5 predicted classes
label = decode_predictions(predictions)
print(label)
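The batch-dimension step above can be checked with plain NumPy; a minimal sketch, where the zero-filled array is a dummy stand-in for the loaded image:

```python
import numpy as np

# A dummy image in (height, width, channels) layout, as img_to_array returns
numpy_image = np.zeros((224, 224, 3), dtype=np.float32)

# Add a batch axis so the shape becomes (batch_size, height, width, channels)
image_batch = np.expand_dims(numpy_image, axis=0)
print(image_batch.shape)  # (1, 224, 224, 3)
```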
19. DL
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
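The softmax and mean cross-entropy used in the MNIST example can be reproduced in plain NumPy; a minimal sketch, using dummy logits and one-hot targets rather than real MNIST data:

```python
import numpy as np

def softmax(z):
    # Subtract the row max before exponentiating, for numerical stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Dummy logits for a batch of 2 samples and 10 classes
logits = np.array([[2.0, 1.0] + [0.0] * 8,
                   [0.0, 3.0] + [0.0] * 8])
y = softmax(logits)

# One-hot targets: class 0 for the first sample, class 1 for the second
y_true = np.eye(10)[[0, 1]]

# Mean cross-entropy, matching tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y)))
cross_entropy = np.mean(-np.sum(y_true * np.log(y), axis=1))
print(cross_entropy)
```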
20. DL
from keras.models import Sequential
from keras.layers import LSTM, Dense
import numpy as np

data_dim = 16
timesteps = 8
num_classes = 10

# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(LSTM(32, return_sequences=True,
               input_shape=(timesteps, data_dim)))  # returns a sequence of vectors of dimension 32
model.add(LSTM(32, return_sequences=True))  # returns a sequence of vectors of dimension 32
model.add(LSTM(32))  # returns a single vector of dimension 32
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop',
              metrics=['accuracy'])

# Generate dummy training data
x_train = np.random.random((1000, timesteps, data_dim))
y_train = np.random.random((1000, num_classes))
x_val = np.random.random((100, timesteps, data_dim))
y_val = np.random.random((100, num_classes))

model.fit(x_train, y_train, batch_size=64, epochs=5, validation_data=(x_val, y_val))
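The dummy labels above are drawn uniformly at random; with categorical_crossentropy, one-hot targets are the usual choice. A minimal NumPy sketch of generating one-hot dummy labels instead:

```python
import numpy as np

num_classes = 10
num_samples = 1000

# Random integer class labels, then one-hot encode by indexing an identity matrix
labels = np.random.randint(0, num_classes, size=num_samples)
y_train = np.eye(num_classes)[labels]

print(y_train.shape)  # (1000, 10)
```

Each row then contains a single 1 in the column of the true class, which is the form categorical_crossentropy expects.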
21. AI tools
AI · GPU · TPU · Open data · Open source · Collective Intelligence
24. Open source
http://guswnsxodlf.github.io/software-license
GNU General Public License (GPL) 2.0
– Strict obligations: source code must be provided when the software is modified or linked
GNU Lesser GPL (LGPL) 2.1
– Copyright notice and LGPL notice required; source code of a modified library must be disclosed
Berkeley Software Distribution (BSD) License
– No obligation to disclose source code; unrestricted use in commercial software
Apache License
– Similar to BSD; no obligation to disclose source code
Mozilla Public License (MPL)
– No obligation to disclose source code; modified code must be distributed under the MPL
MIT License
– Only requires the license and copyright notice
48. Media Art + AI
Blade Runner—Autoencoded, 2016
49. Media Art + AI
flyAI, 2017
50. Media Art + AI
flyAI, 2017
51. Media Art + AI
Biometric Mirror: and if you were perfect?
In Biometric Mirror, the artist Lucy McRae sits us in front of a mirror in a futuristic beauty salon, where the salon itself gives us a new image of ourselves, born of an algorithm.
52. Media Art + AI
entangled, an infinite small space, 2018
53. Future of media art
ART
AI
MR
Robotics
IoT
54. Reference
Taewook Kang, 2017, Machine Learning / Deep Learning Neural Networks: Concepts, Types, and Development
Taewook Kang, 2018, Implementing an Image Recognition Deep Learning Model with Keras
ARS Electronica Festival 2017, Media Art between Natural and Artificial Intelligence, 2017
David Bowen, flyAI, www.dwbowen.com/flyai
Lucy McRae, 2019.1, Biometric Mirror
ImageNet classification with Python and Keras
Keras Tutorial: Using pre-trained ImageNet models
Models for image classification with weights trained on ImageNet
Google, TensorFlow manual
YOLO: real-time object detection (paper)
ConvNetJS
carpedm20.github.io/faces
57. Cloud platform – MQTT, Raspberry Pi, Blynk, IFTTT
[Diagram: wireless sensor packing → gateway → IoT control protocol → big data analysis → IoT connection service]
A BIM Analysis of HVAC and Radiant Cooling Solutions, Robert Cubick, 2016
KICT