8. Exploring Themes and Bias in Art using Machine Learning
Image Analysis
1. Compare three different convolutional
neural networks.
2. Add transparency and interpretability to our
models.
3. Implement a multi-label classification
model.
Quoted from the paper[2].
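The third goal above, multi-label classification, can be sketched with a toy example (all numbers are illustrative, not from the paper): unlike single-label classification with a softmax, a multi-label head applies an independent sigmoid to each label's logit, so several labels can be active for the same artwork at once.

```python
import numpy as np

def sigmoid(z):
    """Element-wise logistic function: maps logits to independent probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logits from a network head for four theme labels,
# e.g. ["portrait", "landscape", "religious", "still life"].
logits = np.array([2.0, -1.0, 0.5, -3.0])

# Multi-label: each label gets its own probability, and more than one
# can exceed the decision threshold at the same time.
probs = sigmoid(logits)
predicted = probs > 0.5  # here labels 0 and 2 are both predicted
```

With a softmax, the probabilities would be forced to sum to one and only a single label could win; the sigmoid head is what makes "portrait" and "religious" simultaneously possible.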
9. Association between TED talk and the paper
In the TED talk, a neural network
model was trained to output "bird"
when shown a picture of a bird.
In this paper, the authors implemented a
model that is shown a work of art and
outputs what the picture depicts.
Quoted from the video[4].
10. How do computers analyse images?
In this case, they used a technique called
convolutional neural networks (CNNs). This
model is capable of learning complex features of
an image.
Furthermore, to reduce the computational time
required to train the model, they used networks
pre-trained on ImageNet, a large image database.
Lastly, since very deep CNNs with many hidden
layers are difficult to train, they used the residual
learning frameworks ResNet50, ResNet101, and
Inception-ResNet-V2 for comparison.
Quoted from the paper[2].
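What a convolutional layer computes can be illustrated with a minimal sketch (a toy in plain numpy, not the paper's implementation): sliding a small kernel over an image produces a feature map that responds to local patterns such as edges.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image: dark left half, bright right half.
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# A vertical-edge kernel: responds only where brightness changes left-to-right.
kernel = np.array([[-1.0, 1.0]])

response = conv2d(image, kernel)
# The response is non-zero only at the boundary column, i.e. the "edge".
```

A CNN stacks many such learned kernels, so deeper layers can combine simple edge responses into the complex features the slide mentions.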
11. Major point of the article
Explanations of different
machine learning methods.
Article: "A Brief Review of Machine Learning and Its Application",
by Wang, Hua, Cuiqin Ma, and Lijuan Zhou.
12. Why is it related to the
TED Talk video?
We need to teach machines how to become creative.
This article shows ways of doing so.
14. Problems
● Some occupations may be lost to machines (illustrator, novelist,
administrative assistant, etc.).
● Which learning method fits which purpose?
● Just using the methods from the article will not make machines creative.
● How will we combine different methods to make machines more creative?
15. Benefits for humanity
● People will no longer need to struggle to create something from scratch.
● It will be easier to meet creative demands.
● Creative work will take less time and effort.
● The result is greater efficiency in work: the second article successfully
automated the task of annotating artworks. In the future, we can contribute to
humanity by focusing more resources on tasks that only people can do.
16. Conclusion
● Any creature that is able to perceive is also able to create, because
perception and creation rely on exactly the same mechanisms.
● Machines can be creative by executing a reverse method of convolution:
deconvolution.
● The analysis of images is based on a combination of convolutional neural
networks (CNNs), pre-training, and a residual learning framework.
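The "reverse method" in the second bullet can be sketched as a stride-1 transposed convolution (a simplified toy in numpy, not the exact operation from the talk): instead of shrinking an image into features, it spreads each feature value back over a kernel-sized patch, increasing spatial resolution.

```python
import numpy as np

def conv_transpose2d(feature_map, kernel):
    """Transposed ("de-") convolution, stride 1, no padding: each feature
    value is spread over a kernel-sized patch of the larger output."""
    fh, fw = feature_map.shape
    kh, kw = kernel.shape
    out = np.zeros((fh + kh - 1, fw + kw - 1))
    for i in range(fh):
        for j in range(fw):
            out[i:i + kh, j:j + kw] += feature_map[i, j] * kernel
    return out

# A 2x2 feature map grows back into a 3x3 image with a 2x2 kernel.
features = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
kernel = np.ones((2, 2))

image = conv_transpose2d(features, kernel)
# image.shape == (3, 3): resolution increased, which is how deconvolution
# lets a network generate pictures rather than only classify them.
```

Running convolution "in reverse" this way is the mechanical core of the talk's claim that the same machinery used for perception can also be used to create.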
17. References
1. Chaney, Allison J. B., et al. "Nonparametric Deconvolution Models." arXiv
preprint arXiv:2003.07718, 2020.
2. Surapaneni, Sudeepti, Sana Syed, and Logan Yoonhyuk Lee. "Exploring
Themes and Bias in Art Using Machine Learning Image Analysis." 2020
Systems and Information Engineering Design Symposium (SIEDS), IEEE,
2020, pp. 1-6.
3. Wang, Hua, Cuiqin Ma, and Lijuan Zhou. "A Brief Review of Machine Learning
and Its Application." 2009 International Conference on Information Engineering
and Computer Science, IEEE, 2009.
4. Blaise Agüera y Arcas. "How Computers Are Learning to Be Creative." TED,
2016.