A set of programming assignments about using Python and OpenCV for implementing Digital Image Processing. Starting with easy problems, they gradually become more challenging. This file is the concatenation of 4 series of practical assignments from a Digital Image Processing course in the CE department of Sharif University of Technology.
1. 10/6/2019 first_page - Google Docs
https://docs.google.com/document/d/1KNos3JPXzHM77fr200wA67EJm5r2fdgRD2kM4U-o98A/edit 1/2
Implementing Digital Image Processing:
A Hands-On Set of Programming Problems Using
Python and OpenCV
Digital Image Processing Course,
by Professor Shohreh Kasaei <pkasaei@gmail.com>
at CE Department, Sharif University of Technology
Created and Written by the Teaching Assistant of the course,
Nader Karimi Bavandpour <Nader.karimi.b@gmail.com>
Image Processing Laboratory (IPL)
<http://ipl.ce.sharif.edu>
CE Department, Sharif University of Technology
Spring, 2019
About this problem set:
This file is the concatenation of 4 series of programming assignments which I have
designed for the Digital Image Processing course at the CE department of the Sharif
University of Technology in the spring of 2019. The differences in their styles exist because I created them at different times during that course. These problems have some
additional resource files, which you can find at this link. They are designed to be easy at the
beginning and gradually get harder and more challenging. I’ve also put an installation guide
for using OpenCV on Linux before the assignments.
Back then in the spring of 2019, I already had a bank of problems for Digital Image
Processing using Matlab. But I was pretty determined to shift the assignments towards
using python and its most comprehensive relevant library, OpenCV. I started searching the
internet and I even took a look at some good books, but I couldn’t find a single good source
of practical assignments that could teach the students what I wanted them to learn. So I created this problem set; some of its problems are designed purely by myself, and some are adapted from the internet, for example from OpenCV’s official docs.
Unfortunately, I have not kept track of the links to the sources of some problems which are
adapted from the Internet, because I was (and I am) using them solely for educational
purposes and not a single penny is made from them. I’m ready to update this file and put
references in it if someone informs me and asks me to.
Everyone is welcome to use this file for non-profit purposes. You can adapt and use the problems without putting any reference in them. Please email me and let me know where and how you are going to use it and make me happy!
I hope this problem set can help you get started with implementing Digital Image Processing using OpenCV and Python, which are among the most powerful tools available for this task today.
Nader Karimi Bavandpour,
Computer Vision and Deep Learning expert,
Fall of 2019
3. 10/6/2019 OpenCV_Linux_Installation_Guide - Google Docs
https://docs.google.com/document/d/1LC6XlasyLBa21YvGfipwcheRSzab3f_PqUBiRsFIB30/edit 1/9
Preparing Ubuntu 18.x-Based Linux
for Using OpenCV 4.1,
Python 3.7.x and PyCharm 2018.3.x
Professor Shohreh Kasaei
pkasaei@gmail.com
Written by Nader Karimi
Nader.karimi.b@gmail.com
Digital Image Processing Course
Spring, 2019
Sharif University of Technology
Installing Python 3.7 on Ubuntu 18.x-based Linux
Installing python 3.7
Run the sequence of commands below:
sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt install python3.7
If those commands run successfully, you can run Python 3.7 by typing:
python3.7
If you want to use python3.7, be sure to type its name completely. If you just type python3, you
may start python 3.6.x instead.
Finding the absolute path to the python interpreter
You can find the path to the installed interpreter by typing:
which python3.7
Which in my case is:
/usr/bin/python3.7
Installing pip
You should install pip, which is a powerful package manager for Python. Here is how to install it for Python 3:
sudo apt install python3-pip
Installing and Using Virtual Environments for Python
Installing virtualenv
python3.7 -m pip install --user virtualenv
Creating a new virtualenv with python3.7 as the default python interpreter
First, create a directory for storing your virtual environments’ data. You can use your home
directory for example:
cd
mkdir myenvs
cd myenvs
Now you are in the right path to create a new virtualenv. Create a new one by typing:
python3.7 -m virtualenv env
Activating a virtualenv
First, change your working directory to the environments folder. In our case:
cd ~/myenvs
Then activate the environment you want, for example “env”, by typing:
source env/bin/activate
After activation, your shell prompt is prefixed with the environment’s name, for example (env). Note that inside the environment you can type just “python”, without a version number, to start Python 3.7.
If you check Python’s absolute path, you will notice that it now points to a file inside the directory of the activated virtualenv:
/home/nader/env/bin/python
Sometimes it will still point to the system-wide Python interpreter. If that happens for you, it’s OK.
Deactivating the virtualenv
Use this simple command to deactivate the activated virtualenv:
deactivate
Installing and Using Anaconda for Scientific
Programming with Python
Downloading and Installing Anaconda
Download Anaconda for Linux from here. Don’t forget to choose the Python 3.7 version.
Change your directory to the download folder. Then enter the command below (check to see if
the name of your file is different):
bash Anaconda3-5.2.0-Linux-x86_64.sh
and answer the prompts until the installation finishes.
Creating a new virtualenv with python3.7.2 as the default python using conda
Conda is the package manager that comes with Anaconda, somewhat similar to pip. We can create and manage virtual environments with it, too.
Here we create a new virtual environment using conda. We specify that we want the python to
be version 3.7.2, although we have not installed this version on our machine. Conda will handle
downloading and installing it for us! Also, we don’t need to think about where to store our
virtualenvs since conda has a default path for that.
Type this command to create a new virtualenv named “cenv” (which stands for conda env):
conda create -n cenv python=3.7.2
Finish the process by answering the prompts it gives you. Note that we specify the name of the virtualenv after “-n”.
Activating and deactivating a virtualenv using conda
Activating and deactivating a virtualenv is easy with the command “source”. To activate the environment “cenv”:
source activate cenv
and to deactivate it:
source deactivate
We don’t have to worry about any system path while working with conda virtualenvs.
Note that instead of the command ‘source’, you can use the equivalent command ‘conda’. For
example, instead of using:
source activate
you can use:
conda activate
Installing OpenCV 4.1 in an Activated Virtualenv
First, activate the virtualenv you want to use. Then type this command to install opencv version
4.x.x (the latest version will be installed):
pip install opencv-python
For installing other versions, find the appropriate command from this link.
You can check if it is installed correctly like this:
Installing and Configuring PyCharm for Using a Virtual
Environment
First, download PyCharm from here. You may want to download the free Community edition. Next, extract the tar.gz file using this command:
tar -xvzf pycharm_filename.tar.gz
Then go to the PyCharm folder, then into its “bin” folder, and start PyCharm by running the pycharm.sh script there.
Note that there is a file named “Install-Linux-tar.txt” in the PyCharm folder. You can check that
for further information about using PyCharm.
As we like to work with virtualenvs, we should instruct PyCharm to use our desired virtualenv. Start PyCharm and create a new project. Open the Project Interpreter drop-down menu, choose “Existing interpreter” and click on the button to the right of it. Another window will pop up. There is a button to the right of the Interpreter text field. Click on that, and in the opened window, enter the path of the Python interpreter of your virtualenv. You can find that path by using the “which” command, as explained before. Commit these changes and finish creating the new project.
Finally, you can create a launcher for PyCharm, so that you won’t have to start it from the command line each time. Start PyCharm, go to Tools, and click on “Create Desktop Entry …”.
Enjoy!
Digital Image Processing
Professor Kasaei
Assignment 1: Practical Problems
Due: Esfand 13, 1397
Hello everyone.
Welcome to the Digital Image Processing course. I hope you start to enjoy coding computer-vision tasks, which is a very fun thing to do. If you have any questions, feel free to e-mail me and ask.
As it is the first assignment, I stress the importance of reading the file ‘rules_considerations’
which we have uploaded to the courseware website. Those are rules which you should follow for
all of the assignments.
Finally, there are questions labeled as extra points. Their sole purpose in this assignment is to compensate for any points you may have missed in this practical part of assignment 1, and nothing else. By the way, I think they are more fun to do than the other problems.
Best regards,
Nader Karimi
e-mail: nader.karimi.b@gmail.com
Problems
Question 1: Implement Kronecker product between two matrices. The only library you are
allowed to use is Numpy. You should complete the method kron, in the module named ‘math’
which is placed under the package ‘ipl_utils’. We imported this module in the file named ‘q1.py’
and prepared some code for you to complete.
1. We have defined 4 pairs of numpy variables. Some of them are initialized to None. Please
make changes to the code and initialize them as the comments say. You can use this link
to sample from a Normal distribution using numpy.
2. Compute the Kronecker products of each pair using ipl_math.MatrixOperations.kron
3. Compute the Kronecker products of each pair using numpy.kron. Related docs can be
found here.
4. Use assert statements to make sure your function has computed the true values. In the case of floating-point matrices, the assertion may fail due to differences between your implementation and Numpy’s. To fix this, first convert your floating-point matrices to integer ones; there is a sample line of commented code in the provided Python file, which uses the astype method. You have to perform this step regardless of whether the assert statement happens to pass without it.
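As an illustrative sketch of step 4 (the variable names here are hypothetical, and numpy.kron stands in for your own kron until you have written it):

```python
import numpy as np

# Hypothetical pair of floating-point matrices, as in step 1.
a = np.random.normal(size=(2, 3))
b = np.random.normal(size=(4, 2))

mine = np.kron(a, b)   # stand-in for ipl_math.MatrixOperations.kron(a, b)
ref = np.kron(a, b)    # Numpy's reference implementation

# Cast to integer matrices before asserting, so tiny floating-point
# differences between the two implementations cannot fail the check.
assert (mine.astype(np.int64) == ref.astype(np.int64)).all()
```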
Question 2: Read all of the images in the folder ‘q2_images’ using OpenCV’s function imread and a for loop. imread returns a numpy array. Code in the ‘q2.py’ file.
1. Store all of these arrays in a python list.
2. In python jargon, we call a .py file which contains class and function implementations, a
‘module’. Create a module and name it interpolate.py under the ‘ipl_utils’ package. In
that module, write a function named avg_interpolate which takes a Numpy array as an
image, does interpolation on it and then returns it. This function must scale the width of
the image by 2. You can use the average of the two nearest pixels for interpolation. Be sure to check the results and make sure you have not scaled the height instead of the width. Use this function in the ‘q2.py’ file.
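One possible sketch of such a function, assuming a 2-D grayscale array; the choice of repeating the last column at the right edge is our own:

```python
import numpy as np

def avg_interpolate(img):
    """Scale the width of a 2-D grayscale image by 2. New columns are
    filled with the average of the two horizontally nearest pixels
    (the last new column just repeats the last original column)."""
    h, w = img.shape
    out = np.empty((h, 2 * w), dtype=img.dtype)
    out[:, 0::2] = img                          # keep original columns
    right = np.roll(img, -1, axis=1)            # each pixel's right neighbor
    right[:, -1] = img[:, -1]                   # no neighbor past the edge
    out[:, 1::2] = ((img.astype(np.float64) + right) / 2).astype(img.dtype)
    return out
```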
Question 3: In this problem, we want to take the Fourier transform of an image, apply a low-pass filter to it and then take the inverse Fourier transform to get the manipulated image. You can use this tutorial as a baseline for your implementation. Read all of the images in the folder ‘q3_images’ using OpenCV’s function imread and a for loop. Code in the ‘q3.py’ file.
1. Apply low-pass filters which preserve 1/10, 1/5 and 1/2 of the whole range of frequencies along each axis. Then apply the inverse Fourier transform to see what happened to the image. You must have 3 results.
2. [Extra points] We have provided a file named ‘trackbar.py’ for you. Run it and see what happens. Here is where we found this code. Use it as a baseline: write code that computes, from the trackbar’s value, what fraction of the frequency range must be preserved. Then take the inverse Fourier transform and show the image below the trackbar.
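A minimal numpy sketch of the idea in part 1 (function and variable names are our own, and a 2-D grayscale image is assumed):

```python
import numpy as np

def fft_lowpass(img, keep_fraction):
    """Zero out all but the central keep_fraction of frequencies along
    each axis of a 2-D image, then transform back."""
    f = np.fft.fftshift(np.fft.fft2(img))       # low frequencies at center
    h, w = img.shape
    ch, cw = h // 2, w // 2
    rh = int(h * keep_fraction / 2)
    rw = int(w * keep_fraction / 2)
    mask = np.zeros((h, w))
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1  # keep the central block
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```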
Question 4: Use the red channel of the image ‘give_red.jpg’ to be the red channel of the image
‘base.jpg’ and save the result into a file. Code in the ‘q4.py’ file.
● [Extra points] As you may have noticed, by changing the red channel of the whole base.jpg image we have altered its color. Can you devise a method that outputs the same result, but preserves the true color of ‘base.jpg’ wherever there is no trace of ‘give_red.jpg’? We have not fixed any specific right answer for this question, and we look forward to seeing your innovations.
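The basic channel swap of Question 4 can be sketched with plain numpy slicing; the toy arrays below stand in for the real images, and BGR channel order is assumed, as OpenCV’s imread returns:

```python
import numpy as np

# Toy stand-ins for base.jpg and give_red.jpg, in BGR order as
# OpenCV's imread would return them.
base = np.zeros((4, 4, 3), dtype=np.uint8)
give_red = np.full((4, 4, 3), 200, dtype=np.uint8)

result = base.copy()
result[:, :, 2] = give_red[:, :, 2]   # red channel is index 2 in BGR
```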
15. 10/6/2019 hw2_practical - Google Docs
https://docs.google.com/document/d/17CCDela47Kj8xPsKAKVYUr8Iw7siRWGDmLsQo1-XAJg/edit 1/5
Assignment 2: Practical Problems
Due: Ordibehesht 17, 1398
Please note:
● If you find non-trivial mistakes in the questions or codes, please email me and ask.
● Use OpenCV’s imwrite to save images and pyplot.savefig to save figures to files. If you have an RGB image, convert it to BGR before using imwrite.
● Pay attention to the order of color channels and the range of values that your variables contain. I have included some print statements in the codes to draw your attention to the range of values. You will lose points if your code loses information from the variables due to such mistakes.
● In all of the problems, have numbered sections in your report, briefly explain the problem and your solution, and show and justify your results.
Nader Karimi
email: nader.karimi.b@gmail.com
Problem 1: histogram equalization and color spaces
In this problem, we want to enhance an image of a brain in grayscale color space. We will also work on enhancing a color image of stellar winds, which was taken by the Hubble Space Telescope!
1.1. Open p1.py. Load the image named brain.jpg into a variable named brain_light, brain_darker.jpg into brain_dark, and nasa.jpg into nasa_colored. brain_light and brain_dark must be in grayscale space, and nasa_colored must be in RGB space.
Note: OpenCV’s method imread gives BGR images by default.
1.2. Write a function hist_eq(img) that takes a two-dimensional matrix as a grayscale image, enhances it by histogram equalization and returns it.
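A compact sketch of what hist_eq could look like for an 8-bit grayscale image, using the normalized CDF as a lookup table (cv2.equalizeHist is the library counterpart):

```python
import numpy as np

def hist_eq(img):
    """Histogram-equalize an 8-bit grayscale image: build the
    histogram, turn its cumulative sum into a [0, 255] lookup
    table, and map every pixel through it."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                          # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[img]
```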
1.3. Use the hist_eq function to enhance the three images. For nasa_colored, apply hist_eq to each of its three channels separately and save the result into a variable named nasa_separate.
1.4. Convert nasa_colored to HSV and save it in a new variable named nasa_hsv. Apply hist_eq to the Value channel only, convert the image back to RGB and save it into a variable named nasa_enhanced.
1.5. Save the data that the table below describes. For ‘brain_x’ in the table, replace ‘x’ with ‘light’ or ‘dark’.
Your report must at least cover these points:
● What do the CDF graphs look like after applying hist_eq, and whether they follow the theory of histogram equalization.
● Compare the results of brain_dark and brain_light. Justify your answer.
● Briefly talk about HSV color space, its usages and why it’s suitable for this problem.
● Explain the difference between nasa_enhanced and nasa_colored.
Data                                                File name
Histogram of the original image                     brain_x_hist_org.jpg (example: brain_dark_hist_org.jpg)
Histogram after applying hist_eq                    brain_x_hist_eq.jpg
CDF of the original image                           brain_x_cdf_org.jpg
CDF after applying hist_eq                          brain_x_cdf_eq.jpg
Enhanced brain_light                                brain_light_enhanced.jpg
Enhanced brain_dark                                 brain_dark_enhanced.jpg
Enhanced nasa_colored, each RGB channel separately  nasa_separate.jpg
Enhanced nasa_colored, using HSV                    nasa_enhanced.jpg
Problem 2: deconvolution and inverse filtering
Physical image-acquisition devices usually add some distortion to the real image that we are after. This is a common problem in optical imaging systems. We can model this type of distortion with convolution. The Point Spread Function (PSF) of the optical system can be measured and used for inverse filtering.
In this problem, we first convolve an image with a PSF to get a distorted image. Then we use Richardson–Lucy deconvolution, the Wiener filter and inverse filtering to recover the original image.
2.1. Complete the file p2.py as the comments in that code ask you to. First go to miscellaneous.py and learn how to use the imshow function. Then complete the function sum_of_absolutes(m1, m2). These two functions are imported into p2.py.
2.2. In the rest of p2.py, use the convolution property of Fourier transform to implement inverse
filtering.
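By the convolution property, the blur is a pointwise product in the frequency domain, so inverse filtering is a pointwise division. A minimal sketch, where the eps guard against near-zero frequencies is our own addition:

```python
import numpy as np

def inverse_filter(distorted, psf, eps=1e-3):
    """Recover an image from its blurred version by dividing by the
    PSF's frequency response; frequencies where |H| is near zero are
    clamped to eps so noise is not blown up."""
    H = np.fft.fft2(psf, s=distorted.shape)
    G = np.fft.fft2(distorted)
    F = G / np.where(np.abs(H) < eps, eps, H)
    return np.real(np.fft.ifft2(F))
```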
Problem 3: Quantization
In this problem we quantize images to a smaller number of gray levels and measure their degradation.
3.1. Open p3.py. Write the rest of the quantize function. Quantize mobile1.jpg and mobile2.jpg with num_levels=5. Compute sum_of_absolutes between each image and its corresponding quantized image. Report the results and explain why the two values are different. Save the quantized images in files named mobile1_q.jpg and mobile2_q.jpg.
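A sketch of one common form of uniform quantization (reconstruction at bin centers; the quantize in p3.py may differ in details), together with a possible sum_of_absolutes:

```python
import numpy as np

def quantize(img, num_levels):
    """Quantize an 8-bit image to num_levels evenly spaced gray
    levels, reconstructing each pixel at the center of its bin."""
    step = 256 / num_levels
    levels = (img // step).astype(np.int64)      # bin index 0..num_levels-1
    centers = (levels + 0.5) * step              # bin-center values
    return np.clip(np.round(centers), 0, 255).astype(np.uint8)

def sum_of_absolutes(m1, m2):
    """Sum of absolute differences between two images."""
    return np.abs(m1.astype(np.int64) - m2.astype(np.int64)).sum()
```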
3.2. Design an experiment for testing whether the most significant bit of the 8-bit picture hnd.jpg is more important or the least significant one. Produce some output data, then analyze and explain it. Quantize the image to its 7 most important bits (according to your experiment’s result) and save it into a variable named hnd_q. Report the average sum of absolute differences between the original image and the quantized one. Save the quantized image in a file named hnd_q.jpg.
Problem 4: Comparison of two images
In this problem we write a program that solves the ‘spot the difference’ game.
4.1. Open p4.py. Load wiki.jpg into a variable named img. Split it into two variables img_l and img_r, filled with the left and right halves of the original image. Now find a way to mark the differences between these images. The output must be a file named wiki_differences.jpg, which contains the left half of the original image (img_l) with some markers around the differences to help us see them.
21. 10/6/2019 hw3_practical - Google Docs
https://docs.google.com/document/d/1mGFJnykAkaKwbdnM6xXCmwXTIneMBy5SZKW8B7MatF4/edit 2/6
Please note:
● In this document:
○ File and folder names are highlighted in yellow. Examples: p1.py, folder
○ Programming objects like function, variable and package names, and generally strings that the Python interpreter will recognise if they are given to it, are highlighted in light green. Examples: scipy.ndimage, some_variable
● Code in p1.py, p2.py and p3.py for problems 1 to 3. You can add additional Python files if needed. You can also create Jupyter notebooks with similar names and use them instead.
● Save your final (and intermediate, if useful) results to the folder results. For choosing result files’ names, use the names of the variables in your code that you want to save.
● In all of the problems, have numbered sections in your report, briefly explain the problem and your solution, and show and justify your results.
If you have any questions, please email me and ask.
Nader Karimi
email: nader.karimi.b@gmail.com
Problem 1: Morphological Image Processing Functions
In this problem, we use morphological image processing functions to understand what
they can do for us. Using them we can find edges, skeletons of shapes, search for
patterns and even sometimes remove the background from an image!
1.1 Load circles.png into a variable named circles. Find the boundary of circles by using morphological functions of OpenCV. You cannot use cv.MORPH_GRADIENT; instead, combine other functions to find the result. Load se_rad1.npy and se_rad4.npy, which are structuring elements with radii 1 and 4, from the folder resources/numpy_files, use them, and save the result of using each. Check this link if you don’t know how to load a .npy file into a numpy array.
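The idea of 1.1 (a boundary is the image minus its erosion) can be sketched as follows; scipy.ndimage is used here only to keep the example self-contained, while the assignment itself asks for OpenCV’s functions and the provided .npy structuring elements:

```python
import numpy as np
from scipy import ndimage

# Boundary extraction without a gradient operator: the (inner)
# boundary is the set of foreground pixels removed by erosion.
img = np.zeros((9, 9), dtype=bool)
img[2:7, 2:7] = True                        # a filled 5x5 square

se = np.ones((3, 3), dtype=bool)            # toy stand-in for se_rad1.npy
boundary = img & ~ndimage.binary_erosion(img, structure=se)
```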
1.2 Find the skeleton of circles using skeletonize from the package skimage.morphology.
1.3 Load chess.npy into a variable named chess. It contains an image of a half chessboard. Perform the hit-or-miss transform on chess. se_hit and se_miss are the two structuring elements defined below; se_hit must hit the pattern and se_miss must miss it. You can use the ndimage.binary_hit_or_miss function from the scipy package. How should the structuring elements be designed in order to detect the top-left corner of all the white squares?
se_hit:            se_miss:
[                  [
  [0, 0, 0],         [0, 1, 1],
  [1, 1, 0],         [0, 0, 1],
  [0, 1, 0],         [0, 0, 0],
]                  ]
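On a tiny hand-made pattern, the transform with these two structuring elements can be sketched like this (chess.npy is, of course, the real input):

```python
import numpy as np
from scipy import ndimage

# The two structuring elements from 1.3.
se_hit = np.array([[0, 0, 0],
                   [1, 1, 0],
                   [0, 1, 0]])
se_miss = np.array([[0, 1, 1],
                    [0, 0, 1],
                    [0, 0, 0]])

chess = np.zeros((6, 6), dtype=bool)
chess[2:5, 0:3] = True   # a single white square

# se_hit must fit the foreground and se_miss must fit the background.
hits = ndimage.binary_hit_or_miss(chess, structure1=se_hit, structure2=se_miss)
```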
1.4 Load digits.png into a variable named digits. This image has a noisy background with varying illumination, so a little preprocessing before the main operations won’t hurt.
Estimate the background of digits by morphological functions and store the result in a variable named bg. Use bg to remove digits’s background. Then apply thresholding to extract the digits in binary matrix form. In my case, there is something like salt-and-pepper noise in the image at this level. If you see that noise too, use median_filter from scipy.ndimage, or other effective morphological functions, to reduce it. Next, try to fix gaps or connected symbols in the digits, if there are any. The file digits_sample_solution.png contains the result of our solution for this problem. Try to be at least as good as that.
Problem 2: Hough Transform
Detecting lines, circles and other primitive shapes in an image is important in image processing, and it naturally appears as part of the solution of many real-world problems. This problem is based on the tutorial at this link. We want to use the Hough transform to detect lines. We add adaptive histogram equalization as a preprocessing step.
2.1 Load sudoku.png into a variable named img. Apply OpenCV’s equalizeHist function and CLAHE (Contrast Limited Adaptive Histogram Equalization) to img separately and report the results. Store the result of CLAHE into a variable named img_he. There is a tutorial on this at this link.
2.2 Use cv2.Canny to extract img_he’s edges. Check this link about the Canny edge detector. Like the code in the link, you must have a trackbar that can change the threshold values of the Canny detector. Using the trackbar, find the value which makes the edges look best. We want most of the edges of the Sudoku table to be visible after thresholding. Create a variable named best_low_threshold and store the best value found in it. Mention this value in your report. Then use cv2.Canny with best_low_threshold and store the result in a variable named edges. Save this into a file.
2.3 Use cv2.HoughLines and cv2.HoughLinesP to find the lines in edges and save the results.
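cv2.HoughLines does the (ρ, θ) voting and peak-finding internally; the accumulator it builds can be sketched in plain numpy to see what is being computed:

```python
import numpy as np

def hough_accumulator(edges, num_thetas=180):
    """Minimal (rho, theta) Hough accumulator for a binary edge image:
    every edge pixel votes, for each theta, for the line
    x*cos(theta) + y*sin(theta) = rho passing through it."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))                  # max possible |rho|
    thetas = np.deg2rad(np.arange(num_thetas))
    acc = np.zeros((2 * diag, num_thetas), dtype=np.int64)
    for y, x in zip(*np.nonzero(edges)):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(num_thetas)] += 1     # one vote per theta
    return acc, diag
```

Peaks in the accumulator correspond to detected lines; cv2.HoughLinesP adds a probabilistic variant that returns line segments instead of (ρ, θ) pairs.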
Problem 3: Image Smoothing
In this problem we learn how to add noise to images and how to denoise noisy ones. You
will work with three types of noise which are typical in image processing. The denoising
methods that you will use are five major denoising functions from three different
libraries.
3.1 Load fantasy.jpg into a variable named img. Replace the value of img with a resized one. The new size must be one third of the original in each dimension.
Use the Image module from the Pillow library for loading, saving and resizing in all of problem 3’s codes. Pillow is an image processing library, while OpenCV is said to be a computer vision library. Sometimes Pillow is simpler for implementing some image processing tasks.
3.2 Use the package skimage.util to add noise to img, as the table below describes.
Noise type          Variable name   Parameters
additive Gaussian   img_ga          mean = 0, var = 0.0256
speckle             img_mult        mean = 0, var = 0.6
salt and pepper     img_sp          portion of pixels that must become noise = 0.06,
                                    salt portion = 0.55, pepper portion = 0.45
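skimage.util.random_noise covers all three noise types (for example random_noise(img, mode='gaussian', var=0.0256)); as an illustration of what the Gaussian parameters mean, the img_ga row can be sketched in plain numpy on a float image in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((64, 64))                     # float image in [0, 1]

# Additive Gaussian noise with mean 0 and variance 0.0256
# (i.e. standard deviation 0.16), matching the img_ga row.
noise = rng.normal(loc=0.0, scale=np.sqrt(0.0256), size=img.shape)
img_ga = np.clip(img + noise, 0.0, 1.0)        # stay in the valid range
```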
3.3 Denoise the variables in the above table. The next table describes the variables that you must create and save to disk. We have five different functions for denoising, so given three input noisy images, you will have 15 results. In your report:
1. Conclude from your experiments which method is most suitable for each type of noise.
2. Measure the denoising time for each of your 15 results. This means you have to find and use a library that can work with the system’s time. Use a table to report your time measurements. Does the type of noise have any effect on the denoising time? If yes, try to justify it. If no, no further discussion is required.
Search for the best parameters. As this is a time-consuming task, pick a reasonable set of guess values (you can use the libraries’ default values to build your guess list) and find which one gives a visually better result. I recommend using a trackbar for testing the guess set. Report the parameters you found and use them for generating the required files.
Denoising method                                  Variable name                       File name
                                                  (name of noisy variable + ‘_’       (variable name + ‘.jpg’)
                                                  + short name of denoising method)
denoise_tv_chambolle (from skimage.restoration)   … + ‘tvc’ (example: img_ga_tvc)     example: img_ga_tvc.jpg
denoise_bilateral (from skimage.restoration)      … + ‘bil’ (example: img_mult_bil)   example: img_mult_bil.jpg
denoise_wavelet (from skimage.restoration)        … + ‘wav’
median_filter (from scipy.ndimage)                … + ‘med’
fastNlMeansDenoisingColored (from cv2)            … + ‘cvd’
27. 10/6/2019 hw4_practical - Google Docs
https://docs.google.com/document/d/1IhvTrjDZn5v3E9d0-Dm2daSEMTNREVMwwLSreiQdOXE/edit 2/3
Please note: Most of the score of this assignment is based on your report file. Try to cover
everything the problems ask you to. Include and justify the results for each problem.
If you have any questions, please email me and ask.
Nader Karimi
email: nader.karimi.b@gmail.com
This assignment is about Image Style Transfer and Google’s Deep Dream. The main sources for
answering the questions are the file ‘Style Transfer Using CNNs.pdf’ in your assignment folder and
this Google AI Blog post.
[+0.1] 1. Write one or two paragraphs about what Deep Dream is and how it works. Explain the
problem statement and its solution. You don’t need to include math. Try to be smooth and clear.
[+0.1] 2. Get the code of Neural Transfer from this link. Find a pair of new input images on the internet and run the code. Save and report your results. Try to get an acceptable output here, since this algorithm is not suited to combining every pair of images.
[+0.1] 3. Explain why Gram matrix works as a good measure for capturing the style of an image.
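For reference while answering, the Gram matrix of one layer’s features can be sketched as follows, assuming a (channels, height, width) layout: it collects channel-to-channel inner products, keeping feature co-occurrence statistics while discarding spatial arrangement.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature tensor:
    G[i, j] is the inner product of channels i and j over all spatial
    positions, normalized by the number of positions."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)
```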
[+0.2] 4. Find a creative and new way of generating images using fixed, pre-trained Neural Networks as feature extractors. You don’t need to implement your idea, but you have to justify it and state why you think it works.
[+0.5] 5. Find a new function of a Neural Network layer’s features that is suitable for style extraction. Implement this function and produce outputs using the images that you downloaded for problem 2. Report and justify your results.