Neural Networks. Deep Learning. TensorFlow. What are these buzzwords? What is the latest Artificial Intelligence craze? This advanced session contains cutting-edge information not easily found online, and does not require a PhD in Machine Learning to understand. Recurrent Neural Networks (RNNs) can be used to generate text that looks like its original training data. There are many articles out there that show the hilarious end results of such adventures, but start-from-scratch walkthroughs that show the raw code are hard to come by.
6. Make it easy to play with AI
• Easy to set up
• Easy to use
• Easy access to Docker Hub with full guidance
• Connects to a chat bot built with Node.js
• Menu | Help | Small talk
7. Run and train RNN
To Run
To pull Sarah’s pre-trained Docker snapshot and avoid waiting 8 hours for training, type:
docker pull saelia/rnn-js
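To start a container from that image, something like the following should work (a sketch only: the image’s entrypoint is not documented in these slides, so treat it as an assumption):
# Assumption: the image drops you into an interactive session with the RNN tooling installed
docker run -it saelia/rnn-js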
To Train
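The sampling command below uses torch-rnn-style flags, so a training run would look roughly like the following (a sketch assuming the torch-rnn project layout; the tiny-shakespeare filenames are placeholders, not from the original slides):
# Preprocess a plain text file into the HDF5/JSON format torch-rnn expects
python scripts/preprocess.py --input_txt data/tiny-shakespeare.txt --output_h5 data/tiny-shakespeare.h5 --output_json data/tiny-shakespeare.json
# Train on the CPU (-gpu -1); checkpoints are written into the cv folder as cv/checkpoint_*.t7
th train.lua -input_h5 data/tiny-shakespeare.h5 -input_json data/tiny-shakespeare.json -gpu -1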
To Sample
Once training has written checkpoints, the way to actually make the RNN generate new Shakespeare-style text is with the sampling script:
th sample.lua -gpu -1 -checkpoint cv/checkpoint_12900.t7 -length 150 -temperature .7
GPU: Setting the -gpu flag to -1 tells the code to run on the CPU; otherwise it defaults to GPU 0.
Checkpoints: While the model is training, it will periodically write checkpoint files to the cv folder. The frequency with which these checkpoints are written is controlled by the number of iterations, specified with the eval_val_every option. (E.g., if this is 1, then a checkpoint is written every iteration.)
Length: An important flag is -length, which sets how many characters to generate; -length 100 would generate a body of text 100 characters long. The default is 2000.
Temperature: An important parameter you may want to play with is -temperature, which takes a number in the range (0, 1] (0 excluded); the default is 1. Lower temperatures will cause the model to make more “likely” but more boring and conservative predictions. Higher temperatures cause the model to take more chances and increase the diversity of results, but at a cost of more mistakes.
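For example, to compare a conservative run with an adventurous one against the same checkpoint (reusing the checkpoint file named above):
# Conservative: clearer English, more repetition
th sample.lua -gpu -1 -checkpoint cv/checkpoint_12900.t7 -length 150 -temperature 0.2
# Adventurous: more novelty, more misspellings
th sample.lua -gpu -1 -checkpoint cv/checkpoint_12900.t7 -length 150 -temperature 0.9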
10. Superheroes Designed by Neural Network
Speet Stank
Red Fart
Mister Man
Rad Food
Sapgirl
Woop
Ann Man
Boomss
Boark II
Supperman
Superbore
Slonk
Lid Man
Green Hooter II
Starm Surper
Shartar
Goons
Nana
Rider Farm
Captain In
Redink
Wolver Man
Wizler
http://aiweirdness.com/post/140829108357/superheroes-designed-by-neural-network
12. Recipes at your own risk!
http://aiweirdness.com/post/163878889437/try-these-neural-network-generated-recipes-at-your
13. Craft beer names, invented by neural network
IPAs
• Dang River
• Yamquak
• Bigly Bomb Session IPA
• Binglezard Flack
• Earth 2 Sanebus
• Tower Of Ergelon
• Juicy Dripple IPA
• Wicked Geee
• Yampy
• Widee Banger Fripper IPA
Strong Pale Ales
• The Great Rebelgion
• Thick Back
• The Fraggerbar
• Dankering
• Third Maus
• Sip’s The Stunks Belgian
• Slambertangeriss
• Devil’s Chard
• Spore Of Gold
• The Oldumbrett’s Ring
• Gunder Of Traz
• Cherry Boof Cornester
• Humple Bobstore Barrel Aged
Amber Ales
• Snarging Red
• Warmel Halce’s Comp Ale
• Fire Pipe
• Blangelfest
• Stoodemfest
• Ole Blood Whisk
• Frog Trail Ale
• Ricias Donkey Brain
• Sacky Rover
• Gate Rooster
• Cramberhand
• O’Brien Irish Red
• River Smush Hoppy Amber Ale
• Rivernillion Amber
• Special North Imperial Red
• Ambre O’Woo’s Omella Imperial Red Ale
Stouts
• The Moon
• The Bopberry Stout
• Cherry Coconut Mint Chocolate Stout
• Black Morning
• Sir Coffee
• Shock State
• Take Bean
• Single Horde
• Whata Stout
• Shany Lace
• Barrel Aged Chocolate Milksmoke
• Shump
http://aiweirdness.com/post/163753995072/craft-beer-names-invented-by-neural-network
14. Harry Potter and the difference between word-level and character-level RNN
http://aiweirdness.com/post/164291045392/harry-potter-and-the-word-level-recurrent-neural
15. A character-by-character, or “char” model takes one text file as input, and trains an RNN to predict the next character in a sequence. The RNN can then be used to generate text character by character that will look like the original training data.
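In other words (a standard formulation, not spelled out in the original slides), the model learns a probability distribution over the next character given everything it has seen so far, and generation repeatedly samples a character from that distribution and feeds it back in as input:

P(x_{t+1} = c | x_1, x_2, \ldots, x_t), for each character c in the vocabulary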
16. New paint colors invented by neural network
http://aiweirdness.com/post/160985569682/paint-colors-designed-by-neural-network-part-2
17. Temperature
• The -temperature flag makes the most difference. (It expects a number between 0 and 1.)
• It changes the novelty and noise in the system, creating dramatically different output.
• Lower temperatures (e.g., 0.2) make the RNN more confident but more conservative: less noise, but less novel results.
• Using -temperature 0.2 gives clear English, but includes a lot of repeated words.
• Higher temperatures make more interesting/novel output, but more nonsense and misspelled words.
• Everything is a trade-off.
• Experiment with all settings.
20. • There are lots of things that affect how well the algorithm does. Temperature adjusts:
• whether the RNN always picks the most likely next character as it’s generating text, or whether it will go with something farther down the list (see the formula below).
• Setting the temperature higher or lower can make the algorithm produce much better output.
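Concretely (a standard char-RNN detail, not spelled out in these slides), temperature T rescales the network’s output scores z_i before the softmax that picks the next character:

p_i = exp(z_i / T) / \sum_j exp(z_j / T)

As T approaches 0, the most likely character dominates (confident, repetitive text); a larger T flattens the distribution toward uniform (diverse but error-prone text).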
Temperature 0.7 (my favorite)
There are lots of things that affect how well the algorithm does. One simple change turns out to be the “temperature” (think: creativity) variable, which adjusts whether the neural network always picks the most likely next character as it’s generating text, or whether it will go with something farther down the list. I had the temperature originally set pretty high, but it turns out that when I turn it down ever so slightly, the algorithm does a lot better. Not only do the names better match the colors, but it begins to reproduce color gradients that must have been in the original dataset all along. Colors tend to be grouped together in these gradients, so it shifts gradually from greens to browns to blues to yellows, etc. and does eventually cover the rainbow, not just beige.