This document summarizes how deep learning can be applied in information retrieval systems. Its central idea is to use neural networks such as convolutional neural networks (CNNs) as trainable feature extractors that encode queries, sentences, and images into fixed-length vectors; these vectors are then compared with similarity functions to retrieve relevant results from a database. Specific examples include classifying and retrieving sentences with CNNs and recurrent neural networks, and training CNNs on large datasets to learn image feature vectors for image retrieval.
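The retrieval pattern the summary describes can be sketched in a few lines: encode each item as a fixed-length vector, then rank items by a similarity function against the query vector. The encoder itself is out of scope here, so the vectors and document IDs below are made-up stand-ins for encoder output.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve(query_vec, index, top_k=2):
    """Rank indexed items by similarity to the query vector."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda x: -x[1])[:top_k]

# Toy index: doc id -> fixed-length feature vector (stand-ins for encoder output).
index = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.8, 0.1],
    "doc_c": [0.0, 0.2, 0.9],
}
print(retrieve([1.0, 0.0, 0.1], index))  # doc_a ranks first
```

In a real system the index would hold millions of vectors and use an approximate nearest-neighbor structure, but the scoring logic is the same.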
Pycon2016 – Applying Deep Learning in Information Retrieval System
11. Feature Vector Representation
A neural network acts as a trainable feature extractor: it maps a raw input to a fixed-length feature-vector representation (Bengio, 2014, Representation Learning: A Review and New Perspectives; figure from http://deeplearning4j.org/convolutionalnets).
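As a concrete illustration of a CNN-style feature extractor, the sketch below runs 1-D convolution filters over token embeddings and global-max-pools each filter, so any-length input collapses to a vector with one entry per filter. All sizes and the random embeddings are illustrative assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_encode(tokens_emb, filters):
    """Slide each filter over the token embeddings and global-max-pool,
    yielding one number per filter -> a fixed-length sentence vector."""
    seq_len, dim = tokens_emb.shape
    n_filt, width, _ = filters.shape
    out = np.full(n_filt, -np.inf)
    for f in range(n_filt):
        for start in range(seq_len - width + 1):
            window = tokens_emb[start:start + width]
            out[f] = max(out[f], np.tanh(np.sum(window * filters[f])))
    return out

# Hypothetical sizes: 6 tokens, 4-dim embeddings, 3 filters of width 2.
sentence = rng.normal(size=(6, 4))
filters = rng.normal(size=(3, 2, 4))
vec = conv1d_encode(sentence, filters)
print(vec.shape)  # (3,) -- fixed length regardless of sentence length
```

The max-pooling step is what makes the output length independent of the input length, which is exactly the property retrieval needs before comparing vectors.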
15. Sentence Retrieval
Experiment:
• 5 sentence classes
• Training set: 80%
• Validation set: 20%
Results:
• Training accuracy: above 99%
• Validation accuracy: above 99%
Feature vectors visualized with PCA and t-SNE.
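The PCA half of that visualization can be reproduced with a few lines of linear algebra: center the feature vectors and project them onto the top principal components. The 8-dim "sentence feature vectors" below are two synthetic clusters, standing in for real encoder output.

```python
import numpy as np

rng = np.random.default_rng(2)

def pca_project(X, k=2):
    """Project feature vectors onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]  # top-k eigenvectors as columns
    return Xc @ top

# Two made-up clusters of 8-dim feature vectors (20 points each).
X = np.vstack([rng.normal(0.0, 0.1, size=(20, 8)),
               rng.normal(2.0, 0.1, size=(20, 8))])
Y = pca_project(X, k=2)
print(Y.shape)  # (40, 2) -- 2-D coordinates ready to scatter-plot
```

If the classes are well separated in feature space, as the slide's accuracy numbers suggest, they stay visibly separated after this 2-D projection; t-SNE does the same job nonlinearly.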
16. Sentence Retrieval
Sentences can also be encoded with a recurrent neural network (Kyunghyun Cho et al., 2014, Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation).
Figure: https://devblogs.nvidia.com/parallelforall/introduction-neural-machine-translation-gpus-part-2/
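The encoder half of that encoder-decoder can be sketched as follows. Note the simplification: Cho et al. use gated (GRU) units, while this sketch uses a plain tanh RNN; the weights and sizes are arbitrary, and only the shape behavior — any-length sequence in, fixed-length hidden state out — is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_encode(tokens_emb, Wx, Wh, b):
    """Run a plain tanh RNN over the token embeddings; the final hidden
    state serves as the fixed-length sentence representation."""
    h = np.zeros(Wh.shape[0])
    for x in tokens_emb:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

dim, hid = 4, 5                                # illustrative sizes
Wx = rng.normal(scale=0.5, size=(hid, dim))
Wh = rng.normal(scale=0.5, size=(hid, hid))
b = np.zeros(hid)
h = rnn_encode(rng.normal(size=(7, dim)), Wx, Wh, b)
print(h.shape)  # (5,) for a 7-token sentence
```

Unlike the convolution-plus-pooling encoder, the RNN consumes tokens in order, so word order influences the final vector.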
17. Skip-Thought Vectors
Encode a sentence into a thought vector h_i by predicting its neighbor sentences (Ryan Kiros et al., 2015, Skip-Thought Vectors, http://arxiv.org/pdf/1506.06726v1.pdf).
Example: "… I just got back home. I could see the cat on the steps. This was strange. …"
21. Limitations
Skip-Thought Vectors:
• Require a huge corpus — the model was trained on 11,051 novels with 17,515,150 sentences. Compare the near-identical training contexts: "… I just got back home. I could see the cat on the steps. This was strange. …" / "… I got back to office. I could see the cat on the steps. This was cool. …"
• Scenario dependency — in the example table (omitted here), different query sentences all received the same skip-thought similarity score (0.697), making the ranking uninformative.
24. Image Retrieval
Two-step training pipeline (Kevin Lin et al., 2015, Deep Learning of Binary Hash Codes for Fast Image Retrieval):
• Step 1: train a CNN (e.g., AlexNet or VGG) on various targets.
• Step 2: copy the trained CNN, insert a hashing-vector layer before the FC + softmax layer, and fine-tune on task-specific targets.
• Application: after input-image pre-processing, the copied network encodes each image; the hashing-vector layer output serves as a compact binary code, and the preceding layer output as the feature vector.
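Once the network is trained, retrieval with binary hash codes reduces to thresholding the hashing-layer activations and comparing codes by Hamming distance. The latent activations below are made-up numbers standing in for real network output, and the 0.5 threshold follows the common convention for sigmoid-like activations.

```python
import numpy as np

def to_binary_code(latent_activations, threshold=0.5):
    """Binarize hashing-layer activations into a compact binary code."""
    return (np.asarray(latent_activations) > threshold).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.sum(a != b))

# Made-up hashing-layer activations for three indexed images and one query.
index = {
    "img_a": to_binary_code([0.9, 0.2, 0.8, 0.1]),
    "img_b": to_binary_code([0.1, 0.9, 0.7, 0.6]),
    "img_c": to_binary_code([0.8, 0.3, 0.9, 0.7]),
}
query = to_binary_code([0.7, 0.1, 0.6, 0.3])
ranked = sorted(index, key=lambda k: hamming(query, index[k]))
print(ranked[0])  # img_a -- identical 4-bit code, Hamming distance 0
```

In the two-stage scheme of Lin et al., this cheap Hamming-distance search produces a coarse candidate list, which is then re-ranked with the full real-valued feature vectors.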
25. Image Retrieval
Kevin Lin et al., 2015, Deep Learning of Binary Hash Codes for Fast Image Retrieval
Demo: http://tweddielin.pythonanywhere.com/
GitHub: https://github.com/tweddielin/flask-imsearch
26. Conclusion
1. Deep Learning => Trainable Feature Extractor
2. Type of Neural Nets: MLP, CNN, RNN
3.
• http://www.slideshare.net/tw_dsconf/ss-62245351
• http://speech.ee.ntu.edu.tw/~tlkagk/courses_MLSD15_2.html