Using GANs to improve generalization in a semi-supervised setting (PyData)
In many practical machine learning classification applications, the training data for one or all of the classes may be limited. We will examine how semi-supervised learning using Generative Adversarial Networks (GANs) can be used to improve generalization in these settings. The full approach, from training to model deployment, will be demonstrated using AWS Lambda and/or AWS SageMaker.
5. Outline
● Short Intro
● What are GANs
● How GANs help in a semi-supervised setup
● Using Colaboratory to train a GAN for semi-supervised learning
● Options for putting the model in production
14. Conceptual representation of a GAN
Source : https://stats.stackexchange.com/questions/277756/some-general-questions-on-generative-adversarial-networks
15. 1D example of GAN
Source : https://ireneli.eu/2016/11/17/deep-learning-13-understanding-generative-adversarial-network/
16. GANs in one image
Source : https://news.developer.nvidia.com/edit-photos-with-gans
17. Types of GANs
Source : https://becominghuman.ai/unsupervised-image-to-image-translation-with-generative-adversarial-networks-adb3259b11b9
18. Limitations of GANs (1/3)
● Mode Collapse: Generator does not generate the full range of plausible samples
● More likely if the generator is optimized while the discriminator is kept constant for many iterations
19. Limitations of GANs (2/3)
● GANs predict the entire sample (e.g. image) at once
● Difficult to predict pixel neighborhoods
20. Limitations of GANs (3/3)
● Difficult training process / convergence: Generator and discriminator keep oscillating
● Network does not converge to a stable, (near) optimal solution
22. Why use GANs in a semi-supervised setting?
● Create a more diverse set of unlabeled data
● Get a better / more descriptive decision boundary
● Improve generalization when the amount of training samples is small
23. Influence of unlabeled data in semi-supervised learning
● Continuity assumption
● Cluster assumption
● Manifold assumption
24. Example of GAN in a semi-supervised learning role
Source : https://towardsdatascience.com/semi-supervised-learning-with-gans-9f3cb128c5e
25. Sources of information and their role for the discriminator
● Real samples that have labels, similar to normal supervised learning
● Real samples that do not have labels. For those, the discriminator only learns that these images are real
● Samples coming from the generator. For those, the discriminator learns to classify as fake
26. What do we have to modify to make it work?
● Adjust the loss functions so that we tackle both problems:
○ for the original GAN task, use binary cross-entropy
○ for the multiclass classification task, use softmax cross-entropy (or sparse cross-entropy with logits)
○ apply masks to ignore the labels not seen in the respective subtask
○ take the average of the GAN and supervised losses
● The discriminator can use the generator's images, as well as the labeled and unlabeled training data, to classify the dataset
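A minimal NumPy sketch of this combined discriminator loss, assuming the discriminator outputs per-class logits plus a separate real/fake probability (the function and variable names here are illustrative, not from the talk):

```python
import numpy as np

def softmax_xent(logits, labels):
    # sparse softmax cross-entropy, one value per row
    m = logits.max(axis=1, keepdims=True)
    lse = m.squeeze(1) + np.log(np.exp(logits - m).sum(axis=1))
    return lse - logits[np.arange(len(labels)), labels]

def binary_xent(p_real, is_real):
    # binary cross-entropy of the discriminator's "real" probability
    return np.where(is_real, -np.log(p_real), -np.log(1.0 - p_real))

def discriminator_loss(class_logits, p_real, labels, is_real):
    """labels: -1 marks unlabeled or generated samples (masked out of the
    supervised term); is_real: False for generator samples."""
    gan_loss = binary_xent(p_real, is_real).mean()
    mask = labels >= 0  # supervised term only sees labeled real samples
    sup_loss = softmax_xent(class_logits[mask], labels[mask]).mean() if mask.any() else 0.0
    # the slide averages the GAN and supervised losses
    return 0.5 * (gan_loss + sup_loss)
```

Unlabeled and generated samples carry `labels = -1`, so they contribute only to the GAN term; the two losses are then averaged as described above.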
27. Improving the GAN part
● Training GANs is a challenging problem -> feature matching approach
a. take the moments of some features for a "real" minibatch
b. take the moments of the same features for a "fake" minibatch
c. minimize the mean absolute error between the two
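Sketched in NumPy, matching the statistics of real and fake minibatches (here only the first moment, the batch mean) might look like:

```python
import numpy as np

def feature_matching_loss(real_features, fake_features):
    """Mean absolute error between the batch means of intermediate
    discriminator features on a real vs. a generated minibatch."""
    real_mean = real_features.mean(axis=0)  # first moment over the "real" batch
    fake_mean = fake_features.mean(axis=0)  # first moment over the "fake" batch
    return np.abs(real_mean - fake_mean).mean()
```

When the generator's feature statistics match the real data's, this loss goes to zero; in practice the features come from an intermediate layer of the discriminator.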
30. Classifieds - Data Science areas of Application
● Spam can be a problem in open platforms like classifieds
● It's annoying and can also be used as a vehicle for fraud
● How can we keep bad users out without annoying good users with stringent policies?
31. Why is it a challenge?
● It’s a recurring battle, with spammers becoming more inventive to circumvent the algorithms
● We may have only a few annotated examples for certain types of bad content
33. Results
● In the presented example we only have a few tens of instances for a certain type of spam
● We are able to quickly explore the input space and map it to a latent feature space combining the different types of signal we have available from text, image, user behaviour, etc.
● Semi-supervised learning with GANs does better (AUC = 0.999) than Random Forest (AUC = 0.997)
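For reference, AUC figures like these can be computed from raw classifier scores with the Mann-Whitney rank-sum identity; this is a plain NumPy sketch (assuming no tied scores), not the evaluation code from the talk:

```python
import numpy as np

def roc_auc(scores, labels):
    """ROC AUC via the rank-sum identity (assumes no tied scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    pos = np.asarray(labels) == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # probability that a random positive is scored above a random negative
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 1.0 means perfect ranking of spam above non-spam, so the gap between 0.999 and 0.997 is small in absolute terms but meaningful near the ceiling.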
40. Resources
● Using Chalice to serve SageMaker predictions
○ https://medium.com/@julsimon/using-chalice-to-serve-sagemaker-predictions-a2015c02b033
● SageMaker examples
○ https://github.com/awslabs/amazon-sagemaker-examples
● How to Deploy Deep Learning Models with AWS Lambda and Tensorflow
○ https://aws.amazon.com/blogs/machine-learning/how-to-deploy-deep-learning-models-with-aws-lambda-and-tensorflow/
41. References
● Getting started with ML (Google)
○ https://drive.google.com/file/d/0B0phzgXZajPFZFRYNFRicjlZTVk/view
● Semi-supervised learning with Generative Adversarial Networks (GANs)
○ https://towardsdatascience.com/semi-supervised-learning-with-gans-9f3cb128c5e
● CNNs, three things you need to know
○ https://www.mathworks.com/solutions/deep-learning/convolutional-neural-network.html
● Semi-supervised learning
○ https://en.wikipedia.org/wiki/Semi-supervised_learning
● Variants of GANs
○ https://www.slideshare.net/thinkingfactory/variants-of-gans-jaejun-yoo
● From GAN to WGAN
○ https://lilianweng.github.io/lil-log/2017/08/20/from-GAN-to-WGAN.html