Self-supervised learning (SSL) is one of the fastest-growing research topics of recent years. SSL provides algorithms that learn visual representations directly from the data itself rather than from manual human labels. From a theoretical point of view, SSL explores information theory and the nature of large-scale datasets.
Lecture slides in DASI spring 2018, National Cheng Kung University, Taiwan. The content is about deep reinforcement learning: policy gradient including variance reduction and importance sampling
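The variance-reduction idea in these slides can be sketched in a few lines of NumPy: a REINFORCE-style update on a toy two-armed bandit, with a running mean of rewards subtracted as a baseline. The bandit payoffs, learning rate, and baseline decay below are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-armed bandit: arm 1 pays more on average than arm 0.
def pull(arm):
    return rng.normal(1.0 if arm == 1 else 0.2, 0.1)

theta = np.zeros(2)            # logits of a softmax policy over the two arms
baseline, lr, beta = 0.0, 0.1, 0.9

for _ in range(2000):
    probs = np.exp(theta) / np.exp(theta).sum()
    arm = rng.choice(2, p=probs)
    reward = pull(arm)
    # Subtracting a baseline (running mean of rewards) reduces the variance
    # of the gradient estimate without biasing it.
    advantage = reward - baseline
    baseline = beta * baseline + (1 - beta) * reward
    grad_logp = -probs
    grad_logp[arm] += 1.0      # gradient of log softmax(theta)[arm]
    theta += lr * advantage * grad_logp

probs = np.exp(theta) / np.exp(theta).sum()
```

After training, the policy concentrates almost all of its probability mass on the better arm.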
http://imatge-upc.github.io/telecombcn-2016-dlcv/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
Part 2 of the Deep Learning Fundamentals Series, this session discusses Tuning Training (including hyperparameters, overfitting/underfitting), Training Algorithms (including different learning rates, backpropagation), Optimization (including stochastic gradient descent, momentum, Nesterov Accelerated Gradient, RMSprop, Adaptive algorithms - Adam, Adadelta, etc.), and a primer on Convolutional Neural Networks. The demos included in these slides are running on Keras with TensorFlow backend on Databricks.
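The optimizer families listed above can be illustrated on a one-dimensional quadratic. The sketch below shows the update rules for SGD with momentum, RMSprop, and Adam in plain NumPy; the toy cost function, step sizes, and iteration counts are assumptions chosen for illustration, not the session's Keras demos.

```python
import numpy as np

def grad(w):                       # gradient of the toy cost f(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

# SGD with momentum: a velocity term accumulates past gradients.
w, v = 0.0, 0.0
for _ in range(200):
    v = 0.9 * v - 0.1 * grad(w)
    w += v
w_momentum = w

# RMSprop: per-parameter step sizes scaled by a running gradient magnitude.
w, s = 0.0, 0.0
for _ in range(500):
    g = grad(w)
    s = 0.9 * s + 0.1 * g * g
    w -= 0.01 * g / (np.sqrt(s) + 1e-8)
w_rmsprop = w

# Adam: momentum plus RMSprop-style scaling, with bias correction.
w, m, v2 = 0.0, 0.0, 0.0
for t in range(1, 301):
    g = grad(w)
    m = 0.9 * m + 0.1 * g
    v2 = 0.999 * v2 + 0.001 * g * g
    m_hat = m / (1 - 0.9 ** t)     # correct the zero-initialization bias
    v_hat = v2 / (1 - 0.999 ** t)
    w -= 0.05 * m_hat / (np.sqrt(v_hat) + 1e-8)
w_adam = w
```

All three variants approach the minimum at w = 3, but with different trajectories: momentum overshoots and settles, while the adaptive methods take roughly constant-size steps until they arrive.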
The Transformer is an established architecture in natural language processing that utilizes a self-attention framework within a deep learning approach.
This presentation was delivered under the mentorship of Mr. Mukunthan Tharmakulasingam (University of Surrey, UK), as a part of the ScholarX program from Sustainable Education Foundation.
Reinforcement Learning In AI Powerpoint Presentation Slide Templates Complete... - SlideTeam
Showcase how machines are built to perform intelligent tasks by using our content-ready Reinforcement Learning In AI PowerPoint Presentation Slide Templates Complete Deck. Take advantage of these artificial intelligence PowerPoint visuals, and describe how machine learning models are trained to make sequences of decisions in a complex environment. Showcase the types of artificial intelligence, such as deep learning and machine learning. Explain the concept of machine learning, which delivers predictive models based on the data fed into machine learning algorithms. Take the assistance of our visually attention-grabbing reinforcement learning PowerPoint templates and discuss the effective uses of artificial intelligence in various areas such as supply chain, human resources, fraud detection, knowledge creation, research and development, etc. You can also present the usage of AI in healthcare, including treatment, diagnosis, training and research, early detection, etc. Explain the working of machine learning by downloading our attention-grabbing supervised learning PowerPoint presentation. https://bit.ly/3kQBnEZ
Slides for a talk about Graph Neural Network architectures, with an overview based on the excellent survey by Zonghan Wu et al. (https://arxiv.org/pdf/1901.00596.pdf)
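As a minimal illustration of one architecture family covered in that survey, here is the graph-convolutional (GCN) propagation rule on a toy four-node graph in NumPy; the graph, node features, and weight matrix are made-up placeholders, not taken from the slides.

```python
import numpy as np

# Toy undirected graph with 4 nodes: edges 0-1, 1-2, 2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                  # one-hot node features
W = np.full((4, 2), 0.5)       # toy layer weights (4 -> 2 features)

# GCN layer: H = relu(D^{-1/2} (A + I) D^{-1/2} X W)
A_hat = A + np.eye(4)                                # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)
```

Each row of H is a node embedding computed by averaging (with degree normalization) the features of the node and its neighbours, then applying a learned linear map and a nonlinearity.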
https://telecombcn-dl.github.io/dlmm-2017-dcu/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
This presentation on Recurrent Neural Networks will help you understand what a neural network is, which neural networks are popular, why we need recurrent neural networks, what a recurrent neural network is, how an RNN works, what the vanishing and exploding gradient problems are, and what LSTM is; you will also see a use-case implementation of LSTM (Long Short-Term Memory). Neural networks used in deep learning consist of different layers connected to each other and are modeled on the structure and functions of the human brain. They learn from huge volumes of data and use complex algorithms to train a neural net. A recurrent neural network works on the principle of saving the output of a layer and feeding it back to the input in order to predict the output of the layer. Now let's dive into this presentation and understand what an RNN is and how it actually works.
Below topics are explained in this recurrent neural networks tutorial:
1. What is a neural network?
2. Popular neural networks?
3. Why recurrent neural network?
4. What is a recurrent neural network?
5. How does an RNN work?
6. Vanishing and exploding gradient problem
7. Long short term memory (LSTM)
8. Use case implementation of LSTM
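The vanishing-gradient problem from topic 6 can be demonstrated numerically. The sketch below multiplies the per-step Jacobians of a simple tanh RNN and watches their product shrink; the recurrent weight matrix (scaled orthogonal, spectral norm 0.8) and input scale are assumptions chosen to make the effect visible, not part of the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
W = 0.8 * Q                     # recurrent weights with spectral norm 0.8
h = np.zeros(4)
jacobian = np.eye(4)

# Unroll 50 steps: the product of per-step Jacobians d h_t / d h_{t-1}
# tells us how much the first input still influences the final state.
for _ in range(50):
    h = np.tanh(W @ h + 0.5 * rng.standard_normal(4))
    step_jac = np.diag(1.0 - h ** 2) @ W   # tanh'(pre) is at most 1
    jacobian = step_jac @ jacobian

grad_norm = np.linalg.norm(jacobian)       # shrinks geometrically with depth
```

Because each step multiplies by a matrix of norm below one, the gradient through 50 steps is vanishingly small, which is exactly the problem that LSTM's gated cell state is designed to avoid.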
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed to conduct machine learning and deep neural network research. With our deep learning course, you'll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks, and traverse layers of data abstraction to understand the power of data, preparing you for your new role as a deep learning scientist.
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this TensorFlow course, you’ll build expertise in deep learning models and learn to operate TensorFlow to manage neural networks and interpret the results.
And according to payscale.com, the median salary for engineers with deep learning skills tops $120,000 per year.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
Learn more at: https://www.simplilearn.com/
The slides are based on the first part of the survey paper: W. Li, Z. Wang, G. Wei, L. Ma, J. Hu, and D. Ding, “A survey on multi-sensor fusion and consensus filtering for sensor networks,” Discrete Dynamics in Nature and Society, vol. 2015, Article ID 683701, 12 pages, 2015. See more at: http://dx.doi.org/10.1155/2015/683701.
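A minimal sketch of the consensus idea the survey covers, assuming a toy ring network of four scalar sensors and the standard linear consensus update (the gain and topology below are illustrative, not taken from the paper):

```python
import numpy as np

# Four sensors hold noisy scalar estimates of the same quantity (true value ~10).
x = np.array([9.2, 10.5, 10.1, 9.8])

# Ring topology: each sensor exchanges values only with its two neighbours.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

eps = 0.2   # consensus gain; must be below 1/max_degree for convergence
for _ in range(100):
    # Each sensor moves toward its neighbours' values; for a symmetric
    # topology this preserves the network-wide average at every step.
    x = x + eps * (A @ x - A.sum(axis=1) * x)

consensus = x
```

All sensors converge to the average of the initial estimates (9.9 here), using only local communication; consensus filtering builds state estimators on top of this primitive.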
[PR12] Inception and Xception - JaeJun Yoo
Introduction to Inception and Xception
video: https://youtu.be/V0dLhyg5_Dw
Papers:
Going Deeper with Convolutions
Rethinking the Inception Architecture for Computer Vision
Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Xception: Deep Learning with Depthwise Separable Convolutions
In some applications, the output of the system is a sequence of actions. In such a case, a single action is not important on its own; what matters is whether the sequence of actions reaches the goal. A good example is game playing, where a single move by itself is not that important. When the agent acts on its environment, it receives some evaluation of its action (a reinforcement), but it is not told which action is the correct one for achieving its goal.
Reinforcement Learning 6. Temporal Difference Learning - Seung Jae Lee
A summary of Chapter 6: Temporal Difference Learning of the book 'Reinforcement Learning: An Introduction' by Sutton and Barto. You can find the full book on Professor Sutton's website: http://incompleteideas.net/book/the-book-2nd.html
Check my website for more slides of books and papers!
https://www.endtoend.ai
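Chapter 6's core update, TD(0), fits in a few lines; the sketch below uses a made-up three-state deterministic chain (the states, reward, and step size are illustrative assumptions, not an example from the book).

```python
import numpy as np

# Tiny 3-state chain: 0 -> 1 -> 2 (terminal), reward 1 only on entering state 2.
V = np.zeros(3)
alpha, gamma = 0.1, 1.0

for _ in range(500):
    s = 0
    while s != 2:
        s_next = s + 1
        r = 1.0 if s_next == 2 else 0.0
        # TD(0): bootstrap from the current estimate of the next state's value,
        # rather than waiting for the episode's full return (as Monte Carlo does).
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next
```

With an undiscounted deterministic chain, both non-terminal states converge to their true value of 1.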
A Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples. In two-dimensional space this hyperplane is a line dividing the plane into two parts, with each class lying on either side.
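A hedged sketch of that idea: instead of the exact quadratic-programming solver a production SVM uses, the toy NumPy code below finds a separating line by sub-gradient descent on the regularized hinge loss (the data, learning rate, and regularization constant are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two linearly separable 2-D clusters with labels -1 and +1.
X = np.vstack([rng.normal([-2.0, -2.0], 0.5, (50, 2)),
               rng.normal([2.0, 2.0], 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1
for _ in range(200):
    margins = y * (X @ w + b)
    mask = margins < 1.0        # points inside or on the wrong side of the margin
    if mask.any():
        # Subgradient of the regularized hinge loss: only margin violators
        # (the would-be support vectors) contribute to the update.
        grad_w = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
        grad_b = -y[mask].mean()
    else:
        grad_w, grad_b = lam * w, 0.0
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = (np.sign(X @ w + b) == y).mean()
```

On separable data like this, the learned line classifies every training point correctly, and only the points near the margin influence where it ends up.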
Part 1 of the Deep Learning Fundamentals Series, this session discusses the use cases and scenarios surrounding Deep Learning and AI; reviews the fundamentals of artificial neural networks (ANNs) and perceptrons; discuss the basics around optimization beginning with the cost function, gradient descent, and backpropagation; and activation functions (including Sigmoid, TanH, and ReLU). The demos included in these slides are running on Keras with TensorFlow backend on Databricks.
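The cost-function / gradient-descent / activation pipeline described above can be condensed into a single sigmoid neuron learning the OR function; this is a from-scratch NumPy sketch under assumed toy settings, not the session's Keras demo.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Learn OR with one sigmoid neuron via gradient descent on cross-entropy.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

w, b, lr = np.zeros(2), 0.0, 1.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    # With the cross-entropy cost and a sigmoid output, dL/dz simplifies
    # to (p - y): this is the error signal backpropagation would pass on.
    err = p - y
    w -= lr * X.T @ err / len(y)
    b -= lr * err.mean()

preds = (sigmoid(X @ w + b) > 0.5).astype(float)
```

After a few hundred gradient steps the neuron reproduces the OR truth table exactly.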
PR-231: A Simple Framework for Contrastive Learning of Visual Representations - Jinwon Lee
This is a review of paper #231 from the TensorFlow Korea paper-reading group PR12.
The paper is A Simple Framework for Contrastive Learning of Visual Representations, from Google Brain. Geoffrey Hinton is its last author, which has drawn extra attention to it recently.
It is a self-supervised learning paper in the currently very hot area of contrastive learning: it proposes an unsupervised pre-training method that matches the performance of a ResNet-50 trained with supervised learning. Using data augmentation, a non-linear projection head, large batch sizes, longer training, and the NT-Xent loss, it shows that excellent representation learning is possible, and it also reports very strong results on semi-supervised learning and transfer learning. Please see the video for details.
Paper link: https://arxiv.org/abs/2002.05709
Video link: https://youtu.be/FWhM3juUM6s
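The NT-Xent loss mentioned in the review can be sketched in NumPy as follows; the batch size, embedding dimension, and temperature are illustrative assumptions (the paper's actual implementation is a TensorFlow training pipeline).

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss for a batch of positive pairs (z1[i], z2[i])."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine-similarity space
    n = len(z1)
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)     # a sample is never its own negative
    # The positive of index i is i + n (and vice versa); all other
    # samples in the batch act as negatives.
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
views = rng.standard_normal((8, 16))
aligned = nt_xent(views, views + 0.01 * rng.standard_normal((8, 16)))
random_pairs = nt_xent(views, rng.standard_normal((8, 16)))
```

The loss is low when the two views of each sample agree and high when the pairing is random, which is exactly the signal SimCLR trains the encoder on.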
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both an algorithmic and computational perspectives.
Transfer Learning CV - Souradip and Sayak - Sayak Paul
Transfer learning is specifically very helpful when there is a scarcity of data, limited bandwidth that might not allow training deep models from scratch, and so on. In the world of computer vision, ImageNet pre-training has been widely successful across a number of different tasks, image classification being the most popular one. All of that success has been possible mainly because of the ImageNet dataset, which is a collection of images spanning 1000 labels. This is where a stern limitation comes in: the need for labeled data. In this session, we want to take a deep dive into the world of self-supervised learning, which allows models to exploit the implicit labels of input data. In the first half of the session, we will be covering the basics of transfer learning, its successes, and its challenges. We will then start by formulating the problem that self-supervised learning tries to address. In the second half of the session, we will be discussing the ABCs of self-supervised learning along with some examples. We will conclude with a short code walk-through and a discussion of the challenges of self-supervised learning.
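The transfer-learning setup described in the first half of the session, freeze a pre-trained backbone and train only a small head, can be caricatured in NumPy. Here a fixed random projection stands in for an ImageNet-trained backbone, which is purely an illustrative assumption; the point is that only the head's parameters are updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained backbone: a fixed random projection.
# A real setup would use, e.g., the penultimate layer of an ImageNet CNN.
W_backbone = rng.standard_normal((32, 8)) / np.sqrt(32)

def frozen_features(x):
    return np.tanh(x @ W_backbone)      # backbone weights are never updated

# Small labeled dataset for the downstream task (two classes).
X = np.vstack([rng.normal(1.0, 1.0, (40, 32)),
               rng.normal(-1.0, 1.0, (40, 32))])
y = np.array([1.0] * 40 + [0.0] * 40)

# Transfer learning as linear probing: train only a logistic head.
F = frozen_features(X)
w, b, lr = np.zeros(8), 0.0, 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    err = p - y
    w -= lr * F.T @ err / len(y)
    b -= lr * err.mean()

head_acc = ((F @ w + b > 0) == (y == 1)).mean()
```

Even with only 8 trainable head weights and a small dataset, the probe classifies well, which is the practical appeal of transfer learning when data or compute is scarce.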
Semantic Segmentation on Satellite Imagery - RAHUL BHOJWANI
This is an Image Semantic Segmentation project targeted on Satellite Imagery. The goal was to detect the pixel-wise segmentation map for various objects in Satellite Imagery including buildings, water bodies, roads etc. The data for this was taken from the Kaggle competition <https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection>.
We implemented the FCN, U-Net and SegNet deep learning architectures for this task.
From Conventional Machine Learning to Deep Learning and Beyond.pptx - Chun-Hao Chang
In these slides, deep learning is compared with conventional learning, and the strengths of DNN models are explained.
The target audience is people who know machine learning or data mining but are not familiar with deep learning.
https://telecombcn-dl.github.io/2017-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both an algorithmic and computational perspectives.
Finding the best solution for Image Processing - Tech Triveni
What lies beyond using TensorFlow, a GPU, or a TPU to process images seamlessly? Do we have a silver bullet for image processing? Over the years, image processing has attracted a different level of attention, and its ease of use has now become a reality. We have started seeing how the Residual Neural Network architecture is being used for different cases, and beyond that, how the Residual Neural Network is being tweaked to solve different problems. Along with tweaking the ResNet, preprocessing is also being improved to support different architectures.
We have almost become cyborgs already, with mobile phones in our hands, and apparently human beings will not rest until they bring AI/ML to phones completely. We are going to see the development of different architectures and algorithms for running AI/ML on low-configuration devices.
In this session, we are going to talk about different research papers on these topics, along with some implementations as well.
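The residual idea behind the ResNet tweaks discussed in this session is compact enough to sketch: a block computes y = x + F(x), so with small weights it stays close to the identity mapping. The weights and sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# A residual block computes y = x + F(x): the identity "skip" path lets
# gradients flow even when the learned branch F contributes little.
W1 = 0.1 * rng.standard_normal((8, 8))
W2 = 0.1 * rng.standard_normal((8, 8))

def residual_block(x):
    return x + W2 @ relu(W1 @ x)

x = rng.standard_normal(8)
y = residual_block(x)

# With small weights the block is near the identity, which is what
# makes very deep stacks of such blocks trainable.
deviation = np.linalg.norm(y - x) / np.linalg.norm(x)
```

Stacking many near-identity blocks is the design choice that lets ResNets grow to hundreds of layers without the optimization collapsing.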
The Frontier of Deep Learning in 2020 and Beyond - NUS-ISS
This talk will be a summary of the recent advances in deep learning research, current trends in the industry, and the opportunities that lie ahead.
We will discuss topics in research such as:
Transformers, GPT-3, BERT
Neural Architecture Search, Evolutionary Search
Distillation, self-learning
NeRF
Self-Attention
Also shifting industry trends such as:
The move to free data
Rising importance of 3D vision
Using synthetic data (Sim2Real)
Mobile vision & Federated Learning
This talk was presented at Startup Master Class 2017 (http://aaiitkblr.org/smc/) at Christ College Bangalore. It was hosted by the IIT Kanpur Alumni Association and co-presented by the IIT KGP Alumni Association, IITACB, PanIIT, IIMA and IIMB alumni.
My co-presenter was Biswa Gourav Singh. And contributor was Navin Manaswi.
http://dataconomy.com/2017/04/history-neural-networks/ - timeline for neural networks
Similar to Learning visual representation without human label (20)
A new paper published by OpenAI discusses generalization in deep learning and observes how model complexity and data complexity influence each other.
Learning to discover Monte Carlo algorithm on spin ice manifold - Kai-Wen Zhao
A global-update Monte Carlo sampler can be discovered naturally by a machine trained with the policy gradient method in a topologically constrained environment.
Toward Disentanglement through Understanding ELBO - Kai-Wen Zhao
Disentangled representation is the holy grail of representation learning: it factorizes human-understandable factors in an unsupervised way, which helps us move toward interpretable machine learning.
Deep Reinforcement Learning: Q-Learning - Kai-Wen Zhao
These slides review deep reinforcement learning, especially Q-Learning and its variants. We introduce the Bellman operator and approximate it with a deep neural network. Last but not least, we review the classic paper in which DeepMind's Atari game agent beats human performance. Some tips for stabilizing DQN are also included.
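The Bellman-style update at the heart of Q-Learning can be shown on a toy corridor before any deep network is involved; the environment, step size, and exploration rate below are illustrative assumptions, a tabular sketch rather than DQN itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D corridor: states 0..4, start at 2; reaching state 4 pays +1, state 0 pays 0.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.2

for _ in range(2000):
    s = 2
    while s not in (0, 4):
        # Epsilon-greedy behaviour policy for exploration.
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s_next = s - 1 if a == 0 else s + 1
        r = 1.0 if s_next == 4 else 0.0
        # Q-learning: bootstrap from the best next action (off-policy),
        # i.e. a sampled application of the Bellman optimality operator.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

greedy = Q.argmax(axis=1)           # greedy policy per state
```

The learned greedy policy heads right toward the rewarding terminal state; DQN replaces the table Q with a neural network trained on the same update target.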
High Dimensional Data Visualization using t-SNE - Kai-Wen Zhao
A review of the t-SNE algorithm, which helps visualize high-dimensional data on a manifold by projecting it onto 2-D or 3-D space while preserving the metric structure.
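The first step of t-SNE, turning pairwise distances into neighbour probabilities, can be sketched as follows. Note the simplification: a single fixed bandwidth is assumed here, whereas real t-SNE tunes a per-point bandwidth to match a target perplexity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated 10-D clusters of 10 points each.
X = np.vstack([rng.normal(0.0, 0.1, (10, 10)),
               rng.normal(5.0, 0.1, (10, 10))])

# High-dimensional affinities: Gaussian kernel on pairwise distances.
d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
sigma = 1.0
P = np.exp(-d2 / (2 * sigma ** 2))
np.fill_diagonal(P, 0.0)                 # a point is not its own neighbour
P = P / P.sum(axis=1, keepdims=True)     # conditional probabilities p_{j|i}

# Probability mass concentrates on same-cluster neighbours; this is the
# structure the low-dimensional embedding is then optimized to preserve.
same_cluster_mass = P[:10, :10].sum(axis=1)
```

The rest of the algorithm defines analogous heavy-tailed (Student-t) affinities in the low-dimensional map and minimizes the KL divergence between the two distributions.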
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
5. Today’s Topic
Visual Representation
● Global: style, semantics
● Local: attribute
● Metric: embedding
Label
● Much more expensive than raw data (cannot scale)
● Usually annotated for the specific task
● Contains far less information than data itself
General visual features
6. Goal of self-supervised learning
● Explore the structure of data distribution
● Task-driven representations are limited by targets (requirements of the task)
● Rapid generalization to new tasks and applications
Motivation
7. ● Hot research topic
● Performance approaches the supervised setting
● Related to deep metric learning
Practical Motivation
SSLFTW → Self-Supervised Learning F**k The World !!!
https://twitter.com/ylecun/status/1228763787244843013
9. ● Are manual labels necessary for learning useful concepts? Does the data itself contain rich information?
● Can we treat each image as a single class? Or even each pixel as a class?
● Can we implicitly increase the batch size? How do we maintain embedding-space stability?
● Does the final layer contain a rich representation?
● Have we reached the capacity upper bound of ResNet-50? If not, what is the efficient training procedure?
● Is data augmentation a trick or an essential feature of deep learning?
Outline: Questions to be Discussed
16. Two Major Ideas
Pretext Task (Surrogate Loss)
● Rotation
● Jigsaw Puzzle
● Colorization
`Data-Centric` Loss Function
● Mutual Information
● Energy-based model
Aside:
Disentanglement in β-TCVAE
18. Autoregressive Generative Model & PixelCNN
Product chain rule of probability
Conditional Image Generation with PixelCNN Decoders
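The product chain rule referenced above, as used by autoregressive models such as PixelCNN, factorizes the image likelihood pixel by pixel in raster-scan order:

```latex
% Autoregressive factorization of an n x n image:
% each pixel is conditioned on all previously generated pixels.
p(\mathbf{x}) = \prod_{i=1}^{n^2} p(x_i \mid x_1, \ldots, x_{i-1})
```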
19. Self-Supervised without Reconstruction
● PixelCNN is nearly the best likelihood model
● But log-likelihood models are poor at encoding high-level information
● Deep networks learn hierarchical internal representations of the data
● Learn the dataset, not the data points
● Use high-level information to organize low-level data rather than annotate it
20. CPC: Contrastive Predictive Coding
Rather than directly modeling the distribution, extracting the shared information between x and c may serve the purpose better
Summarize pixels into a context (or history into the current state)
MI: a generalized correlation function between c and x
22. CPC: Contrastive Predictive Coding
To maximize mutual information, we model the ratio of probability densities p(x | c) / p(x)
Simply use a log-bilinear model as f, and a linear map to predict the future
Note: a bilinear map is f(u, v) = dot(u, v)
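Written out (following the CPC paper), the density-ratio model is a log-bilinear score with one step-specific linear prediction matrix W_k:

```latex
% Log-bilinear density-ratio model from CPC: z_{t+k} is the encoded
% future, c_t the context, and W_k a linear map per step k.
f_k(x_{t+k}, c_t) = \exp\!\left( z_{t+k}^{\top} W_k\, c_t \right)
\;\propto\; \frac{p(x_{t+k} \mid c_t)}{p(x_{t+k})}
```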
23. CPC: Contrastive Predictive Coding
● It is called InfoNCE (a categorical cross-entropy, i.e. softmax loss)
● 1 positive sample; N−1 negative samples
● N is crucial to the performance
● A loose theoretical lower-bound estimate of the mutual information
(the softmax contrasts the real future against all predictions)
Learning Deep Representations of Fine-grained Visual Descriptions
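A minimal NumPy sketch of the InfoNCE loss described above (function name and shapes are illustrative, not from the paper): scores between N context/future pairs form an N×N matrix, the diagonal holds the positives, and the loss is a categorical cross-entropy toward the diagonal.

```python
import numpy as np

def info_nce(z_context, z_future, temperature=1.0):
    """InfoNCE over a batch of N pairs: entry (i, j) scores
    context i against future j; the positive sits on the
    diagonal, the other N-1 entries act as negatives."""
    logits = z_context @ z_future.T / temperature        # (N, N) bilinear scores
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))                  # cross-entropy on diagonal
```

With well-aligned pairs the loss approaches 0; mismatched pairs push it up, which is exactly the mutual-information lower bound being maximized.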
25. Evaluation Protocol
● Linear Evaluation
● Efficient Classification
● Transfer Learning
Labelled images in ImageNet
(14 million images)
● 1% : 12.8 per class
● 10%: 128 per class
Balanced distribution over classes.
26. CPC: Contrastive Predictive Coding
● First paper to show a significant improvement on a real dataset
● Top-1 = 71.5% on ImageNet
● Label efficiency becomes an advantage
27. CPC: Contrastive Predictive Coding
From v1 to v2
MC: Model Capacity
BU: Bottom-up Spatial Prediction
LN: Layer Normalization
RC: Random Crop
HP: Horizontal Spatial Prediction
PA: Patch Augmentation
MC & LN
● R101 → R161 & increased feature dimension
● BN statistics may cheat
31. Qualitative Study
Remarks
● Pretext-task performance does not always translate well
● Skip connections prevent degradation of representation quality
● Model capacity (depth & representation size) strongly influences quality
Revisiting Self-Supervised Visual Representation Learning
32. MoCo: Momentum Contrast
Contrastive Learning as Dictionary Look-Up
Dictionary should be large & consistent
● Context: Right Key
Momentum Contrast for Unsupervised Visual Representation Learning
33. MoCo: Momentum Contrast
Memory Bank (like a replay buffer in RL)
● Batch size is limited by hardware
● Maintains all keys in memory: O(N)
MoCo
● Dynamic queue rather than memory bank: O(N) → O(K)
● Momentum-update the key encoder rather than the keys
Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination
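The two MoCo ingredients above can be sketched in NumPy with hypothetical names (`momentum_update`, `KeyQueue`); the real implementation applies these to encoder weights and GPU tensors:

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.999):
    """EMA update: the key encoder drifts slowly toward the query
    encoder, keeping dictionary keys consistent across iterations."""
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

class KeyQueue:
    """Fixed-size FIFO dictionary of keys: O(K) memory instead of
    the O(N) memory bank that stores one key per dataset sample."""
    def __init__(self, K, dim):
        self.keys = np.zeros((K, dim))
        self.ptr = 0
        self.K = K

    def enqueue(self, batch_keys):
        # overwrite the oldest slots with the newest batch of keys
        n = len(batch_keys)
        idx = (self.ptr + np.arange(n)) % self.K
        self.keys[idx] = batch_keys
        self.ptr = (self.ptr + n) % self.K
```

Each training step enqueues the current batch's keys and momentum-updates the key encoder, so the queue holds keys produced by slowly varying encoders.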
37. SimCLR: Simple Framework for Contrastive Learning
To date in 2020, Hinton's group has published
- Stacked Capsule AE
- SimCLR
- Subclass Distillation
A Simple Framework for Contrastive Learning of Visual Representations
38. SimCLR: Simple Framework for Contrastive Learning
● Data augmentation is crucial for contrastive learning
● A non-linear mapping preserves the information
● Larger batch sizes
● Normalized embeddings
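The normalized embeddings and temperature combine in SimCLR's NT-Xent loss. A NumPy sketch with illustrative names (`nt_xent`; real code runs on projection-head outputs):

```python
import numpy as np

def nt_xent(h_i, h_j, temperature=0.5):
    """NT-Xent: normalized temperature-scaled cross-entropy.
    h_i and h_j hold projections of two augmented views of the
    same N images; row k of h_i is positive with row k of h_j,
    and the other 2N-2 rows serve as negatives."""
    z = np.vstack([h_i, h_j])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # normalized embedding
    sim = z @ z.T / temperature                        # (2N, 2N) cosine scores
    np.fill_diagonal(sim, -np.inf)                     # drop self-similarity
    n = len(h_i)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # partner view
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(2 * n), pos])
```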
41. SimCLR: Simple Framework for Contrastive Learning
Random Cropping + Color Distortion
Histogram of different crops in two images
42. SimCLR: Simple Framework for Contrastive Learning
Recall the ReID Strong Baseline
● h: triplet embedding
● z: inference embedding & ID loss
● g: batch norm
The contrastive loss induces a loss of information for downstream tasks.
Projection
Representation
Bag of Tricks and A Strong Baseline for Deep Person Re-identification
43. SimCLR: Simple Framework for Contrastive Learning
Money Talks
● Batch Size
● Training Epoch
● Simple
47. Teacher-Student
Teacher-Student is a kind of self-training framework
● Train a teacher with labelled data D
● Run the trained teacher on unlabelled examples D'
● Train a new student on D'
● Finetune the student on D
Billion-scale semi-supervised learning for image classification
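The four steps above can be sketched with a toy classifier standing in for the networks (`NearestCentroid` is a stand-in of my choosing, not the paper's model, and the refit on D' + D approximates the finetuning step):

```python
import numpy as np

class NearestCentroid:
    """Toy stand-in for a network: classify by nearest class centroid."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self

    def predict(self, X):
        d = ((X[:, None, :] - self.centroids[None, :, :]) ** 2).sum(axis=2)
        return self.classes[d.argmin(axis=1)]

def teacher_student(X_lab, y_lab, X_unlab):
    teacher = NearestCentroid().fit(X_lab, y_lab)     # 1. train teacher on D
    pseudo = teacher.predict(X_unlab)                 # 2. pseudo-label D'
    student = NearestCentroid().fit(X_unlab, pseudo)  # 3. train student on D'
    # 4. "finetune" on D: the toy model is simply refit on D' + D
    return student.fit(np.vstack([X_unlab, X_lab]),
                       np.concatenate([pseudo, y_lab]))
```

The student ends up trained mostly on pseudo-labelled data, which is why the quality of the teacher's predictions (and, in Noisy Student, the noise added) matters so much.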
49. Noisy Student
● Train a teacher with labelled data D
● Run the trained teacher on unlabelled examples D'
● Train an equal-size or larger student on D & D', with noise added to the student
→ Knowledge Expansion
Self-training with Noisy Student improves ImageNet classification
50. Noisy Student
Data Noise
● RandAug
→ Local Smoothness
Model Noise
● Dropout
● Stochastic Depth
→ Ensemble teacher
Others
● Data Balancing
Deep Networks with Stochastic Depth,
RandAugment: Practical automated data augmentation with a reduced search space
53. Affinity and Diversity
Affinity and Diversity: Quantifying Mechanisms of Data Augmentation
● Affinity: distribution shift caused by the augmentation
● Diversity: complexity of the augmentation applied
(both are model-dependent measures)
Increases the effective number of unique training examples
54. Affinity and Diversity
Performance Boost Tricks
● Decaying the learning rate on an appropriate schedule
● Turning off L2 regularization at the right time in training does not hurt performance
● Relaxing architectural constraints mid-training can boost final performance
● Turning augmentations off and fine-tuning on clean data can improve final test accuracy
55. Conclusion
Insights & Techniques
● Usage of Unlabelled or Pseudo Labelled Data
● Contrastive Loss Extracts Representative Features
● Distillation Squeezes Large-Scale Dataset
● Data Balance
● Dynamic Queue for Negative Samples
● Momentum Update for Encoder
● Non-linear Head for Representation Preservation
● Augmentation Composition
● Constraint Relaxation during Training
56. References
1. Unsupervised Representation Learning by Predicting Image Rotations
2. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles
3. Tracking Emerges by Colorizing Videos
4. Conditional Image Generation with PixelCNN Decoders
5. Representation Learning with Contrastive Predictive Coding
6. Learning Deep Representations of Fine-grained Visual Descriptions
7. Data-Efficient Image Recognition with Contrastive Predictive Coding
8. Contrastive Multiview Coding
9. Revisiting Self-Supervised Visual Representation Learning
10. Momentum Contrast for Unsupervised Visual Representation Learning
11. Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination
12. A Simple Framework for Contrastive Learning of Visual Representations
13. Billion-scale semi-supervised learning for image classification
14. Self-training with Noisy Student improves ImageNet classification
15. Deep Networks with Stochastic Depth
16. RandAugment: Practical automated data augmentation with a reduced search space
17. Affinity and Diversity: Quantifying Mechanisms of Data Augmentation