The performance of deep neural networks improves with more annotated data, but the annotation budget is limited. One solution is active learning, in which a model asks a human to annotate the data it perceives as uncertain. A variety of recent methods apply active learning to deep networks, but most are either designed specifically for their target tasks or computationally inefficient for large networks. In this paper, we propose a novel active learning method that is simple and task-agnostic, and that works efficiently with deep networks. We attach a small parametric module, named the "loss prediction module," to a target network and learn it to predict the target losses of unlabeled inputs. This module can then suggest data on which the target model is likely to produce a wrong prediction. The method is task-agnostic because networks are learned from a single loss regardless of the target task. We rigorously validate our method on image classification, object detection, and human pose estimation with recent network architectures. The results demonstrate that our method consistently outperforms previous methods across these tasks.
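The selection step the abstract describes, querying the unlabeled samples whose predicted loss is highest, can be sketched with toy numpy stand-ins. This is a minimal illustration, not the paper's method: the "loss prediction module" here is a hypothetical scoring function, not a learned network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pool of unlabeled samples and a stand-in "loss prediction
# module" that maps a feature vector to a scalar predicted loss.
pool = rng.normal(size=(100, 8))   # 100 unlabeled samples, 8 features each
w = rng.normal(size=8)             # toy parameters of the module

def predicted_loss(x):
    # Any scalar score works for illustrating the selection step.
    return float(np.abs(x @ w))

# Active-learning step: ask annotators to label the K samples whose predicted
# target loss is highest (where the model is most likely to be wrong).
K = 10
scores = np.array([predicted_loss(x) for x in pool])
query_idx = np.argsort(scores)[-K:]   # indices of the K highest-loss samples
```

In the paper's setting these K samples would be sent for human annotation and added to the labeled training set before the next round.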
In this talk, Dmitry shares the approach to feature engineering that he has used successfully in various Kaggle competitions. He covers common techniques for converting your features into the numeric representations used by ML algorithms.
Speaker: Yunjey Choi (M.S. student, Korea University)
Yunjey Choi majored in computer science at Korea University and is currently a master's student studying machine learning. He enjoys coding and sharing what he has learned with others. He studied deep learning with TensorFlow for a year and is now studying Generative Adversarial Networks with PyTorch. He has implemented several papers in TensorFlow and published a PyTorch tutorial on GitHub.
Overview:
The Generative Adversarial Network (GAN), first proposed by Ian Goodfellow in 2014, is a generative model that estimates the distribution of real data through adversarial training. GAN has recently emerged as one of the most popular research areas, with numerous related papers appearing every day.
Finding it hard to keep up with the flood of GAN papers? That's fine. Once you fully understand the basic GAN, new papers become easy to follow.
In this talk, I will share everything I know about GANs. It should be useful for those completely new to GANs, those curious about the theory behind them, and those wondering how GANs can be applied.
Talk video: https://youtu.be/odpjk7_tGY0
PR-214: FlowNet: Learning Optical Flow with Convolutional Networks (Hyeongmin Lee)
My first PR12 talk covers the FlowNet paper.
Optical flow is a map that gives, for every position in two adjacent video frames, the vector describing how far each pixel moved from the first frame to the second. Since analyzing motion in video is very important, optical flow is one of its key ingredients. In this video, we will look at various classical computer-vision optical flow algorithms and at FlowNet, a neural network that estimates optical flow with deep learning.
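The per-pixel displacement map described above can be illustrated with a tiny numpy example. This is a toy sketch (not FlowNet): a constant flow field that moves every pixel one step to the right, applied by a naive forward warp.

```python
import numpy as np

# A flow field is an (H, W, 2) map giving, for each pixel of frame 1,
# its displacement (dy, dx) into frame 2.
H, W = 4, 6
frame1 = np.arange(H * W, dtype=float).reshape(H, W)
flow = np.zeros((H, W, 2))
flow[..., 1] = 1.0   # every pixel moves one step to the right

def warp(img, flow):
    # Naive forward warp: splat each source pixel to its displaced location
    # (with wrap-around at the borders, to keep the toy example simple).
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            dy, dx = flow[y, x]
            sy = int(y + dy) % img.shape[0]
            sx = int(x + dx) % img.shape[1]
            out[sy, sx] = img[y, x]
    return out

frame2 = warp(frame1, flow)   # each row of frame1 shifted right by one pixel
```

Estimating the flow field from two given frames is the inverse problem that the classical algorithms and FlowNet both try to solve.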
Thank you!!
Video link: https://youtu.be/Z_t0shK98pM
Paper link: http://openaccess.thecvf.com/content_iccv_2015/html/Dosovitskiy_FlowNet_Learning_Optical_ICCV_2015_paper.html
This presentation covers the Decision Tree as a supervised machine learning technique, discussing the Information Gain and Gini Index methods with their related algorithms.
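The Gini Index method mentioned above scores candidate splits by class impurity. A minimal sketch of the computation (illustrative helper names, not from the slides):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum_i p_i^2."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_gini(left, right):
    """Weighted Gini impurity of a binary split; decision-tree learners
    pick the split that minimizes this value."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

print(gini(["a", "a", "a"]))       # 0.0 - a pure node
print(gini(["a", "a", "b", "b"]))  # 0.5 - a maximally mixed binary node
```

A split that separates the classes perfectly, e.g. `split_gini(["a", "a"], ["b", "b"])`, yields impurity 0 and would be preferred.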
Part 2 of the Deep Learning Fundamentals Series, this session discusses Tuning Training (including hyperparameters, overfitting/underfitting), Training Algorithms (including different learning rates, backpropagation), Optimization (including stochastic gradient descent, momentum, Nesterov Accelerated Gradient, RMSprop, and adaptive algorithms such as Adam and Adadelta), and a primer on Convolutional Neural Networks. The demos included in these slides run on Keras with a TensorFlow backend on Databricks.
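Two of the update rules this session covers, plain SGD and SGD with momentum, can be sketched on a one-dimensional quadratic. This is a toy illustration under assumed hyperparameters, not the session's demo code:

```python
# Minimize f(w) = w^2, whose gradient is 2w.
grad = lambda w: 2.0 * w

def sgd(w, lr=0.1, steps=200):
    # Plain gradient descent: step against the gradient.
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def sgd_momentum(w, lr=0.1, beta=0.9, steps=200):
    # Momentum: accumulate a velocity term that smooths successive gradients.
    v = 0.0
    for _ in range(steps):
        v = beta * v + grad(w)
        w -= lr * v
    return w

print(sgd(5.0), sgd_momentum(5.0))  # both approach the minimum at 0
```

RMSprop and Adam extend this pattern by additionally rescaling each step with a running estimate of the squared gradients.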
Speaker: Taesung Park (Ph.D. student, UC Berkeley)
Date: June 2017
Taesung Park is a Ph.D. student at UC Berkeley in AI and computer vision, advised by Prof. Alexei Efros.
His research interest lies between computer vision and computational photography, such as generating realistic images or enhancing photo qualities. He received B.S. in mathematics and M.S. in computer science from Stanford University.
Overview:
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.
However, for many tasks, paired training data will not be available.
We present an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples.
Our goal is to learn a mapping G: X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss.
Because this mapping is highly under-constrained, we couple it with an inverse mapping F: Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc.
Quantitative comparisons against several prior methods demonstrate the superiority of our approach.
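The cycle-consistency constraint F(G(X)) ≈ X described above can be sketched numerically with toy linear stand-ins for the two mappings. This is an illustration of the loss term only, with hypothetical generators, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "translators": G maps domain X -> Y, F maps Y -> X.
A = rng.normal(size=(4, 4))
G = lambda x: A @ x                   # stand-in generator X -> Y
F = lambda y: np.linalg.solve(A, y)   # stand-in inverse mapping Y -> X

def cycle_loss(x):
    # L1 cycle-consistency loss ||F(G(x)) - x||_1: small when F undoes G.
    return float(np.abs(F(G(x)) - x).sum())

x = rng.normal(size=4)
print(cycle_loss(x))  # near zero, since F exactly inverts G here
```

In training, this loss is added (for both cycle directions) to the adversarial losses, penalizing mappings that lose the information needed to reconstruct the input.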
Introduction to Statistical Machine Learning (mahutte)
This course provides a broad introduction to the methods and practice of statistical machine learning, which is concerned with the development of algorithms and techniques that learn from observed data by constructing stochastic models that can be used for making predictions and decisions. Topics covered include Bayesian inference and maximum likelihood modeling; regression, classification, density estimation, clustering, principal component analysis; parametric, semi-parametric, and non-parametric models; basis functions, neural networks, kernel methods, and graphical models; deterministic and stochastic optimization; overfitting, regularization, and validation.
Supervised and Unsupervised Learning In Machine Learning | Machine Learning T... (Simplilearn)
This presentation on "Supervised and Unsupervised Learning" will help you understand what machine learning is, the types of machine learning, what supervised learning is and its types, what unsupervised learning is and its types, and the differences between supervised and unsupervised machine learning. In supervised learning, the model learns from labeled data, whereas in unsupervised learning, the model trains itself on unlabeled data. Now, let us get started and understand supervised and unsupervised learning and how they differ from each other.
Below are the topics explained in this supervised and unsupervised learning in Machine Learning presentation-
1. What is Machine Learning
- Types of Machine Learning
- Supervised Learning
- Unsupervised Learning
2. Supervised Learning
- Types of Supervised Learning
3. Unsupervised Learning
- Types of Unsupervised Learning
About Simplilearn Machine Learning course:
A form of artificial intelligence, Machine Learning is revolutionizing the world of computing as well as all people’s digital interactions. Machine Learning powers such innovative automated technologies as recommendation engines, facial recognition, fraud protection and even self-driving cars. This Machine Learning course prepares engineers, data scientists and other professionals with the knowledge and hands-on skills required for certification and job competency in Machine Learning.
Why learn Machine Learning?
Machine Learning is taking over the world, and with that there is a growing need among companies for professionals who know the ins and outs of Machine Learning.
The Machine Learning market size is expected to grow from USD 1.03 Billion in 2016 to USD 8.81 Billion by 2022, at a Compound Annual Growth Rate (CAGR) of 44.1% during the forecast period.
By the end of this Machine Learning course, you will be able to:
1. Master the concepts of supervised, unsupervised, and reinforcement learning and modeling.
2. Gain practical mastery over principles, algorithms, and applications of Machine Learning through a hands-on approach which includes working on 28 projects and one capstone project.
3. Acquire a thorough knowledge of the mathematical and heuristic aspects of Machine Learning.
4. Understand the concepts and operation of support vector machines, kernel SVM, naive Bayes, decision tree classifier, random forest classifier, logistic regression, K-nearest neighbors, K-means clustering and more.
5. Be able to model a wide variety of robust Machine Learning algorithms, including deep learning, clustering, and recommendation systems.
Learn more at: https://www.simplilearn.com/
By popular demand, here is a case study of my first Kaggle competition from about a year ago. Hope you find it useful. Thank you again to my fantastic team.
Machine Learning 2 Deep Learning: An Intro (Si Krishan)
Provides a brief introduction to machine learning, reasons for its popularity, a simple walk through example and then a need for deep learning and some of its characteristics. This is an updated version of an earlier presentation.
A Brief Introduction to Machine Learning techniques applied in data science. Definitions and applications of machine learning algorithms. Classification and Regression Techniques.
How Machine Learning Helps Organizations to Work More Efficiently? (Tuan Yang)
Data is increasing day by day, and so is the cost of data storage and handling. However, by understanding the concepts of machine learning, one can easily handle the excessive data and process it in an affordable manner.
The process involves building models using several kinds of algorithms. If a model is created precisely for a certain task, organizations have a very good chance of exploiting profitable opportunities and avoiding the risks lurking behind the scenes.
Learn more about:
» Understanding Machine Learning Objectives.
» Data dimensions in Machine Learning.
» Fundamentals of Algorithms and Mapping from Input/Output.
» Parametric and Non-parametric Machine Learning Algorithms.
» Supervised, Unsupervised and Semi-Supervised Learning.
» Estimating Over-fitting and Under-fitting.
» Use Cases.
Explores the feasibility of a graph-based approach to model student knowledge in the domain of programming. The key idea of this approach is that programming concepts are truly learned not in isolation, but rather in combination with other concepts. Following this idea, we represent a student model as a graph where links are gradually added when the student’s ability to work with connected pairs of concepts in the same context is confirmed. We also hypothesize that with this graph-based approach a number of traditional graph metrics could be used to better measure student knowledge than using more traditional scalar models of student knowledge. To collect some early evidence in favor of this idea, we used data from several classroom studies to correlate graph metrics with various performance and motivation metrics.
Winning Kaggle 101: Introduction to Stacking (Ted Xiao)
An Introduction to Stacking by Erin LeDell, from H2O.ai
Presented as part of the "Winning Kaggle 101" event, hosted by Machine Learning at Berkeley and Data Science Society at Berkeley. Special thanks to the Berkeley Institute of Data Science for the venue!
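The core idea of stacking, fitting a meta-learner on the predictions of base models, can be sketched with a numpy-only toy (illustrative stand-in "models", not the talk's H2O code):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy regression problem.
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

# Base-level predictions. In real stacking these come from cross-validated
# models, so the meta-learner never sees in-fold predictions (avoiding leakage).
base1 = X @ np.array([1.0, 0.0, 0.0])    # crude model using only feature 0
base2 = X @ np.array([0.0, -1.0, 0.0])   # crude model using only feature 1
Z = np.column_stack([base1, base2, np.ones(len(y))])

# Meta-learner: ordinary least squares on the stacked predictions.
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
blend = Z @ coef

def mse(pred):
    return float(np.mean((pred - y) ** 2))

print(mse(base1), mse(base2), mse(blend))  # blend beats either base alone
```

Because the meta-learner can reproduce either base model as a special case, the blend's training error is never worse than the best base model's, which is the intuition behind stacking's strength on Kaggle leaderboards.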
H2O.ai: http://www.h2o.ai/
ML@B: ml.berkeley.edu
DSSB: http://dssberkeley.org
BIDS: http://bids.berkeley.edu/
Hands-On Machine Learning with Scikit-Learn and TensorFlow - Chapter 8 (Hakky St)
This is the documentation of a study meeting in our lab.
The book is "Hands-On Machine Learning with Scikit-Learn and TensorFlow," and this covers chapter 8.
Machine Learning Foundations for Professional Managers (Albert Y. C. Chen)
20180804@Taiwan AI Academy, Hsinchu
A 6-hour lecture for those new to machine learning, to grasp the concepts, advantages, and limitations of various classical machine learning methods. More importantly, to learn the skills to break down large, complicated AI projects into manageable pieces, where features and functionalities can be added incrementally and annotated data accumulated. Take-home message: machine learning is always a delicate balance between model complexity M and the number of data points N, so that the trained classifier generalizes well and does not overfit.
Big data is set to offer tremendous insight. But with terabytes and petabytes of data pouring in to organizations today, traditional architectures and infrastructures are not up to the challenge. This begs the question: How do you present big data in a way that can be quickly understood and used? These data present tremendous opportunities in data mining, a burgeoning field in computer science that focuses on the development of methods that can extract knowledge from data. In many real world problems, data mining algorithms have access to massive amounts of data. Mining all the available data is prohibitive due to computational (time and memory) constraints. Much of the current research is concerned with scaling up data mining algorithms (i.e. improving on existing data mining algorithms for larger datasets). An alternative approach is to scale down the data. Thus, determining a smallest sufficient training set size that obtains the same accuracy as the entire available dataset remains an important research question. Our research focuses on selecting how many (sampling) instances to present to the data mining algorithm and also how to improve the quality of the data.
Dr. Ashwin Satyanarayana is an Assistant Professor in the Computer Systems Technology department at CityTech. Prior to joining CityTech, Ashwin was a Research Scientist at Microsoft, where he worked on several Big Data problems including Query Reformulation on Microsoft's search engine Bing. Ashwin's prior experience also includes a Senior Research Scientist on the area of Location Analytics at Placed Inc. He holds a PhD in Computer Science (Data Mining) from SUNY, with particular emphasis on Data Mining, Machine Learning and Applied Probability with applications in Real World Learning Problems.
Utilizing additional information in factorization methods (research overview,... (Balázs Hidasi)
This presentation contains the main points of my recommender systems research. It describes the arc of my research, starting from improving matrix factorization, through the development of my context-aware algorithms and addressing scalability issues, to developing a general factorization framework and dealing with context dimension modeling. The slides were presented at the Delft University of Technology, where I was invited to give this introductory talk as part of the collaboration between participants of the CrowdRec project. The presentation was given on 11 April 2014.
A very high-level introduction to the fields of Data Science and Artificial Intelligence. Covers an introduction to Supervised Learning, Unsupervised Learning, Deep Learning, and Neural Networks. Given as part of the Industry Lectures event at GVP College of Engineering.
Why should airplane designs be unified?
Why we build a design system
The airplanes all serve different purposes... how do we design them?
Pages and patterns with different contexts
The layover is still far off... when do we do maintenance?
When to adopt a design system
We need to coordinate the repairs with the engineers... how do we fix it?
The process of adopting a design system
How do we announce that the airplane design has changed?
Propagating the design system
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Welcome to ViralQR, your best QR code generator. (ViralQR)
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
ViralQR offers a comprehensive suite of services that caters to your needs:
Static QR Codes: Create free static QR codes. These QR codes can store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR Codes: These have all the advanced features but are subscription-based. They can link directly to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, ViralQR offers a 14-day free trial, an excellent opportunity for new users to get a feel for the platform. From there, one can easily subscribe and experience the full range of dynamic QR codes. The subscription plans are priced flexibly so that virtually every business can afford to benefit from our service.
Why choose us?
ViralQR provides services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as substitute for cash and card payments in restaurants and coffee shops. With QR codes integrated into your business, you can improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools that give a clear view of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we offer nothing but the best QR code services to meet your business's diverse needs!
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Learning loss for active learning
1. Learning Loss for Active Learning
Donggeun Yoo (Lunit), In So Kweon (KAIST)
CVPR 2019 (Oral presentation)
2. Introduction
• Data is very important for deep learning
• It is unquestionable that more data still improves network performance [Mahajan et al., ECCV'18] (tens of millions to a billion images)
11. Active Learning: Limitations
• Heuristic approach
  • Highest entropy [Joshi et al., CVPR'09]
  • Distance to decision boundaries [Tong & Koller, JMLR'01]
  (−) Task-specific design
• Ensemble approach [Freund et al., ML'97], [Beluch et al., CVPR'18]
  (−) Does not scale to large CNNs and data
• Bayesian approach
  • Expected error [Roy & McCallum, ICML'01] / model [Kapoor et al., ICCV'07]
  • Bayesian inference by dropouts [Gal & Ghahramani, ICML'17]
  (−) Does not scale to large data and CNNs [Sener & Savarese, ICLR'18]
• Distribution approach
  • Density-based [Liu & Ferrari, ICCV'17], diversity-based [Sener & Savarese, ICLR'18]
  (−) Task-specific design
13. *Entropy
• An information-theoretic measure of the amount of information needed to "encode" a distribution
• Its use in active learning:
  • Dense prediction (0.33, 0.33, 0.33) → maximum entropy
  • Sparse prediction (1.00, 0.00, 0.00) → minimum entropy
(+) Very simple but works well (also in deep networks)
(−) Specific to the classification problem
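The entropy acquisition above can be sketched in a few lines of plain Python (not from the talk; the selection helper and its names are illustrative assumptions):

```python
import math

def entropy(probs, eps=1e-12):
    """Shannon entropy (in nats) of a predicted class distribution."""
    return -sum(p * math.log(p + eps) for p in probs)

def select_by_entropy(prob_rows, k):
    """Pick the k unlabeled samples whose predictions have the highest entropy."""
    ranked = sorted(range(len(prob_rows)), key=lambda i: -entropy(prob_rows[i]))
    return ranked[:k]

# Dense prediction -> maximum entropy; sparse prediction -> minimum
print(entropy([1/3, 1/3, 1/3]))   # ~1.0986 = ln 3 (maximum for 3 classes)
print(entropy([1.0, 0.0, 0.0]))   # ~0.0 (minimum)
```

Because the score is computed from the softmax output, this works for any classifier, but only for classifiers, which is exactly the task-specific limitation the slide notes.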
16. *Bayesian Inference
• Training
  • A dropout layer is inserted after every convolution layer
  (−) Super slow convergence → impractical for current deep nets
• Inference
  • N feed-forwards → N predictions
  • Uncertainty = variance between the predictions
  (−) Computationally expensive
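A toy NumPy simulation of the MC-dropout idea (illustrative only; the one-layer "network" and its random weights are assumptions, not the talk's model):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 3))                     # toy one-layer "network"

def mc_dropout_uncertainty(x, n_forward=50, p_drop=0.5):
    """Keep dropout ON at inference: N stochastic feed-forwards give
    N predictions; uncertainty = variance between those predictions.
    (-) cost: N full forward passes per unlabeled sample."""
    preds = []
    for _ in range(n_forward):
        mask = rng.random(x.shape) > p_drop      # Bernoulli dropout mask
        preds.append((x * mask / (1.0 - p_drop)) @ W)
    preds = np.stack(preds)                      # shape (n_forward, 3)
    return preds.mean(axis=0), float(preds.var(axis=0).mean())

x = rng.normal(size=10)
mean_pred, uncertainty = mc_dropout_uncertainty(x)
```

The per-sample cost of N forward passes is what makes this approach hard to scale to large CNNs and data.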
21. *Diversity: Core-set
(+) Can be task-agnostic, as it depends only on the feature space
(−) Does not consider "hard" examples near the decision boundaries
(−) Expensive optimization for a large pool
22. Active Learning: Limitations
• Heuristic approach
  • Highest entropy [Joshi et al., CVPR'09]
  • Distance to decision boundaries [Tong & Koller, JMLR'01]
  (−) Task-specific design
• Ensemble approach [Freund et al., ML'97], [Beluch et al., CVPR'18]
  (−) Does not scale to large CNNs and data
• Bayesian approach
  • Expected error [Roy & McCallum, ICML'01] / model [Kapoor et al., ICCV'07]
  • Bayesian inference by dropouts [Gal & Ghahramani, ICML'17]
  (−) Does not scale to large CNNs and data [Sener & Savarese, ICLR'18]
• Distribution approach
  • Density-based [Liu & Ferrari, ICCV'17], diversity-based [Sener & Savarese, ICLR'18]
  (−) Does not consider hard examples
23. Active Learning: Our approach
• Active learning by learning loss
  • Attach a "loss prediction module" to a target network
  • Learn the module to predict the loss
[Diagram: predicted losses are computed for the unlabeled pool; human oracles annotate the top-K data points, which are added to the labeled training set.]
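The selection step of each cycle can be sketched as follows (a minimal illustration; the function name and sample IDs are hypothetical):

```python
def select_top_k(predicted_losses, unlabeled_ids, k):
    """One active-learning cycle: rank the unlabeled pool by the loss
    predicted by the module, and hand the top-K data points to human
    oracles for annotation (they then join the labeled training set)."""
    ranked = sorted(zip(predicted_losses, unlabeled_ids), reverse=True)
    return [idx for _, idx in ranked[:k]]

# e.g. with hypothetical predicted losses for a 5-sample pool:
print(select_top_k([0.2, 1.5, 0.1, 0.9, 0.4], ["a", "b", "c", "d", "e"], 2))
# -> ['b', 'd']
```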
24. Active Learning: Our approach
• Requirements
  • Task-agnostic method
  • Learning-based, not heuristic
  • Scalable to state-of-the-art networks and large data
25-28. Active Learning by Learning Loss
[Diagram: an input x feeds the target model, which outputs the target prediction ŷ; a loss prediction module attached to the model outputs the loss prediction l̂. Comparing ŷ with the target GT y gives the target loss l, and comparing l̂ with l gives the loss-prediction loss L_loss(l̂, l). The two losses are trained jointly: multi-task learning.]
(+) Applicable to
  • any network and data
  • any task
(+) Nearly zero cost
29-30. Active Learning by Learning Loss
• The loss for loss prediction, $L_\text{loss}(\hat{l}, l)$
• Mean squared error?
  $L_\text{loss}(\hat{l}, l) = (\hat{l} - l)^2$
  → the target task loss $l$ is reduced as training progresses, so its scale changes
31-32. Active Learning by Learning Loss
• The loss for loss prediction, $L_\text{loss}(\hat{l}, l)$
• To ignore the scale changes of $l$, we use a ranking loss:
$$L_\text{loss}(\hat{l}_i, \hat{l}_j, l_i, l_j) = \max\big(0,\; -\mathbb{1}(l_i, l_j)\cdot(\hat{l}_i - \hat{l}_j) + \xi\big)$$
where $(\hat{l}_i, \hat{l}_j)$ is a pair of predicted losses, $(l_i, l_j)$ is a pair of real losses, $\xi$ is the margin (= 1), and
$$\mathbb{1}(l_i, l_j) = \begin{cases} +1, & \text{if } l_i > l_j \\ -1, & \text{otherwise} \end{cases}$$
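A minimal plain-Python sketch of this pairwise ranking loss (not from the talk; variable names are illustrative):

```python
def loss_pred_ranking_loss(lhat_i, lhat_j, l_i, l_j, margin=1.0):
    """Pairwise ranking loss: penalize the loss predictor when the
    ordering of the predicted pair (lhat_i, lhat_j) disagrees with the
    ordering of the real pair (l_i, l_j) by less than the margin xi."""
    sign = 1.0 if l_i > l_j else -1.0          # the indicator 1(l_i, l_j)
    return max(0.0, -sign * (lhat_i - lhat_j) + margin)

# Correct ordering with a gap >= margin -> zero loss
assert loss_pred_ranking_loss(2.5, 1.0, 0.9, 0.1) == 0.0
# Wrong ordering -> positive loss
assert loss_pred_ranking_loss(1.0, 2.5, 0.9, 0.1) == 2.5
```

Only the relative order of the two predictions matters, which is why the loss is unaffected by the shrinking scale of $l$ during training.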
33. Active Learning by Learning Loss
• Given a mini-batch $B$, the total loss is defined as
$$\frac{1}{|B|}\sum_{(x,y)\in B} L_\text{task}(\hat{y}, y) \;+\; \lambda\cdot\frac{1}{|B|}\sum_{(x_i,y_i,x_j,y_j)\in B} L_\text{loss}(\hat{l}_i, \hat{l}_j, l_i, l_j)$$
where $l_i = L_\text{task}(\hat{y}_i, y_i)$; the first term is the target task loss and the second is the loss-prediction loss, summed over pairs $(i, j)$ within the mini-batch $B$.
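A plain-Python sketch of the total mini-batch loss (the half-batch pairing scheme below is an assumption; the slide only says pairs (i, j) within B):

```python
def total_batch_loss(task_losses, predicted_losses, lam=1.0, margin=1.0):
    """Mean target-task loss plus lambda times the mean pairwise
    loss-prediction (ranking) loss over a mini-batch B. Pairing the
    first half of the batch against the second half is an assumed
    pairing scheme, used here only for illustration."""
    n = len(task_losses)
    task_term = sum(task_losses) / n             # (1/|B|) sum of L_task
    half = n // 2
    rank_terms = []
    for i in range(half):
        li, lj = task_losses[i], task_losses[half + i]
        hi, hj = predicted_losses[i], predicted_losses[half + i]
        sign = 1.0 if li > lj else -1.0          # the indicator 1(l_i, l_j)
        rank_terms.append(max(0.0, -sign * (hi - hj) + margin))
    rank_term = sum(rank_terms) / len(rank_terms)
    return task_term + lam * rank_term
```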
34-35. Active Learning by Learning Loss
• MSE loss vs. ranking loss
[Chart: active-learning curves with the MSE loss vs. the ranking loss; ResNet-18 on CIFAR-10.]
36-38. Active Learning by Learning Loss
• Loss prediction module
[Diagram: the target model consists of several mid-blocks followed by an out-block that yields the target prediction. The convolved features of each mid-block are concatenated and fed to an FC layer that outputs the loss prediction; the loss-prediction loss backpropagates into the convolutions, so the blocks already provide enough convolutions.]
39-40. Active Learning by Learning Loss
• Loss prediction module: enough convolutions
  • The convolutions are learned by the loss-prediction loss as well as the target loss
  • The receptive field size is already sufficiently large
  → We don't need more convolutions; we just focus on merging the multiple features
41. Active Learning by Learning Loss
• Loss prediction module
[Diagram: each mid-block feature of the target model passes through GAP → FC → ReLU; the results are concatenated and a final FC layer outputs the loss prediction.]
(+) Very efficient, as GAP reduces the feature dimension
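A NumPy sketch of this GAP → FC → ReLU design (random placeholder weights; the feature shapes follow the CIFAR-10 / ResNet-18 example given later in the deck):

```python
import numpy as np

rng = np.random.default_rng(0)

def gap(feature_map):
    """Global average pooling: (C, H, W) -> (C,)."""
    return feature_map.mean(axis=(1, 2))

def loss_prediction_module(features, fc_weights, out_w):
    """Each mid-block feature map goes through GAP -> FC -> ReLU; the
    branch outputs are concatenated and a final FC yields the scalar
    predicted loss. All weights here are random placeholders."""
    branches = []
    for f, w in zip(features, fc_weights):
        branches.append(np.maximum(0.0, gap(f) @ w))   # GAP -> FC -> ReLU
    concat = np.concatenate(branches)
    return float(concat @ out_w)                        # final FC

# Feature shapes from the ResNet-18 / CIFAR-10 slide
shapes = [(64, 32, 32), (128, 16, 16), (256, 8, 8), (512, 4, 4)]
features = [rng.normal(size=s) for s in shapes]
fc_weights = [rng.normal(size=(s[0], 128)) * 0.01 for s in shapes]
out_w = rng.normal(size=512) * 0.01                     # concat dim = 4 * 128
pred_loss = loss_prediction_module(features, fc_weights, out_w)
```

GAP collapses each H×W map to one number per channel, which is why the added FC layers stay tiny regardless of the target network's spatial resolution.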
42. Active Learning by Learning Loss
• Loss prediction module
[Diagram: an alternative design that adds Conv → BN → ReLU layers before each GAP → FC → ReLU branch of the target model, then concatenates the branches and applies a final FC for the loss prediction.]
43. Active Learning by Learning Loss
• Loss prediction module: more convolutions vs. just FC
[Chart: comparison of the two designs; ResNet-18 on CIFAR-10.]
45-47. Experiments (1)
• To validate "task-agnostic" + "state-of-the-art architectures"
  • Classification: image classification; Data: CIFAR-10; Net: ResNet-18 [He et al., CVPR'16]
  • Classification + regression: object detection; Data: PASCAL VOC 2007+2012; Net: SSD [Liu et al., ECCV'16]
  • Regression: human pose estimation; Data: MPII; Net: Stacked Hourglass Networks [Newell et al., ECCV'16]
48. Results
• Image classification over CIFAR-10
[Diagram: loss prediction module attached to ResNet-18 [He et al., CVPR'16]; features of size 64×32×32, 128×16×16, 256×8×8, and 512×4×4 each pass through GAP → FC (128) → ReLU, are concatenated (512), and a final FC outputs the loss prediction.]
52. Results
• Image classification over CIFAR-10 (mean of 5 trials)
[Plot: performance vs. number of labeled images; ours vs. entropy [Joshi, CVPR'09] and core-set [Sener et al., ICLR'18]; ours gains +3.37%.]
Data selection vs. architecture:
• Data selection by active learning → +3.37%
• DenseNet-121 [Huang et al.] − ResNet-18 → +2.02%
53. Results
• Object detection
[Diagram: loss prediction module attached to SSD (ImageNet pre-trained) [Liu et al., ECCV'16]; six features of size 512×38×38, 1024×19×19, 512×10×10, 256×5×5, 256×3×3, and 256×1×1 each pass through GAP → FC (128) → ReLU, are concatenated (768), and a final FC outputs the loss prediction.]
57. Results
• Object detection on PASCAL VOC 07+12 (mean of 3 trials)
[Plot: performance vs. number of labeled images; ours vs. entropy [Joshi, CVPR'09] and core-set [Sener et al., ICLR'18]; ours gains +2.21%.]
Data selection vs. architecture:
• Data selection by active learning → +2.21%
• YOLOv2 [Redmon et al.] − SSD → +1.80%
58. Results
• Human pose estimation over the MPII dataset
[Diagram: loss prediction module attached to the Stacked Hourglass Network [Newell et al., ECCV'16]; four 256×64×64 features from an hourglass each pass through GAP → FC (128) → ReLU, are concatenated, and a final FC outputs the loss prediction.]
60-62. Results
• Human pose estimation over the MPII dataset (mean of 3 trials)
[Plot: performance vs. number of labeled images; ours vs. entropy [Joshi, CVPR'09] and core-set [Sener et al., ICLR'18]; ours gains +1.84%.]
Data selection vs. number of stacks:
• Data selection by active learning → +1.84%
• 8-stacked − 2-stacked → +0.25%
64. Experiments (2)
• To validate "active domain adaptation"
• Source domain: MNIST (#train: 60k, #test: 10k); the 60k training images serve as the initial labeled pool
• Target domain: MNIST + background (#train: 12k, #test: 50k); 1k samples are added per cycle
65. Results
• Image classification over MNIST
[Diagram: loss prediction module attached to the PyTorch MNIST model* (Conv → ReLU → Conv → ReLU → FC → ReLU → FC); features of size 10×12×12, 20×4×4, and 50 each pass through GAP → FC (64) → ReLU, are concatenated (192), and a final FC outputs the loss prediction.]
*https://github.com/pytorch/examples/tree/master/mnist
67-69. Results
• Domain adaptation from MNIST to MNIST+background
• Target domain performance
[Plot: ours vs. entropy [Joshi, CVPR'09] and core-set [Sener et al., ICLR'18]; annotation: the feature space is overfitted to the source domain; ours gains +1.20%.]
Data selection vs. architecture:
• Data selection by active learning → +1.20%
• WideResNet-14 − PyTorch MNIST (4 layers) → +2.85%
70. Conclusion
• Introduced a novel active learning method that
  • works well with current deep networks
  • is task-agnostic
• Verified on
  • three major visual recognition tasks
  • three popular network architectures
"Pick more important data, and get better performance!"