Presenter: Jaehong Yoon (Ph.D. student, KAIST)
Date: July 2018
We propose a novel deep network architecture for lifelong learning, which we refer to as the Dynamically Expandable Network (DEN), that can dynamically decide its network capacity as it trains on a sequence of tasks, to learn a compact, overlapping knowledge-sharing structure among tasks. DEN is efficiently trained in an online manner by performing selective retraining, dynamically expands network capacity upon arrival of each task with only the necessary number of units, and effectively prevents semantic drift by splitting/duplicating units and timestamping them. We validate DEN on multiple public datasets under lifelong learning scenarios, on which it not only significantly outperforms existing lifelong learning methods for deep networks, but also achieves the same level of performance as its batch counterparts with substantially fewer parameters. Further, the network fine-tuned on all tasks achieved significantly better performance than the batch models, which shows that it can be used to estimate the optimal network structure even when all tasks are available in the first place.
2. Introduction
Jaehong Yoon - Education
Korea Advanced Institute of Science and Technology (KAIST)
• Ph.D. in School of Computing (Aug. 2018 – ), Advisor: Prof. Sung Ju Hwang
Ulsan National Institute of Science and Technology (UNIST)
• M.S. in Computer Engineering (Aug. 2016 – Feb. 2018), Advisor: Prof. Sung Ju Hwang
• B.S. in Computer Science Engineering (Mar. 2012 – Aug. 2016), Biological Science Minor
3. Introduction
- Publications
Juho Lee, S. Kim, J. Yoon, H. B. Lee, E. Yang, and S. J. Hwang, "Adaptive Network Sparsification via Dependent Variational Beta-Bernoulli Dropout", arXiv preprint arXiv:1805.10896 (2018).
Jaehong Yoon, E. Yang, J. Lee, and S. J. Hwang, "Lifelong Learning with Dynamically Expandable Networks", International Conference on Learning Representations (ICLR), 2018.
Jaehong Yoon and S. J. Hwang, "Combined Group and Exclusive Sparsity for Deep Neural Networks", International Conference on Machine Learning (ICML), 2017.
- Experience
Korea Advanced Institute of Science and Technology (KAIST)
• Contract Research Scientist (Feb. 2018 – Aug. 2018)
AItrics
• Research Intern (Mar. 2018 – May 2018)
4. Challenge: Incomplete, Growing Dataset
In many large-scale learning scenarios, not all training data might be available when we want to begin training the network.
[Figure: ImageNet class hierarchy (22,000 classes): Car → Convertible, Sports car, Sedan, Roadster]
5. Challenge: Incomplete, Growing Dataset
In many large-scale learning scenarios, not all training data might be available when we want to begin training the network.
[Figure: the hierarchy grown to 1M classes: Car → Convertible, Sports car, Sedan, Roadster, refined into models such as BMW Z4, Ferrari 458 Spider, Ferrari 458 Italia, Porsche 911 Turbo, Hyundai Sonata, BMW 3 Series]
6. Challenge: Incomplete, Growing Dataset
Even worse, the set of tasks may dynamically grow as new tasks are introduced.
[Figure: the 1M-class hierarchy with newly introduced classes such as 2015 Mustang Convertible and Tesla Model S joining BMW Z4, Ferrari 458 Spider, Ferrari 458 Italia, Porsche 911 Turbo, Hyundai Sonata, and BMW 3 Series]
7. Solution: Lifelong Learning
Humans learn forever throughout their lives - couldn't we build a similar system that basically learns forever while becoming increasingly smarter over time?
We integrate our model into a lifelong learning framework that continuously learns by actively discovering new categories and learning them in the context of known ones.
1) Tasks are received in a sequential order
2) Knowledge is transferred from previously learned tasks
3) New knowledge is stored for future use
4) Existing knowledge is refined
8. Lifelong Learning of a Deep Neural Network
However, if the classes we had in the early stages of learning differ significantly from the new class, utilizing prior knowledge may degrade performance.
[Figure: network with layer weights W(1), W(2); a new class arrives at task t+1]
9. Semantic Drift
Introduction of new units can also result in semantic drift, or catastrophic forgetting, where the original meaning of the features changes as they fit to later tasks.
[Figure: network with weights W(1), W(2) whose features shift as a new class is added]
10. Network Expansion
To learn new tasks which are relatively different from the early stages of learning, the model may need to expand its network capacity.
[Figure: network expanded with k new hidden units (fixed) per layer when a new class arrives at task t+1]
11. Dynamically Expandable Network (DEN)
To prevent this, we propose a novel deep network that can selectively utilize prior knowledge for each task while dynamically expanding its capacity when necessary.
[Figure: network that adds new hidden units only as needed when a new class arrives at task t+1]
12. Dynamically Expandable Network (DEN)
Existing models simply retrain the network for the new task, or expand the network with a fixed number of neurons without retraining:
• Elastic Weight Consolidation [Kirkpatrick et al. 16]
• Progressive Network [Rusu et al. 16]
• Dynamically Expandable Network [Ours]
Our Dynamically Expandable Network, on the other hand, partially retrains the existing network and adds in only the necessary number of neurons.
13. Incremental Training of a DEN
A DEN is trained in three steps: selective retraining, dynamic network expansion, and network split/duplication.
We first identify and retrain only the parameters relevant to task t. If the loss is still high, we expand each layer by k neurons, with group sparsity to drop the unnecessary ones.
We further prevent semantic drift by splitting/duplicating units that have significantly changed in their meanings after learning each task t, and by timestamping units.
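The three-step procedure above can be sketched as a per-task training loop. This is a minimal illustration: the helper names (selective_retrain, expand_and_prune, split_and_duplicate) and the placeholder loss values are hypothetical stand-ins that mirror the control flow, not the paper's implementation.

```python
# Illustrative sketch of DEN's per-task loop; helpers and losses are placeholders.

def selective_retrain(task):
    # step 1 stand-in: retrain only the subnetwork relevant to this task
    return task["loss_after_retrain"]

def expand_and_prune(task):
    # step 2 stand-in: add k units per layer, then drop the useless ones
    return task["loss_after_expand"]

def split_and_duplicate(task):
    # step 3 stand-in: duplicate units whose meaning drifted, restore originals
    pass

def train_den(tasks, tau=0.5):
    """For each incoming task: selectively retrain; if the loss is still
    above the threshold tau, expand capacity; finally split drifted units."""
    losses = []
    for task in tasks:
        loss = selective_retrain(task)
        if loss > tau:                 # capacity is insufficient for this task
            loss = expand_and_prune(task)
        split_and_duplicate(task)
        losses.append(loss)
    return losses
```

The point of the structure is that expansion is conditional: a task similar to earlier ones never triggers it, so capacity grows only when needed.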
14. Incremental Training of a DEN
1. Selective Retraining
When the model learns a new task, the network finds the relevant neurons and retrains only them.
• Initially, train the network with ℓ1-regularization to promote sparsity in the weights.
• Fit a sparse linear model to predict task t using the topmost hidden units of the neural network:
$\min_{\mathbf{W}^{t}_{L,t}} \mathcal{L}(\mathbf{W}^{t}_{L,t};\, \mathbf{W}^{t-1}_{1:L-1},\, \mathcal{D}_t) + \mu \|\mathbf{W}^{t}_{L,t}\|_1$
• Perform breadth-first search on the network starting from the selected nodes, then retrain only the selected subnetwork S:
$\min_{\mathbf{W}^{t}_{S}} \mathcal{L}(\mathbf{W}^{t}_{S};\, \mathbf{W}^{t-1}_{S^c},\, \mathcal{D}_t) + \mu \|\mathbf{W}^{t}_{S}\|_2$
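The selection step can be sketched as follows, assuming weights are stored as a list of (lower units × upper units) matrices. select_subnetwork and its shapes are hypothetical, but it follows the idea above: start from the top hidden units with nonzero task-t output weights and walk down through nonzero connections.

```python
import numpy as np

def select_subnetwork(weights, out_w, eps=1e-8):
    """weights[l] has shape (units in layer l, units in layer l+1);
    out_w holds the sparse task-t output weights over the top hidden layer.
    Returns one boolean mask per layer marking the units to retrain."""
    selected = [np.abs(out_w) > eps]          # top hidden units feeding task t
    for W in reversed(weights):               # breadth-first pass, top to bottom
        upper = selected[0]
        # a lower unit is relevant if it has a nonzero weight to any selected upper unit
        lower = (np.abs(W[:, upper]) > eps).any(axis=1)
        selected.insert(0, lower)
    return selected
```

Units outside the masks keep their old weights, which is what makes the retraining cheap and drift-free for unrelated features.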
15. Incremental Training of a DEN
2. Dynamic Network Expansion
When the loss is higher than a threshold τ, expand each layer by a constant k neurons, and remove the useless ones among them.
• Perform group sparsity regularization on the added parameters:
$\min_{\mathbf{W}^{N}_{l}} \mathcal{L}(\mathbf{W}^{N}_{l};\, \mathbf{W}^{t-1}_{l},\, \mathcal{D}_t) + \lambda \sum_{g} \|\mathbf{W}^{N}_{l,g}\|_2$
where $g \in G$ is a group defined on the incoming weights for each neuron.
• The model captures new features that were not previously represented by $\mathbf{W}^{t-1}_{l}$.
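Expansion-then-pruning can be sketched in a few lines. expand_layer is a hypothetical helper that assumes the k candidate units have already been trained under the group-sparsity penalty, so useless units show up as all-zero incoming columns.

```python
import numpy as np

def expand_layer(W, W_new, eps=1e-8):
    """W: existing (in, out) weights of a layer; W_new: (in, k) incoming
    weights of the k candidate units after group-sparsity training.
    Keeps only candidates whose incoming group was not driven to zero."""
    keep = np.linalg.norm(W_new, axis=0) > eps   # group = one unit's incoming column
    return np.concatenate([W, W_new[:, keep]], axis=1), int(keep.sum())
```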
16. Group Sparsity Regularization
$\Omega(\mathbf{W}^{l}) = \sum_{g} \|\mathbf{W}^{l}_{g}\|_2$
The groups are defined on the layer l−1 to layer l connections: each group g collects the incoming weights of one neuron in layer l.
The (2,1)-norm, which is the 1-norm over 2-norms of the groups, promotes feature sharing and results in complete elimination of the features that are not shared.
[Wen16] Wen, Wei, et al. "Learning Structured Sparsity in Deep Neural Networks." Advances in Neural Information Processing Systems, 2016.
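Written out in code, the regularizer is just a sum of column norms, one column per neuron's incoming weights; this sketch assumes the groups are the columns of a dense weight matrix.

```python
import numpy as np

def group_sparsity(W):
    """(2,1)-norm of W: sum over groups g of ||W_g||_2, where each group
    is the column of incoming weights for one neuron in the layer."""
    return float(np.linalg.norm(W, axis=0).sum())
```

Because the penalty is non-smooth at zero for a whole group at once, optimizing it can zero out entire columns, i.e. remove whole neurons, rather than individual weights.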
17. Incremental Training of a DEN
3. Network Split / Duplication
After step 2, we retrain the network with ℓ2-regularization toward the previous-step weights:
$\min_{\mathbf{W}^{t}} \mathcal{L}(\mathbf{W}^{t};\, \mathcal{D}_t) + \lambda \|\mathbf{W}^{t} - \mathbf{W}^{t-1}\|^2_2$
• Measure the amount of semantic drift $\rho^{t}_{i}$ for each hidden unit i; if $\rho^{t}_{i} > \sigma$, split and duplicate that neuron, restoring the original to its previous-step value.
• After the duplication, retrain the network, since the split changes the overall structure.
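The split/duplication step can be sketched as follows. split_units is a hypothetical helper in which a unit's incoming weights form a column and ρ is the ℓ2 distance between its pre- and post-training values.

```python
import numpy as np

def split_units(W_prev, W_curr, sigma):
    """Columns are units' incoming weights. Units whose drift rho exceeds
    sigma are duplicated: the original is restored to its previous-step
    value and the drifted (task-t) version is appended as a new unit."""
    rho = np.linalg.norm(W_curr - W_prev, axis=0)   # per-unit drift
    drifted = rho > sigma
    copies = W_curr[:, drifted]                     # keep the new meaning as a copy
    W_out = W_curr.copy()
    W_out[:, drifted] = W_prev[:, drifted]          # restore the old meaning
    return np.concatenate([W_out, copies], axis=1)
```

This way the old tasks keep the feature they were trained on, while the new task gets its own drifted version.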
18. Incremental Training of a DEN
We timestamp each newly added unit to record the stage t at which it was added to the network, to further prevent drift caused by the introduction of new hidden units.
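Timestamped inference then reduces to masking: a task-t example is routed only through units added at stage ≤ t, so units added for later tasks cannot change earlier predictions. A one-line sketch (the integer timestamp encoding is a hypothetical illustration):

```python
import numpy as np

def active_mask(timestamps, t):
    """Boolean mask of the units a task-t example may use: only units
    whose recorded stage is no later than t."""
    return np.asarray(timestamps) <= t
```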
19. Datasets and Networks
We validate our method on four public datasets for classification, with various networks:
• CIFAR-100: 100 animal and vehicle classes; used a modified version of AlexNet
• MNIST-Variation: modified MNIST dataset including perturbations; used LeNet-4 (2 conv., 2 fc. layers)
• Permuted-MNIST: different random permutations of the input pixels; used LeNet-4
• AwA: 50 animal classes; used a feedforward network
20. Baselines
We compare our network against relevant baselines:
• STL: single-task learning, a separate model per task
• MTL: multi-task learning, a single model trained on all tasks jointly
• L2: retraining with ℓ2-regularization toward the previous weights:
$\min_{\mathbf{W}^{t}} \mathcal{L}(\mathbf{W}^{t};\, \mathcal{D}_t) + \lambda \|\mathbf{W}^{t} - \mathbf{W}^{t-1}\|^2_2$
• EWC: Kirkpatrick, James, et al. "Overcoming Catastrophic Forgetting in Neural Networks." Proceedings of the National Academy of Sciences 114.13 (2017): 3521-3526.
• Progressive Networks: Rusu, Andrei A., et al. "Progressive Neural Networks." arXiv preprint arXiv:1606.04671 (2016).
• DEN (ours)
21. Results
Incremental training with DEN yields a much smaller network that performs almost the same as the networks trained in batch.
Further fine-tuning of DEN on all tasks obtains the best performance, which shows that DEN is also useful for network capacity estimation.
22. Results
DEN maintains the performance obtained on previous tasks while allowing larger performance improvements on later tasks.
Also, timestamped inference is highly effective in preventing semantic drift.
23. Results
Selective retraining takes significantly less time than full retraining of the network, while achieving much higher AUROC.
DNN-Selective mostly selects a small portion of the upper-level units, which are more task-specific, while selecting a larger portion of the more generic lower-layer units.
24. Results
We also evaluate variants of our model that perform selective retraining and layer expansion but no network split, on the MNIST-Variation dataset.
DEN-Dynamic even outperforms DEN-Constant at similar capacity, since the model can dynamically adjust the number of neurons at each layer.
25. Results
On Permuted-MNIST, our DEN outperforms all lifelong learning baselines while using only 1.39 times the base network capacity.
Further, DEN-Finetune achieves the best AUROC among all models, including DNN-STL and DNN-MTL.
26. Conclusion
• We proposed a novel deep neural network for lifelong learning, the Dynamically Expandable Network (DEN).
• DEN performs partial retraining of the network trained on old tasks while increasing its capacity when necessary.
• DEN significantly outperforms existing lifelong learning methods, achieving almost the same performance as a network trained in batch.
• Further fine-tuning of the models on all tasks yields models that outperform the batch models, which shows that DEN is useful for network structure estimation as well.