Slides of my presentation at the Self-supervised Learning for Next-Generation Industry-level Autonomous Driving workshop at ICCV 2021, given as the first-prize talk of the competition on continual object recognition.
Continual Learning is one of the most promising research areas to shift machine learning from solving a single task to something more similar to general intelligence.
Machine learning (and especially deep neural network research) has shown outstanding results in the past 10 years, bringing us to the deep learning era, where learning models are everywhere and interact with many aspects of our lives.
However, machine learning has an enormous issue that completely differentiates it from biological learning: machines cannot learn continuously.
This is the so-called catastrophic forgetting problem, and continual learning is trying to address it, making artificial intelligence able to continually learn for the entire duration of its "life".
1. Report of the Continual Learning Supervised Classification challenge (Track 3A)
Gabriele Graffieti, Guido Borghi, Davide Maltoni, Matteo Ferrara
{name.surname}@unibo.it
Department of Computer Science and Engineering,
University of Bologna
Italy
ICCV 2021 Workshop: Self-supervised Learning for Next-Generation Industry-level Autonomous Driving
2. The Team
Gabriele Graffieti
Ph.D. student
Guido Borghi
Assistant prof.
Davide Maltoni
Full prof.
Matteo Ferrara
Associate prof.
3. Model and Hyperparameters
- ResNet-50 [1] pretrained on ImageNet [2].
  • Last layer substituted with a fully-connected layer with 7 output neurons + biases.
- Stochastic Gradient Descent (SGD) optimizer.
  • Learning rate = 10⁻², weight decay and momentum = 0.
- Weighted Cross Entropy (CE) loss: L(y, l) = −α_l · log( exp(y_l) / Σ_j exp(y_j) ), with α_0 = 0 and α_{1,…,6} = 1.
- Batch size = 10.
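A minimal PyTorch sketch of this configuration, assuming the torchvision ResNet-50 constructor; the 7-output head, the SGD settings, and the per-class loss weights (α_0 = 0, α_{1,…,6} = 1) follow the slide above, while the variable names are illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-50 pretrained on ImageNet, last layer replaced by a 7-way fully-connected head (with bias).
model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 7, bias=True)

# SGD with learning rate 1e-2, no weight decay, no momentum.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.0, weight_decay=0.0)

# Weighted cross entropy: class 0 has weight 0, classes 1..6 have weight 1.
class_weights = torch.tensor([0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
criterion = nn.CrossEntropyLoss(weight=class_weights)
```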
4. Training Procedure
Every experience
- Tensors resized from 64 × 64 → 224 × 224 to resemble the original training image size.
  • Using nearest-neighbor interpolation.
  • Even though the patches are now 12× larger, the network's filters respond better after the resize.
- On-the-fly data augmentation, flipping all the images horizontally.
  • Both the original and the flipped patches are fed to the network.
Only in the first experience
- The current batch is temporarily put inside the memory and passed twice through the model.
  • Loss and optimization computed after each single batch.
  • This boosts the learning of the model in the first experience (especially for underrepresented classes).
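A rough sketch of the per-batch preprocessing described above, assuming the patches arrive as a float tensor of shape (N, 3, 64, 64); the function name is illustrative:

```python
import torch
import torch.nn.functional as F

def preprocess(batch):
    """(N, 3, 64, 64) float tensor -> (2N, 3, 224, 224): resized originals + flipped copies."""
    # Nearest-neighbor upsampling so the patches match the original training image size.
    resized = F.interpolate(batch, size=(224, 224), mode="nearest")
    # Horizontal flip (width dimension); both versions are fed to the network.
    flipped = torch.flip(resized, dims=[3])
    return torch.cat([resized, flipped], dim=0)
```

The label tensor would be duplicated accordingly, since each flipped patch keeps the class of its original.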
5. Classification Head Protection
Motivation
- “Learning in isolation” problem when few classes are present in an experience or some classes are underrepresented.
- Forgetting of underrepresented classes, especially in the classification head.
Solution
- We use the CWR algorithm [3] to control forgetting in the classification head.
- Two sets of weights (of the head) are maintained (7k more weights than ResNet-50, within the 105% limit):
  • cw: weights from the previous experiences, used in the consolidation phase.
  • tw: weights used to train the model in the current experience, initialized to 0, with only the weights of the classes in the current experience loaded from cw.
- We do not freeze the feature extractor, as proposed in [4].
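A simplified sketch of the head-protection mechanism: two copies of the head weights, with only the rows of the classes present in the current experience loaded from the consolidated copy before training and written back afterwards. This only illustrates the idea sketched above and is not the exact CWR procedure of [3]; all names are hypothetical:

```python
import torch

# Consolidated copy of the head, e.g. cw = {"weight": torch.zeros(7, 2048), "bias": torch.zeros(7)}.

def start_experience(head, cw, classes_in_exp):
    """Initialize the training weights (tw) of the head for the classes seen in this experience."""
    with torch.no_grad():
        tw_weight = torch.zeros_like(head.weight)                  # tw starts from 0 ...
        tw_bias = torch.zeros_like(head.bias)
        tw_weight[classes_in_exp] = cw["weight"][classes_in_exp]   # ... except the rows of the
        tw_bias[classes_in_exp] = cw["bias"][classes_in_exp]       # current classes, loaded from cw.
        head.weight.copy_(tw_weight)
        head.bias.copy_(tw_bias)

def consolidate(head, cw, classes_in_exp):
    """After training on the experience, copy the updated rows back into cw."""
    with torch.no_grad():
        cw["weight"][classes_in_exp] = head.weight[classes_in_exp]
        cw["bias"][classes_in_exp] = head.bias[classes_in_exp]
```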
6. Replay Memory
- We divide the memory into 6 buffers (one per class) of 100 samples each (600 samples in total).
  • It is important to limit the number of samples per class in memory, due to the class imbalance in the data.
  • The goal here is to have a memory balanced per class.
  • The buffer size is bounded by the size of the least represented class (tricycle, 82 samples).
  • We empirically found that 100 samples per class is a good compromise.
- Each batch is composed of 5 samples from the current experience and 5 samples drawn randomly from memory.
  • The sampling is without replacement.
  • Once all the patterns in memory have been sampled, the sampling starts again.
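A sketch of how such a class-balanced replay batch could be assembled, with one buffer per class and memory sampling without replacement that restarts once all stored patterns have been used; the class ids and names are illustrative:

```python
import random

class BalancedReplayMemory:
    def __init__(self, num_classes=6, per_class=100):
        # One fixed-size buffer per class (classes 1..6), 100 slots each -> 600 samples in total.
        self.buffers = {c: [] for c in range(1, num_classes + 1)}
        self.per_class = per_class
        self._queue = []  # shuffled view of the memory, consumed without replacement

    def draw(self, k=5):
        """Draw k stored samples without replacement; reshuffle once the memory is exhausted."""
        out = []
        while len(out) < k:
            if not self._queue:
                self._queue = [s for buf in self.buffers.values() for s in buf]
                random.shuffle(self._queue)
                if not self._queue:   # memory still empty (e.g. very first experience)
                    return out
            out.append(self._queue.pop())
        return out

# A training mini-batch of 10 = 5 samples from the current experience + 5 replayed samples:
# batch = current_samples[:5] + memory.draw(5)
```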
7. Memory Management
- We use Reservoir Sampling [5] to insert patterns into memory with equal probability.
- Memory should not be altered while it is being used!
  • We use the 400 remaining slots to store the patterns we want to insert from the current experience.
  • We want to insert at most 100/i patterns per class from the i-th experience, so the maximum memory allocation is 900 patterns (600 from the replay memory + 50 per class to insert in the 2nd experience).
  • We randomly insert the new patterns into the memory at the start of a new experience.
[Figure: memory layout, showing the currently used memory and the samples to be inserted]
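For completeness, a per-class sketch of reservoir sampling [5], which keeps every pattern seen so far in the fixed-size buffer with equal probability; the staging of candidates in the 400 spare slots before they are merged at the start of the next experience is omitted here:

```python
import random

def reservoir_insert(buffer, item, seen_count, capacity=100):
    """Offer `item` to a fixed-size class buffer. After n items have been seen, each of them
    has the same probability (capacity / n) of being in the buffer. Returns the updated count."""
    seen_count += 1
    if len(buffer) < capacity:
        buffer.append(item)
    else:
        j = random.randint(0, seen_count - 1)   # uniform index over all items seen so far
        if j < capacity:
            buffer[j] = item                    # replace a random slot with the new item
    return seen_count
```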
8. Contribution of the Components
Component Contribution
Model **
Learning rate *
Optimizer ***
Loss weights *
Image resizing ***
Data augmentation **
Double batch (1st exp.) *
CWR **
Not freezing the feature extractor ***
Replay memory ***
Balanced memory ***
Reservoir sampling **
9. Bibliography
[1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
[2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, 2009, pp. 248–255.
[3] D. Maltoni and V. Lomonaco, “Continuous learning in single-incremental-task scenarios,” Neural Networks, vol. 116, pp. 56–73, 2019.
[4] L. Pellegrini, G. Graffieti, V. Lomonaco, and D. Maltoni, “Latent replay for real-time continual learning,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2020, pp. 10203–10209.
[5] J. S. Vitter, “Random sampling with a reservoir,” ACM Trans. Math. Softw., vol. 11, no. 1, pp. 37–57, Mar. 1985, ISSN: 0098-3500. DOI: 10.1145/3147.3165. [Online]. Available: https://doi.org/10.1145/3147.3165.