Deep Learning for Computer Vision: A comparison between Convolutional Neural Networks and Hierarchical Temporal Memories on object recognition tasks - Slides
This document describes a study comparing Convolutional Neural Networks (CNNs) and Hierarchical Temporal Memories (HTMs) on object recognition tasks. The study implements a CNN using Theano, creates a new benchmark of image sequences from the NORB dataset, and evaluates the performance of CNNs and HTMs on the original NORB dataset and on the new image sequences. The results show that while CNNs achieve higher accuracy on the original NORB data, HTMs are more competitive on the image sequences and can achieve comparable performance using less training data. The study suggests that bio-inspired approaches like HTM can advance deep learning research.
Comparing Incremental Learning Strategies for Convolutional Neural Networks - Vincenzo Lomonaco
In the last decade, Convolutional Neural Networks (CNNs) have been shown to perform incredibly well in many computer vision tasks such as object recognition and object detection, being able to extract meaningful high-level invariant features. However, partly because of their complex training and tricky hyper-parameter tuning, CNNs have been scarcely studied in the context of incremental learning, where data are available in consecutive batches and retraining the model from scratch is unfeasible. In this work we compare different incremental learning strategies for CNN-based architectures, targeting real-world applications.
If you are interested in this work please cite:
Lomonaco, V., & Maltoni, D. (2016, September). Comparing Incremental Learning Strategies for Convolutional Neural Networks. In IAPR Workshop on Artificial Neural Networks in Pattern Recognition (pp. 175-184). Springer International Publishing.
For further information visit my website: http://www.vincenzolomonaco.com/
A comprehensive tutorial on Convolutional Neural Networks (CNNs) which covers the motivation behind CNNs and Deep Learning in general, followed by a description of the various components involved in a typical CNN layer. It explains the theory behind the different variants used in practice and also gives a big picture of the whole network by putting everything together.
Next, there's a discussion of the various state-of-the-art frameworks being used to implement CNNs to tackle real-world classification and regression problems.
Finally, CNN implementation is demonstrated by reproducing the paper 'Age and Gender Classification Using Convolutional Neural Networks' by Levi and Hassner (2015).
Scene classification using Convolutional Neural Networks - Jayani Withanawasam, WithTheBest
Convolutional Neural Networks (CNNs) are widely used for scene classification. We frame computer vision as an AI problem, look at the importance of scene classification as well as its challenges, and at the difference between traditional machine learning and deep learning. Additionally, we discuss CNNs, how to implement them with Caffe, and important resources for further improvement.
AI&BigData Lab 2016. Александр Баев: Transfer learning: why, how and where - GeeksLab Odessa
4.6.16 AI&BigData Lab
Upcoming events: goo.gl/I2gJ4H
We discuss one of the basic practical techniques for training neural networks: pre-training, fine-tuning, and transfer learning. In which cases to apply them, which models to use, where to get them, and how to adapt them.
Transformer Architectures in Vision
[2018 ICML] Image Transformer
[2019 CVPR] Video Action Transformer Network
[2020 ECCV] End-to-End Object Detection with Transformers
[2021 ICLR] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
AI&BigData Lab. Артем Чернодуб, "Image Recognition via Lazy Deep ..." - GeeksLab Odessa
23.05.15, Odessa. Impact Hub Odessa. AI&BigData Lab conference
Артем Чернодуб (Computer Vision Team, ZZ Wolf)
"Image Recognition via Lazy Deep Learning in the ZZ Photo Photo Organizer"
The talk addresses the problem of image recognition with computer vision methods. It briefly surveys the subtasks in this area (object detection, scene classification, associative search in image databases, face recognition, etc.) and the modern methods for solving them, with an emphasis on deep learning.
More details:
http://geekslab.co/
https://www.facebook.com/GeeksLab.co
https://www.youtube.com/user/GeeksLabVideo
https://telecombcn-dl.github.io/dlmm-2017-dcu/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of big annotated data and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which had been addressed until now with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks and Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles and applications of deep learning to computer vision problems, such as image classification, object detection or image captioning.
In this work we started developing a novel framework for statically detecting deadlocks in a concurrent Java environment with asynchronous method calls and cooperative scheduling of method activations. Since this language features recursion and dynamic resource creation, deadlock detection is extremely complex and state-of-the-art solutions either give imprecise answers or do not scale. The basic component of the framework is a front-end inference algorithm that extracts abstract behavioral descriptions of methods, called contracts, which retain resource-dependency information. This component is integrated with a back-end that analyzes contracts and derives deadlock information by computing a fixpoint semantics.
Similar to Deep Learning for Computer Vision: A comparison between Convolutional Neural Networks and Hierarchical Temporal Memories on object recognition tasks - Slides
Slides from a talk at the Montreal Neurological Institute 10/2016 - Progress and challenges for standardized (pre)processing in functional magnetic resonance imaging
Towards Dropout Training for Convolutional Neural Networks - Mah Sa
Design inspired by: https://www.slideshare.net/roelofp/python-for-image-understanding-deep-learning-with-convolutional-neural-nets?qid=06301e83-f65e-40a9-92a2-201664cd6119&v=&b=&from_search=1
Special thanks to him.
How can you handle defects? If you are in a factory, production can yield objects with defects, or sensor values can tell you over time that some readings are not "normal". What can you do as a developer (not a Data Scientist) with .NET or Azure to detect these anomalies? Let's see how in this session.
Continual Reinforcement Learning in 3D Non-stationary Environments - Vincenzo Lomonaco
Dynamic, ever-changing environments constitute a hard challenge for current reinforcement learning techniques. Artificial agents are nowadays often trained in very static and reproducible simulated conditions, under the common assumption that observations can be sampled i.i.d. from the environment. However, when tackling more complex problems and real-world settings, this can rarely be assumed, as environments are often non-stationary and subject to unpredictable, frequent changes. In this talk we discuss a new open benchmark for learning continually through reinforcement in a complex 3D non-stationary object-picking task based on VizDoom and subject to several environmental changes. We further propose a number of end-to-end, model-free continual reinforcement learning strategies showing competitive results even without any access to previously encountered environmental conditions or observations.
Semantic Concept Detection in Video Using Hybrid Model of CNN and SVM Classif... - CSCJournals
In today's era of digitization and fast internet, many videos are uploaded to websites, so a mechanism is required to access them accurately and efficiently. Semantic concept detection achieves this task and is used in many applications like multimedia annotation, video summarization, indexing and retrieval. Video retrieval based on semantic concepts is an efficient and challenging research area. Semantic concept detection bridges the semantic gap between the low-level features extracted from a key-frame or shot of a video and their high-level interpretation as semantics. It automatically assigns labels to video from a predefined vocabulary, a task treated as a supervised machine learning problem. The Support Vector Machine (SVM) emerged as the default classifier choice for this task, but recently deep Convolutional Neural Networks (CNNs) have shown exceptional performance in this area; CNNs, however, require large datasets for training. In this paper, we present a framework for semantic concept detection using a hybrid model of SVM and CNN. Global features like color moments, HSV histogram, wavelet transform, grey-level co-occurrence matrix and edge orientation histogram are selected as low-level features extracted from the annotated ground-truth video dataset of TRECVID. In a second pipeline, deep features are extracted using a pretrained CNN. The dataset is partitioned into three segments to deal with the data imbalance issue. Two classifiers are separately trained on all segments and a fusion of scores is performed to detect the concepts in the test dataset. System performance is evaluated using Mean Average Precision on the multi-label dataset, and the performance of the proposed hybrid SVM-CNN framework is comparable to existing approaches.
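The late-fusion step the abstract describes (two classifiers scored separately, then their scores combined) can be sketched as a weighted average of per-concept confidences. The function name, the equal 0.5 weighting, and the 0.5 detection threshold below are illustrative assumptions, not the paper's actual settings:

```python
def fuse_scores(svm_scores, cnn_scores, alpha=0.5):
    # Weighted late fusion of per-concept confidence scores, each
    # assumed already normalized to [0, 1]. alpha weights the SVM
    # branch; (1 - alpha) weights the CNN branch.
    return [alpha * s + (1.0 - alpha) * c
            for s, c in zip(svm_scores, cnn_scores)]

# Hypothetical scores for three concepts on one key-frame:
fused = fuse_scores([0.9, 0.2, 0.4], [0.7, 0.6, 0.1], alpha=0.5)
# A concept is "detected" when its fused score passes a threshold:
detected = [score > 0.5 for score in fused]
```

With multi-label data, each concept is thresholded independently, which is why the sketch keeps a score per concept rather than a single class decision.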
The Deep Continual Learning community should move beyond studying forgetting in Class-Incremental Learning scenarios! In this tutorial given at #CoLLAs2023, Antonio Carta and I try to explain why and how! 👇
Do you agree?
Continual Learning with Deep Architectures - Tutorial ICML 2021 - Vincenzo Lomonaco
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use these as the foundation for later learning. One of the grand goals of Artificial Intelligence (AI) is building an artificial "continual learning" agent that constructs a sophisticated understanding of the world from its own experience through the autonomous incremental development of ever more complex knowledge and skills (Parisi, 2019). However, despite early speculations and a few pioneering works (Ring, 1998; Thrun, 1998; Carlson, 2010), very little research and effort has been devoted to addressing this vision. Current AI systems greatly suffer from exposure to new data or environments which even slightly differ from the ones on which they have been trained (Goodfellow, 2013). Moreover, the learning process is usually constrained to fixed datasets within narrow and isolated tasks, which may hardly lead to the emergence of more complex and autonomous intelligent behaviors. In essence, continual learning and adaptation capabilities, while more often than not thought of as fundamental pillars of every intelligent agent, have been mostly left out of the main AI research focus.
In this tutorial, we propose to summarize the application of these ideas in light of the more recent advances in machine learning research and in the context of deep architectures for AI (Lomonaco, 2019). Starting from a motivation and a brief history, we link recent Continual Learning advances to previous research endeavours on related topics and we summarize the state-of-the-art in terms of major approaches, benchmarks and key results. In the second part of the tutorial we plan to cover more exploratory studies about Continual Learning with low supervised signals and the relationships with other paradigms such as Unsupervised, Semi-Supervised and Reinforcement Learning. We will also highlight the impact of recent Neuroscience discoveries in the design of original continual learning algorithms as well as their deployment in real-world applications. Finally, we will underline the notion of continual learning as a key technological enabler for Sustainable Machine Learning and its societal impact, as well as recap interesting research questions and directions worth addressing in the future.
Authors: Vincenzo Lomonaco, Irina Rish
Official Website: https://sites.google.com/view/cltutorial-icml2021
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use these as the foundation for later learning, constantly and efficiently updating our biased understanding of the external world. On the contrary, current AI systems are usually trained offline on huge datasets and later deployed with frozen learning capabilities, as they have been shown to suffer from catastrophic forgetting if trained continuously on changing data distributions. A common, practical solution to the problem is to re-train the underlying prediction model from scratch and re-deploy it as a new batch of data becomes available. However, this naive approach is incredibly wasteful in terms of memory and computation, and impossible to sustain over longer timescales and frequent updates. In this talk, we will introduce an efficient continual learning strategy which can reduce the computation and memory overhead by more than 45% w.r.t. the standard re-train & re-deploy approach, further exploring its real-world application in the context of continual object recognition, running at the edge on highly-constrained hardware platforms such as widely adopted smartphone devices.
Continual Learning: Another Step Towards Truly Intelligent Machines - Vincenzo Lomonaco
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use these as the foundation for later learning. One of the grand goals of Artificial Intelligence (AI) is building an artificial continual learning agent that constructs a sophisticated understanding of the world from its own experience through the autonomous incremental development of ever more complex knowledge and skills. However, current AI systems greatly suffer from exposure to new data or environments which even slightly differ from the ones on which they have been trained. Moreover, the learning process is usually constrained to fixed datasets within narrow and isolated tasks, which may hardly lead to the emergence of more complex and autonomous intelligent behaviors. In essence, continual learning and adaptation capabilities, while more often than not thought of as fundamental pillars of every intelligent agent, have been mostly left out of the main AI research focus. In this talk, we explore the application of these ideas in the context of Vision with a focus on (deep) continual learning strategies for object recognition running at the edge on highly-constrained hardware devices.
Artificial agents interacting in highly dynamic environments are required to continually acquire and fine-tune their knowledge over time. In contrast to conventional deep neural networks that typically rely on a large batch of annotated training samples, lifelong learning systems must account for situations in which the number of tasks is not known a priori and the data samples become incrementally available over time. Despite recent advances in deep learning, lifelong machine learning has remained a long-standing challenge because neural networks are prone to catastrophic forgetting, i.e., the learning of new tasks interferes with previously learned ones and leads to abrupt disruptions of performance. Recently proposed deep supervised and reinforcement learning models for addressing catastrophic forgetting suffer from flexibility, robustness, and scalability issues with respect to biological systems. In this tutorial, we will present and discuss well-established and emerging neural network approaches motivated by lifelong learning factors in biological systems such as neurosynaptic plasticity, complementary memory systems, multi-task transfer learning, and intrinsically motivated exploration.
Continual/Lifelong Learning with Deep Architectures - Vincenzo Lomonaco
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use these as the foundation for later learning. One of the grand goals of AI is building an artificial continually learning agent that constructs a sophisticated understanding of the world from its own experience through the autonomous incremental development of ever more complex skills and knowledge.
"Continual Learning" (CL) is indeed a fast emerging topic in AI concerning the ability to efficiently improve the performance of a deep model over time, dealing with a long (and possibly unlimited) sequence of data/tasks. In this workshop, after a brief introduction of the topic, we’ll implement different Continual Learning strategies and assess them on common vision benchmarks. We’ll conclude the workshop with a look at possible real world applications of CL.
Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, we can also use these as the foundation for later learning. One of the grand goals of Artificial Intelligence (AI) is building an artificial continual learning agent that constructs a sophisticated understanding of the world from its own experience through the autonomous incremental development of ever more complex knowledge and skills. However, current AI systems greatly suffer from exposure to new data or environments which even slightly differ from the ones on which they have been trained. Moreover, the learning process is usually constrained to fixed datasets within narrow and isolated tasks, which may hardly lead to the emergence of more complex and autonomous intelligent behaviors. In essence, continual learning and adaptation capabilities, while more often than not thought of as fundamental pillars of every intelligent agent, have been mostly left out of the main AI research focus. In this talk, we explore the application of these ideas in the context of Robotics with a focus on (deep) continual learning strategies for object recognition running at the edge on highly-constrained hardware devices.
Don't forget, there is more than forgetting: new metrics for Continual Learni... - Vincenzo Lomonaco
Continual learning consists of algorithms that learn from a stream of data/tasks continuously and adaptively through time, enabling the incremental development of ever more complex knowledge and skills. The lack of consensus in evaluating continual learning algorithms and the almost exclusive focus on forgetting motivate us to propose a more comprehensive set of implementation-independent metrics accounting for several factors we believe have practical implications worth considering in the deployment of real AI systems that learn continually: accuracy or performance over time, backward and forward knowledge transfer, memory overhead, and computational efficiency. Drawing inspiration from the standard Multi-Attribute Value Theory (MAVT), we further propose to fuse these metrics into a single score for ranking purposes, and we evaluate our proposal with five continual learning strategies on the iCIFAR-100 continual learning benchmark.
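The fusion step described above, in the spirit of additive MAVT, amounts to a weighted sum of normalized per-criterion values. The metric names and weights below are illustrative placeholders, not the ones proposed in the paper:

```python
def cl_score(metrics, weights):
    # Fuse per-criterion metric values (each normalized to [0, 1],
    # higher = better) into a single ranking score: an additive
    # MAVT-style weighted sum, with weights summing to 1.
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[name] * metrics[name] for name in weights)

# Hypothetical normalized values for one continual learning strategy:
metrics = {"accuracy": 0.7, "backward_transfer": 0.6,
           "forward_transfer": 0.5, "memory": 0.8, "compute": 0.9}
weights = {"accuracy": 0.4, "backward_transfer": 0.15,
           "forward_transfer": 0.15, "memory": 0.15, "compute": 0.15}
score = cl_score(metrics, weights)
```

Because every criterion is normalized to the same [0, 1] scale before weighting, strategies can be ranked directly by this single score.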
Open-Source Frameworks for Deep Learning: an Overview - Vincenzo Lomonaco
The rise of deep learning over the last decade has led to profound changes in the landscape of the machine learning software stack, both for research and production. In this talk we will provide a comprehensive overview of the open-source deep learning frameworks landscape with both a theoretical and hands-on approach. After a brief introduction and historical contextualization, we will highlight common features and distinctions among their recent developments. Finally, we will take a deeper look at three of the most used deep learning frameworks today: Caffe, TensorFlow, and PyTorch, with practical examples and considerations worth weighing when choosing among such libraries.
Continual Learning with Deep Architectures Workshop @ Computer VISIONers Conf... - Vincenzo Lomonaco
Continual Learning (CL) is a fast-emerging topic in AI concerning the ability to efficiently improve the performance of a deep model over time, dealing with a long (and possibly unlimited) sequence of data/tasks. In this workshop, after a brief introduction to the subject, we'll analyze different Continual Learning strategies and assess them on common Vision benchmarks. We'll conclude the workshop with a look at possible real-world applications of CL.
CORe50: a New Dataset and Benchmark for Continual Learning and Object Recogni... - Vincenzo Lomonaco
Continuous/Lifelong learning of high-dimensional data streams is a challenging research problem. In fact, fully retraining models each time new data become available is infeasible, due to computational and storage issues, while naïve incremental strategies have been shown to suffer from catastrophic forgetting. In the context of real-world object recognition applications (e.g., robotic vision), where continuous learning is crucial, very few datasets and benchmarks are available to evaluate and compare emerging techniques. In this work we propose a new dataset and benchmark, CORe50, specifically designed for continuous object recognition, and introduce baseline approaches for different continuous learning scenarios.
One of the greatest goals of AI is building an artificial continuous learning agent which can construct a sophisticated understanding of the external world from its own experience through the adaptive, goal-oriented and incremental development of ever more complex skills and knowledge. Yet, Continuous/Lifelong Learning (CL) from high-dimensional streaming data is a challenging research problem far from being solved. In fact, fully retraining deep prediction models each time a new piece of data becomes available is infeasible, due to computational and storage issues, while naïve continuous learning strategies have been shown to suffer from catastrophic forgetting. This talk will cover some of the most common end-to-end continuous learning strategies for gradient-based architectures and the recently proposed AR-1 strategy, which can outperform other state-of-the-art regularization and architectural approaches on the CORe50 benchmark.
CORe50: a New Dataset and Benchmark for Continuous Object Recognition Poster - Vincenzo Lomonaco
Continuous/Lifelong learning of high-dimensional data streams is a challenging research problem. In fact, fully retraining models each time new data become available is infeasible, due to computational and storage issues, while naïve incremental strategies have been shown to suffer from catastrophic forgetting. In the context of real-world object recognition applications (e.g., robotic vision), where continuous learning is crucial, very few datasets and benchmarks are available to evaluate and compare emerging techniques. In this work we propose a new dataset and benchmark, CORe50, specifically designed for continuous object recognition, and introduce baseline approaches for different continuous learning scenarios.
Continuous Unsupervised Training of Deep Architectures - Vincenzo Lomonaco
A number of successful Computer Vision applications have recently been proposed based on Convolutional Networks. However, in most cases the system is fully supervised, the training set is fixed and the task completely defined a priori. Even though Transfer Learning approaches have proved very useful for adapting heavily pre-trained models to ever-changing scenarios, the incremental learning and adaptation capabilities of existing models are still limited, and catastrophic forgetting remains very difficult to control. In this talk we will discuss our experience in the design of deep architectures and algorithms capable of learning objects incrementally, both in a supervised and unsupervised way. Finally, we will introduce a new dataset and benchmark (CORe50) that we specifically collected to focus on continuous object recognition for Robotic Vision.
Deep Learning for Computer Vision: A comparison between Convolutional Neural... - Vincenzo Lomonaco
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general.
However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with tons of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence".
The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and the Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning, and are the best choices for understanding and pointing out the strengths and weaknesses of each.
The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed at large corporations like Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is known as a new emerging paradigm and a new meanly-unsupervised method, that is more biologically inspired. It tries to gain more insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process which are typical of the human brain.
In the end, the thesis is supposed to prove that in certain cases, with a lower quantity of data, HTM can outperform CNN.
Deep Learning libraries and first experiments with Theano, by Vincenzo Lomonaco
In recent years, neural networks and deep learning techniques have been shown to perform well on many problems in image recognition, speech recognition, natural language processing and many other tasks. As a result, a large number of libraries, toolkits and frameworks have come out in different languages and with different purposes. In this report, we first take a look at these projects and then choose the framework that best suits our needs: Theano. Finally, we implement a simple convolutional neural net using this framework to test both its ease-of-use and efficiency.
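As an illustrative sketch only, not the report's actual Theano code, the forward pass of a single convolutional layer (convolution with a learned filter, a non-linearity, then max pooling) can be written in plain NumPy; the sizes and names below are hypothetical:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation, as used in CNN layers."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling (subsampling)."""
    H, W = fmap.shape
    H, W = H - H % size, W - W % size
    return fmap[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.standard_normal((96, 96))       # one NORB-sized 96x96 image
kern = rng.standard_normal((5, 5))        # one learnable 5x5 filter
fmap = np.tanh(conv2d_valid(img, kern))   # conv + non-linearity -> 92x92
pooled = max_pool(fmap, 2)                # pooling -> 46x46
print(fmap.shape, pooled.shape)           # (92, 92) (46, 46)
```

A framework like Theano expresses the same computation symbolically and compiles it to optimized (optionally GPU) code, which is what the report evaluates.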
Word2vec on the Italian language: first experiments, by Vincenzo Lomonaco
The word2vec model and applications by Mikolov et al. have attracted a great amount of attention in recent years. The vector representations of words learned by word2vec models have been proven able to carry semantic meaning and are useful in various NLP tasks. In this work I try to reproduce the previously obtained results for the English language and to explore the possibility of doing the same for the Italian language.
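To illustrate what "carrying semantic meaning" looks like in practice, here is a minimal sketch of the classic vector-arithmetic analogy. The toy 3-d vectors below are hand-made and purely illustrative; real word2vec embeddings are high-dimensional and learned from a corpus:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical toy vectors, chosen only to make the analogy work.
vecs = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

# The famous analogy: king - man + woman should land near queen.
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max((w for w in vecs if w != "king"), key=lambda w: cosine(target, vecs[w]))
print(best)  # queen
```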
Deep Learning for Computer Vision: A comparison between Convolutional Neural Networks and Hierarchical Temporal Memories on object recognition tasks - Slides
1. Alma Mater Studiorum - University of Bologna
School of Science
Department of Computer Science and Engineering DISI
Deep Learning for Computer Vision
Candidate
dott. Vincenzo Lomonaco
Supervisor
prof. Davide Maltoni
Co-examiner
prof. Mauro Gaspari
A comparison between Convolutional Neural
Networks and Hierarchical Temporal Memories on
object recognition tasks
2. 08.09.15 Vincenzo Lomonaco 2
Contents
Background & Motivations
Objectives
Introduction
CNN and HTM
Key features
Implementations
NORB-sequences
Original NORB dataset
New benchmark design
Experiments and Results
Experiments design
Results
Conclusions
4. Deep Learning
In the last decade, Deep Learning techniques have been shown to perform incredibly well on a large variety of problems in both Computer Vision and Natural Language Processing, achieving the state of the art in many tasks.
5. Deep Learning advantages
Deep Learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using model architectures composed of multiple non-linear transformations.
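As a minimal sketch of this definition, the snippet below composes two affine-plus-non-linear transformations in NumPy; the layer sizes and random weights are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(16)            # raw input features

def layer(x, n_out, rng):
    """One non-linear transformation: affine map followed by a ReLU."""
    W = rng.standard_normal((n_out, x.size)) * 0.1
    b = np.zeros(n_out)
    return np.maximum(0.0, W @ x + b)

# "Deep" = stacking several such transformations, each building
# higher-level abstractions on top of the previous one.
h1 = layer(x, 32, rng)                 # lower-level representation
h2 = layer(h1, 8, rng)                 # higher-level representation
print(h1.shape, h2.shape)              # (32,) (8,)
```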
6. Deep Learning disadvantages
Possible limitations:
● Poorly understood underlying theory
● Non-optimal methods
● Very difficult to train
● Huge quantity of data needed
● High Performance Computing environment needed
7. Objectives
Proving that taking inspiration from biological learning systems can once again help advance the field of DL.
Proving that, with less data, it is nevertheless possible to reach good levels of accuracy.
8. How
Comparing two very different deep learning algorithms on object recognition tasks:
– CNN: classical approach, state-of-the-art for object recognition
– HTM: new biologically inspired approach
We would like to show that, with a lower quantity of available data, HTM can outperform CNN on these tasks while remaining comparable in terms of training times.
10. CNN
CNNs are MLP variants in which individual neurons are tiled so that they respond to overlapping regions in the visual field. They are architecturally inspired by Hubel and Wiesel’s early work on the cat’s visual cortex.
Key features:
● Pure supervised method
● Sparse Connectivity
● Shared Weights
Implementation:
● Python
● Using Theano
● 11 source files, 2,550+ lines
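The payoff of shared weights and sparse connectivity can be made concrete with a quick parameter count on a NORB-sized 96x96 input; the layer sizes below are hypothetical:

```python
# A dense layer mapping a 96x96 image to a 92x92 feature map needs one
# weight per input-output pair; a convolutional layer producing the same
# map shares a single 5x5 kernel across all positions (bias omitted).
dense_params = (96 * 96) * (92 * 92)   # fully connected: 78,004,224 weights
conv_params = 5 * 5                    # shared 5x5 kernel: 25 weights
print(dense_params, conv_params)
```

Sparse connectivity (each output unit sees only a 5x5 patch) and weight sharing (the same kernel at every position) are what make this roughly six-orders-of-magnitude reduction possible.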
11. HTM
HTM is a new, emerging paradigm that is more biologically inspired. It tries to incorporate into the learning process concepts like time, context and attention, which are typical of the human brain.
Key features:
● Mainly unsupervised method
● Top-down and bottom-up information flow
● Bayesian probabilistic formulation
Implementation:
● C#, OpenMP version
● Provided by the Biometric System Lab (DISI)
13. NORB-Sequences
Since the computer vision community is starting to investigate object recognition algorithms on videos, we would like to move our comparison in that direction. To this purpose, a new benchmark consisting of a large collection of image sequences has been created, starting from the well-known small NORB dataset.
The original NORB dataset:
● Stores 48,600 96x96 images (5 categories, 10 instances, 6 lightings, 9 elevations, and 18 azimuths).
● Is well-known and accepted by the research community in the context of object recognition.
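The image count quoted above follows directly from the dataset's full factorial design:

```python
# Every combination of the five NORB factors yields one image.
categories, instances, lightings, elevations, azimuths = 5, 10, 6, 9, 18
total = categories * instances * lightings * elevations * azimuths
print(total)  # 48600
```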
15. Java sequencer
NORB-sequences is made possible by a Java software tool that takes the small NORB dataset as input and, given a number of different tuning parameters, returns a number of training and test image sequences.
Key features:
● The sequences are created ad hoc to simulate a camera moving around a specific object, including changes in the surrounding lighting.
● Integrated KNN baseline, GUI, 10 source files, 2,600+ lines
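The sequencer itself is Java software not reproduced here; as a hypothetical Python sketch of the underlying idea, a camera-like sequence can be generated by taking small random steps through NORB's (elevation, azimuth, lighting) grid:

```python
import random

# NORB's viewpoint grid (indices, not degrees); names are illustrative.
N_LIGHTINGS, N_ELEVATIONS, N_AZIMUTHS = 6, 9, 18

def make_sequence(length, seed=0):
    """Simulate a camera moving around one object: adjacent frames differ by
    at most one step in elevation/azimuth, with occasional lighting changes."""
    rng = random.Random(seed)
    elev = rng.randrange(N_ELEVATIONS)
    azim = rng.randrange(N_AZIMUTHS)
    light = rng.randrange(N_LIGHTINGS)
    frames = []
    for _ in range(length):
        frames.append((elev, azim, light))
        elev = min(max(elev + rng.choice([-1, 0, 1]), 0), N_ELEVATIONS - 1)
        azim = (azim + rng.choice([-1, 0, 1])) % N_AZIMUTHS   # azimuth wraps
        if rng.random() < 0.2:                                # lighting change
            light = rng.randrange(N_LIGHTINGS)
    return frames

seq = make_sequence(8)
print(len(seq))  # 8 (elevation, azimuth, lighting) frame indices
```

Each tuple indexes one image of the chosen object instance, so the resulting frame list plays back as a smooth synthetic video.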
18. Experiments design
1) Validate the CNN implementation on the NORB dataset
2) Evaluate the performance of both algorithms on the plain NORB dataset
3) Evaluate the performance of both algorithms on the NORB sequences
19. CNN validation
In order to validate the new implementation, the goal was to reproduce Y. LeCun’s original results on the plain NORB dataset.
20. Plain NORB results
Comparison of accuracy results between CNN and HTM on the plain NORB dataset.
21. Training times
Training times comparison between CNN and HTM on the NORB sequences.

Training size     CNN times   HTM times
100 + 800jit      10.94 m     21.19 m
250 + 2000jit     31.15 m     23.13 m
500 + 4000jit     38.24 m     22.14 m
1000 + 4000jit    91.26 m     26.04 m
2500 + 4000jit    94.90 m     61.08 m
5000 + 4000jit    124.7 m     89.58 m
10000 + 4000jit   187.7 m     143.5 m
24300 + 4000jit   51.31 m     596.2 m
Architectures:
● CNN: GPU Tesla C2075 Fermi (GPU speedup x3.2)
● HTM: CPU Xeon W3550, 4 cores.
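As a quick sanity check on the trend in the table, assuming the times above are wall-clock minutes, the CNN-to-HTM time ratio for a few rows can be computed directly:

```python
# (training size, CNN minutes, HTM minutes), copied from the table above.
runs = [(100, 10.94, 21.19), (1000, 91.26, 26.04), (10000, 187.7, 143.5)]
for size, cnn, htm in runs:
    # Ratio > 1 means the CNN took longer than the HTM on that run.
    print(size, round(cnn / htm, 2))
```

On these rows the CNN is faster only at the smallest training size, then becomes relatively slower as the training set grows (keeping in mind the CNN ran on a GPU and the HTM on a 4-core CPU).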
24. Conclusions
In this dissertation three different milestones have been achieved:
1) A LeNet-7 has been successfully implemented with Theano.
2) A new benchmark for object recognition in image sequences has been created.
3) HTM and CNN have been compared on different object recognition tasks.
It has been shown that the HTM bio-inspired approach can be highly competitive and could be instrumental in advancing the field of Deep Learning.
25. The End
http://vincenzolomonaco.com
vincenzo.lomonaco@studio.unibo.it
“If we want machines to think, we need to teach them to see”
Fei-Fei Li, Stanford Computer Vision Lab
Thank you for your attention
Vincenzo Lomonaco