This is the part where we talk about model-based collaborative filtering. Here, we will see two types of models: SVM and SVD (Singular Value Decomposition). We will also introduce you to a useful library: the Surprise library.
[Notebook](https://colab.research.google.com/drive/1Xt3DImn43eMrEMMZadByrD1_bSS0fmGh)
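As a rough illustration of the SVD idea behind model-based filtering (using NumPy as a stand-in for the Surprise library, and a made-up rating matrix), a truncated SVD reconstructs the matrix from a few latent factors and thereby produces scores for unrated cells:

```python
import numpy as np

# Toy user-item rating matrix (0 = unrated); the data is invented for illustration.
R = np.array([
    [5.0, 4.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [0.0, 1.0, 5.0, 4.0],
])

# Truncated SVD: keep k latent factors, then reconstruct to score missing cells.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Predicted score for user 1, item 2 (previously unrated):
print(round(float(R_hat[1, 2]), 2))
```

Note that Surprise's `SVD` is really a biased matrix factorization trained only on observed entries; the plain SVD above is a simplification of the same low-rank idea.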
Custom Star Creation for Ellucian's Enterprise Data WarehouseBryan L. Mack
Plugging new fact and dimension tables into Ellucian's EDW product can be a daunting task. This presentation walks through a custom star I created to track employee benefit deductions at a detailed level for trend analysis, and serves as a guideline for plugging any star into the product using 100% custom code.
As the name suggests, we use recommender systems to recommend items to users based on their preferences and the preferences of other users.
We will talk about two categories of recommender systems: content-based filtering and collaborative filtering. Within the latter, there are two approaches: the neighborhood approach and the model-based approach. In this section, we see the first one.
[Notebook](https://colab.research.google.com/drive/12gM8EEa6gxhgpMB-QvCbfmwwZm7MVrku)
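A minimal sketch of the neighborhood approach, assuming a toy user-item rating matrix (the data and the `predict` helper are invented for illustration): a user's unknown rating is estimated as a similarity-weighted average of other users' ratings.

```python
import numpy as np

# Toy ratings (rows = users, cols = items, 0 = unrated); data is illustrative.
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 4.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 5.0, 4.0],
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue  # skip self and users who have not rated the item
        sim = cosine(R[user], R[other])
        num += sim * R[other, item]
        den += abs(sim)
    return num / den if den else 0.0

print(round(predict(0, 2), 2))
```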
Movie Recommendation Engine using Artificial IntelligenceHarivamshi D
My academic major project: movie recommendation using artificial intelligence. We also developed a website named Movie Engine for recommending movies.
Product recommendation system for an e-commerce site, by Koby KARP, Data Scientist (Equancy) & Hervé MIGNOT, Partner at Equancy
Recommendation remains a key tool for personalizing e-commerce sites, and the subject is far from exhausted. Taking the particularities of a market into account may require adapting the processing and the algorithms used. After a review of recommendation techniques, we present the specific approach we adopted. The system was developed on Spark for data preparation and for computing the recommendation models. A simple API and its service were developed to deliver the recommendations to client applications.
Aaa ped-12-Supervised Learning: Support Vector Machines & Naive Bayes ClassifiersAminaRepo
A particular type of model in supervised learning is the SVM: Support Vector Machine. It can be used for both classification and regression. We will also see how to apply SVMs to a face recognition problem.
Then we will see a particular type of classifier: Naive Bayes classifiers. We will talk specifically about the multinomial and the Gaussian naive Bayes.
[Notebook](https://colab.research.google.com/drive/10hP0bCSt_H7AvY4EljEcP-q7EXEcb3Mt)
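To make the Gaussian naive Bayes idea concrete, here is a from-scratch NumPy sketch on invented two-class data (a simplification, not a library implementation): each class is modeled with per-feature Gaussians, and prediction picks the class with the highest log-likelihood plus log-prior.

```python
import numpy as np

# Tiny 2-class toy dataset (features are made up for illustration).
X = np.array([[1.0, 2.1], [1.2, 1.9], [0.9, 2.0],   # class 0
              [3.0, 0.9], [3.2, 1.1], [2.9, 1.0]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

classes = np.unique(y)
# Per-class mean, variance and prior, estimated from the training data.
means = {c: X[y == c].mean(axis=0) for c in classes}
vars_ = {c: X[y == c].var(axis=0) + 1e-9 for c in classes}
priors = {c: (y == c).mean() for c in classes}

def predict(x):
    """Pick the class with the highest Gaussian log-likelihood + log-prior."""
    scores = {}
    for c in classes:
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * vars_[c])
                                + (x - means[c]) ** 2 / vars_[c])
        scores[c] = log_lik + np.log(priors[c])
    return max(scores, key=scores.get)

print(predict(np.array([1.0, 2.0])))  # → 0 (close to the class-0 cluster)
```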
Certification Study Group - Professional ML Engineer Session 3 (Machine Learn...gdgsurrey
Dive into the essentials of ML model development, processes, and techniques to combat underfitting and overfitting, explore distributed training approaches, and understand model explainability. Enhance your skills with practical insights from a seasoned expert.
Recommender Systems from A to Z – The Right DatasetCrossing Minds
In recent years, a lot of improvements have been made in the field of machine learning and the tools that support the developer community. Still, implementing a recommender system is very hard.
That is why at Crossing Minds, we decided to create a series of 4 meetups to discuss how to implement a recommender system end-to-end:
Part 1 – The Right Dataset
Part 2 – Model Training
Part 3 – Model Evaluation
Part 4 – Real-Time Deployment
This first meetup will be about building the right dataset and doing all the preprocessing needed to create different models. We will talk about explicit vs implicit feedback, dataset analysis, likes/dislikes vs ratings, users and items features, normalization and similarities.
Context-aware recommender systems (CARS) help improve the effectiveness of recommendations by adapting to users' preferences in different contextual situations. One approach to CARS that has been shown to be particularly effective is Context-Aware Matrix Factorization (CAMF). CAMF incorporates contextual dependencies into the standard matrix factorization (MF) process, where users and items are represented as collections of weights over various latent factors. In this paper, we introduce another CARS approach based on an extension of matrix factorization, namely, the Sparse Linear Method (SLIM). We develop a family of deviation-based contextual SLIM (CSLIM) recommendation algorithms by learning rating deviations in different contextual conditions. Our CSLIM approach is better at explaining the underlying reasons behind contextual recommendations, and our experimental evaluations over five context-aware data sets demonstrate that these CSLIM algorithms outperform the state-of-the-art CARS algorithms in the top-$N$ recommendation task. We also discuss the criteria for selecting the appropriate CSLIM algorithm in advance based on the underlying characteristics of the data.
Automating Speed: A Proven Approach to Preventing Performance Regressions in ...HostedbyConfluent
Regular performance testing is one of the pillars of Kafka Streams’ reliability and efficiency. Beyond ensuring dependable releases, regular performance testing supports engineers in new feature development with the ability to easily test the performance impact of their features, compare different approaches, etc.
In this session, Alex and John share their experience from developing, using, and maintaining a performance testing framework for Kafka Streams that has prevented multiple performance regressions over the last 5 years. They cover guiding principles and architecture, how to ensure statistical significance and stability of results, and how to automate regression detection for actionable notifications.
This talk sheds light on how Apache Kafka is able to foster a vibrant open-source community while maintaining a high performance bar across many years and releases. It also empowers performance-minded engineers to avoid common pitfalls and bring high-quality performance testing to their own systems.
Aaa ped-14-Ensemble Learning: About Ensemble LearningAminaRepo
In this section we start talking about ensemble learning proper. We will cover the different methods that exist for combining different models, and then implement those methods in Python.
[Notebook](https://colab.research.google.com/drive/1fNkOh7iQ_AnjNWxm3hWyR4DIGRUNwzsS)
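One of the simplest ways to combine models is hard majority voting; a sketch with hypothetical per-model predictions (the labels and "models" are invented for illustration):

```python
from collections import Counter

# Hypothetical predictions from three already-trained classifiers on 5 samples.
preds = [
    ["cat", "dog", "dog", "cat", "dog"],   # model A
    ["cat", "cat", "dog", "cat", "cat"],   # model B
    ["dog", "dog", "dog", "cat", "dog"],   # model C
]

def majority_vote(per_model_preds):
    """Hard-voting ensemble: the most common label per sample wins."""
    combined = []
    for sample_preds in zip(*per_model_preds):
        combined.append(Counter(sample_preds).most_common(1)[0][0])
    return combined

print(majority_vote(preds))  # → ['cat', 'dog', 'dog', 'cat', 'dog']
```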
Reinforcement Learning (RL) is a particular type of learning. It is useful when we try to learn from an unknown environment, which means our model has to explore the environment in order to collect the data necessary for its training. The model is represented as an agent trying to achieve a certain goal in a particular environment. The agent affects this environment by taking actions that change the state of the environment and generate rewards produced by it.
The learning relies on the generated rewards, and the goal is to maximize them. To choose the actions to apply, the agent uses a policy. It can be defined as the process the agent uses to choose the actions that allow it to optimize the overall reward. In this course, we will see two methods used to develop these policies: policy gradient and Q-learning. We will implement our examples using the following libraries: OpenAI gym, keras, tensorflow, and keras-rl.
[Notebook 1](https://colab.research.google.com/drive/1395LU6jWULFogfErI8CIYpi35Y00YiRj)
[Notebook 2](https://colab.research.google.com/drive/1MpDS5rj-PwzzLIZtAGYnZ_jjEwhWZEdC)
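To make the reward-driven learning loop concrete, here is a tabular Q-learning sketch on a hand-made corridor environment (no gym needed; the environment, rewards, and constants are all invented for illustration):

```python
import random

random.seed(0)

# A 5-cell corridor: start in cell 0, reward +1 for reaching cell 4.
# Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def greedy(state):
    # ties broken toward "right" just to keep the toy example fast
    return max((1, 0), key=lambda a: Q[state][a])

for _ in range(200):                      # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy exploration
        a = random.randrange(2) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [greedy(s) for s in range(GOAL)]
print(policy)  # the learned policy should always move right: [1, 1, 1, 1]
```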
Empirical Evaluation of Active Learning in Recommender SystemsUniversity of Bergen
The accuracy of collaborative-filtering recommender systems largely depends on three factors: the quality of the rating prediction algorithm, and the quantity and quality of available ratings. While research in the field of recommender systems often concentrates on improving prediction algorithms, even the best algorithms will fail if they are fed poor quality data during training. Active learning aims to remedy this problem by focusing on obtaining better quality data that more aptly reflects a user’s preferences. In an attempt to do that, an active learning strategy selects the best items to be presented to the user in order to acquire her ratings and hence improve the output of the RS.
In this seminar, I present a set of active learning strategies with different characteristics and the evaluation results with respect to several evaluation measures (i.e., MAE, NDCG, Precision, Coverage, Recommendation Quality, and, Quantity of the acquired ratings and contextual conditions).
The traditional evaluation of active learning strategies has two major flaws: (1) Performance has been evaluated for each user independently (ignoring system-wide improvements) (2) Active learning strategies have been evaluated in isolation from unsolicited user ratings (natural acquisition). Addressing these flaws, I present that an elicited rating has effects across the system, so a typical user-centric evaluation which ignores any changes of rating prediction of other users also ignores these cumulative effects, which may be more influential on the performance of the system as a whole (system-centric). Hence, I present a novel offline evaluation methodology and use it to evaluate some novel and state of the art rating elicitation strategies.
While the first set of experiments was done offline, the true value of active learning must be evaluated in an online setting. Hence, in the second part of the seminar, I present a novel active learning approach that exploits some additional information about the user (i.e. the user’s personality) to deal with the cold start problem in an up-and-running mobile context-aware RS called STS, which provides users with recommendations for places of interest (POIs). The results of live user studies have shown that the proposed AL approach significantly increases the quantity of the ratings and contextual conditions acquired from the user as well as the recommendation accuracy.
Aaa ped-23-Artificial Neural Network: Keras and TensorflowAminaRepo
We will focus in this part on two important libraries for ANNs: TensorFlow and Keras. Both offer two ways of creating models; we will use the high-level API of TensorFlow and the sequential models of Keras.
We will introduce you to some important basic concepts related to TensorFlow, and we will present TensorBoard. The latter is used to visualize, among other things, quantitative values related to a training process.
[Notebook](https://colab.research.google.com/drive/13KlhoNvYmeRZTZ-TLKAtW3rOkFzQVGYC)
Aaa ped-22-Artificial Neural Network: Introduction to ANNAminaRepo
Finally, we will talk about Artificial Neural Networks (ANNs). In this part we focus on the building blocks of ANNs. We will describe the perceptron and its learning rules, and talk in more detail about the gradient descent algorithm.
We will also define an important concept in ANNs: the back-propagation algorithm. For our examples we will use two libraries: neurolab and scikit-learn.
[Notebook](https://colab.research.google.com/drive/1CqYwu9NzeXuUNeR8RmkDplUd71DHWNod)
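The perceptron learning rule can be sketched in a few lines of NumPy (here on the linearly separable AND function, without neurolab or scikit-learn; the data and learning rate are chosen just for illustration):

```python
import numpy as np

# Perceptron learning rule on the linearly separable AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])  # AND truth table

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                      # epochs
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        # update rule: w <- w + lr * (target - pred) * x
        w += lr * (target - pred) * xi
        b += lr * (target - pred)

print([1 if xi @ w + b > 0 else 0 for xi in X])  # → [0, 0, 0, 1]
```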
More Related Content
Similar to Aaa ped-20-Recommender Systems: Model-based collaborative filtering
This is the other category of recommender systems: content-based filtering.
We will apply three of the various available methods: decision trees, the nearest neighbor method, and polynomial regression.
[Notebook](https://colab.research.google.com/drive/1ohF6b5LO1XA0cCSLITSApaioUIKyQowQ)
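A minimal nearest-neighbor flavour of content-based filtering, with invented item feature vectors and a hypothetical `recommend` helper: build a profile from the items a user liked, then recommend the unseen items closest to that profile.

```python
import numpy as np

# Hypothetical item features: [action, romance, sci-fi] scores per movie.
items = {
    "Movie A": np.array([0.9, 0.1, 0.8]),
    "Movie B": np.array([0.2, 0.9, 0.1]),
    "Movie C": np.array([0.8, 0.2, 0.9]),
    "Movie D": np.array([0.1, 0.8, 0.2]),
}

def recommend(liked, k=1):
    """Return the k unseen items nearest (Euclidean) to the profile
    built from the user's liked items."""
    profile = np.mean([items[name] for name in liked], axis=0)
    candidates = [n for n in items if n not in liked]
    candidates.sort(key=lambda n: np.linalg.norm(items[n] - profile))
    return candidates[:k]

print(recommend(["Movie A"]))  # → ['Movie C']
```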
Aaa ped-18-Unsupervised Learning: Association Rule LearningAminaRepo
We use association rule methods to find relationships between data attributes. We will see two methods, apriori and eclat, and use two libraries: mlxtend and fim.
Association rule learning can be used, for example, to identify items that are often bought together.
[Notebook](https://colab.research.google.com/drive/1gABWBOK176R0q9HRYrDVpYREWKPKfbLK)
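The apriori idea can be sketched in plain Python (a simplified toy, not the mlxtend or fim APIs; the baskets and support threshold are made up): a k-itemset can only be frequent if all of its (k-1)-subsets are frequent, so the search proceeds level by level.

```python
# Toy market baskets; in the notebook this is done with the mlxtend/fim libraries.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
]
MIN_SUPPORT = 0.6  # an itemset must appear in >= 60% of baskets

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def apriori(items, max_size=3):
    """Level-wise search exploiting the apriori property."""
    frequent = {}
    level = [frozenset([i]) for i in items]
    for _ in range(max_size):
        level = [s for s in level if support(s) >= MIN_SUPPORT]
        frequent.update({s: support(s) for s in level})
        # candidate generation: join frequent sets differing by one item
        level = list({a | b for a in level for b in level
                      if len(a | b) == len(a) + 1})
    return frequent

freq = apriori({"bread", "milk", "butter"})
for itemset, sup in sorted(freq.items(), key=lambda kv: -kv[1]):
    print(sorted(itemset), sup)
```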
In order to visualize the data, or simply to speed up the learning process without losing the important features, we apply dimensionality reduction methods.
We will talk about two methods: PCA and manifold learning.
[Notebook](https://colab.research.google.com/drive/1_ksjf1K49dUA8XtyDGoL5V3JEajHvFHb)
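A sketch of PCA from first principles (eigendecomposition of the covariance matrix, on synthetic data), rather than the scikit-learn API:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 points stretched along one axis, so one component dominates.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.3]])

# PCA: center the data, then take the top eigenvectors of the covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)          # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
X_reduced = Xc @ eigvecs[:, :1]                 # project onto the 1st component

print(X_reduced.shape, round(float(explained[0]), 3))
```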
Unsupervised learning involves using unlabeled data. It is used for specific problems such as clustering, dimensionality reduction, and association rule learning.
In the first section we talk about some of the clustering methods: k-means, mean shift, Gaussian mixture, and affinity propagation. We will also define and use silhouette scores, which help select the most appropriate number of clusters the data may have.
[Notebook](https://colab.research.google.com/drive/1g4hcSfiO-TW35JbiQ_kGQAsgMZDPkp7L)
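A bare-bones k-means (Lloyd's algorithm) on synthetic blobs, as a sketch of what the library implementations do; the data and cluster count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated blobs (synthetic data for illustration).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

def kmeans(X, k, n_iter=20):
    """Lloyd's algorithm: alternate assignment and centroid update."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(X, k=2)
print(sorted(np.bincount(labels)))  # each blob should form one cluster
```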
We have already seen some of the operations related to data preparation, but in this section we talk about a particular aspect of it: preprocessing. Even if the data is clean and complete, it has to go through certain treatment steps to be usable in machine learning.
Besides data, we will define single- and multi-variable regressors. We will also define the classification regression model: logistic regression. Another important element we will define is the regularization added to regression models to avoid overfitting.
[Notebook](https://colab.research.google.com/drive/1cMINaTmIGKmOCoZPf44Oog_j9z8bRuTv)
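A sketch combining two of the ideas above, standardization and L2 regularization, using the closed-form ridge solution on synthetic data (the feature scales and coefficients are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic regression data: y = 2*x0 - 1*x1 + noise.
X = rng.normal(size=(100, 2)) * [10.0, 0.1]      # wildly different feature scales
y = 2 * X[:, 0] - 1 * X[:, 1] + rng.normal(0, 0.1, 100)

# Preprocessing: standardize each feature to zero mean, unit variance.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xs = (X - mu) / sigma

# Ridge regression (L2 regularization) in closed form:
#   w = (X^T X + alpha * I)^-1 X^T y
alpha = 1.0
A = Xs.T @ Xs + alpha * np.eye(Xs.shape[1])
w = np.linalg.solve(A, Xs.T @ (y - y.mean()))

pred = Xs @ w + y.mean()
print(round(float(np.corrcoef(pred, y)[0, 1]), 3))
```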
Aaa ped-10-Supervised Learning: Introduction to Supervised LearningAminaRepo
Machine learning is a branch of AI that involves learning from data. In other words, you build a model and adjust it using the available data. You then use that model to predict a certain behaviour from other data.
In supervised learning, you use labeled data, and your model learns how to predict these labels for any other given data.
You will encounter two types of supervised learning: classification and regression.
In classification, the labels represent categories of data; in regression, the labels are simply values that correspond to each sample.
[Notebook](https://colab.research.google.com/drive/1CgOXIsafoxgDo3jDOFjq56aL5RNVrWJq)
Aaa ped-9-Data manipulation: Time Series & Geographical visualizationAminaRepo
Data can represent information about time and space. In this section, we talk about time data and the related operations.
We will also see how to plot and visualise geographic data using the basemap library.
[Notebook](https://colab.research.google.com/drive/16TyzQ1w8km6RwTZwHbmv_FZlZM2ptM77)
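A small illustration of the time-data side with pandas (hypothetical hourly readings resampled to daily means):

```python
import pandas as pd

# Hypothetical hourly sensor readings over two days.
idx = pd.date_range("2024-01-01", periods=48, freq="h")
temps = pd.Series(range(48), index=idx, dtype=float)

# Resample hourly data to daily means.
daily = temps.resample("D").mean()
print(daily.tolist())  # → [11.5, 35.5]
```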
Aaa ped-8-Data manipulation: Plotting and VisualizationAminaRepo
The best way to understand your data is to visualise it. To do so, you can use one of the available Python libraries.
We will introduce you to some of them: seaborn, matplotlib, plotly, etc. Some are dedicated to data visualisation; others offer plotting as an additional functionality.
[Notebook](https://colab.research.google.com/drive/1LnRQo7194PdvITDzHZOLiTNeJ9JG5jDY)
Aaa ped-8- Data manipulation: Data wrangling, aggregation, and group operationsAminaRepo
We continue with data manipulation operations, especially those covering indexing and aggregating data.
These operations are necessary because, in real life, data may be messy, incomplete, and not in the right format. In order to use it, it has to be cleaned, unified, and transformed.
[Notebook](https://colab.research.google.com/drive/1kexu32solZIT-P2pC_iyMFMwYcZ8JKmP)
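A minimal group-operation example with pandas (the sales records are hypothetical): group rows by a key, then aggregate each group.

```python
import pandas as pd

# Hypothetical sales records.
df = pd.DataFrame({
    "region": ["north", "south", "north", "south", "north"],
    "units":  [10, 20, 15, 5, 30],
})

# Group rows by region, then aggregate each group.
summary = df.groupby("region")["units"].agg(["sum", "mean"])
print(summary)
```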
Aaa ped-6-Data manipulation: Data Files, and Data Cleaning & PreparationAminaRepo
Since machine learning is an important part of AI, manipulating data represents an important part of the process of building learning models.
So we will talk about reading and writing data to disk, handling missing data, and other important utilities.
[Notebook](https://colab.research.google.com/drive/1RVrn0NVrUtx5gZsOKv-Ecw1pj5zDjJuS)
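A small illustration of handling missing data with pandas (the dataset is invented): count the gaps, then either fill them or drop the incomplete rows.

```python
import numpy as np
import pandas as pd

# Hypothetical dataset with gaps (NaN / None mark missing values).
df = pd.DataFrame({
    "age":  [25.0, np.nan, 31.0, np.nan],
    "city": ["Oslo", "Lyon", None, "Rome"],
})

print(df.isna().sum())            # count missing values per column
filled = df.fillna({"age": df["age"].mean(), "city": "unknown"})
dropped = df.dropna()             # alternative: discard incomplete rows
print(len(filled), len(dropped))
```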
Another important library to discover is Pandas. It defines two other important structures: Series and DataFrames.
Of course, you will find examples of how to use this library and all the operations related to its data structures.
[Notebook](https://colab.research.google.com/drive/1hrOfW62t8iYZhclc378J1i15AyPNu9GD)
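A quick sketch of the two structures (the people and measurements are made up): a Series is a labeled 1-D array, and a DataFrame is a table of named Series.

```python
import pandas as pd

# A Series is a labeled 1-D array; a DataFrame is a table of named Series.
s = pd.Series([4, 7, 2], index=["a", "b", "c"])
df = pd.DataFrame({"height": [1.80, 1.65], "weight": [80, 60]},
                  index=["Ana", "Bo"])

print(s["b"])                                  # label-based access → 7
print(df.loc["Ana", "height"])
df["bmi"] = df["weight"] / df["height"] ** 2   # vectorized column arithmetic
print(df)
```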
One of the most interesting things about Python is its libraries. They are simply modules of predefined classes, methods, and functions. They encapsulate the most important utilities, which greatly facilitate programming in general and AI applications in particular.
One of the indispensable Python libraries is the NumPy library. It defines the ndarray class, among other classes and utilities.
In this lesson you will find examples of how to use this library.
[Notebook](https://colab.research.google.com/drive/1zsuCMTsXrHHRnT7RzcNkJIdH4uN-TVHz)
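A few ndarray basics as a sketch: an ndarray is a fixed-type n-dimensional array with vectorized operations.

```python
import numpy as np

# ndarray: fixed-type n-dimensional array with vectorized operations.
a = np.arange(12).reshape(3, 4)

print(a.shape)          # → (3, 4)
print(a.sum(axis=0))    # column sums → [12 15 18 21]
print(a[1, 2])          # row 1, column 2 → 6
print((a * 2)[0])       # element-wise arithmetic → [0 2 4 6]
```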
In this section, we will see more advanced concepts related to Python. We will introduce you to different types of data structures, such as lists, tuples, and dictionaries.
We will also define an important concept in programming: control flow statements. We will show how to use conditional and repetitive statements.
Finally, we will talk about different concepts of object-oriented programming (among other concepts), and how to implement them in Python.
[Notebook link](https://drive.google.com/file/d/11AjOGxmhz-YOHVVqDVQVlpjVJMrSrDn2/view?usp=drivesdk)
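A compact sketch touching the three topics above, data structures, control flow, and a small class (all names and values are illustrative):

```python
# Lists, tuples, dictionaries, control flow, and a small class in one sketch.
point = (2, 3)                        # tuple: immutable
squares = [n ** 2 for n in range(5)]  # list built with a comprehension
ages = {"ana": 30, "bo": 25}          # dictionary: key -> value

for name, age in ages.items():        # iterate over key/value pairs
    if age >= 30:
        print(name, "is", age)

class Counter:
    """Minimal example of a class with state and a method."""
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

c = Counter()
c.increment()
print(squares, point, c.increment())  # → [0, 1, 4, 9, 16] (2, 3) 2
```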
In this lesson, we will start by defining the basic concepts related to Python: operations, variables, and basic types.
Then we will define the concept of a function, and how to use it.
[Notebook link](https://drive.google.com/file/d/1T4T32lyCkT4J992tfnKUADQU88Qvlrvc/view?usp=drivesdk)
Finally, we will explain the concept of modules and libraries, and show how to install these libraries in Google Colab.
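A minimal sketch of a function plus a standard-library import (the `circle_area` helper is invented for illustration; third-party libraries are installed in Colab with `!pip install <name>`):

```python
# math is part of the standard library, so no installation is needed.
import math

def circle_area(radius):
    """A simple function: takes a radius, returns the disc area."""
    return math.pi * radius ** 2

print(round(circle_area(2.0), 2))  # → 12.57
```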
This PDF is about schizophrenia. For more details, see the SELF-EXPLANATORY channel on YouTube: https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
Nutraceutical market, scope and growth: Herbal drug technologyLokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market, which includes goods like functional meals, drinks, and dietary supplements that provide health advantages beyond basic nutrition, is growing significantly. As healthcare expenses rise, the population ages, and people increasingly seek natural and preventive health solutions, this industry is expanding quickly. Product formulation innovations and the use of cutting-edge technology for customized nutrition are further driving market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and provide significant opportunities for research and investment in a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep...University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a...Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich in features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and their capacity to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization.
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Slide 1: Title Slide
Extrachromosomal Inheritance
Slide 2: Introduction to Extrachromosomal Inheritance
Definition: Extrachromosomal inheritance refers to the transmission of genetic material that is not found within the nucleus.
Key Components: Involves genes located in mitochondria, chloroplasts, and plasmids.
Slide 3: Mitochondrial Inheritance
Mitochondria: Organelles responsible for energy production.
Mitochondrial DNA (mtDNA): Circular DNA molecule found in mitochondria.
Inheritance Pattern: Maternally inherited, meaning it is passed from mothers to all their offspring.
Diseases: Examples include Leber’s hereditary optic neuropathy (LHON) and mitochondrial myopathy.
Slide 4: Chloroplast Inheritance
Chloroplasts: Organelles responsible for photosynthesis in plants.
Chloroplast DNA (cpDNA): Circular DNA molecule found in chloroplasts.
Inheritance Pattern: Often maternally inherited in most plants, but can vary in some species.
Examples: Variegation in plants, where leaf color patterns are determined by chloroplast DNA.
Slide 5: Plasmid Inheritance
Plasmids: Small, circular DNA molecules found in bacteria and some eukaryotes.
Features: Can carry antibiotic resistance genes and can be transferred between cells through processes like conjugation.
Significance: Important in biotechnology for gene cloning and genetic engineering.
Slide 6: Mechanisms of Extrachromosomal Inheritance
Non-Mendelian Patterns: Do not follow Mendel’s laws of inheritance.
Cytoplasmic Segregation: During cell division, organelles like mitochondria and chloroplasts are randomly distributed to daughter cells.
Heteroplasmy: Presence of more than one type of organellar genome within a cell, leading to variation in expression.
Slide 7: Examples of Extrachromosomal Inheritance
Four O’clock Plant (Mirabilis jalapa): Shows variegated leaves due to different cpDNA in leaf cells.
Petite Mutants in Yeast: Result from mutations in mitochondrial DNA affecting respiration.
Slide 8: Importance of Extrachromosomal Inheritance
Evolution: Provides insight into the evolution of eukaryotic cells.
Medicine: Understanding mitochondrial inheritance helps in diagnosing and treating mitochondrial diseases.
Agriculture: Chloroplast inheritance can be used in plant breeding and genetic modification.
Slide 9: Recent Research and Advances
Gene Editing: Techniques like CRISPR-Cas9 are being used to edit mitochondrial and chloroplast DNA.
Therapies: Development of mitochondrial replacement therapy (MRT) for preventing mitochondrial diseases.
Slide 10: Conclusion
Summary: Extrachromosomal inheritance involves the transmission of genetic material outside the nucleus and plays a crucial role in genetics, medicine, and biotechnology.
Future Directions: Continued research and technological advancements hold promise for new treatments and applications.
Slide 11: Questions and Discussion
Invite Audience: Open the floor for any questions or further discussion on the topic.
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool utilized to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been accomplished using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows for the ultra-fast high-resolution imaging of cellular processes over time and space and were studied in its natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provide insights into the progression of disease, response to treatments or developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system’s unique features and user-friendly software enables researchers to probe fast dynamic biological processes such as immune cell tracking, cell-cell interaction as well as vascularization and tumor metastasis with exceptional detail. This webinar will also give an overview of IVM being utilized in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allows for the evaluation of therapeutic intervention in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancements of novel therapeutic strategies.
Multi-source connectivity as the driver of solar wind variability in the heli...Sérgio Sacani
The ambient solar wind that flls the heliosphere originates from multiple
sources in the solar corona and is highly structured. It is often described
as high-speed, relatively homogeneous, plasma streams from coronal
holes and slow-speed, highly variable, streams whose source regions are
under debate. A key goal of ESA/NASA’s Solar Orbiter mission is to identify
solar wind sources and understand what drives the complexity seen in the
heliosphere. By combining magnetic feld modelling and spectroscopic
techniques with high-resolution observations and measurements, we show
that the solar wind variability detected in situ by Solar Orbiter in March
2022 is driven by spatio-temporal changes in the magnetic connectivity to
multiple sources in the solar atmosphere. The magnetic feld footpoints
connected to the spacecraft moved from the boundaries of a coronal hole
to one active region (12961) and then across to another region (12957). This
is refected in the in situ measurements, which show the transition from fast
to highly Alfvénic then to slow solar wind that is disrupted by the arrival of
a coronal mass ejection. Our results describe solar wind variability at 0.5 au
but are applicable to near-Earth observatories.
4. 4
1-SVD filtering With Surprise
[By Amina Delali]
Concept
●
Make the assumption that there are factors (characteristics) related to each item. Each item can be described by the degree of presence of each characteristic in that item. At the same time, each user can have a different degree of interest in each of those characteristics.
●
These two relationships can be modeled by two matrices:
➢ P(m,f): models the interests of each user u in the f characteristics, in a row vector pu
➢ Q(n,f): models the extent of presence of each characteristic in an item i, in a row vector qi
●
The interaction between each user and item is computed by:
➢ qiT ⋅ pu, which could estimate the rating of the user u for the item i
➢ The estimation is enhanced by other parameters to explain the bias in ratings:
^rui = μ + bu + bi + qiT ⋅ pu
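As an illustration, the biased prediction above can be computed in a few lines of plain Python. All numbers here are made-up values, not taken from a real dataset:

```python
# Biased SVD prediction: r̂_ui = mu + b_u + b_i + q_i^T . p_u
def predict(mu, b_u, b_i, q_i, p_u):
    """Estimate the rating of user u for item i."""
    dot = sum(q * p for q, p in zip(q_i, p_u))  # q_i^T . p_u
    return mu + b_u + b_i + dot

# Illustrative values: global mean, user bias, item bias, and f=3 factors
r_hat = predict(mu=3.5, b_u=0.2, b_i=-0.1,
                q_i=[0.5, 1.0, 0.2], p_u=[1.0, 0.5, 0.0])
print(round(r_hat, 2))  # 4.6
```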
5. 5
Computation
●
Singular Value Decomposition (SVD) could be used to extract the matrices P and Q. The ratings could also be used to estimate the bias values, from the mean of all the ratings, the mean of the ratings of each user, and the mean of the ratings of each item.
●
The problem is the fact that not all the ratings of all the users for all the items are available. This is why we have to find another way to estimate these values.
●
The estimated values should minimize the following equation, where the sum runs over the available ratings only, and λ is a regularization parameter (a constant value):
∑rui∈Rtrain (rui − ^rui)² + λ(bi² + bu² + ‖qi‖² + ‖pu‖²)
●
‖qi‖² is the square of the norm of the vector qi: the norm of qi is the square root of the sum of the squares of its values.
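A minimal sketch of this objective in plain Python, summing over a hypothetical dict that holds only the observed ratings:

```python
# Regularized squared error over the available ratings only (sum over R_train)
def objective(ratings, mu, b_u, b_i, P, Q, lam):
    """ratings: dict {(u, i): r_ui} holding only the observed ratings."""
    total = 0.0
    for (u, i), r in ratings.items():
        pred = mu + b_u[u] + b_i[i] + sum(q * p for q, p in zip(Q[i], P[u]))
        err = r - pred                              # r_ui - r̂_ui
        reg = lam * (b_i[i] ** 2 + b_u[u] ** 2
                     + sum(q * q for q in Q[i])     # ||q_i||^2
                     + sum(p * p for p in P[u]))    # ||p_u||^2
        total += err ** 2 + reg
    return total

# One observed rating, f=2 factors (all numbers are illustrative)
loss = objective({(0, 0): 4.0}, mu=3.5, b_u=[0.1], b_i=[0.0],
                 P=[[1.0, 0.0]], Q=[[0.5, 0.5]], lam=0.02)
print(loss)
```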
6. 6
2-SVD Filtering: More details
Stochastic Gradient Descent
●
Gradient descent is an iterative algorithm that tries to find the (a local) minimum of a function. In machine learning, the gradient descent variants are used to estimate a model’s parameters by minimizing a cost function, recursively updating these parameters.
●
SGD (stochastic gradient descent) is a variation in which, in one iteration (epoch), the parameters are updated for each sample (in our case, for each rating). So in one epoch the parameters could be updated several times:
➢
The 4 parameters are initialized.
➢
For each rating, a prediction is made and the difference eui = rui − ^rui is computed.
➔
Then, the difference is used to update the parameter values this way, where γ is the learning rate, another constant, which defines the size of the update steps:
bu ← bu + γ(eui − λ bu)
bi ← bi + γ(eui − λ bi)
pu ← pu + γ(eui ⋅ qi − λ pu)
qi ← qi + γ(eui ⋅ pu − λ qi)
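The update rules above can be sketched as a small pure-Python SGD loop. The toy ratings and hyper-parameter values below are illustrative only:

```python
import random

def sgd(ratings, n_users, n_items, n_factors=2, n_epochs=50,
        gamma=0.01, lam=0.02, seed=0):
    """Fit biases and factors with the SGD update rules above (toy sketch)."""
    rng = random.Random(seed)
    mu = sum(ratings.values()) / len(ratings)      # global mean of the ratings
    b_u = [0.0] * n_users                          # baselines initialized to 0
    b_i = [0.0] * n_items
    P = [[rng.gauss(0, 0.1) for _ in range(n_factors)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(n_factors)] for _ in range(n_items)]
    for _ in range(n_epochs):
        for (u, i), r in ratings.items():          # one update per rating
            pred = mu + b_u[u] + b_i[i] + sum(q * p for q, p in zip(Q[i], P[u]))
            e = r - pred                           # e_ui = r_ui - r̂_ui
            b_u[u] += gamma * (e - lam * b_u[u])
            b_i[i] += gamma * (e - lam * b_i[i])
            for f in range(n_factors):
                puf, qif = P[u][f], Q[i][f]
                P[u][f] += gamma * (e * qif - lam * puf)
                Q[i][f] += gamma * (e * puf - lam * qif)
    return mu, b_u, b_i, P, Q

# Toy ratings {(user, item): rating}
mu, b_u, b_i, P, Q = sgd({(0, 0): 5, (0, 1): 1, (1, 0): 4}, n_users=2, n_items=2)
```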
7. 7
Stochastic Gradient Descent (continued)
➢
The process is repeated for a certain number of iterations in order to find a local minimum of the previous equation.
●
In the Surprise library, the parameters are as follows:
➢ The parameters bu and bi (also called baselines) are initialized to 0.
➢ The user and item factors pu and qi are randomly initialized according to a normal distribution defined by the mean (init_mean) and the standard deviation (init_std_dev) parameters.
➢ The learning rate γ (lr_all) is set by default to 0.005, and the regularization parameter λ (reg_all) to 0.02.
➢ By default the number of factors is 100.
➢ The number of iterations is by default set to 20 (n_epochs).
➢ To use the biases (baselines), the biased parameter is set by default to True.
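The defaults listed above map onto Surprise's SVD constructor roughly as follows. This is a sketch; the import is guarded so the snippet also runs where Surprise is not installed:

```python
# Default hyper-parameters of Surprise's SVD, as discussed above:
# gamma (learning rate) = lr_all, lambda (regularization) = reg_all.
defaults = dict(
    n_factors=100,     # number of factors f
    n_epochs=20,       # number of SGD iterations
    lr_all=0.005,      # learning rate gamma, for all parameters
    reg_all=0.02,      # regularization lambda, for all parameters
    biased=True,       # use the baselines b_u and b_i
    init_mean=0,       # mean of the normal init of the factors
    init_std_dev=0.1,  # standard deviation of the normal init of the factors
)

try:
    from surprise import SVD
    algo = SVD(**defaults)  # same as SVD() with no arguments
except ImportError:
    algo = None  # Surprise not installed; the defaults above still apply
```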
9. 9
3-Filtering with SVM Classification
Concept
●
The other way to perform model-based collaborative filtering is to train a model on users’ reviews, and then to use that model to predict new ones for new items.
●
In this lesson we will present an implementation using an SVM (Support Vector Machine). Precisely, we will use a Linear SVM classifier to predict the new reviews.
●
As described in [Xia et al., 2006], there are two ways to consider the problem:
➢
Each item represents a class, and the training set is the users’ ratings for each item other than that item.
➢
Each user represents a class, and the training set is the items’ ratings according to each user other than that user.
●
But the problem here is that the matrices representing the ratings will not be complete. So we will use default values for missing ratings.
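A toy sketch of this idea with scikit-learn's LinearSVC. The ratings matrix below is made up; 0 is kept as the default value for missing ratings, and (as on the following slides) the classes are the rating values of one active user:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Made-up ratings matrix (rows = users, columns = items); 0 marks a
# missing rating and is kept as the default value, as discussed above.
R = np.array([
    [5, 3, 0, 1, 4],   # the active user, whose ratings define the classes
    [4, 0, 4, 1, 5],
    [1, 1, 5, 5, 0],
    [0, 2, 4, 4, 1],
])
active = 0
rated = R[active] > 0    # items the active user has rated
X = R[1:, rated].T       # features: the other users' ratings, per item
y = R[active, rated]     # labels: the active user's own ratings

clf = LinearSVC(dual=False).fit(X, y)

# Predict the active user's rating for the item he did not rate (column 2)
prediction = clf.predict(R[1:, ~rated].T)
print(prediction)
```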
10. 10
The original data
●
We will use the data we already downloaded using the Dataset module from Surprise. But first, we will access the downloaded dataset file directly, to see its content.
11. 11
The features and Labels
●
We will apply an SVC classifier for one user, and the classes will be the different ratings.
●
We have to construct the features matrix corresponding to the items rated by the user "226", and construct the corresponding label vector using the ratings of that user.
●
It is more convenient to use the data built by the Surprise library than the original file.
12. 12
The features and Labels (continued)
All these values are unavailable ratings: which means that the corresponding users didn’t rate the corresponding items.
13. 13
Prediction for one item
●
A linear SVM classifier, after dropping the column corresponding to the user 218 (“226”).
Using the model to predict the ratings of that item for the user: all predicted values are either approaching 4 or slightly bigger than 4.
14. 14
4-Some Tests
Splitting the data
●
We will just split the data that we have already created, using 2 methods:
➢
split into test and training sets
➢
split into folds (cross-validation)
●
We will not run our tests on all the data as in the previous examples.
●
We will use only the 50 items related to the (active) user “226”.
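The two splitting schemes can be sketched in plain Python. The list below is an illustrative stand-in for the 50 ratings of the active user "226":

```python
import random

# Illustrative stand-in for the 50 ratings of the active user "226"
ratings = list(range(50))
rng = random.Random(0)
rng.shuffle(ratings)

# 1) Split into training and test sets (80/20)
cut = int(0.8 * len(ratings))
train, test = ratings[:cut], ratings[cut:]

# 2) Split into k folds for cross-validation
k = 5
folds = [ratings[i::k] for i in range(k)]
print(len(train), len(test), [len(f) for f in folds])
```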
15. 15
The prediction with the test/train split
●
The missing label is not represented.
16. 16
Prediction with cross-validation
●
To see the available measures (scoring).
Same results as with the KNN collaborative filtering.
17. 17
5-Predictions with Custom Data: Preparation
The data
●
We will use the data available at the Artificial Intelligence with Python GitHub Repository.
No rating is available for the movie “Raging Bull” by “Bill Duffy”.
The way the data is organized is not convenient for Surprise, so we will have to rearrange it.
A user’s name: later it will be the user’s raw_id. Movie names.
18. 18
Prepare the data
●
To be used with Surprise, the dataframe must have its columns organized this way: user_id, item_id, and ratings. Which is not the case in our DataFrame.
Now the movie names are in a column, and all the users and the corresponding ratings are in 2 columns (wide to long conversion).
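The wide-to-long rearrangement can be sketched with pandas. The names and ratings below are hypothetical placeholders for the real dataframe:

```python
import pandas as pd

# Hypothetical wide dataframe: one column per user, one row per movie,
# NaN where no rating is available.
wide = pd.DataFrame(
    {"Adam Cohen": [4.5, None], "Bill Duffy": [None, 5.0]},
    index=["Raging Bull", "Vertigo"],
)
wide.index.name = "item_id"

# Surprise expects the columns in the order: user_id, item_id, rating.
long = (
    wide.reset_index()
        .melt(id_vars="item_id", var_name="user_id", value_name="rating")
        .dropna(subset=["rating"])           # keep only the available ratings
        [["user_id", "item_id", "rating"]]   # wide-to-long result, reordered
)
print(long)
```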
20. 20
6-Predictions with Custom Data: Prediction
Predict a review for one item
●
We will use the SVD technique to predict the review of the user Adam Cohen for the movie Raging Bull.
●
If we wanted to use an SVM classifier, we would:
➢
Use the original dataframe, and select only the rows corresponding to the movies rated by “Adam”
➢
Use the Raging Bull row values for the prediction
➢
Replace the NaN values by a default value
We load the data from the dataframe we already prepared.
21. 21
Make a list of recommendations
●
The user Chris Duncan rated only 2 movies. We will make a list of recommendations of movies he didn’t rate by:
●
predicting his reviews of these movies
●
ordering the predicted reviews
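The ordering step can be sketched in plain Python. The (movie, predicted rating) pairs below are made-up values standing in for the predictions produced for the movies the user did not rate:

```python
# Made-up (movie, predicted rating) pairs, standing in for the predicted
# reviews of the movies the user did not rate.
predicted = {
    "Goodfellas": 4.3,
    "Roman Holiday": 2.1,
    "Scarface": 3.8,
}

# Order the predicted reviews, best first, to get the recommendation list
recommendations = sorted(predicted, key=predicted.get, reverse=True)
print(recommendations)  # ['Goodfellas', 'Scarface', 'Roman Holiday']
```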
22. References
●
[Buitinck et al., 2013] Buitinck, L., Louppe, G., Blondel, M., Pedregosa, F., Mueller, A., Grisel, O., Niculae, V., Prettenhofer, P., Gramfort, A., Grobler, J., Layton, R., VanderPlas, J., Joly, A., Holt, B., and Varoquaux, G. (2013). API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pages 108–122.
●
[Francesco et al., 2011] Francesco, R., Lior, R., Bracha, S., and Paul B., K., editors (2011). Recommender Systems Handbook. Springer Science+Business Media.
●
[Hug, 2017] Hug, N. (2017). Surprise, a Python library for recommender systems. http://surpriselib.com.
●
[Xia et al., 2006] Xia, Z., Dong, Y., and Xing, G. (2006). Support vector machines for collaborative filtering. In Proceedings of the 44th Annual Southeast Regional Conference, pages 169–174. ACM.