Introduction to LSTM, the deep learning algorithm behind Google Voice transcriptions, explained without any mathematical equations. Aimed mostly at a non-technical audience with no data-science background.
Natural Language Processing (NLP) - Introduction - Aritra Mukherjee
This presentation provides a beginner-friendly introduction to Natural Language Processing in a way that arouses interest in the field. I have made an effort to include as many easy-to-understand examples as possible.
Natural language processing (NLP) is introduced, including its definition, common steps like morphological analysis and syntactic analysis, and applications like information extraction and machine translation. Statistical NLP aims to perform statistical inference for NLP tasks. Real-world applications of NLP are discussed, such as automatic summarization, information retrieval, question answering and speech recognition. A demo of a free NLP application is presented at the end.
Convolutional Neural Networks and Natural Language Processing - Thomas Delteil
Presentation on Convolutional Neural Networks and their application to Natural Language Processing, with an in-depth walk-through of the Crepe architecture from Xiang Zhang, Junbo Zhao, and Yann LeCun, "Character-level Convolutional Networks for Text Classification", Advances in Neural Information Processing Systems 28 (NIPS 2015).
Loosely based on ODSC London 2016 talk: https://www.slideshare.net/MiguelFierro1/deep-learning-for-nlp-67182819
Code: https://github.com/ThomasDelteil/TextClassificationCNNs_MXNet
Demo: https://thomasdelteil.github.io/TextClassificationCNNs_MXNet/
(flattened PDF, no animations; email the author for the .pptx)
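The Crepe model above reads text character by character rather than word by word. As a rough illustration of that input encoding, here is a minimal sketch of character-level one-hot quantization; the alphabet and sequence length are illustrative placeholders, not the paper's configuration (which uses a 70-character alphabet and a fixed length of 1014):

```python
# Character-level one-hot encoding in the spirit of character-level CNNs.
# The alphabet and max_len below are illustrative, not the paper's values.

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 "
CHAR_INDEX = {c: i for i, c in enumerate(ALPHABET)}

def quantize(text, max_len=16):
    """Turn a string into a list of one-hot vectors, truncated/padded to max_len.
    Characters outside the alphabet become all-zero vectors."""
    matrix = []
    for ch in text.lower()[:max_len]:
        vec = [0] * len(ALPHABET)
        idx = CHAR_INDEX.get(ch)
        if idx is not None:
            vec[idx] = 1
        matrix.append(vec)
    # pad with all-zero vectors up to the fixed length
    while len(matrix) < max_len:
        matrix.append([0] * len(ALPHABET))
    return matrix

m = quantize("NLP!")
```

The resulting fixed-size matrix is what the convolutional layers then slide over, exactly as they would over an image.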
IBM Bluemix Paris Meetup #27 20171219 - Introduction to NLP with Recast.ai - IBM France Lab
This document provides an introduction to natural language processing (NLP) and discusses its history and current state of the art. It covers early work in machine translation in the 1950s-60s and rule-based NLP systems in the 1960s-80s. It then discusses the increasing use of machine learning for NLP beginning in the 1980s with algorithms like decision trees and HMMs for part-of-speech tagging. More recent developments include the rise of neural networks and their application to tasks like intent classification, named entity recognition, and dialog management. The document concludes by framing chatbots as applications that integrate many ML techniques for NLP.
NLTK - Natural Language Processing in Python - shanbady
For full details, including the address, and to RSVP see: http://www.meetup.com/bostonpython/calendar/15547287/ NLTK is the Natural Language Toolkit, an extensive Python library for processing natural language. Shankar Ambady will give us a tour of just a few of its extensive capabilities, including sentence parsing, synonym finding, spam detection, and more. Linguistic expertise is not required, though if you know the difference between a hyponym and a hypernym, you might be able to help the rest of us! Socializing at 6:30, Shankar's presentation at 7:00. See you at the NERD.
Deep learning for fun and profit [pyconde 2018] - Königsweg GmbH
The document discusses deep learning and neural networks. It begins by introducing the speaker and outlining his background in data science. It then describes some of his previous talks on generating short stories with neural networks and on style transfer techniques. The rest of the document discusses potential applications of neural networks like generating artwork, text, and speech in the style of an existing work (The Three Investigators book series) but also acknowledges the challenges in developing the needed datasets and models. It reflects on common assumptions about neural networks and their abilities, finding that in reality they require a lot more resources and are not as smart as sometimes believed. The document advocates for managing expectations on what neural networks can achieve with current capabilities.
There are all these great blog posts about Deep Learning describing all that awesome stuff. Is it all that easy? Let's check! This is part 2 of an ongoing series of adventures in Deep Learning for fun, research and business.
Alexander's professional career has always been about digitalisation: starting from vinyl records in the nineties to advanced data analytics nowadays. He is program chair of Europe's main Python conference EuroPython, one of the 25 MongoDB Masters, an organiser of PyConDE, and a regular contributor to the tech community. He has spoken at many international conferences in Silicon Valley, New York, London, Florence, and Paris. He is a partner at Königsweg (http://koenigsweg.com), a consultancy for digitalisation, high-tech, and data science, where he consults enterprises on data matters and trains individuals in Python and AI.
This talk covers style transfer (making a picture look like a painting), speech generation (like Siri or Alexa), and text generation (writing a story). It describes the whole journey: a fun ride from the idea to the very end, including all the struggles, failures, and successes.
Steps covered:
- The data challenge: get the data ready
- Have it run on your Mac with PyTorch and an eGPU
- Creating a character-level language model with a Recurrent Neural Network
- Creating a text generator
- Creating artwork
- Data challenges and solutions in the non-English NLP space
This presentation on Recurrent Neural Networks will help you understand what a neural network is, which neural networks are popular, why we need recurrent neural networks, what a recurrent neural network is, how an RNN works, what the vanishing and exploding gradient problems are, and what an LSTM is; you will also see a use-case implementation of LSTM (Long Short-Term Memory). Neural networks used in Deep Learning consist of different layers connected to each other, modelled on the structure and function of the human brain. They learn from huge volumes of data and use complex algorithms to train a neural net. A recurrent neural network works on the principle of saving the output of a layer and feeding it back to the input in order to predict the layer's output. Now let's dive into this presentation and understand what an RNN is and how it actually works.
The topics below are explained in this recurrent neural networks tutorial:
1. What is a neural network?
2. Popular neural networks
3. Why recurrent neural network?
4. What is a recurrent neural network?
5. How does an RNN work?
6. Vanishing and exploding gradient problem
7. Long short term memory (LSTM)
8. Use case implementation of LSTM
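The recurrence the tutorial describes - saving a layer's output and feeding it back in with the next input - can be sketched in a few lines. The weights and sizes below are arbitrary illustrative constants, not trained values from the slides:

```python
import math

# A single-unit recurrent cell: the hidden state produced at each step is
# combined with the next input, which is the "save the output of a layer
# and feed it back" principle. W_X, W_H, and BIAS are illustrative
# constants, not trained parameters.
W_X, W_H, BIAS = 0.5, 0.8, 0.1

def rnn_step(x, h_prev):
    """One recurrence step: mix the current input with the previous state."""
    return math.tanh(W_X * x + W_H * h_prev + BIAS)

def run_sequence(xs, h0=0.0):
    """Unroll the cell over a sequence, collecting the hidden state at each step."""
    h = h0
    states = []
    for x in xs:
        h = rnn_step(x, h)
        states.append(h)
    return states

states = run_sequence([1.0, 0.0, -1.0])
```

Because backpropagation through this unrolled loop multiplies by the tanh derivative (at most 1) and by W_H at every step, gradients shrink or blow up geometrically over long sequences - the vanishing/exploding gradient problem that the LSTM's gated cell (topic 7) was designed to address.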
Simplilearn’s Deep Learning course will transform you into an expert in deep learning techniques using TensorFlow, the open-source software library designed for machine learning and deep neural network research. With our deep learning course, you'll master deep learning and TensorFlow concepts, learn to implement algorithms, build artificial neural networks, and traverse layers of data abstraction to understand the power of data, preparing you for your new role as a deep learning scientist.
Why Deep Learning?
TensorFlow is one of the most popular software platforms used for deep learning and contains powerful tools to help you build and implement artificial neural networks.
Advancements in deep learning are being seen in smartphone applications, creating efficiencies in the power grid, driving advancements in healthcare, improving agricultural yields, and helping us find solutions to climate change. With this TensorFlow course, you’ll build expertise in deep learning models, learn to operate TensorFlow to manage neural networks, and interpret the results.
And according to payscale.com, the median salary for engineers with deep learning skills tops $120,000 per year.
You can gain in-depth knowledge of Deep Learning by taking our Deep Learning certification training course. With Simplilearn’s Deep Learning course, you will prepare for a career as a Deep Learning engineer as you master concepts and techniques including supervised and unsupervised learning, mathematical and heuristic aspects, and hands-on modeling to develop algorithms. Those who complete the course will be able to:
Learn more at: https://www.simplilearn.com/
Two decades ago Extreme Programming revolutionized software development with a set of principles and practices that help to improve product quality, user experience, efficiency and well-being of teams. In this presentation we will discuss how such a methodology can be even more important to deliver valuable and reliable Data Science products meeting ever-growing speed-to-market expectations.
Twenty years ago, Extreme Programming was an innovative framework with software-engineering practices without which we could no longer imagine producing quality software today. In this presentation, we will see how the practices of Extreme Data Science, which stand on the shoulders of the giant that is Extreme Programming, let us successfully integrate data scientists and their projects into teams, and help ensure the quality of data-science deliverables that offer optimal functionality to the user.
Make Data Science Great Again. Pourquoi et comment crafter la Data Science su... - Anastasia Bobyreva
It is not easy to integrate Data Science into companies whose business was not originally built around artificial intelligence (AI), and for which AI is not at the heart of the trade. Despite the motivation to use AI, many Data Science projects in these companies fail.
This is as frustrating for company leaders as it is demotivating for the data scientists whose projects end up shelved. Together we will analyse this situation to determine the reasons for these failures. We will also look at how to avoid the most common mistakes, and how to carry out this change smoothly in order to enrich your products with AI.
The goal of the talk is that whatever your profile - front-end dev, back-end dev, data scientist, CTO, CEO, Product Manager - you will go back to your company on Monday knowing how to both identify and successfully pursue Data Science opportunities.
Presentation of Learn Link, the first social network that aims to connect people based on what they want to learn or teach, and to boost motivation during learning.
https://twitter.com/swmtp/status/1005849400466464768
Thanks to my great teammates (slide 14) for their work and motivation!
Big Data Science in Scala (Joker 2017, slides in Russian) - Anastasia Bobyreva
"You have to run as fast as you can just to stay in place, and to get anywhere you have to run at least twice as fast!" - a data scientist in Wonderland.
Data science must at the very least keep pace with the constantly growing volume and complexity of data, and ideally should stay ahead, anticipating the potential problems that arise when processing it.
In this talk you will see how the Scala libraries Saddle, Smile and Spark help data science meet the constantly evolving demands of the infrastructure, easing analysis and extending the capabilities of descriptive statistics, data processing and machine learning. They are helped in this by the functional aspects of the Scala language, its favourable big-data ecosystem, and its hybrid nature with object-oriented programming.
Using the example of click prediction in online advertising, we will explore the capabilities, advantages and development paths of Scala for data science.
How to get the best of both worlds: Big Data and Data Science?
Run Deep Learning on Spark easily with the BigDL library!
Slides from my short talk, an introduction to BigDL, given at the Christmas JUG event in Montpellier.
1. The document discusses big data and data science libraries in Scala for tasks like preprocessing, machine learning, and evaluation.
2. It demonstrates using Spark and Smile libraries on a real dataset to optimize click-through rates by analyzing features like OS, categories, and time.
3. The document compares the performance of Spark and Smile for random forest classification and regression on a 13GB dataset.
Which library should you choose for data-science? That's the question! - Anastasia Bobyreva
This talk presents the data-science ecosystem in two languages: Python and Scala. It demonstrates the use of their libraries on a real dataset to solve a binary classification problem with a decision tree algorithm.
This talk presents how three Scala libraries - Smile, Saddle and Spark ML - satisfy the requirements of new Big Data Science projects, using the example of click-through rate prediction.
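The talks above use library decision trees (Smile, Spark ML, or their Python counterparts). As a language-neutral sketch of the core idea those libraries optimize at every node, here is a one-level decision tree (a stump) choosing a split by Gini impurity; the feature and labels are toy values, not the talks' click-through dataset:

```python
# A decision stump for binary classification: pick the single threshold on
# one numeric feature that minimizes weighted Gini impurity. Library trees
# repeat this search recursively at every node. Data below is illustrative.

def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_threshold(xs, ys):
    """Threshold t minimizing the weighted impurity of {x <= t} vs {x > t}."""
    best_t, best_score = None, float("inf")
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# Toy data: feature = hour of day, label = clicked (1) or not (0).
hours = [1, 2, 3, 20, 21, 22]
clicked = [0, 0, 0, 1, 1, 1]
threshold = best_threshold(hours, clicked)
```

On this toy data the stump finds the split `hour <= 3`, which separates the classes perfectly; a full tree would keep splitting each side until a stopping criterion is met.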
The debris of the ‘last major merger’ is dynamically young - Sérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different from the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
ESR spectroscopy in liquid food and beverages.pptx - PRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various treatment methods for preserving food, and irradiation is one of them. It is the most common and most harmless method of food preservation, as it does not alter the necessary micronutrients of the food. Although irradiated food does not harm human health, quality assessment of the food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for detecting highly unstable radicals in food. Assessment of the antioxidant capability of liquid food and beverages is mainly performed by the spin-trapping technique.
Immersive Learning That Works: Research Grounding and Paths Forward - Leonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and 'Immersion Cube' frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlights research frontiers along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige... - University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati... - AbdullaAlAsif1
The pygmy halfbeak, Dermogenys colletei, is known for its viviparous nature and presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study examines fecundity and the Gonadosomatic Index (GSI) in the pygmy halfbeak D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that D. colletei may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study using 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei does indeed exhibit low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study contributes to a better understanding of viviparous fish in Borneo and to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei, is known for its viviparous nature, this presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the Pygmy Halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the Pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study lends to a better understanding of viviparous fish in Borneo and contributes to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field
equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational
field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin
spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling
concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect
light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is
mitigated, at least in part.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Exposé invité Journées Nationales du GDR GPL 2024
ESPP presentation to EU Waste Water Network, 4th June 2024 “EU policies driving nutrient removal and recycling
and the revised UWWTD (Urban Waste Water Treatment Directive)”
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
9. Human Talks Montpellier @lievAnastazia
Learning by Backpropagation
[Diagram: input data passes through layers 1-5; the output is compared with the expected ground truth]
10. Human Talks Montpellier @lievAnastazia
Learning by Backpropagation
[Diagram: input data passes through layers 1-5 to produce a prediction (first iteration) alongside the ground truth]
11. Human Talks Montpellier @lievAnastazia
Learning by Backpropagation
[Diagram: input data passes through layers 1-5; the error between prediction and ground truth is computed]
12. Human Talks Montpellier @lievAnastazia
Learning by Backpropagation
Update weights in every layer w/ an optimization algorithm
[Diagram: the prediction error is propagated back through layers 1-5]
13. Human Talks Montpellier @lievAnastazia
Learning by Backpropagation
Update weights in every layer w/ an optimization algorithm
Retry prediction with updated weights
[Diagram: the network repeats the forward pass through layers 1-5 with the updated weights]
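The loop sketched on slides 9-13 (forward pass, error, weight update, retry) can be written out in a few lines. Below is a minimal NumPy illustration on a toy XOR problem; the two-layer network, learning rate, and seed are illustrative choices, not code from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset (XOR): input data and the expected output (ground truth).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two weight matrices stand in for the stack of layers.
W1 = rng.normal(0, 1, (2, 8))
W2 = rng.normal(0, 1, (8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass: input -> hidden -> prediction.
    h = sigmoid(X @ W1)
    pred = sigmoid(h @ W2)

    # Error between prediction and ground truth.
    err = pred - y

    # Backward pass: push the error back and update every layer's weights.
    grad_out = err * pred * (1 - pred)
    grad_W2 = h.T @ grad_out
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

# Retry the prediction with the updated weights.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2).ravel(), 2))
```

Each iteration is exactly the cycle on the slides: predict, measure the error, update every layer, and try again with the improved weights.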
21. Human Talks Montpellier @lievAnastazia
I am going to spend the holidays in Occitanie.
I would like to learn to speak Occitan.
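The example above hinges on the network remembering a word ("Occitanie") for many steps before it becomes useful for predicting another ("Occitan"). A minimal sketch of a single LSTM cell step in plain NumPy shows the gating that makes this kind of memory possible; all sizes and weight values here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gates decide what to forget, store, and output.

    x: input vector; h_prev / c_prev: previous hidden and cell state;
    W: weights of shape (4 * hidden, input + hidden); b: bias (4 * hidden,).
    """
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[0 * hidden:1 * hidden])   # forget gate: keep old memory?
    i = sigmoid(z[1 * hidden:2 * hidden])   # input gate: store new info?
    o = sigmoid(z[2 * hidden:3 * hidden])   # output gate: reveal memory?
    g = np.tanh(z[3 * hidden:4 * hidden])   # candidate cell update
    c = f * c_prev + i * g                  # cell state: long-term memory
    h = o * np.tanh(c)                      # hidden state: short-term output
    return h, c

# Run a short random sequence through the cell: the cell state c carries
# information across steps until it is needed.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(0, 0.5, (4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(0, 1, (5, n_in)):
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)
```

The cell state `c` only changes through the forget and input gates, which is what lets information survive across long gaps in the sequence.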
22. Human Talks Montpellier @lievAnastazia
Use cases
- Time series
- Video processing
- Natural language processing
- Audio processing
- Sequence data in bio-informatics
- Autonomous cars
- ...
23. Human Talks Montpellier @lievAnastazia
Variations of RNN & LSTM
- Bi-directional RNNs
- Hierarchical RNNs
- Depth-gated RNNs
- GRU (Gated Recurrent Unit)
- LSTM with peephole connections
- ...
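Of the variants above, the GRU is the simplest to sketch: it keeps two gates and drops the LSTM's separate cell state. Here is one GRU step in plain NumPy; the weights and dimensions are illustrative assumptions, not material from the talk.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h_prev, Wu, Wr, Wc, bu, br, bc):
    """One GRU step: two gates, and no separate cell state (unlike the LSTM)."""
    xh = np.concatenate([x, h_prev])
    u = sigmoid(Wu @ xh + bu)   # update gate: how much new state to take in
    r = sigmoid(Wr @ xh + br)   # reset gate: how much old state to expose
    cand = np.tanh(Wc @ np.concatenate([x, r * h_prev]) + bc)  # candidate state
    return (1 - u) * h_prev + u * cand  # blend previous and candidate state

rng = np.random.default_rng(1)
n_in, n_hid = 3, 4
Wu, Wr, Wc = (rng.normal(0, 0.5, (n_hid, n_in + n_hid)) for _ in range(3))
bu = br = bc = np.zeros(n_hid)
h = np.zeros(n_hid)
for x in rng.normal(0, 1, (5, n_in)):
    h = gru_step(x, h, Wu, Wr, Wc, bu, br, bc)
print(h.shape)
```

Because the new state is a gated blend of the old state and a bounded candidate, the GRU gets much of the LSTM's long-range memory with fewer parameters.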
24. Human Talks Montpellier @lievAnastazia
To go further (or deeper :P)
- Andrej Karpathy blog
http://karpathy.github.io/2015/05/21/rnn-effectiveness/
- Christopher Olah's blog
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
- Books and courses on stanford.edu
- Articles by Jürgen Schmidhuber
- Recent Advances in Recurrent Neural Networks
https://arxiv.org/abs/1801.01078
25. Google Voice transcriptions demystified:
Introduction to recurrent neural networks & LSTM
Lieva Anastasia @lievAnastazia
Human Talks Montpellier