Adams Wei Yu is a PhD candidate at CMU working on machine reading comprehension and large-scale optimization. His advisors are Jaime Carbonell and Alex Smola. He has worked on question answering models and benchmarks such as SQuAD. QANet is one of his contributions: it uses self-attention and convolutional layers instead of RNNs for question answering, achieving state-of-the-art results while being much faster to train and run than previous models.
Discovering Your AI Super Powers - Tips and Tricks to Jumpstart Your AI Projects (Wee Hyong Tok)
In this session, we will share cutting-edge deep learning innovations and present emerging trends in the AI community. This session is for data scientists and developers who have a keen interest in getting started on an AI project and want to learn the tools of the trade. We will draw on practical experience from working on various AI projects and share the key learnings and pitfalls.
Training at AI Frontiers 2018 - Lukasz Kaiser: Sequence to Sequence Learning ... (AI Frontiers)
Sequence to sequence learning is a powerful way to train deep networks for machine translation and various NLP tasks, but also for image generation and, recently, video and music generation. We will give a hands-on tutorial showing how to use the open-source Tensor2Tensor library to train state-of-the-art models for translation, image generation, and a task of your choice!
David Kale and Ruben Fizsel from Skymind talk about deep learning for the JVM and enterprise using deeplearning4j (DL4J). Deep learning (nouveau neural nets) has sparked a renaissance in empirical machine learning with breakthroughs in computer vision, speech recognition, and natural language processing. However, many popular deep learning frameworks are targeted to researchers and poorly suited to enterprise settings that use Java-centric big data ecosystems. DL4J bridges the gap, bringing high-performance numerical linear algebra libraries and state-of-the-art deep learning functionality to the JVM.
Artificial Intelligence, Machine Learning and Deep Learning (Sujit Pal)
Slides for a talk Abhishek Sharma and I gave at the Gennovation tech talks (https://gennovationtalks.com/) at Genesis. The talk was part of outreach for the Deep Learning Enthusiasts meetup group in San Francisco. My part of the talk is covered in slides 19-34.
Slides on techniques for using Convolutional Neural Networks (CNNs) for sequence modeling tasks, including image captioning and neural machine translation (NMT). The slides collect the main building blocks from several papers and were used for a group paper reading at the University of Sydney.
A presentation on how and why to engage upstream projects productively, and ensure that work is accepted upstream first.
Originally delivered at Linux Foundation Collaboration Summit 2015 in Santa Rosa.
Recurrent Neural Networks hold great promise as general sequence learning algorithms. As such, they are a very promising tool for text analysis. However, outside of very specific use cases such as handwriting recognition and, recently, machine translation, they have not seen widespread use. Why has this been the case?
In this presentation, we will first introduce RNNs as a concept. Then we will sketch how to implement them and cover the tricks necessary to make them work well. With the basics covered, we will investigate using RNNs as general text classification and regression models, examining where they succeed and where they fail compared to more traditional text analysis models. A straightforward open-source Python and Theano library for training RNNs with a scikit-learn-style interface will be introduced, and we'll see how to use it through a tutorial on a real-world text dataset.
Crafting Recommenders: the Shallow and the Deep of It! (Sudeep Das, Ph.D.)
I present a brief review of, and an outlook on, the rapid changes happening in recommendation engine research on the heels of the deep learning revolution!
Deep Learning and Automatic Differentiation from Theano to PyTorch (inside-BigData.com)
Inquisitive minds want to know what causes the universe to expand, how M-theory binds the smallest of the small particles, or how social dynamics can lead to revolutions. In recent centuries, developments in science and technology brought us closer to exploring the expanding universe, discovering unknown particles like bosons, and finding out how and why a society interacts and reacts. To explain the fascinating phenomena of nature, natural scientists develop complex 'mechanistic models' of a deterministic or stochastic nature. But the hard questions are how to choose the best model for our data and how to calibrate the model given the data.
The way statisticians answer these questions is with Approximate Bayesian Computation (ABC), which we learn on the first day of the summer school and which we combine with High Performance Computing (HPC). The second day focuses on a popular machine learning approach, 'deep learning', which mimics the deep neural network structure of our brain in order to predict complex phenomena of nature. The summer school takes a route of open discussion and brainstorming sessions where we explore two cornerstones of today's data science, ABC and deep learning, both accelerated by HPC, with hands-on examples and exercises.
Watch the video: https://wp.me/p3RLHQ-hQQ
Learn more: https://github.com/probprog/CSCS-summer-school-2017
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Babar: Knowledge Recognition, Extraction and Representation (Pierre de Lacaze)
Babar is a research project in the field of Artificial Intelligence. It aims to bridge Neural AI and Symbolic AI, and as such is implemented in three different programming languages: Clojure, Python and CLOS.
The Clojure component (Clobar) implements the graphical user interface to Babar. Examples of the Clojure Hiccup library and of interfacing Clojure to JavaScript will be presented. The Python module (Pybar) implements the web crawling and scraping and the Neural Network aspects of Babar. The Word Embedding and LSTM (Long Short-Term Memory) components of Pybar will be described in detail. Finally, the Common Lisp module (Lispbar) implements the Symbolic AI aspect of Babar. The latter includes an English Language Parser and Semantic Networks implemented as an in-memory Hypergraph.
We will present each of these components and target individual aspects with code examples. Specifically, we will first present the web development and Neural Network components. Then the English Language Parser will be examined in detail. We will also present the knowledge extraction aspect and bridge it with the Neural Network component.
Ultimately we will argue that what can be termed "Neural AI" and "Symbolic AI" are not at odds with each other but rather complement each other. In summary, Artificial Intelligence is not a question of "brain" or "mind", but rather a question of "brain" and "mind".
Entity Linking, the task of linking mentions (of persons, organizations, etc…) found in a document to a unique entity in a knowledge base, while deceptively simple, has proven to be very challenging to perform. This task is even harder when documents in different languages, or from restricted domains, are considered.
Entity Linking is important for understanding the topic of articles or social media posts and can be used for marketing, advertising, and many other applications.
Most of the existing research on the topic is based on Natural Language Processing and on supervised models, which provide little flexibility and generalization capabilities.
Instead, it is possible to leverage the graph-like structure of large knowledge bases like DBpedia to vastly improve the quality of Entity Linking.
Furthermore, it is possible to represent input documents in a graph-like way and exploit measures of topological similarity between the original document and the knowledge base to collectively link all the mentions in a document at the same time.
In this work, we implement and extend the state-of-the-art Entity Linking system Quantified Collective Validation, using Oracle PGX to analyze the full DBpedia graph in memory and in parallel, in order to perform entity linking on tweets and news articles efficiently and effectively.
Corinna Cortes, Head of Research, Google, at MLconf NYC 2017 (MLconf)
Corinna Cortes is a Danish computer scientist known for her contributions to machine learning. She is currently the Head of Google Research, New York. Cortes is a recipient of the Paris Kanellakis Theory and Practice Award for her work on theoretical foundations of support vector machines.
Cortes received her M.S. degree in physics from Copenhagen University in 1989. In the same year she joined AT&T Bell Labs as a researcher and remained there for about ten years. She received her Ph.D. in computer science from the University of Rochester in 1993. She is an Editorial Board member of the journal Machine Learning.
Cortes’ research covers a wide range of topics in machine learning, including support vector machines and data mining. In 2008, she and Vladimir Vapnik jointly received the Paris Kanellakis Theory and Practice Award for the development of a highly effective algorithm for supervised learning known as support vector machines (SVMs). Today, SVMs are among the most frequently used algorithms in machine learning, with many practical applications including medical diagnosis and weather forecasting.
Abstract Summary: Harnessing Neural Networks
Deep learning has demonstrated impressive performance gains in many machine learning applications. However, unveiling and realizing these performance gains is not always straightforward. Discovering the right network architecture is critical for accuracy and often requires a human in the loop. Some network architectures occasionally produce spurious outputs, and the outputs have to be restricted to meet the needs of an application. Finally, realizing the performance gain in a production system can be difficult because of long inference times.
In this talk we discuss methods for making neural networks efficient in production systems. We also discuss an efficient method for automatically learning the network architecture, called AdaNet. We provide theoretical arguments for the algorithm and present experimental evidence for its effectiveness.
The speed of Deep Neural Networks (DNNs), in both training and inference, is important for their practical usage. This talk presents adaptive deep reuse, a novel optimization that speeds up DNNs by efficiently and effectively identifying unnecessary computations on the fly. By avoiding these computations, the technique cuts DNN training time by 69% and inference time by 50%, with virtually no accuracy loss. The method is fully automatic and ready to be adopted, requiring neither manual code changes nor extra computing resources. It offers a promising way to substantially reduce both the time and energy cost of developing and deploying AI products. Since its recent publication, the technique has drawn a lot of interest from the media, industry practitioners, and the research community.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/2QcNaOA.
Jessica Yung talks about the foundational concepts of neural networks. She highlights key things to pay attention to: learning rates, how to initialize a network, how networks are constructed and trained, and why these parameters are so important. She ends the talk with practical takeaways used by state-of-the-art models to help us start building neural networks. Filmed at qconlondon.com.
Jessica Yung is a research master's student in ML at University College London. She was previously at the University of Cambridge and was an NVIDIA Self-Driving Car Engineer Scholar. She applied machine learning to finance at Jump Trading and consults on machine learning. She is keen on sharing knowledge and writes about ML and how to learn effectively on her blog at JessicaYung.com.
Interest in Deep Learning has been growing in the past few years. With advances in software and hardware technologies, Neural Networks are making a resurgence. With interest in AI-based applications growing, and companies like IBM, Google, Microsoft, and NVIDIA investing heavily in computing and software applications, it is time to understand Deep Learning better!
In this workshop, we will discuss the basics of Neural Networks and how Deep Learning networks differ from conventional Neural Network architectures. We will review a bit of the mathematics that goes into building neural networks and understand the role of GPUs in Deep Learning. We will also get an introduction to Autoencoders, Convolutional Neural Networks, and Recurrent Neural Networks, and survey the state of the art in hardware and software architectures. Functional demos will be presented in Keras, a popular Python package with backends in Theano and TensorFlow.
8. Stanford Question Answering Dataset (SQuAD)
Passage: In education, teachers facilitate student learning, often in a school or academy or perhaps in another environment such as outdoors. A teacher who teaches on an individual basis may be described as a tutor.
Question: What is the role of teachers in education?
Groundtruth: facilitate student learning
Prediction 1: facilitate student learning (EM = 1, F1 = 1)
Prediction 2: student learning (EM = 0, F1 = 0.8)
Prediction 3: teachers facilitate student learning (EM = 0, F1 = 0.86)
Data: Crowdsourced 100k question-answer pairs on 500 Wikipedia articles.
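To make the metrics concrete, here is a minimal sketch of SQuAD-style EM and token-level F1 that reproduces the numbers above. It assumes plain whitespace tokenization and omits the answer normalization (lowercasing, stripping articles and punctuation) that the official evaluation script also applies:

```python
from collections import Counter

def exact_match(prediction: str, groundtruth: str) -> int:
    # EM is 1 only if the answer strings are identical.
    return int(prediction == groundtruth)

def f1(prediction: str, groundtruth: str) -> float:
    # Token-level F1 over the bags of whitespace tokens.
    pred, gt = prediction.split(), groundtruth.split()
    overlap = sum((Counter(pred) & Counter(gt)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gt)
    return 2 * precision * recall / (precision + recall)

gt = "facilitate student learning"
for pred in ("facilitate student learning",
             "student learning",
             "teachers facilitate student learning"):
    print(exact_match(pred, gt), round(f1(pred, gt), 2))
# -> 1 1.0, then 0 0.8, then 0 0.86, matching the slide.
```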
9. Roadmap
● Models for text
● General neural structures for QA
● Building blocks for QANet
○ Fully parallel (CNN + self-attention)
○ Data augmentation via back-translation
○ Transfer learning from unsupervised tasks
26. Attention: a weighted average
[Figure: attention computes each output as a weighted average over the tokens of "The cat stuck out its tongue and licked its owner"; the sentence is shown twice, with weighted connections between the two copies.]
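As a concrete illustration of "attention as a weighted average", here is a minimal NumPy sketch of scaled dot-product attention; the dimensions and inputs are illustrative, not the deck's exact configuration:

```python
import numpy as np

def attention(Q, K, V):
    # Each output row is a weighted average of the rows of V; the
    # weights are a softmax over the query-key similarity scores.
    scores = Q @ K.T / np.sqrt(K.shape[-1])        # (n_q, n_k)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)          # row-wise softmax
    return w @ V                                   # (n_q, d_v)

# Self-attention: a toy 10-token "sentence" with 16-dim embeddings.
X = np.random.randn(10, 16)
out = attention(X, X, X)   # every token attends to every token
```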
27. Multi-head Attention
Parallel attention layers with different linear transformations on input and output.
[Figure: the same sentence pair, now with several parallel attention heads, each drawing a different pattern of connections between the two copies of the sentence.]
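A hedged sketch of the multi-head variant, reusing the attention function and X from the sketch above; the projection matrices and head count are illustrative placeholders, not the paper's parameterization:

```python
import numpy as np

def multi_head_attention(X, W_q, W_k, W_v, W_o, n_heads=4):
    # Run attention in n_heads subspaces, each with its own linear
    # transformation of the input, then concatenate and project.
    head_dim = X.shape[-1] // n_heads
    heads = []
    for h in range(n_heads):
        s = slice(h * head_dim, (h + 1) * head_dim)
        heads.append(attention(X @ W_q[:, s], X @ W_k[:, s], X @ W_v[:, s]))
    return np.concatenate(heads, axis=-1) @ W_o

d = 16
W_q, W_k, W_v, W_o = (np.random.randn(d, d) for _ in range(4))
out = multi_head_attention(X, W_q, W_k, W_v, W_o)   # (10, 16)
```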
29. Language Models with attention
[Figure: a language model pipeline: the input tokens "<s> The quick brown fox" are each embedded, a function f(x, h) is applied at every position, and a projection layer predicts the next tokens "The quick brown fox jumped".]
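A minimal sketch of the pipeline in this slide (embed, apply f(x, h) at each position, project to the vocabulary), using the attention function above as f with a causal restriction so position t only sees positions up to t. All token ids and dimensions are made up for illustration:

```python
import numpy as np

V, d = 1000, 16                      # vocab size, model dim (illustrative)
E = np.random.randn(V, d)            # embedding table
W_out = np.random.randn(d, V)        # projection to vocabulary logits

tokens = np.array([0, 17, 42, 7, 99])   # "<s> The quick brown fox" as ids
H = E[tokens]                            # embed each token
# f(x, h): causal self-attention, position t attends to positions <= t.
H2 = np.stack([attention(H[t:t + 1], H[:t + 1], H[:t + 1])[0]
               for t in range(len(tokens))])
logits = H2 @ W_out                      # project
next_ids = logits.argmax(axis=-1)        # "The quick brown fox jumped"
```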
31. Roadmap
● Models for text
● General neural structures for QA
● Building blocks for QANet
○ Fully parallel (CNN + self-attention)
○ Data augmentation via back-translation
○ Transfer learning from unsupervised tasks
40. First challenge: hard to capture long dependency
[Figure: an RNN reading the review below one token at a time, through hidden states h1, h2, ..., h6.]
Being a long-time fan of Japanese film, I expected more than this. I can't really be bothered to write too much, as this movie is just so poor. The story might be the cutest romantic little something ever, pity I couldn't stand the awful acting, the mess they called pacing, and the standard "quirky" Japanese story. If you've noticed how many Japanese movies use characters, plots and twists that seem too "different", forcedly so, then steer clear of this movie. Seriously, a 12-year old could have told you how this movie was going to move along, and that's not a good thing in my book. Fans of "Beat" Takeshi: his part in this movie is not really more than a cameo, and unless you're a rabid fan, you don't need to suffer through this waste of film.
42. What do RNNs Capture?
[Figure: an RNN over an input sequence and hidden states h1 through h6, annotated with three things the recurrence provides: 1. local context, 2. global interaction, 3. temporal information. Substitution?]
43. Roadmap
● Models for text
● General neural structures for QA
● Building blocks for QANet
○ Fully parallel (CNN + self-attention)
○ Data augmentation via back-translation
○ Transfer learning from unsupervised tasks
48. Convolution: Capturing Local Context
[Figure: a 1-D convolution sliding over a sequence of five d = 3 embedding vectors (zero-padded), shown with filter widths k = 2 and k = 3; each filter position produces one output value.]
Convolutions extract k-gram features and are fully parallel!
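A rough NumPy sketch of the idea: a width-k filter slides over the zero-padded embedding sequence and emits one k-gram feature per position. Every window is independent, which is what makes the operation fully parallel. Shapes follow the slide (d = 3, five tokens); the filter weights are random placeholders:

```python
import numpy as np

def conv1d(X, W):
    # X: (seq_len, d) token embeddings; W: (k, d) filter weights.
    k, d = W.shape
    Xp = np.vstack([X, np.zeros((k - 1, d))])   # zero-pad, as in the slide
    # One k-gram feature per position; every window is independent,
    # so all positions could be computed in parallel.
    return np.array([np.sum(Xp[i:i + k] * W) for i in range(len(X))])

X = np.random.randn(5, 3)                        # 5 tokens, d = 3
bigram_feats = conv1d(X, np.random.randn(2, 3))  # k = 2
trigram_feats = conv1d(X, np.random.randn(3, 3)) # k = 3
```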
49. How about Global Interaction?
[Figure: three stacked convolution layers (layer 1 to layer 3) over the sentence "The weather is nice today", showing receptive fields growing as layers are stacked.]
1. May need O(log_k N) layers (N: sequence length, k: filter size).
2. Interaction may become weaker.
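To see where the O(log_k N) comes from, here is a small back-of-the-envelope helper. It assumes the receptive field grows by a factor of k per layer (as with strided or dilated convolutions); with plain stacked convolutions the growth is only additive, so even more layers are needed:

```python
import math

def conv_layers_for_global_interaction(N, k):
    # Receptive field after L layers ~ k**L, so covering all N tokens
    # needs about log_k(N) layers.
    return math.ceil(math.log(N, k))

print(conv_layers_for_global_interaction(512, 3))  # 6 layers for 512 tokens
```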
59. Roadmap
● Models for text
● General neural structures for QA
● Building blocks for QANet
○ Fully parallel (CNN + self-attention)
○ Data augmentation via back-translation
○ Transfer learning from unsupervised tasks
61. More data with NMT back-translation
Input: Previously, tea had been used primarily for Buddhist monks to stay awake during meditation.
Translation (English → French): Autrefois, le thé avait été utilisé surtout pour les moines bouddhistes pour rester éveillé pendant la méditation.
Paraphrase (English ← French): In the past, tea was used mostly for Buddhist monks to stay awake during the meditation.
62. More data with NMT back-translation
Input: Previously, tea had been used primarily for Buddhist monks to stay awake during meditation.
Paraphrase (English → French → English): In the past, tea was used mostly for Buddhist monks to stay awake during the meditation.
● More data:
○ (Input, label)
○ (Paraphrase, label)
Applicable to virtually any NLP task!
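A minimal sketch of the augmentation loop described above. `translate_en_fr` and `translate_fr_en` are hypothetical stand-ins for any trained NMT models, not a specific library API; note that for span-based tasks like SQuAD the answer span must also be re-aligned to the paraphrased passage, which this sketch glosses over:

```python
def back_translate(text: str) -> str:
    # English -> French -> English yields a paraphrase of the input.
    french = translate_en_fr(text)    # hypothetical NMT model
    return translate_fr_en(french)    # hypothetical NMT model

def augment(dataset):
    # dataset: iterable of (input_text, label) pairs.
    out = []
    for text, label in dataset:
        out.append((text, label))                   # (Input, label)
        out.append((back_translate(text), label))   # (Paraphrase, label)
    return out
```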
64. Roadmap
● Models for text
● General neural structures for QA
● Building blocks for QANet
○ Fully parallel (CNN + self-attention)
○ Data augmentation via back-translation
○ Transfer learning from unsupervised tasks
70. Transfer learning for richer representation
● Pretrained language model (ELMo [Peters et al., NAACL'18]): +4.0 F1
71. Transfer learning for richer representation
● Pretrained language model (ELMo [Peters et al., NAACL'18]): +4.0 F1
● Pretrained machine translation model (CoVe [McCann et al., NIPS'17]): +0.3 F1
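One common way such transfer is wired in (a sketch under assumptions, not QANet's exact setup): freeze the pretrained encoder and concatenate its contextual vectors onto the task model's own word embeddings. `word_embed` and `pretrained_encoder` are hypothetical stand-ins for the trainable embedding layer and the frozen ELMo/CoVe-style encoder:

```python
import numpy as np

def enrich_embeddings(tokens, word_embed, pretrained_encoder):
    # word_embed: trainable task embeddings -> (seq_len, d_word)
    # pretrained_encoder: frozen contextual encoder -> (seq_len, d_ctx)
    static = word_embed(tokens)
    contextual = pretrained_encoder(tokens)
    # The downstream model consumes the concatenated representation.
    return np.concatenate([static, contextual], axis=-1)
```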
72. QANet – 3 key ideas
● Deep architecture without RNNs
○ 130 layers (deepest in NLP)
● Transfer learning
○ Leverage unlabeled data
● Data augmentation
○ With back-translation
#1 on SQuAD (Mar-Aug 2018)