Meta Dropout: Learning to Perturb Latent Features for Generalization – MLAI2
A machine learning model that generalizes well should obtain low errors on unseen test examples. Thus, if we know how to optimally perturb training examples to account for test examples, we may achieve better generalization performance. However, obtaining such perturbation is not possible in standard machine learning frameworks as the distribution of the test data is unknown. To tackle this challenge, we propose a novel regularization method, meta-dropout, which learns to perturb the latent features of training examples for generalization in a meta-learning framework. Specifically, we meta-learn a noise generator which outputs a multiplicative noise distribution for latent features, to obtain low errors on the test instances in an input-dependent manner. Then, the learned noise generator can perturb the training examples of unseen tasks at the meta-test time for improved generalization. We validate our method on few-shot classification datasets, whose results show that it significantly improves the generalization performance of the base model, and largely outperforms existing regularization methods such as information bottleneck, manifold mixup, and information dropout.
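To make the mechanism concrete, below is a minimal, hypothetical sketch of an input-dependent multiplicative noise layer in the spirit of the abstract, assuming a PyTorch setup; the meta-learning loop that trains the noise generator is omitted, and all names are illustrative rather than the authors' code.

```python
# Hypothetical sketch: input-dependent multiplicative noise on latent
# features, as described in the abstract. Not the authors' implementation.
import torch
import torch.nn as nn

class NoisyLatentLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Noise generator: maps a latent feature vector to a per-feature
        # log-scale for the multiplicative noise (assumed form).
        self.noise_gen = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return h  # perturb only during (meta-)training
        log_sigma = self.noise_gen(h)          # input-dependent noise scale
        eps = torch.randn_like(h)              # standard Gaussian sample
        noise = torch.exp(log_sigma.clamp(max=2.0) * eps)  # positive, mean ~1
        return h * noise                       # multiplicative perturbation
```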
In problem solving, there are four basic steps.
1. Define the problem. Diagnose the situation so that your focus is on the problem, not just its symptoms. ...
2. Generate alternative solutions. ...
3. Evaluate and select an alternative. ...
4. Implement and follow up on the solution.
In this presentation, we introduce SAMOD (Simplified Agile Methodology for Ontology Development), a novel agile methodology for developing ontologies through small steps of an iterative workflow that focuses on creating well-developed and documented models starting from exemplar domain descriptions.
The learning ecosystem metamodel is a framework to support Model-Driven Development of learning ecosystems based on Open Source software. The metamodel must be validated in order to provide a robust solution for the development of this type of technological solution. The first phase of the validation process was done manually, but to ensure the quality of the metamodel, the last phase should be performed using a tool. The first version of the metamodel is an instance of MOF, the standard defined by the Object Management Group. There are no stable tools to support the definition and mapping of metamodels and models using the standards. For this reason, it is necessary to transform the metamodel from MOF to Ecore in order to use the tools provided by Eclipse. This work describes the transformation process and the measures taken to ensure the quality of the learning ecosystem metamodel in Ecore.
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.it/.
http://www.ivanomalavolta.com
Some slides put together on analogies between biosamples and model samples. Prepared for the Biosamples workshop at The University of Manchester, 17th June 2015.
An analytical approach to effective risk-based test planning – Joe Kevens
Regression is not easily understood, as it seemingly manifests from nowhere. But if you can identify methods to help spot quality failure ‘trends’, you stand a better chance of understanding the root causes. This presentation serves to highlight a number of risk identification and planning techniques that you could add to your arsenal! Presented at TestExpo in London, UK, on 31 Oct 2017.
Valencian Summer School 2015
Day 2
Lecture 11
The Future of Machine Learning
José David Martín-Guerrero (IDAL, UV)
https://bigml.com/events/valencian-summer-school-in-machine-learning-2015
Machine learning: A Walk Through School Exams – Ramsha Ijaz
When it comes to studying, machines and students have one thing in common: examinations. To perform well on their final evaluations, humans need to take classes, read books, and solve practice quizzes. Similarly, machines need artificial intelligence to memorize data, infer feature correlations, and pass validation standards in order to solve almost any problem. In this quick introductory session, we'll walk through these analogies to learn the core concepts behind Machine Learning, and why it works so well!
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
Scale your Testing and Quality with Automation Engineering and ML - Carlos Ki... (QA or the Highway)
Many teams and organizations struggle to scale their quality and testing strategies once they reach tens of teams and hundreds of developers and services across their systems. Traditional strategies and techniques, like testing phases and code freezes, do not work at scale and quickly add friction, reduce productivity, and make testing and quality harder.
In this presentation, we will cover different ideas and strategies to make things like BDD and TDD easier to adopt at the beginning, how to include observability and operability in your definition of quality, and how leveraging ML/AI can augment your devs and testers and reduce risk while accelerating value.
By the end, you will have some "low quality" indicators that you can use to identify patterns and practices that won't scale well. You will have new insights and ideas for how you can set up your teams and strategies for success long term, and you will see tangible, practical examples you can take to your team and company to start this transformation now.
Big data in the cloud - welcome to cost-oriented design – Arnon Rotem-Gal-Oz
Video: https://youtu.be/gBI5vm5d25o
How working with big data in the cloud makes cost considerations a primary concern (a quality attribute) that you need to take care of.
On building machine learning models using Spark @ Appsflyer
The presentation includes a short intro to AppsFlyer (architecture, data architecture) and shows the process of building a model through a use case: building a fingerprinting model for matching clicks to installs.
An introduction to big data.
What's big data, why we'd want it, how it's applicable to CSPs, and a short intro to Hadoop.
(some of the info is in the slide notes)
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis at the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We finished with a lovely workshop in which the participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Generative AI Deep Dive: Advancing from Proof of Concept to Production – Aggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 3 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf – Paige Cruz
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra, and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share foundational concepts to build on.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf – 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
PHP Frameworks: I want to break free (IPC Berlin 2024) – Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards more flexible and future-proof PHP development.
Elevating Tactical DDD Patterns Through Object Calisthenics – Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don't know what they don't know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GraphRAG is All You Need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
A tale of scale & speed: How the US Navy is enabling software delivery from l... – sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
[Figure: a representation of the machine learning cycle from start to end. Image source: MLOps, published under the Creative Commons Attribution 4.0 International Public License.]
First model wasn’t good
General model
sun or rain?
Intubation model?
Identify the deterioration?
Predict the intubation?
Predict a respiratory deterioration?
Deterioration – when?
What's a good algorithm?
Specificity = TNR = TN / (TN + FP); fall-out = FPR = 1 - specificity
Sensitivity = TPR = recall = hit rate = TP / (TP + FN)
PPV (positive predictive value) = precision = TP / (TP + FP)
Accuracy = (TP + TN) / (TP + TN + FP + FN)
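For concreteness, here is a small Python helper (illustrative, not from the slides) that computes these quantities from binary confusion-matrix counts:

```python
# Compute the slide's metrics from binary confusion-matrix counts.
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # TPR = recall = hit rate
        "specificity": tn / (tn + fp),   # TNR
        "fall_out":    fp / (fp + tn),   # FPR = 1 - specificity
        "ppv":         tp / (tp + fp),   # precision
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }

# Example: with a rare positive class, accuracy can look fine even when
# many positives are missed, which is why the other metrics matter here.
print(binary_metrics(tp=70, fp=30, tn=880, fn=20))
```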
27,000 seconds with cases
Only ~7,000 are actually relevant
(out of 60,000,000 minutes)
HR, BP (nurse effect)
Different systems
Changes in origin systems
Timing from different systems
Imputation
Challenge 4 – what about missing data? (a sketch of options 2–4 follows the list)
1. Do nothing
2. Mean/median (bad for categorical features, not accurate, doesn't capture uncertainty)
3. Most frequent/constant/zero (doesn't factor in correlation, can introduce bias)
4. Find similar examples (k-NN) – lots of compute
5. Deep learning – even more compute
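A brief scikit-learn sketch of options 2–4 above (toy values; the column names are illustrative, not from the talk):

```python
# Options 2-4 from the list above, sketched with scikit-learn imputers.
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer

# Toy matrix with missing vitals (illustrative columns: HR, BP).
X = np.array([
    [72.0,   120.0],
    [np.nan, 118.0],
    [88.0,   np.nan],
    [90.0,   135.0],
])

mean_imp  = SimpleImputer(strategy="mean")                      # option 2
const_imp = SimpleImputer(strategy="constant", fill_value=0.0)  # option 3
knn_imp   = KNNImputer(n_neighbors=2)                           # option 4

print(mean_imp.fit_transform(X))
print(const_imp.fit_transform(X))
print(knn_imp.fit_transform(X))
```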
Demographics (categorical)
Doctor's notes <- can we even use them? Are they self-fulfilling? (Also: how do we handle PII?)
Time series
Intubation is a judgment call – are we
What's a good "quiet" period?
Semi-supervised learning can help
27,000 seconds with cases
Only ~7,000 are actually relevant
(out of 60,000,000 minutes)
Undersampling, oversampling
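A plain-NumPy sketch of both resampling strategies for a rare positive class (illustrative, not the talk's code):

```python
# Random undersampling / oversampling for a binary label y where the
# positive class (y == 1) is rare.
import numpy as np

rng = np.random.default_rng(0)

def undersample_majority(X, y):
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    keep = rng.choice(neg, size=len(pos), replace=False)  # shrink majority
    idx = np.concatenate([pos, keep])
    return X[idx], y[idx]

def oversample_minority(X, y):
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    extra = rng.choice(pos, size=len(neg), replace=True)  # repeat minority
    idx = np.concatenate([neg, extra])
    return X[idx], y[idx]
```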
Initial efforts
Deep learning overfit
Trees
Ensemble models
Training set – construct the classifier
Validation set – fine-tune the model (change parameters)
Test set – estimate future error – the generalization of the model
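One common way to carve out the three sets with scikit-learn (illustrative; for the time-series data described earlier, a chronological split is often more appropriate than a random one):

```python
# 60/20/20 train/validation/test split: hold out the test set first,
# then split the remainder 75/25 into train and validation.
import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(1000, 8), np.random.randint(0, 2, 1000)  # toy data

X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.20, stratify=y, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, stratify=y_rest, random_state=42)
```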
Data limit
New systems at the client
Timing
The difference between theory and reality is that in theory they are the same
Need lots of historical data for each new data point
Need to generalize for different systems