Machine learning has become a must for improving insight, quality, and time to market. But it has also been called the 'high-interest credit card of technical debt', with challenges in managing both how it is applied and how its results are consumed.
If you are curious about what ML is all about, this is a gentle introduction to Machine Learning and Deep Learning. It covers questions such as: why ML, data analytics, and deep learning? It builds an intuitive understanding of how they work, examines some models in detail, and closes with some useful resources to get started.
Machine learning is a type of artificial intelligence (AI) that provides computers with the ability to learn without being explicitly programmed. Machine learning focuses on the development of computer programs that can teach themselves to grow and change when exposed to new data.
Module 1: Introduction to machine learning (Sara Hooker)
We believe in building technical capacity all over the world.
We are building and teaching an accessible introduction to machine learning for students passionate about the power of data to do good.
Welcome to the course! These modules will teach you the fundamental building blocks and the theory necessary to be a responsible machine learning practitioner in your own community. Each module focuses on accessible examples designed to teach you about good practices and the powerful (yet surprisingly simple) algorithms we use to model data.
To learn more about our work, visit www.deltanalytics.org
Data Science, Machine Learning and Neural Networks (BICA Labs)
Lecture briefly overviewing state of the art of Data Science, Machine Learning and Neural Networks. Covers main Artificial Intelligence technologies, Data Science algorithms, Neural network architectures and cloud computing facilities enabling the whole stack.
Video: http://videos.re-work.co/videos/464-agile-deep-learning
Deep Learning has been called the ‘new electricity’ — transforming every industry. Innovative architectures and applications receive deserved attention. But to turn innovation into value requires integrating deep learning into practical technology products. Such products, including Spotify's, are often developed following the principles of agile. This talk focuses on approaching deep learning in an agile way and on integrating deep learning into the agile cadence of a modern software development organization.
Introduction to machine learning: basics and overview. Topics: linear regression, logistic regression, cost functions, gradient descent, sensitivity and specificity, and model selection.
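Several of the topics listed above can be illustrated together in a few lines. The following is a minimal sketch (plain Python, illustrative only, not from the slides) of gradient descent minimizing a mean-squared-error cost function for simple linear regression:

```python
# Minimal gradient descent for simple linear regression (illustrative sketch).
# Model: y_hat = w * x + b; cost: mean squared error over the data set.

def gradient_descent(xs, ys, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of the MSE cost J = (1/n) * sum((w*x + b - y)^2)
        grad_w = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by y = 2x + 1; gradient descent should recover w ~ 2, b ~ 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = gradient_descent(xs, ys)
print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

The same loop structure underlies logistic regression; only the model and the cost function change.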
Top 10 Data Science Practitioner Pitfalls (Sri Ambati)
Top 10 Data Science Practitioner Pitfalls Meetup with Erin LeDell and Mark Landry on 09.09.15
- Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
- To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
H2O World - Top 10 Data Science Pitfalls - Mark Landry (Sri Ambati)
H2O World 2015 - Mark Landry
Powered by the open source machine learning software H2O.ai. Contributors welcome at: https://github.com/h2oai
To view videos on H2O open source machine learning software, go to: https://www.youtube.com/user/0xdata
Machine learning: the next revolution or just another hype? (Jorge Ferrer)
These are the slides of my session at ModConf / Liferay DevCon 2016.
The session aims to make it easy for any developer to get started with Machine Learning. It presents three exercises which I gave as homework (yup, homework, you missed it, right? ;) to the audience.
The video for this session is now available at https://www.facebook.com/liferay/videos/vl.383534535315216/10154154247423108/?type=1 (starts at min 34)
Two hour lecture I gave at the Jyväskylä Summer School. The purpose of the talk is to give a quick non-technical overview of concepts and methodologies in data science. Topics include a wide overview of both pattern mining and machine learning.
See also Part 2 of the lecture: Industrial Data Science. You can find it in my profile (click the face)
Machine Learning: Understanding the Invisible Force Changing Our World (Ken Tabor)
Readers will gain an appreciation for machine learning, and take away valuable strategies including:
• What machine learning is.
• How it’s changing the world.
• Who the major players are.
• How you can control it.
Machine learning. It’s in the news. It’s discussed in corporate boardrooms. It’s on your mind. ML algorithms seem to be at once everywhere, yet nowhere. Can we possibly understand how this invisible force is shaping our world? How will it reform your industry, and change your job?
Fairly Measuring Fairness In Machine Learning (HJ van Veen)
We look at a case and two research papers on measuring discrimination in machine learning models for extending credit. Presentation given as part of the Sao Paulo Machine Learning Meetup, theme "Ethics in Data Science".
The term "Machine Learning" was coined in 1959 by Arthur Samuel, an American pioneer in the fields of computer gaming and artificial intelligence, who stated that it "gives computers the ability to learn without being explicitly programmed". In 1997, Tom Mitchell gave a "well-posed" mathematical and relational definition: "A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E".
Machine learning is needed for tasks that are too complex for humans to code directly. Instead, we provide a large amount of data to a machine learning algorithm and let it work the task out by exploring that data and searching for a model that achieves what the programmers have set out to achieve.
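The "learn from examples rather than explicit rules" idea can be made concrete with one of the simplest possible learners. Below is an illustrative sketch (not from the slides) of a 1-nearest-neighbour classifier: no task-specific rules are programmed in; the behaviour comes entirely from the labelled data:

```python
# Illustrative sketch: a 1-nearest-neighbour classifier "learns" purely from
# examples -- no task-specific rules are programmed in.

def predict(train, point):
    # Find the training example closest to `point` and return its label.
    nearest = min(train, key=lambda ex: sum((a - b) ** 2
                                            for a, b in zip(ex[0], point)))
    return nearest[1]

# Labelled examples: (features, label)
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((8.0, 9.0), "large"), ((9.5, 8.5), "large")]

print(predict(train, (1.1, 0.9)))  # a point near the "small" examples
print(predict(train, (9.0, 9.0)))  # a point near the "large" examples
```

Adding more labelled examples changes the predictions without changing a line of code, which is the essence of the definition above.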
Machine learning: Replicating Human Brain (Nishant Jain)
The slides show how humans make decisions and how, following the same pattern, machines are trained to learn and make decisions. They give an overview of all the steps involved in designing an efficient decision-making machine.
Linguistic Considerations of Identity Resolution (2008), by David Murgatroyd
Identity resolution systems indicate whether two individuals really are the same person. Identity retrieval systems help you find the individual you're after. These systems appear anywhere from analysts' desks to border crossings. But how can you tell if a system is any good before it's deployed? You need to understand the problems it should tackle and how to measure how well it's doing.
This talk considers metrics and data for evaluating identity resolution and retrieval systems. It also explores the linguistic challenges these systems face.
What if we could measure the indirect costs of pain building up on a software project? What if we could measure the effects of learning curves, collaboration pain, and problems building up in the code?
We could:
Identify the highest leverage opportunities for improvement
Make the case to management that budget should be allocated for a solution
Lead the organization in making better decisions with a data-driven feedback loop to guide the way
Several years ago, I stumbled into a solution for measuring the growing “friction” in developer experience. Visibility turned my world upside-down.
We've been trying to explain the pain of Technical Debt for generations, but we've never been able to measure it. Visibility introduces a whole new world of possibilities.
In this talk, I'll show you what I'm measuring, how exactly I'm measuring it, then we'll talk through the implications for our teams, our organizations, and our industry.
We can identify the highest leverage improvement opportunities and steer our projects with a data-driven feedback loop.
We can break down the "wall of ignorance" between developers and management by defining an explicit language for managing technical risk.
We can teach the art of software development with a data-driven feedback loop and codify our knowledge into sharable decision principles.
We can revolutionize our business accounting methods to take the pain of software development into account, so the costs and risks are visible at the highest levels of the organization.
We can conquer the challenges across the software industry by working together, learning together, and sharing our knowledge with the world.
With visibility, we can start a revolution in data-driven learning.
What makes software development complex isn't the code, it's the humans. The most effective way to improve our capabilities in software development is to better understand ourselves.
In this talk, I'll introduce a conceptual model for human interaction, identity, culture, communication, relationships, and learning based on the foundational model of Idea Flow. If you were to write a simulator to describe the interaction of humans, this talk would describe the architecture.
Learn how to understand the humans on your team and fix the bugs in communication, by thinking about your teammates like code!
I'm not a scientist or a psychologist. These ideas are based on a combination of personal experience, reading lots of cognitive science books, and a couple years of running experiments on developers. As I struggled through the challenges of getting a software concept from my head to another developer's head (interpersonal Idea Flow), I learned a whole lot about human interaction.
As software developers, we have to work together, think together, and solve problems together to do our jobs. Code? We get it. Humans? WTF?!
Fortunately, humans are predictably irrational, predictably emotional, and predictably judgmental creatures. Of course those pesky humans will always do a few unexpected things, but once we know the algorithm for peace and harmony among humans, we can start debugging the communication problems on our team.
Course 2: Machine Learning Data Lifecycle in Production - Week 1 (Ajay Taneja)
These are notes for the Machine Learning Engineering in Production course: Week 1 of Machine Learning Data Lifecycle in Production, which is Course 2 of the MLOps specialization on Coursera.
Reviewing progress in the machine learning certification journey
Special Addition - Short tech talk on How to Network by Qingyue (Annie) Wang
Content review on AI and ML on Google Cloud by Margaret Maynard-Reid
A focused content review on ML problem framing, model evaluation, and fairness by Sowndarya Venkateswaran.
A discussion on sample questions to aid certification exam preparation.
An interactive Q&A session to clarify doubts and questions.
Previewing next steps and topics, including course completions and material reviews.
Data Science for Business Managers - An intro to ROI for predictive analytics (Akin Osman Kazakci)
This module addresses critical business aspects of launching a predictive analytics project. It discusses how to establish the relationship with business KPIs, and introduces the notion of a "data hunt" for planning and acquiring external data for better predictions. Model quality and its role in the ROI of data and prediction tasks are explained. The module concludes with a glimpse of how collaborative data challenges can improve predictive model quality in no time.
A practical guide for startups to drive growth and innovation.
Denver Startup Week Product Track presentation by Argie Angeleas, Taylor Names, Matt Reynolds
The 4 Machine Learning Models Imperative for Business Transformation (RocketSource)
Machine learning is hot right now and for good reason. We're going to break down what you need to know about what goes into a model and give you four machine learning models your business should have in production right now.
Machine Learning vs Decision Optimization comparison (Alain Chabrier)
Data science is an interdisciplinary field that uses scientific methods, processes, and systems to extract knowledge or insights from data in various forms, either structured or unstructured.
The data science community is made up of people coming from different areas who do not always understand each other. Everyone uses their own concepts and does not always understand how those concepts map when applied to other techniques.
In particular, Machine Learning experts do not always understand how Decision Optimization concepts map to, or differ from, their own.
Afternoons with Azure - Azure Machine Learning (CCG)
A journey through programming languages, such as R and Python, that can be used for machine learning. Next, explore Azure Machine Learning Studio and see the interconnectivity.
For more information about Microsoft Azure, call (813) 265-3239 or visit www.ccganalytics.com/solutions
This talk covers the PM framework needed to lead AI incubations. Product school webinar video at https://www.linkedin.com/video/live/urn:li:ugcPost:6690684172895322113/
DataTalkClub Conference, Feb 12 2021
Creating a machine learning model is not an easy task.
Creating a useful machine learning model that gets into production and generates actual business value - is an even harder one.
There are many ways for an ML project or product to fail even when the data is there and the model technically performs well. From the wrong problem statement to lack of trust from stakeholders, in this talk I will discuss what issues to look out for, and how to avoid them.
AI TESTING: ENSURING A GOOD DATA SPLIT BETWEEN DATA SETS (TRAINING AND TEST) ... (ijsc)
Artificial Intelligence and Machine Learning have been around for a long time. In recent years, there has been a surge in popularity for applications integrating AI and ML technology. As with traditional development, software testing is a critical component of a successful AI/ML application. The development methodology used in AI/ML contrasts significantly from traditional development. In light of these distinctions, various software testing challenges arise. The emphasis of this paper is on the challenge of effectively splitting the data into training and testing data sets. By applying a k-Means clustering strategy to the data set followed by a decision tree, we can significantly increase the likelihood of the training data set to represent the domain of the full dataset and thus avoid training a model that is likely to fail because it has only learned a subset of the full data domain.
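The clustering idea from the abstract above can be sketched in a few lines. The following is an illustrative, hedged sketch (plain Python, not the authors' code): cluster the data with k-means, then split each cluster proportionally between training and test sets, so the training set covers the whole data domain. The decision-tree step described in the paper is omitted here.

```python
# Cluster-stratified train/test split: k-means first, then a proportional
# split inside each cluster, so training data spans the full data domain.
# Illustrative sketch only; parameters (k, test_frac) are made-up examples.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: sum((a - b) ** 2
                                                for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        # Recompute each centroid; keep the old one if a cluster went empty.
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

def stratified_split(points, k=2, test_frac=0.25, seed=0):
    rng = random.Random(seed)
    train, test = [], []
    for cluster in kmeans(points, k, seed=seed):
        rng.shuffle(cluster)
        cut = int(len(cluster) * test_frac)
        test.extend(cluster[:cut])
        train.extend(cluster[cut:])
    return train, test

data = [(x / 10.0, float(x % 3)) for x in range(40)]
train, test = stratified_split(data, k=4)
print(len(train), len(test))
```

Because every cluster contributes to both sets, no region of the data domain is absent from training, which is the failure mode the paper targets.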
Choose the Right Problems to Solve with ML, by Spotify PM (Product School)
Main takeaways:
-What problems are best solved with ML and what problems are NOT
-What you need to understand and how technical you need to get as a PM of an ML product
A workshop to demonstrate how we can apply agile and continuous delivery principles to continuously deliver value in machine learning and data science projects.
Code: https://github.com/davified/ci-workshop-app
Machine intelligence data science methodology 060420 (Jeremy Lehman)
Machine learning and artificial intelligence project methodology that focuses on business results, builds alignment across the entire business, and forms enduring capabilities.
Machine learning: A Walk Through School Exams (Ramsha Ijaz)
When it comes to studying, Machines and Students have one thing in common: Examinations. To perform well on their final evaluations, humans require taking classes, reading books and solving practice quizzes. Similarly, machines need artificial intelligence to memorize data, infer feature correlations, and pass validation standards in order to solve almost any problem. In this quick introductory session, we'll walk through these analogies to learn the core concepts behind Machine Learning, and why it works so well!
Leveraging AI the Right Way (for Product Managers), by David Murgatroyd
Artificial Intelligence is transforming almost every kind of product as innovative techniques receive deserved attention. But careful leadership from Product Managers is crucial in turning that innovation into something that’s not only valuable but that also respects your own values. This talk provides frameworks to identify where AI can impact our products in the ways we want and to maximize that impact throughout the product life cycle.
Applying machine learning to a particular business need becomes more straightforward with each technological advance. But today’s businesses have a variety of needs which are too numerous to be addressed one-at-a-time and too different to be addressed one-size-fits-all. We examine three significant challenges to building an effective ML portfolio and ways to address them thru the framework of the ML product lifecycle.
Machine Learning is transforming every industry with innovative techniques receiving deserved attention. But turning innovation into value requires integrating into practical technology products, often with the leadership of product managers. We'll talk about how to help your friendly neighborhood Product Owner: identify where ML can make a difference, develop metrics to validate and refine it, identify data to feed it, prioritize work to develop it, and structure teams to deliver it in a satisfying way.
Delivered at the 2017 Missions Conference of Park Street Church, Boston
Summary:
* In deciding if we're using tech well, ask if it's improving our relationship with Our Loved Ones, Our Skills and Gifts, Our Bodies, Our World, and Our God
* In deciding if our building tech is improving lives, ask if it's doing so for our users, our team, and ourselves.
* The way to build tech well is to Know God better than Tech, Choose employers based on values, Seek purpose, not just craft or team, and Consider who's underserved
Think about these things when choosing a job, especially in technology:
Purpose
Mastery
Autonomy
(these first three were well articulated by Daniel Pink in his book Drive)
Culture
Domain
Effectiveness
Compensation
For HLT applications where error reduction is mission-critical, like name matching, combining multiple systems can provide significant opportunities for improvement. However, such combination can also bring significant challenges in execution. This talk explores how to identify when the opportunities justify the challenges and how to take system combination from idea to implementation.
We all know normalization is crucial to delivering high quality search results. We don’t want uninteresting variations between the query and the document to lead to missed hits (e.g., “celebrity” v. “celebrities”). Normalization of dictionary words is well understood, but what if your application focuses on names? Whether you’re tackling patent examination, sports records, e-commerce, watchlist screening or many other topics, names are often the key. Can your users find “Abdul Jabbar, Karim” if they search for “Kareem AbdalJabar” or “كريم عبد الجبار”? Solr application architects have attempted to address this through custom integration of nickname lists, edit distance, case normalization, phonetic encoding and n-grams (see example #1 or example #2), but doing so requires significant effort and may not address all desired variations. A simpler approach is to use a Solr field type for names that handles these linguistic nuances behind-the-scenes. We’ll talk about how we built this sort of field type via a Solr plug-in for the Rosette Name Indexer. We’ll also discuss examples of use cases this has enabled, how it can be tuned if necessary, and how it connects to the broader trend of entity-centric search.
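The ingredients listed above (nickname lists, case normalization, n-grams) can be combined into a toy name matcher. The following is an illustrative sketch only, not the Rosette Name Indexer implementation; the nickname table and the use of bigram Jaccard overlap are made-up examples:

```python
# Toy name matching: case folding, a nickname table, token reordering, and
# character-bigram Jaccard overlap. Illustrative sketch only; a real system
# would add phonetic encoding, edit distance, and transliteration handling.

NICKNAMES = {"bob": "robert", "bill": "william", "kareem": "karim"}

def normalize(name):
    tokens = name.casefold().replace(",", " ").split()
    tokens = [NICKNAMES.get(t, t) for t in tokens]
    return sorted(tokens)  # order-insensitive: "Last, First" == "First Last"

def ngrams(s, n=2):
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def similarity(a, b):
    ga = ngrams(" ".join(normalize(a)))
    gb = ngrams(" ".join(normalize(b)))
    return len(ga & gb) / len(ga | gb)  # Jaccard overlap of bigrams

print(similarity("Abdul Jabbar, Karim", "Kareem Abdul Jabbar"))  # → 1.0
```

Packaging this kind of pipeline behind a single Solr field type, as the talk describes, is what spares each application architect from re-integrating these pieces by hand.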
Entity extraction finds names in documents, providing important raw material for big decisions. But finding all mentions of the name “George Bush” is very different than finding all mentions of the 43rd US President. Making big decisions from big data is hopeless unless analytics advance from providing snippets of text to providing statements of truth. Such advances present challenges both of accuracy and of usability. We’ll explore these challenges and demonstrate ways of addressing them.
http://basistechweek.com/hlt.html
There's never been a more exciting time to be involved in Human Language Technology (HLT). Advances in algorithms, architectures, and applications are making real differences in fulfilling missions around the world. We'll use the perspective of one specific, end-to-end use case starting from primary source collection going all the way through finished intelligence to show the value and importance of moving your HLT thinking from strings to things, from configuration to adaption, from isolation to collaboration, and from small scale to Big Text. This perspective will serve as a guide to the other talks of the day which together will give you greater insight in applying HLT to your mission.
Kubernetes & AI - Beauty and the Beast!? @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for, or limiting to, your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
6. Problem: I can’t fully specify the behavior I want.
Solution: Machine Learning
7. Where does machine learning fit in the technology universe?
Valuable
“... a star of the Data Science orchestra.” - John Mount, Win-Vector
Central
“... the new algorithms ... at the heart of most of what computer science does.” - Hal Daumé III, U. Maryland Professor
Last Resort
“… for cases when the desired behavior cannot be effectively expressed in software logic without dependency on external data.” - D. Sculley et al., Google
8. Where does machine learning fit in developing technology?
[Diagram: Stuff to do now / Demonstrable Value / Stuff to do]
9. How does machine learning affect value demonstration?
Demonstrable Value
Distill business goal into a repeatable, balanced metric.
Measure on the most representative data you can get.
Distinguish intrinsic errors from implementation bugs.
Let your customer override the model when they absolutely must get some answer.
10. Distill business goal into a repeatable, balanced metric.
Demonstrable Value
Business goals in our example:
● fewer incorrect candidates sent to analysts for review
● no increased volume of work for analysts
● confidence to help analysts prioritize
Example metric: area under an error trade-off curve based on confidence, constrained to max volume. Sometimes called an ‘overall evaluation criteria’ (OEC).
Note that the more skewed the OEC (e.g., if # of positives varies by day and season), the more samples are required to be sure of statistical significance.
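A minimal sketch of such an OEC (all names and numbers are hypothetical): sweep confidence thresholds, drop any operating point that exceeds the volume cap, and average precision across the admissible points as a crude stand-in for area under the curve.

```python
def oec(scored, max_volume):
    """scored: list of (confidence, is_correct) pairs, one per candidate.
    Returns mean precision over thresholds whose kept volume <= max_volume."""
    points = []
    for threshold in sorted({c for c, _ in scored}):
        kept = [ok for c, ok in scored if c >= threshold]
        if len(kept) <= max_volume:            # the volume constraint
            points.append(sum(kept) / len(kept))
    # crude 'area': mean precision across admissible thresholds
    return sum(points) / len(points) if points else 0.0

scored = [(0.9, True), (0.8, True), (0.6, False), (0.4, True), (0.2, False)]
print(oec(scored, max_volume=3))
```

In this toy run only the three highest thresholds respect the cap, so the score averages their precisions; a skewed class balance would make this estimate noisy, which is exactly the sample-size caveat above.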
11. Measure on the most representative data you can get.
Demonstrable Value
Considerations when selecting data:
● online v offline: A/B test in production with feature flags (one or two variables at a time, agile-y) vs. a stable data set
● implicit v explicit: implicit can correlate more with value but omits unseen states
● broad v targeted: if explicitly annotating, consider targeting based on diagnostic value or where systems disagree
Resist the temptation to ‘clean’ data -- you may kill it. Instead, include normalization in your model.
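A hedged illustration of “normalization in your model” (the `ZScoreNormalizer` class is invented for this sketch): the raw data stays untouched, and the fitted transformation travels with the model so training and serving see the same view.

```python
class ZScoreNormalizer:
    """Normalization as a model step, not a data-cleaning step."""

    def fit(self, xs):
        n = len(xs)
        self.mean = sum(xs) / n
        var = sum((x - self.mean) ** 2 for x in xs) / n
        self.std = var ** 0.5 or 1.0   # guard against zero variance
        return self

    def transform(self, xs):
        return [(x - self.mean) / self.std for x in xs]

raw = [2.0, 4.0, 6.0]               # raw feature values, left as-is
norm = ZScoreNormalizer().fit(raw)
print(norm.transform([4.0]))         # the model sees normalized inputs
```

Because the statistics are stored on the fitted object rather than baked into a "cleaned" copy of the data, retraining on new data automatically refreshes them.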
12. Distinguish intrinsic errors from implementation bugs.
Demonstrable Value
Distinction:
● Error: incorrect output from a model despite the model being correctly implemented.
● Bug: incorrect implementation, doing something other than what was intended.
Useful to manage expectations about quality and effort required to improve/fix.
Providing an explanation for output can help make this distinction.
13. Let your customer override the model when they absolutely must get some answer.
Demonstrable Value
Varieties of overrides:
● Always give this answer.
● Never give this answer.
Can apply for sub-models or overall.
Beware of the potential for ‘whack-a-mole’.
Feel sad every time they use it.
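One way such overrides might be layered on top of a model (the wrapper and its rules are illustrative, not a prescribed design):

```python
def predict_with_overrides(model, x, always=None, never=None):
    always, never = always or {}, never or set()
    if x in always:                  # "Always give this answer."
        return always[x]
    answer = model(x)
    if answer in never:              # "Never give this answer."
        return None                  # caller falls back, e.g. to a second choice
    return answer

# Toy model for illustration only.
model = lambda text: "spam" if "win" in text else "ham"
print(predict_with_overrides(model, "you win!", never={"spam"}))
print(predict_with_overrides(model, "hello", always={"hello": "ham"}))
```

Keeping the override table outside the model makes the ‘whack-a-mole’ risk visible: the table's growth is itself a signal that the model needs attention.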
14. Where does machine learning fit in developing technology?
[Diagram: Stuff to do now / Demonstrable Value / Stuff to do]
15. How does machine learning affect team organization?
Machine Learning Expert
Spectrum of options between:
● Integrate machine learning expertise in every team that needs it.
● Separate it into an independent, specialist team.
16. Option 1: integrated teams with cross-team interest groups
Encourages alignment with business goals.
Challenges machine learning collaboration, depth and reuse.
Best for small, diverse products.
17. Option 2: independent machine learning team delivering models
Encourages machine learning collaboration, depth and reuse.
Challenges alignment with business goals.
Best for products with large, complex model(s).
18. How does machine learning affect iteration structure?
Pros for shorter:
● More, simpler experiments are better than fewer complex ones
● The value of machine learning leads to a high cost of delay
Pros for longer:
● Innovation takes deep thinking
● More time to control technical-debt creation
19. Where does machine learning fit in developing technology?
[Diagram: Stuff to do now / Demonstrable Value / Stuff to do]
20. How does machine learning affect chunks of work?
Stuff to do now
Focus on experiments following the scientific method: hypothesis, measurement and error analysis.
Continuously test for regression versus expected measurements.
Decouple functional tests from model variations.
22. Continuously test for regression versus expected measurements.
Stuff to do now
With machine learning’s dependence on data, changing anything changes everything. This makes it the “high-interest credit card of technical debt”.
Determine what counts as a significant change, including looking at the aggregate effect across different data sets.
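A sketch of what such a regression check could look like in practice; the datasets, metric values, and 0.02 tolerance are all made up for illustration:

```python
def check_regression(metric_by_dataset, baseline_by_dataset, tolerance=0.02):
    """Flag datasets whose metric dropped more than `tolerance` below its
    baseline, and report the mean change aggregated across all datasets."""
    drops = {name: baseline_by_dataset[name] - m
             for name, m in metric_by_dataset.items()}
    significant = {name: d for name, d in drops.items() if d > tolerance}
    aggregate = sum(drops.values()) / len(drops)
    return significant, aggregate

current  = {"news": 0.91, "tweets": 0.84, "email": 0.88}
baseline = {"news": 0.90, "tweets": 0.88, "email": 0.88}
print(check_regression(current, baseline))
```

Asserting on a metric with a tolerance band, rather than on exact outputs, is what lets this test survive innocuous model retraining while still catching real regressions.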
23. Decouple functional tests from model variations.
Stuff to do now
Options:
● Black-box style: enforce “can’t be wrong” (“earmark”) input/output pairs. Might lead to spurious test failures.
● Clear-box style: use a mock implementation of the model that produces expected answers.
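The clear-box option might look like this sketch, with a hypothetical `MockModel` standing in for the real one so functional tests of the surrounding pipeline survive model retraining:

```python
class MockModel:
    """Fixed-answer stand-in for the trained model, used only in tests."""

    def __init__(self, answers):
        self.answers = answers

    def predict(self, text):
        return self.answers[text]

def tag_document(model, text):
    # The 'pipeline' under test: the plumbing around the model's output.
    return {"text": text, "label": model.predict(text)}

mock = MockModel({"hello": "greeting"})
print(tag_document(mock, "hello"))
```

Because the mock's answers are fixed, a failure here points at the plumbing, never at the model -- exactly the error-versus-bug distinction from slide 12.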
26. Where does machine learning fit in developing technology?
[Diagram: Stuff to do now / Demonstrable Value / Stuff to do]
27. How does machine learning affect prioritization?
Stuff to do
Do we need more training data?
Do we need a richer representation of our data?
Do we need a combination of models?
How much could improving a sub-component of the model help?
What development milestones should we target?
28. Do we need more training data?
Stuff to do
The learning curve implies adding training data should bring the test error down closer to the desired level.
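An illustrative reading of such a learning curve (the numbers and the `more_data_likely_to_help` heuristic are invented for this sketch): if test error is still falling as training size grows and remains above the target, more data is a plausible next step.

```python
learning_curve = [            # (training size, train error, test error)
    (100,  0.05, 0.30),
    (500,  0.07, 0.22),
    (2000, 0.08, 0.16),
]

def more_data_likely_to_help(curve, desired_test_error):
    (_, _, te_prev), (_, tr_last, te_last) = curve[-2], curve[-1]
    still_improving = te_last < te_prev        # test error still falling
    gap = te_last - tr_last                    # generalization gap remains
    return still_improving and te_last > desired_test_error and gap > 0.02

print(more_data_likely_to_help(learning_curve, desired_test_error=0.10))
```

If the curve had flattened with the target unmet, that would instead point at the next two questions: a richer representation or a different model.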
29. Do we need a richer representation of our data?
Stuff to do
The learning curve implies adding data won’t help, but a richer data representation may.
Could be more features identified by someone with domain expertise analyzing errors. Though remember, more features often means less speed.
Could require a new model if the domain information identified is not representable in the existing one.
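A toy illustration of a richer representation (both feature functions are hypothetical): a length-only representation cannot distinguish these two samples, but adding one domain-motivated feature makes them separable.

```python
def features_v1(text):
    return [len(text)]                                    # length only

def features_v2(text):
    return [len(text), sum(ch.isdigit() for ch in text)]  # + domain feature

samples = ["ab12", "abcd"]
print([features_v1(t) for t in samples])   # identical vectors: inseparable
print([features_v2(t) for t in samples])   # distinct vectors: separable
```

No amount of extra data fixes the v1 representation, which is the learning-curve signal this slide describes.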
30. Do we need a combination of models?
Stuff to do
The learning curve implies the model is overfitting the training set.
Consider training multiple models on random subsets of the data and combining them at runtime to decrease the variance while retaining a low bias -- presuming you can spend the compute.
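The subset-combination idea sketched with a deliberately trivial threshold "learner" (everything here is illustrative): each model trains on a random subset, and predictions are combined by majority vote.

```python
import random

def train_threshold(data):
    """Pick a threshold halfway between the classes on this subset."""
    pos = [x for x, y in data if y == 1]
    neg = [x for x, y in data if y == 0]
    return (min(pos) + max(neg)) / 2 if pos and neg else 0.5

def bagged_predict(thresholds, x):
    votes = sum(1 for t in thresholds if x >= t)
    return 1 if votes > len(thresholds) / 2 else 0

random.seed(0)
data = [(x / 10, 1 if x >= 5 else 0) for x in range(10)]
models = [train_threshold(random.sample(data, 6)) for _ in range(5)]
print(bagged_predict(models, 0.9), bagged_predict(models, 0.1))
```

Each individual threshold wobbles with its subset; the vote averages that wobble out, which is the variance reduction the slide is after.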
31. How much could improving a sub-component of the model help?
Stuff to do
Build an ‘oracle’ for the sub-component -- something that takes perfect output from data.
Annotate to get that perfect output on some test data to feed the oracle.
Measure the overall system with the oracle turned on.
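A sketch of the oracle measurement (the tagger, tokens, and annotations are all invented): swap the sub-component for annotated perfect output and re-measure end to end; the gap bounds how much fixing that sub-component could help.

```python
gold_tags = {"Paris": "LOC", "Acme": "ORG"}   # annotated perfect output

def real_tagger(token):
    return "ORG"                               # flawed: tags everything ORG

def oracle_tagger(token):
    return gold_tags[token]                    # perfect, straight from annotation

def system_accuracy(tagger, tokens, gold):
    return sum(tagger(t) == gold[t] for t in tokens) / len(tokens)

tokens = ["Paris", "Acme"]
print(system_accuracy(real_tagger, tokens, gold_tags))    # current quality
print(system_accuracy(oracle_tagger, tokens, gold_tags))  # ceiling if fixed
```

Here the oracle run shows the sub-component is worth up to 0.5 accuracy -- a cheap way to prioritize before investing in the fix.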
32. What development milestones should we target?
Stuff to do
Make it…
● Glued-together with some rules (Prototype)
● Function (Alpha)
● Measurable & inspectable (early Beta)
● Accurate, not slow, nice demo, documented & configurable (late Beta)
● Simple & fast (GA)
● Handle new kinds of input (post-GA)
33. Questions?
[Diagram: Stuff to do now / Demonstrable Value / Stuff to do]
Suggested questions:
Say more about integrating domain expertise?
Say more about online vs. offline testing?
How to manage acquiring data?
How to recruit machine learning folks?
What bad habits can ML enable?
Where can I try your stuff? api.rosette.com
You hiring? Yes - basistech.com/careers/
@dmurga
36. Recruiting machine learning experts
Who:
◦ expertise in sequence models > in domain
◦ depth in a specific model > breadth over many
Where to find them:
◦ local network: meet-ups, LinkedIn
◦ academic conferences
◦ communities (e.g., Kaggle, users of ML tools)
How to attract them:
◦ explain the purpose & uniqueness of the problem
37. Online vs. offline evaluation
Online (e.g., A/B)
● Individual decisions need to not be mission critical
● Enough use to get sufficient statistics in a short time
● Helps motivate aligning production and development environments
● If the model is updated online, validate it against offline data periodically to watch out for drift
● Usually focused on extrinsic or distant measures
Offline
● Always have some of this for long-term protection against regression
● May be required for intrinsic measurement
38.
Epistemology              Exact sciences     Experimental sciences  Engineering  Art
Example ...               Theoretical C.S.   Physics                Software     Management
Deals with ...            Theorems           Theories               Artifacts    People
Truth is ...              Forever            Temporary              “It works”   In the eye of the beholder
Parts of ML fit all four  Learning theory    Model & measure        Systems      Users
This is great, as long as we don’t confuse one kind of work for another.
(This table is an expansion of one in Bottou’s ICML 2015 talk.)
Editor's Notes
Balance:
Consistency v correctness
Extrinsic v intrinsic
Interpretability v correctness
Precision v recall (volume)
Exploitation v exploration
Data:
Historic
Diagnostic
Online v offline
For online A/B tests, choose control.
Oracle
Experiments both for data collection and speed (esp of adding caches)