Personalized defect prediction models can more accurately predict buggy changes. The researchers propose two personalized approaches:
1) Personalized Change Classification (PCC) trains a separate model for each developer using their change history.
2) Confidence-based Hybrid PCC (PCC+) combines the predictions of the general change classification (CC) model and the PCC model, selecting whichever has the higher confidence.
The approaches were evaluated on six projects; by inspecting only 20% of code locations, the personalized models found up to 155 more bugs than non-personalized models. PCC and PCC+ consistently outperformed the baseline across different settings, demonstrating the benefits of personalization.
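The confidence-based selection in PCC+ can be sketched in a few lines of Python. The function name and the (label, confidence) interface are illustrative assumptions, not the authors' implementation:

```python
def pccplus_predict(cc_prediction, pcc_prediction):
    """Hybrid prediction: pick whichever model is more confident.

    Each prediction is a (label, confidence) pair, e.g. ("buggy", 0.83).
    This mirrors the confidence-based selection idea; the underlying
    models are classifiers trained on change features, abstracted away here.
    """
    label_cc, conf_cc = cc_prediction
    label_pcc, conf_pcc = pcc_prediction
    return label_cc if conf_cc >= conf_pcc else label_pcc

# The general model weakly says "clean"; the personalized model is more
# confident the change is buggy, so the hybrid follows the latter.
print(pccplus_predict(("clean", 0.55), ("buggy", 0.80)))  # buggy
```

In this toy interface, ties go to the general CC model; the paper does not specify tie-breaking, so that choice is arbitrary.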
Partitioning Composite Code Changes to Facilitate Code Review (MSR 2015), Sung Kim
Yida's presentation at MSR 2015!
Abstract—Developers expend significant effort on reviewing source code changes, hence the comprehensibility of code changes directly affects development productivity. Our prior study has suggested that composite code changes, which mix multiple development issues together, are typically difficult to review. Unfortunately, our manual inspection of 453 open source code changes reveals a non-trivial occurrence (up to 29%) of such composite changes.
In this paper, we propose a heuristic-based approach to automatically partition composite changes, such that each sub-change in the partition is more cohesive and self-contained. Our quantitative and qualitative evaluation results are promising in demonstrating the potential benefits of our approach for facilitating code review of composite code changes.
Cross-project Defect Prediction Using A Connectivity-based Unsupervised Class..., Feng Zhang
Defect prediction on projects with limited historical data has attracted great interest from both researchers and practitioners. Cross-project defect prediction, which reuses classifiers trained on other projects, has been the main avenue of progress. However, existing approaches require some degree of homogeneity (e.g., a similar distribution of metric values) between the training projects and the target project. Satisfying this homogeneity requirement often takes significant effort and is currently a very active area of research.
An unsupervised classifier does not require any training data, therefore the heterogeneity challenge is no longer an issue. In this paper, we examine two types of unsupervised classifiers: a) distance-based classifiers (e.g., k-means); and b) connectivity-based classifiers. While distance-based unsupervised classifiers have been previously used in the defect prediction literature with disappointing performance, connectivity-based classifiers have never been explored before in our community.
We compare the performance of unsupervised classifiers versus supervised classifiers using data from 26 projects from three publicly available datasets (i.e., AEEEM, NASA, and PROMISE). In the cross-project setting, our proposed connectivity-based classifier (via spectral clustering) ranks as one of the top classifiers among five widely-used supervised classifiers (i.e., random forest, naive Bayes, logistic regression, decision tree, and logistic model tree) and five unsupervised classifiers (i.e., k-means, partition around medoids, fuzzy C-means, neural-gas, and spectral clustering). In the within-project setting (i.e., models are built and applied on the same project), our spectral classifier ranks in the second tier, while only random forest ranks in the first tier. Hence, connectivity-based unsupervised classifiers offer a viable solution for cross and within project defect predictions.
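To make the idea of an unsupervised, connectivity-based classifier concrete, here is a toy sketch. It replaces the paper's spectral clustering with a simpler threshold graph plus connected components, and uses the common heuristic of labelling the cluster with larger metric values as defective; all names and numbers are illustrative:

```python
from collections import deque

def connectivity_classify(instances, threshold=1.0):
    """Toy connectivity-based unsupervised defect classifier.

    Instances are metric vectors. Two instances are connected when their
    Euclidean distance is below `threshold`; connected components of this
    graph form the clusters (a simplification of the paper's spectral
    clustering). The cluster with the larger average metric sum is
    labelled defective (1), the other clean (0), following the usual
    heuristic that defect-prone modules tend to have higher metric values.
    """
    n = len(instances)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    component = [-1] * n
    comps = 0
    for start in range(n):
        if component[start] != -1:
            continue
        component[start] = comps
        queue = deque([start])
        while queue:  # BFS over the threshold graph
            i = queue.popleft()
            for j in range(n):
                if component[j] == -1 and dist(instances[i], instances[j]) < threshold:
                    component[j] = comps
                    queue.append(j)
        comps += 1
    # Label the component with the highest mean metric sum as defective.
    totals, sizes = [0.0] * comps, [0] * comps
    for i in range(n):
        totals[component[i]] += sum(instances[i])
        sizes[component[i]] += 1
    defective = max(range(comps), key=lambda c: totals[c] / sizes[c])
    return [1 if component[i] == defective else 0 for i in range(n)]

metrics = [[0.1, 0.2], [0.2, 0.1], [5.0, 5.1], [5.2, 5.0]]
print(connectivity_classify(metrics))  # [0, 0, 1, 1]
```

Real metric data is high-dimensional and noisy, which is why the paper relies on spectral clustering rather than a fixed distance threshold.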
Developers often wonder how to implement a certain functionality (e.g., how to parse XML files) using APIs. Obtaining an API usage sequence based on an API-related natural language query is very helpful in this regard. Given a query, existing approaches utilize information retrieval models to search for matching API sequences. These approaches treat queries and APIs as bags-of-words and lack a deep understanding of the semantics of the query.
We propose DeepAPI, a deep learning based approach to generate API usage sequences for a given natural language query. Instead of a bag-of-words assumption, it learns the sequence of words in a query and the sequence of associated APIs. DeepAPI adapts a neural language model named RNN Encoder-Decoder. It encodes a word sequence (user query) into a fixed-length context vector, and generates an API sequence based on the context vector. We also augment the RNN Encoder-Decoder by considering the importance of individual APIs. We empirically evaluate our approach with more than 7 million annotated code snippets collected from GitHub. The results show that our approach generates largely accurate API sequences and outperforms the related approaches.
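The bag-of-words retrieval baseline that DeepAPI improves on can be sketched as follows; the tiny annotated corpus and the Jaccard scoring are illustrative assumptions:

```python
def bag_of_words_search(query, corpus):
    """Retrieve the API sequence whose annotation best matches the query.

    `corpus` maps a natural-language annotation to an API sequence.
    Word order and semantics are ignored: annotations are scored only by
    Jaccard overlap of word sets, which is exactly the limitation that
    DeepAPI's sequence-to-sequence model addresses.
    """
    q = set(query.lower().split())
    def jaccard(text):
        words = set(text.lower().split())
        return len(q & words) / len(q | words)
    best = max(corpus, key=jaccard)
    return corpus[best]

corpus = {
    "parse xml file": ["DocumentBuilderFactory.newInstance",
                       "DocumentBuilder.parse"],
    "copy a file": ["FileInputStream.new", "FileOutputStream.new"],
}
print(bag_of_words_search("how to parse XML files", corpus))
```

Because the scorer ignores word order and meaning, near-miss queries easily retrieve the wrong sequence; that gap is what the encoder-decoder approach targets.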
Code Coverage and Test Suite Effectiveness: Empirical Study with Real Bugs in..., Pavneet Singh Kochhar
In this paper, we analyse two large software systems to measure the relationship between code coverage and test suite effectiveness at detecting real bugs in these systems.
This talk covers the process of using Coverity to carry out static analysis of open source projects in order to find bugs and improve the code base.
Build systems orchestrate how human-readable source code is translated into executable programs. In a software project, source code changes can induce changes in the build system (aka. build co-changes). It is difficult for developers to identify when build co-changes are necessary due to the complexity of build systems. Prediction of build co-changes works well if there is a sufficient amount of training data to build a model. However, in practice, for new projects, there exists a limited number of changes. Using training data from other projects to predict the build co-changes in a new project can help improve the performance of the build co-change prediction. We refer to this problem as cross-project build co-change prediction.
In this paper, we propose CroBuild, a novel cross-project build co-change prediction approach that iteratively learns new classifiers. CroBuild constructs an ensemble of classifiers by iteratively building classifiers and assigning them weights according to their prediction error rates. Given that only a small proportion of code changes are build co-changing, we also propose an imbalance-aware approach that learns a threshold boundary between those code changes that are build co-changing and those that are not in order to construct classifiers in each iteration. To examine the benefits of CroBuild, we perform experiments on 4 large datasets, namely Mozilla, Eclipse-core, Lucene, and Jazz, comprising a total of 50,884 changes. On average, across the 4 datasets, CroBuild achieves an F1-score of up to 0.408. We also compare CroBuild with other approaches such as a basic model, AdaBoost proposed by Freund et al., and TrAdaBoost proposed by Dai et al. On average, across the 4 datasets, the CroBuild approach yields an improvement in F1-scores of 41.54%, 36.63%, and 36.97% over the basic model, AdaBoost, and TrAdaBoost, respectively.
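CroBuild's exact weighting scheme is not spelled out above, but the AdaBoost-style log-odds weighting it is compared against gives the flavour of an error-rate-weighted ensemble. The following is a generic sketch under that assumption, not CroBuild itself:

```python
import math

def weighted_vote(predictions, error_rates):
    """Combine binary predictions (1 = build co-change) by weighted vote.

    Each classifier gets weight log((1 - err) / err), so more accurate
    classifiers dominate the vote. This is the standard AdaBoost-style
    weighting, used here only as a stand-in for CroBuild's own scheme.
    """
    score = 0.0
    for pred, err in zip(predictions, error_rates):
        weight = math.log((1 - err) / err)
        score += weight if pred == 1 else -weight
    return 1 if score > 0 else 0

# Two weak classifiers (45% error) say "no co-change"; one strong
# classifier (10% error) says "yes" and outweighs them both.
print(weighted_vote([0, 0, 1], [0.45, 0.45, 0.10]))  # 1
```

An imbalance-aware variant, as the abstract describes, would additionally tune the decision threshold per iteration rather than fixing it at zero.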
The promise of DevOps is that we can push new ideas out to market faster while avoiding delivering serious defects into production. Andreas Grabner explains that testers are no longer measured by the number of defect reports they enter, nor are developers measured by the lines of code they write. As a team, you are measured by how fast you can deploy high quality functionality to the end user. Achieving this goal requires testers to increase their skills. It’s all about finding solutions—not just problems. Testers must transition from reporting “app crashes” to providing details such as “memory leak caused by bad cache implementation.” Instead of reporting “it’s slow,” testers must discover “wrong hibernate configuration causes too much traffic from the database.” Using three real-life examples, Andreas illustrates what it takes for testing teams to become part of the DevOps transformation—bringing more value to the entire organization.
Driving Innovation with Kanban at Jaguar Land Rover, LeanKit
Find out how Kanban is accelerating product design and development at Jaguar Land Rover.
Watch the recorded webinar here: https://vimeo.com/172780037
Hamish McMinn, Automotive and IT Project Manager, will explain how Kanban is improving time, cost and quality across new vehicle development projects at Jaguar Land Rover.
You'll learn:
-Why new product development provides rich opportunities for continuous process improvement.
-Benefits and challenges of transferring agile software techniques to hardware design and development.
-How to visualize work, focus on flow and increase cross-functional collaboration using LeanKit.
Hamish will share learnings from the initial pilot project, and how Kanban is now being scaled across multiple engineering teams.
Vskills certification for The Grinder Testing Professional assesses the candidate as per the company’s need for load testing web applications. The certification tests the candidates on various areas in agents, workers, properties file, logging, console, TCPProxy, scripts, Jython, Clojure, instrumentation, script gallery, plug-ins, statistics, SSL and garbage collection.
How to Better Manage Technical Debt While Innovating on DevOps, Dynatrace
Forget the “Unicorns.” There is a lot to learn from “DevOps Unicorns” such as Etsy or Facebook, but for enterprises dealing with technical debt in legacy systems developed by teams no longer with the company, copying the unicorns is not an option.
Richard Dominguez, Operations Developer at Prep Sportswear, needed to “keep the lights on” for their legacy systems, while enabling his DevOps teams to launch new features much faster. Today Prep Sportswear releases more updates to their legacy systems than ever before by reducing MTTR (Mean Time To Repair), giving them more time to innovate on DevOps and Continuous Delivery on their new platform. You’ll learn:
• Top metrics for an Ops dashboard to catch potential issues early
• Tips to manage technical debt in legacy code caused by dev teams long gone
• Efficient ways to close loops while providing input to DevOps so they can optimize innovation and releases
Lessons Learned from Using Spark for Evaluating Road Detection at BMW Autonom..., Databricks
Getting cars to drive autonomously is one of the most exciting problems these days. One of the key challenges is making them drive safely, which requires processing large amounts of data. In our talk we would like to focus on only one task of a self-driving car, namely road detection. Road detection is a software component which needs to be safe for being able to keep the car in the current lane. In order to track the progress of such a software component, a well-designed KPI (key performance indicators) evaluation pipeline is required. In this presentation we would like to show you how we incorporate Spark in our pipeline to deal with huge amounts of data and operate under strict scalability constraints for gathering relevant KPIs. Additionally, we would like to mention several lessons learned from using Spark in this environment.
[Meetup] A Successful Migration from Elasticsearch to ClickHouse, Vianney FOUCAULT
Paris ClickHouse meetup 2019: how Contentsquare successfully migrated to ClickHouse!
Discover the subtleties of a migration to ClickHouse: what to check beforehand, then how to operate ClickHouse in production.
How to Build a Metrics-optimized Software Delivery Pipeline, Dynatrace
Every company is under increased pressure to deliver software faster and better. The question is: “How do I get started?” Continuous firefighting is definitely not the answer!
XebiaLabs and Dynatrace share a practical step-by-step approach to optimizing your delivery process so you can deploy better quality software faster!
Learn:
• Why you should move to a metric-driven pipeline!
• Which key quality metrics to measure and how to integrate them to catch problems earlier
• How to use, measure and report on these metrics
• How finding architectural/quality issues earlier reduces cost spent investigating them
Improving the Quality of Existing Software, Steven Smith
How do you improve the quality of your existing software, while continuing to add value for your customers? What are some heuristics and code smells you can look for, and principles and patterns you can use to guide you, as you make your software better over time instead of worse?
Accelerating Product Development FLOW: Kanban at Jaguar Land Rover, Hamish McMinn
Modern automotive engineering is extremely complex: A new vehicle can take 2-4 years to develop, with a cost of delay of about $2 million per day. Shortening feedback loops and minimizing handoff delays has massive impact in reducing product lead time.
We will cover:
• Why new product development provides rich opportunities for continuous process improvement
• Benefits and challenges of transferring Agile software techniques to hardware design and development
• How to visualize work, focus on flow and increase cross-functional collaboration using kanban
Evolution of a Vehicle After It Has Been Released: How It's Made and Managed, Samuel Festus
A research project based on the theme: evolution of a vehicle after it has been released, how it's made, and how it's managed.
Focusing on the Renault system design used in automotive manufacturing, as well as serial-life management.
Improving the Quality of Existing Software - DevIntersection April 2016, Steven Smith
How do you improve the quality of your existing software, while continuing to add value for your customers? What are some heuristics and code smells you can look for, and principles and patterns you can use to guide you, as you make your software better over time instead of worse? How can we improve our skills and techniques so that writing high quality software becomes our default, fastest way of working?
Mining Co-Change Information to Understand when Build Changes are Necessary, Shane McIntosh
As a software project ages, its source code is modified to add new features, restructure existing ones, and fix defects. These source code changes often induce changes in the build system, i.e., the system that specifies how source code is translated into deliverables. However, since developers are often not familiar with the complex and occasionally archaic technologies used to specify build systems, they may not be able to identify when their source code changes require accompanying build system changes. This can cause build breakages that slow development progress and impact other developers, testers, or even users. In this paper, we mine the source and test code changes that required accompanying build changes in order to better understand this co-change relationship. We build random forest classifiers using language-agnostic and language-specific code change characteristics to explain when code-accompanying build changes are necessary based on historical trends. Case studies of the Mozilla C++ system, the Lucene and Eclipse open source Java systems, and the IBM Jazz proprietary Java system indicate that our classifiers can accurately explain when build co-changes are necessary with an AUC of 0.60-0.88. Unsurprisingly, our highly accurate C++ classifiers (AUC of 0.88) derive much of their explanatory power from indicators of structural change (e.g., was a new source file added?). On the other hand, our Java classifiers are less accurate (AUC of 0.60-0.78) because roughly 75% of Java build co-changes do not coincide with changes to the structure of a system, but rather are instigated by concerns related to release engineering, quality assurance, and general build maintenance.
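The AUC values reported above can be computed directly from classifier scores. This rank-based formulation (equivalent to the Mann-Whitney U statistic) is a standard definition rather than code from the paper:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve, computed by pairwise comparison.

    Returns the probability that a randomly chosen positive instance
    (a change needing a build co-change) is scored higher than a
    randomly chosen negative one; ties count as half a win.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# A classifier that ranks most co-changes above non-co-changes
# gets 8 wins out of 9 pairs, i.e. AUC of about 0.89.
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))
```

An AUC of 0.5 corresponds to random guessing, which puts the paper's range of 0.60 to 0.88 in context.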
Defect, defect, defect: PROMISE 2012 Keynote, Sung Kim
Software prediction leveraging repositories has received a tremendous amount of attention within the software engineering community, including PROMISE. In this talk, I will first present great achievements in defect prediction research including new defect prediction features, promising algorithms, and interesting analysis results. However, there are still many challenges in defect prediction. I will talk about them and discuss potential solutions for them leveraging prediction 2.0.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti..., Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024, Tobias Schneck
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf, 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
JMeter webinar - integration with InfluxDB and Grafana, RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
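The JMeter-to-InfluxDB integration demonstrated in this webinar is typically wired up through JMeter's built-in Backend Listener. The sketch below shows the usual parameters (names as in recent JMeter versions; the host, database, and application values are placeholders, and your setup may differ):

```
Backend Listener implementation:
  org.apache.jmeter.visualizers.backend.influxdb.InfluxdbBackendListenerClient

Parameters (placeholder values):
  influxdbUrl   = http://localhost:8086/write?db=jmeter
  application   = my-app
  measurement   = jmeter
  summaryOnly   = false
  samplersRegex = .*
```

Grafana then reads the `jmeter` measurement from InfluxDB as a data source to chart response times and throughput in real time.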
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality, Inflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -..., DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do..., UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
PHP Frameworks: I want to break free (IPC Berlin 2024), Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
UiPath Test Automation using UiPath Test Suite series, part 3, DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
5-7. Developers are Different
[Bar chart: % of buggy changes (0-80%) for developers A, B, C, D and the average, among changes containing Modulo %, FOR, Bitwise OR, and CONTINUE. Linux Kernel, 2005-2010.]
Personalized models can improve performance.
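The per-developer rates in the chart above can be computed directly from a labelled change history. A minimal sketch (the data and author names here are illustrative, not the paper's Linux Kernel dataset):

```python
# Sketch: per-developer rate of buggy changes, as in the
# "Developers are Different" chart. Toy data, not the real history.
from collections import defaultdict

def buggy_rates(changes):
    """changes: iterable of (author, is_buggy) pairs -> {author: % buggy}."""
    totals = defaultdict(int)
    buggy = defaultdict(int)
    for author, is_buggy in changes:
        totals[author] += 1
        if is_buggy:
            buggy[author] += 1
    return {a: 100.0 * buggy[a] / totals[a] for a in totals}

history = [("A", True), ("A", False), ("B", True), ("B", True)]
print(buggy_rates(history))  # {'A': 50.0, 'B': 100.0}
```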
14. Contributions
• Personalized Change Classification (PCC)
  ✦ One model for each developer
• Confidence-based Hybrid PCC (PCC+)
  ✦ Picks predictions with highest confidence
• Evaluate on six C and Java projects
  ✦ Find up to 155 more bugs by inspecting 20% LOC
  ✦ Improve F1 by up to 0.08
16-18. What is a Change?
[Diagram: Commit 09a02f... by John Smith ("I submitted some code.") touches file1.c, file2.c, and file3.c; the added (+) and deleted (-) lines in each file form Change 1, Change 2, and Change 3.]
Change-Level: Inspect less code to locate a bug.
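The diagram above splits one commit into one change per touched file. A minimal sketch of that unit of prediction (the field names are illustrative, not the paper's exact representation):

```python
# Sketch: a commit touching several files is split into one "change" per
# file, the change-level unit used for prediction. Fields are illustrative.
from dataclasses import dataclass

@dataclass
class Change:
    commit_id: str
    author: str
    filename: str
    added: int    # lines added in this file
    deleted: int  # lines deleted in this file

def split_commit(commit_id, author, file_diffs):
    """file_diffs: {filename: (added, deleted)} -> one Change per file."""
    return [Change(commit_id, author, f, a, d)
            for f, (a, d) in file_diffs.items()]

changes = split_commit("09a02f", "John Smith",
                       {"file1.c": (3, 1), "file2.c": (1, 1), "file3.c": (2, 1)})
print(len(changes))  # 3
```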
22-24. Change Classification (CC)
[Diagram: Training phase — 1. Label changes in the software history as clean or buggy; 2. Extract features from the training instances; 3. Build a prediction model with a classification algorithm. Prediction phase — 4. Predict future instances with the model.]
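The four CC steps above can be sketched end to end. The classifier here is a tiny nearest-centroid stand-in (an assumption for illustration; the paper evaluates standard classification algorithms), and the features are toy values such as lines added, lines deleted, and files touched:

```python
# Sketch of the CC pipeline with a nearest-centroid stand-in classifier.
# Steps 1-2 are assumed done: each change is labelled (0 = clean, 1 = buggy)
# and reduced to a feature vector.

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(instances, labels):
    # Step 3: build the prediction model from labelled training instances.
    clean = [x for x, y in zip(instances, labels) if y == 0]
    buggy = [x for x, y in zip(instances, labels) if y == 1]
    return centroid(clean), centroid(buggy)

def predict(model, x):
    # Step 4: classify a future instance by its nearer class centroid.
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
    return 0 if dist(model[0]) <= dist(model[1]) else 1

X = [[10, 2, 1], [200, 50, 8], [5, 0, 1], [120, 30, 5]]  # toy features
y = [0, 1, 0, 1]
model = train(X, y)
print(predict(model, [150, 40, 6]))  # 1 -> predicted buggy
```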
26-28. Label Clean or Buggy [Sliwerski et al. '05]
[Diagram: In the revision history, a bug-fixing change (commit 1da57..., "I fixed a bug") replaces `if (i < 128)` with `if (i <= 128)` in fileA.c. Bug-fixing changes contain the keyword "fix" or the ID of a manually verified bug report [Herzig et al. '13]. The earlier change it fixes (commit 7a3bc..., "new feature"), located via git blame, is labelled as the buggy change.]
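The SZZ-style labelling [Sliwerski et al. '05] sketched above: find fix commits by their message, then blame the lines they delete back to the change that introduced them. A minimal sketch over a toy in-memory history instead of a real git repository (the data structures are assumptions):

```python
# Sketch of SZZ-style buggy-change labelling on a toy history.
import re

FIX_PATTERN = re.compile(r"\bfix(e[sd])?\b", re.IGNORECASE)

def is_bug_fixing(message):
    # Heuristic from the slide: the message contains a "fix" keyword.
    return bool(FIX_PATTERN.search(message))

def label_buggy(commits, blame):
    """commits: {id: {"msg": str, "deleted": [line_ids]}}
       blame:   {line_id: commit id that introduced the line (git blame)}"""
    buggy = set()
    for cid, c in commits.items():
        if is_bug_fixing(c["msg"]):
            for line in c["deleted"]:
                buggy.add(blame[line])  # the change blamed for this line
    return buggy

commits = {"7a3bc": {"msg": "new feature", "deleted": []},
           "1da57": {"msg": "I fixed a bug", "deleted": ["fileA.c:42"]}}
blame = {"fileA.c:42": "7a3bc"}
print(label_buggy(commits, blame))  # {'7a3bc'}
```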
53. Confidence Measure
• Bugginess
  ✦ Probability of a change being buggy
• Confidence Measure
  ✦ Comparable measure of confidence
• Select the prediction with the highest confidence.
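The PCC+ selection above can be sketched as follows. Taking confidence as the distance of the bugginess probability from 0.5 is an assumption for illustration, not necessarily the paper's exact measure:

```python
# Sketch of PCC+ confidence-based selection between the CC and PCC models.
# Confidence is modelled as |p - 0.5| (an assumption).

def confidence(p_buggy):
    return abs(p_buggy - 0.5)

def pcc_plus(p_cc, p_pcc):
    """p_cc, p_pcc: bugginess probabilities from the CC and PCC models.
       Returns (label, chosen probability); 1 = buggy, 0 = clean."""
    p = p_cc if confidence(p_cc) >= confidence(p_pcc) else p_pcc
    return (1 if p >= 0.5 else 0), p

print(pcc_plus(0.55, 0.10))  # (0, 0.1): PCC is more confident -> clean
```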
56. Research Questions
• RQ1: Do PCC and PCC+ outperform CC?
• RQ2: Does PCC outperform CC in other setups?
  ✦ Classification algorithms
  ✦ Sizes of training sets
59. Two Metrics
• F1-Score
  ✦ Harmonic mean of precision and recall
• Cost Effectiveness
  ✦ Relevant in cost-sensitive scenarios
  ✦ NofB20: Number of Bugs discovered by inspecting the top 20% of lines of code
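NofB20 can be sketched as a ranking exercise. Ranking changes by predicted bugginess per line of code is a common cost-effectiveness convention and an assumption here, not necessarily the paper's exact ranking:

```python
# Sketch of NofB20: rank changes by predicted bugginess density, then count
# real bugs found while inspecting the top 20% of total LOC. Toy data.

def nofb20(changes, budget=0.20):
    """changes: list of (predicted_bugginess, loc, is_buggy)."""
    total_loc = sum(loc for _, loc, _ in changes)
    ranked = sorted(changes, key=lambda c: c[0] / c[1], reverse=True)
    inspected, bugs = 0, 0
    for _, loc, is_buggy in ranked:
        if inspected + loc > budget * total_loc:
            break  # next change would exceed the 20% LOC budget
        inspected += loc
        bugs += is_buggy
    return bugs

changes = [(0.9, 10, True), (0.8, 500, True),
           (0.3, 20, False), (0.2, 470, True)]
print(nofb20(changes))  # 1: only the small, highly-ranked changes fit in 20% LOC
```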
68. Test Subjects
Projects      Language  LOC   # of Changes
Linux kernel  C         7.3M  429K
PostgreSQL    C         289K  89K
Xorg          C         1.1M  46K
Eclipse       Java      1.5M  73K
Lucene*       Java      828K  76K
Jackrabbit*   Java      589K  61K
* With manually labelled bug report data [Herzig et al. '13]
77. Related Work
• Kim et al., Classifying software changes: Clean or buggy?, TSE '08
• Bettenburg et al., Think locally, act globally: Improving defect and effort prediction models, MSR '12
78. Conclusions & Future Work
• PCC and PCC+ improve prediction performance.
• The improvement holds across other setups.
• The personalized approach can be applied to other fields.
  ✦ Recommendation systems
  ✦ Vulnerability prediction
  ✦ Top-crash prediction