The document discusses automated boundary value testing and analysis. It defines program boundaries as places where behavior is supposed to change or actually changes. It explores using diversity to automatically test boundary cases by comparing multiple executions and observing differences. Challenges include defining appropriate diversity metrics and handling different data types and sizes.
TBar: Revisiting Template-based Automated Program Repair (Dongsun Kim)
We revisit the performance of template-based APR to build comprehensive knowledge about the effectiveness of fix patterns, and to highlight the importance of complementary steps such as fault localization or donor code retrieval. To that end, we first investigate the literature to collect, summarize and label recurrently-used fix patterns. Based on the investigation, we build TBar, a straightforward APR tool that systematically attempts to apply these fix patterns to program bugs. We thoroughly evaluate TBar on the Defects4J benchmark. In particular, we assess the actual qualitative and quantitative diversity of fix patterns, as well as their effectiveness in yielding plausible or correct patches. Eventually, we find that, assuming a perfect fault localization, TBar correctly/plausibly fixes 74/101 bugs. Replicating a standard and practical pipeline of APR assessment, we demonstrate that TBar correctly fixes 43 bugs from Defects4J, an unprecedented performance in the literature (including all approaches, i.e., template-based, stochastic mutation-based or synthesis-based APR).
Mining Fix Patterns for FindBugs Violations (Dongsun Kim)
Several static analysis tools, such as Splint or FindBugs, have been proposed to the software development community to help detect security vulnerabilities or bad programming practices. However, the adoption of these tools is hindered by their high false positive rates. If the false positive rate is too high, developers may become acclimated to violation reports from these tools, causing concrete and severe bugs to be overlooked. Fortunately, some violations are actually addressed and resolved by developers. We claim that those violations that are recurrently fixed are likely to be true positives, and that an automated approach can learn to repair similar unseen violations. However, there is a lack of a systematic way to investigate the distributions of existing and fixed violations in the wild, which could provide insights into prioritizing violations for developers, and of an effective way to mine code and fix patterns that can help developers easily understand the reasons leading to violations and how to fix them.
In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread, and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in Defects4J, a major benchmark for software testing and automated repair.
Bug fixing is a time-consuming and tedious task. To reduce the manual effort in bug fixing, researchers have presented automated approaches to software repair. Unfortunately, recent studies have shown that state-of-the-art techniques in automated repair tend to generate patches for only a small number of bugs, and even those patches may have quality issues (e.g., incorrect behavior and nonsensical changes). To improve automated program repair (APR) techniques, the community should deepen its knowledge of repair actions from real-world patches, since most techniques rely on patches written by human developers. Previous investigations of real-world patches are limited to the statement level, which is not sufficiently fine-grained to build this knowledge. In this work, we contribute to building this knowledge via a systematic and fine-grained study of 16,450 bug fix commits from seven Java open-source projects. We find that there are opportunities for APR techniques to improve their effectiveness by looking at code elements that have not yet been investigated. We also discuss nine insights into tuning automated repair tools. For example, a small number of statement and expression types are recurrently impacted by real-world patches, and expression-level granularity could reduce the search space for finding fix ingredients, an avenue previous studies never explored.
Learning to Spot and Refactor Inconsistent Method Names (Dongsun Kim)
To ensure code readability and facilitate software maintenance, program methods must be named properly. In particular, method names must be consistent with the corresponding method implementations. Debugging method names remains an important topic in the literature, where various approaches analyze commonalities among method names in a large dataset to detect inconsistent method names and suggest better ones. We note that the state-of-the-art does not analyze the implemented code itself to assess consistency. We thus propose a novel automated approach to debugging method names based on the analysis of consistency between method names and method code. The approach leverages deep feature representation techniques adapted to the nature of each artifact. Experimental results on over 2.1 million Java methods show that we can achieve up to 15 percentage points improvement over the state-of-the-art, establishing a record performance of 67.9% F1-measure in identifying inconsistent method names. We further demonstrate that our approach yields up to 25% accuracy in suggesting full names, while the state-of-the-art lags far behind at 1.1% accuracy. Finally, we report on our success in fixing 66 inconsistent method names in a live study on projects in the wild.
iFixR: Bug Report Driven Program Repair (Dongsun Kim)
Issue tracking systems are commonly used in modern software development for collecting feedback from users and developers. An ultimate automation target of software maintenance is then the systematization of patch generation for user-reported bugs. Although this ambition is aligned with the momentum of automated program repair, the literature has, so far, mostly focused on generate-and-validate setups where fault localization and patch generation are driven by a well-defined test suite. On the one hand, however, the common (yet strong) assumption on the existence of relevant test cases does not hold in practice for most development settings: many bugs are reported without the available test suite being able to reveal them. On the other hand, for many projects, the number of bug reports generally outstrips the resources available to triage them. Towards increasing the adoption of patch generation tools by practitioners, we investigate a new repair pipeline, iFixR, driven by bug reports: (1) bug reports are fed to an IR-based fault localizer; (2) patches are generated from fix patterns and validated via regression testing; (3) a prioritized list of generated patches is proposed to developers. We evaluate iFixR on the Defects4J dataset, which we enriched (i.e., faults are linked to bug reports) and carefully reorganized (i.e., the timeline of test cases is naturally split). iFixR generates genuine/plausible patches for 21/44 Defects4J faults with its IR-based fault localizer. iFixR accurately places a genuine/plausible patch among its top-5 recommendations for 8/13 of these faults (without using future test cases in generation-and-validation).
Impact of Tool Support in Patch Construction (Dongsun Kim)
Anil Koyuncu, Tegawendé F. Bissyandé, Dongsun Kim, Jacques Klein, Martin Monperrus, and Yves Le Traon, “Impact of Tool Support in Patch Construction,” in Proceedings of the 26th International Symposium on Software Testing and Analysis (ISSTA 2017), Santa Barbara, California, United States, July 10-14, 2017.
Presentation by Céline Deknop of the paper "Advanced Differencing of Legacy Code and Migration Logs" @SATToSE2020 (Virtual event).
A recording of the presentation can be found here: https://www.youtube.com/watch?v=YJxPzWqW9DI (around the 3h mark)
Production model lifecycle management 2016-09 (Greg Makowski)
This talk covers the various stages of building data mining models, putting them into production, and eventually replacing them. A common theme throughout is three attributes of predictive models: accuracy, generalization, and description. I assert you can have it all, and that having all three is important for managing the lifecycle. A subtle point is that this is a step toward developing embedded, automated data mining systems that can figure out for themselves when they need to be updated.
Leaping over the Boundaries of Boundary Value Analysis (TechWell)
Many books, articles, classes, and conference presentations tout equivalence class partitioning and boundary value analysis as core testing techniques. Yet many discussions of these techniques are shallow and oversimplified. Testers learn to identify classes based on little more than hopes, rumors, and unwarranted assumptions, while the "analysis" consists of little more than adding or subtracting one to a given number. Do you want to limit yourself to checking the product's behavior at boundaries? Or would you rather test the product to discover that the boundaries aren't where you thought they were, and that the equivalence classes aren't as equivalent as you've been told? Join Michael Bolton as he jumps over the partitions and leaps across the boundaries to reveal a topic far richer than you might have anticipated and far more complex than the simplifications that appear in traditional testing literature and folklore.
In this paper, we develop a vision of software evolution based on a feature-oriented perspective. From the fact that features provide a common ground to all stakeholders, we derive a hypothesis that changes can be effectively managed in a feature-oriented manner. Assuming that the hypothesis holds, we argue that feature-oriented software evolution relying on automatic traceability, analyses, and recommendations reduces existing challenges in understanding and managing evolution. We illustrate these ideas using an automotive example and raise research questions for the community.
It Does What You Say, Not What You Mean: Lessons From A Decade of Program Repair (Claire Le Goues)
In this talk we present lessons learned, good ideas, and thoughts on the future, with an eye toward informing junior researchers about the realities and opportunities of a long-running project. We highlight some notions from the original paper that stood the test of time, some that were not as prescient, and some that became more relevant as industrial practice advanced. We place the work in context, highlighting perceptions from software engineering and evolutionary computing, then and now, of how program repair could possibly work. We discuss the importance of measurable benchmarks and reproducible research in bringing scientists together and advancing the area. We give our thoughts on the role of quality requirements and properties in program repair. From testing to metrics to scalability to human factors to technology transfer, software repair touches many aspects of software engineering, and we hope a behind-the-scenes exploration of some of our struggles and successes may benefit researchers pursuing new projects.
How much do we know about Object-Oriented Programming? (Sandro Mancuso)
This talk goes through many of the Object-Oriented Programming principles and characteristics that all developers should keep in mind while writing code.
Applying Anti-Reversing Techniques to Java Bytecode (Teodoro Cipresso)
CS266 Software Reverse Engineering (SRE): Applying Anti-Reversing Techniques to Java Bytecode
Teodoro (Ted) Cipresso, teodoro.cipresso@sjsu.edu
Department of Computer Science
San José State University
Spring 2015
This paper advances the Domain Segmentation based on Uncertainty in the Surrogate (DSUS) framework, a novel approach to characterizing the uncertainty in surrogates. The leave-one-out cross-validation technique is adopted in the DSUS framework to measure local errors of a surrogate. A method is proposed in this paper to evaluate the performance of the leave-one-out cross-validation errors as local error measures. This method evaluates local errors by comparing: (i) the leave-one-out cross-validation error with (ii) the actual local error estimated within a local hypercube for each training point. The comparison results show that the leave-one-out cross-validation strategy can capture the local errors of a surrogate. The DSUS framework is then applied to key aspects of wind resource assessment and wind farm cost modeling. The uncertainties in the wind farm cost and the wind power potential are successfully characterized, which gives designers and users more confidence when using these models.
Smart TV Buyer Insights Survey 2024 (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
7. 7/77
Boundary Value Analysis/Testing
Why? Find bugs, anomalies, deviation from intent
What are program boundaries?
Take 1: where behavior is supposed to change (specification centric)
8. 8/77
Boundary Value Analysis/Testing
Why? Find bugs, anomalies, deviation from intent
What are program boundaries?
Take 1: where behavior is supposed to change (specification centric)
Take 2: where behavior changes (function centric)
9. 9/77
Boundary Value Analysis/Testing
Why? Find bugs, anomalies, deviation from intent
What are program boundaries?
Take 1: where behavior is supposed to change (specification centric)
Take 2: where behavior changes (function centric)
Tackle Take 1: Explicit specification, check edge cases. Requires manual work.
10. 10/77
Boundary Value Analysis/Testing
Why? Find bugs, anomalies, deviation from intent
What are program boundaries?
Take 1: where behavior is supposed to change (specification centric)
Take 2: where behavior changes (function centric)
Tackle Take 1: Explicit specification, check edge cases. Requires manual work.
Tackle Take 2: Explore program boundaries, extract actual edge cases. Can be automated.
11. 11/77
Boundary Value Analysis/Testing
Why? Find bugs, anomalies, deviation from intent
What are program boundaries?
Take 1: where behavior is supposed to change (specification centric)
Take 2: where behavior changes (function centric)
Tackle Take 1: Explicit specification, check edge cases. Requires manual work.
Tackle Take 2: Explore program boundaries, extract actual edge cases. Can be automated.
BVA vs. BVT?
12. 12/77
Boundary Value Analysis/Testing
Why? Find bugs, anomalies, deviation from intent
What are program boundaries?
Take 1: where behavior is supposed to change (specification centric)
Take 2: where behavior changes (function centric)
Tackle Take 1: Explicit specification, check edge cases. Requires manual work.
Can be used in combination.
Tackle Take 2: Explore program boundaries, extract actual edge cases. Can be automated.
BVA vs. BVT?
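To make the two takes concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (the shipping_fee function, its values, and the deliberate off-by-one bug are not from the talk): the spec-centric take probes around the documented boundary, while the function-centric take scans for where behavior actually changes.

```python
# Hypothetical SUT: the spec says shipping is free for totals >= 100,
# but the implementation (deliberately buggy here) frees it from 99.
def shipping_fee(order_total: int) -> int:
    return 0 if order_total >= 99 else 10  # bug: spec says >= 100

# Take 1 (specification centric): check edge cases around the documented boundary.
SPEC_BOUNDARY = 100
for x in (SPEC_BOUNDARY - 1, SPEC_BOUNDARY, SPEC_BOUNDARY + 1):
    print(f"spec-centric probe {x}: fee={shipping_fee(x)}")  # 99 -> fee 0, exposing the bug

# Take 2 (function centric): scan inputs and report where behavior actually changes.
actual_boundaries = [x for x in range(0, 200) if shipping_fee(x) != shipping_fee(x + 1)]
print("behavior changes after:", actual_boundaries)  # -> [98]; the spec implies 99
```

Take 2 needs no prior knowledge of where the boundary should be, which is why it lends itself to automation; comparing the discovered boundary against the specified one is then the analysis step.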
38. 38/77
Foundation: Diversity
Challenge: Describe relation between inputs and outputs for arbitrary data types.
Kolmogorov Complexity has a potential
Applicable for all data types
Automated BVT/BVA?
[Diagram: inputs x1 and x2 are fed to the SUT, yielding outputs y1 and y2; a "?" marks the distance/relation to be measured between them]
39. 39/77
Foundation: Diversity
Challenge: Describe relation between inputs and outputs for arbitrary data types.
Kolmogorov Complexity has a potential
Applicable for all data types
“Compression trick” makes it practical
CC ~ KC
Automated BVT/BVA?
[Diagram: inputs x1 and x2 are fed to the SUT, yielding outputs y1 and y2; a "?" marks the distance/relation to be measured between them]
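For reference, the "compression trick" has a standard formulation in the literature (Li and Vitányi), which the slide's CC ~ KC shorthand appears to refer to: the Normalized Information Distance (named on the next slide) is defined via the uncomputable Kolmogorov complexity K, and the trick replaces K with the length C produced by a real compressor, yielding the computable Normalized Compression Distance:

```latex
\[
\mathrm{NID}(x,y) = \frac{\max\{K(x \mid y),\, K(y \mid x)\}}{\max\{K(x),\, K(y)\}}
\qquad
\mathrm{NCD}(x,y) = \frac{C(xy) - \min\{C(x),\, C(y)\}}{\max\{C(x),\, C(y)\}}
\]
```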
40. 40/77
Foundation: Diversity
Challenge: Describe relation between inputs and outputs for arbitrary data types.
Kolmogorov Complexity has a potential
Applicable for all data types
“Compression trick” makes it practical
CC ~ KC
Normalized Information Distance (NID)
Automated BVT/BVA?
[Diagram: inputs x1 and x2 are fed to the SUT, yielding outputs y1 and y2; a "?" marks the distance/relation to be measured between them]
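A minimal executable sketch of the trick, assuming zlib as the compressor (the talk does not say which compressor is used):

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length, standing in for Kolmogorov complexity K."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar data, near 1 for unrelated data."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

print(ncd(b"aaaaaaaaaaaaaaaa", b"aaaaaaaaaaaaaaab"))  # small: near-duplicates
print(ncd(b"aaaaaaaaaaaaaaaa", b"q8#kd0!zpw5mrt2v"))  # larger: dissimilar
```

Because everything is serialized to bytes before compression, the same distance works for inputs and outputs alike, whatever their data types; that is what makes the approach applicable to arbitrary data, as the slide highlights.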
56. 56/77
Input Mutation
How do I get bmin then?
May require exploring the SUT’s behavior on a number of “close values”
e.g. via Search-based Software Engineering with Mutators
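A sketch of what that exploration could look like, as a toy search-based loop (all names, the integer mutators, the fitness, and the hidden boundary are hypothetical, not from the talk): mutation generates "close values", and the search keeps the closest pair of inputs whose outputs differ, homing in on the boundary.

```python
import random

def sut(x: int) -> str:
    """Hypothetical system under test with a hidden behavioral boundary at 42."""
    return "LOW" if x < 42 else "HIGH"

# Simple integer mutators producing "close values"; real tooling would need
# type-aware mutators for strings, lists, structured inputs, etc.
MUTATORS = [lambda x: x + 1, lambda x: x - 1, lambda x: x + 10, lambda x: x - 10]

def output_distance(a: str, b: str) -> int:
    # Stand-in for a diversity measure such as NCD over serialized outputs.
    return 0 if a == b else 1

def find_boundary_pair(seed: int, budget: int = 2000):
    """Search for the closest input pair (a, b) with differing behavior."""
    best = None
    pool = [seed]
    for _ in range(budget):
        a = random.choice(pool)
        b = random.choice(MUTATORS)(a)
        pool.append(b)
        if output_distance(sut(a), sut(b)) > 0:
            lo, hi = min(a, b), max(a, b)
            if best is None or hi - lo < best[1] - best[0]:
                best = (lo, hi)
    # Once the pair is adjacent, best[1] plays the role of bmin:
    # the smallest input past the boundary.
    return best

print(find_boundary_pair(seed=0))  # typically converges to (41, 42)
```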
68. 68/77
This is where we are...
● Not all diversity measures may be practical
● deterministic vs stochastic
69. 69/77
This is where we are...
● Not all diversity measures may be practical
● deterministic vs stochastic
● Discrimination on boundary
70. 70/77
This is where we are...
● Not all diversity measures may be practical
● deterministic vs stochastic
● Discrimination on boundary
● single metric may not discriminate well
● output diversity upper hand (?)
71. 71/77
This is where we are...
● Not all diversity measures may be practical
● deterministic vs stochastic
● Discrimination on boundary
● single metric may not discriminate well
● output diversity upper hand (?)
● distance metric has an impact, NCD possibly too simplistic (to be continued...)
77. 77/77
TestVikings
“We come over land and sea to break your code”
Find us at: https://testvikings.github.io/
I don’t tweet but post on LinkedIn (Felix Dobslaw)