Griffin is a technique that aims to improve the understanding of concurrency bugs by grouping suspicious memory access patterns from failing tests. It first performs fault localization to generate ranked lists of memory access patterns, then clusters related tests based on the similarity of those patterns. Finally, it reconstructs bugs by clustering patterns by call-stack similarity, identifying suspicious methods, and constructing a bug graph.
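Griffin's final clustering step can be pictured with a toy sketch. The pattern format, the Jaccard similarity over stack frames, and the greedy threshold clustering below are illustrative assumptions, not Griffin's actual algorithm:

```python
def jaccard(stack_a, stack_b):
    """Similarity of two call stacks, viewed as sets of frames."""
    a, b = set(stack_a), set(stack_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_patterns(patterns, threshold=0.5):
    """Greedy single-pass clustering: a pattern joins the first cluster
    whose representative stack is similar enough, else starts a new one."""
    clusters = []
    for name, stack in patterns:
        for cluster in clusters:
            if jaccard(stack, cluster[0][1]) >= threshold:
                cluster.append((name, stack))
                break
        else:
            clusters.append([(name, stack)])
    return clusters

# Hypothetical suspicious patterns from failing concurrency tests.
patterns = [
    ("W-R race on counter", ["main", "Worker.run", "Counter.inc"]),
    ("R-W race on counter", ["main", "Worker.run", "Counter.get"]),
    ("atomicity violation", ["main", "Logger.flush", "Buffer.write"]),
]
groups = cluster_patterns(patterns)
```

The two counter races share most of their stack and land in one cluster; the unrelated atomicity violation starts its own.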
LSRepair: Live Search of Fix Ingredients for Automated Program Repair (Dongsun Kim)
Automated program repair (APR) has been extensively developed by leveraging search-based techniques, in which fix ingredients are explored and identified at different granularities within a specific search space. State-of-the-art approaches often find fix ingredients by using mutation operators or leveraging manually-crafted templates. We argue that fix ingredients can be searched in an online mode, leveraging code search techniques to find potentially-fixed versions of buggy code fragments from which repair actions can be extracted. In this study, we present an APR tool, LSRepair, that automatically explores code repositories to search for fix ingredients at the method-level granularity with three strategies of similar code search. Our preliminary evaluation shows that code search can drive a faster fix process (some bugs are fixed in a few seconds). LSRepair successfully repairs 19 bugs from the Defects4J benchmark. We expect our approach to open new directions for fixing multi-line bugs.
Keynote given at the Asia Pacific Software Engineering Conference (APSEC), December 2020, on Automated Program Repair technologies and their applications.
You Cannot Fix What You Cannot Find! --- An Investigation of Fault Localizati... (Dongsun Kim)
Properly benchmarking Automated Program Repair (APR) systems should contribute to the development and adoption of the research outputs by practitioners. To that end, the research community must ensure that it reaches significant milestones by reliably comparing state-of-the-art tools for a better understanding of their strengths and weaknesses. In this work, we identify and investigate a practical bias caused by the fault localization (FL) step in a repair pipeline. We propose to highlight the different fault localization configurations used in the literature, and their impact on APR systems when applied to the Defects4J benchmark. Then, we explore the performance variations that can be achieved by “tweaking” the FL step. Eventually, we expect to create a new momentum for (1) full disclosure of APR experimental procedures with respect to FL, (2) realistic expectations of repairing bugs in Defects4J, as well as (3) reliable performance comparison among the state-of-the-art APR systems, and against the baseline performance results of our thoroughly assessed kPAR repair tool. Our main findings include: (a) only a subset of Defects4J bugs can be currently localized by commonly-used FL techniques; (b) current practice of comparing state-of-the-art APR systems (i.e., counting the number of fixed bugs) is potentially misleading due to the bias of FL configurations; and (c) APR authors do not properly qualify their performance achievement with respect to the different tuning parameters implemented in APR systems.
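A widely used spectrum-based formula in such FL configurations is Ochiai, which ranks program elements by how strongly their coverage correlates with failing tests. A minimal sketch (the coverage counts below are made up for illustration):

```python
import math

def ochiai_ranking(failed_cov, passed_cov, total_failed):
    """Rank program elements by Ochiai suspiciousness:
    ef / sqrt(total_failed * (ef + ep)), where ef/ep count the
    failing/passing tests that cover the element."""
    scores = {}
    for elem in set(failed_cov) | set(passed_cov):
        ef = failed_cov.get(elem, 0)
        ep = passed_cov.get(elem, 0)
        denom = math.sqrt(total_failed * (ef + ep))
        scores[elem] = ef / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Made-up coverage: line_10 is covered by both failing tests and no
# passing test, so it should rank first.
ranking = ochiai_ranking(
    failed_cov={"line_10": 2, "line_12": 1},
    passed_cov={"line_12": 5, "line_20": 4},
    total_failed=2,
)
```

Tools like kPAR consume such a ranked list; the paper's point is that the exact FL configuration producing it materially changes APR results.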
Automated Program Repair, Distinguished Lecture at MPI-SWS (Abhik Roychoudhury)
MPI-SWS Distinguished Lecture 2019. The talk focuses on fuzzing, symbolic execution as background technologies and compares their relative power. Then the use of such technologies for automated program repair is investigated.
Impact of Tool Support in Patch Construction (Dongsun Kim)
Anil Koyuncu, Tegawendé F. Bissyandé, Dongsun Kim, Jacques Klein, Martin Monperrus, and Yves Le Traon, “Impact of Tool Support in Patch Construction,” in Proceedings of the 26th International Symposium on Software Testing and Analysis (ISSTA 2017), Santa Barbara, California, United States, July 10-14, 2017.
TBar: Revisiting Template-based Automated Program Repair (Dongsun Kim)
We revisit the performance of template-based APR to build comprehensive knowledge about the effectiveness of fix patterns, and to highlight the importance of complementary steps such as fault localization or donor code retrieval. To that end, we first investigate the literature to collect, summarize and label recurrently-used fix patterns. Based on the investigation, we build TBar, a straightforward APR tool that systematically attempts to apply these fix patterns to program bugs. We thoroughly evaluate TBar on the Defects4J benchmark. In particular, we assess the actual qualitative and quantitative diversity of fix patterns, as well as their effectiveness in yielding plausible or correct patches. Eventually, we find that, assuming a perfect fault localization, TBar correctly/plausibly fixes 74/101 bugs. Replicating a standard and practical pipeline of APR assessment, we demonstrate that TBar correctly fixes 43 bugs from Defects4J, an unprecedented performance in the literature (including all approaches, i.e., template-based, stochastic mutation-based or synthesis-based APR).
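The generate-and-validate loop at the heart of a template-based repair tool can be sketched as follows; the pattern representation, location format, and test oracle here are simplified stand-ins, not TBar's implementation:

```python
def repair(ranked_locations, fix_patterns, tests_pass):
    """Try each fix pattern at each suspicious location, most suspicious
    first; return the first candidate that passes the test suite."""
    for loc, code in ranked_locations:        # ranked by fault localization
        for matches, rewrite in fix_patterns: # pattern = (guard, transform)
            if matches(code):
                candidate = rewrite(code)
                if tests_pass(loc, candidate):
                    return loc, candidate     # a plausible patch
    return None

# Toy "insert null check" pattern and a stand-in test oracle.
null_check = (
    lambda c: ".length" in c,
    lambda c: "if (s != null) { " + c + " }",
)
patch = repair(
    [("Foo.java:42", "int n = s.length();")],
    [null_check],
    tests_pass=lambda loc, cand: "null" in cand,
)
```

Real tools differ in the pattern catalog, the matching machinery (ASTs rather than strings), and how patches are validated, but the nested search loop is the common skeleton.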
This presentation describes the results of the following paper, published in the journal INFORMATION AND SOFTWARE TECHNOLOGY.
TITLE: A Large Scale Empirical Comparison of State-of-the-art Search-based Test Case Generators
AUTHORS: Annibale Panichella, Fitsum Kifetew, Paolo Tonella
ABSTRACT: Context: Replication studies and experiments form an important foundation in advancing scientific research. While their prevalence in Software Engineering is increasing, there is still more to be done. Objective: This article aims to extend our previous replication study on search-based test generation techniques by performing a large-scale empirical comparison with further techniques from the state of the art. Method: We designed a comprehensive experimental study involving six techniques, a benchmark composed of 180 non-trivial Java classes, and a total of 21,600 independent executions. Metrics regarding the effectiveness and efficiency of the techniques were collected and analyzed by means of statistical methods. Results: Our empirical study shows that single-target approaches are generally outperformed by multi-target approaches, while within the multi-target approaches, DynaMOSA/MOSA, which are based on many-objective optimization, outperform the others, in particular for complex classes. Conclusion: The results obtained from our large-scale empirical investigation confirm what has been reported in previous studies, while also highlighting striking differences and novel observations. Future studies, on different benchmarks and considering additional techniques, could further reinforce and extend our findings.
Introductory talk given to PhD students starting research at NUS PhD open day 2020. Covers research in Computer Science, and some experience in research on trustworthy software systems.
Bug fixing is a time-consuming and tedious task. To reduce the manual effort in bug fixing, researchers have presented automated approaches to software repair. Unfortunately, recent studies have shown that the state-of-the-art techniques in automated repair tend to generate patches for only a small number of bugs, and even those often suffer quality issues (e.g., incorrect behavior and nonsensical changes). To improve automated program repair (APR) techniques, the community should deepen its knowledge of repair actions from real-world patches, since most of the techniques rely on patches written by human developers. Previous investigations of real-world patches are limited to the statement level, which is not sufficiently fine-grained to build this knowledge. In this work, we contribute to building this knowledge via a systematic and fine-grained study of 16,450 bug fix commits from seven Java open-source projects. We find that there are opportunities for APR techniques to improve their effectiveness by looking at code elements that have not yet been investigated. We also discuss nine insights into tuning automated repair tools. For example, a small number of statement and expression types are recurrently impacted by real-world patches, and expression-level granularity could reduce the search space for finding fix ingredients, a direction previous studies have not explored.
Bench4BL: Reproducibility Study on the Performance of IR-Based Bug Localization (Dongsun Kim)
Jaekwon Lee, Dongsun Kim, Tegawendé F. Bissyandé, Woosung Jung and Yves Le Traon, “Bench4BL: Reproducibility Study on the Performance of IR-Based Bug Localization”, in Proceedings of the 27th International Symposium on Software Testing and Analysis (ISSTA 2018), Amsterdam, Netherlands, July 16 – 21, 2018.
Mining Fix Patterns for FindBugs Violations (Dongsun Kim)
Several static analysis tools, such as Splint or FindBugs, have been proposed to the software development community to help detect security vulnerabilities or bad programming practices. However, the adoption of these tools is hindered by their high false positive rates. If the false positive rate is too high, developers may become acclimated to violation reports from these tools, causing concrete and severe bugs to be overlooked. Fortunately, some violations are actually addressed and resolved by developers. We claim that violations that are recurrently fixed are likely to be true positives, and that an automated approach can learn to repair similar unseen violations. However, there is a lack of a systematic way to investigate the distributions of detected and fixed violations in the wild, which could provide insights into prioritizing violations for developers, and an effective way to mine code and fix patterns, which could help developers easily understand the causes of violations and how to fix them.
In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in the Defects4J benchmark for software testing and automated repair.
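The paper learns features with a convolutional neural network before clustering; as a deliberately simplified stand-in, the intuition of grouping recurrent fixes can be shown by abstracting identifiers and counting change signatures (corpus and normalization are toy assumptions):

```python
import re
from collections import Counter

def normalize(code):
    """Abstract identifiers so structurally similar fixes collide."""
    return re.sub(r"\b[A-Za-z_]\w*\b", "ID", code)

def signature(before, after):
    """A fix pattern candidate: the normalized (buggy, fixed) pair."""
    return (normalize(before), normalize(after))

# Toy corpus of (buggy line, fixed line) pairs.
corpus = [
    ("x == y",   "x.equals(y)"),
    ("a == b",   "a.equals(b)"),
    ("s.length", "s.size()"),
]
patterns = Counter(signature(b, a) for b, a in corpus)
top_pattern, count = patterns.most_common(1)[0]
```

The two `==`-to-`equals` fixes collapse to one recurring signature, which is the kind of regularity the learned features are meant to capture at scale.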
Learn from David Greenberg about hitchhiker trees, a new functional, immutable, persistent variation of the fractal tree. In these slides, we'll learn how to understand immutable data structures and a variety of trees, introducing new concepts as we build up to the hitchhiker tree.
Python Data Wrangling: Preparing for the Future (Wes McKinney)
Given at PyCon HK on October 29, 2016. About open source work in progress to advance the Python pandas project internals and leverage synergies with other efforts in OSS data technology
Improving Python and Spark (PySpark) Performance and Interoperability (Wes McKinney)
Slides from Spark Summit East 2017 — February 9, 2017 in Boston. Discusses ongoing development work to accelerate Python-on-Spark performance using Apache Arrow and other tools
Mesos: The Operating System for your Datacenter (David Greenberg)
Maybe you’ve heard of Mesos—that thing that you can run Hadoop on. I think it powers Twitter? Isn’t it an Apache project, or something?
In this talk, we’ll learn all about Mesos—what it is, how you can leverage it to simplify your infrastructure and reduce AWS/cloud computing costs, and why you should develop your next application on top of it. This talk will give you the tools you need to understand whether Mesos is the right fit for your infrastructure, and several starting points for learning more about Mesos.
Large-Scale Geographically Weighted Regression on Spark (Viet-Trung TRAN)
Geographically Weighted Regression (GWR) is a local version of spatial regression that captures spatial dependency in regression analysis. GWR has many practical applications as a visualization and prediction tool for spatial exploration (e.g., in climate, economics, and medicine). However, this local regression model becomes slow as the volume of calculations and the spatial data grow. Improving the performance of GWR is a critical issue, but distributed implementations have not been studied. Recently, with the advent of Spark and the MapReduce framework, developing machine learning applications and parallel programs has become easier. In this article, we propose several large-scale implementations of distributed GWR, leveraging the Spark framework. We implemented and evaluated these approaches with large datasets. To the best of our knowledge, this is the first work addressing GWR at large scale.
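The essence of GWR is a separate weighted least-squares fit at each target location, with weights from a spatial kernel. A pure-Python, one-dimensional sketch of a single local fit (not the paper's Spark implementation; the data, kernel, and bandwidth are illustrative):

```python
import math

def gwr_coef(points, target, bandwidth=1.0):
    """Weighted least-squares fit of y ~ x at one target location,
    with Gaussian kernel weights on (1-D) spatial distance.
    points: list of (location, x, y)."""
    w = [math.exp(-((loc - target) ** 2) / (2 * bandwidth ** 2))
         for loc, _, _ in points]
    sw = sum(w)
    xbar = sum(wi * x for wi, (_, x, _) in zip(w, points)) / sw
    ybar = sum(wi * y for wi, (_, _, y) in zip(w, points)) / sw
    num = sum(wi * (x - xbar) * (y - ybar) for wi, (_, x, y) in zip(w, points))
    den = sum(wi * (x - xbar) ** 2 for wi, (_, x, _) in zip(w, points))
    slope = num / den
    return ybar - slope * xbar, slope  # local intercept, local slope

# x and y are positively related near location 0 and negatively near 5,
# so the locally fitted slope changes sign with the target location.
data = [(0.0, 1, 1.0), (0.5, 2, 2.0), (5.0, 1, 10.0), (5.5, 2, 8.0)]
_, b1_left = gwr_coef(data, target=0.25)
_, b1_right = gwr_coef(data, target=5.25)
```

Distributing GWR then amounts to spreading these independent per-location fits across a cluster, which is what makes Spark a natural fit.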
FAST Approaches to Scalable Similarity-based Test Case Prioritization (brenoafmiranda)
Many test case prioritization criteria have been proposed for speeding up fault detection. Among them, similarity-based approaches give priority to the test cases that are most dissimilar from those already selected. However, the proposed criteria do not scale up to the test suites of many thousands or even millions of test cases found in modern industrial systems, so simple heuristics are used instead. We introduce the FAST family of test case prioritization techniques that radically changes this landscape by borrowing algorithms commonly exploited in the big data domain to find similar items. FAST techniques provide scalable similarity-based test case prioritization in both white-box and black-box fashion. The results from experimentation on real-world C and Java subjects show that the fastest members of the family outperform other black-box approaches in efficiency with no significant impact on effectiveness, and also outperform white-box approaches, including greedy ones, if preparation time is not counted. A simulation study of scalability shows that one FAST technique can prioritize a million test cases in less than 20 minutes.
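FAST's contribution is making this scalable with minhashing and locality-sensitive hashing; the quadratic baseline it accelerates, plain "pick the most dissimilar test next", can be sketched directly (the feature sets are toy stand-ins for covered lines or token shingles):

```python
def jaccard(a, b):
    """Jaccard similarity of two feature sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def prioritize(suite):
    """Greedy 'most dissimilar first' ordering: repeatedly pick the test
    whose maximum similarity to the already-chosen tests is smallest.
    suite: dict test_name -> set of features."""
    remaining = dict(suite)
    order = [max(remaining, key=lambda t: len(remaining[t]))]  # seed: largest test
    del remaining[order[0]]
    while remaining:
        pick = min(
            remaining,
            key=lambda t: max(jaccard(suite[t], suite[s]) for s in order),
        )
        order.append(pick)
        del remaining[pick]
    return order

suite = {
    "t1": {"a", "b", "c"},
    "t2": {"a", "b"},   # similar to t1
    "t3": {"x", "y"},   # dissimilar to both
}
order = prioritize(suite)
```

Each pick scans all remaining tests against all chosen ones, which is exactly the cost that minhash signatures and LSH buckets let FAST avoid.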
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course will cover the basic principles of deep learning from both algorithmic and computational perspectives.
Controlled dropout: a different dropout for improving training speed on deep ... (Byung Soo Ko)
"Controlled Dropout" is a dropout variant aimed at improving training speed in deep neural networks. The basic idea and algorithm of controlled dropout are based on the paper "Controlled Dropout: a Different Dropout for Improving Training Speed on Deep Neural Network", presented at the IEEE International Conference on Systems, Man, and Cybernetics (SMC) 2017.
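As I understand the idea, controlled dropout drops the same units for every example in a mini-batch, so the surviving activations form a smaller dense matrix that is cheaper to multiply. A loudly simplified, stdlib-only sketch (layer size, drop rate, and the inverted-dropout scaling are illustrative, not the paper's exact algorithm):

```python
import random

def controlled_dropout_mask(n_units, drop_rate, rng):
    """One mask per mini-batch: the same units are dropped for every
    example, so survivors form a smaller dense sub-matrix."""
    return [i for i in range(n_units) if rng.random() >= drop_rate]

def apply_dropout(batch, keep, drop_rate):
    """Inverted dropout: scale survivors by 1/(1-p) so expected
    activations are unchanged at inference time."""
    scale = 1.0 / (1.0 - drop_rate)
    return [[row[i] * scale for i in keep] for row in batch]

rng = random.Random(0)
batch = [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0]]
keep = controlled_dropout_mask(4, 0.5, rng)  # same units for the whole batch
reduced = apply_dropout(batch, keep, 0.5)
```

With conventional dropout each example gets its own mask, producing scattered zeros; sharing one mask per batch keeps the remaining computation dense, which is where the training speedup comes from.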
Slides of the talk at http://www.meetup.com/R-Users-Sydney/events/223867196/
There is a web version here: http://wush978.github.io/FeatureHashing/index.html
Using AI Planning to Automate the Performance Analysis of Simulators (Roland Ewald)
Analyzing simulation algorithm performance is cumbersome: execute some runs, observe a performance metric, and analyze the results. Often, the results motivate follow-up experiments, which in turn may lead to additional experiments, and so on. This time-consuming and error-prone process can be automated with planning approaches from artificial intelligence, making simulator performance analysis more convenient and rigorous. This paper introduces ALeSiA, a prototypical system for automatic simulator performance analysis. It is independent of any specific simulation system and realizes a hypothesis-driven approach to evaluate performance.
Machine learning for functional connectomes (Gael Varoquaux)
A tutorial on using machine learning for functional connectomes, for instance on resting-state fMRI. This is typically useful for population imaging: comparing traits or conditions across subjects.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis (Globus)
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
Workshop - Innovating with Generative AI and Knowledge Graphs (Neo4j)
Go beyond the AI hype and discover practical techniques for using AI responsibly with your organization's data. Explore how knowledge graphs can increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships and LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will guide you through setting up your own generative AI stack, with practical, coded examples to get started in minutes.
AI Fusion Buddy Review: Brand New, Groundbreaking Gemini-Powered AI App (Google)
AI Fusion Buddy Review: Key Features
✅Create Stunning AI App Suite Fully Powered By Google's Latest AI technology, Gemini
✅Use Gemini to build high-converting sales video scripts, ad copies, trending articles, blogs, etc. 100% unique!
✅Create Ultra-HD graphics with a single keyword or phrase that commands 10x eyeballs!
✅Fully automated AI articles bulk generation!
✅Auto-post or schedule stunning AI content across all your accounts at once—WordPress, Facebook, LinkedIn, Blogger, and more.
✅With one keyword or URL, generate complete websites, landing pages, and more…
✅Automatically create & sell AI content, graphics, websites, landing pages, & all that gets you paid non-stop 24*7.
✅Pre-built High-Converting 100+ website Templates and 2000+ graphic templates logos, banners, and thumbnail images in Trending Niches.
✅Say goodbye to wasting time logging into multiple Chat GPT & AI Apps once & for all!
✅Save over $5000 per year and kick out dependency on third parties completely!
✅Brand New App: Not available anywhere else!
✅ Beginner-friendly!
✅ZERO upfront cost or any extra expenses
✅Risk-Free: 30-Day Money-Back Guarantee!
✅Commercial License included!
#AIFusionBuddyReview,
#AIFusionBuddyFeatures,
#AIFusionBuddyPricing,
#AIFusionBuddyProsandCons,
#AIFusionBuddyTutorial,
#AIFusionBuddyUserExperience
#AIFusionBuddyforBeginners,
#AIFusionBuddyBenefits,
#AIFusionBuddyComparison,
#AIFusionBuddyInstallation,
#AIFusionBuddyRefundPolicy,
#AIFusionBuddyDemo,
#AIFusionBuddyMaintenanceFees,
#AIFusionBuddyNewbieFriendly,
#WhatIsAIFusionBuddy?,
#HowDoesAIFusionBuddyWorks
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
E-commerce Application Development Company.pdfHornet Dynamics
Your business can reach new heights with our assistance as we design solutions that are specifically appropriate for your goals and vision. Our eCommerce application solutions can digitally coordinate all retail operations processes to meet the demands of the marketplace while maintaining business continuity.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
Navigating the Metaverse: A Journey into Virtual Evolution"Donna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms."
Understanding Nidhi Software Pricing: A Quick Guide 🌟
Choosing the right software is vital for Nidhi companies to streamline operations. Our latest presentation covers Nidhi software pricing, key factors, costs, and negotiation tips.
📊 What You’ll Learn:
Key factors influencing Nidhi software price
Understanding the true cost beyond the initial price
Tips for negotiating the best deal
Affordable and customizable pricing options with Vector Nidhi Software
🔗 Learn more at: www.vectornidhisoftware.com/software-for-nidhi-company/
#NidhiSoftwarePrice #NidhiSoftware #VectorNidhi
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Large Language Models and the End of ProgrammingMatt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Top Features to Include in Your Winzo Clone App for Business Growth (4).pptxrickgrimesss22
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...
Griffin: Grouping Suspicious Memory-Access Patterns to Improve Understanding of Concurrency Bugs
1. Background Limitations Griffin Empirical Study Conclusion
Sangmin Park, Mary Jean Harrold, Richard Vuduc
Georgia Institute of Technology
2. Difficult to Debug and Fix
[Pie chart: time to debug concurrency bugs* — Hours 28%, Days 63%, Months 9%]
* P. Godefroid and N. Nagappan. Concurrency at Microsoft: An exploratory survey. (EC)2, 2008.
3. Difficult to Debug and Fix
[Pie chart: correctness of fixes of concurrency bugs* — Correct 61%, Incorrect 39%]
* Z. Yin et al. How do fixes become bugs? A comprehensive characteristic study on incorrect fixes in commercial and open source operating systems. ESEC/FSE 2011.
4. Existing Techniques
• Automatic fault localization
  • Suspicious interaction pairs [Jin 10]
  • Memory-interaction lists [Lucia 11]
  • Memory-access patterns [Park 10, Park 12]
  Limitations: report low-level memory accesses; too much spurious information
• Semi-automated fixing
  • Atomicity-violation fixes [Jin 11, Liu 12]
  • Order/atomicity-violation fixes [Jin 12]
  Limitation: requires developer input
6. Concurrency Bugs*
• Order violation: a pair of memory accesses occurs in an unintended order, leading to unintended program behavior
• Atomicity violation: a code region that should execute atomically does not, leading to unintended program behavior
* Lu et al. Learning from Mistakes: A comprehensive study on concurrency bugs. ASPLOS 2008.
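The atomicity-violation case can be made concrete with a small deterministic sketch (illustrative, not from the talk; Python stands in for the deck's Java subjects). Two simulated threads each perform a non-atomic increment, tmp = x; x = tmp + 1, and replaying a bad interleaving loses an update:

```python
def replay(interleaving):
    """Replay a fixed interleaving of two non-atomic increments on x.

    Each simulated thread does: tmp = x (read), then x = tmp + 1 (write).
    """
    x = 0
    tmp = {}
    for tid, step in interleaving:
        if step == "read":
            tmp[tid] = x
        else:  # "write"
            x = tmp[tid] + 1
    return x

# Serializable schedule: thread 1's read/write complete before thread 2's.
serial = [(1, "read"), (1, "write"), (2, "read"), (2, "write")]
# Atomicity-violating schedule: both reads happen before either write.
racy = [(1, "read"), (2, "read"), (1, "write"), (2, "write")]

print(replay(serial))  # 2: both increments take effect
print(replay(racy))    # 1: thread 2 overwrites thread 1's update
```

The racy schedule is exactly the kind of unintended interleaving the talk's fault-localization patterns are designed to capture.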
7. Concurrency Bugs: Atomicity Violation
[Diagram*: Thread 1 runs a = new Data(b), which calls a.initObject(b) and b.copyElements(a), reading b.size and b.array (k elements); Thread 2 concurrently runs b.addElement(c), updating b.size and b.array.]
* Example from Java Collection Library (Vector).
8. Concurrency Bugs: Atomicity Violation
[Same diagram*, annotated with the interleaved memory accesses to b.size and b.array: Thread 1's reads (R b.size, R b.array) interleave with Thread 2's reads and writes (R b.size, W b.size, W b.array).]
* Example from Java Collection Library (Vector).
9. Problematic Memory-Access Patterns
Patterns identified by Vaziri, Tip, Dolby. POPL 2006.

Single variable, order:
  R1,S(x) W2,S(x)
  W1,S(x) R2,S(x)
  W1,S(x) W2,S(x)
Single variable, atomicity:
  R1,S1(x) W2,S2(x) R1,S3(x)
  W1,S1(x) W2,S2(x) R1,S3(x)
  W1,S1(x) R2,S2(x) W1,S3(x)
  R1,S1(x) W2,S2(x) W1,S3(x)
  W1,S1(x) W2,S2(x) W1,S3(x)
Multiple variables, atomicity:
  W1,S1(x) W2,S2(x) W2,S3(y) W1,S4(y)
  W1,S1(x) W2,S2(y) W2,S3(x) W1,S4(y)
  W1,S1(x) W2,S2(y) W1,S3(y) W2,S4(x)
  W1,S1(x) R2,S2(x) R2,S3(y) W1,S4(y)
  W1,S1(x) R2,S2(y) R2,S3(x) W1,S4(y)
  R1,S1(x) W2,S2(x) W2,S3(y) R1,S4(y)
  R1,S1(x) W2,S2(y) W2,S3(x) R1,S4(y)
  R1,S1(x) W2,S2(y) R1,S3(y) W2,S4(x)
  W1,S1(x) R2,S2(x) W1,S3(y) R2,S4(y)

Fault-localization techniques record suspicious memory-access patterns and report them in a ranked list (e.g., [Jin 10, Lucia 11, Park 10, Park 12]).
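As a rough illustration of how the single-variable patterns can be flagged, here is a simplified sketch (not Griffin's actual detector: it only inspects three consecutive accesses to each variable, whereas real instrumentation-based detection is more general):

```python
from collections import defaultdict

# The five unserializable single-variable interleavings from the table above,
# written as op triples (first and third access by one thread, middle by another).
UNSERIALIZABLE = {"RWR", "WWR", "WRW", "RWW", "WWW"}

def find_single_variable_violations(trace):
    """trace: list of (thread, op, var) tuples with op in {"R", "W"}."""
    by_var = defaultdict(list)
    for thread, op, var in trace:
        by_var[var].append((thread, op))
    hits = []
    for var, accesses in by_var.items():
        # Slide a window of three consecutive accesses to the same variable.
        for i in range(len(accesses) - 2):
            (t1, o1), (t2, o2), (t3, o3) = accesses[i:i + 3]
            if t1 == t3 and t1 != t2 and o1 + o2 + o3 in UNSERIALIZABLE:
                hits.append((o1 + o2 + o3, var))
    return hits

# Thread 1's two reads of b.size split by Thread 2's write: the RWR pattern.
trace = [("T1", "R", "b.size"), ("T2", "W", "b.size"), ("T1", "R", "b.size")]
print(find_single_variable_violations(trace))  # [('RWR', 'b.size')]
```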
10. L1. Context Information
[Diagram*: dynamic calling contexts — Thread 1: main → Data → initObject → copyElements → getSize → …; Thread 2: main → addElement → addSize, addArray → …]
Thread 1: a = new Data(b), which calls a.initObject(b) and b.copyElements(a), e.g., a.size = b.getSize()
Thread 2: b.addElement(c), e.g., b.addSize(c.size), b.addArray(c.array)
Problem: existing techniques report only low-level memory accesses and lose context information.
* Example from Java Collection Library (Vector).
11. L2. Multiple Bugs
[Same Vector example*: Thread 1 runs a = new Data(b), a.initObject(b), b.copyElements(a); Thread 2 runs b.addElement(c); the accesses R b.array, R b.size, W b.size, W b.array interleave.]
Sample report:
  1) RWR – size
  2) RWWR – size/array
  3) mem-order 3
  4) mem-order 4
  …
Problem: existing techniques do not handle multiple concurrency bugs.
* Example from Java Collection Library (Vector).
12. L3. False-Positive Patterns
[Same Vector example* and sample report:]
  1) RWR – size
  2) RWWR – size/array
  3) mem-order 3
  4) mem-order 4
  …
Problem: existing techniques do not handle false-positive memory accesses.
* Example from Java Collection Library (Vector).
13. Our Technique: Griffin
[Pipeline: Program and Test Case → 1. Fault Localization → Ranked Lists → 2. Test Clustering → Clustered Lists → 3. Bug Reconstruction → Bug Graph, Patterns, Methods]
14. Step 1: Fault Localization
Method [Unicorn*]:
1. Collect pairs of memory accesses in multiple tests
2. Combine pairs into patterns offline
3. Rank patterns by associating them with failures
* Park, Vuduc, Harrold [ICST 2012]
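The ranking step can be sketched with a toy association score: the fraction of a pattern's occurrences that fall in failing runs. This is a deliberately simple stand-in, not Unicorn's actual suspiciousness metric, and the pattern names below are illustrative:

```python
def rank_patterns(failing, passing):
    """failing/passing: lists of pattern sets, one set per test run."""
    patterns = set().union(*failing, *passing)

    def suspiciousness(p):
        # How often the pattern shows up in failing vs. all runs that contain it.
        fail = sum(p in run for run in failing)
        ok = sum(p in run for run in passing)
        return fail / (fail + ok)

    # Highest association with failure first; ties broken alphabetically.
    return sorted(patterns, key=lambda p: (-suspiciousness(p), p))

failing = [{"RWR size", "RW size"}, {"RWR size"}]
passing = [{"RW size"}]
print(rank_patterns(failing, passing))  # ['RWR size', 'RW size']
```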
15. Step 1: Fault Localization
Generate a ranked list of patterns for each failing test (Vector example):
t1: RWR 271-851-681; RWWR 271-851-852-682; RW 271-851; RWR 250-353-252
t2: RWR 271-801-681; RWWR 271-801-802-682; RW 271-801; RWR 222-453-224
t3: RWR 271-851-681; RWWR 271-851-852-682; RWR 250-354-253; RW 271-851
t4: RWR 271-801-681; RWR 222-454-225; RW 271-801; RWWR 271-801-802-682
16. Step 2: Test Clustering
Method [fault-localization-based clustering*]:
1. Create an initial cluster for each failing test from its top p patterns
2. Merge two clusters if their similarity (Jaccard) is above a threshold th; repeat until no more clusters can be merged
* Jones, Bowring, Harrold [ISSTA 2007]
17–21. Step 2: Test Clustering (worked example)
Cluster by similarity of top patterns, with p = 4 and th = 0.6:
t1: RWR 271-851-681; RWWR 271-851-852-682; RW 271-851; RWR 250-353-252
t2: RWR 271-801-681; RWWR 271-801-802-682; RW 271-801; RWR 222-453-224
t3: RWR 271-851-681; RWWR 271-851-852-682; RWR 250-354-253; RW 271-851
t4: RWR 271-801-681; RWR 222-454-225; RW 271-801; RWWR 271-801-802-682
t1 and t3 share 3 of their 5 distinct patterns (3/5 = 0.6 ≥ th), so they merge; t2 and t4 likewise.
Result: two clusters of failing executions, {t1, t3} and {t2, t4}.
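The merge decisions in this worked example can be reproduced with a small agglomerative sketch. Computing cluster similarity as the Jaccard index over the unions of the member tests' top-p pattern sets is one reasonable reading of the slides, not necessarily the paper's exact policy:

```python
def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster_tests(top_patterns, th):
    """top_patterns: dict mapping each failing test to its top-p pattern set."""
    # One initial cluster per failing test: (member tests, union of patterns).
    clusters = [({t}, set(p)) for t, p in top_patterns.items()]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if jaccard(clusters[i][1], clusters[j][1]) >= th:
                    tests_i, pats_i = clusters[i]
                    tests_j, pats_j = clusters.pop(j)
                    clusters[i] = (tests_i | tests_j, pats_i | pats_j)
                    merged = True
                    break
            if merged:
                break
    return sorted(sorted(tests) for tests, _ in clusters)

tests = {
    "t1": {"RWR 271-851-681", "RWWR 271-851-852-682", "RW 271-851", "RWR 250-353-252"},
    "t2": {"RWR 271-801-681", "RWWR 271-801-802-682", "RW 271-801", "RWR 222-453-224"},
    "t3": {"RWR 271-851-681", "RWWR 271-851-852-682", "RWR 250-354-253", "RW 271-851"},
    "t4": {"RWR 271-801-681", "RWR 222-454-225", "RW 271-801", "RWWR 271-801-802-682"},
}
print(cluster_tests(tests, th=0.6))  # [['t1', 't3'], ['t2', 't4']]
```

With the slide's data, t1/t3 and t2/t4 each share 3 of 5 patterns (Jaccard 0.6), while cross-cluster pairs share none, yielding the two clusters shown.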
22. Step 3: Bug Reconstruction
Method:
1. Perform call-stack-based clustering to group true- and false-positive patterns (agglomerative clustering, as in Step 2)
2. Identify suspicious methods and the bug graph
* See the paper for the detailed clustering policy
26–27. Step 3: Bug Reconstruction
Cluster patterns based on call-stack similarity. The initial clusters RWR 271-851-681 and RWWR 271-851-852-682 each carry the call stacks of their accesses*:
  120 main() → 150 Data (Data c) → 270 int getSize()
  130 void run() → 850 void addAll(Data c)
  120 main() → 151 Data (Data b) → 680 void copyArray(a)
The common call stacks are the same for both clusters → merge.
* See the paper for the detailed clustering policy
28. Step 3: Bug Reconstruction
Cluster patterns based on call-stack similarity. Remaining initial clusters: RWR 271-851-681; RWWR 271-851-852-682; RW 271-851; RWR 250-353-252; RWR 250-354-253.
271-851 is part of 271-851-681 → merge RW 271-851 into that cluster.
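The "271-851 part of 271-851-681" merge rule can be sketched as a subsequence check over the patterns' access locations (a hypothetical helper illustrating the idea; the paper's clustering policy is more detailed):

```python
def subsumed(shorter, longer):
    """True if shorter's access locations appear, in order, within longer's."""
    it = iter(longer)
    # Membership tests on the iterator advance it, enforcing the ordering.
    return all(loc in it for loc in shorter)

print(subsumed(["271", "851"], ["271", "851", "681"]))  # True: merge RW into RWR
print(subsumed(["250", "353"], ["271", "851", "681"]))  # False: keep separate
```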
29–30. Step 3: Bug Reconstruction
Resulting cluster: RWR 271-851-681; RW 271-851; RWWR 271-851-852-682, with the call stacks
  120 main() → 150 Data (Data c) → 270 int getSize()
  130 void run() → 850 void addAll(Data c)
  120 main() → 151 Data (Data b) → 680 void copyArray(a)
attributed to Thread 1 and Thread 2.
31. Step 3: Bug Reconstruction
Identify suspicious methods for each cluster: a suspicious method is the method at the top of the common call stack.
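The rule above (suspicious method = method at the top of the common call stack) can be sketched as a longest-common-prefix computation over a cluster's call stacks; the frame names below are illustrative, not taken from the tool:

```python
def suspicious_method(stacks):
    """stacks: call stacks (outermost frame first) of a cluster's accesses."""
    common = []
    for frames in zip(*stacks):
        if len(set(frames)) != 1:  # stacks diverge here: common prefix ends
            break
        common.append(frames[0])
    # The deepest frame shared by all stacks is the suspicious method.
    return common[-1] if common else None

stacks = [
    ["main()", "run()", "addAll(Data c)"],
    ["main()", "run()", "addAll(Data c)", "addSize(int)"],
]
print(suspicious_method(stacks))  # addAll(Data c)
```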
32. Step 3: Bug Reconstruction
Present the bug graph to the developer:
Thread 1: 120 main() → 152 Data (Data b) → 680 void copyArray(a): 681 a.size = b.size; 682 a.array = b.array;
Thread 2: 120 main() → 150 Data (Data c) → 270 int getSize(): 271 return size; and 130 void run() → 850 void addAll(Data c): 851 b.size += c.size; 852 b.array += c.array;
[Each access in the graph is annotated as a read (R) or write (W).]
33. Empirical Studies
Studies:
1. Evaluate effectiveness of finding multiple faults
2. Evaluate effectiveness of explaining the bug
3. Evaluate efficiency of the technique (see paper)
Empirical setup:
• Implemented in Java (Soot) and C++ (Pin)
• Evaluated on a set of subjects
34. Evaluation: Subjects
Language  Program         KLOC  Num. Bugs  Bug Type
Java      TreeSet-1       7.5   5          Atomicity
Java      TreeSet-2       7.5   3          Atomicity
Java      StringBuffer-1  1.4   4          Atomicity
Java      StringBuffer-2  1.4   1          Atomicity
Java      Vector-1        9.5   4          Atomicity
Java      Vector-2        9.5   2          Atomicity
C++       Mysql-169       331   1          Atomicity
C++       Mysql-791       372   1          Atomicity
C++       NSPR-165586     125   1          Atomicity
C++       PBZip2          2     1          Order
C++       Transmission    90    1          Order
35. Study 1: Handling Multiple Bugs
Goal: investigate how well Griffin clusters failing executions responsible for the same bug.
Method:
• Ran Step 2 of the algorithm with p = 30, th = 0.8
• Computed F-measure* values to evaluate the effectiveness of the clustering algorithm
* F-measure is a standard method to evaluate clustering. See M. Steinbach et al. A comparison of document clustering techniques. Wksp. Text Mining, 2000.
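For reference, the clustering F-measure can be computed with the standard best-match formulation in the style of Steinbach et al. (sketched here, not taken from the paper; failing-test names are illustrative):

```python
def clustering_f_measure(true_classes, clusters):
    """Weighted best-match F-measure of output clusters against true classes."""
    n = sum(len(c) for c in true_classes)
    total = 0.0
    for cls in true_classes:
        best = 0.0
        for cluster in clusters:
            overlap = len(set(cls) & set(cluster))
            if overlap == 0:
                continue
            precision = overlap / len(cluster)
            recall = overlap / len(cls)
            best = max(best, 2 * precision * recall / (precision + recall))
        # Weight each true class by its share of the failing executions.
        total += (len(cls) / n) * best
    return total

# A perfect clustering scores 1.00, as in most rows of the results table.
print(clustering_f_measure([["f1", "f2"], ["f3", "f4"]],
                           [["f1", "f2"], ["f3", "f4"]]))  # 1.0
```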
36. Study 1: Handling Multiple Bugs
Program         # Patterns  # Bugs  # Output Clusters  F-measure
TreeSet-1       714         5       7                  0.88
TreeSet-2       656         3       4                  0.91
StringBuffer-1  12          4       4                  1.00
StringBuffer-2  3           1       1                  1.00
Vector-1        18          4       4                  1.00
Vector-2        10          2       2                  1.00
Mysql-169       21834       1       1                  1.00
Mysql-791       71694       1       2                  0.94
NSPR-165586     1479        1       2                  0.86
PBZip2          427         1       2                  0.96
Transmission    226         1       1                  1.00
• Most F-measures are close to 1.00, indicating effective clustering
• Manual inspection of the cases with F-measure below 1.00 indicates that a lower th would make clustering more effective; the parameters may need adjustment
37. Study 2: Reconstructing Bug Context
Goal: investigate how well Griffin reconstructs bug context.
Method:
• Ran Step 3 of the algorithm
• Investigated the results
38–41. Study 2: Reconstructing Bug Context
Program         # Bugs  # Output Clusters  # False Positives  Suspicious method contains bug  Call-stack size
TreeSet-1       5       5                  0                  Y                               6
TreeSet-2       3       3                  0                  Y                               6
StringBuffer-1  4       4                  0                  Y                               1
StringBuffer-2  1       1                  0                  Y                               1
Vector-1        4       4                  0                  Y                               1
Vector-2        2       2                  0                  Y                               1
Mysql-169       1       2                  1                  Y                               9
Mysql-791       1       1                  0                  Y                               1
NSPR-165586     1       1                  0                  Y                               4
PBZip2          1       1                  0                  Y                               0
Transmission    1       1                  0                  Y                               7
Observations:
1. The technique successfully separates false-positive patterns into their own clusters (e.g., Mysql-169)
2. The technique successfully locates the bug in the suspicious method in every case
3. Call-stack sizes are greater than 0 in all but one case; without the reconstructed context it would be difficult to infer the method containing the bug
42. Future Work
• Perform user studies to determine the usefulness of the technique to developers
• Perform more studies that involve multiple bugs
• Perform studies to give more guidance on selecting clustering parameters
43. Contributions
• A fault-explanation technique that provides
  • Information about multiple bugs
  • True- and false-positive patterns
  • Visualization in a bug graph
• Empirical results that indicate the effectiveness of fault explanation
  • Effective in grouping concurrency bugs
  • Effective in explaining concurrency bugs
• See www.cc.gatech.edu/~sangminp/issta2013
QUESTIONS?
45. Challenges
• Large context size
• Large number of patterns
• Efficient information gathering
• Expensive manual inspection
• Engineering issues
Editor's Notes
[RV] In the top box, it should say “Limitations”, since you list more than one. Also, when you mention the limitations, be sure to tell the audience that you will give an example later in the talk.
[RV] When you present this slide, emphasize that a major result of *prior work* from POPL 2006 identified this table of interleaved read/write access patterns as enough to identify atomicity and order violations, including those that involve multiple variables. Also say that you will momentarily show an example. (Particular patterns of interleaved reads and writes on program variables; the table is enough to explain bugs.)
[RV] The “problem” here remains too abstract. I think you need to show it by using animation to *replace* the code executed by Threads 1 and 2 with the *actual* code as a developer would see it (namely, the methods corresponding to the leaves). Otherwise, someone will not be convinced that there is a real problem here, since the example as you show it is easy to understand.