Invited tool demonstration at the 8th International Conference on Aspect-Oriented Software Development (AOSD '09), Charlottesville, VA, USA, March 2-6, 2009.
This document discusses the implementation of digital filters in fixed-point arithmetic on embedded systems. It presents the need for methodology and tools to design fixed-point embedded filter systems. The key steps are: 1) choosing a filter algorithm, 2) rounding coefficients to fixed-point, and 3) implementing the algorithm. Optimal implementations minimize degradation from quantization errors while meeting resource constraints. The document outlines a global flow from filter design to code generation and optimization.
CBSE Question Paper Computer Science with C++ 2011 (Deepak Singh)
This document provides instructions for a 3-hour computer science exam with 70 maximum marks. It includes 6 questions with subparts testing various C++ programming concepts. Question 1 covers local vs global variables, header files, error correction, and finding output. Question 2 differentiates class members, illustrates function overloading, and defines a class. Question 3 includes array functions for transferring content between arrays, finding an array element location, and queue operations. Question 4 involves file input/output functions for modifying a data file.
This document summarizes an article about implementing the RSA encryption/decryption algorithm on an FPGA. It begins with an overview of cryptography and the RSA algorithm. It then describes the key steps in RSA - key generation, encryption, and decryption. The main mathematical operations required for RSA are also summarized - modular addition, multiplication, and exponentiation. The document then presents the design of a 32-bit RSA decryption engine in VHDL, along with synthesis results showing its resource usage and maximum clock frequency on an FPGA. It concludes that an RSA decryption engine can be efficiently implemented on an FPGA using limited resources.
Designing Architecture-aware Library using Boost.Proto (Joel Falcou)
This document discusses designing architecture-aware libraries using Boost.Proto. It describes how the NT2 scientific computing library was redesigned using Boost.Proto to make it more extensible and able to better support new hardware architectures. The redesign segmented the evaluation of expressions into phases. Boost.Proto transforms are used in each phase to advance code generation. Hardware specifications influence function overloads through generalized tag dispatching, allowing the best function implementation to be selected for a given hardware architecture. This makes it possible to more easily add support for new optimization schemes and hardware targets to the library.
The document contains a sample question paper for CBSE Grade 12 Computer Science exam. It includes multiple choice, short answer and long answer questions on topics like C++ programming, object oriented concepts, data structures, databases and computer networks. Some questions ask to write C++ code for tasks like defining classes, sorting arrays, implementing stacks. Others involve evaluating C++ code snippets, answering conceptual questions, writing SQL queries and solving problems on Boolean algebra and logic circuits.
The document provides a sample question paper for Computer Science (Theory) - Class XII. It contains 7 questions with multiple parts assessing concepts like C++ programming, data structures, databases, computer networks and Boolean algebra. The questions include writing code segments, evaluating outputs, explaining concepts and solving problems related to arrays, classes, SQL queries and logic circuits.
The document provides a 3-hour computer science exam containing multiple questions related to C++ programming. It includes questions about automatic type conversion vs type casting, header files, syntax errors, output of code snippets, polymorphism, class definitions, function definitions, arrays, memory allocation, stacks, and postfix notation evaluation.
Random numbers, their types and usage
Galois Field
Polynomial Arithmetic
Example of Polynomial Arithmetic
TRNG, PRNG, CSPRNG
Review of BBS
Stream Ciphering
RC4 algorithm
Basic Number Theory
Extended Euclidean Algorithm
Relevance of Extended Euclidean Algorithm
20101017 program analysis_for_security_livshits_lecture02_compilers (Computer Science Club)
This document provides an introduction and overview of compiler optimization techniques, including:
1) Flow graphs, constant folding, global common subexpressions, induction variables, and reduction in strength.
2) Data-flow analysis basics like reaching definitions, gen/kill frameworks, and solving data-flow equations iteratively.
3) Pointer analysis using Andersen's formulation to model references between local variables and heap objects. Rules are provided to represent points-to relationships.
C++ is an object-oriented programming language created by Bjarne Stroustrup in 1985 that maintains aspects of C while adding object-oriented features like classes. C++ can be used to create small programs or large applications across many domains. Key concepts covered include functions, classes, inheritance, polymorphism, and memory management techniques like realloc() and free().
This document contains a data structures question paper from Anna University. It has two parts:
Part A contains 10 short answer questions covering topics like ADT, linked stacks, graph theory, algorithm analysis, binary search trees, and more.
Part B contains 5 long answer questions each worth 16 marks. Topics include algorithms for binary search, linear search, recursion, sorting, trees, graphs, files, and more. Students are required to write algorithms, analyze time complexity, and provide examples for each question.
The document provides an introduction to a Java programming course. It outlines the course objectives which include understanding core Java concepts like primitive data types, control flow, methods, arrays, object-oriented programming, and core Java classes. It also discusses how upon completing the course students will be able to develop programs using Eclipse IDE and write simple programs using various Java features. The document then covers specific topics that will be taught like methods, object-oriented programming concepts like classes, constructors, and polymorphism.
The document is a sample paper for Class XII Subject Informatics Practices. It consists of 3 sections - Section A with 30 marks, Section B with 20 marks each, and Section C with 20 marks each. Section A contains short answer questions, Section B contains case studies and questions, and Section C contains SQL queries and output questions. The document provides answer keys for all questions.
The document contains 15 questions from previous year CBSE and other board exam papers related to C++ programming. Each question provides code snippets and asks to determine possible outputs, values of variables, minimum and maximum values etc. For each question, the correct answer is provided along with justification in 1-2 sentences where needed. The questions are testing concepts like random numbers, loops, arrays, functions etc. and ability to read code and analyze output.
This document provides an overview of various algorithms and data structures including recursive functions, graph representations, depth-first search (DFS), breadth-first search (BFS), all-pairs shortest paths algorithms like Floyd-Warshall, single-source shortest paths algorithms like Dijkstra's, trees, binary search trees (BST), min-max heaps, greedy algorithms, backtracking, and hashing/hash tables. It includes pseudocode and source code examples for many of these algorithms.
This document contains instructions and questions for a computer science exam. It covers topics like call by value vs call by reference in C++, header files, class definitions, functions, arrays, pointers, structures, inheritance, and file input/output. There are multiple choice, short answer, and code writing questions testing a variety of programming concepts.
Boost.Dispatch is a generic tag-dispatching library that allows for specializing functions based on type constraints. It introduces a hierarchy system to categorize types and functions based on tags. This allows defining implementations of functions like "plus" for different argument types and architectures. The dispatch call uses the hierarchy information to select the best matching implementation at call sites in a generic way. This minimizes code duplication and increases the applicability of tag dispatching in C++.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
Automatic Task-based Code Generation for High Performance DSEL (Joel Falcou)
Providing high-level tools for parallel programming while sustaining a high level of performance has been a challenge that techniques like Domain Specific Embedded Languages (DSELs) try to solve. In previous work, we investigated the design of such a DSEL – NT2 – providing a MATLAB-like syntax for parallel numerical computations inside a C++ library.
The main issue addressed here is how limitations of classical DSEL generation and multithreaded code generation can be overcome.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... (ijceronline)
This document discusses the implementation of Elliptic Curve Digital Signature Algorithm (ECDSA) using variable text message encryption methods. It begins with an abstract that outlines ECDSA, its advantages over other digital signature algorithms like smaller key size, and implementation of ECDSA over elliptic curves P-192 and P-256 with variable size text message, fixed size text message, and text based message encryption. It then provides details on elliptic curve cryptography, the elliptic curve discrete logarithm problem, finite fields, and domain parameters for ECDSA.
C is a general-purpose programming language that is widely used for developing system software and applications. Some key aspects of C include:
1) C code is made up of functions that contain statements to be executed. Identifiers in C can be declared as global, accessible anywhere, or local, accessible only within the block in which they are defined.
2) C supports basic data types like integers, floats, characters, and strings. Variables are declared with a specific data type and can be defined as global or local.
3) The main() function is the entry point of every C program. It contains the primary logic to be executed. Header files contain declarations that are included using #include directives.
The document contains a computer science question paper with 7 questions covering topics like C++, data structures, SQL, computer networks and Boolean algebra. Some of the questions ask to write C++ code for classes, functions and algorithms. Others involve writing SQL queries on sample tables and analyzing network topologies to suggest appropriate designs. The overall document tests knowledge of key concepts in these domains through descriptive and coding questions.
The document provides an introduction to the C programming language. It discusses C's history, origins in the development of UNIX, data types, variables, constants, operators, input/output functions, conditional statements, and loops. It also provides 10 examples of C programs covering topics like calculating sums, finding prime and palindrome numbers, temperature conversion, and linear/binary search.
This document contains sample test papers for the subject Object Oriented Programming. It includes 3 sample test papers with questions related to OOP concepts like classes, objects, inheritance, polymorphism, operator overloading etc. The questions are in multiple choice and descriptive form evaluating student's understanding of key OOP concepts. The test papers follow a standard format providing instructions, marks allocation per question, and assume students have prerequisite knowledge of C++ and OOP.
Cryptography is the combination of mathematics and computer science. Cryptography is used for the encryption and decryption of data using mathematics. Cryptography transmits information in an illegible form such that only the intended recipient will be able to decrypt it.
The Goal and The Journey - Turning back on one year of C++14 Migration (Joel Falcou)
C++14 has been announced as the next best thing since sliced bread in terms of simplicity, performance and overall elegance of C++ code. This talk is the story of why and how we decided to migrate one of our old 'modern C++' software libraries -- BSP++, a C++ implementation of the BSP parallel programming model -- to C++14.
More than just a recollection of 'use this' or 'do that' mottos, this talk will try to ponder on:
why one should consider migrating to C++14 now
which features actually helped and which ones did not
the traps and pitfalls compilers tried to pull on us
Implementation of Elliptic Curve Digital Signature Algorithm Using Variable T... (ijceronline)
International Journal of Computational Engineering Research (IJCER) is dedicated to protecting personal information and will make every reasonable effort to handle collected information appropriately. All information collected, as well as related requests, will be handled as carefully and efficiently as possible in accordance with IJCER standards for integrity and objectivity.
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr... (Raffi Khatchadourian)
Efficiency is essential to support responsiveness w.r.t. ever-growing datasets, especially for Deep Learning (DL) systems. DL frameworks have traditionally embraced deferred execution-style DL code—supporting symbolic, graph-based Deep Neural Network (DNN) computation. While scalable, such development is error-prone, non-intuitive, and difficult to debug. Consequently, more natural, imperative DL frameworks encouraging eager execution have emerged at the expense of run-time performance. Though hybrid approaches aim for the “best of both worlds,” using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution. We present our ongoing work on automated refactoring that assists developers in specifying whether and how their otherwise eagerly-executed imperative DL code could be reliably and efficiently executed as graphs while preserving semantics. The approach, based on a novel imperative tensor analysis, will automatically determine when it is safe and potentially advantageous to migrate imperative DL code to graph execution and modify decorator parameters or eagerly executing code already running as graphs. The approach is being implemented as a PyDev Eclipse IDE plug-in and uses the WALA Ariadne analysis framework. We discuss our ongoing work towards optimizing imperative DL code to its full potential.
Automated Evolution of Feature Logging Statement Levels Using Git Histories a... (Raffi Khatchadourian)
Logging—used for system events and security breaches to describe more informational yet essential aspects of software features—is pervasive. Given the high transactionality of today's software, logging effectiveness can be reduced by information overload. Log levels help alleviate this problem by correlating a priority to logs that can be later filtered. As software evolves, however, levels of logs documenting surrounding feature implementations may also require modification as features once deemed important may have decreased in urgency and vice-versa. We present an automated approach that assists developers in evolving levels of such (feature) logs. The approach, based on mining Git histories and manipulating a degree of interest (DOI) model, transforms source code to revitalize feature log levels based on the “interestingness” of the surrounding code. Built upon JGit and Mylyn, the approach is implemented as an Eclipse IDE plug-in and evaluated on 18 Java projects with ~3 million lines of code and ~4K log statements. Our tool successfully analyzes 99.22% of logging statements, increases log level distributions by ~20%, and increases the focus of logs in bug fix contexts ~83% of the time. Moreover, pull (patch) requests were integrated into large and popular open-source projects. The results indicate that the approach is promising in assisting developers in evolving feature log levels.
A Tool for Rejuvenating Feature Logging Levels via Git Histories and Degree o...Raffi Khatchadourian
Logging is a significant programming practice. Due to the highly transactional nature of modern software applications, many logs are generated every day, which may overwhelm developers. Logging information overload can be dangerous to software applications. Using log levels, developers can print valuable information while hiding verbose logs at runtime. As software evolves, the log levels of logging statements associated with the surrounding feature implementation may also need to be altered. Maintaining log levels manually requires significant effort. In this paper, we propose an automated approach that rejuvenates feature log levels to match the interest level of developers in the surrounding features. The approach is implemented as an open-source Eclipse plug-in built on two external plug-ins (JGit and Mylyn). It was tested on 18 open-source Java projects consisting of ~3 million lines of code and ~4K log statements. Our tool successfully analyzes 99.22% of logging statements, increases log level distributions by ~20%, and increases the focus of logs in bug fix contexts ~83% of the time. Interested readers can watch our demonstration video (https://www.youtube.com/watch?v=qIULoAXoDv4).
Challenges in Migrating Imperative Deep Learning Programs to Graph Execution:...Raffi Khatchadourian
The document discusses challenges in migrating imperative deep learning programs to graph execution. It provides examples of TensorFlow imperative code that uses features like Python side effects and variables that do not directly translate to graph execution. Specifically, it shows how a model that uses a counter variable to increment another variable on each call would not work as expected, as the initial counter value is captured during tracing, resulting in the variable being incremented on each call rather than just the first one. This demonstrates common problems that can arise from migrating imperative code to graphs and result in unexpected numerical results or reduced performance.
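The counter pitfall described above is TensorFlow-specific, but the underlying mechanism—trace once, replay a frozen graph—can be simulated in any language. The sketch below (a hypothetical analogy, not TensorFlow code) "traces" a function a single time, baking the counter value it observed into the replayed computation, so later calls never re-increment it the way eager execution would.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntUnaryOperator;

public class TracingDemo {
    // Host-language state, analogous to a Python-side counter.
    static final AtomicInteger counter = new AtomicInteger(0);

    // Eager semantics: the counter side effect runs on every call.
    static int eagerCall(int v) {
        return v + counter.incrementAndGet();
    }

    // "Traced" semantics: the side effect runs once, at trace time, and the
    // observed value is captured as a constant in the replayed "graph."
    static IntUnaryOperator trace() {
        int captured = counter.incrementAndGet(); // happens only during tracing
        return v -> v + captured;                 // replayed on every later call
    }

    public static void main(String[] args) {
        counter.set(0);
        System.out.println(eagerCall(10)); // 11
        System.out.println(eagerCall(10)); // 12: eager re-reads the counter

        counter.set(0);
        IntUnaryOperator graph = trace();
        System.out.println(graph.applyAsInt(10)); // 11
        System.out.println(graph.applyAsInt(10)); // 11, not 12: increment was
                                                  // captured during tracing
    }
}
```

The divergence between the last two pairs of outputs is exactly the kind of unexpected numerical result the document warns about.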
Actor Concurrency Bugs: A Comprehensive Study on Symptoms, Root Causes, API U...Raffi Khatchadourian
Actor concurrency is becoming increasingly important in the development of real-world software systems. Although actor concurrency may be less susceptible to some multithreaded concurrency bugs, such as low-level data races and deadlocks, it comes with its own bugs that may be different. However, the fundamental characteristics of actor concurrency bugs, including their symptoms, root causes, API usages, examples, and differences when they come from different sources are still largely unknown. Actor software development can significantly benefit from a comprehensive qualitative and quantitative understanding of these characteristics, which is the focus of this work, to foster better API documentation, development practices, testing, debugging, repairing, and verification frameworks. To conduct this study, we take the following major steps. First, we construct a set of 186 real-world Akka actor bugs from Stack Overflow and GitHub via manual analysis of 3,924 Stack Overflow questions, answers, and comments and 3,315 GitHub commits, messages, original and modified code snippets, issues, and pull requests. Second, we manually study these actor bugs and their fixes to understand and classify their symptoms, root causes, and API usages. Third, we study the differences between the commonalities and distributions of symptoms, root causes, and API usages of our Stack Overflow and GitHub actor bugs. Fourth, we discuss real-world examples of our actor bugs with these symptoms and root causes. Finally, we investigate the relation of our findings with those of previous work and discuss their implications. A few findings of our study are: (1) symptoms of our actor bugs can be classified into five categories, with Error as the most common symptom and Incorrect Exceptions as the least common, (2) root causes of our actor bugs can be classified into ten categories, with Logic as the most common root cause and Untyped Communication as the least common, (3) a small number of Akka API packages are responsible for most of API usages by our actor bugs, and (4) our Stack Overflow and GitHub actor bugs can differ significantly in commonalities and distributions of their symptoms, root causes, and API usages. While some of our findings agree with those of previous work, others sharply contrast.
An Empirical Study of Refactorings and Technical Debt in Machine Learning Sys...Raffi Khatchadourian
Machine Learning (ML), including Deep Learning (DL), systems, i.e., those with ML capabilities, are pervasive in today’s data-driven society. Such systems are complex; they are comprised of ML models and many subsystems that support learning processes. As with other complex systems, ML systems are prone to classic technical debt issues, especially when such systems are long-lived, but they also exhibit debt specific to these systems. Unfortunately, there is a gap of knowledge in how ML systems actually evolve and are maintained. In this paper, we fill this gap by studying refactorings, i.e., source-to-source semantics-preserving program transformations, performed in real-world, open-source software, and the technical debt issues they alleviate. We analyzed 26 projects, consisting of 4.2 MLOC, along with 327 manually examined code patches. The results indicate that developers refactor these systems for a variety of reasons, both specific and tangential to ML, some refactorings correspond to established technical debt categories, while others do not, and code duplication is a major cross-cutting theme that particularly involved ML configuration and model code, which was also the most refactored. We also introduce 14 and 7 new ML-specific refactorings and technical debt categories, respectively, and put forth several recommendations, best practices, and anti-patterns. The results can potentially assist practitioners, tool developers, and educators in facilitating long-term ML system usefulness.
Automated Evolution of Feature Logging Statement Levels Using Git Histories a...Raffi Khatchadourian
This document describes an approach to automatically evolve the logging statement levels of features in software based on degree of interest over time. It extracts feature logging statements and their levels from code, manipulates a degree of interest model based on code changes from Git histories, and identifies mismatches between current levels and predicted levels to suggest level changes. The goal is to reduce information overload by bringing more relevant feature logs to the foreground and less relevant ones to the background based on recent development activity. Security implications regarding side-channel attacks are also mentioned.
Streaming APIs allow for big data processing of native data structures by providing MapReduce-like operations over these structures. However, unlike traditional big data systems, these data structures typically reside in shared memory accessed by multiple cores. Although popular, this emerging hybrid paradigm opens the door to possibly detrimental behavior, such as thread contention and bugs related to non-execution and non-determinism. This study explores the use and misuse of a popular streaming API, namely, Java 8 Streams. The focus is on how developers decide whether or not to run these operations sequentially or in parallel and bugs both specific and tangential to this paradigm. Our study involved analyzing 34 Java projects and 5.53 million lines of code, along with 719 manually examined code patches. Various automated, including interprocedural static analysis, and manual methodologies were employed. The results indicate that streams are pervasive, stream parallelization is not widely used, and performance is a crosscutting concern that accounted for the majority of fixes. We also present coincidences that both confirm and contradict the results of related studies. The study advances our understanding of streams, as well as benefits practitioners, programming language and API designers, tool developers, and educators alike.
Safe Automated Refactoring for Intelligent Parallelization of Java 8 StreamsRaffi Khatchadourian
Streaming APIs are becoming more pervasive in mainstream Object-Oriented programming languages. For example, the Stream API introduced in Java 8 allows for functional-like, MapReduce-style operations in processing both finite and infinite data structures. However, using this API efficiently involves subtle considerations like determining when it is best for stream operations to run in parallel, when running operations in parallel can be less efficient, and when it is safe to run in parallel due to possible lambda expression side-effects. In this paper, we present an automated refactoring approach that assists developers in writing efficient stream code in a semantics-preserving fashion. The approach, based on a novel data ordering and typestate analysis, consists of preconditions for automatically determining when it is safe and possibly advantageous to convert sequential streams to parallel and unorder or de-parallelize already parallel streams. The approach was implemented as a plug-in to the Eclipse IDE, uses the WALA and SAFE analysis frameworks, and was evaluated on 11 Java projects consisting of ~642 thousand lines of code. We found that 36.31% of candidate streams were refactorable, and an average speedup of 3.49 on performance tests was observed. The results indicate that the approach is useful in optimizing stream code to its full potential.
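The side-effect concern the abstract raises can be made concrete with a minimal sketch (my illustration, not the paper's example): a stateless, associative pipeline is a safe parallelization candidate, while a lambda mutating shared, unsynchronized state is exactly what such preconditions must reject.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.IntStream;

public class StreamSafetyDemo {
    // Safe to parallelize: the pipeline is stateless and the reduction
    // (sum) is associative, so .parallel() preserves the result.
    static int sumOfSquares(int n) {
        return IntStream.rangeClosed(1, n).parallel().map(x -> x * x).sum();
    }

    // NOT safe to parallelize: the lambda mutates shared, non-thread-safe
    // state. It behaves correctly only because the stream stays sequential;
    // adding .parallel() here could drop or corrupt elements.
    static List<Integer> sideEffectingCollect(int n) {
        List<Integer> out = new ArrayList<>(); // not thread-safe
        IntStream.rangeClosed(1, n).forEach(out::add);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(3));                // 14
        System.out.println(sideEffectingCollect(3).size()); // 3
    }
}
```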
The document provides an introduction to type constraints. It defines type constraints as denoting the subtyping relationships that must hold between program elements for a program to be considered well-typed. Type constraints can be inferred from program constructs like assignments and field accesses based on implied subtype relationships. They can be used for applications like type checking, type inference, and refactoring by solving the generated constraint variables and subtyping relationships.
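As a small worked instance (mine, not the document's), consider how an assignment generates a constraint: `l = r` requires the type of `r` to be a subtype of the type of `l`, written [r] <: [l]. A refactoring may only change declared types in ways that keep all such generated constraints satisfiable.

```java
public class TypeConstraintDemo {
    static class Shape { }
    static class Circle extends Shape { }

    static boolean wellTyped() {
        Circle c = new Circle();
        // The assignment below generates the constraint [c] <: [s]:
        // Circle must be a subtype of Shape, which holds here. A refactoring
        // that changed s's declared type to, say, String would violate the
        // constraint, so the program would no longer be well-typed.
        Shape s = c;
        return s instanceof Circle;
    }

    public static void main(String[] args) {
        System.out.println(wellTyped()); // true
    }
}
```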
Safe Automated Refactoring for Intelligent Parallelization of Java 8 Streams ...Raffi Khatchadourian
Streaming APIs are becoming more pervasive in mainstream Object-Oriented programming languages. For example, the Stream API introduced in Java 8 allows for functional-like, MapReduce-style operations in processing both finite and infinite data structures. However, using this API efficiently involves subtle considerations like determining when it is best for stream operations to run in parallel, when running operations in parallel can be less efficient, and when it is safe to run in parallel due to possible lambda expression side-effects. In this paper, we present an automated refactoring approach that assists developers in writing efficient stream code in a semantics-preserving fashion. The approach, based on a novel data ordering and typestate analysis, consists of preconditions for automatically determining when it is safe and possibly advantageous to convert sequential streams to parallel and unorder or de-parallelize already parallel streams. The approach was implemented as a plug-in to the Eclipse IDE, uses the WALA and SAFE analysis frameworks, and was evaluated on 11 Java projects consisting of ~642K lines of code. We found that 57 of 157 candidate streams (36.31%) were refactorable, and an average speedup of 3.49 on performance tests was observed. The results indicate that the approach is useful in optimizing stream code to its full potential.
A Tool for Optimizing Java 8 Stream Software via Automated RefactoringRaffi Khatchadourian
This document describes a tool called Optimize Streams that uses automated refactoring and static analysis to optimize Java 8 stream code for improved performance. The tool analyzes stream code to determine when parallelization is safe and interference-free. It was tested on 11 Java projects totaling over 600,000 lines of code, and observed an average speedup of 1.55x after refactoring stream code. The tool integrates analyses from the WALA and SAFE frameworks to infer ordering properties and prevent resource errors during refactoring.
Porting the NetBeans Java 8 Enhanced For Loop Lambda Expression Refactoring t...Raffi Khatchadourian
Java 8 is one of the largest upgrades to the popular language and framework in over a decade. However, the Eclipse IDE is missing several key refactorings that could help developers take advantage of new features in Java 8 more easily. In this paper, we discuss our ongoing work in porting the enhanced for loop to lambda expression refactoring from the NetBeans IDE to Eclipse. We also discuss future plans for new Java 8 refactorings not found in any current IDE.
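To make the transformation concrete, here is a minimal before/after sketch of the kind of rewrite such a refactoring performs (my illustration of the general enhanced-for-to-`forEach` rewrite, not NetBeans' actual output):

```java
import java.util.Arrays;
import java.util.List;

public class ForLoopToLambdaDemo {
    // Before: Java 5 enhanced for loop.
    static int sumBefore(List<Integer> xs) {
        int sum = 0;
        for (Integer x : xs) {
            sum += x;
        }
        return sum;
    }

    // After: the behavior-preserving Java 8 form using forEach and a
    // lambda expression (a single-element array stands in for the
    // accumulator, since lambdas cannot mutate local variables directly).
    static int sumAfter(List<Integer> xs) {
        int[] sum = {0};
        xs.forEach(x -> sum[0] += x);
        return sum[0];
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3);
        System.out.println(sumBefore(xs)); // 6
        System.out.println(sumAfter(xs));  // 6
    }
}
```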
Towards Safe Refactoring for Intelligent Parallelization of Java 8 StreamsRaffi Khatchadourian
The Java 8 Stream API sets forth a promising new programming model that incorporates functional-like, MapReduce-style features into a mainstream programming language. However, using streams correctly and efficiently may involve subtle considerations. In this poster, we present our ongoing work and preliminary results towards an automated refactoring approach that assists developers in writing optimal stream code. The approach, based on ordering and typestate analysis, determines when it is safe and advantageous to convert sequential streams to parallel and to optimize already-parallel streams.
Proactive Empirical Assessment of New Language Feature Adoption via Automated...Raffi Khatchadourian
This document describes an empirical study assessing the adoption of default methods in Java 8. The study involved issuing pull requests to 19 open source Java projects on GitHub that contained refactorings migrating interface method implementations to default methods. The study found that developers adopted default methods when the implementation was localized to the interface or parameters, provided optional behavior, or allowed static methods to be called as instance methods. Developers rejected default methods when compatibility with older JDK versions was required to support legacy clients. The study presents a novel proactive technique for assessing new language features by introducing them to developers via automated refactoring rather than relying on postmortem analysis.
Defaultification Refactoring: A Tool for Automatically Converting Java Method...Raffi Khatchadourian
Enabling interfaces to declare (instance) method implementations, Java 8 default methods can be used as a substitute for the ubiquitous skeletal implementation software design pattern. Performing this transformation on legacy software manually, though, may be non-trivial. The refactoring requires analyzing complex type hierarchies, resolving multiple implementation inheritance issues, reconciling differences between class and interface methods, and analyzing tie-breakers (dispatch precedence) with overriding class methods. All of this is necessary to preserve type-correctness and confirm semantics preservation. We demonstrate an automated refactoring tool called MIGRATE SKELETAL IMPLEMENTATION TO INTERFACE for transforming legacy Java code to use the new default construct. The tool, implemented as an Eclipse plug-in, is driven by an efficient, fully-automated, type constraint-based refactoring approach. It features an extensive ruleset covering various corner-cases where default methods cannot be used. The resulting code is semantically equivalent to the original, more succinct, easier to comprehend, less complex, and exhibits increased modularity. A demonstration can be found at http://youtu.be/YZHIy0yePh8.
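A minimal instance of the pattern and its migration (a sketch of the general idea, not the tool's output): the shared method body moves from an abstract "skeletal" class into the interface as a default method, removing one level of inheritance.

```java
public class DefaultMethodDemo {
    // Before: the skeletal implementation pattern—an abstract class
    // implements the interface and holds the shared method body.
    interface Greeter {
        String name();
    }
    static abstract class AbstractGreeter implements Greeter {
        public String greet() { return "Hello, " + name(); }
    }

    // After: the body migrates into the interface as a default method;
    // clients can now implement GreeterV2 directly (even as a lambda)
    // without extending any abstract class.
    interface GreeterV2 {
        String name();
        default String greet() { return "Hello, " + name(); }
    }

    public static void main(String[] args) {
        GreeterV2 g = () -> "world";
        System.out.println(g.greet()); // Hello, world
    }
}
```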
Defaultification Refactoring: A Tool for Automatically Converting Java Method...Raffi Khatchadourian
This document describes a tool that automatically converts Java methods to default methods in interfaces. The tool is implemented as an Eclipse plugin and uses type constraints to identify methods that can be converted from skeletal implementation classes to default methods in corresponding interfaces according to Java 8 features. An evaluation found the tool successfully converted around 20% of candidate methods and was shown to produce compilable and semantically equivalent results through various testing techniques. Future work could include performing additional refactorings simultaneously during the conversion process.
Automated Refactoring of Legacy Java Software to Default Methods Talk at ICSE...Raffi Khatchadourian
Java 8 default methods, which allow interfaces to contain (instance) method implementations, are useful for the skeletal implementation software design pattern. However, it is not easy to transform existing software to exploit default methods as it requires analyzing complex type hierarchies, resolving multiple implementation inheritance issues, reconciling differences between class and interface methods, and analyzing tie-breakers (dispatch precedence) with overriding class methods to preserve type-correctness and confirm semantics preservation. In this paper, we present an efficient, fully-automated, type constraint-based refactoring approach that assists developers in taking advantage of enhanced interfaces for their legacy Java software. The approach features an extensive rule set that covers various corner-cases where default methods cannot be used. To demonstrate applicability, we implemented our approach as an Eclipse plug-in and applied it to 19 real-world Java projects, as well as submitted pull requests to popular GitHub repositories. The indication is that it is useful in migrating skeletal implementation methods to interfaces as default methods, sheds light onto the pattern’s usage, and provides insight to language designers on how this new construct applies to existing software.
Poster on Automated Refactoring of Legacy Java Software to Default MethodsRaffi Khatchadourian
The document describes an automated refactoring approach that uses type constraints to migrate legacy Java code using the skeletal implementation pattern to default methods. It presents a motivating example and describes how type constraints are used to determine if a migration will be valid. An evaluation on 19 open source projects found the tool was able to refactor over 19% of candidate methods. A preliminary study submitting pull requests found some were merged but others were rejected due to supporting older clients.
What is an RPA CoE? Session 1 – CoE VisionDianaGray10
In the first session, we will review the organization's vision and how it shapes the CoE structure.
Topics covered:
• The role of a steering committee
• How do the organization’s priorities determine CoE Structure?
Speaker:
Chris Bolin, Senior Intelligent Automation Architect Anika Systems
Skybuffer SAM4U tool for SAP license adoptionTatiana Kojar
Manage and optimize your license adoption and consumption with SAM4U, an SAP free customer software asset management tool.
SAM4U, an SAP complimentary software asset management tool for customers, delivers a detailed and well-structured overview of license inventory and usage with a user-friendly interface. We offer a hosted, cost-effective, and performance-optimized SAM4U setup in the Skybuffer Cloud environment. You retain ownership of the system and data, while we manage the ABAP 7.58 infrastructure, ensuring fixed Total Cost of Ownership (TCO) and exceptional services through the SAP Fiori interface.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
5. Ideas and approaches to help build your organization's AI strategy.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Microsoft Teams session or in person at TU/e, located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
Rejuvenate Pointcut: A Tool for Pointcut Expression Recovery in Evolving Aspect-Oriented Software
1. Rejuvenate Pointcut
A Tool for Pointcut Expression Recovery in Evolving Aspect-Oriented Software
Raffi Khatchadourian¹, Phil Greenwood², Awais Rashid², Guoqing Xu¹
¹ Ohio State University
² Lancaster University
International Conference on Aspect-Oriented Software Development, 2009
2. Outline
1 Motivation
2 Approach
3 Evaluation
4 More Information
Khatchadourian, Greenwood, Rashid, Xu Rejuvenate Pointcut
5. Base-code
package p;
public class A {
    int f;
    void m1() {
        int a = f + 1;
    }
    void m2() {
        int b = f + 2;
    }
    void n() {
        int c = f + 3;
    }
}
Two methods, m1() and m2(), have names beginning with the character m. One method, n(), does not. All method bodies access the field f.
6. Along Came a Pointcut
pointcut fragile() : execution(* m*(..));
Base-code V1: selects m1() and m2() but not n().
Assume the pointcut is correct in V1.
9. Evolution
// ...
void p() {
    int d = f + 4;
}
// ...
pointcut fragile() : execution(* m*(..));
Base-code V2: the same pointcut selects m1() and m2() but not n() and p().
Fragile! The crosscutting concern (CCC) applies to p() in V2, but p() is not selected.
How can such join points be identified as the code evolves?
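The fragility above can be reproduced mechanically. This small Java sketch (purely illustrative; the names FragileMatch, V2_METHODS, and selectedBy are not from the slides) emulates only the name-matching part of execution(* m*(..)) over the method names of class A in base-code V2:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FragileMatch {
    // Method names declared by class A in base-code V2.
    static final List<String> V2_METHODS = List.of("m1", "m2", "n", "p");

    // Emulates the name part of the pointcut execution(* m*(..)).
    static List<String> selectedBy(String prefix, List<String> methods) {
        return methods.stream()
                      .filter(m -> m.startsWith(prefix))
                      .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Prints [m1, m2]: the newly added p() is silently missed.
        System.out.println(selectedBy("m", V2_METHODS));
    }
}
```

The pointcut keeps compiling and matching without warning, which is exactly why the missed join point goes unnoticed.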
14. Pointcut Rejuvenation: Leveraging Commonality
Phase I: Analysis using Concern Graphs. Extract commonalities between currently selected join points.
Phase II: Rejuvenation. Apply extracted patterns to the new version of the base-code.
[Concern graph for V1: package p contains class A; A declares_field A.f and declares_method A.m1(), A.m2(), and A.n(); each method gets_field A.f.]
Would execution(* A.n()) also be suggested?
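Phase I can be sketched as set intersection over concern-graph relationships. The Java fragment below is a deliberate simplification (the tool itself analyzes real AspectJ program structure, not hand-listed edges; the names ConcernGraph and commonPattern are hypothetical): it intersects the relationship sets of the currently selected shadows m1() and m2().

```java
import java.util.*;

public class ConcernGraph {
    record Edge(String src, String label, String dst) {}

    // Edges of the V1 concern graph from the slides.
    static final List<Edge> V1 = List.of(
        new Edge("A.m1()", "gets_field", "A.f"),
        new Edge("A.m2()", "gets_field", "A.f"),
        new Edge("A.n()",  "gets_field", "A.f"),
        new Edge("A", "declares_method", "A.m1()"),
        new Edge("A", "declares_method", "A.m2()"),
        new Edge("A", "declares_method", "A.n()"),
        new Edge("A", "declares_field", "A.f"),
        new Edge("p", "contains", "A"));

    // Phase I (sketch): relationships shared by every selected shadow.
    static Set<String> commonPattern(List<Edge> g, Set<String> selected) {
        Set<String> common = null;
        for (String shadow : selected) {
            Set<String> rels = new HashSet<>();
            for (Edge e : g) {
                if (e.src().equals(shadow)) rels.add(e.label() + "->" + e.dst());
                if (e.dst().equals(shadow)) rels.add(e.src() + "-" + e.label() + "->*");
            }
            if (common == null) common = rels; else common.retainAll(rels);
        }
        return common == null ? Set.of() : common;
    }

    public static void main(String[] args) {
        // Commonality of m1() and m2(): declared by A and reads field A.f.
        // Note that n() shares exactly these relationships, which is why the
        // slides ask whether execution(* A.n()) would also be suggested.
        System.out.println(commonPattern(V1, Set.of("A.m1()", "A.m2()")));
    }
}
```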
18. Pointcut Rejuvenation: Leveraging Commonality
Phase I: Analysis using Concern Graphs. Extract commonalities between currently selected join points.
Phase II: Rejuvenation. Apply extracted patterns to the new version of the base-code.
[Concern graph for V2: as in V1, but A now also declares_method A.p(), which gets_field A.f.]
Would execution(* A.n()) also be suggested?
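Phase II can be sketched the same way: scan the V2 concern graph for unselected methods that satisfy the extracted pattern ("declared by A and reads A.f"). This hypothetical fragment (names Rejuvenation and suggest are not from the slides) does exactly that:

```java
import java.util.*;

public class Rejuvenation {
    record Edge(String src, String label, String dst) {}

    // V2 concern graph: V1 plus the newly added method A.p().
    static final List<Edge> V2 = List.of(
        new Edge("A.m1()", "gets_field", "A.f"),
        new Edge("A.m2()", "gets_field", "A.f"),
        new Edge("A.n()",  "gets_field", "A.f"),
        new Edge("A.p()",  "gets_field", "A.f"),
        new Edge("A", "declares_method", "A.m1()"),
        new Edge("A", "declares_method", "A.m2()"),
        new Edge("A", "declares_method", "A.n()"),
        new Edge("A", "declares_method", "A.p()"));

    // Phase II (sketch): suggest every method matching the extracted pattern
    // ("declared by A and reads field A.f") that is not currently selected.
    static List<String> suggest(List<Edge> g, Set<String> selected) {
        List<String> out = new ArrayList<>();
        for (Edge e : g) {
            if (!e.label().equals("declares_method")) continue;
            String m = e.dst();
            boolean readsF = g.stream().anyMatch(x ->
                x.src().equals(m) && x.label().equals("gets_field") && x.dst().equals("A.f"));
            if (readsF && !selected.contains(m)) out.add(m);
        }
        return out;
    }

    public static void main(String[] args) {
        // m1() and m2() are selected by the original pointcut;
        // prints [A.n(), A.p()], answering the slide's question: yes,
        // n() matches the pattern too and would also be suggested.
        System.out.println(suggest(V2, Set.of("A.m1()", "A.m2()")));
    }
}
```

This is why the ranking scheme on the following slides matters: a pattern that also captures n() may or may not reflect the developer's intent, so suggestions must be scored, not applied blindly.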
22. Suggestion Ranking Scheme
Measurements for Suggestion Ranking
α error: how strong are the relationships between advised shadows compared to ones captured by a pattern?
β error: how well does the pattern express the same intentions as the pointcut?
24. Suggestion Ranking Scheme: Notation
ω       a join point shadow; code corresponding to a join point
A       a piece of advice
A_pce   a pointcut bound to advice A; a set of join point shadows
A'_pce  a subsequent revision of A_pce
P       the original program; the underlying base-code
P'      a subsequent revision of program P
Ω_P     the set of join point shadows contained in P
IG_P    a finite graph representing structural relationships between program elements in P
π       an acyclic path (sequence of arcs) in IG_P
Π_P     a set of acyclic paths derived from program P
25. Suggestion Ranking Scheme: Error Equations
[Slide shows equations (2)-(4): the attribute equations defining the α and β error measures over acyclic paths π in the concern graph of program P.]
29. Suggestion Confidence
Each suggestion is associated with a confidence value in [0,1].
A suggestion inherits the confidence of the pattern that produced it.
A pattern's confidence is calculated using a combination of α, β, and the depth of the pattern.
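The slides name the ingredients of a pattern's confidence (the α error, the β error, and the pattern depth) but not the exact formula. The combination below is therefore an assumption for illustration only, not the tool's actual computation; it merely shows one way the three quantities could yield a value in [0,1]:

```java
public class PatternConfidence {
    // Illustrative only: the actual combination used by Rejuvenate Pointcut
    // is not given on these slides. Assumes lower error and deeper (more
    // specific) patterns yield higher confidence, clamped to [0, 1].
    static double confidence(double alphaErr, double betaErr, int depth) {
        double error = (alphaErr + betaErr) / 2.0;  // blend the two error measures
        double specificity = depth / (depth + 1.0); // deeper patterns approach 1
        double c = (1.0 - error) * specificity;
        return Math.max(0.0, Math.min(1.0, c));
    }

    public static void main(String[] args) {
        System.out.println(confidence(0.2, 0.3, 3)); // 0.5625
    }
}
```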
33. But How Well Does It Work?
Correlation analysis (Phase I) on 20+ AspectJ benchmarks; the average confidence was 0.66.
Applied to 4 multi-versioned AspectJ projects (Phase II); rejuvenated pointcuts across major releases (26 in total).
Able to identify 94% of new shadows introduced in later versions, on average appearing in the top 4% of results.
39. Tool and Material Downloads
Tool research prototype publicly available at http://code.google.com/p/rejuvenate-pc.
Research-related material publicly available at http://sites.google.com/site/pointcutrejuvenation.
Full evaluation available in the corresponding technical report.