The document presents a general framework for slicing programs and models. It defines a slicing framework with syntax, semantics, projections, and slicing criteria. It applies the framework to program slicing examples and proposes adapting it for model slicing. The framework is then demonstrated on a class model case study, showing how slicing can reduce models for comprehension and debugging.
The document provides an introduction to a Java programming course. It outlines the course objectives which include understanding core Java concepts like primitive data types, control flow, methods, arrays, object-oriented programming, and core Java classes. It also discusses how upon completing the course students will be able to develop programs using Eclipse IDE and write simple programs using various Java features. The document then covers specific topics that will be taught like methods, object-oriented programming concepts like classes, constructors, and polymorphism.
1) The document describes experiments involving generating and manipulating discrete time signals in MATLAB. It includes generating original signals, folding signals, shifting signals, adding and multiplying signals, and plotting the results.
2) The experiments generate signals, perform operations like folding, shifting, addition and scaling on the signals, and plot the original and manipulated signals to observe the effects of the different operations.
3) The document contains code to generate sample signals, apply various signal processing operations like folding, shifting, addition and scaling, and use MATLAB plotting functions to display the original and processed signals.
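As a rough illustration of the folding, shifting, and addition operations these experiments perform, here is a sketch in plain Python rather than MATLAB; the sample values, index ranges, and function names are invented for illustration:

```python
# Signals are (values, indices) pairs, mimicking the discrete-time
# signals manipulated in the MATLAB experiments described above.

def fold(x, n):
    """y[n] = x[-n]: reverse the samples and negate (and reverse) the indices."""
    return x[::-1], [-i for i in n[::-1]]

def shift(x, n, k):
    """y[n] = x[n-k]: same samples, index range moved right by k."""
    return x, [i + k for i in n]

def sigadd(x1, n1, x2, n2):
    """Add two signals pointwise over the union of their index ranges."""
    lo, hi = min(n1 + n2), max(n1 + n2)
    n = list(range(lo, hi + 1))
    d1, d2 = dict(zip(n1, x1)), dict(zip(n2, x2))
    return [d1.get(i, 0) + d2.get(i, 0) for i in n], n

x, n = [1, 2, 3, 4, 5], [-2, -1, 0, 1, 2]
xf, nf = fold(x, n)        # samples reversed, indices negated
xs, ns = shift(x, n, 2)    # same samples on indices 0..4
```

Plotting (the `stem`/`subplot` calls of the MATLAB originals) is omitted; only the index bookkeeping is shown.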
The document introduces data-flow analysis, which derives information about a program's dynamic behavior by examining its static code. It discusses liveness analysis, which determines whether a variable is live (will be used in the future) or dead at a given point. The concepts of control flow graphs, uses/defs, and solving the data-flow equations through iterative analysis are explained. An example liveness analysis is worked through to demonstrate the process.
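The iterative solution of the liveness equations can be sketched as follows; the control-flow graph, use/def sets, and function name below are illustrative, not taken from the document:

```python
def liveness(succ, use, defs):
    """Iterate in[n] = use[n] | (out[n] - def[n]) and
    out[n] = union of in[s] over successors s, until a fixpoint."""
    live_in = {node: set() for node in succ}
    live_out = {node: set() for node in succ}
    changed = True
    while changed:
        changed = False
        for node in succ:
            out_n = set().union(*(live_in[s] for s in succ[node]))
            in_n = use[node] | (out_n - defs[node])
            if (in_n, out_n) != (live_in[node], live_out[node]):
                live_in[node], live_out[node] = in_n, out_n
                changed = True
    return live_in, live_out

# Hypothetical straight-line program:  1: a = 0;  2: b = a + 1;  3: return b
succ = {1: [2], 2: [3], 3: []}
use  = {1: set(), 2: {"a"}, 3: {"b"}}
defs = {1: {"a"}, 2: {"b"}, 3: set()}
live_in, live_out = liveness(succ, use, defs)
```

The fixpoint shows `a` live between statements 1 and 2, and `b` live between 2 and 3, matching the intuition that a variable is live exactly when its current value will still be used.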
This document discusses chapters and concepts from Liang's Introduction to Java Programming textbook. It covers defining and using methods in Java, including defining methods with parameters and return types, invoking methods by passing arguments, and how method calls are handled by the call stack. Key points covered include formal vs actual parameters, method signatures, return values, and reusing methods from other classes. Examples are provided to demonstrate calculating sums and finding maximum values using methods.
The document discusses selection control statements in Java including if, if-else, and switch statements. It provides examples of using boolean expressions and comparison operators to write conditional logic. Key topics covered include one-way if statements, two-way if-else statements, nested if statements, avoiding common errors, and using logical operators to combine conditions. Examples provided demonstrate how to use random numbers and selection statements to create simple math quiz programs.
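A minimal Python analogue of the random-number math-quiz idea described above (the Java originals are not reproduced here; function names and messages are invented):

```python
import random

def check_answer(a, b, answer):
    """One-way/two-way selection: compare the user's answer to a + b."""
    if answer == a + b:
        return "Correct!"
    else:
        return f"Wrong: {a} + {b} = {a + b}"

# Generate two random single-digit operands, as in the quiz examples.
a, b = random.randint(0, 9), random.randint(0, 9)
```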
This document contains 36 multiple choice questions about queuing theory and waiting line models. It covers topics like the characteristics of queuing systems, different types of queuing models (M/M/1, M/D/1, etc.), assumptions of queuing models, and using queuing theory to analyze real world systems. Several questions also provide word problems to test the application of queuing concepts to calculate metrics like average queue length and server utilization. The questions assess understanding of key queuing theory terminology, assumptions, models, and calculations.
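For the calculation questions mentioned, the standard textbook M/M/1 steady-state formulas can be sketched as follows (assuming Poisson arrivals at rate `lam`, exponential service at rate `mu`, a single server, and `lam < mu`; these are the usual formulas, not taken from the question set itself):

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics for an M/M/1 queue."""
    assert lam < mu, "steady state requires arrival rate < service rate"
    rho = lam / mu                       # server utilization
    L   = lam / (mu - lam)               # average number in system
    Lq  = lam ** 2 / (mu * (mu - lam))   # average queue length
    W   = 1 / (mu - lam)                 # average time in system
    Wq  = lam / (mu * (mu - lam))        # average wait in queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}
```

For example, with 2 arrivals per hour and a service rate of 3 per hour, utilization is 2/3 and the average queue length is 4/3 customers.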
The document discusses intermediate code generation in compilers. It describes how compilers generate intermediate code representations after parsing source code. Intermediate representations allow separating the front-end and back-end of compilers, facilitating code optimization and retargeting compilers to different architectures. Common intermediate representations discussed include abstract syntax trees, postfix notation, static single assignment form, and three-address instructions. The document also provides examples of generating three-address code using syntax-directed translation.
This is an updated version of the opening talk I gave on February 4, 2013 at the Dagstuhl Seminar on Fault Prediction, Localization, and Repair (http://goo.gl/vc6Cl).
Abstract: Software debugging, which involves localizing, understanding, and removing the cause of a failure, is a notoriously difficult, extremely time-consuming, and human-intensive activity. For this reason, researchers have invested a great deal of effort in developing automated techniques and tools for supporting various debugging tasks. Although potentially useful, most of these techniques have yet to fully demonstrate their practical effectiveness. Moreover, many current debugging approaches suffer from some common limitations and rely on several strong assumptions about both the characteristics of the code being debugged and how developers behave when debugging such code. This talk provides an overview of the state of the art in the broader area of software debugging, discusses strengths and weaknesses of the main existing debugging techniques, presents a set of open challenges in this area, and sketches future research directions that may help address these challenges.
The second part of the talk discusses joint work with Chris Parnin presented at ISSTA 2011: C. Parnin and A. Orso, "Are Automated Debugging Techniques Actually Helping Programmers?", in Proceedings of the International Symposium on Software Testing and Analysis (ISSTA 2011), pp 199-209, http://doi.acm.org/10.1145/2001420.2001445
The document describes the HercuLeS HLS environment, a new high-level synthesis tool marketed by Ajax Compilers. It provides the following key details:
- HercuLeS is an extensible, high-level synthesis environment that allows whole-program hardware compilation from C code through pluggable analyses and optimizations.
- It features an automatic flow from an algorithmic C description to customized digital hardware using an intermediate representation called N-Address Code and construction of control/data flow graphs.
- The generated VHDL code and self-checking testbenches aim to be human-readable, vendor- and technology-independent.
The document discusses program dependence graphs (PDGs) and system dependence graphs (SDGs). A PDG represents a single program as a graph of nodes for statements connected by edges showing data and control dependencies. An SDG extends a PDG to represent relationships between procedures/processes in a program by adding nodes for calls, arguments, and results and edges connecting them. SDGs allow modeling interdependencies across multiple processes in a program.
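A backward slice over such a dependence graph can be sketched as a simple reachability computation over reversed edges; the three-statement program and edge encoding below are hypothetical, not from the document:

```python
# Hypothetical straight-line program:
#   1: x = input();  2: y = x + 1;  3: print(y)
pdg = {
    "data_deps":    [(1, 2), (2, 3)],  # 2 uses x from 1; 3 uses y from 2
    "control_deps": [],                # no branches, so no control edges
}

def backward_slice(pdg, criterion):
    """Nodes that may affect the criterion node: transitive closure
    over reversed data- and control-dependence edges."""
    edges = pdg["data_deps"] + pdg["control_deps"]
    sliced, work = set(), [criterion]
    while work:
        node = work.pop()
        if node not in sliced:
            sliced.add(node)
            work.extend(src for src, dst in edges if dst == node)
    return sliced
```

An SDG-based interprocedural slicer works on the same principle but must also traverse the call, parameter, and summary edges connecting procedure graphs.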
- The document summarizes techniques for slicing object-oriented programs. It discusses static and dynamic slicing, and limitations of previous approaches.
- It proposes a new intermediate representation called the Object-Oriented System Dependence Graph (OSDG) to more precisely capture dependencies in object-oriented programs. The OSDG explicitly represents data members of objects.
- An edge-marking algorithm is presented for efficiently performing dynamic slicing of object-oriented programs using the OSDG. This avoids recomputing the entire slice after each statement.
The document discusses intermediate code generation in compilers. It aims to generate a machine-independent intermediate form (IR) that is suitable for optimization and portability. The IR facilitates retargeting compilers to new machines and enables machine-independent code optimization. Common IR representations include abstract syntax trees, directed acyclic graphs, control flow graphs, postfix notation, and three-address code. Three-address code is a simple representation where instructions have at most three operands. It allows efficient code manipulation and optimization.
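A toy syntax-directed translation into three-address code might look like the sketch below; the expression-tree encoding and temporary-naming scheme are illustrative assumptions, not the document's own notation:

```python
import itertools

def gen_tac(node, code, temps=None):
    """Post-order walk of an expression tree, emitting one instruction
    per operator, each with at most three operands (result plus two
    arguments). Returns the name holding the node's value."""
    if temps is None:
        temps = itertools.count(1)
    if isinstance(node, str):            # leaf: a variable name
        return node
    op, left, right = node
    a = gen_tac(left, code, temps)
    b = gen_tac(right, code, temps)
    t = f"t{next(temps)}"                # fresh temporary
    code.append(f"{t} = {a} {op} {b}")
    return t

code = []
gen_tac(("+", "a", ("*", "b", "c")), code)   # translates a + b * c
```

After the call, `code` holds the instruction sequence for `a + b * c`, with the multiplication assigned to a temporary before the addition.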
Graal is a dynamic meta-circular research compiler for Java that is designed for extensibility and modularity. One of its main distinguishing elements is the handling of optimistic assumptions obtained via profiling feedback and the representation of deoptimization guards in the compiled code. Truffle is a self-optimizing runtime system on top of Graal that uses partial evaluation to derive compiled code from interpreters. Truffle is suitable for creating high-performance implementations of dynamic languages with only moderate effort. The presentation includes a description of the Truffle multi-language API and performance comparisons of current prototype Truffle language implementations (JavaScript, Ruby, and R) against industry implementations. Both Graal and Truffle are open source and serve as research platforms in the area of virtual machine and programming language implementation (http://openjdk.java.net/projects/graal/).
This document discusses using digital twins and machine learning to achieve adaptation in uncertain systems. It provides an overview of enterprise system project failures and introduces digital twins as a new approach for design and control. It then describes a conceptual model and approach for developing digital twins for uncertain systems using goals, domain modeling, and agent-based modeling. Finally, it discusses research challenges in using this approach, including validation, verification, modeling expertise, efficiency, explainability and unknown unknowns.
This presentation provides an overview of decision-making in organisations and introduces a new language called ESL that uses actors to create an executable model that can be analysed. A number of small examples of ESL are shown. The presentation concludes with a larger case study that addresses the recent demonetisation event in India.
Context-Aware Content-Centric Collaborative Workflow Management for Mobile De... (Tony Clark)
This document discusses context-aware content-centric collaborative workflow management for mobile devices. It presents a use case scenario of geographically distributed collaborators using mobile devices to share content like photos. It proposes adapting workflow technology to make workflows context-aware and integrate a context-aware content lifecycle. The research aims to provide a workflow language and distributed workflow management system prototypes on Android to design, define, manage and execute such context-aware content-centric workflows.
LEAP: A Language for Architecture Design, Simulation and Analysis (Tony Clark)
The document proposes LEAP, a language for architecture design, simulation and analysis. It describes key features of LEAP including components, ports, connectors, information models, behavior specification, operations, event processing and simulation capabilities. The language is intended to allow definition of system structure and behavior at a high level of abstraction through textual definitions and diagrams. Several use cases are presented to demonstrate LEAP features such as data modeling, state specification, communication between components and GUI representation.
A Common Basis for Modelling Service-Oriented and Event-Driven Architecture (Tony Clark)
This document discusses modeling service-oriented and event-driven architectures. It proposes a language-driven approach using a domain-specific modeling language for enterprise architecture that supports components, information models, invariants, business process specifications, and events. Current modeling techniques like ArchiMate, MODAF and TOGAF are described as complex, imprecise and lacking capabilities like simulation. A case study and simulation results are also mentioned but not described.
The document discusses model-driven development approaches for context-aware reactive applications (CARA). It begins with definitions of context-aware and reactive applications, then explores modelling techniques such as UML profiles and the widget calculus formalism for representing CARA models. Examples include a simple button model and the "Buddy" mobile phone contact application case study.
XMF is a meta-circular language based on ObjLisp for language-oriented programming. It allows defining grammars and syntax processing, modeling with classes and operations, and provides features like pattern matching, code generation templates, and integration with Java. XMF includes a match language construct for pattern matching that can desugar patterns into OCL operations.
This document discusses XMF and XModeler. XMF is an executable modeling framework that can be used for model-driven language engineering. It includes features like meta-modeling, reflection, and code generation templates. XModeler is an Eclipse-based modeling tool built using XMF. It allows users to visually construct and edit models defined in XMF. The document outlines the history and technologies behind XMF and XModeler, describes their main features and capabilities, and discusses tool requirements and the MVC architecture used in XModeler.
This document discusses using the ISWIM (I See What You Mean) constraint representation approach for model-driven testing. It provides an overview of domains that require constraint modeling like telecommunications and aerospace. It then describes ISWIM's visual constraint language and how snapshots can represent constraints. The document outlines how a testing architecture could use an ISWIM model to generate test cases, check invariants and pre/post conditions, and report results. The approach was implemented in a tool that tests a sample sales system model.
The document discusses using the ISWIM (I See What You Mean) approach to model constraints for testing purposes in a model-driven way. It describes using ISWIM constraints represented as snapshots to specify test cases, including static scenarios, invariants and method pre/post-conditions. It then presents an example of modeling a sales system using ISWIM constraints to generate test cases for validating the system.
The document discusses domain-specific languages (DSLs) and tool interoperability. It proposes the Knowledge Industry Survival Strategy (KISS) framework to address issues with DSLs like lack of interoperability between tools. KISS aims to automate software construction from domain models using open standards. It defines compliance levels for tool interoperability from no interoperability to fully interoperable behavior through common representations of models, services, and behavior. The document outlines how KISS would enable tools to work together through model persistence, transformation and execution.
The MCMS project aims to (1) detect patterns in institutional data that indicate student retention issues, (2) develop an approach for real-time student support and intervention to address retention, and (3) provide high-quality information to support institutional decision making. The process involves using data mining to analyze student data and identify relationships between performance and factors like year of study, library usage, and nationality. Interventions are then developed based on the findings, such as emails, reports, and feedback to students and faculty. Additionally, an Intelligent Decision Support SIG is being formed to address issues around decision support technologies, data integration and sharing, and engaging stakeholders to develop a roadmap for the sector.
1) The document describes a MOP-based DSL for testing Java programs using OCL specifications.
2) The DSL uses OCL pre and post conditions to specify tests in an @MSpec block, and @Test blocks to define test cases calling multiple @MSpecs.
3) The approach maps Java classes to XMF metaclasses, allowing OCL to access Java objects and invoke methods, handling different implementations like Java or EMF.
This document proposes an approach called Syntax Classes for defining domain-specific languages (DSLs) in an extensible way within Java. It allows new languages to be defined as classes that implement a standard interface for abstract syntax trees. This provides a modular and conservative approach to DSL definition that fully integrates new languages into the Java platform and tooling. Syntax classes provide a standardized mechanism for defining the syntax, static processing, and execution of DSL code through grammar definitions, AST manipulation, and language import capabilities. Examples of DSL constructs that could be defined include vector operations, data mapping, and entity definitions.
Reverse engineering and theory building v3 (Tony Clark)
The document discusses reverse engineering through theory building. It proposes that understanding complex systems involves building theories about the domain and system aspects. These theories consist of statements and relationships that explain different views of the system. The document also presents a case study of developing theories to understand issues in a library management system and modifying the theories based on deductions from reality.
The document discusses the concept of language factories, which aim to support the definition and construction of programming languages in a component-based way. This would allow for greater reuse of common language components, more agile language engineering, and language refactoring and analysis. Some key goals of language factories include supporting reuse of language syntax, semantics, and tools, as well as enabling flexible composition of language components to build new languages. Examples of reusable expression and measurement language components are provided.
The document discusses a domain specific language (DSL) for contextual design. It proposes a model-driven approach to language design to address issues with ambiguity and lack of precision in user-centered design processes and artifacts. Specifically, it focuses on developing a DSL to provide formal semantics and tool support for models used in contextual design, such as cultural models. The talk outlines key problems identified in a case study that motivated this work, and describes how a DSL following language engineering principles can formally define the syntax and semantics of contextual design models.
Formalizing homogeneous language embeddingsClarkTony
This document discusses formalizing homogeneous language embeddings using a mu-calculus model. It proposes a model for embedding one language into another using evaluation, loading, and unloading functions. It provides examples of embedding a language into itself using fixed point operators, and using let bindings to embed one language into another. Further work is needed on parsing, static analysis, logics for combining languages, and practical considerations.
- The document discusses model-driven testing using "filmstrips" and "snapshots" to generate test cases from models of a system. Filmstrips are sequences of snapshot patterns that represent system states.
- A case study is presented that uses extended use cases with filmstrips to test a hotel booking system. An approach to defining a domain-specific language for filmstrips is also described.
- A demo is presented of a language for testing Java applications using filmstrips to specify test cases and connect them to a Java application and test reporting.
1. Domain Specific Modelling can be viewed as a form of theory building, where a domain is understood by developing theories about it consisting of true statements.
2. Traditionally, programming is seen as implementing a theory, but modelling languages treat the domain weakly and lack support for truly representing theories.
3. The paper proposes enhancing modelling languages to more fully support domain theories, including defining syntax, semantics, and mappings between theories and implementations.
Fueling AI with Great Data with Airbyte WebinarZilliz
This talk will focus on how to collect data from a variety of sources, leveraging this data for RAG and other GenAI use cases, and finally charting your course to productionalization.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Infrastructure Challenges in Scaling RAG with Custom AI modelsZilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Speck&Tech
ABSTRACT: A prima vista, un mattoncino Lego e la backdoor XZ potrebbero avere in comune il fatto di essere entrambi blocchi di costruzione, o dipendenze di progetti creativi e software. La realtà è che un mattoncino Lego e il caso della backdoor XZ hanno molto di più di tutto ciò in comune.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Essentials of Automations: The Art of Triggers and Actions in FME
A General Model Based Slicing Framework
Tony Clark
School of Engineering and Information Sciences, Middlesex University
September 30, 2011
Contents
1 Program Slicing
2 Model Slicing
3 A General Slicing Framework
GSF for Programs
GSF for Models
4 Case Study
5 Conclusion
Definition and Use Cases
What? Slicing reduces the size of a program by removing the statements that do not contribute to the values of specified variables at a given program location.
Why? Debugging; comprehension; verification.
How? Static (backward, forward); dynamic; conditioned; union.
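The backward case can be sketched as a demand-driven pass over straight-line code. This is an illustrative Python sketch, not from the slides; the (text, defined-variable, used-variables) encoding and the helper name are assumptions, and it ignores control flow.

```python
# Backward slicing over straight-line code: walk the statements in
# reverse, keeping each one that defines a currently relevant variable
# and propagating demand to the variables it uses.

def backward_slice(stmts, criterion):
    """stmts: list of (text, defined_var, used_vars); criterion: set of
    variable names of interest at the end of the program."""
    relevant = set(criterion)
    keep = []
    for text, defined, uses in reversed(stmts):
        if defined in relevant:
            relevant.discard(defined)  # demand satisfied by this definition
            relevant.update(uses)      # which creates demand for its operands
            keep.append(text)
    return list(reversed(keep))

prog = [("x = 42", "x", set()),
        ("d = 23", "d", set())]
print(backward_slice(prog, {"d"}))  # ['d = 23']
```

On the example above, `x = 42` defines a variable the criterion never demands, so only `d = 23` survives.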
Backward Slicing: Example 1
Original    Criterion    Slice
x = 42
d = 23      d            d = 23
Backward Slicing: Example 2
Original          Criterion    Slice
sum=0
prod=1                         prod=1
read(i)                        read(i)
while(i<11) {                  while(i<11) {
  sum=sum+i
  prod=prod*i                    prod=prod*i
  i=i+1                          i=i+1
}                 prod         }
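Because a statement kept inside the loop can make further variables relevant, the loop body must be processed to a fixed point. A hedged Python sketch, with an assumed (text, defined-variable, used-variables) encoding and the loop guard's variables treated as always relevant:

```python
# Fixed-point slicing for a single loop: a statement in the body is kept
# only if the variable it defines is (transitively) needed by the
# criterion or by the loop guard.

def slice_loop(init, guard_uses, body, criterion):
    relevant = set(criterion) | set(guard_uses)   # the guard controls the loop
    changed = True
    while changed:                                # iterate to a fixed point
        changed = False
        for _, defined, uses in body:
            if defined in relevant and not uses <= relevant:
                relevant |= uses
                changed = True
    keep = lambda stmts: [t for t, d, _ in stmts if d in relevant]
    return keep(init), keep(body)

init = [("sum=0", "sum", set()), ("prod=1", "prod", set()),
        ("read(i)", "i", set())]
body = [("sum=sum+i", "sum", {"sum", "i"}),
        ("prod=prod*i", "prod", {"prod", "i"}),
        ("i=i+1", "i", {"i"})]
print(slice_loop(init, {"i"}, body, {"prod"}))
# (['prod=1', 'read(i)'], ['prod=prod*i', 'i=i+1'])
```

This reproduces the slide's slice: `sum` is dropped everywhere, while `i` stays because the guard and `prod=prod*i` depend on it.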
Dynamic Slicing
Original          Criterion                    Slice
sum=0
prod=1                                         prod=1
read(i)                                        read(i)
while(i<11) {
  sum=sum+i
  prod=prod*i
  i=i+1
}                 prod, pos(next(input))=11
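Dynamic slicing restricts the analysis to the statement instances that actually executed for the given input. A sketch under the same assumed (text, defined-variable, used-variables) encoding; seeding the guard variable `i` into the criterion is a crude stand-in for control dependence, and it is what keeps `read(i)` in the slice:

```python
# Dynamic slicing: only statements whose indices appear in the recorded
# execution trace are candidates for the slice.

def dynamic_slice(stmts, criterion, executed):
    relevant, keep = set(criterion), []
    for idx in range(len(stmts) - 1, -1, -1):
        text, defined, uses = stmts[idx]
        if idx in executed and defined in relevant:
            relevant.discard(defined)
            relevant.update(uses)
            keep.append(text)
    return list(reversed(keep))

prog = [("sum=0", "sum", set()), ("prod=1", "prod", set()),
        ("read(i)", "i", set()),
        ("sum=sum+i", "sum", {"sum", "i"}),
        ("prod=prod*i", "prod", {"prod", "i"}),
        ("i=i+1", "i", {"i"})]
# With next(input) = 11 the guard i < 11 fails immediately, so only the
# three initial statements execute.
print(dynamic_slice(prog, {"prod", "i"}, executed={0, 1, 2}))
# ['prod=1', 'read(i)']
```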
Conditioned Slicing
Original    Criterion              Slice
read(a)                            read(a)
if(a<0)
  a=-a
x=1/a       x, +ve(next(input))    x=1/a
Unions
Original      Criterion    Slice
a = 42                     a = 42
x = 2
b = 23 + a                 b = 23 + a
y = 3
c = b + 2     c            c = b + 2

a = 42
x = 2
b = 23 + a
y = 3                      y = 3
c = b + 2     y

a = 42                     a = 42
x = 2
b = 23 + a                 b = 23 + a
y = 3                      y = 3
c = b + 2     y and c      c = b + 2
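For this straight-line program the union of the two slices coincides with the slice for the combined criterion, which can be checked mechanically. A hedged sketch with an assumed (text, defined-variable, used-variables) encoding; in general, unioning slices is not guaranteed to be this well behaved:

```python
# Check that slice({c}) ∪ slice({y}) equals slice({c, y}) for this program.

def backward_slice(stmts, criterion):
    # Demand-driven backward pass over straight-line code.
    relevant, keep = set(criterion), []
    for text, defined, uses in reversed(stmts):
        if defined in relevant:
            relevant.discard(defined)
            relevant.update(uses)
            keep.append(text)
    return list(reversed(keep))

prog = [("a = 42", "a", set()), ("x = 2", "x", set()),
        ("b = 23 + a", "b", {"a"}), ("y = 3", "y", set()),
        ("c = b + 2", "c", {"b"})]

slice_c  = backward_slice(prog, {"c"})   # ['a = 42', 'b = 23 + a', 'c = b + 2']
slice_y  = backward_slice(prog, {"y"})   # ['y = 3']
slice_cy = backward_slice(prog, {"c", "y"})
assert set(slice_c) | set(slice_y) == set(slice_cy)
```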
Model Slicing
Definition and Use Cases
What? Informally: drop some parts of the model. There is no precise answer yet.
Why? Model management and comprehension; debugging transformations; debugging executable models; model verification.
How? State machines; class diagrams (syntax); technology-specific approaches.
Problem: there is no general definition of model slicing.
Differences to Programs
No starting point for execution.
Static and dynamic slices.
Multiple languages.
DSLs, profiles, meta-models.
Highly structured.
Weak semantics.
Links to model transformations; transformation combinations.
A General Slicing Framework
[Diagram: the framework as a commuting diagram. The semantic mapping φ takes syntax γ:Γ to semantics σ:Σ; a syntactic projection p and a semantic projection q relate these to a reduced γ′:Γ and σ′:Σ such that φ also maps γ′ to σ′; the criterion component x maps semantics onto ω:Σ.]
Γ Syntax domain.
Σ Semantic domain.
p, q Projections.
φ Semantics.
(x, ω) Slicing criterion.
Key Features
To set up the framework for a particular domain you will need:
syntax (Γ): the things you want to slice.
semantics (Σ, φ): the meaning of the syntax, defined as a semantic domain and a semantic mapping.
projections (p, q): relationships that hold between syntactic and semantic elements and that remove items that are not of interest.
criterion (x, ω): the things that remain invariant, defined in terms of the semantic domain.
products: if the syntax and semantic domain also define products for elements and projections then there is scope for slicing composition.
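These ingredients can be phrased as a small Python structure. Everything here (the names, the subset-based projections, the toy instantiation) is an illustrative assumption layered on the slides' mathematical definitions, not the framework's own notation.

```python
# A sketch of the framework: φ gives meaning to syntax, p and q are the
# syntactic and semantic projections, and (x, ω) is the slicing criterion.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SlicingFramework:
    phi: Callable[[Any], Any]        # semantics φ : Γ → Σ
    p: Callable[[Any, Any], bool]    # projection between syntax elements
    q: Callable[[Any, Any], bool]    # projection between semantic elements

    def is_slice(self, gamma, gamma_sliced, x, omega):
        """gamma_sliced slices gamma for criterion (x, ω) when both
        projections hold and the projected semantics stays within ω."""
        return (self.p(gamma, gamma_sliced)
                and self.q(self.phi(gamma), self.phi(gamma_sliced))
                and x(self.phi(gamma_sliced)) in omega)

# Toy instantiation: Γ = sequences of assignments, Σ = final environment,
# projections = subset relations (mirroring Example 1).
fw = SlicingFramework(
    phi=lambda g: dict(g),
    p=lambda g, g2: set(g2) <= set(g),
    q=lambda s, s2: set(s2.items()) <= set(s.items()))

full = (("x", 42), ("d", 23))
sliced = (("d", 23),)
print(fw.is_slice(full, sliced, x=lambda s: s.get("d"), omega={23}))  # True
```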
Semantic Domain for Programs
GSF: Program Example 1
Original    Criterion    Slice
x = 42
d = 23      d            d = 23
The semantics is σ = {[{x → 42; d → 23}]}. The slicing criterion requires that any trace in σ, when projected onto the second step, has a value for d, so ω = {{d → v} | v ∈ V}. Therefore the smallest set of traces for which there exists a structural projection is σ′ = {[{d → v}] | v ∈ V}.
GSF: Program Example 2
Original          Criterion    Slice
sum=0
prod=1                         prod=1
read(i)                        read(i)
while(i<11) {                  while(i<11) {
  sum=sum+i
  prod=prod*i                    prod=prod*i
  i=i+1                          i=i+1
}                 prod         }
The semantics is σ = {[{sum → 0}, {sum → 0; prod → 1}, {sum → 0; prod → 1; i → v}] + trace(v) | v ∈ 0…10}, where trace(0) = [] and trace(v) = trace(v − 1) + [{sum → Σ_{i ∈ 0…v} i; prod → v!; i → v}].
The criterion requires that ω = {prod → v | v ∈ V} and x maps each trace onto the last maplet for prod.
Modelling Language: Syntax
Modelling Language: Semantic Domain
Case Study: Use Cases
Case Study: Class Model
OCL
context Library::borrow(reader, book)
pre:  not hasFine(reader)
post: borrows->includes(b |
        b.reader = getReader(reader) and
        b.book = getBook(book) and
        b.date = date())
Case Study: State Machine
Case Study: Sequence Model
A Borrowing Step
(s1, ok = 0 - borrow(fred, b), f, s6)
where
  s1 = ({(0,*,Library)[limit=l], (1,*,Book)[name=n,id=1],
         (2,initialized,Reader)[name=fred,id=2]}, {})
  f  = [(s1, false = 0 - hasFine(fred), ..., s1),
        (s1, 2 = 0 - getReader(fred), ..., s1),
        (s1, 1 = 0 - getBook(n), ..., s1),
        (s1, d = 0 - getDate(), ..., s1),
        (s1, 3 = 0 - new(Borrows), [], s2),
        (s2, ok = 3 - setReader(2), ..., s4),
        ...]
  s2 = ({(0,*,Library)[limit=l], (1,*,Book)[name=n,id=1],
         (2,initialized,Reader)[name=fred,id=2],
         (3,*,Borrows)[]}, {})
Slicing Criterion
B = {(s1, v = l - borrow(r, b), F, s2) |
     F ∈ Filmstrip, s1 ∈ Snapshot, s2 ∈ Snapshot,
     v ∈ Value, l ∈ Id, r ∈ Str, b ∈ Str}

Use ω = B* as the basis for the slicing criterion.
x1 projects a filmstrip to the sequence of borrow steps it contains.
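A possible reading of x1 in code. The step encoding (pre-snapshot, call, nested filmstrip, post-snapshot) and the string test for borrow calls are assumptions made for illustration:

```python
def x1(filmstrip):
    """Project a filmstrip onto the borrow steps it contains, looking
    inside nested filmstrips as well."""
    steps = []
    for pre, call, nested, post in filmstrip:
        if "borrow(" in call:
            steps.append((pre, call, nested, post))
        steps.extend(x1(nested))
    return steps

# One borrow step whose nested filmstrip holds the auxiliary calls.
film = [("s1", "ok = 0 - borrow(fred, b)",
         [("s1", "false = 0 - hasFine(fred)", [], "s1"),
          ("s1", "2 = 0 - getReader(fred)", [], "s1")],
         "s6")]
print(len(x1(film)))  # 1: only the outer borrow step matches
```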
Sliced Model
Refining the Slice
{[(s1, v = l - borrow(r, b), [], s2)] |
 s1 ∈ Snapshot, s2 ∈ Snapshot,
 v ∈ Value, l ∈ Id, r ∈ Str, b ∈ Str}

The mapping part x2 is a refinement of x1 that maps all nested filmstrips to [].
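Under the same assumed step encoding of (pre-snapshot, call, nested filmstrip, post-snapshot), x2 can be sketched as the projection that keeps borrow steps but erases their nested filmstrips:

```python
def x2(filmstrip):
    """Keep only borrow steps and replace each nested filmstrip by [],
    so sliced steps carry no internal detail."""
    return [(pre, call, [], post)
            for pre, call, nested, post in filmstrip
            if "borrow(" in call]

film = [("s1", "ok = 0 - borrow(fred, b)",
         [("s1", "false = 0 - hasFine(fred)", [], "s1")],
         "s6")]
print(x2(film))
# [('s1', 'ok = 0 - borrow(fred, b)', [], 's6')]
```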
Refined Slice
Conclusion
Issues and Problems
Constructing slicing transformations: use of existing model
transformation languages? New languages?
Projection definition for OCL: does it require theorem proving?
Combination of slices? Consistency of slices?
Slicing theories?
Restricted forms of slicing: static, dynamic, conditioned?