In MDE, resolving pragmatic issues related to the management of models is key to success. Model comparison is one of the most challenging operations and plays a central role in a wide range of modelling activities, including model versioning, evolution, and even the collaborative and distributed specification of models. Over the last decade, several syntactic methods have been proposed to compare models, yet they struggle to achieve high levels of accuracy, especially when the semantics of the application domain has to be considered. Existing methods improve comparison precision at the price of high performance costs.
In this talk I presented a lightweight semantic comparison method, which relies on a new matching algorithm that considers ontological information encoded in the WordNet lexical database in addition to ordinary syntactic and structural correlations. The approach has been implemented as an extension of EMFCompare and evaluated to measure its precision and performance compared to existing approaches.
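As a rough illustration of the idea, the following Python sketch blends a plain string-similarity score with a lookup in a synonym table. The tiny hand-made dictionary stands in for WordNet, and the weighting is illustrative; this is not the actual EMFCompare extension.

```python
from difflib import SequenceMatcher

# Tiny hand-made synonym table standing in for WordNet lookups.
SYNONYMS = {
    "car": {"automobile", "vehicle"},
    "automobile": {"car", "vehicle"},
    "person": {"human", "individual"},
    "human": {"person", "individual"},
}

def syntactic_score(a, b):
    """Plain string similarity between two model element names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def semantic_score(a, b):
    """1.0 if the names are equal or listed as synonyms, else 0.0."""
    a, b = a.lower(), b.lower()
    return 1.0 if a == b or b in SYNONYMS.get(a, set()) else 0.0

def match_score(a, b, w_sem=0.5):
    """Weighted blend of syntactic and semantic similarity."""
    return (1 - w_sem) * syntactic_score(a, b) + w_sem * semantic_score(a, b)
```

With this blend, `match_score("Car", "Automobile")` scores well despite the names sharing almost no characters, which is exactly the case a purely syntactic matcher misses.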
Collaborative model driven software engineering: a Systematic Mapping Study - Davide Ruscio
Collaborative software engineering (CoSE) deals with methods, processes and tools for enhancing collaboration, communication, and co-ordination (3C) among team members. CoSE can be employed to conceive different kinds of artifacts during the development and evolution of software systems. For instance, when focusing on software design, multiple stakeholders with different expertise and responsibility collaborate on the system design.
Model-Driven Software Engineering (MDSE) provides suitable techniques and tools for specifying, manipulating, and analyzing modeling artifacts including metamodels, models, and transformations. A collaborative MDSE approach can be defined as a method or technique allowing multiple stakeholders to work on a set of shared modeling artifacts, and to be aware of each other's work. Even though Collaborative MDSE is gaining growing interest in both academia and practice, a holistic view on what Collaborative MDSE is, its components, and the related opportunities and challenges is still missing.
In this talk, I outlined the main insights of the systematic mapping study we carried out to identify and classify approaches, methods, and techniques that support collaborative MDSE. We present three complementary dimensions that we identified during the study as the peculiar aspects building up a collaborative MDSE approach: a model management infrastructure for managing the life cycle of the models, a set of collaboration means allowing the involved stakeholders to work on the modelling artifacts collaboratively, and a set of communication means allowing the involved stakeholders to be aware of each other's activities. We also identify the limitations and challenges of currently available collaborative MDSE approaches and discuss the implications for future investigation.
Despite recent efforts to achieve a high level of interoperability of Machine Learning (ML) experiments, contributing positively to the Reproducible Research context, we still run into problems caused by the existence of different ML platforms: each of these has a specific conceptualization or schema for representing data and metadata. This scenario leads to extra coding effort to achieve both the desired interoperability and a better provenance level, as well as a more automated environment for obtaining the generated results. Hence, when using ML libraries, it is a common task to re-design specific data models (schemata) and develop wrappers to manage the produced outputs. In this article, we discuss this gap, focusing on the question: ``What is the cleanest and lowest-impact solution to achieve both higher interoperability and provenance metadata levels in the Integrated Development Environment (IDE) context, and how can the inherent data querying task be facilitated?''. We introduce a novel and low-impact methodology specifically designed for code built in that context, combining semantic web concepts and reflection in order to minimize the gap for exporting ML metadata in a structured manner, allowing embedded code annotations that are, at run time, converted into one of the state-of-the-art ML schemas for the Semantic Web: the MEX Vocabulary.
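To illustrate the annotation-plus-reflection idea, here is a minimal Python sketch: a decorator reflects on a training function's signature to capture its hyper-parameters and result as structured run records that a later pass could serialize to an RDF vocabulary. The decorator name and record fields are assumptions for illustration, not the real MEX schema or API.

```python
import inspect
import time

def mex_annotated(experiment_runs):
    """Decorator that uses reflection on the wrapped function to record
    its hyper-parameters and result as a structured run description.
    (Field names here are illustrative, not the actual MEX terms.)"""
    def decorate(fn):
        def wrapper(*args, **kwargs):
            # Reflection: recover the full parameter binding, defaults included.
            bound = inspect.signature(fn).bind(*args, **kwargs)
            bound.apply_defaults()
            started = time.time()
            result = fn(*args, **kwargs)
            experiment_runs.append({
                "execution": fn.__name__,
                "hyperparameters": dict(bound.arguments),
                "measure": result,
                "duration_s": time.time() - started,
            })
            return result
        return wrapper
    return decorate

runs = []

@mex_annotated(runs)
def train(learning_rate=0.1, epochs=3):
    return 0.9  # stand-in for a real training loop returning an accuracy

train(learning_rate=0.05)
```

The point of the reflection step is that the experiment code itself stays untouched apart from the annotation, which is the "low impact" property the abstract argues for.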
Meta-modeling: concepts, tools and applications - Saïd Assar
Presentation made as a tutorial at the RCIS 2015 conference in Athens, Greece, on May 13, 2015.
Video recording available online on IEEE Education (http://www.computer.org/web/computingnow/education)
Early Analysis and Debugging of Linked Open Data Cubes - Enrico Daga
The release of the Data Cube Vocabulary specification introduces a standardised method for publishing statistics following the linked data principles. However, a statistical dataset can be very complex, so understanding how to get value out of it may be hard. Analysts need the ability to quickly grasp the content of the data in order to make use of it appropriately. In addition, while remodelling the data, data cube publishers need support to detect bugs and issues in the structure or content of the dataset. There are, however, several aspects of RDF, the Data Cube vocabulary, and linked data that can help with these issues, including the fact that they make the data "self-descriptive". Here, we attempt to answer the question "How feasible is it to use this feature to give an overview of the data in a way that would facilitate debugging and exploration of statistical linked open data?" We present a tool that, without prior knowledge of the data content, automatically builds interactive facets as diagrams out of a Data Cube representation, to be used for debugging and early analysis. We show how this tool can be used on a large, complex dataset and discuss the potential of this approach.
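The facet-building step can be sketched in a few lines of Python: given observations as dimension-to-value mappings (standing in for qb:Observation resources), count the values seen per dimension without any prior knowledge of the cube's structure. This is a toy sketch, not the actual tool.

```python
from collections import Counter, defaultdict

# Toy observations: each maps a dimension name to a value, the way a
# qb:Observation carries one value per qb:DimensionProperty.
observations = [
    {"refArea": "UK", "refPeriod": "2012", "sex": "F"},
    {"refArea": "UK", "refPeriod": "2013", "sex": "M"},
    {"refArea": "FR", "refPeriod": "2012", "sex": "F"},
]

def build_facets(observations):
    """Collect value counts per dimension; each facet can then be
    rendered as a diagram for early inspection of the cube."""
    facets = defaultdict(Counter)
    for observation in observations:
        for dimension, value in observation.items():
            facets[dimension][value] += 1
    return facets

facets = build_facets(observations)
```

Because the dimensions are discovered from the data itself, the same code works on any cube, which is the "self-descriptive" property the abstract relies on.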
This presentation is based on ``Statistical Modeling: The Two Cultures'' by Leo Breiman. It compares the data modeling culture (statistics) and the algorithmic modeling culture (machine learning).
Invited Talk: Early Detection of Research Topics - Angelo Salatino
Slides of my talk at Chan Zuckerberg Initiative (Meta)
Abstract:
The ability to promptly recognise new research trends is strategic for many stakeholders, including universities, institutional funding bodies, academic publishers and companies. While the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. Hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. In this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. This study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. These initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in Philosophy of Science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise.
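A toy Python sketch of the underlying signal: count yearly co-occurrences of candidate 'parent' topics and flag pairs whose pace of collaboration is accelerating. The corpus and the simple growth ratio are illustrative assumptions; the study's actual analysis is statistically richer.

```python
from collections import Counter
from itertools import combinations

# Toy corpus (illustrative): each paper is (year, set of topics).
papers = [
    (2000, {"ontologies", "agents"}),
    (2001, {"ontologies", "agents"}),
    (2001, {"ontologies", "agents", "web"}),
    (2002, {"ontologies", "agents"}),
    (2002, {"ontologies", "agents"}),
    (2002, {"ontologies", "agents"}),
]

def cooccurrence_by_year(papers):
    """Count, per year, how often each pair of topics appears together."""
    counts = {}
    for year, topics in papers:
        per_year = counts.setdefault(year, Counter())
        for pair in combinations(sorted(topics), 2):
            per_year[pair] += 1
    return counts

def pace_increase(counts, pair, year):
    """Year-over-year growth of a pair's co-occurrences; values well
    above 1 flag an accelerating collaboration between the two areas."""
    prev = counts.get(year - 1, Counter())[pair]
    curr = counts.get(year, Counter())[pair]
    return curr / prev if prev else float("inf")
```

A sustained `pace_increase` well above 1 for a pair of areas would mark them as potential 'parents' of an embryonic topic.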
Model-Driven Generation of MVC2 Web Applications: From Models to Code - IJEACS
Computer systems engineering is increasingly based on models. These models make it possible to describe the systems under development and their environment at different abstraction levels. These abstractions allow us to conceive applications independently of target platforms. For a long time, models only constituted an aid for human users, who manually developed the final code of computer applications. The Model-Driven Engineering (MDE) approach consists of programming at the level of models, represented as instances of a meta-model, and using them to generate the final code of applications. MDA (Model-Driven Architecture) is a typical model-driven engineering approach to application design. MDA is based on the UML standard to define models and on the meta-modeling environment (MOF) [1] for model-level programming and code generation. The code generation operation is the subject of this paper. Thus, in this work, we explain the code generation of an MVC2 Web application by using a M2M transformation (written in the ATL transformation language) followed by a M2T transformation. To implement the latter we use the Acceleo generator language. In the M2T transformation, we use the PSM model of Struts2, already generated by the M2M transformation, as the input model of the Acceleo generator. This transformation is validated by a case study. The main goal of this paper is to achieve end-to-end code generation.
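The M2T step can be illustrated with a minimal Python sketch: a plain dict stands in for the Struts2 PSM, and a small function plays the role of the template. The model contents and emitted class shape are hypothetical; the paper itself uses Acceleo templates.

```python
# A plain dict stands in for the Struts2 platform-specific model (PSM)
# produced by the M2M transformation.
PSM = {
    "action": "ListProducts",
    "fields": ["id", "name", "price"],
}

def generate_action_class(psm):
    """Emit Java-like source text from the model, mimicking what an
    Acceleo template does in the M2T transformation."""
    lines = [f"public class {psm['action']}Action {{"]
    for field in psm["fields"]:
        lines.append(f"    private String {field};")
    lines.append("}")
    return "\n".join(lines)

code = generate_action_class(PSM)
```

The key property is that the text generator reads only the model: regenerating after a model change keeps the code and the PSM consistent, which is what makes the end-to-end chain worthwhile.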
Introducing Parameter Sensitivity to Dynamic Code-Clone Analysis Methods - Kamiya Toshihiro
Presentation of:
[Position Paper] Toshihiro Kamiya, Introducing Parameter Sensitivity to Dynamic Code-Clone Analysis Methods, Proc. 10th International Workshop on Software Clones (IWSC 2016), pp. 19-20, 2016.
Notice: re-uploaded on March 16, 2016. (Fix "IWSC05's" -> "IWSC15's" on page 5)
Autonomic Wireless Sensor Networks: Approaches, Applications and Challenges - PET Computação
The main objective of this course is to introduce the audience to concepts of wireless sensor networks (WSN), communication protocols for WSN, and autonomic computing. In addition, applications focused on environmental monitoring, precision agriculture, security, and defense will also be presented.
Clonal Selection Algorithm Parallelization with MPJExpress - Ayi Purbasari
This paper exploits the parallelism potential of a Clonal Selection Algorithm (CSA) as a parallel metaheuristic algorithm, motivated by the lack of detailed explanation of the stages of designing parallel algorithms. To parallelise population-based algorithms, we need to identify and define their granularity at each stage, perform data or functional partitioning, and choose the communication model. Using a library for the message-passing model, such as MPJExpress, we define appropriate methods to implement process communication. This research yields pseudo-code for the two message-passing communication models using MPJExpress. We implemented this pseudo-code in the Java language with a dataset from the Travelling Salesman Problem (TSP). The experiments showed that the multicommunication model using the alltogether method achieved better performance than the master-slave model using the send-and-receive method.
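A serial Python sketch of the clonal selection loop helps locate the stages that a message-passing parallelization partitions. The parameters and toy fitness function are illustrative assumptions; the paper's implementation is in Java with MPJExpress.

```python
import random

def clonal_selection(fitness, pop_size=10, n_clones=5, generations=30, seed=0):
    """Serial sketch of a clonal selection loop. The clone-and-mutate
    stage is independent per antibody, which is the natural data
    partition for a message-passing parallelization (e.g. a master
    scattering sub-populations to workers, then gathering survivors)."""
    rng = random.Random(seed)
    population = [rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
    for _ in range(generations):
        next_population = []
        for antibody in population:            # partitionable stage
            clones = [antibody] + [antibody + rng.gauss(0.0, 0.5)
                                   for _ in range(n_clones)]
            next_population.append(max(clones, key=fitness))
        population = next_population           # gather / communication point
    return max(population, key=fitness)

# Maximize a toy affinity function peaking at x = 2.0.
best = clonal_selection(lambda x: -(x - 2.0) ** 2)
```

Each iteration of the inner loop touches only one antibody, so the population can be split across processes, with communication needed only where the comment marks the gather point.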
SiriusCon17 - A Graphical Approach to Modularization and Layering of Metamodels - Obeo
Modularity is a key aspect in software engineering since it comes with several benefits like reusability, extensibility and maintainability. Although it is a well-established concept, it has not received much attention when it comes to model-driven software development. Over time, metamodels tend to evolve and grow in complexity to encompass new aspects and features. If modularization steps are not taken and metamodels are extended intrusively, they can become difficult to maintain and to extend. With the increased complexity, the modularization can become even more challenging.
In this talk, we present a novel approach to assist the modeler in the task of modularization. Our approach addresses the problem from a graphical perspective. The proposed tool support displays a layered structure, where each layer has a certain level of abstraction, and allows the modeler to organize metamodels inside the layers. In this layered structure, the metamodels should only depend on metamodels with the same or a higher abstraction level and should not take part in cyclical dependencies. The tool provides the modeler with full control over the modularization process and full knowledge about the relations between the metamodels, thus facilitating the modularization task greatly.
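The two layering rules can be checked mechanically. The following Python sketch assumes a simple data model (a layer index per metamodel, 0 being the most abstract, and a dependency map); it is an illustration of the rules, not the tool itself.

```python
def layering_violations(layers, dependencies):
    """Check the two rules: a metamodel may only depend on metamodels
    in the same or a higher (more abstract) layer, and there must be
    no dependency cycles. `layers` maps metamodel -> layer index
    (0 = most abstract); `dependencies` maps metamodel -> targets."""
    violations = [
        (source, target)
        for source, targets in dependencies.items()
        for target in targets
        if layers[target] > layers[source]  # points at a less abstract layer
    ]

    visiting, done = set(), set()

    def in_cycle(node):
        # Depth-first search with a gray/black coloring of nodes.
        if node in done:
            return False
        if node in visiting:
            return True
        visiting.add(node)
        found = any(in_cycle(n) for n in dependencies.get(node, ()))
        visiting.discard(node)
        done.add(node)
        return found

    has_cycle = any(in_cycle(m) for m in layers)
    return violations, has_cycle
```

Running the checker on every edit gives the modeler immediate feedback on whether a proposed dependency respects the layered structure.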
Not Only Statements: The Role of Textual Analysis in Software Quality - Rocco Oliveto
My keynote at the 2012 Workshop on Mining Unstructured Data (co-located with the 10th Working Conference on Reverse Engineering - WCRE'12). Kingston, Ontario, Canada. October 17th, 2012.
Introduction to Model-Based Machine Learning - Daniel Emaasit
The field of machine learning has seen the development of thousands of learning algorithms. Typically, scientists choose from these algorithms to solve specific problems, their choices often being limited by their familiarity with them. In this classical/traditional framework of machine learning, scientists are constrained to making certain assumptions so as to use an existing algorithm. This is in contrast to the model-based machine learning approach, which seeks to create a bespoke solution tailored to each new problem.
Developing recommendation systems to support open source software developers ... - Davide Ruscio
Open-source software (OSS) forges contain rich data sources useful for supporting development activities. Several techniques and tools have been proposed to provide open source developers with innovative features, aiming to obtain improvements in development effort, cost savings, and developer productivity. In the context of the EU H2020 CROSSMINER project, different recommendation systems have been conceived to assist software programmers in different phases of the development process by providing them with various artifacts, such as third-party libraries, documentation about how to use the APIs being adopted, or relevant API function calls. To develop such recommendations, various technical choices have been made to overcome issues related to several aspects, including the lack of baselines, limited data availability, decisions about the performance measures, and evaluation approaches. This lecture provides an introduction to Recommendation Systems in Software Engineering (RSSE) and describes the challenges that have been encountered in the context of the CROSSMINER project. Specific attention is devoted to presenting the intricacies related to the development and evaluation techniques that have been employed to conceive and evaluate the CROSSMINER recommendation systems. The lessons that have been learned while working on the project are also discussed.
https://sites.google.com/gssi.it/csgssi/ph-d-program/se-ai-course-2021
FOCUS: A Recommender System for Mining API Function Calls and Usage Patterns - Davide Ruscio
Software developers interact with APIs on a daily basis and, therefore, often face the need to learn how to use new APIs suitable for their purposes. Previous work has shown that recommending usage patterns to developers facilitates the learning process. Current approaches to usage pattern recommendation, however, still suffer from high redundancy and poor run-time performance. In this paper, we reformulate the problem of usage pattern recommendation in terms of a collaborative filtering recommender system. We present a new tool, FOCUS, which mines open-source project repositories to recommend API method invocations and usage patterns by analyzing how APIs are used in projects similar to the current project. We evaluate FOCUS on a large number of Java projects extracted from GitHub and Maven Central and find that it outperforms the state-of-the-art approach PAM with regard to success rate, accuracy, and execution time. Results indicate the suitability of context-aware collaborative-filtering recommender systems to provide API usage patterns.
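The collaborative-filtering reformulation can be illustrated with a small Python sketch: a toy corpus maps projects to the API calls they invoke, and calls used by similar projects are scored for the active project. The corpus and call names are illustrative; FOCUS itself operates on a richer project/method/invocation matrix.

```python
def jaccard(a, b):
    """Set similarity between two projects' API usage."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend_api_calls(active_project, corpus, top_n=2):
    """Score API calls used by projects similar to the active one and
    return the top-ranked calls the active project does not use yet."""
    scores = {}
    for calls in corpus.values():
        similarity = jaccard(active_project, calls)
        for call in calls - active_project:
            scores[call] = scores.get(call, 0.0) + similarity
    return sorted(scores, key=lambda c: (-scores[c], c))[:top_n]

corpus = {
    "p1": {"List.add", "List.size", "Map.put"},
    "p2": {"List.add", "Map.put", "Map.get"},
    "p3": {"File.open"},
}
recommendations = recommend_api_calls({"List.add", "List.size"}, corpus)
```

Because the scores are weighted by project similarity, calls from unrelated projects (like `File.open` above) sink to the bottom of the ranking.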
CrossSim: exploiting mutual relationships to detect similar OSS projects - Davide Ruscio
Slides presented at SEAA 2018 http://dsd-seaa2018.fit.cvut.cz/seaa/ related to the paper http://reposto.di.univaq.it/aigon2/index.php/attachments/single/211
Software development is a knowledge-intensive activity, which requires mastering several languages, frameworks, and technology trends (among other aspects) under the pressure of ever-increasing arrays of external libraries and resources. Recommender systems are gaining high relevance in software engineering since they aim at providing developers with real-time recommendations, which can reduce the time spent on discovering and understanding reusable artifacts from software repositories, thus bringing productivity and quality gains.
In this presentation, we focus on the problem of mining open source software repositories to identify similar projects, which can be evaluated and eventually reused by developers. To this end, CROSSSIM is proposed as a novel approach to model open source software projects and related artifacts and to compute similarities among them. An evaluation on a dataset containing 580 GitHub projects shows that CROSSSIM outperforms an existing technique that has been proven to perform well in detecting similar GitHub repositories.
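As a rough sketch of similarity over mutual relationships, the following Python code compares projects by the cosine of their counted relations (dependencies, contributors). This flattens the graph that CROSSSIM actually works on, and the feature names are illustrative assumptions.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two counted relationship profiles."""
    dot = sum(u[k] * v[k] for k in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def most_similar(target, projects):
    """Return the other project whose profile is closest to the target's."""
    return max((p for p in projects if p != target),
               key=lambda p: cosine(projects[target], projects[p]))

# Toy profiles: each project is described by the artifacts it relates to.
projects = {
    "a": Counter({"dep:junit": 1, "dev:alice": 1}),
    "b": Counter({"dep:junit": 1, "dev:alice": 1, "dep:guava": 1}),
    "c": Counter({"dep:spring": 1, "dev:bob": 1}),
}
```

Projects sharing dependencies and contributors score high; a project with a disjoint profile scores zero, mirroring the intuition that mutual relationships carry the similarity signal.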
Consistency Recovery in Interactive Modeling - Davide Ruscio
MDE projects contain different kinds of artifacts such as models, metamodels, model transformations, and deltas. These artifacts are related in terms of relationships such as transformation or conformance. In this presentation, we capture the types of artifacts and the relevant relationships in a megamodeling-based manner for the purpose of monitoring and recovering project consistency in response to changes that users may apply to the project within an interactive modeling platform. The approach supports users in experimenting with MDE projects and receiving feedback upon changes on the grounds of a specific execution semantics for megamodels. The approach is validated within the web-based modeling platform MDEFORGE.
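The monitoring idea can be sketched as staleness propagation over the megamodel's relationships: when an artifact changes, every artifact derived from it must be re-checked. The artifact names below are illustrative, and this toy Python version stands in for the platform's actual execution semantics.

```python
def mark_stale(changed, relationships):
    """Transitively collect every artifact derived (directly or not)
    from the changed one; these are the artifacts whose consistency
    must be re-checked or recovered."""
    stale, frontier = set(), [changed]
    while frontier:
        artifact = frontier.pop()
        for derived in relationships.get(artifact, ()):
            if derived not in stale:
                stale.add(derived)
                frontier.append(derived)
    return stale

# Illustrative megamodel: a metamodel with two conforming models, one
# of which feeds a transformation output.
relationships = {"MM": ["M1", "M2"], "M1": ["T_out"]}
```

In an interactive platform, the staleness set is exactly the feedback shown to the user after a change: touching `MM` flags both models and the transformation output downstream of `M1`.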
Edelta: an approach for defining and applying reusable metamodel refactorings - Davide Ruscio
Metamodels can be considered one of the key artifacts of any model-based project. Similarly to other software artifacts, metamodels are expected to evolve during their life cycle; consequently, it is crucial to develop approaches and tools supporting the definition and reuse of metamodel refactorings in a disciplined way.
This paper proposes Edelta, a domain-specific language for specifying reusable libraries of metamodel refactorings. The language supports both atomic and complex changes and comes with an Eclipse-based IDE. The supporting environment allows the developer to apply refactorings both in a batch manner and in a step-by-step fashion, providing an immediate view of the evolving Ecore model before actually changing it.
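A toy Python sketch conveys the flavour of a reusable refactoring library: a metamodel is a dict of class name to attribute list, and refactorings are composable functions, one atomic and one complex. The data model and function names are illustrative, not Edelta's actual API.

```python
def rename_class(metamodel, old, new):
    """Atomic refactoring: rename a class, keeping its attributes."""
    metamodel = dict(metamodel)
    metamodel[new] = metamodel.pop(old)
    return metamodel

def extract_superclass(metamodel, name, subclasses, pulled_up):
    """Complex refactoring composed of smaller steps: introduce a new
    class and move the shared attributes out of each subclass."""
    metamodel = dict(metamodel)
    metamodel[name] = list(pulled_up)
    for sub in subclasses:
        metamodel[sub] = [a for a in metamodel[sub] if a not in pulled_up]
    return metamodel

mm = {"Car": ["wheels", "owner"], "Truck": ["wheels", "payload"]}
mm = extract_superclass(mm, "Vehicle", ["Car", "Truck"], ["wheels"])
mm = rename_class(mm, "Car", "Automobile")
```

Since each refactoring returns a new metamodel rather than mutating the input, intermediate states can be inspected before committing, loosely mirroring the step-by-step preview described above.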
Mining Correlations of ATL Transformation and Metamodel Metrics (Davide Ruscio)
Model transformations are considered to be the “heart” and “soul” of Model Driven Engineering and, as such, advanced techniques and tools are needed for supporting the development, quality assurance, maintenance, and evolution of model transformations. Even though model transformation developers now have powerful languages and tools for developing and testing model transformations, very few techniques are available to support the understanding of transformation characteristics. In this talk, a process to analyze model transformations is discussed with the aim of identifying to what extent their characteristics depend on the corresponding input and target metamodels. The process relies on a number of transformation and metamodel metrics that are calculated and properly correlated. The talk discusses the application of the approach on a corpus consisting of more than 90 ATL transformations and 70 corresponding metamodels.
The slides have been used to present the paper "Mining Correlations of ATL Transformation and Metamodel Metrics" at MISE2015 workshop at ICSE2015 (http://goo.gl/UJ9nWC)
MDEForge: an extensible Web-based modeling platform (Davide Ruscio)
Model-Driven Engineering (MDE) refers to the systematic use of models as first class entities throughout the software development life cycle. Over the last few years, many MDE technologies have been conceived for developing domain specific modeling languages, and for supporting a wide range of model management activities. However, existing modeling platforms neglect a number of important features whose absence reduces the acceptance and relevance of MDE in industrial contexts, e.g., the possibility to search and reuse already developed modeling artifacts, and to adopt model management tools as a service.
In this presentation we propose MDEForge, a novel extensible Web-based modeling platform specifically conceived to foster a community-based modeling repository, which underpins the development, analysis and reuse of modeling artifacts. Moreover, it enables the adoption of model management tools as software-as-a-service that can be used remotely without overwhelming users with intricate and error-prone installation and configuration procedures.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll know how to organize and improve your code review process.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Globus Connect Server Deep Dive - GlobusWorld 2024 (Globus)
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Quarkus Hidden and Forbidden Extensions (Max Andersen)
Quarkus has a vast extension ecosystem and is known for its supersonic, subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Graspan: A Big Data System for Big Code Analysis (Aftab Hussain)
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Atelier - Innover avec l’IA Générative et les graphes de connaissances (Neo4j)
Go beyond the AI hype and discover practical techniques for using AI responsibly across your organization's data. Explore how to use knowledge graphs to increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships and LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will walk you through setting up your own generative AI stack, providing practical, coded examples to get you started in minutes.
Developing Distributed High-performance Computing Capabilities of an Open Sci... (Globus)
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Globus Compute with IRI Workflows - GlobusWorld 2024 (Globus)
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
How to Position Your Globus Data Portal for Success: Ten Good Practices (Globus)
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Semantic-based Model Matching with EMFCompare
Davide Di Ruscio
davide.diruscio@univaq.it
@ddiruscio
Dipartimento di Ingegneria e Scienze dell’Informazione e Matematica, Università degli Studi dell’Aquila
Models and Evolution Workshop at MoDELS 2016 – October 2, 2016 – Saint-Malo, France
Joint work with:
- Alfonso Pierantonio, University of L’Aquila (Italy)
- Ludovico Iovino, Gran Sasso Science Institute (Italy)
- Juri Di Rocco, University of L’Aquila (Italy)
- Lorenzo Addazi, Mälardalen University (Sweden)
- Antonio Cicchetti, Mälardalen University (Sweden)
Introduction
Model comparison is one of the most challenging operations in MDE. It underpins a wide range of modelling activities, e.g., model versioning, evolution, collaborative modeling. Calculating model differences relies on the model matching problem, which can be reduced to the problem of finding correspondences between two given graphs (a graph matching problem, NP-hard in general).
[Figure: two versions of a simple graph model. Version 1 contains nodes a, b, c, d, e, f; Version 2 contains nodes a, c, d, e, k, l, m. Matching first establishes correspondences between the two versions, then the differences are calculated:]
> Rename node b as k
> Rename node f as l
> Add node m
> Add edge from k to m
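Once correspondences are established, the difference calculation can be sketched as follows. This is an illustrative toy over plain strings and sets (the helper names are invented for the example), not the EMFCompare implementation:

```python
def diff(nodes1, edges1, nodes2, edges2, matches):
    """matches maps each Version 1 node to its Version 2 counterpart."""
    ops = []
    # A correspondence with a different name is a rename.
    for old, new in matches.items():
        if old != new:
            ops.append(f"Rename node {old} as {new}")
    # Version 1 nodes with no correspondence were deleted.
    for n in sorted(set(nodes1) - set(matches)):
        ops.append(f"Delete node {n}")
    # Version 2 nodes that are the target of no correspondence were added.
    matched_targets = set(matches.values())
    for n in sorted(set(nodes2) - matched_targets):
        ops.append(f"Add node {n}")
    # Map Version 1 edges through the correspondences before comparing.
    mapped = {(matches.get(a, a), matches.get(b, b)) for a, b in edges1}
    for a, b in sorted(set(edges2) - mapped):
        ops.append(f"Add edge from {a} to {b}")
    return ops
```

On the example above, with matches {a→a, b→k, c→c, d→d, e→e, f→l}, this yields exactly the four operations listed.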
Model matching
- Static identity-based matching: each model element has a persistent unique identifier that is assigned to it upon creation.
- Signature-based matching: the identifier of each model element is dynamically calculated by combining the values of its features.
- Similarity-based matching: models are typed attribute graphs, and matching elements are identified by considering the aggregated similarity of their features.
- Language-specific matching: matching algorithms are tailored to a particular modelling language.
Similarity-based matching
Extensible: static identity-based or signature-based matching can also be added by defining custom generator functions.
The default match engine
The Levenshtein distance algorithm is applied to the string representation of the elements. For optimisation purposes, the models are compared by considering only elements selected within a proper search window:

...
foreach (elM1 : Model1.getElements())
  foreach (elM2 : elM1.getWindowElements())
    result[elM1][elM2] = calculateSimilarity(elM1, elM2)
return createMatches(result)
...
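The windowed similarity loop can be sketched in Python. The window handling, threshold, and function names are simplified assumptions for illustration, not the actual EMFCompare code:

```python
def levenshtein(s, t):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

def similarity(a, b):
    """Normalise edit distance into [0, 1]; 1.0 means identical strings."""
    m = max(len(a), len(b)) or 1
    return 1 - levenshtein(a, b) / m

def match(elements1, elements2, window=3, threshold=0.7):
    """Compare each element only against nearby elements (the search window)."""
    matches = []
    for i, e1 in enumerate(elements1):
        for e2 in elements2[max(0, i - window): i + window + 1]:
            if similarity(e1, e2) >= threshold:
                matches.append((e1, e2))
    return matches
```

For instance, "Thesis" and "Theses" differ by one substitution, so their similarity is 1 - 1/6 ≈ 0.83 and they match under the 0.7 threshold.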
A meta-model evolution scenario
A University theses management metamodel evolves across successive versions through refactorings such as extracting a super class and renaming attributes. [Figure: successive versions of the metamodel, shown slide by slide.]
Issues with syntactic matching:
- Contextual issues: limited consideration of the features characterising the elements surrounding/containing the compared one.
- Linguistic issues: lack of semantic evaluation of the features characterising the compared elements.
  - False negatives, e.g., renaming a given class using a syntactically different name.
  - False positives, e.g., renaming a given class using a semantically different term which nevertheless presents a strong syntactical similarity.
Proposed approach
Semantic Match Engine: use of the WordNet lexical dictionary as ontological source.
WordNet in a nutshell
A lexical database for the English language. English words are grouped into sets of synonyms (synsets). Each synset includes:
- a generic definition joining the contained words
- semantic relationships connecting it to other synsets
http://www.cs.princeton.edu/courses/archive/fall16/cos226/assignments/wordnet.html
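As a rough illustration of how a path-based semantic distance over synset-like data can work, the following sketch uses a tiny hand-built hypernym table as a stand-in for the WordNet database; the table contents and function names are invented for the example:

```python
# Toy "is-a" hierarchy: word -> its hypernym (parent concept).
HYPERNYM = {
    "professor": "teacher", "teacher": "educator", "educator": "person",
    "student": "person", "person": "entity",
    "thesis": "document", "document": "entity",
}

def ancestors(word):
    """Chain of hypernyms from the word up to the root, with their depths."""
    chain, depth = {}, 0
    while word:
        chain[word] = depth
        word, depth = HYPERNYM.get(word), depth + 1
    return chain

def path_similarity(w1, w2):
    """1 / (1 + shortest path through the closest common ancestor)."""
    a1, a2 = ancestors(w1), ancestors(w2)
    common = set(a1) & set(a2)
    if not common:
        return 0.0
    dist = min(a1[c] + a2[c] for c in common)
    return 1 / (1 + dist)
```

Here "professor" and "student" meet at "person" (a path of length 4), giving similarity 0.2, while identical words score 1.0; a syntactic distance would see no such relation at all.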
The proposed semantic model matching

function createMatches(Comparison comparison,
                       List leftEObjects, List rightEObjects) {
  SemanticMatch root = createSemanticMatch(null, null);
  exploreMatches(root, leftEObjects, rightEObjects);  // Exploration
  evaluateMatches(root);                              // Evaluation
  filterMatches(root, comparison);                    // Filtering
}
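The three phases can be sketched as a simplified pipeline. Data shapes and names are assumptions for illustration; the real engine builds a labelled graph of SemanticMatch nodes over EMF objects rather than flat lists:

```python
def explore_matches(left, right):
    """Exploration: enumerate candidate (left, right) element pairs."""
    return [(l, r) for l in left for r in right]

def evaluate_matches(candidates, distance):
    """Evaluation: annotate each candidate with its semantic distance."""
    return [(l, r, distance(l, r)) for l, r in candidates]

def filter_matches(evaluated, threshold):
    """Filtering: keep only pairs whose distance is within the threshold."""
    return [(l, r) for l, r, d in evaluated if d <= threshold]

def create_matches(left, right, distance, threshold=0.5):
    candidates = explore_matches(left, right)
    return filter_matches(evaluate_matches(candidates, distance), threshold)
```

Any distance function can be plugged in, e.g. one backed by the WordNet-based semantic distance.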
Exploration: a labelled graph representation of the compared models is produced; each node represents a semantic match, and each incoming or outgoing labelled edge represents a connection with its parent or child elements.
Evaluation: each SemanticMatch node is annotated with the semantic distance value between the encapsulated elements.
Filtering: the set of SemanticMatch elements is filtered with respect to a predefined threshold.
Experiments
The Model Exchange Benchmark:
- 5 structural modelling languages
- All the possible pairs of metamodels are given as input to:
  - Semantic EMFCompare
  - EMFCompare
  - GAMMA(*)
  - Coma++, FOAM, Crosi, Alignment API, AMW
(*) M. Kessentini, A. Ouni, P. Langer, M. Wimmer, and S. Bechikh, “Search-based metamodel matching with structural and syntactic measures,” J. Syst. Softw., vol. 97, no. C, pp. 1–14, Oct. 2014.
Measures:
- Precision: the percentage of correctly matched elements with respect to all the proposed matches.
- Recall: the percentage of correctly matched elements with respect to all the expected matches.
- F-measure: combines precision and recall to get an equally weighted average value of the two measures.
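A minimal worked example of the three measures over sets of matches (assuming non-empty proposed and expected sets):

```python
def measures(proposed, expected):
    """Precision, recall, and F-measure for a set of proposed matches."""
    correct = proposed & expected
    precision = len(correct) / len(proposed)   # correct / all proposed
    recall = len(correct) / len(expected)      # correct / all expected
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure
```

For example, if two of three proposed matches are among the three expected ones, precision, recall, and F-measure are all 2/3.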
GAMMA provides the best results with respect to precision, recall, and F-measure. Note that GAMMA uses SBSE (search-based software engineering) approaches and requires initialization with a set of initial solutions (a knowledge base).
Semantic EMFCompare:
- produces more matches than expected
- in some cases has lower precision than EMFCompare
- only in one case has a lower F-measure than EMFCompare
Lessons learnt
- Extending EMFCompare with semantic aspects can be done in a lightweight manner.
- Increased matching power can come at the price of increased imprecision (more false positives and false negatives).
- The selection of the appropriate dictionary (depending on the artifacts to be compared) can make the difference: comparing metamodels is semantically different from comparing models of specific domains.
- Performing experiments can be an issue due to the lack of models to be used as test cases; existing model mutation approaches should be extended to implement “semantics-aware” mutations.
Conclusion and Future Work
- Model comparison is a very complex task; it underpins the management of a wide range of (meta-)model (co-)evolution scenarios.
- An extension of the EMFCompare tool has been proposed to enable “semantics-aware” matches.
- Further experiments will be performed by considering the application of different dictionaries depending on the kinds of artifacts to be matched.