A presentation of the ongoing work on interoperability within the toolchain. A new domain, OSLC KM, is introduced; some experiments in reusing models are presented; and several videos illustrate user stories.
This document discusses digitalizing the engineering lifecycle through task automation and reuse. It proposes a knowledge-centric systems engineering approach using a knowledge management strategy called "Sailing the V". This involves defining a controlled vocabulary and formalizing relationships between terms, textual patterns, and rules to infer information and link system artifacts like requirements, models, and simulations. The goal is to automate tasks, enable reuse, ensure quality, and provide a more integrated environment for engineers. Future work will focus on data integration, semantics, artificial intelligence, and enhancing engineering methods.
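The vocabulary-plus-patterns idea in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not code from the talk: the vocabulary, the "shall" pattern, and the element names are all invented to show how a controlled vocabulary and a textual pattern can link a requirement to model artifacts.

```python
import re

# Hypothetical controlled vocabulary (illustrative terms, not from the talk).
VOCABULARY = {"pump", "valve", "controller"}

# One textual pattern of the form "The <term> shall <capability>."
REQ_PATTERN = re.compile(r"^The (?P<term>\w+) shall (?P<capability>.+)\.$")

def link_requirement(req_text, model_elements):
    """Return (term, capability, linked elements) when the requirement
    matches the pattern and uses a term from the controlled vocabulary."""
    m = REQ_PATTERN.match(req_text)
    if not m or m.group("term").lower() not in VOCABULARY:
        return None
    term = m.group("term").lower()
    # Link the requirement to every model element that mentions the term.
    linked = [e for e in model_elements if term in e.lower()]
    return term, m.group("capability"), linked

result = link_requirement(
    "The pump shall deliver 5 l/s.",
    ["PumpAssembly", "ValveBlock", "MainController"],
)
print(result)  # ('pump', 'deliver 5 l/s', ['PumpAssembly'])
```

A real implementation would of course use richer linguistic patterns and a formal ontology rather than a regex and a set, but the inference step — pattern match, vocabulary lookup, artifact link — is the same shape.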
This is the presentation of the paper about the integration of artificial intelligence and the systems engineering lifecycle.
You can find more information in the following link: https://event.conflr.com/IS2019/sessiondetail_395325
This presentation is a keynote at the AI4SE International Workshop exploring the challenges and opportunities of bringing Systems Engineering to the development of AI/ML functions for safety-critical systems.
This document discusses engineering digitalization through task automation and reuse in the development lifecycle. It proposes a knowledge-centric approach to systems engineering using a knowledge management strategy. This includes defining a controlled vocabulary, relating terms through relationships and clusters, representing textual patterns for matching, and combining rules and tasks to infer information. This knowledge graph could then enable capabilities like requirements extraction, model population, quality checking, and reuse of system artifacts. The approach aims to automate tasks, link different artifact types, and leverage semantics and AI/ML to better understand and exploit knowledge embedded in systems artifacts.
Presentation adapted from the ProSTEP symposium to present the concept and advances in the digitalization of the lifecycle, with a focus on task automation and reuse.
The objective of this presentation is to outline some challenges and opportunities in the integration of Systems Engineering and the Artificial Intelligence/Machine Learning model lifecycle.
1) The document discusses how systems engineering methods can be integrated with the AI/ML lifecycle to engineer intelligent systems. It identifies 10 major challenges for this integration, including describing AI/ML model needs and capabilities, integrating AI/ML into specification, verification, and other systems engineering processes.
2) The document proposes concepts for tackling each challenge, such as using standards to describe AI/ML model lifecycles and digital twin environments for verification. It also discusses opportunities like reusing existing AI/ML models and the need to educate new professionals.
3) Key points are that research is active in integrating systems engineering and AI/ML to build safer, more cost-effective cyber-physical systems.
Nowadays, the digital transformation is affecting every task, activity, and process carried out in organizations, and even our daily-life activities. The education sector, considered one of the leading sectors in terms of innovation through technology, is also facing a transformation in which digital technology is rapidly evolving. In this context, the Massive Open Online Courses (MOOC) phenomenon has gained a lot of attention due to its capability of reaching thousands or even millions of students from all over the world. However, the activities related to MOOCs are not yet being evaluated or quantified as a driver of change. Since the creation of MOOCs requires support and institutional commitment to deliver high-quality courses on technology-based platforms, it seems reasonable to measure the degree of innovation in education through the definition of an indicator that captures the commitment of an institution or a person to this new environment of digital education. That is why, in this paper, the authors present the definition of a novel indicator and several potential metrics to represent and quantify the degree of innovation in education in universities. Furthermore, a case study is conducted to evaluate 3 different metrics on 36 European universities in the context of the edX and Coursera platforms.
This document discusses software engineering challenges in building AI-based complex systems. It notes that while AI is providing promising results on a functional level, building full systems using AI introduces new technical and non-technical questions. These include how to interpret data, ensure systems are dependable, and address ethical concerns. From a technical perspective, challenges involve efficiently collecting, storing, processing, and analyzing data, as well as building and testing AI-based systems. The document argues that addressing these challenges will require interdisciplinary work between software engineering, data science, and domain knowledge.
The document discusses several challenges in developing AI systems using machine learning. It describes different types of machine learning including supervised, unsupervised, and reinforcement learning. It then discusses some common failures of AI systems in 2018 related to healthcare, recruiting tools, and predictions. The rest of the document focuses on software engineering challenges for AI systems, such as managing complex data dependencies, unintended feedback loops in the systems, limited transparency in deep learning models, and the need for continuous monitoring as the external world changes.
This document discusses new challenges in developing and managing AI-based systems over their lifecycles. It notes that AI engineering requires new approaches to system and software architecture to handle large amounts of data processing and computational requirements. Effective development requires addressing technical issues like heterogeneous platforms and non-technical issues like data ownership and interpretability. Proper software engineering practices are needed to manage challenges across the development cycle from data collection and model training to deployment and system evolution.
BuildingSMART Standards Summit 2015 - Technical Room - Linked Data for Constr..., by Pieter Pauwels
Presentation at the Technical Room of the BuildingSMART Standards Summit October 2015 in Singapore. The presentation was done together with Jakob Beetz, TUEindhoven, with strong support by Walter Terkaj, ITIA-CNR, and Kris McGlinn, TCDublin. It is part of the SWIMing H2020 project, run by Kris McGlinn (http://swiming-project.eu/).
BabelNet Workshop 2016 - Making sense of building data and building product data, by Pieter Pauwels
Presentation at the 2016 BabelNet Workshop on 2 March 2016 in Luxembourg (http://babelnet.org/lux): "Making sense of building data and building product data". Together with Thomas Krijnen (TUEindhoven) and Jakob Beetz (TUEindhoven). The paper is available at http://babelnet.org/lux/index.html#program_section.
Enabling the digital thread using open OSLC standards, by Axel Reichwein
This document discusses enabling the digital thread using open OSLC standards. It summarizes that simulation data management is complex due to the multidisciplinary nature of engineering and different data sources having different APIs, preventing connectivity. The digital thread aims to connect all data through a product's lifecycle for increased efficiency. OSLC proposes open standards for common APIs and URLs to identify and connect data across systems. This would allow applications to be decoupled from data sources and enable new applications to reuse existing universal data assets. Universal data management is needed for the digital thread instead of the current discipline-specific approaches.
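The core OSLC idea described above — stable URLs identifying artifacts, with links that any tool can traverse — can be sketched minimally. The URLs and link-type names below are invented for illustration and are not actual OSLC vocabulary terms:

```python
# Illustrative sketch, not real OSLC: every artifact gets a stable URL, and a
# digital-thread link is just a (source URL, relation, target URL) triple, so
# a consumer can traverse the thread without knowing each tool's native API.
links = [
    ("https://req.example.com/REQ-12", "validatedBy",
     "https://sim.example.com/SIM-7"),
    ("https://req.example.com/REQ-12", "satisfiedBy",
     "https://cad.example.com/PART-88"),
]

def trace(artifact_url):
    """Follow outgoing links from one artifact across tool boundaries."""
    return [(rel, target) for src, rel, target in links if src == artifact_url]

for rel, target in trace("https://req.example.com/REQ-12"):
    print(rel, "->", target)
```

In an actual OSLC deployment these triples would live as RDF behind each tool's REST interface rather than in a Python list, but the decoupling argument from the abstract — applications depend on URLs and link semantics, not on source-specific APIs — is exactly this structure.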
This document summarizes research into software engineering patterns for designing machine learning systems. A survey found that ML developers have little knowledge of applicable architecture and design patterns. A literature review identified 19 scholarly papers and 19 gray documents discussing practices. The research aims to classify ML patterns according to the typical ML pipeline process and software development lifecycle. It identifies 12 architecture patterns, 13 design patterns, and 8 anti-patterns for ML systems. Future work includes documenting the patterns fully and analyzing their impact on ML system quality attributes.
CD4ML and the challenges of testing and quality in ML systems, by Seldon
Speaker: Danilo Sato, principal consultant at ThoughtWorks.
Bio: Danilo Sato (@dtsato) is a principal consultant at ThoughtWorks with experience in many areas of architecture and engineering: software, data, infrastructure, and machine learning. He is the author of "DevOps in Practice: Reliable and Automated Software Delivery", a member of ThoughtWorks Technology Advisory Board, and ThoughtWorks Office of the CTO.
Title: CD4ML and the challenges of testing and quality in ML systems
Abstract: Continuous Delivery for Machine Learning (CD4ML) deals with the challenges of applying Continuous Delivery principles to ML systems to make the end-to-end process of developing and deploying them more repeatable and reliable. These systems are generally more complex than traditional software applications, and ML models are non-deterministic and hard to explain. In this talk we will discuss the challenges of testing and quality in ML systems, and share some practices for applying different types of tests to help overcome those issues.
www.devopsinpractice.com
www.devopsnapratica.com.br
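One of the testing challenges the abstract mentions — ML models being non-deterministic — is commonly handled by asserting a quality threshold on held-out data instead of an exact output. The sketch below is a hypothetical illustration of that style of test, not an example from the talk; the model and data are stand-ins:

```python
# Hypothetical CD4ML-style quality-gate test: since model outputs are not
# exactly reproducible, the test asserts a metric threshold, not equality.
def evaluate_accuracy(model, examples):
    """Fraction of held-out examples the model classifies correctly."""
    correct = sum(1 for features, label in examples if model(features) == label)
    return correct / len(examples)

# Stand-in "model": classify as 1 when the single feature exceeds 0.5.
model = lambda features: 1 if features[0] > 0.5 else 0
holdout = [((0.9,), 1), ((0.2,), 0), ((0.7,), 1), ((0.4,), 0), ((0.6,), 0)]

accuracy = evaluate_accuracy(model, holdout)
# The quality gate: fail the pipeline if the model regresses below 75%.
assert accuracy >= 0.75, "model regressed below the quality gate"
print(round(accuracy, 2))  # 0.8
```

In a CD4ML pipeline this kind of check runs automatically on every retraining, alongside conventional tests for the surrounding data and serving code.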
BuildingSMART Standards Summit 2015 - JBeetz - Product Room - Use Cases for i..., by Pieter Pauwels
Presentation held by Jakob Beetz at the BuildingSMART Standards Summit 2015 in Singapore. The presentation was made in the Product Room and aimed at investigating and discussing the relation between the Linked Data Working Group (LDWG) and the buildingSMART Data Dictionary (bSDD) Working Group.
This document introduces software architecture and provides examples using GitHub. It defines software architecture as the fundamental concepts or properties of a system embodied in its elements, relationships, and design principles. The document outlines Philippe Kruchten's 4+1 view model for describing software architecture, including logical, process, physical and development views in addition to scenarios. Diagrams for GitHub's class, component, sequence and deployment architectures are presented as examples.
[Capella Days 2020] Keynote: MBSE with Arcadia and Capella - Reconciling with..., by Obeo
by Juan Navas (Thales)
Complex systems engineering programs must deal not only with the inherent complexity of the systems they develop, but must also be able to adapt very quickly to change.
This requires adapting existing, well-proven engineering practices to support shorter time-to-market, more frequent variations in operational contexts and usages, and more complex engineering organizations. In this talk, Juan Navas will present the latest methodological progress on Arcadia and Capella that addresses these challenges.
TensorFlow London 18: Dr Alastair Moore, Towards the use of Graphical Models ..., by Seldon
This document discusses using graphical models and machine learning techniques to improve management processes for 21st century businesses. It argues that current management practices have not evolved significantly and are poorly integrated with digital systems. The document proposes designing management tools and business models based on principles of continuous learning and integration between human and machine systems. It presents examples like the machine learning canvas and Wardley mapping to help conceptualize business problems and solutions in a way that facilitates machine learning. The goal is to develop tools that allow businesses to constantly adapt and improve using data and predictive analytics.
ATMOSPHERE was invited to speak at the Think Milano event on 6th June, from 14.30 to 17.30, joining a panel discussion called "L'infrastruttura cloud ready protagonista del futuro" ("Cloud-ready infrastructure as protagonist of the future") on how cloud infrastructures are important for different market sectors.
The document describes lessons learned from developing protocols to enable data sharing in a virtual enterprise. It discusses protocols selected by the NIIIP Consortium that build on STEP to allow engineering organizations to share technical product data over the Internet. The protocols included SDAI Java/IDL bindings, EXPRESS-X for data mapping, and STEP Services for data integration. These were used to implement a Virtual Enterprise Product Data Repository (VEPR) demonstrated in the last of three cycles to integrate product data from multiple sources. Key lessons included the need for standards to contribute and access controlled data in a VEPR as well as for applications to operate on data from different repositories.
The document discusses IFC2RDF tools that can convert Industry Foundation Classes (IFC) files into Resource Description Framework (RDF) triples. It describes the IFC2RDF converter, which includes conversion code, executable JAR files, documentation, and a REST interface. The converter allows generating RDF graphs and linked data from IFC files. It also discusses developing custom rules, inference engines, and SPARQL queries to generate model view definitions from the converted IFC data.
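The conversion the abstract describes — IFC entity instances becoming RDF subjects with typed attributes as triples — can be sketched with plain data structures. This is a simplified illustration, not the actual IFC2RDF implementation; the entity names follow IFC conventions but the mapping and URIs are invented:

```python
# Simplified, illustrative IFC-to-triples sketch (not the real IFC2RDF tool).
# Each IFC instance id maps to (entity type, attribute dict).
ifc_entities = {
    "#101": ("IfcWall", {"Name": "ExteriorWall", "Height": 3.0}),
    "#102": ("IfcDoor", {"Name": "FrontDoor", "Height": 2.1}),
}

BASE = "https://example.org/model/"  # hypothetical base URI

def to_triples(entities):
    """Turn each instance into a subject URI plus (s, p, o) triples."""
    triples = []
    for inst_id, (ifc_type, attrs) in entities.items():
        subject = BASE + inst_id.lstrip("#")
        triples.append((subject, "rdf:type", "ifc:" + ifc_type))
        for attr, value in attrs.items():
            triples.append((subject, "ifc:" + attr, value))
    return triples

graph = to_triples(ifc_entities)
# A SPARQL-style query "all walls" reduces to a pattern match on the graph:
walls = [s for s, p, o in graph if p == "rdf:type" and o == "ifc:IfcWall"]
print(walls)  # ['https://example.org/model/101']
```

The real converter works over the full IFC EXPRESS schema and emits standard RDF serializations, but the query at the end shows why the conversion pays off: once the model is triples, model view definitions become graph patterns rather than bespoke parsing code.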
MPLS/SDN 2013 Intercloud Standardization and Testbeds - Sill, by Alan Sill
This talk gives an overview of several multi-SDO and cross-SDO activities to promote and spur innovation in cloud computing. The focus is on API development and standardization, including testbeds, test use cases, and collaborative activities between organizations to create and carry out development and testing in this area. It covers work being pursued through the Cloud and Autonomic Computing Center at Texas Tech University, which is part of the US National Science Foundation's Industry/University Cooperative Research Center, and work being done by standards organizations such as the Open Grid Forum, Distributed Management Task Force, and Telecommunications Management Forum, in which the CAC@TTU is involved. A summary is also given of work to produce a new round of more detailed use cases suitable for testing by the US National Institute of Standards and Technology's Standards Acceleration to Jumpstart Adoption of Cloud Computing (SAJACC) working group, with brief mention of other related work going on in other parts of the world. Background and other standards work is also mentioned.
OGF actively collaborates with other standards organizations through cooperative agreements to develop standards for distributed computing. OGF has relationships with groups like DMTF, ISO, SNIA, ETSI, ITU-T, and NIST to jointly develop standards for areas like cloud computing, identity management, and data formats. These collaborations help drive innovation while avoiding duplication of efforts between organizations.
Overview and introductory remarks for the OGF sessions held May 21-22, 2015 co-located with the European Grid Initiative 2015 conference that took place the week of May 18-22, 2015 in Lisbon, Portugal. For details, see https://www.ogf.org/ogf/doku.php/events/ogf-44
General introduction to technologies that will be seen in the school, by ISSGC Summer School
1. The document discusses principles of distributed computing technologies including service-oriented architectures, high-throughput computing, distributed data management, job submission and execution management, using distributed systems, higher-level APIs like OGSA-DAI and SAGA, and workflows.
2. It provides histories and visions for technologies like UNICORE, Condor, Globus, gLite, ARC, and P-GRADE portal.
3. Key principles covered include Web services, parallelism, interoperability, simplicity, scalability, and hiding complexity through high-level interfaces.
Open Security Controls Assessment Language (OSCAL) - 1st Workshop, Nov 5-7, 2019, by Michaela Iorga, PhD
Abstract:
Aligning security risk management and compliance activities with the broader adoption of cloud technology and the exponential increase in the complexity of smart systems leveraging such cloud solutions has been a challenging task to date. Additionally, the proliferation of container technology employed in cloud ecosystems for enhanced portability and security, compels organizations to leverage risk management strategies that are tightly coupled with the dynamic nature of their systems. NIST’s Open Security Controls Assessment Language (OSCAL) is a standard of standards that provides a normalized expression of security requirements across standards, and machine-readable representation of security information from controls to system implementation and security assessment. This bridges the gap between antiquated approaches to IT compliance and innovative technology solutions.
Imagine a future where security documentation builds itself, and security management tools from different vendors integrate seamlessly. Security practitioners will spend less time on security documentation, assessments, and adjudication, yet the results of those activities will be more accurate and more easily monitored. OSCAL enables this and more.
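The "machine-readable representation of security information" the abstract describes means controls become plain structured data that tooling can consume. The sketch below parses a heavily simplified, OSCAL-inspired catalog; the field names mimic the flavour of an OSCAL JSON catalog but are not guaranteed to match the actual schema:

```python
import json

# Heavily simplified, OSCAL-inspired catalog (illustrative field names only).
# The point: controls are machine-readable data, so documentation and
# assessment artifacts can be generated from them rather than hand-written.
catalog_json = """
{
  "catalog": {
    "title": "Example catalog",
    "controls": [
      {"id": "ac-1", "title": "Access Control Policy"},
      {"id": "au-2", "title": "Audit Events"}
    ]
  }
}
"""

catalog = json.loads(catalog_json)
for control in catalog["catalog"]["controls"]:
    print(control["id"], "-", control["title"])
```

The real OSCAL models are considerably richer (metadata, parameters, parts, and separate models for profiles, system security plans, and assessments), but the workflow is this one scaled up: load structured control data, then generate the documentation and assessment views from it.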
The document discusses opportunities for VLSI designers in various industries. It describes the NCRAV 2016 conference which aims to provide a platform for students, researchers and professionals to share ideas in VLSI design and related fields. It then lists some topic areas in VLSI design that will be covered, and outlines the revenues and typical end equipment targeted by the semiconductor industry.
Legion is a runtime machine learning platform streamlining the model development process from exploration to production deployment through automation of data workflows, continous delivery, and quality assurance. The project is released under open-source Apache Software License.
Tool-Driven Technology Transfer in Software EngineeringHeiko Koziolek
This talk presentst the tool-driven technology transfer process ABB Corporate Research applies in selected software engineering University collaborations. As an example, we have created an add-in to a popular UML tool and developed the tooling in close interaction with the target users. Centering the technology transfer around tool implementations brings many benefits such as the need to make conceptual contributions applicable and the ability to quickly benefit from the new concepts. A challenge to this form of technology transfer is the long-term commitment to the maintenance of the tooling, which we try to address by creating an open developer community. Tool-driven technology transfer projects have proven to be valuable a instrument of bringing advanced software engineering technologies into our organization.
Emerging standards and support organizations within engineering simulation Modelon
This document discusses emerging standards and support organizations within engineering simulation. It summarizes several key standards including SysML, Modelica, FMI, STEP, and OSLC. It discusses the goals and benefits of standards organizations like INCOSE and NAFEMS. The document advocates that engineers tie themselves to standards rather than specific tools to gain benefits like lower costs, improved competition between vendors, and increased ownership of models.
Towards a Lightweight Multi-Cloud DSL for Elastic and Transferable Cloud-nati...Nane Kratzke
The document discusses a proposed lightweight multi-cloud domain-specific language (DSL) for defining elastic and transferable cloud-native applications. It begins by outlining the research context and motivation to avoid vendor lock-in and make applications portable across different cloud infrastructures. The presentation then describes requirements for a cloud programming language, including supporting containerized deployments, application scaling, lightweight definitions, multi-cloud operations, and infrastructure independence. It proposes a core DSL model and shows how it can be made platform agnostic. An evaluation demonstrates deploying an application to different clouds and runtime environments and transferring it between infrastructures. The DSL is found to fulfill the intended requirements within the limitations of its scope.
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1Uu0ooG.
For over a year now, Adrian has been reading a research paper every weekday and posting a summary to his blog, 'The Morning Paper.' This is the story of what he has learned on the journey - both as it relates to the value of reading papers, and of course, what they tell about what the future may hold for us! Filmed at qconlondon.com.
Adrian Colyer is Venture Partner at Accel Partners, London Author of 'The Morning Paper' (https://blog.acolyer.org). Previously he held CTO roles at SpringSource, VMware, and Pivotal.
Cisco has developed a comprehensive approach, the Mass Scale Networking (MSN) Transformation Journey, that covers both aspects. On the technology front, technologies such as Segment Routing, EVPN, orchestration, automation, HW/SW disaggregation are covered. On the operating model side, the use of advanced APIs, model driven operations, Infrastructure as Code (IaC), and others are also covered. The primary objective of this session being to create a methodical and structured approach to drive an SP’s MSN Journey.
This document discusses challenges and solutions for machine learning at scale. It begins by describing how machine learning is used in enterprises for business monitoring, optimization, and data monetization. It then covers the machine learning lifecycle from identifying business questions to model deployment. Key topics discussed include modeling approaches, model evolution, standardization, governance, serving models at scale using systems like TensorFlow Serving and Flink, working with data lakes, using notebooks for development, and machine learning with Apache Spark/MLlib.
Cloud Standards in the Real World: Cloud Standards Testing for DevelopersAlan Sill
Learn about standards studied in the US National Science Foundation Cloud and Autonomic Computing Industry/University Cooperative Research Center Cloud Standards Testing Lab and how you can get involved to extend the successes from these results in your own cloud software settings. Presented at the O'Reilly OSCON 2014 Open Cloud Day.
Video available at https://www.youtube.com/watch?v=eD2h0SqC7tY
Conference: 13th IEEE International Conference on Industrial Informatics, INDIN 2015. Cambridge, UK – July 22-24 2015
Title of the paper: Towards processing and reasoning streams of events in knowledge-driven manufacturing execution systems
Authors: Borja Ramis Ferrer, Sergii Iarovyi, Andrei Lobov, José L. Martinez Lastra
This is a presentation by Prof. Anne Elster at the International Workshop on Open Source Supercomputing held in conjunction with the 2017 ISC High Performance Computing Conference.
The document summarizes key topics from the Cloud Native Summit conference, including:
- Distributed tracing and Zipkin, which allows visibility into request paths and troubleshooting of latency issues. Zipkin is an open source distributed tracing system.
- Production ready Kubernetes clusters on Catalyst Cloud, which provides security, high availability, and scalability for containerized applications.
- Building serverless applications at scale using services like AWS Lambda, and addressing concurrency bottlenecks when autoscaling.
- Istio service mesh, which provides control of traffic policies, authentication, and observability across distributed services through its control plane and sidecar proxy architecture.
- GitOps for infrastructure as code deployments on Open
Scilab Challenge@NTU 2014/2015 Project BriefingTBSS Group
The document discusses a Scilab challenge project hosted by NTU that aims to promote the use of Scilab in academic institutions. Students are tasked with developing a radar design toolbox for Scilab and XCos to simulate radar systems. Top projects will receive prizes and an opportunity to present at an international conference. The TBSS-Scilab partnership manages Scilab activities and user groups in Singapore and Vietnam.
Summary
The Cytoscape Cyberinfrastructure (CI) extends the successful Cytoscape development and community model by enabling network biologists to contribute and leverage microservices deployable at scale. The CI solves many of Cytoscape’s limitations while also delivering novel and dynamic functionality to both Cytoscape and standalone workflows, thus further empowering the already vital network biology community.
Abstract
Cytoscape is an indispensable tool for network data analysis and visualization. One of Cytoscape’s greatest strengths is that it is powered by a vibrant array of developer-contributed apps. However, as network biologists’ requirements evolve, Cytoscape is challenged not only to keep pace, but to lead new and existing developers to create even greater value. Currently, multiscale and multifaceted networks push the memory limits of a Cytoscape workstation, while complex calculations such as Network Based Stratification and Network Based GWAS strain workstation processors. Increasingly, users demand support for collaborative projects, reproducible workflows, and interoperability with external tool chains. Finally, economic pressures favor solutions that promote code and algorithm reusability and evolvability.
In response, we have created the Cytoscape Cyberinfrastructure (CI), which is both an Internet-scale distributed system (based on Microservices [1]) and the network biology community it serves. Its mission is to enable and encourage network biologists to create and deploy high quality, innovative and scalable services focusing on network-based computation, collaboration and visualization.
Microservices can be written in any language, and are highly testable and evolvable. They can run on servers ranging from a single thread to a large cloud-based cluster. They can easily be reused in reproducible workflows or can serve as components in larger services. The CI links microservices via a light weight REST-based aspect-oriented interchange protocol (called CX), which enables tailored data streams while supporting service innovation via evolvable standards. CI infrastructure services support user authentication, long duration job execution, and a service repository that enables researchers to publish their services or discover services published by others. This model builds on the successful Cytoscape app community, which is based on similar mechanisms though at the scale of individual workstations.
Prominent examples of microservices include NDEx [2] (a repository for biological networks), NodeWalker (which uses heat dispersion to identify the most relevant subnetworks containing a given set of genes), cyNetShare [3] (which visualizes a network in a browser) and Cytoscape itself (which can also call CI services). Interfaces are available for Python, IPython, R and Matlab. Future work includes adding clustering, analysis, layout, publishing and display microservices and interfaces to Galaxy and Taverna workflows.
Similar to OSLC KM (Knowledge Management): elevating the meaning of data and operations within the toolchain (20)
El documento describe una investigación sobre la monitorización de redes sociales y la desinformación en Europa. El proyecto busca desarrollar una plataforma híbrida que utilice técnicas deterministas e inteligencia artificial para clasificar y analizar contenidos en redes, detectar bots y medir la viralidad. El objetivo final es ayudar a verificar la información y combatir la desinformación.
Este documento presenta una introducción a Deep Learning. Comienza con una agenda que incluye una visión general de Deep Learning, Keras y ejemplos de casos de uso. Luego cubre arquitecturas y configuraciones de redes neuronales profundas, incluidas funciones de activación, pérdida y ejemplos de redes como AlexNet y ResNet. También describe el entorno tecnológico, incluidos frameworks como TensorFlow y Keras, e infraestructura en la nube. Finalmente, proporciona una metodología de trabajo y una lista de ejemplos práct
This is the final degree project of Eduardo Cibrián that has developed a semantic system to generate news headlines for several sports based on a set of patterns
In this presentation, a an overview of the blockchain foundations are presented. The presentation introduces the use of blockchain in the music industry. To do so, a good number of platforms are presented. It mainly reviews the use of blockchain for intellectual property management, digital identity, monetization, etc.
El documento presenta una arquitectura propuesta para Big Data que incluye bloques funcionales para adquisición masiva de datos, almacenamiento en Data Lake, análisis de demografía, lenguaje natural, métricas sociales y redes ego, y presentación de visualizaciones. El objetivo es analizar grandes cantidades de datos para segmentar audiencias y tomar decisiones en RTVE.
This document provides instructions for a simple presentation. It outlines the basic steps one should follow when giving a presentation, including introducing yourself and the topic, presenting the main points in a clear manner, and concluding by summarizing what was covered. The guidelines are intended to help give a straightforward presentation without unnecessary complexity or extras.
This document provides an introduction and overview of SKOS (Simple Knowledge Organization System), which is a Semantic Web vocabulary for representing knowledge organization systems such as thesauri, classification schemes, and taxonomies. It describes the core SKOS entities like Concept and Concept Scheme, and properties like prefLabel, notation, and semantic relations. The document then provides a step-by-step guide to modeling a car taxonomy in SKOS, including assigning URIs, adding labels and documentation, and linking concepts with semantic and mapping properties. Tips are also included around best practices like reusing existing vocabularies and defining custom properties. Overall, the document serves as a tutorial for how to represent and structure a controlled vocabulary using the SKOS
This document describes the CORFU technique for unifying and reconciling corporate names in public contracts metadata. It aims to create a "big name" or unique identifier for each company by normalizing varying names into a single entry. The technique is applied to 400,000 supplier names in Australian public procurement data. It involves loading the names, normalizing text, filtering basic names, applying natural language processing including tokenization and stemming, clustering similar names, and selecting cluster representatives to link names to a company URI. The goal is to improve transparency in tracking where public money is spent.
The document describes an approach called RDFIndex for representing and computing quantitative indexes using semantic web technologies. The main contributions are a high-level model built on top of the RDF Data Cube Vocabulary for representing indexes, and a Java-SPARQL based processor to exploit metadata, validate indexes, and compute new index values. An example index called the "World Bank Naive Index" is used to illustrate how the RDFIndex approach can represent the structure of an index and its components/indicators in RDF, and compute the index values.
Some slides about the Map/Reduce programming model (academic purposes) adapting some examples of the book Map/Reduce design patterns.
Special thanks to the next authors:
-http://shop.oreilly.com/product/0636920025122.do
-http://mapreducepatterns.com/index.php?title=Main_Page
-http://highlyscalable.wordpress.com/2012/02/01/mapreduce-patterns/
This document discusses quality management for service-based systems and cloud applications. It begins with quotes from Aristotle about how constantly performing certain actions can lead one to acquire particular qualities. It then discusses the need to manage how cloud services act in order to ensure quality. The document provides an overview of a proposed quality management architecture, including concepts like quality models, monitoring tools, and execution environments for analytics. It also reviews some existing quality models, monitoring techniques and tools, and cloud management platforms. Finally, it outlines next steps around designing and testing a complete quality management example.
The document discusses the need for a pan-European e-procurement platform to aggregate, publish, and search public procurement notices using linked open data. It proposes the MOLDEAS approach, which would transform procurement data into structured, machine-readable formats using semantic web technologies to enable more effective search and reuse across borders. By making procurement data openly available according to linked data principles, it aims to boost SME participation in public contracts throughout Europe.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Batteries -Introduction – Types of Batteries – discharging and charging of battery - characteristics of battery –battery rating- various tests on battery- – Primary battery: silver button cell- Secondary battery :Ni-Cd battery-modern battery: lithium ion battery-maintenance of batteries-choices of batteries for electric vehicle applications.
Fuel Cells: Introduction- importance and classification of fuel cells - description, principle, components, applications of fuel cells: H2-O2 fuel cell, alkaline fuel cell, molten carbonate fuel cell and direct methanol fuel cells.
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
Optimizing Gradle Builds - Gradle DPE Tour Berlin 2024Sinan KOZAK
Sinan from the Delivery Hero mobile infrastructure engineering team shares a deep dive into performance acceleration with Gradle build cache optimizations. Sinan shares their journey into solving complex build-cache problems that affect Gradle builds. By understanding the challenges and solutions found in our journey, we aim to demonstrate the possibilities for faster builds. The case study reveals how overlapping outputs and cache misconfigurations led to significant increases in build times, especially as the project scaled up with numerous modules using Paparazzi tests. The journey from diagnosing to defeating cache issues offers invaluable lessons on maintaining cache integrity without sacrificing functionality.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
OSLC KM (Knowledge Management): elevating the meaning of data and operations within the toolchain
1. OSLC KM: elevating the meaning of data and operations within the toolchain
Jose María Alvarez, Roy Mendieta & Juan Llorens | Assoc. Prof. UC3M | josemaria.alvarez@uc3m.es
2. OSLC Fest 2018
OSLC KM: Introduction
Characteristics of the systems under development (aspect → comment):
- Type of product: complex (very complex!)
- Development lifecycle: multidisciplinary (software, mechanics, electronics, etc.) → time and costs
- Functionality: increased over time
- Lifetime: long (+30 years)
- Degree of regulation: high
- Suppliers: thousands
- Engineers: thousands
- Customers: hundreds
- Scope: international
- …
4. OSLC KM: Some needs…
- Knowledge: a knowledge model to drive the development lifecycle.
- Naming: a common vocabulary to standardize the naming of any system artefact.
- Ad-hoc integration: integration of the different tools in the toolchain.
- Discovery: a method to automatically discover and manage traces.
- Collaboration: an engineering environment to ensure quality, save costs and enable team collaboration.
- Vendor lock-in: a method to avoid vendor lock-in, ensuring compatibility in terms of models, formats, access protocols, etc.
5. OSLC KM: Reuse principles
- Abstraction: complexity management; how and when "reuse" is possible.
- Selection: artefact discovery; representation, storage, classification and comparison.
- Specialization: how artefacts can be customized.
- Integration: to what extent an artefact can be easily integrated in another context.
6. OSLC KM: Main question
Is it possible to:
- improve the degree of reuse of any system artefact, and
- deliver added-value services
through a common representation and an interoperable access model?
7. OSLC KM: Related work — representation and data exchange
Existing OSLC domains: Core (Configuration Management, Reporting), ALM-PLM, Architecture Management, Asset Management, Automation, Change Management, Estimation & Measurement, Performance Monitoring, Quality Management, Reconciliation, Requirements Management and others (e.g. Mobile).
- Open Services for Lifecycle Collaboration (OSLC): REST services + Linked Data + Resource Shape
- Model-based Systems Engineering (MBSE) → SysML
- ISO STEP 10303 (STandard for the Exchange of Product model data)
- W3C Recommendation SHACL and Shape Expressions
8. OSLC KM: Related work — system artefact reuse
Previous works span models & quality, libraries & components, ontologies, product lines and repositories ([1]–[8]).
9. OSLC KM: Preliminary evaluation
OSLC/STEP:
- Some types of artefacts cannot be represented (and there is a lack of connectors for any X).
- Linked Data and RDF are well suited mainly for data exchange.
- STEP is not service-oriented → integration is more difficult.
MBSE:
- Not everything is a model.
- Not every model is a SysML model.
- Different SysML interpretations exist.
Reuse:
- Approaches focused on software artefacts (components and product lines).
- Component models and web services (operations).
- Common data models (data).
11. OSLC KM: Concept — a winning strategy
- Visualization: integrated view of system artefacts.
- Human interface: query artefacts using natural language.
- Automation of tasks: support for tasks that require a whole view of the system: test case description, change impact analysis, populating models, documentation, …
- Quality: ensure the quality of any system artefact.
- Language uniformity: ensure consistency along the development lifecycle.
- Traceability: discover and manage links.
13. OSLC KM: Concept — metadata, data and operations
Every system artefact is represented through three layers: metadata (attributes), data (contents) and operations (third-party functionalities: quality, traceability, naming, documenting, etc.).
14. OSLC KM: Concept — OSLC KM (Knowledge Management)
- System Representation Language (SRL) → OSLC KM Resource Shape
- Domain ontology → System Knowledge Base (SKB)
- Domain artifacts → System Assets Store (SAS)
- Functionality interface → Delegated Operations
See specification: http://trc-research.github.io/spec/km/
16. OSLC KM: Domain ontology
- Controlled vocabulary: domain vocabulary, taxonomy and semantic relationships.
- Patterns: templates built on top of the domain vocabulary and the semantic relationships (e.g. requirements, design, etc.).
- Inference: generation of new knowledge, consistency, …
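The taxonomy-plus-inference idea above can be sketched in a few lines of plain Python: a controlled vocabulary whose terms are linked by "broader" relations, and an inference step that derives the transitive closure of those links. The terms and relations below are illustrative assumptions, not part of the OSLC KM specification.

```python
# Minimal sketch of a domain ontology as a controlled vocabulary with
# "broader" (taxonomy) relations, plus a tiny inference step that
# generates new knowledge: the transitive closure of broader-than links.
# All terms here are hypothetical examples.

# Controlled vocabulary: term -> set of directly broader terms
broader = {
    "pressure-sensor": {"sensor"},
    "sensor": {"component"},
    "actuator": {"component"},
    "component": {"system-artefact"},
}

def infer_broader(term, taxonomy):
    """Infer all broader terms of `term`, direct and indirect."""
    seen = set()
    stack = [term]
    while stack:
        current = stack.pop()
        for parent in taxonomy.get(current, ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

# Inferred: a pressure-sensor is also a component and a system-artefact,
# although neither fact was stated directly.
print(infer_broader("pressure-sensor", broader))
```

The same closure is what lets a semantic search for "component" also retrieve artefacts indexed under the narrower term "pressure-sensor".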
17. OSLC KM: Domain artefacts
An input artefact coming from tool k (text, SysML, Modelica, Simulink, Linked Data, …) is processed through transformation rules into the SKB/SRL, building an industrial knowledge graph.
18. OSLC KM: Delegated operation
- D6.3 Design of the AMASS tools and methods for cross/intra-domain reuse (b)
- Mapping between WSDL and REST (and JSON-RPC)
19. OSLC KM: Functional architecture
End-users and tools submit a system artefact or a natural language query through the OSLC KM interface. The OSLC-KM processor applies mapping rules (RDF2DataShape, implemented with the Visitor pattern), an optional reasoning process to classify and infer new triples, and validation & Data Shape generation, turning OSLC-based resources and RDF vocabularies into OSLC KM based resources. A semantic indexing process stores OSLC KM items (OSLC resources & skos:Concept) in the System Artefact Repository (SAR). On top of the repository run the semantic search & naming process, traceability (OSLC KM item mappings), quality checking against quality rules (OSLC resources + quality metrics), and visualization (a general-purpose view and a preferred view).
20. OSLC KM: Technological environment
A tool k connects through an OSLC KM adapter (.Net, Java, XSLT) and an OSLC KM Client & Provider (.Net), exchanging HTTP & RDF with an OSLC KM Provider (.Net, Java) that exposes common services; internally, CAKE (.Net) and the KM SAS & SKB (.Net) manage the knowledge base.
See libraries: https://github.com/trc-research/oslc-km
22. OSLC KM: Implementation — A world of knowledge by The Reuse Company
The SIM (System Interoperability Manager) connects the tools through OSLC-KM, while KCSE provides quality, traceability, retrieval & reuse, interoperability, reasoning and authoring capabilities.
23. OSLC KM: Scientific experimentation [9]
Enabling system artefact exchange and selection through a Linked Data layer. Jose María Álvarez-Rodríguez; Mendieta, R.; de la Vara, J. L.; Fraga, A.; and Llorens, J. J. UCS (JUCS), In-Press: 1–24, 2018.
01. Selection of tools and types of artefacts: logical SysML models in two tools (Papyrus and IBM Rhapsody) and physical models from Simulink.
02. Design of queries: 25 user-based queries for SysML models and 20 for Simulink models, taken from the AMASS project.
03. Selection of performance metrics: common information retrieval metrics — precision, recall and F1 measure [10].
04. Selection of acceptance ranges, based on [11]: precision > 20% acceptable, > 30% good, > 50% excellent; recall > 60% acceptable, > 70% good, > 80% excellent.
05. Execution: perform the queries on top of the selected models to calculate the performance metrics.
06. Analysis of results and limitations, based on the acceptance ranges.
Data is available here: https://github.com/trc-research/oslc-km
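The acceptance ranges quoted from [11] can be written as a small classifier, so that each query's precision or recall is bucketed automatically. The thresholds are exactly the ones on the slide; the function name and structure are an illustrative sketch.

```python
# Classify a precision or recall value into the acceptance ranges
# reported in [11]: strictly above the cutoff qualifies for the label.
def rate(metric, value):
    thresholds = {
        "precision": [(0.50, "excellent"), (0.30, "good"), (0.20, "acceptable")],
        "recall":    [(0.80, "excellent"), (0.70, "good"), (0.60, "acceptable")],
    }
    for cutoff, label in thresholds[metric]:
        if value > cutoff:
            return label
    return "not acceptable"

print(rate("precision", 0.77))  # excellent
print(rate("recall", 0.67))     # acceptable
```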
24. OSLC KM: Design of the experiment — user queries
Logical models (SysML):
Q1 System availability
Q2 Maximum rate of failure
Q3 Manage traffic flow
Q4 System for purifying water
Q5 System using a remote control component
Q6 System using cameras
Q7 System with a statistical data component
Q8 System performance requirements
Q9 Requirements of system usability
Q10 System with a simulation component
Q11 Group creation
Q12 System restrictions requirements
Q13 System that uses sensors
Q14 Gather and interpret information module
Q15 Adaptive control
Q16 Consistency in transactions
Q17 Manual control
Q18 Intruder detection
Q19 Time validation
Q20 Computer response time
Q21 System validation cards
Q22 Tasks and scenarios
Q23 Traffic management based on the region
Q24 Automatic operation of semaphores
Q25 Control standard
Physical models (Simulink):
Q1 A flow between a constant, product, sum block and an outport block.
Q2 A flow between an inport, a product and a sum block.
Q3 A flow between an inport, sum block and integrator.
Q4 A flow between a subsystem and an outport block.
Q5 A flow between a subsystem and a To Workspace block.
Q6 A flow between a Transport Delay and a Subsystem block.
Q7 A flow between an Integrator block, Transport Delay and Subsystem block.
Q8 A flow between inport and constant blocks with a product block.
Q9 A flow between inport and constant blocks with a product block, and the product block with an outport block.
Q10 A flow between an Integrator and a Subsystem, an Add block and a subsystem, and a Subsystem with a Subsystem.
Q11 A flow between an Integrator and a Subsystem, an Add block and a subsystem, and a Subsystem with Subsystem1 and Subsystem2.
Q12 A flow between an Integrator and a Subsystem, an Add block and a subsystem, and a Subsystem with Subsystem1 and Subsystem2 with a To Workspace block.
Q13 Model with no flows, only an inport block, outport block and product block.
Q14 Two submodels of a flow between an inport, a product, a sum block and an outport.
Q15 Two submodels of a flow between an inport, a product, a sum block and an outport, with two constants.
Q16 A flow between an inport and an add block, and two inport nodes without flow.
Q17 A flow between an add block and a constant with a divide block.
Q18 A flow between a divide block, two integrator nodes and three outport blocks.
Q19 A flow between an integrator block and an outport block, and two outport blocks and one add block with no flows.
Q20 A flow between 4 Transport Delays with two subsystems.
25. 25OSLC Fest 2018
• Precision: fraction of relevant models among the retrieved models.
  • Value in [0, 1]
  P = |relevant models ∩ retrieved models| / |retrieved models|
• Recall: fraction of relevant models that have been retrieved over the total number of relevant models.
  • Value in [0, 1]
  R = |relevant models ∩ retrieved models| / |relevant models|
• F1-measure: harmonic mean of precision and recall.
  • Value in [0, 1]
  F1 = 2 · (P · R) / (P + R)
OSLC KM
Design of the experiment: performance metrics
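The three performance metrics above can be computed directly over sets of model identifiers. A minimal sketch; the model identifiers and the `precision_recall_f1` helper are illustrative, not part of the experiment's tooling:

```python
# Precision, recall and F1 over sets of model identifiers, as defined above.
def precision_recall_f1(relevant, retrieved):
    hits = len(relevant & retrieved)
    p = hits / len(retrieved) if retrieved else 0.0
    r = hits / len(relevant) if relevant else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

# Hypothetical query result: 4 models retrieved, 3 of them relevant,
# out of 5 relevant models in total.
relevant = {"m1", "m2", "m3", "m4", "m5"}
retrieved = {"m1", "m2", "m3", "m9"}
p, r, f1 = precision_recall_f1(relevant, retrieved)
print(round(p, 2), round(r, 2), round(f1, 3))  # 0.75 0.6 0.667
```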
26.
OSLC KM
Analysis: aggregated values
[Bar charts of aggregated P, R and F1 (scale 0–1). Logical models (SysML), comparing OSLC KM, Papyrus and IBM Rhapsody: data labels 0.77, 0.96, 0.82, 0.67, 0.85, 0.66, 0.71, 0.82, 0.7. Physical models (Simulink), comparing OSLC KM and Simulink: data labels 0.68, 0.79, 0.61, 0.32, 0.31, 0.55.]
27.
OSLC KM
Analysis of results: OSLC KM
[Pie charts showing, per metric (Precision, Recall, F1), the share of queries with an "Excellent" result versus the rest, for logical models (SysML) and physical models (Simulink); "Excellent" shares shown: 32%, 12%, 40%, 40%, 60%, 43%.]
28.
OSLC KM
Scientific experimentation: limitations
Data:
• User queries are restricted to the AMASS use cases.
• Models are restricted to the AMASS use cases and those that are part of the common libraries.
• Only two types of models are considered.
Process:
• Continuous calculation and improvement of the performance metrics, to create a kind of "benchmark".
• Measure the impact of quality on the degree of reuse.
Analysis:
• Robustness analysis to measure the impact of the different representations of the same data.
29.
OSLC KM
User story *: Extract information from legacy documents
As a Requirements Engineer,
I want to import requirements already available in Word/PDF documents,
so that I can check quality, find similar requirements, recover traces, etc.
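The import step in this user story can be approximated with textual pattern matching over the document text. A hedged sketch, assuming a hypothetical "REQ-id: The subject shall capability" boilerplate; the pattern and requirement texts are illustrative, not the OSLC KM extraction rules:

```python
import re

# Illustrative pattern: a requirement line of the form
# "REQ-<id>: The <subject> shall <capability>."
REQ_PATTERN = re.compile(r"^(REQ-\d+):\s*The\s+(.+?)\s+shall\s+(.+?)\.?$")

def extract_requirements(lines):
    """Yield (id, subject, capability) triples from plain-text lines."""
    for line in lines:
        match = REQ_PATTERN.match(line.strip())
        if match:
            yield match.groups()

# Hypothetical text extracted from a Word/PDF document:
doc = [
    "Introduction to the braking subsystem.",
    "REQ-001: The controller shall limit the speed to 120 km/h.",
    "REQ-002: The system shall log every braking event.",
]
print(list(extract_requirements(doc)))
```

In practice the controlled vocabulary and richer textual patterns of the knowledge base would replace this single regular expression.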
30.
OSLC KM
User story I: Reuse (and find similar) logical & physical models
As a Domain Engineer using different tools (Modelica, Papyrus, IBM Rhapsody and MagicDraw),
I want to access and search logical and physical models,
so that I can reuse existing system artefacts and recover traces.
31.
OSLC KM
User story II: Check quality of logical models
As a Quality/Domain Engineer using different tools (Modelica, Papyrus, IBM Rhapsody and MagicDraw),
I want to check the quality of my models,
so that I can ensure that everything is CCC.
32.
OSLC KM
User story III: Generate documentation
As a Domain Engineer,
I want to generate documentation reports,
so that I can create consistent and up-to-date documentation, automate the process, and reuse my system artefacts.
33.
OSLC KM
User story IV: Populate an ontology from Simulink models
As a Domain Engineer,
I want to reuse my physical models to populate an ontology,
so that I can create my web ontology based on OWL.
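User story IV can be illustrated by emitting RDF triples for model blocks. A minimal sketch using plain N-Triples strings; the namespace, class names and `block_to_triples` helper are assumptions, not the actual OSLC KM vocabulary:

```python
# Map Simulink-like blocks to RDF triples (N-Triples syntax) so they can
# populate an OWL ontology. The namespace below is hypothetical.
NS = "http://example.org/simulink#"

def block_to_triples(block_name, block_type):
    subject = f"<{NS}{block_name}>"
    return [
        f"{subject} <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <{NS}{block_type}> .",
        f"{subject} <http://www.w3.org/2000/01/rdf-schema#label> \"{block_name}\" .",
    ]

# Hypothetical blocks read from a Simulink model file:
blocks = {"In1": "Inport", "Gain1": "Gain"}
triples = [t for name, btype in blocks.items() for t in block_to_triples(name, btype)]
print(len(triples))  # 4
```

The resulting N-Triples can be loaded into any RDF store or merged into an OWL ontology with standard tooling.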
34.
OSLC KM
User story V: Checklist from MS Excel
As a Systems Engineer,
I want to validate whether a set of activities have been fulfilled,
so that I can measure the completeness of my process.
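The checklist validation in user story V reduces to a completeness ratio over activities. A minimal sketch; the activity names and the dict-based representation of the spreadsheet are assumptions:

```python
# Completeness of a process checklist: fraction of activities marked as done.
def completeness(checklist):
    """checklist: dict mapping activity name -> True/False (fulfilled)."""
    if not checklist:
        return 0.0
    return sum(checklist.values()) / len(checklist)

# Hypothetical rows read from an MS Excel checklist:
activities = {"Define vocabulary": True, "Review requirements": True,
              "Trace models": False, "Run quality checks": True}
print(completeness(activities))  # 0.75
```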
35.
OSLC KM
Conclusions and Future work
(Diagram categories: Data exchange, Representation, Reuse, Coverage, Experiment & User stories, OSLC KM)
- OSLC and Linked Data suit data exchange well.
- Define a methodology to reuse vocabularies, etc.
- SRL is a language and a model repository to ease the reuse of existing data and operations.
- Existing tools should improve their support for interoperability mechanisms in both data and operations.
- Increase the number of tools that are supported.
- API economy.
- Extend the existing experiments and user stories.
- Take advantage of the industrial knowledge graph.
- Release new versions of the source code.
- Reach a higher TRL (8-9).
- Promote the approach to OASIS OSLC.
36.
OSLC KM
Acknowledgements
The research leading to these results has received funding from the AMASS project (H2020-ECSEL grant agreement no. 692474; Spain's MINECO ref. PCIN-2015-262) and the CRYSTAL project (ARTEMIS FP7 CRitical sYSTem engineering AcceLeration, project no. 332830-CRYSTAL; and the Spanish Ministry of Industry).
Learn more: https://www.amass-ecsel.eu/
37. Thank you for your attention!
Juan Llorens & Jose María Álvarez-Rodríguez
Josemaria.alvarez@uc3m.es
@chema_ar
Take a seat and comment with us!
38.
1. W. Frakes and C. Terry, “Software reuse: metrics and models,” ACM Comput. Surv. CSUR, vol. 28, no. 2, pp.
415–435, 1996.
2. A. Mili, R. Mili, and R. T. Mittermeir, “A survey of software reuse libraries,” Ann. Softw. Eng., vol. 5, pp. 349–
414, 1998.
3. J. Guo and others, “A survey of software reuse repositories,” in Engineering of Computer-Based Systems,
IEEE International Conference on the, 2000, pp. 92–92.
4. R. Land, D. Sundmark, F. Lüders, I. Krasteva, and A. Causevic, “Reuse with software components-a survey of
industrial state of practice,” in Formal Foundations of Reuse and Domain Engineering, Springer, 2009, pp.
150–159.
5. T. Thüm, S. Apel, C. Kästner, I. Schaefer, and G. Saake, “A Classification and Survey of Analysis Strategies for
Software Product Lines,” ACM Comput. Surv., vol. 47, no. 1, pp. 1–45, Jun. 2014.
6. V. Castañeda, L. Ballejos, L. Caliusco, and R. Galli, “The Use of Ontologies in Requirements Engineering,”
GJRE, vol. 10, no. 6, 2010.
7. R. Mendieta, J. L. de la Vara, J. Llorens, and J. M. Alvarez-Rodríguez, “Towards Effective SysML Model Reuse,”
in Proceedings of the 5th International Conference on Model-Driven Engineering and Software
Development - Volume 1: MODELSWARD, 2017, pp. 536–541.
8. E. Gallego, J. M. Alvarez-Rodríguez, and J. Llorens, "Reuse of Physical System Models by means of
Semantic Knowledge Representation: A Case Study applied to Modelica," in Proceedings of the 11th
International Modelica Conference, 2015, vol. 1.
9. N. Juristo and A. M. Moreno, Basics of Software Engineering Experimentation, vol. 5/6. Springer Science &
Business Media, 2001.
10. W. B. Croft, D. Metzler, and T. Strohman, Search Engines: Information Retrieval in Practice. Pearson
Education, 2010.
11. J. H. Hayes, A. Dekhtyar, and S. K. Sundaram, “Improving after-the-fact tracing and mapping: Supporting
software quality predictions,” IEEE Softw., vol. 22, pp. 30–37, 2005.
OSLC KM
References