In the new era of digitalization, there is an ever-growing need for design and production processes capable of increasing system quality and reducing risks and the chance of errors while, at the same time, reducing overall production costs. Nowadays, more and more systems design scenarios span a high number of domains.
However, the underlying tool landscape is still dominated by closed ecosystems, so design data remains locked in separate silos. To deal effectively with novel, massively diverse yet interconnected engineering scenarios, while also considering industrial sustainability and the well-being of the future digital society, we need new ways to look at the digital thread: supporting every phase of the digital engineering lifecycle while turning siloed multi-domain engineering data into a holistic, accessible, and globally analyzable digital thread.
The document discusses the need for holistic systems engineering and breaking down disconnected silos. It proposes a conceptual framework using lightweight traceability and digital thread analytics to address completeness, correctness, and consistency challenges when transferring engineering data between systems engineering tools and detailed design tools. This approach aims to improve communication, reduce defects, and connect previously separated disciplines and tools.
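The completeness, correctness, and consistency checks mentioned above can be pictured as simple set comparisons over traceability links between two tool models. The sketch below is only an illustration of that idea; the element names and data structures are assumptions, not the framework's actual API.

```python
# Hypothetical sketch: checking a digital thread that links systems-engineering
# elements to detailed-design elements via explicit trace links.

def check_digital_thread(system_elements, design_elements, trace_links):
    """system_elements / design_elements: dicts of id -> attribute dict.
    trace_links: list of (system_id, design_id) pairs."""
    traced_sys = {s for s, _ in trace_links}

    # Completeness: every systems element must be realized in detailed design.
    missing = set(system_elements) - traced_sys
    # Correctness: links must not point at unknown elements.
    dangling = [(s, d) for s, d in trace_links
                if s not in system_elements or d not in design_elements]
    # Consistency: linked elements must agree on their shared attributes.
    mismatched = [(s, d) for s, d in trace_links
                  if s in system_elements and d in design_elements
                  and system_elements[s] != design_elements[d]]
    return missing, dangling, mismatched

sys_model = {"S1": {"signal": "CAN"}, "S2": {"signal": "LIN"}}
ee_model = {"D1": {"signal": "CAN"}}
links = [("S1", "D1")]
print(check_digital_thread(sys_model, ee_model, links))
# "S2" has no trace link, so the thread is incomplete.
```

A real digital-thread analysis would of course run such queries over models extracted from the engineering tools rather than hand-built dictionaries.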
The Genesis of Holistic Systems Engineering: Completeness and Consistency Man... (IncQuery Labs)
IncQuery Group's presentation for the MBSE CES event, “The Genesis of Holistic Systems Engineering: Completeness and Consistency Management of the Digital Thread”
A machine learning and data science pipeline for real companies (DataWorks Summit)
Comcast is one of the largest cable and telecommunications providers in the country built on decades of mergers, acquisitions, and subscriber growth. The success of our company depends on keeping our customers happy and how quickly we can pivot with changing trends and new technologies. Data abounds within our internal data centers and edge networks as well as both the private and public cloud across multiple vendors.
Within such an environment and given such challenges, how do we get AI, machine learning, and data science platforms built so our company can respond to the market, predict our customers’ needs, and create new revenue-generating products that delight our customers? If you don’t happen to be our friends and colleagues at Google, Facebook, and Amazon, what technologies, strategies, and toolkits can you employ to bring together disparate data sets and quickly get them into the hands of your data scientists, and then into your own production systems for use by your customers and business partners?
We’ll explore our journey and evolution and look at specific technologies and decisions that have gotten us to where we are today and demo how our platform works.
Speaker
Ray Harrison, Comcast, Enterprise Architect
Prashant Khanolkar, Comcast, Principal Architect Big Data
Knowledge-Based Analysis and Design (KBAD): An Approach to Rapid Systems Engi... (Elizabeth Steiner)
The document describes Knowledge-Based Analysis and Design (KBAD), a methodology developed by Systems and Proposal Engineering Company for rapid systems engineering and architecture development. KBAD combines system engineering and program management disciplines to develop an executable knowledge base that can support decision-making across a system's lifecycle. It utilizes a modified form of Model-Based Systems Engineering (MBSE) with simplified constructs and relationships between elements. The goal is to reduce complexity and capture the essential information needed for analysis and design in a more cost-effective manner than traditional approaches.
Embedded systems are application-specific systems that contain both hardware and software tailored for a particular task. Good hardware/software codesign involves representing the system functionality using unified models that can be partitioned between hardware and software implementations. There are various partitioning algorithms that aim to optimize metrics like performance, cost and power consumption by assigning functional objects to either hardware or software components. The choice of modeling language and partitioning approach depends on the application and design constraints.
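The partitioning idea described above can be illustrated with a minimal greedy sketch: each functional object gets an estimated software time, hardware time, and hardware area, and objects with the best speedup per unit of area are moved into hardware until an area budget is exhausted. All numbers and names here are made up for illustration; real codesign tools use far more sophisticated algorithms.

```python
# Minimal greedy hardware/software partitioning sketch (illustrative only).

def partition(objects, area_budget):
    """objects: list of (name, sw_time, hw_time, hw_area) tuples.
    Greedily assigns the objects with the largest speedup per unit of
    hardware area to hardware until the area budget runs out."""
    # Rank candidates by speedup gained per unit of hardware area, best first.
    ranked = sorted(objects, key=lambda o: (o[1] - o[2]) / o[3], reverse=True)
    assignment, used = {}, 0
    for name, sw_time, hw_time, hw_area in ranked:
        if sw_time > hw_time and used + hw_area <= area_budget:
            assignment[name] = "HW"
            used += hw_area
        else:
            assignment[name] = "SW"
    return assignment

tasks = [("fft", 10.0, 2.0, 40), ("ui", 1.0, 0.9, 30), ("crypto", 8.0, 1.0, 50)]
print(partition(tasks, area_budget=60))
# {'fft': 'HW', 'crypto': 'SW', 'ui': 'SW'}
```

With a budget of 60 area units, only the FFT (the best speedup per area) fits in hardware; the rest falls back to software, which is exactly the kind of trade-off the partitioning algorithms above optimize.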
Eclipse Hawk provides scalable querying of models by indexing them into graph databases. It addresses challenges of collaborative modeling on large systems by distributed teams. The Hawk API is designed for flexibility, performance, and scalability through features like multiple communication styles, efficient encodings, and paged results.
Deploying ML models in production, with or without CI/CD, is significantly more complicated than deploying traditional applications. That is mainly because ML models do not just consist of the code used for their training, but they also depend on the data they are trained on and on the supporting code. Monitoring ML models also adds additional complexity beyond what is usually done for traditional applications. This talk will cover these problems and best practices for solving them, with special focus on how it's done on the Databricks platform.
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
Source-to-source transformations: Supporting tools and infrastructure (kaveirious)
Introduction to source-to-source transformation. Concept and overview. Basics of existing tools (TXL, ROSE, Cetus, EDG, C-to-C, Memphis); pros and cons. Part of an internal evaluation for selecting a source-to-source transformation tool.
Data scientists and machine learning practitioners nowadays seem to be churning out models by the dozen, continuously experimenting to improve their accuracy. They also use a variety of ML and DL frameworks and languages, and a typical organization may find that this results in a heterogeneous, complicated collection of assets that require different types of runtimes, resources, and sometimes even specialized compute to operate efficiently.
But what does it mean for an enterprise to actually take these models to "production"? How does an organization scale inference engines out and make them available to real-time applications without significant latency? Different techniques are needed for batch (offline) inference and for instant, online scoring. Data needs to be accessed from various sources, and cleansing and transformation of that data must be possible before any predictions are made. In many cases, there may be no substitute for customized data handling with scripting, either.
Enterprises also require built-in auditing and authorization and approval processes, while still supporting a "continuous delivery" paradigm whereby a data scientist can deliver insights faster. Not all models are created equal, nor are the consumers of a model, so enterprises require both metering and allocation of compute resources to meet SLAs.
In this session, we will take a look at how machine learning is operationalized in IBM Data Science Experience (DSX), a Kubernetes based offering for the Private Cloud and optimized for the HortonWorks Hadoop Data Platform. DSX essentially brings in typical software engineering development practices to Data Science, organizing the dev->test->production for machine learning assets in much the same way as typical software deployments. We will also see what it means to deploy, monitor accuracies and even rollback models & custom scorers as well as how API based techniques enable consuming business processes and applications to remain relatively stable amidst all the chaos.
Speaker
Piotr Mierzejewski, Program Director Development IBM DSX Local, IBM
The document discusses Clean Architecture, an architectural pattern for software design. It aims to facilitate maintainability, technical agility, and independent development. Clean Architecture prescribes separating an application into distinct layers - entities, use cases, interfaces, and entry points. This separation aims to make codebases independent of frameworks and easily testable. The document outlines principles like SOLID and DRY, and patterns like layered architecture and MVC that influence Clean Architecture. It provides tips for migrating existing applications to this architecture.
How a Data Mesh is Driving our Platform | Trey Hicks, Gloo (HostedbyConfluent)
At Gloo.us, we face a challenge in providing platform data to heterogeneous applications in a way that eliminates access contention, avoids high-latency ETLs, and ensures consistency for many teams. We're solving this problem by adopting Data Mesh principles and leveraging Kafka, Kafka Connect, and Kafka Streams to build an event-driven architecture that connects applications to the data they need. A domain-driven design keeps the boundaries between specialized process domains and singularly focused data domains clear, distinct, and disciplined. Applying the principles of a Data Mesh, process domains assume the responsibility of transforming, enriching, or aggregating data rather than relying on these changes being made at the source of truth -- the data domains. Architecturally, we've broken centralized big data lakes into smaller data stores that can be consumed into storage managed by process domains.
This session covers how we’re applying Kafka tools to enable our data mesh architecture. This includes how we interpret and apply the data mesh paradigm, the role of Kafka as the backbone for a mesh of connectivity, the role of Kafka Connect to generate and consume data events, and the use of KSQL to perform minor transformations for consumers.
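The division of responsibility described above can be sketched in a few lines: a process domain consumes events owned by a data domain and performs the enrichment itself, rather than pushing the transformation back onto the source of truth. In the architecture above the events would flow through Kafka topics; in this illustrative sketch a plain list stands in for a topic, and all names are assumptions.

```python
# Data-mesh sketch: the process domain owns this enrichment step; the data
# domain only publishes raw events and is never asked to pre-transform them.

def enrich(event, reference_data):
    enriched = dict(event)  # never mutate the source-of-truth event
    enriched["region"] = reference_data.get(event["user_id"], "unknown")
    return enriched

user_events = [  # stand-in for a Kafka topic owned by the data domain
    {"user_id": "u1", "action": "login"},
    {"user_id": "u2", "action": "search"},
]
regions = {"u1": "us-east"}  # reference data held by the process domain

projection = [enrich(e, regions) for e in user_events]
print(projection)
```

In the real system a Kafka Connect sink or a KSQL query would play the role of this loop, materializing the enriched view into storage the process domain manages.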
[2017/2018] Introduction to Software Architecture (Ivano Malavolta)
This document provides an introduction to software architecture concepts. It defines software architecture as the selection of structural elements and their interactions within a system. Common architectural styles are described, including Model-View-Controller (MVC), publish-subscribe, layered, shared data, peer-to-peer, and pipes and filters. Tactics are introduced as design decisions that refine styles to control quality attributes. The document emphasizes that architectural styles solve recurring problems and promote desired qualities like performance, security, and maintainability.
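Of the styles listed above, pipes and filters is the easiest to show concretely: each filter maps a stream to a stream, and the pipeline is just their composition. This is a minimal illustrative sketch; the filter names are invented for the example.

```python
# Tiny pipes-and-filters sketch: filters are stream-to-stream functions,
# and the pipeline composes them in order.

def strip_blanks(lines):
    return (ln for ln in lines if ln.strip())

def uppercase(lines):
    return (ln.upper() for ln in lines)

def pipeline(source, *filters):
    stream = iter(source)
    for f in filters:
        stream = f(stream)  # each filter wraps the previous stream lazily
    return list(stream)

result = pipeline(["hello", "", "world"], strip_blanks, uppercase)
print(result)  # ['HELLO', 'WORLD']
```

Because filters only see a stream, they can be reordered, reused, or swapped independently, which is precisely the quality (maintainability) the style is chosen to promote.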
[2016/2017] Introduction to Software Architecture (Ivano Malavolta)
This presentation is about a lecture I gave within the "Software systems and services" immigration course at the Gran Sasso Science Institute, L'Aquila (Italy): http://cs.gssi.infn.it/.
http://www.ivanomalavolta.com
Revolutionary container based hybrid cloud solution for ML (Platform)
Ness' data science platform, NextGenML, puts the entire machine learning process (modelling, execution, and deployment) in the hands of data science teams.
The paradigm is built around collaboration on AI/ML, implemented with full respect for best practices and a commitment to innovation.
Kubernetes (on-prem) + Docker, Azure Kubernetes Service (AKS), Nexus, Azure Container Registry (ACR), GlusterFS
Workflow: Argo -> Kubeflow
DevOps: Helm, ksonnet, Kustomize, Azure DevOps
Code Management & CI/CD: Git, TeamCity, SonarQube, Jenkins
Security: MS Active Directory, Azure VPN, Dex (K8s) integrated with GitLab
Machine Learning: TensorFlow (model training, TensorBoard, serving), Keras, Seldon
Storage (Azure): Storage Gen1 & Gen2, Data Lake, File Storage
ETL (Azure): Databricks, Spark on K8s, Data Factory (ADF), HDInsight (Kafka and Spark), Service Bus (ASB), Lambda functions & VMs, Cache for Redis
Monitoring and Logging: Grafana, Prometheus, Graylog
Integration Patterns for Big Data Applications (Michael Häusler)
Big Data technologies like distributed databases, queues, batch processors, and stream processors are fun and exciting to play with. Making them play nicely together can be challenging. Keeping it fun for engineers to continuously improve and operate them is hard. At ResearchGate, we run thousands of YARN applications every day to gain insights and to power user facing features. Of course, there are numerous integration challenges on the way:
* integrating batch and stream processors with operational systems
* ingesting data and playing back results while controlling performance crosstalk
* rolling out new versions of synchronous, stream, and batch applications and their respective data schemas
* controlling the amount of glue and adapter code between different technologies
* modeling cross-flow dependencies while handling failures gracefully and limiting their repercussions
We describe our ongoing journey in identifying patterns and principles to make our big data stack integrate well. Technologies to be covered will include MongoDB, Kafka, Hadoop (YARN), Hive (TEZ), Flink Batch, and Flink Streaming.
One of the world's first complete online, web-based development frameworks to develop and deploy decision support systems, knowledge-based systems, web sites, and applications backed by expert system, case-based reasoning, and hybrid AI technologies
This document discusses application architecture and considerations for different layers including presentation, domain, and data source layers. It covers topics like layering, client types, content delivery, domain layer patterns like transaction script, domain model and table module. It also discusses data source layer patterns like gateway, active record and data mapper. Finally, it provides an example of implementing user signup in the Play! framework.
Apidays Paris 2023 - Productizing AsyncAPI for Data Replication and Changed D... (apidays)
Apidays Paris 2023 - Software and APIs for Smart, Sustainable and Sovereign Societies
December 6, 7 & 8, 2023
Productizing AsyncAPI for Data Replication and Changed Data Capture
Julien Testut, Senior Principal Product Manager, Oracle
Helixa uses serverless machine learning architectures to power an audience intelligence platform. It ingests large datasets and uses machine learning models to provide insights. Helixa's machine learning system is built on AWS serverless services like Lambda, Glue, Athena and S3. It features a data lake for storage, a feature store for preprocessed data, and uses techniques like map-reduce to parallelize tasks. Helixa aims to build scalable and cost-effective machine learning pipelines without having to manage servers.
The document discusses software design and implementation. It describes the design phase as involving high-level architectural design to develop the overall structure of a software program, and low-level detailed design to develop specific algorithms and data structures. The implementation phase includes activities like constructing software components, testing, developing prototypes, training, and installing the system. Good design principles include modularity, low coupling between modules, and high cohesion within modules.
In this introductory session, we dive into the inner workings of the newest version of Azure Data Factory (v2) and take a look at the components and principles that you need to understand to begin creating your own data pipelines. See the accompanying GitHub repository @ github.com/ebragas for code samples and ADFv2 ARM templates.
Towards Scalable Validation of Low-Code System Models: Mapping EVL to VIATRA ... (IncQuery Labs)
Presented at the LowCode Workshop 2021 at MODELS 2021 by Benedek Horváth. Authors are Qurat ul ain Ali, Benedek Horváth, Dimitris Kolovos, Konstantinos Barmpis and Ákos Horváth.
Towards Continuous Consistency Checking of DevOps Artefacts (IncQuery Labs)
Presented at the International Workshop DevOps@MODELS 2021 by Benedek Horváth. The authors are Alessandro Colantoni, Benedek Horváth, Ákos Horváth, Luca Berardinelli, and Manuel Wimmer (Johannes Kepler University Linz, IncQuery Labs).
The Genesis of Holistic Systems Engineering: Completeness and Consistency Man... (IncQuery Labs)
The document discusses the challenges of disconnected engineering silos and proposes a framework to address them. It presents the 3C challenge of completeness, correctness, and consistency when transferring systems engineering data to detailed design tools. The framework includes automated bridge tools that create a digital thread between tools, and digital thread analytics that analyze the connections and identify issues. It demonstrates connecting a systems engineering tool to an electrical design tool to map components and ensure signal allocations are consistent.
On 18th September, our CEO, István Ráth, joined by Enrique Krajmalnik from Zuken, presented at the 2021 INCOSE Western States Regional Conference. Their talk concentrated on the current challenges of systems engineering, promoting a much-needed paradigm shift and a novel, holistic approach.
The conceptual framework underpinning this novel concept is the combination of light-weight bridge tools, such as the E3.GENESYS Connector from Zuken, and digital thread analytics powered by our flagship product, the IncQuery Suite. This framework provides discipline-specific views of multi-domain engineering data, and powerful structural and numerical analysis to ensure completeness, correctness and consistency throughout the entire design process.
2. Who We Are
The IncQuery Group is an international team of engineering experts with a strong research and development background. We support systems engineers in several industries to create tailor-made solutions. Automotive professionals, aircraft engineers, and space engineers all trust us to make their systems work exceptionally well: safer, faster, and more reliable.
5. Digital Thread
More and more systems design scenarios comprise a high number of domains, which also display remarkable diversity in their nature.
Digital Threads
• Siloed multi-domain engineering data
• Digital tools and toolchains
• Lifecycle management
• Connections that bridge data across silos
Promises
• Increasing systems quality
• Reducing risks and the chance of errors
• Reducing overall production costs
6. The impact of disconnected silos
Various isolated disciplines: Systems Engineering, Mechanical, Electrical, ALM/PLM, …
As disconnected silos, what is the interface between architecture and disciplines?
- It is often a document produced from a discipline-specific tool
- Consequence: data re-entry and/or copy-paste
- No guarantee of completeness, correctness and consistency
A lot of time and money is wasted!
• No global consistency
• Difficult customization
• Data lock-in
• Vendor lock-in
10. The 3C Challenge
Completeness
• Make sure all my components and functions exist both in SE and ECAD
Correctness
• If component A is of type “PCB” (in SE), it should be mapped to a PCB device (in ECAD)
Consistency
• If a connection transfers an item between components A and B (in SE), there is a wire carrying the corresponding signal between devices A and B (in ECAD)
Running example: cable/harness design for a video drone model.
What causes 3C problems?
• Input error
• Forgetting/missing something
• Copy/paste error
• Incorrect mappings
• Roundtripping gone bad
• Change
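The three checks above can be sketched in a few lines of plain Python. This is a hypothetical, minimal illustration: the collection shapes (component lists, type dicts, `(A, B, item)` connection tuples) are assumptions for the example and not the actual GENESYS or E3.series data model.

```python
def check_completeness(se_components, ecad_devices, mapping):
    """Completeness: every SE component must map to an existing ECAD device."""
    return [c for c in se_components if mapping.get(c) not in ecad_devices]

def check_correctness(se_types, ecad_types, mapping):
    """Correctness: a component of type 'PCB' (in SE) must map to a PCB device (in ECAD)."""
    return [c for c, t in se_types.items()
            if t == "PCB" and ecad_types.get(mapping.get(c)) != "PCB"]

def check_consistency(se_connections, ecad_wires, mapping):
    """Consistency: each SE connection (A, B, item) needs an ECAD wire carrying
    the corresponding signal between the mapped devices (direction-agnostic)."""
    wires = {(a, b, s) for (a, b, s) in ecad_wires}
    wires |= {(b, a, s) for (a, b, s) in ecad_wires}
    return [(a, b, item) for (a, b, item) in se_connections
            if (mapping.get(a), mapping.get(b), item) not in wires]
```

Each function returns the list of violations, so an empty result means the corresponding "C" holds for the given mapping.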
13. First-class citizen
What can links connect?
• Full documents, e.g. linking between a serialized version of the video drone and cable/harness models as files.
• Repositories/large sets of data, e.g. linking between given versions of the video drone and cable/harness models stored in the silos.
• Low-level elements/objects, e.g. linking between components in a video drone model and wires in a cable/harness model.
14. Linking between Silos
• Multiplicity
• 1-to-1, 1-to-many, many-to-many
E.g. a different link for each component and wire, or item and signal (1-to-1); one link for all wires related to a component (1-to-many); for each component pair and the connections between them, a link to the relevant devices and wires between them (many-to-many).
15. Linking between Silos
• Multiplicity
• 1-to-1, 1-to-many, many-to-many
• Recognize broken links
• Automated (Immediate/Scheduled), Manual
E.g. a component that had a related wire is deleted, leaving its link broken.
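Broken-link detection can be illustrated with a small sketch, assuming links are stored as `(se_id, ecad_id)` pairs and each silo reports the set of element ids it still contains; the element names here are invented for the example.

```python
def find_broken_links(links, se_elements, ecad_elements):
    """Return every link whose endpoint was deleted in either silo."""
    return [(src, tgt) for (src, tgt) in links
            if src not in se_elements or tgt not in ecad_elements]
```

Invoking this check on every change event corresponds to automated (immediate) detection; invoking it from a periodic job corresponds to scheduled detection.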
16. Linking between Silos
• Multiplicity
• 1-to-1, 1-to-many, many-to-many
• Recognize broken links
• Automated (Immediate/Scheduled), Manual
• Managing Versions
• Supporting all versions, Supporting only published versions, Only latest revision
E.g. supporting all versions: for each change (Component → Component', Device → Device'), a new link (Link, Link', Link'') is created.
17. Linking between Silos
• Multiplicity
• 1-to-1, 1-to-many, many-to-many
• Recognize broken links
• Automated (Immediate/Scheduled), Manual
• Managing Versions
• Supporting all versions, Supporting only published versions, Only latest revision
E.g. supporting only published versions: links (Link, Link'') are only created when triggered by a manual publish of a new version, rather than for each change.
18. Linking between Silos
• Multiplicity
• 1-to-1, 1-to-many, many-to-many
• Recognize broken links
• Automated (Immediate/Scheduled), Manual
• Managing Versions
• Supporting all versions, Supporting only published versions, Only latest revision
E.g. only the latest revision: only Component', Device' and Link'' are kept; no version information is available.
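The three version-management policies on slides 16 to 18 can be contrasted in a small sketch. The policy names and the `(component_rev, device_rev)` event shape are assumptions made for illustration only.

```python
def links_for(policy, change_events, published=frozenset()):
    """Return the link set a given versioning policy would maintain.

    change_events: ordered list of (component_rev, device_rev) pairs.
    published: the subset of change_events that was explicitly published.
    """
    if policy == "all_versions":     # a new link for every change
        return list(change_events)
    if policy == "published_only":   # links only on a manual publish
        return [e for e in change_events if e in published]
    if policy == "latest_only":      # a single link, no history
        return change_events[-1:]
    raise ValueError(f"unknown policy: {policy}")
```

The trade-off is storage and maintenance cost against traceability: "all_versions" preserves the full history, "latest_only" discards it entirely.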
19. Managing links with data between Silos
View data & links
- Data and links are presented to end-users in a table/tree/diagram format
- Custom representation or existing tools
- Data and links are navigable
Querying data & links
- Simple/complex filtering
- Relation/graph-based querying
- Full-text search
- Hybrid
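Relation/graph-based querying over data and links can be illustrated with a plain-Python traversal; real deployments would use a dedicated query language such as VQL, SPARQL, or Gremlin, and the adjacency-dict graph shape here is an assumption for the example.

```python
def reachable(graph, start):
    """Return every node reachable from `start` by following links."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return seen
```

A query like "which ECAD wires are reachable from this SE component through the digital thread?" reduces to such a traversal plus a type filter on the result.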
20. Data visible from the Silos
No data replication from Silos
A wrapper is used for accessing the data inside the Silos via access requests.
Full data replication of data from Silos in native format
Data is stored in its native format (object blobs, files, etc.); cf. data warehousing, data lakes.
Full data replication from Silos
All data is extracted from the Silos to provide full access to the data.
Publishing a state of the data from Silos
Usually requires a manual step to publish a new version; all data of a given snapshot is accessible.
21. Mapping between Silos: Rule based
Precondition (in SE): there is a connection that transfers an item between components A and B.
Action (in ECAD): create a wire carrying the corresponding signal between devices A and B.
A transformation from the Systems Engineering silo to the ECAD silo.
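The precondition/action rule above can be sketched as a one-way transformation. The tuple shapes for connections and wires are invented for the example and do not reflect any particular tool's API.

```python
def apply_mapping_rule(se_connections, mapping, ecad_wires):
    """Precondition: a connection transfers an item between A and B (in SE)
    and no corresponding wire exists yet. Action: create a wire carrying the
    corresponding signal between the mapped devices (in ECAD).
    Returns the wires that were created."""
    existing = set(ecad_wires)
    created = []
    for (a, b, item) in se_connections:
        wire = (mapping[a], mapping[b], item)
        if wire not in existing:      # precondition: no wire yet
            ecad_wires.append(wire)   # action: create the missing wire
            existing.add(wire)
            created.append(wire)
    return created
```

Because the precondition stops matching once the wire exists, re-running the rule is idempotent, which is what makes rule-based synchronization safe to repeat.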
22. Automated creation of links
Handover automation: bridges capable of moving data, metadata, and documents between tools automatically. This helps replace redundant and error-prone data re-entry with automated import/export steps.
Requires customized transformation capabilities:
- Model-to-model, model-to-text, text-to-model
- Possibility to create custom rule definitions
- Diagram-based editor
- Text-based editor
24. Comparison Table*

| Tool | Linking | Querying | Transformation | Data Storage |
| Syndeia™ (Intercax) | Generic links with tool-specific endpoints | Gremlin | Rule-based synchronization | No replication (links only) |
| Smartfacts | OSLC linking support | Traceability coverage queries | ? (no information) | No replication |
| SBE Vision | Generic links with tool-specific endpoints | “Semantic search” (Elasticsearch) | ? (no information) | Full replication (ontology-based) |
| The Reuse Company – Engineering Studio | OSLC KM, interface modelling | Rule-based validation | Rule-based transformation framework | Hybrid replication (latest version) |
| IncQuery Cloud | Generic links based on URLs | Elasticsearch, SPARQL, VQL | Tool-specific bridges | Full replication (multiple) |

*Based on data accessible from the websites of the given tools as of 2023/04/12
25. Addressing the 3C Analysis: a case study with Zuken E3.GENESYS and IncQuery
26. Our take
• Discipline-specific, automated bridge tools that create the digital thread
• An overlaid layer of digital thread analytics that can expose parts of the digital thread depending on the need/use-case
• Vendor-neutral, federated tool integration
• The single source of truth is NOT a single model: it is the “model of models”
• Digital thread analytics can
• look at links AND look into models
• semantically analyze both
• be holistic: adaptable to all tools in the toolchain
29. IncQuery Suite
A new platform for digital engineering automation.
• Creates a unified, searchable, and analyzable representation of your complete digital thread: the knowledge graph
• Automated Quality Gates: detailed validation reports and analysis dashboards that integrate seamlessly with modern, web-based tools
• Handover Automation: light-weight bridge tools that eliminate copy-paste and data re-entry
• Powered by digital thread analytics: queries and mapping rules that can seamlessly cross tool (silo) boundaries
Available as IncQuery Suite DESKTOP, VALIDATOR, and CLOUD.
30. Main features
IncQuery Validator: a DevOps-ready automated quality gate, providing detailed model quality reports based on standard and custom rules.
- Works with popular tools like Enterprise Architect and MagicDraw/Cameo out of the box.
- Runs as a standalone application or as part of a DevOps pipeline.
- Provides a convenient extension framework to define custom validation rules for models, which we rely on for the GENESYS adaptation.
- Supports centrally shared / version-managed projects by integrating with Teamwork Cloud, or file-based VCSs such as Git/SVN.
- Helps Systems Engineers assess key quality-related metrics of their work, independently of the authoring environment they work in.
- Helps downstream stakeholders (e.g. QA Engineers, Software Architects, Electrical Engineers, …) automatically assess the quality of an inbound systems architecture model, based on rules such as the library provided by the SAIC Digital Engineering Validation Tool, or 3C analysis.
32. Validation report for 3C Analysis
• Results after the initial import performed with the GENESYS.E3 Connector
• Partially complete (Subsystem mapping is disabled by default)
• Inconsistent signal allocations: “If a connection transfers an item between components A and B (in SE), there is a wire carrying the corresponding signal between devices A and B (in ECAD)”
34. Validation Report for 3C Analysis
• Re-run the validation
• Result: allocation problems resolved
35. Progress tracking
• Historical analytics as the “progress bar” of a complex engineering process
• The Model Integrator / Reviewer can follow the “Transition to Detailed Design” process on a version control dashboard
• Track progress via KPIs as mapping completeness is improved
• Identify and fix correctness issues quickly
36. Takeaway
• Creating the digital thread requires a lot of underlying methodologies and technologies to work in harmony
• There is no single golden solution
• Define your requirements carefully
• Completeness, correctness, consistency analysis
• Version control
• Link management
• Handover automation
• Access control
• Model validation
• Etc.
• Be open to share your successes and failures
A bridge can address:
- Correctness
- Completeness (to a certain degree)
- Consistency – not really, as there are several additional, manual steps to be made by the electrical engineer that are specific to the ECAD domain and cannot be automated.
The “Single Source of Truth” in reality is not a single model; it is the “model of models”. Therefore, to ensure that consistency can be checked and maintained throughout the entire digital thread, we need an additional solution that:
- can look at links between models and can look into models,
- analyzes both in a semantically meaningful way,
- is holistic in terms of the complete digital thread, i.e. adaptable to other tools as well (e.g. ALM/PLM).
Now let’s look at how we can build an analytics dashboard for the 3C validation challenge of the “transition to detailed design” scenario that Enrique introduced earlier.
The dashboard combines numerical charts, tables, hypertext, web components, etc. In fact, the table shown here contains hyperlinks which navigate directly into the respective tools, in this case GENESYS or E3.series, so that the electrical engineer can fix problems quickly. Everything is organized into interactive documents which can be exported to standard formats such as PDF or published into platforms such as Confluence.
After realizing the issues that need to be addressed, in Step 2 the electrical engineer proceeds to create a wiring diagram and add signal carriage information to their design.
In the final, third step of our demonstration sequence, the electrical engineer uses the IQ MA again to validate that, indeed, as a result of their actions, the number of inconsistencies reported has decreased.
Going further, and looking at the whole scope of the transition process, this 3C Analysis dashboard can be enriched with historical capabilities, which enable the electrical engineer or a model reviewer to keep track of progress and accurately assess the remaining time needed to complete the transition. In other words, the dashboard can act as a progress bar of a very complex engineering process, showing not just the percentage of correctly mapped model elements, but also when errors were introduced and fixed. All of these charts can be exported into Excel, together with the underlying data, at the click of a button.