Alluxio Day x APAC Modern Data Stack
September 22, 2022
For more on Alluxio Day: https://www.alluxio.io/alluxio-day/
For more Alluxio events: https://alluxio.io/events/
Speaker: Yingjun Wu (Founder & CEO, RisingWave Labs)
In this presentation, I will talk about the birth, the growth, and the prosperity of modern data stack. I will show you why modern data stack is more than a buzzword, and how it will possibly evolve in the next couple of years.
Property graph vs. RDF Triplestore comparison in 2020 - Ontotext
This presentation ranges from an introduction to what graph databases are, through a table comparing RDF and property graphs, to two diagrams mapping the market circa 2020.
Leveraging Generative AI to Accelerate Graph Innovation for National Security with Neo4j and AWS - Neo4j
Nick Miller, US Federal Team Lead, AWS Marketplace
Government agencies are undergoing digital transformation initiatives to deliver improved customer experiences. Generative AI is a promising technology that may accelerate this transformation for customers. Come hear how AWS and Neo4j are partnered to help government agencies more rapidly adopt and deliver the power and promise of emerging GenAI capabilities to government missions.
Easily Identify Sources of Supply Chain Gridlock - Neo4j
Join us for this 20-minute webinar to hear from Nick Johnson, Product Marketing Manager for Graph Data Science, as he explains the fundamentals of Neo4j Graph Data Science and its applications in optimizing supply chain management. Discover how leveraging graph analytics can help you identify bottlenecks, reduce costs, and streamline your supply chain operations more efficiently.
This workshop presentation from Enterprise Knowledge team members Joe Hilger, Founder and COO, and Sara Nash, Technical Analyst, was delivered on June 8, 2020 as part of the Data Summit 2020 virtual conference. The 3-hour workshop provided an interdisciplinary group of participants with a definition of what a knowledge graph is, how it is implemented, and how it can be used to increase the value of your organization’s data. This slide deck gives an overview of the KM concepts that are necessary for the implementation of knowledge graphs as a foundation for Enterprise Artificial Intelligence (AI). Hilger and Nash also outlined four use cases for knowledge graphs, including recommendation engines and natural language query on structured data.
The perfect couple: Uniting Large Language Models and Knowledge Graphs for En... - Neo4j
Large Language Models (LLMs) are amazing, but they are also black-box models that often fail to capture and accurately represent factual knowledge. Knowledge graphs, by contrast, are structured knowledge models that represent knowledge explicitly and even allow us to detect implicit relationships. In this talk we will demonstrate how LLMs can be improved by knowledge graphs, and how LLMs can augment knowledge graphs. A perfect couple!
Knowledge Graphs and Generative AI
Dr. Katie Roberts, Data Science Solutions Architect, Neo4j
It’s no secret that Large Language Models (LLMs) are popular right now, especially in the age of Generative AI. LLMs are powerful models that enable access to data and insights for any user, regardless of their technical background, however, they are not without challenges. Hallucinations, generic responses, bias, and a lack of traceability can give organizations pause when thinking about how to take advantage of this technology. Graphs are well suited to ground LLMs as they allow you to take advantage of relationships within your data that are often overlooked with traditional data storage and data science approaches. Combining Knowledge Graphs and LLMs enables contextual and semantic information retrieval from both structured and unstructured data sources. In this session, you’ll learn how graphs and graph data science can be incorporated into your analytics practice, and how a connected data platform can improve explainability, accuracy, and specificity of applications backed by foundation models.
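The grounding idea described above can be sketched with a toy triple store: instead of asking a model to answer from parametric memory alone, relevant facts are retrieved from a graph and supplied as explicit context. Everything below (the triples, the `retrieve_context` and `build_prompt` helpers) is a hypothetical illustration, not Neo4j's actual API.

```python
# Toy knowledge graph as (subject, predicate, object) triples.
# Hypothetical sketch of "grounding": retrieve graph facts about an
# entity in a question and hand them to an LLM as explicit context.
TRIPLES = [
    ("Neo4j", "is_a", "graph database"),
    ("Neo4j", "queried_with", "Cypher"),
    ("Cypher", "is_a", "query language"),
]

def retrieve_context(entity: str) -> list[str]:
    """Collect facts where the entity appears as subject or object."""
    facts = []
    for s, p, o in TRIPLES:
        if entity in (s, o):
            facts.append(f"{s} {p.replace('_', ' ')} {o}")
    return facts

def build_prompt(question: str, entity: str) -> str:
    """Prepend retrieved facts so the model answers from them, not memory."""
    context = "\n".join(retrieve_context(entity))
    return f"Answer using only these facts:\n{context}\n\nQ: {question}"

print(build_prompt("What is Neo4j queried with?", "Neo4j"))
```

Because the answer is traceable to specific triples, this sketch also hints at how a graph improves explainability: each retrieved fact is a citable edge.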
Scaling the mirrorworld with knowledge graphs - Alan Morrison
After registration at https://www.brighttalk.com/webcast/9273/364148, you can view the full recording, which begins with Scott Abel's intro for a few minutes, then my talk for 20 minutes, and then Sebastian Gabler's. First presented on October 23 at an SWC webinar.
Conclusions:
(1) The mirrorworld (a world of digital twins, which will be 25 years in the making, according to Kevin Kelly) will require semantic knowledge graphs for interaction and interoperability.
(2) This fact implies massive future demand for knowledge graph technology and other new data infrastructure innovations, comparable to the scale of oil & gas industry infrastructure development over 150 years.
(3) Conceivably, knowledge graphs could address a $205 billion market demand by 2021 spanning graph databases, information management, digital twins, conversational AI, and virtual assistants, and serve as knowledge bases and accelerated training for deep learning. The problem is that awareness of the technology is low, and the semantics community that understands it is still quite small.
(4) Over the coming decades, knowledge graphs promise both scalability and substantial efficiencies in enterprises. But lack of awareness of their potential, and of how to harness it, will continue to be a stumbling block to adoption.
Accessories Shop Management System in Advanced Java - Md. Mahbub Alam
The company maintains four registry books:
• Sales registry book
• Servicing registry book
• Download registry book
• Laser registry book
The Data Distribution Service for Real-Time Systems (DDS) is an Object Management Group (OMG) standard for publish/subscribe communication, designed to address the needs of a large class of mission- and business-critical distributed real-time systems and systems of systems. The DDS standard was formally adopted in 2004 and, in less than five years from its inception, experienced swift adoption across a wide variety of application domains. These domains are characterized by the need to distribute high volumes of data with predictably low latencies: radar processors, flying and land drones, combat management systems, air traffic management, high-performance telemetry, large-scale supervisory systems, and automated stock and options trading. Along with wide commercial adoption, the DDS standard has been recommended and mandated as the technology for real-time data distribution by key administrations worldwide, such as the US Navy, the DoD Information Technology Standards Registry (DISR), the UK MoD, and EUROCONTROL.
This two-part tutorial will cover most of the key aspects of DDS to ensure that you can proficiently start using it to design or develop your next system. In brief, this tutorial will get you jump-started with DDS.
Informatica provides the market's leading data integration platform. Tested on nearly 500,000 combinations of platforms and applications, the platform interoperates with the broadest possible range of disparate standards, systems, and applications. This unbiased and universal view makes Informatica unique in today's market as a data integration leader, and the ideal strategic platform for companies looking to solve data integration issues of any size.
With the world’s supply chain system in crisis, it’s clear that better solutions are needed. Digital twins built on knowledge graph technology allow you to achieve an end-to-end view of the process, supporting real-time monitoring of critical assets.
The Data Distribution Service (DDS) is a standard for ubiquitous, interoperable, secure, platform independent, and real-time data sharing across network connected devices. DDS is today used in a large class of applications, such as, Power Generation, Large Scale SCADA, Air Traffic Control and Management, Smart Cities, Smart Grids, Vehicles, Medical Devices, Simulation, Aerospace, Defense and Financial Trading.
Unlike traditional message-centric technologies, DDS is data-centric: the emphasis is on seamless (user-defined) data sharing as opposed to message delivery. When embracing DDS and data-centricity, therefore, data modeling becomes a key step in the design of a distributed system.
This webcast will (1) explain the role and scope of data modeling in DDS, (2) introduce the techniques at the foundation of effective and extensible Data Models, and (3) summarize the most common DDS Data Modeling Idioms.
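DDS data models are normally declared in OMG IDL and managed by DDS middleware. As a language-neutral sketch (plain Python, no DDS library), the data-centric idea can be illustrated like this: the middleware maintains a shared "global data space" of keyed instances whose latest state readers can query, rather than a queue of messages to drain. The class and field names below are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of DDS data-centricity (no real DDS API involved):
# the middleware keeps the latest state of each keyed *instance*, so
# readers ask "what is the current value?" instead of draining a queue.

@dataclass
class SensorReading:          # a "topic type"; sensor_id acts as the key
    sensor_id: str
    temperature: float

class GlobalDataSpace:
    """Last-value cache per instance key -- the core data-centric idea."""
    def __init__(self):
        self._instances = {}

    def write(self, sample: SensorReading):
        self._instances[sample.sensor_id] = sample   # update instance state

    def read(self, sensor_id: str) -> SensorReading:
        return self._instances[sensor_id]            # current state, not a message

space = GlobalDataSpace()
space.write(SensorReading("radar-1", 21.5))
space.write(SensorReading("radar-1", 22.0))  # supersedes the previous sample
print(space.read("radar-1").temperature)     # latest value only
```

This is why the key (here `sensor_id`) is such a central modeling decision in DDS: it defines what counts as one instance whose state is shared.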
Unleash the Power of Neo4j with GPT and Large Language Models: Harmonizing Concepts from Cancer Research Data - Neo4j
Robert Chang, Modeling & Analytics Domain Leader, FI Consulting
Dr. Mark Jensen, Director of Data Science, Frederick National Laboratory for Cancer Research
Neo4j is a powerful tool for managing and curating large amounts of text data. We demonstrate how Neo4j works seamlessly with large language models and GPT to harmonize data and facilitate the querying of information at scale.
Big Data Monetization - The Path From Internal to External - cVidya Networks
"How can big data help us accelerate external monetization?"
A presentation by Hezi Zelevski, VP Corporate Development at cVidya
Presented at the "Monetizing Big Data in Telecoms World Summit 2015" conference in Singapore on April 20-21, 2015.
Turning Data into Business Value with a Modern Data Platform - Cloudera, Inc.
3 Things to Learn About:
-Real-time analytics and data in motion
-Self-service access for SQL analysts and data scientists alike
-Public cloud and hybrid infrastructure
Neo4j is a powerful and expressive tool for storing, querying and manipulating data. However, modeling data as graphs is quite different from modeling data in a relational database. In this talk, Michael Hunger will cover modeling business domains using graphs and show how they can be persisted and queried in Neo4j. We'll contrast this approach with the relational model, and discuss the impact on complexity, flexibility and performance.
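As a rough illustration of the contrast (a hand-rolled sketch, not Neo4j itself), a property-graph model stores relationships as first-class, directly traversable links, whereas the relational model recovers them at query time via joins. The node and relationship names below are invented for the example.

```python
# Minimal hand-rolled property graph: nodes with labels/properties,
# relationships as typed, directed edges (hypothetical, not Neo4j's API).
nodes = {
    1: {"label": "Person", "name": "Alice"},
    2: {"label": "Person", "name": "Bob"},
    3: {"label": "Company", "name": "Acme"},
}
edges = [
    (1, "WORKS_AT", 3),
    (2, "WORKS_AT", 3),
]

def neighbors(node_id, rel_type):
    """Follow outgoing edges of one type -- a single hop, no join needed."""
    return [dst for src, t, dst in edges if src == node_id and t == rel_type]

def colleagues(person_id):
    """People working at any of the same companies (a two-hop traversal)."""
    result = set()
    for company in neighbors(person_id, "WORKS_AT"):
        for src, t, dst in edges:
            if t == "WORKS_AT" and dst == company and src != person_id:
                result.add(nodes[src]["name"])
    return result

print(colleagues(1))  # -> {'Bob'}
```

In a relational schema the same question would be a self-join through an employment table; in the graph model it is simply two hops along named relationships.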
Knowledge, Graphs & 3D CAD Systems - David Bigelow @ GraphConnect Chicago 2013 - Neo4j
Global design and manufacturing companies spend a lot of time looking in the rear-view mirror at their product design and configuration requirements in order to determine what NOT to do in the future. A lot of time and money is spent tracking information related to design validation, testing and warranty data. Understanding history is important: it often repeats, and the bad decisions of the past need to be avoided.
But what about the GOOD decisions that have been made? Those are just as, if not more, important to a design and configuration process! Where do those get stored?! How are they measured?! Most importantly, HOW ARE THEY ENFORCED?! Specifically, how do you help someone in a company make the RIGHT decisions, not just be fearful of repeating a BAD one?!
This is a complex problem for any design, engineering or IT department, and it gets even more complex when you are required to incorporate 3D CAD (Computer Aided Design) systems into the mix. If 3D parts and assemblies do not physically connect together properly, or were never supposed to work logically together given the customer application, you will lose business. The solution is to rethink how a company not only captures knowledge about failures, but also starts to capture successes. The ultimate goal is to help design and engineering staff make the right decisions first, and to guide them through valid relations and requirements with ease so they are never distracted by bad decisions, or forced to address a potentially bad decision before it is made.
This is where graph databases are poised to address a very complex problem in a simple, easy-to-understand way. Two problems arise from this:
1) how to document the relationships, rules, dependencies and logic in the graph structure, and
2) how to guide/navigate different role-specific-users through that process safely/accurately.
This presentation will cover the real-world complexities of defining, validating, documenting and enforcing mechanical 3D CAD product configuration rules and structures. Demonstrations will show how different roles within the company (e.g. configuration manager, engineer, sales) can interface with the same graph database through multiple interfaces (e.g. thick client, thin client, and web) to be interactively guided to a proper solution the first time.
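The rule-enforcement idea can be sketched as a graph of parts whose edges carry the rule type, plus a validator that walks them. All part names and rule labels below are hypothetical illustrations, not the presenter's actual model.

```python
# Hypothetical part-compatibility graph: edges carry the rule type,
# so both good decisions (COMPATIBLE_WITH) and bad ones (EXCLUDES)
# are captured as first-class relationships.
rules = [
    ("motor-A", "COMPATIBLE_WITH", "mount-1"),
    ("motor-B", "COMPATIBLE_WITH", "mount-2"),
    ("motor-A", "EXCLUDES", "mount-2"),   # never valid together
]

def check_configuration(parts):
    """Return the rule violations for a chosen set of parts."""
    chosen = set(parts)
    violations = []
    for a, rule, b in rules:
        if rule == "EXCLUDES" and a in chosen and b in chosen:
            violations.append(f"{a} excludes {b}")
    return violations

print(check_configuration(["motor-A", "mount-2"]))  # -> ['motor-A excludes mount-2']
print(check_configuration(["motor-A", "mount-1"]))  # -> []
```

A guided configurator would run such a check interactively, steering each role toward the COMPATIBLE_WITH edges before an EXCLUDES violation can ever be committed.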
Timing verification of real-time automotive Ethernet networks: what can we ex... - RealTime-at-Work (RTaW)
Switched Ethernet is a technology that is profoundly reshaping automotive communication architectures, as it did in other application domains such as avionics with the use of AFDX backbones. Early-stage timing verification of critical embedded networks typically relies on simulation and worst-case schedulability analysis. When the modeling power of schedulability analysis is not sufficient, there are typically two options: either make pessimistic assumptions or ignore what cannot be modeled. Both options are unsatisfactory, because they are either inefficient in terms of resource usage or potentially unsafe. To overcome these issues, we believe it is good practice to use simulation models, which can be more realistic, alongside schedulability analysis. The two basic questions that we aim to study here are what we can expect from simulation and how to use it properly. This empirical study explores these questions on realistic case studies and provides methodological guidelines for the use of simulation in the design of switched Ethernet networks. A broader objective of the study is to compare the outcomes of schedulability analyses and simulation, and to conclude about the scope of usability of simulation in the design of critical Ethernet networks.
The presentation was given at the InfinIT conference SummIT 2013, held on 22 May 2013 at Axelborg in Copenhagen. Read more about the conference here: http://www.infinit.dk/dk/arrangementer/tidligere_arrangementer/summit_2013.htm
erocci - a scalable model-driven API framework, OW2con'16, Paris - OCCIware
REST APIs are becoming the most common technology for distributed applications. When it comes to designing and implementing such APIs, the heterogeneity of the technologies used to design and describe them can make integration, and even development, a nightmare.
erocci gives developers a simple, standard way to describe these APIs, letting best-of-breed model-driven engineering technology do all the boilerplate work for you.
erocci integrates easily with existing APIs as it builds on the following standards:
* HTTP / REST
* Swagger/OpenAPI for API description
* Open Cloud Computing Interface for data model
In the presentation, we will explain the use of erocci and its extension mechanisms.
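The "model first, boilerplate generated" idea can be sketched in a few lines. This is a toy in Python, unrelated to erocci's actual implementation; the resource model is only loosely OCCI-flavoured and every name in it is invented. From a small model, the REST route table is derived and payloads are validated against declared attributes.

```python
# Toy model-driven API: a hypothetical resource model from which
# CRUD routes and payload validation are derived automatically.
MODEL = {
    "compute": {"attributes": {"cores", "memory_gb"}},
    "storage": {"attributes": {"size_gb"}},
}

def routes(model):
    """Derive the REST route table from the model -- the boilerplate part."""
    table = []
    for kind in model:
        table.append(("GET",  f"/{kind}"))
        table.append(("POST", f"/{kind}"))
        table.append(("GET",  f"/{kind}/{{id}}"))
    return table

def validate(kind, payload):
    """Reject attributes the model does not declare."""
    unknown = set(payload) - MODEL[kind]["attributes"]
    if unknown:
        raise ValueError(f"unknown attributes for {kind}: {sorted(unknown)}")
    return True

print(routes(MODEL))
validate("compute", {"cores": 4, "memory_gb": 8})  # accepted
```

Adding a new resource kind to the model instantly yields its routes and validation, which is exactly the boilerplate a model-driven framework takes off the developer's hands.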
An interactive multimedia course integrating the digital whiteboard, intended for pupils of the 4.A.E.F.
An example of a didactic digital resource created with Promethean's ActivInspire software.
Frame latency evaluation: when simulation and analysis alone are not enough - RealTime-at-Work (RTaW)
This talk is about temporal verification in real-time communication systems. Early in the design cycle, the two main approaches for verifying timing constraints and dimensioning the networks are worst-case schedulability analysis and simulation. The aim of the talk is to demonstrate that both provide complementary results and that, most often, none of them alone is sufficient. In particular, it will be shown that response time distributions that can be derived from simulations cannot replace worst-case analysis. This will be done on automotive case-studies using RTaW analysis and simulation software tools.
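Why a simulated response-time distribution cannot replace worst-case analysis can be seen in a deliberately tiny example with entirely made-up numbers: the worst case occurs only at one critical phasing of the frames, which random sampling of release offsets never hits exactly.

```python
import random

# Hypothetical toy: a frame's response time depends on its release
# offset relative to an interfering frame; the true worst case
# (the "critical instant") is at offset 0 exactly.
def response_time_ms(offset_ms: float) -> float:
    return 30.0 - 0.1 * offset_ms if offset_ms < 30 else 27.0

WCRT_MS = 30.0  # the bound a worst-case schedulability analysis would return

random.seed(42)
# Simulation samples offsets at random, but never exactly 0.
observed = [response_time_ms(random.uniform(0.5, 100.0)) for _ in range(10_000)]

print(f"simulated max: {max(observed):.2f} ms, analytic WCRT: {WCRT_MS} ms")
# The simulated maximum stays strictly below the analytic worst case,
# because random offsets miss the critical instant.
```

However many samples are drawn, the observed maximum here underestimates the true worst case, which is the talk's point: simulation and worst-case analysis answer complementary questions.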
Timing verification of automotive communication architecture using quantile ... - RealTime-at-Work (RTaW)
Slides of a paper at ERTSS'2014 co-authored by Nicolas Navet (University of Luxembourg), Shehnaz Louvart (Renault), Jose Villanueva (Renault), Sergio Campoy-Martinez (Renault) and Jörn Migge (RealTime-at-Work). Early-stage timing verification on CAN traditionally relies on simulation and schedulability analysis, also known as worst-case response time (WCRT) analysis. Despite recent progress, the latter technique remains pessimistic, especially in complex networking architectures with gateways and heterogeneous communication stacks. Indeed, there are practical cases where no exact WCRT analysis is available and merely upper bounds on the response times can be derived, on the basis of which unnecessarily conservative design choices may be made. Simulation, on the other hand, does not provide any guarantees per se and, in the context of critical networks, should only be used along with an adequate methodology. In this paper, we argue for the use of quantiles of the response time distribution as performance metrics providing an adjustable trade-off between safety and resource usage optimization. We discuss how the exact value of the quantile to consider should be chosen with regard to the criticality of the frames, and illustrate the approach on two typical automotive use-cases.
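The quantile metric itself is straightforward: for a target probability p tied to the frame's criticality, take the empirical p-quantile of the simulated response-time distribution instead of the (possibly pessimistic) worst-case bound. A minimal sketch with made-up response times:

```python
import math
import random

def empirical_quantile(samples, p):
    """Smallest value x such that at least a fraction p of samples are <= x."""
    ordered = sorted(samples)
    k = math.ceil(p * len(ordered)) - 1
    return ordered[k]

# Made-up response times (ms): mostly short, with a heavier tail.
random.seed(1)
times = [random.expovariate(1 / 5.0) for _ in range(10_000)]

q50  = empirical_quantile(times, 0.50)
q999 = empirical_quantile(times, 0.999)
print(f"median: {q50:.2f} ms, 99.9% quantile: {q999:.2f} ms")
# A highly critical frame would be dimensioned against q999 (or higher),
# a less critical one against a lower quantile -- the adjustable trade-off.
```

Raising p moves the metric toward the worst case (more safety, more resources); lowering it trades safety margin for tighter dimensioning, which is exactly the knob the paper proposes.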
Model Transformation: A survey of the state of the art - Tom Mens
Presentation about model transformation at the international summer school on Model-Driven Development for Distributed, Real-Time and Embedded Systems (MDD4DRES, 2009, Aussois, France).
This presentation was delivered to undergraduate students under the university relations programme of 99X Technology. It covers basic concepts of the Unified Modelling Language, including some hands-on activities.
This presentation covers the following topics:
Introduction
Data design
Software architectural styles
Architectural design process
Assessing alternative architectural designs
Together, these topics cover architectural design.
Software engineering is the application of engineering principles to the design, development, and maintenance of software. It was introduced to address the problem of low-quality software projects.
Third Assignment: Describe in 100 – 200 words an application with which you are familiar (.docx)randymartin91030
Third Assignment
Describe in 100 – 200 words an application with which you are familiar. This should be an application with which other students and the course instructor are likely to be familiar. An example would be Microsoft Word. Then, select one of the architectural design styles given in the presentation on Architectural Design. Explain why this style is appropriate for the application you described. Then apply this style to the application and explain the result in enough detail that your fellow students are likely to understand.
Organization of your submission
Third Assignment
Your name
Submission Date
Application Description
Style you have selected
Why this style is appropriate for this application
The application’s architecture using this style
Explanation of this architecture (show how some common tasks for this application might be performed using this architecture)
Grading Rubric (Criterion: Points)
Application description is well-organized: 5
Style choice is one of the styles described: 2
Style choice is effectively justified: 8
Presented architecture uses the selected style: 3
Presented architecture is complete: 4
Architecture is described clearly: 8
Chapter 7:
Design: Architecture and Methodology
Design Topics Covered
Architectural vs. detailed design
“Common” architectural styles, tactics, and reference architectures
Basic techniques for detailed design
Basic issues with user-interface design
Design
Starts from the requirements (functional and other non-functional characteristics).
How is the software solution going to be structured?
What are the main (functional) components?
Often derived directly from the requirements' functionalities (use cases).
How are these components related?
Possibly re-organize the components (composition/decomposition).
Two main levels of design:
Architectural (high level)
Detailed design
How should we depict the design (notation/language)?
Relationship between Architecture and Design
Detailed design comes from both the requirements and the architecture.
Software Architecture
Structure(s) of the solution, comprising:
Major software elements
Their externally visible properties
Relationships among elements
Every software system has an architecture.
May have multiple structures!
Multiple ways of organizing elements, depending on the perspective
External properties of components (and modules)
Component (module) interfaces
Component (module) interactions, rather than internals of components and modules
Views and Viewpoints
View – representation of a system structure
4+1 views (by Kruchten)
Logical (OO decomposition – key abstractions)
Process (run-time, concurrency/distribution of functions)
Subsystem decomposition
Physical architecture
+1: use cases
Other classification (Bass, Clements, Kazman)
Module
Run-time
Allocation (mapping to development environment)
Different views for different people
Architectural Styles/Patterns
Pipes and filters, …
Modeling is a way of thinking about the…saman zaker
Modeling is a way of thinking about problems using models organized around real-world ideas. It builds an understanding of the various interrelationships within a system and is often the fastest way to delineate complex relationships.
If you're new to UML, our UML tutorial can get you on the right path. Learn more about what The Unified Modeling Language is, what it does, and why it's important.
Detailed description and introduction to UML (Unified Modeling Language): structural and behavioral modeling, class diagrams, object diagrams, and notation for building all kinds of UML diagrams.
Welcome to ViralQR, your best QR code generatorViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Since our inception, we have served many clients, providing QR codes for marketing, service delivery, and feedback collection across various industries. Our platform has been recognized for its ease of use and powerful features, which help businesses create QR codes.
Our Services
ViralQR offers a comprehensive suite of services tailored to your needs:
Static QR Codes: Create free static QR codes that can store information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, ViralQR offers a 14-day free trial, an excellent opportunity for new users to get a feel for the platform. From there, one can easily subscribe and experience the full power of dynamic QR codes. The subscription plans are priced flexibly so that businesses of every size can afford to benefit from our service.
Why choose us?
ViralQR provides services for marketing, advertising, catering, retail, and more. QR codes can be posted on flyers, packaging, merchandise, and banners, or substitute for cash and cards in a restaurant or coffee shop. Integrating QR codes into your business improves customer engagement and streamlines operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools that give a clear view of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
Thank you for choosing ViralQR; we offer nothing but the best QR code services to meet diverse business needs!
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and on application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
2. The Speaker: Pascal Roques
• Senior Consultant & Trainer, 25 years of experience in modeling (SADT, OMT, UML, SysML)
• OMG Certified on UML2 and SysML
• Co-founder of association
• Author of UML best-sellers in France… and of the first French SysML book
pascal.roques@prfc.fr
3. What is SysML?
• SysML™ is a general-purpose graphical
modeling language for specifying,
analyzing, designing, and verifying
complex systems that may include
hardware, software, information,
personnel, procedures, and facilities
• It is a specialized UML profile targeted to
system engineering
4. Why Model?
• In all domains, those building complex
systems have been modelling for ages!
• To harness complexity
• To reduce risks
• To communicate!
• Abstraction
• Hide details
• …
10. The Four "Pillars" of SysML
www.omgsysml.org/
11. SysML and Requirements
• SysML defines elements for modeling
requirements and their relationships
• including relationships to other artifacts such
as test cases or blocks
12. Requirements in SysML (1/3)
• A requirement specifies a capability or
condition that must (or should) be
satisfied
• A requirement may specify a function that
a system must perform or a performance
condition a system must achieve
• Use cases are typically effective for
capturing the functional requirements, but
not as well for non-functional
• The incorporation of text-based requirements
into SysML effectively accommodates a broad
range of requirements
13. Requirements in SysML (2/3)
• SysML provides modeling constructs to
represent text-based requirements and
relate them to other modeling elements
• The requirements diagram can depict the
requirements in graphical, tabular, or tree
structure format
• A requirement can also appear on other
diagrams to show its relationship to other
modeling elements
• The requirements modeling constructs are
intended to provide a bridge between
traditional requirements management tools
and the SysML models
14. Requirements in SysML (3/3)
• A standard requirement includes
properties to specify its unique identifier
and text requirement
• Additional properties such as verification status,
can be specified by the user
• Several requirements relationships are
specified that enable the modeler to relate
requirements to other requirements as well
as to other model elements
• These include relationships for defining a
requirements hierarchy, deriving requirements,
satisfying requirements, verifying requirements,
and refining requirements
15. Composite Requirement
• A Composite Requirement can contain sub
requirements in terms of a requirements
hierarchy, specified using the namespace
containment mechanism
• A composite requirement may state that the
system shall do A and B and C, which can be
decomposed into the child requirements that
the system shall do A, the system shall do B,
and the system shall do C
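The containment pattern above can be sketched as follows. This is an illustrative sketch, not a SysML tool API; the class shape and requirement ids are invented.

```python
# Sketch: a composite requirement containing child requirements
# via namespace containment, as on the slide.
class Requirement:
    def __init__(self, rid, text):
        self.id, self.text = rid, text
        self.children = []   # namespace containment

    def contain(self, child):
        self.children.append(child)
        return child

# "The system shall do A and B and C" decomposed into children.
parent = Requirement("R1", "The system shall do A and B and C")
for i, part in enumerate("ABC", 1):
    parent.contain(Requirement(f"R1.{i}", f"The system shall do {part}"))

assert [c.id for c in parent.children] == ["R1.1", "R1.2", "R1.3"]
```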
16. Requirement Reuse
• There is a real need for Requirement
reuse across product families and projects
• Typical scenarios are regulatory, statutory, or
contractual requirements that are applicable
across products and/or projects and
requirements that are reused across product
families
• SysML introduces the concept of a slave
requirement
17. Derive Relationship
• The derived requirements generally
correspond to requirements at the next
level of the system hierarchy
18. Satisfy Relationship
• The satisfy relationship describes how a
design or implementation model satisfies
one or more requirements
19. Verify Relationship
• The verify relationship defines how a test
case or other model element verifies a
requirement
20. Refine Relationship
• The refine requirement relationship can be
used to describe how a model element or
set of elements can be used to further
refine a requirement
21. Trace Relationship
• A generic trace requirement relationship
provides a general-purpose relationship
between a requirement and any other
model element
• The semantics of trace include no real
constraints and therefore are quite weak
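The five relationships above (derive, satisfy, verify, refine, trace) can be sketched as typed links between model elements. This is a hypothetical sketch; real tools store these as stereotyped UML dependencies, and all element names here are invented.

```python
# Sketch: SysML requirement relationships as typed (kind, client,
# supplier) links; the client depends on the supplier.
links = []

def relate(kind, client, supplier):
    links.append((kind, client, supplier))

relate("deriveReqt", "R2 Braking distance", "R1 Stopping capability")
relate("satisfy", "Block: BrakeAssembly", "R2 Braking distance")
relate("verify", "TestCase: brake_test", "R2 Braking distance")
relate("refine", "UseCase: EmergencyStop", "R1 Stopping capability")
relate("trace", "Document: SafetyConcept", "R1 Stopping capability")

# Query the model: which elements satisfy which requirements?
satisfied = [s for k, c, s in links if k == "satisfy"]
assert "R2 Braking distance" in satisfied
```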
22. Warning: Arrow direction!
• Most requirement relationships in SysML
are based on the UML dependency
• The arrow points from the dependent model
element (client) to the independent model
element (supplier)
• In SysML, the arrowhead direction is
opposite of what has typically been used
for requirements flow-down where the
higher-level requirement points to the
lower-level requirement
23. Requirement Subclasses
• Modelers can customize requirements
taxonomies by defining additional
subclasses of the Requirement stereotype
• For example, a modeler may want to define
requirements categories to represent
operational, functional, interface, performance,
physical, storage, activation/deactivation,
design constraints, and other specialized
requirements such as reliability and
maintainability, or to represent a high level
stakeholder need
• Some potential Requirement subclasses
are defined in Non-normative Extensions
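A customized requirements taxonomy of the kind described above can be sketched by subclassing. The category names follow the slide; the class shape itself is invented and is not how a SysML tool represents stereotypes internally.

```python
# Sketch: user-defined subclasses of the Requirement stereotype,
# mirroring SysML's non-normative requirement categories.
class Requirement:
    category = "requirement"
    def __init__(self, rid, text):
        self.id, self.text = rid, text

class FunctionalRequirement(Requirement):
    category = "functional"

class PerformanceRequirement(Requirement):
    category = "performance"

class InterfaceRequirement(Requirement):
    category = "interface"

reqs = [FunctionalRequirement("F1", "Shall brake on command"),
        PerformanceRequirement("P1", "Shall stop within 50 m")]
assert [r.category for r in reqs] == ["functional", "performance"]
```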
27. Requirements Table (1/2)
• The requirement diagram has a distinct
disadvantage when viewing large numbers
of requirements
• The traditional method of viewing requirements
in textual documents is a more compact
representation than viewing them in a diagram
• SysML embraces the concept of displaying
results of model queries in tables as well
as using tables as a data input
mechanism, but the specifics of generating
tables is left to the tool implementer
28. Requirements Table (2/2)
• The tabular format can be used to
represent the requirements, their
properties and relationships
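Since SysML leaves table generation to the tool implementer, a tabular view amounts to a model query plus rendering. This sketch uses invented requirement data and column names.

```python
# Sketch: render query results over requirement data as a compact
# text table (id, text, and satisfying element).
reqs = [
    {"id": "R1", "text": "Stop within 50 m", "satisfiedBy": "BrakeAssembly"},
    {"id": "R2", "text": "Signal driver", "satisfiedBy": "Dashboard"},
]

def as_table(rows, columns):
    """Render a list of dicts as a pipe-separated text table."""
    header = " | ".join(columns)
    lines = [header, "-" * len(header)]
    for r in rows:
        lines.append(" | ".join(r[c] for c in columns))
    return "\n".join(lines)

table = as_table(reqs, ["id", "text", "satisfiedBy"])
print(table)
```

As the slide notes, this compact form scales to large numbers of requirements far better than a diagram.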
30. Requirement Packages
• Requirements can be organized into a
package structure
• A typical structure may include a top-level
package for all requirements
• Each nested package within this package may
contain requirements from different
specifications (system, subsystem, component,
etc.)
• Each specification package contains the text-
based requirements for that specification
• This package structure corresponds to a typical
specification tree that is a useful artifact for
describing the scope of requirements for a
project
31. Other Ways to Represent “Requirements”
• Nearly all SysML diagram types can
represent Requirements!
• Use Case Diagram
• Sequence Diagram
• State Diagram
• Activity Diagram
• Block Definition Diagram
• Internal Block Diagram
• Parametric Diagram
32. Use Case Diagram
• The Use Case diagram describes the
usage of a system (subject) by its actors
(environment) to achieve a goal, that is
realized by the subject providing a set of
services to selected actors
33. Sequence Diagram
• The Sequence diagram describes the flow
of control between actors and systems
(blocks) or between parts of a system
• This diagram represents the sending and
receiving of messages between the
interacting entities called lifelines, where
time is represented along the vertical axis
35. State Machine Diagram
• The StateMachine package defines a set
of concepts that can be used for modeling
discrete behavior through finite state
transition systems
• The state machine represents behavior
as the state history of an object in terms
of its transitions and states
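The finite state transition behavior described above can be sketched as a transition table plus an event loop. States and events here are invented examples, not part of the SysML specification.

```python
# Sketch: discrete behavior as a finite state transition system;
# the result is the "state history" the slide mentions.
transitions = {
    ("Idle", "start"): "Running",
    ("Running", "pause"): "Paused",
    ("Paused", "start"): "Running",
    ("Running", "stop"): "Idle",
}

def run(initial, events):
    state, history = initial, [initial]
    for ev in events:
        # events with no matching transition are ignored
        state = transitions.get((state, ev), state)
        history.append(state)
    return history

assert run("Idle", ["start", "pause", "start", "stop"]) == \
       ["Idle", "Running", "Paused", "Running", "Idle"]
```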
37. Activity Diagram
• Activity modeling emphasizes the inputs,
outputs, sequences, and conditions for
coordinating other behaviors. It provides
a flexible link to blocks owning those
behaviors
39. Block Definition Diagram
• The Block Definition Diagram defines
features of blocks and relationships
between blocks such as associations,
generalizations, and dependencies
• It captures the definition of blocks in
terms of properties and operations, and
relationships such as a system hierarchy
or a system classification tree
41. Internal Block Diagram (Domain)
• The Internal Block Diagram captures the
internal structure of a block in terms of
properties and connectors between
properties
• A block can include properties to specify
its values, parts, and references to other
blocks
• Ports are a special class of property used
to specify allowable types of interactions
between blocks
43. Parametric Diagram
• Parametric diagrams include usages of
constraint blocks to constrain the
properties of another block
• The usage of a constraint binds the
parameters of the constraint, such as F,
m, and a, to specific properties of a block,
such as a mass, that provide values for
the parameters
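The F = m·a binding described above can be sketched as a constraint function whose parameters are bound to block value properties. The block and its property values are invented for illustration.

```python
# Sketch: a parametric constraint binding. The constraint block
# defines F = m * a; the usage binds m and a to value properties
# of a block and F to the computed force.
def newton(m, a):
    """Constraint block: F = m * a."""
    return m * a

# Block value properties (invented example values).
vehicle = {"mass": 1200.0, "accel": 2.5}

# Binding: m -> vehicle["mass"], a -> vehicle["accel"].
force = newton(vehicle["mass"], vehicle["accel"])
assert force == 3000.0
```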
47. Industrial Feedback (1)
• In 2009, MagicDraw R&D decided to
migrate from Document-driven to Model-
driven Requirement Engineering using
SysML
• Advantages:
• Much better teamwork and version
management capabilities
• More formal/structured descriptions of the
requirements
• Maintain the information about already
implemented functionality
• Traceability to the architecture and test cases
48. Industrial Feedback (2)
• SE^2 and APE Case Study
• Large telescope SysML Model
• Guidelines for modeling Requirements:
• Distinguish Objectives, Stakeholder
Requirements, System Requirements and
Analysis elements (e.g. Use Cases)
• Modeling can be used for requirements
specification
• Above a certain number of requirements, they
become difficult to visualize graphically. It is
better to use the tabular format
• SysML requirements are not a replacement of
RM tools but a visualization aid for
architectural important requirements
57. Conclusion (1/3)
• A Requirements Model can provide
information that helps determine if the
requirements meet their desired attributes
• SysML requirements modeling provides a
‘link’ between the text requirements and
the rest of the model elements
• … But for the moment, SysML
requirements are not a complete
replacement of RM tools
58. Conclusion (2/3)
• SysML Requirement modeling concept
should not remain just a buzz!
• It can be a real breakthrough for people
who do not master yet a tooled
Requirements Management process
• It can be also valuable for people used to
Requirements Management tools
• Models can help a lot to formalize requirements
(state machines, block diagrams, etc.)
• Diagrams are a very powerful communication
tool between all stakeholders
59. Conclusion (3/3)
[V-model diagram] From the need, user requirements are derived, then system requirements, subsystem requirements, and component requirements (successive derivation down the left branch). Each level is checked on the ascending branch: component tests (components verification), subsystem tests (subsystems verification), system verification tests (system verification), and validation tests against the user requirements and operational use (system validation).
60. Additional Resources…
• Websites:
• www.omgsysml.org/
• www.incose.org/
• http://mbse.gfse.de/index.html
• Books:
• P. Roques, SysML par l’exemple,
2009, Eyrolles
• S. Friedenthal, A. Moore, and R. Steiner, A
Practical Guide to SysML, 2011, OMG Press
• T. Weilkiens, Systems Engineering with
SysML/UML: Modeling, Analysis, Design, 2008,
OMG Press