The platform architecture developed by the CPaaS.io project - both the overall system architecture and the two implementation architectures, one based on FIWARE and the other on u2 - as presented at the first year review meeting in Tokyo on October 5, 2017.
Disclaimer:
This document has been produced in the context of the CPaaS.io project which is jointly funded by the European Commission (grant agreement n° 723076) and NICT from Japan (management number 18302). All information provided in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. For the avoidance of all doubts, the European Commission and NICT have no liability in respect of this document, which is merely representing the view of the project consortium. This document is subject to change without notice.
Arne Rossmann outlines why the Business Data Lake works and which services the Business Data Lake should provide. Organizations can use the Business Data Lake concept best when they standardize, industrialize and innovate.
Presented by Arne Rossmann, Capgemini Germany, at the OOP Conference, 31 January 2017
Azure Synapse is Microsoft's new cloud analytics service offering that combines enterprise data warehouse and Big Data analytics capabilities. It offers a powerful and streamlined platform to facilitate the process of consolidating, storing, curating and analysing your data to generate reliable and actionable business insights.
Best Practices for Adopting Microsoft Dynamics 365 (Precisely)
There are many advantages to migrating your on-premise CRM to the cloud-based Microsoft Dynamics 365. Organizations rely on clean CRM data to create single views of their customers. As you prepare to migrate your platform, you want to ensure that the integrity of your data remains intact both during and after the migration. With the right tools, you can have confidence that you are prepared from both a data structure and a data quality perspective.
View this webcast with Microsoft and Syncsort for a fast-paced hour focused on helping you prepare for an on-premise CRM to Dynamics 365 Cloud migration. The content includes:
• Highlights of the Dynamics 365 functionality not available with on-premise CRM including LinkedIn integration
• Guidance on how to prepare your database for the move to 365 using Syncsort data quality
• Information on how to eliminate duplicates and normalize data for the migration to 365 and after you are live
• The value of providing your user base a clean and refreshed starting point
Data Modeling & Metadata for Graph Databases (DATAVERSITY)
Graph databases are seeing a spike in popularity as their value in leveraging large data sets for key areas such as fraud detection, marketing, and network optimization becomes increasingly apparent. With graph databases, it’s been said that ‘the data model and the metadata are the database’. What does this mean in a practical application, and how can this technology be optimized for maximum business value?
Architecture of Dynamics CRM with Office 365 and Azure (Pedro Azevedo)
In this session I explained the relationship between Dynamics 365, Office 365 and Microsoft Azure. Another goal was to show how easy it is to start developing on this platform.
Extend SAP S/4HANA to deliver real-time intelligent processes (SAP Technology)
Accelerate your journey towards an intelligent enterprise. Extend SAP S/4HANA with SAP’s business technology platform to make data from all sources available to business applications and analytics, deliver real-time insights to act quickly, augment intelligence, and enable breakthrough innovations.
Big Data and Data Warehousing Together with Azure Synapse Analytics (SQLBits ...) (Michael Rys)
SQLBits 2020 presentation on how you can build solutions based on the modern data warehouse pattern with Azure Synapse Spark and SQL including demos of Azure Synapse.
How Graph Data Science can turbocharge your Knowledge Graph (Neo4j)
Knowledge Graphs are becoming mission-critical across many industries. More recently, we are witnessing the application of Graph Data Science to Knowledge Graphs, offering powerful outcomes. But how do we define Knowledge Graphs in industry and how can they be useful for your project? In this talk, we will illustrate the various methods and models of Graph Data Science being applied to Knowledge Graphs and how they allow you to find implicit relationships in your graph which are impossible to detect in any other way. You will learn how graph algorithms from PageRank to Embeddings drive ever deeper insights in your data.
Data Architecture Strategies: The Rise of the Graph Database (DATAVERSITY)
Graph databases are growing in popularity, with their ability to quickly discover and integrate key relationships between enterprise data sets. Business use cases such as recommendation engines, master data management, social networks, enterprise knowledge graphs and more provide valuable ways to leverage graph databases in your organization. This webinar provides an overview of graph database technologies and how they can be used in practical applications to drive business value.
FLiP Into Trino
FLiP into Trino. Flink Pulsar Trino
Pulsar SQL (Trino/Presto)
Remember the days when you could wait until your batch data load was done and then run some simple queries or build stale dashboards? Those days are over; today you need instant analytics as the data streams in real time, and you need universal analytics wherever that data is. I will show you how to do this using the latest cloud-native open-source tools. In this talk we will use Trino, Apache Pulsar, Pulsar SQL and Apache Flink to instantly analyze data from IoT devices, sensors, transportation systems, logs, REST endpoints, XML, images, PDFs, documents, text, semi-structured data, unstructured data, structured data, and a hundred data sources you could never dream of streaming before. I will teach you how to use Pulsar SQL to run analytics on live data.
Tim Spann
Developer Advocate
StreamNative
David Kjerrumgaard
Developer Advocate
StreamNative
https://www.starburst.io/info/trinosummit/
https://github.com/tspannhw/FLiP-Into-Trino/blob/main/README.md
https://github.com/tspannhw/StreamingAnalyticsUsingFlinkSQL/tree/main/src/main/java
select * from pulsar."public/default"."weather";
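The query above pulls raw rows from the weather topic. As a purely illustrative sketch (the station and temperature fields are assumptions about the topic's schema, not taken from the talk), a Trino aggregation over the same live topic could look like:

```sql
-- Hypothetical rollup over the live "weather" topic via the Pulsar connector.
-- "station" and "temperature" are assumed field names; check the topic schema first.
SELECT station,
       avg(temperature) AS avg_temp,
       count(*)         AS readings
FROM pulsar."public/default"."weather"
GROUP BY station
ORDER BY avg_temp DESC;
```

Because Pulsar SQL is Trino/Presto under the hood, standard SQL aggregates, GROUP BY and ORDER BY work against streaming topics the same way they do against static tables.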
Apache Pulsar plus Trino = fast analytics at scale
The first step towards understanding data assets’ impact on your organization is understanding what those assets mean for each other. Metadata – literally, data about data – is a practice area required by good systems development, and yet is also perhaps the most mislabeled and misunderstood Data Management practice. Understanding metadata and its associated technologies as more than just straightforward technological tools can provide powerful insight into the efficiency of organizational practices and enable you to combine practices into sophisticated techniques supporting larger and more complex business initiatives. Program learning objectives include:
- Understand how to leverage metadata practices in support of business strategy
- Discuss foundational metadata concepts
- Review guiding principles for, and lessons previously learned from, metadata and its practical uses in applied strategy
Metadata strategies include:
- Metadata is a gerund, so don’t try to treat it as a noun
- Metadata is the language of Data Governance
- Treat glossaries/repositories as capabilities, not technology
Enterprise Architecture, Solution Architecture, Business Architecture, Arch... (Ricardo Sul)
More and more, companies are organizing their IT architectures into specialist architectures with the goal of ensuring excellence in the work. Organizations thus divide their architectures into Enterprise Architecture, Solution Architecture, Business Architecture, Systems Architecture, Data Architecture, Software Architecture, and Integration and Services Architecture. Do you know the difference between them? Do you know which point each architecture should “attack” in a project without overstepping its boundaries? And is your organization prepared for this new configuration? Get to know each of them, their specificities, and which architectural configurations would be most suitable for your organization.
Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. This is a huge deck with lots of screenshots so you can see exactly how it works.
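One concrete way the "serverless on-demand" option plays out is ad-hoc SQL over files already sitting in a data lake. The sketch below (the storage account, container, and path are placeholders, not real resources) uses the OPENROWSET function of a Synapse serverless SQL pool to read Parquet files directly:

```sql
-- Sketch: ad-hoc query over Parquet files with a Synapse serverless SQL pool.
-- The storage URL is a placeholder; substitute your own account and container.
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<account>.dfs.core.windows.net/<container>/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS rows;
```

No dedicated pool has to be provisioned first; billing is per data processed, which is what "serverless on-demand" refers to in the description above.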
Data-Ed Slides: Best Practices in Data Stewardship (Technical) (DATAVERSITY)
In order to find value in your organization's data assets, heroic data stewards are tasked with saving the day, every single day! These heroes adhere to a data governance framework and work to ensure that data is captured right the first time, validated through automated means, and integrated into business processes. Whether it's data profiling or in-depth root-cause analysis, data stewards can be counted on to ensure the organization's mission-critical data is reliable. In this webinar we will approach this framework and punctuate important facets of a data steward’s role.
Learning Objectives:
- Understand the business need for a data governance framework
- Learn why embedded data quality principles are an important part of system/process design
- Identify opportunities to help drive your organization to a data driven culture
DataEd Webinar: Reference & Master Data Management - Unlocking Business Value (DATAVERSITY)
Data tends to pile up and can be rendered unusable or obsolete without careful maintenance processes. Reference and Master Data Management (MDM) has been a popular Data Management approach to effectively gain mastery over not just the data but the supporting architecture for processing it. This webinar presents MDM as a strategic approach to improving and formalizing practices around those data items that provide context for many organizational transactions—its master data. Too often, MDM has been implemented technology-first, with a very poor track record (only one-third of projects succeeding on time, within budget, and achieving planned functionality). MDM success depends on a coordinated approach typically involving Data Governance and Data Quality activities.
Learning Objectives:
- Understand foundational reference and MDM concepts based on the Data Management Body of Knowledge (DMBOK)
- Understand why these are an important component of your Data Architecture
- Gain awareness of Reference and MDM Frameworks and building blocks
- Know what MDM guiding principles and best practices consist of
- Know how to utilize reference and MDM in support of business strategy
DAS Slides: Building a Data Strategy - Practical Steps for Aligning with Busi... (DATAVERSITY)
Developing a Data Strategy for your organization can seem like a daunting task. The opportunity in getting it right can be significant, however, as data drives many of the key initiatives in today’s marketplace: digital transformation, marketing, customer centricity, and more. This webinar will help demystify Data Strategy and Data Architecture and will provide concrete, practical ways to get started.
Privacy and data protection are important in the context of IoT. The CPaaS.io platform empowers citizens to control the data that is related to them. This presentation given at the first year review meeting in Tokyo on October 5, 2017 explains how.
Accelerating the Digital Transformation – Building a 3D IoT Reference Archite... (OPEN DEI)
OPEN DEI Webinar “The role of the Reference Architectures in Data-oriented Digital Platforms”
28 May 2020
Ovidiu Vermesan (Chief Scientist – SINTEF, CREATE-IoT coordinator)
Open Source Platforms Integration for the Development of an Architecture of C... (Eswar Publications)
The goal of the Internet of Things (IoT) is to achieve the interconnection and interaction of all kinds of everyday objects. IoT architecture can be implemented in various ways. This paper presents a way to mount an IoT architecture using open-source hardware and software platforms and shows that this is a viable option for collecting information through various sensors and presenting it through a web page.
The CPaaS.io platform allows task logic - e.g., for analytics - to be adaptively moved from the cloud to the edge of an IoT network. This presentation, given at the first year review meeting in Tokyo on October 5, 2017, explains how.
Modeling and Provisioning IoT Cloud Systems for Testing Uncertainties (Hong-Linh Truong)
Modern Cyber-Physical Systems (CPS) and Internet of Things (IoT) systems consist of both loose and tight interactions among various resources in IoT networks, edge servers, and cloud data centers. These elements are built atop virtualization layers and deployed in both edge and cloud infrastructures. They also deal with a lot of data through the interconnection of different types of networks and services. Therefore, several new types of uncertainties are emerging, such as data, actuation, and elasticity uncertainties. This triggers several challenges for testing uncertainty in such systems. However, there is a lack of novel ways to model and prepare the right infrastructural elements covering requirements for testing emerging uncertainties. In this paper, we first present techniques for modeling CPS/IoT systems and the uncertainties to be tested. Second, we introduce techniques for determining and generating deployment configurations for testing in different IoT and cloud infrastructures. We illustrate our work with a real-world use case for monitoring and analysis of Base Transceiver Stations.
Towards a Resource Slice Interoperability Hub for IoT (Hong-Linh Truong)
Interoperability for IoT is a challenging problem because it requires us to tackle (i) cross-system interoperability issues at the IoT platform side as well as in the relevant network functions and clouds in edge systems and data centers, and (ii) cross-layer interoperability, e.g., w.r.t. data formats, communication protocols, data delivery mechanisms, and performance. However, existing solutions are quite static w.r.t. software deployment and provisioning for interoperability. Many middleware, services and platforms have been built and deployed as interoperability bridges, but they are not dynamically provisioned and reconfigured for interoperability at runtime. Furthermore, they are often not considered together with other services as a whole in application-specific contexts. In this paper, we focus on dynamic aspects by introducing the concept of the Resource Slice Interoperability Hub (rsiHub). Our approach leverages existing software artifacts and services for interoperability to create and provision dynamic resource slices, including IoT, network functions, and clouds, to address application-specific interoperability requirements. We present our key concepts, architectures, and examples toward the realization of rsiHub.
Linux has become one of the most important pieces of software for running civil infrastructure systems such as power plants, water distribution, traffic control, and healthcare. From a computer-system viewpoint, these systems require a very high level of quality in real-time performance, reliability, and security to avoid serious failures. To overcome the issues of applying Linux to such systems, as a first step we need to gather the actual requirements. Over the past few months, some companies interested in this area got together and discussed putting those requirements together. In this talk, we would like to share the current status of this requirements discussion and our future collaboration plan. Please join us to improve Linux together and make the world a better place!
Connecting Cities, Technologies and Citizens – the Swiss-European-Japanese pr... (Stephan Haller)
Smart cities are very much about connectivity and networking, not just in the technical sense. This talk, given at a mini-symposium of the Swiss Informatics Society in May 2019, highlights this using the EU-Japan Horizon 2020 project CPaaS.io.
The full talk is available on YouTube: https://youtu.be/kmh26qUnGh8
WEBINAR 91: EU-JAPAN SMART CITY PROJECT – CITY PLATFORM AS A SERVICE (CPAAS.IO)
11-04-17 | 10:00 - 11:00 AM CET
What role does an open city platform play in creating smart city innovation?
Data will be what the smart city of the future runs on. To make this a reality, cities need a platform where data from a variety of sources – e.g., Internet of Things data, open government data, social media – can be processed, linked, analysed, and made accessible to third parties for further exploitation. The CPaaS.io project is building such a platform in Europe and in Japan. The webinar introduces this platform, with a special focus on the targeted use cases and how they can be realized.
The webinar is targeted at EU and Japanese companies, organisations and public administration authorities seeking to innovate in the Smart City domain and reuse results from the CPaaS.io project.
Recording of the webinar available at https://www.eu-japan.eu/videos/about-japan/webinar91.mp4
Use Cases of the CPaaS.io project as presented at the first year review meeting in Tokyo on October 5, 2017.
CPaaS.io Y1 Review Meeting - Holistic Data Management (Stephan Haller)
Data management and governance aspects of the CPaaS.io platform as presented at the first year review meeting in Tokyo on October 5, 2017.
Overview presentation of the CPaaS.io project given at the first year review meeting in Tokyo on October 5, 2017.
Disclaimer:
This document has been produced in the context of the CPaaS.io project which is jointly funded by the European Commission (grant agreement n° 723076) and NICT from Japan (management number 18302). All information provided in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular purpose. The user thereof uses the information at its sole risk and liability. For the avoidance of all doubts, the European Commission and NICT have no liability in respect of this document, which is merely representing the view of the project consortium. This document is subject to change without notice.
1. City Platform as a Service – Integrated and Open
WP 3: Platform Architecture
Klaus Mößner (University of Surrey)
Noboru Koshizuka (University of Tokyo)
Year 1 Review Meeting, Tokyo
October 5, 2017
4. Main Results
• Obj1: Perform a requirements analysis & cross-check
Requirements analysis done and preliminary requirement mapping
achieved;
• Obj2: Design the CPaaS.io system architecture
First version of IoT ARM-compliant system (logical) architecture;
First versions of the FIWARE-based and u2-based platform instantiation views;
• Obj3: Perform system integration
EU-side: FIWARE and security components are integrated and tested with
simple WP2 use cases;
Japan-side: in the u2 architecture, IoT components, open data components, and
PDS (Omotenashi platform) components are integrated.
29.11.2017 CPaaS.io - Consortium Confidential 4
5. IoT-ARM methodology (Architecture Reference Model)
What is it made of?
• IoT Reference Model to promote common understanding
High abstraction level
Describes the aspects of the IoT domain that do not change
Enables a general discourse on the IoT domain
Provides a Domain, Information, Functional, Communication and Security models
• IoT Reference Architecture to describe essential building blocks and identify design
choices
Based on IoT Reference Model
Reference for building compliant IoT architectures
Provides views and perspectives on different architectural aspects that are of concern to
stakeholders
• Guidelines to help in developing an architecture for a specific system based on the
IoT Reference Architecture
Provide different types of guidance: process, usage per model and view, examples etc…
IoT ARM framework and methodology
6. Guidance through IoT ARM Process
7. IoT ARM Reference Model
[Slide figure: UML diagrams of the three IoT ARM models]
• Domain Model: core concepts of IoT and their relations, independent of specific technologies. A Device hosts Resources, a Service exposes a Resource, a Virtual Entity models a Physical Entity and is linked to Services via Associations, and a User interacts with Physical Entities and invokes Services.
• Information Model: the structure (e.g. relations, attributes) of all the information (data) that is handled in an IoT system. A VirtualEntity (identifier, entityType) carries Attributes (attributeName, attributeType) whose ValueContainers hold Values together with optional MetaData (metadataName, metadataType, metadataValue); Associations link VirtualEntities to ServiceDescriptions, which in turn reference Resource and Device Descriptions.
• Functional Model: the functional groups of an IoT system and their relations: Application; IoT Process Management; Virtual Entity; IoT Service; Service Organisation; Communication; Device; plus the transversal Management and Security groups.
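As an illustration only (the Information Model is deliberately technology agnostic, so any concrete rendering is an assumption, not the project's data format), the entities above could be sketched as plain Python data classes:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical rendering of the IoT ARM Information Model; the ARM itself
# prescribes no concrete serialization, so names mirror the UML labels only.

@dataclass
class MetaData:
    metadataName: str
    metadataType: str
    metadataValue: str

@dataclass
class ValueContainer:
    value: str
    metadata: List[MetaData] = field(default_factory=list)  # optional metadata

@dataclass
class Attribute:
    attributeName: str
    attributeType: str
    values: List[ValueContainer] = field(default_factory=list)

@dataclass
class VirtualEntity:
    identifier: str
    entityType: str
    attributes: List[Attribute] = field(default_factory=list)

# Example: a temperature reading attached to a virtual entity
# (identifier scheme and values invented for illustration)
ve = VirtualEntity(
    identifier="urn:cpaas:ve:bus-42",
    entityType="Bus",
    attributes=[Attribute(
        attributeName="temperature",
        attributeType="xsd:float",
        values=[ValueContainer("21.5", [MetaData("unit", "string", "Cel")])],
    )],
)
print(ve.attributes[0].values[0].value)  # prints 21.5
```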
8. IoT ARM views
• Physical Entity view
• Context view
• Functional view
• Uses the FM as canvas for describing the system “logical” functional
decomposition
• Information view
• Modelling of information / ontologies / data storage
• inter-FC data flows & interactions through System use-cases
• Instantiation view (new – CPaaS.io)
• Instantiation of “logical” FCs into “concrete” FCs and mapping information
• Deployment view
9. Results in Year 1
Summary and Discussion
10. Results in Year 1
• Requirement Analysis
• Platform requirement collection
• Requirement Analysis and mapping
• VOLERE template summarises the outcomes of this first phase
• System Architecture (IoT ARM compliant)
• Functional view v1 which introduces the needed Functional Components (FCs)
• Information view v1: System UCs describing flows of information and
interactions between FCs
• Set of perspectives
• 2 instantiation views with concrete FC mapping figure and table
• Platform integrations (u2- and FIWARE-based)
11. Requirement Analysis (1/2)
Tasks
1. Collect platform requirements (FReq & NFReq)
2. Unify with use-case requirements
a. Understand: cross-check with issuers and rewrite if necessary
b. Adopt uniform terminology / split / factorise / rewrite
3. Analyse and map requirements
FReq: identify the target FC (functional decomposition) and the impacted
Domain Model concepts (Functional & Information views)
NFReq: identify the target perspectives
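The mapping step can be pictured as one row of the VOLERE template. The field names below follow the columns described in the speaker notes (identifier, type, priority, category, description, plus the view/FG/FC mapping); all concrete values are invented for illustration:

```python
# Hypothetical VOLERE-style requirement record; values are invented,
# only the column structure follows the template described in the deck.
requirement = {
    "id": "CPaaS-REQ-042",          # unique identifier (invented)
    "type": "FREQ",                 # FREQ or NFREQ
    "priority": "HIGH",             # HIGH (must) / MEDIUM (should) / LOW (could)
    "category": "data",             # data / management / security
    "description": "The platform shall let applications subscribe to "
                   "context updates for a virtual entity.",
    # mapping part (the "green headers" of the template)
    "views": ["Functional", "Information"],
    "functional_group": "IoT Service",
    "functional_component": "IoT Broker",
}

def target_components(reqs, group):
    """Collect the FCs that functional requirements map to a given FG."""
    return sorted({r["functional_component"] for r in reqs
                   if r["type"] == "FREQ" and r["functional_group"] == group})

print(target_components([requirement], "IoT Service"))  # → ['IoT Broker']
```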
13. System architecture (1/3)
Overview
• Consists of a set of views and perspectives (main focus in period 1 was on
views)
• Functional view:
• Set of functional groups and components organised along the IoT ARM Functional
Model
• Logical components with high-level descriptions
• Those high-level functionalities need to be implemented in both derived
u2-based and FIWARE-based concrete architectures
• Information view:
• Identifies typical usages of logical FCs (simplest ones in Arch v1)
• Describes those usages through UML use-case and interaction diagrams (so-called
system use-cases)
• An extensive set of system UCs can be seen as a CPaaS.io platform cookbook.
• Perspectives: a few have been identified already (e.g. security and
interoperability) and their technical objectives described
• 2 instantiation views with FC mapping
14. System architecture (2/3)
“logical” vs. “concrete”
• The "logical" architecture is instantiated into "concrete" views: a u2-based and a FIWARE-based architecture
• Logical components map to concrete components
• Logical system UCs map to concrete system UCs
• Mapping tables (u2 and FIWARE instantiations)
18. CPaaS.io Platform / JP side
Based on CPaaS.io System Architecture
[Slide figure: layered architecture of the JP-side (u2-based) platform, with blocks annotated by the responsible work packages (WP2–WP6)]
• Applications & Services: Smart City Services; other (external) platforms
• Information Services & Analytics Layer: OPaaS.io (Personal Data Store, access control of privacy data); u2 Open Data Catalog; Kokosil; IoT Translation Engine; Open Data Distribution Platform
• Semantic Integration Layer: ucode Resolution Module (ucode + ucR Manager) with ucode RP (ID resolution, IoT service finding) and the ucR Query Protocol (SPARQL-based and REST-based)
• IoT Services Layer (cloud side): IoT Aggregator (tunneling, device access control)
• IoT Services Layer (edge nodes side): IoT-Engine (μT-Kernel); ucode BLE, ucode NFC, ucode QR
• Privacy, Security & Trust: eTRON security architecture with eTP (Entity Transfer Protocol) and eTRON Tags
• Data Resources: city government data (static, obtained from other data stores); sensor/realtime data (dynamic, obtained from city infrastructures); personal data
19. Components in u2 Open IoT Platform
IoT-Engine
IoT-Engine is a standard development platform for Open IoT,
standardized by the TRON Forum to realize "Aggregate Computing".
The RTOS used on the IoT-Engine is µT-Kernel 2.0, which the
TRON Forum releases as open source.
ucode BLE, ucode NFC, ucode QR
ucode BLE, ucode NFC, and ucode QR are tiny devices
which export ucodes in several communication media.
IoT Aggregator
IoT Aggregator implements "aggregate computing model"
which uses all devices, services and systems connected to
the network to achieve desired services. It tries to offer a
holistically optimized IoT service by mixing various devices
and services as a whole.
OPaaS.io (Omotenashi Platform)
OPaaS.io (Omotenashi Platform as a Service) is the
general-purpose Personal Data Store (PDS) in the u2 Open
IoT Platform. "Omotenashi" is Japanese for "hospitality"
tailored to personal conditions.
Open Data Distribution Platform
Open Data Distribution Platform contains data stores
for open data in a ucR-based format, together with APIs for
those stores.
It deals with both static open data, such as statistical data,
and real-time/dynamic open data, such as sensor data.
eTRON
eTRON (Entity and Economy TRON) is a security
architecture in the u2 Open IoT Platform.
It consists of eTP (Entity Transfer Protocol), which
exchanges valued data securely, eTRON Tags, tamper-resistant
smart cards supporting eTP, etc.
20. Components in u2 Open IoT Platform
u2 Open Data Catalog
u2 Open Data Catalog provides a data catalog for open
data which uses the "Open Data Distribution Platform". CKAN
and DKAN are used for its core module.
IoT Translation Engine
IoT Translation Engine realizes multilingual IoT Services
and Open Data Services. It utilizes automatic translation
technologies.
Kokosil
Kokosil is the location-aware information service platform
of u2 Open IoT platform architecture.
It is mainly used for tourism support information services.
ucode Manager
To identify individual objects, spaces, and concepts in the
real world, unique identifiers are assigned to respective
objects, spaces and concepts that we wish to identify.
Information associated with ucodes, namely the ucR
graph, is registered in the ucR database. The protocol for
accessing the ucR database in this manner is called the ucode
Resolution Protocol (ucodeRP).
ucR Manager
The wide-area distributed database which manages ucR
graphs is called a ucode Relation database. The ucR
database comprehensively manages information on the
relations among multiple ucodes in addition to the content
such as information services associated with individual
entities to which ucodes are assigned.
ucR Manager provides APIs both in REST API format and
in SPARQL format.
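Since ucR Manager exposes both REST and SPARQL APIs, a client could, for example, list everything related to one ucode with a SPARQL request. The prefix and the ucode value below are invented for illustration (the deck does not specify the actual ucR vocabulary); only the SPARQL syntax itself is standard:

```python
from string import Template

# Hypothetical SPARQL template for querying the relations of one ucode in
# the ucR graph; namespace and URN scheme are assumptions, not the real API.
UCR_QUERY = Template("""
PREFIX ucr: <http://example.org/ucr#>
SELECT ?relation ?target
WHERE {
  <urn:ucode:$ucode> ?relation ?target .
}
""")

def build_relation_query(ucode_hex: str) -> str:
    """Fill in the ucode and return a SPARQL query string."""
    return UCR_QUERY.substitute(ucode=ucode_hex).strip()

query = build_relation_query("00001C0000000000000100000000A123")
print(query.splitlines()[1])  # prints the SELECT line
```

The resulting string would then be POSTed to the ucR Manager's SPARQL endpoint, or the equivalent REST API would be called instead.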
21. Platform Integration – EU Side (1/3)
FIWARE-based architecture
• Three-layered view of the integration
• Layer 1: IoT devices
• Layer 2: Context & process management
• Layer 3: Security layer
• The current system supports FIWARE and SPARQL app developers (shown on top)
22. Platform Integration – EU Side (2/3)
FIWARE- instantiation view (with FC mapping)
23. Platform Integration – EU Side (3/3)
Next steps of system integration
• FogFlow component to be deployed (it is already integrated
with IoT Broker)
• Integration of security layer with IoT Broker (in progress)
• Adapter from NGSI to SPARQL (in progress)
• Collecting data from other use cases (in progress)
• Adapters for possible other use cases (not started)
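The NGSI-to-SPARQL adapter listed above could, in outline, map an NGSI context entity onto RDF triples that a SPARQL store can ingest. Everything below (the namespace, the property URIs, the entity payload) is an invented illustration, not the project's actual mapping:

```python
# Sketch of an NGSI -> RDF adapter: turn one NGSI-v1-style context entity
# into N-Triples lines. Namespace, URN scheme, and payload are hypothetical.
NS = "http://example.org/cpaas#"

def ngsi_entity_to_ntriples(entity: dict) -> list:
    """Flatten an NGSI entity dict into a list of N-Triples strings."""
    subject = f"<urn:ngsi:{entity['type']}:{entity['id']}>"
    triples = [f'{subject} <{NS}type> "{entity["type"]}" .']
    for attr in entity.get("attributes", []):
        triples.append(
            f'{subject} <{NS}{attr["name"]}> "{attr["value"]}" .'
        )
    return triples

entity = {
    "id": "Bus42",
    "type": "Bus",
    "attributes": [{"name": "speed", "type": "float", "value": "31.5"}],
}
for line in ngsi_entity_to_ntriples(entity):
    print(line)
```

A real adapter would also map attribute types onto typed RDF literals and reuse an agreed ontology rather than an ad-hoc namespace; this sketch only shows the structural translation step.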
24. Plan for Year 2
Outlook
25. WP3 Outlook
• Provide V2 of functional decomposition with focus on
• Edge computing, semantic integration layer, analytics and platform federation
(both FIWARE and u2 instances)
• Extend the collection of logical system UCs
• Revise the two instantiation views with:
• Additional FCs
• Interface and functionalities & API-level integration study (esp. IoT Aggregator, Omotenashi Platform,
and Open Data Distribution Platform)
• Updated FC mapping figure and table
• Introduction of "concrete" system UCs in the form of sequence charts
• Elucidate the tactics/design choices for the inter-operability perspective and platform
federation
• Continuously integrate more FCs into the 2 concrete CPaaS.io platforms
• Integrate with relevant scenarios
• Evaluation of architecture from the view of IoT applications and services
• Starting study for interoperability between FIWARE and u2 architecture.
26. Gracias Mulțumesc 謝謝 Paldies Eskerrik asko Dziękuję Mahalo תודה Go raibh maith agat спасибо Grazzi आभारी
Xin cảm ơn 감사합니다 நன்றி Köszönöm مرسي Ndiyabulela Grazia Tak Благодаря Aitäh Terima kasih Děkuji
Asante Diolch شكرا Takk Ďakujem Gràcies Kiitos Obrigado Teşekkür ederim Ngiyabonga Þakka þér Grazas
Tapadh leibh ขอบคุณ Faleminderit Ačiū Danke Merci Grazie Hvala Ευχαριστώ Dankon Tack Dank je Grazcha
…
Thank You
ありがとう
This document has been produced in the context of the CPaaS.io project which is jointly funded by the European
Commission (grant agreement n° 723076) and NICT from Japan (management number 18302). All information provided
in this document is provided "as is" and no guarantee or warranty is given that the information is fit for any particular
purpose. The user thereof uses the information at its sole risk and liability. For the avoidance of all doubts, the European
Commission and NICT have no liability in respect of this document, which is merely representing the view of the project
consortium. This document is subject to change without notice.
Editor's Notes
For the EUJ-02-2016 call text, refer to https://ec.europa.eu/programmes/horizon2020/sites/horizon2020/files/05i.%20LEIT-ICT_2016-2017_pre-publication.pdf, page 108ff.
Central WP: an abstract (or logical) view of the architecture is agreed upon by all before it gets instantiated into two separate but still compatible instantiations (Japan + EU)
The architecture drives the other WPs, providing a common grounding for their work and for the integration work of those concrete platforms
As pledged in the DoW, we follow the best practice in IoT architecture, the IoT Architectural Reference Model (IoT ARM) from IoT-A (the FP7 lighthouse project on architecture for the IoT)
This first slide gives an insight into the 3 main WP3 objectives and results for year one.
The rest of this presentation will be articulated around those 3 main results
1/ The first objective is about requirement analysis and also includes the collection of functional and non-functional requirements from the platform perspective
Scenario requirements collection was part of WP2.
RESULT: requirements from both WP2 and WP3 were unified (I will explain later what "unified" really means in this context) and led to a first functional decomposition (which is the identification of needed components and their classification into functional groups following the ARM methodology). The whole requirement engineering outcome is materialized within a VOLERE template (excel sheet).
2/ The second objective is about the design of the system architecture.
RESULT in year 1 is that we have a first version of this architecture. Mainly due to the other WP timelines, some technical aspects of the project are not yet dealt with in this preliminary version but are being worked on in Y2. Still, we will see that the first version already provides a quite comprehensive description of what CPaaS.io will consist of.
3/ Finally, the last WP3 objective is about the concrete platform integrations, and as a result in Y1 the most central components of those architectures have been integrated and tested.
It is worth noting here that this system integration follows an agile implementation approach: we implement four-week sprints with review and planning sessions, whereby year 1 is focused on the set-up, integration, and instantiation of FIWARE and security components for the EU side and <COMPLETE FOR JAPANESE PART>
As said earlier the rest of the presentation is about presenting in more detail the outcomes of Obj1 and Obj2 (for the two concrete instantiations of the platform u2- and FIWARE- based I will give the floor to respectively Noboru Koshizuka and Ernoe Kovacs) and Obj3.
I will end the presentation with next steps and a Q/A session
But before going into the details of the technical results, please allow me a short digression about the IoT ARM framework and methodology so that everyone understands well what we are talking about – as it does provide us with a specific terminology
So basically the IoT ARM provides a methodology for generating IoT architectures. The first important thing to mention is that the IoT ARM IS NOT an architecture. It is a framework used to generate architectures.
It is made of three main parts:
The Reference Model consists of models, which are not expected to be modified. They provide a technical grounding for different aspects of an IoT architecture. For CPaaS.io we have considered mainly the Domain Model, Information Model and Functional Model (I will give a bit more detail in the following slides), the two remaining ones not being mature enough
The Reference architecture provides descriptions of Views (focussing on functional building blocks, information and deployment for instance) and Perspectives that provide tactics and design choices that can be used in order to reach certain qualities of the system
you have probably understood that views are mainly concerned with functional requirements and perspectives with non functional requirements.
Finally the Guideline part provides a clear process for the generation of the IoT architecture
This process is discussed in the next slide
I’m not going in the full detail of that slide
The important thing to understand is that it tells you how to deal with requirement engineering and how to generate the views (and in which order).
In the case of CPaaS.io
1/ we have applied the requirement engineering part
2/ We have ignored the Context/PE views because they were not relevant for the platform architecture as too scenario centric
3/ We focused our work on deriving the Functional, Information views and TWO Instantiation Views.
Those two additional views were introduced in order to describe how the two needed concrete architectures (u2-based and FIWARE-based) were derived from the Functional View (which describes functionalities at a more abstract (or logical) level).
In brief, we are adding two instantiation views which show how two compatible concrete architectures - AND I'M INSISTING ON CONCRETE AS OPPOSED TO LOGICAL - are derived from the LOGICAL architecture (info and functional views are abstract, the two instantiation views are concrete). LOGICAL views provide a technology-agnostic view of what is needed by all platform instantiations, while CONCRETE views propose an instantiation or realization of that logical view in terms of concrete technologies, components and design choices.
We consider three models; on the left side we have a simplified view of the IoT Domain Model (virtual entity as the central entity).
Virtual entities model physical entities, VEs are associated with services, services interact with resources (sensors).
If one wants to interact with a resource, one needs to go through IoT services.
Then in the middle: the Information Model. Like all the models it is technology agnostic (no real data format mentioned) and illustrates the different levels of description and how information is to be modeled (from a bird's-eye view)
The functional model is the basis for the functional view as it identifies the main functional Groups (we shall explain the meaning of those FG later on).
As we introduced earlier, IoT architectures are made of views and perspectives
As far as the first two (physical Entity and Context) views are concerned, -as we said earlier- we have NOT dealt with them, as they are too use-case dependent (each scenario would lead to a new view)
1/ Functional view, done. – using the functional model as canvas for logical functional decomposition. The functional decomposition is directly derived from requirement engineering process
2/ Info view: it is made of two main parts, the first one is being worked on in Y2
The second part is about describing inter-FC interactions. We have provided a preliminary list of detailed usage patterns (elementary actions when dealing with the platform) and how they are to be implemented (high level principles).
To the list of views as defined in the IoT ARM, we have added "instantiation views"; they explain how we derived 2 distinct concrete architectures from a common (and agreed) logical one (mainly from the functional view)
Deployment view – is not dealt with in this document, but deployment is described (and will be described) in other WP deliverables.
The main thing about the Views and perspective (apart they are focusing respectively on functional / non-functional requirements) is that Perspectives are orthogonal to Views
While a view focuses on a special aspect of the architecture like functional, information etc… a given perspective may potentially be concerned with all aspects of the system (functionality vs. quality)
As a result perspectives (tactics) need to be applied to all views (Big arrow means “apply perspectives to views”) , and what does this mean (see Next slide)?
Perspectives propose tactics to resolve non-functional requirements. Tactics are applied to all views and different Design Choices are identified and described for all relevant views
For instance trying to reach a semantic interoperability system quality means different things when considering the views
At the information level it is all about ontologies / data formats (JSON-LD, RDF, …) and triple stores (e.g.), while at the functional level it implies implementing special mechanisms (SPARQL-based data queries, semantic repositories, semantic description handling) and so on and so forth. The same goes for scalability, performance, robustness etc.
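To make the information-level point concrete, here is a hedged sketch of annotating a single sensor reading as JSON-LD so that it could be loaded into a triple store. The vocabulary URIs and the payload are invented; only the `@context`/`@id`/`@type` keywords are standard JSON-LD:

```python
import json

# Invented vocabulary: a minimal JSON-LD document for one sensor reading.
# Only @context/@id/@type are JSON-LD keywords; everything else is illustrative.
doc = {
    "@context": {
        "temperature": "http://example.org/cpaas#temperature",
        "observedAt": "http://example.org/cpaas#observedAt",
    },
    "@id": "urn:ngsi:Room:kitchen",
    "@type": "http://example.org/cpaas#Room",
    "temperature": 21.5,
    "observedAt": "2017-10-05T09:00:00Z",
}
print(json.dumps(doc, indent=2))
```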
So let’s go back to a more detailed description of our three main results for year 1
This slide gives the structure for the upcoming slides (@Klaus you can repeat that for the last item of system architecture (platform integration) you give the floor to Koshizuka-san and Kovacs-san).
1/ Requirement analysis is part of the requirement engineering process.
For wp3 it consisted in complementing the collection of requirements started by WP2 (but scenario-centric) and carrying out a requirement analysis, leading to a first version of the VOLERE template (will show you a fragment of it in a few minutes).
2/ an IoT ARM compliant architecture was produced in its preliminary form. It consists of a functional view, an information view and 2 instantiation views as defined earlier.
We also considered a few perspectives, focussing for this first release mainly on Security
3/ finally we performed two Platform integrations corresponding to the two concrete instantiation views
For the requirement analysis part we considered first the collection of the architecture-centric requirements (both functional and non-functional)
We unified platform requirements (WP3) with use-case requirements (WP2) where “unifying” means in particular uniformizing terminology used / split and regroup requirements (in case they are complex and deal with several aspects already dealt with somewhere else)
/ factorize and rewrite if necessary.
The aim here is clearly to get a minimum set of requirements that are sufficiently concise but still precise.
The analysis phase consisted in mapping those requirements…
1/ for non-functional requirements (NFReq): mapping to perspectives
2/ for functional requirements (FReq): mapping to Functional Groups and components. For this particular mapping, doing so, we built up a first functional decomposition of our system (detailed in next few slides)
The next slide shows a fragment of the VOLERE template (which –in R1- focuses mainly on FReq)
First I have to mention that In order to provide a readable view we have hidden few columns from the original template.
We have here, on the leftmost part, general information as yellow tabs: unique identifiers, type (FREQ/NFREQ), priorities (HIGH (must), MEDIUM (should), LOW (could)), category (data/management/security), description of the requirement (a hidden column gives us hints on how to verify afterwards that the req has been taken care of).
Then (going to the right-most part) we have Green headers which consist of the actual requirement mapping: Views are identified; Perspectives, FG and FC are mapped.
How did we come up with this list? We did not start from scratch:
1/ we started from the IoT-A unified requirements (UNIs), where hundreds of requirements are already listed and mapped in the same template, which we are therefore reusing from the IoT ARM too.
2/ we also started from the IoT-A functional decomposition built from the IoT-A UNIs (UNI meaning “unified requirement” here, not University of Surrey),
and tried to stick as much as possible to the existing IoT-A FCs, mainly because they are well known, well explained and well specified in the IoT-A documents.
Obviously, for many requirements we had to create new FCs, as IoT-A focused only on the most essential FCs for an IoT architecture, ignoring more specialized/specific features.
The result of this process is a first functional decomposition (shown in the next slides).
So, as said, the CPaaS.io architecture consists of views and perspectives, with more focus on views for Y1.
As far as views are concerned:
1/ the functional view shows a mapping of the FCs (resulting from the requirements analysis) to the FGroups (next slide); I will explain shortly what this grouping really means in terms of offered functionality.
Along with the mapping comes a high-level description, which is valid for any instantiation of the platform, which means… (@KLAUS: read the 3rd bullet of Functional View)
Doing so, we have a better chance of reaching interoperability between two different concrete platforms.
2/ in Y1 the information view focuses on system use cases, which show which kinds of interactions take place between the FCs for selected usage patterns. For that we use notations such as UML use cases and data flows.
Cookbook here means: “how to use the CPaaS.io platform when one is totally new to it”, based on the most typical usages: producing data, consuming data, performing analytics, discovering VEs, data sets, etc.
3/ as for perspectives, we have identified a few and provided preliminary technical objectives and recommendations/tactics. Typically, the concrete implementation of the FIWARE platform follows those recommendations.
4/ finally, we have two concrete instantiation views, based on the u2 and FIWARE technologies.
Before going into the details of the functional view, let me insist on the concepts of “logical” vs. “concrete” that we use throughout the architecture document.
Logical views provide information, concepts and principles for everyone to follow when it comes to implementing a platform.
The logical components induce concrete ones (but of course there is not necessarily a 1-to-1 mapping).
The logical system use cases must be derived into concrete ones, where interactions no longer occur between logical FCs but between the concrete ones.
For a new CPaaS.io user, a general understanding of how the platform works is gained by looking at the logical FCs.
When it comes to implementing or using the platform, of course the concrete components are the ones to be considered (depending on which kind of platform is addressed, u2- or FIWARE-based).
Finally, dealing with both logical and concrete components means that we need to provide a mapping figure and table, which we will show in a few slides.
Let’s now look at the functional view.
You can recognize here the global template (canvas) coming from the IoT ARM functional model, which I showed during my short digression.
Let’s go quickly to the different layers:
The Application FG is where you may build platform applications to be used by the platform users, based on the platform capabilities described hereafter.
The IoT Process Management FG is where you can define business logic or processes based on platform capabilities (mainly available at the VE and IoT Service levels).
However, executing and organizing those processes relies on functionality provided by the IoT Organization FG.
Typically, a process optimization component defined in the IoT Process Management FG relies on the service orchestrator and the task deployment/engine in order to implement the edge computing capability.
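As a toy illustration of the kind of decision such an orchestrator makes when placing tasks on cloud or edge resources (the rule and thresholds below are invented for illustration, not FogFlow’s actual policy):

```python
# Toy sketch of a cloud-vs-edge task placement decision. Purely
# illustrative: the rule and thresholds are invented assumptions,
# not the actual FogFlow orchestration policy.
def place_task(latency_budget_ms: float, input_data_mb: float) -> str:
    """Pick a deployment target for one processing task."""
    # Latency-critical tasks, or tasks with large local inputs, run at
    # the edge to avoid round trips and uplink transfer; the rest can
    # run in the cloud where resources are cheaper and elastic.
    if latency_budget_ms < 100 or input_data_mb > 50:
        return "edge"
    return "cloud"

print(place_task(20, 1))     # latency-critical
print(place_task(500, 200))  # heavy local data
print(place_task(500, 1))    # relaxed constraints
```

The real orchestrator of course optimizes over many more dimensions (available nodes, load, data locality), but the cloud/edge trade-off it exploits is of this shape.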
The VE and IoT Service layers provide services at those two different levels of abstraction (VE = object, while IoT Service means accessing resources such as sensors and actuators, since the Domain Model says that IoT Services expose Resources).
Note that some functionalities are duplicated because they can work at both levels of abstraction.
The Communication FG deals with inter-FC communication (e.g. a message bus).
Finally, the Management FG and Security FG are orthogonal and provide the obvious services (respectively, configuring/managing/federating the platform, and providing the security features identified in the list of requirements).
This is preliminary, as shown in D3.2, and we have already amended it in D3.3.
Now we quickly show an overview of the results of the two platform integrations (next slide).
Then we go (first the Japanese part, then the EU one) into the details of the integration itself, the description of the platform instance architectures, the mapping figure (logical vs. concrete) and the next steps for platform integration in Y2.
Third main result: platform integration
EU side: FIWARE components and security components, tested with a simple use case (Waterproof Amsterdam). Ernoe will provide more information about the FIWARE-based platform.
Japan-side: ???
Ernoe here (Hopefully)
Copied from Gurkan:
The figure on the right shows the current view of the system.
The components listed there are integrated. We show the integrated platform in three layers:
The IoT Devices layer contains the devices themselves, which push either their own data or NGSI data; the IoT Agent and SPARQL Agent are components that convert or forward the device data to the context management layer.
The context & process management layer consists of the IoT Broker, IoT Discovery and IoT Knowledge Server from NEC, and the Orion Context Broker (data context broker), which was deployed by OdinS. The IoT Knowledge Server can be used for sending SPARQL updates (pushing data through SPARQL) and for later accessing the data through SPARQL (by SPARQL app developers). The data context broker and the IoT Broker can be used for accessing data through the NGSI-10 interface. Process management is handled by the FogFlow service orchestrator, which is integrated with the IoT Broker; this component orchestrates tasks and is used for leveraging cloud and edge computing resources in an optimized way.
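To give a feel for the NGSI-10 access path, here is a sketch of a queryContext request body as accepted by NGSI v1 brokers such as the Orion Context Broker. The entity id, type, attribute name and broker URL are hypothetical; the request is only constructed here, not actually sent:

```python
import json

# Sketch of an NGSI-10 (NGSI v1) queryContext request. The endpoint,
# entity id/type and attribute name are hypothetical examples; a real
# deployment would use the platform's own broker and data model.
BROKER_URL = "http://broker.example.org:1026/v1/queryContext"  # hypothetical

payload = {
    "entities": [
        {"type": "WaterSensor", "isPattern": "false", "id": "Sensor1"}
    ],
    "attributes": ["waterLevel"],
}

body = json.dumps(payload)
print(BROKER_URL)
print(body)

# Actually sending it would look roughly like:
#   requests.post(BROKER_URL, data=body,
#                 headers={"Content-Type": "application/json",
#                          "Accept": "application/json"})
```

The same entity data, once pushed to the IoT Knowledge Server, becomes reachable through SPARQL instead of NGSI-10.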
The security layer consists of five components, which were provided by OdinS.
These components are currently integrated with the Orion Context Broker; NEC is also working on the integration with the IoT Broker (in progress, by Everton Luis from NEC).
Ernoe here (Hopefully)
Copied from Gurkan:
After showing the previous platform view, we list the next steps decided by the partners.
Step 1: the FogFlow component will be deployed in the PF (this component is already integrated with the IoT Broker).
Step 2: we are currently integrating the security components from OdinS with the IoT Broker too.
Step 3: AGT has started working on an “adapter” component, which will convert the event management NGSI data into SPARQL data. This data can then be pushed to the IoT Knowledge Server.
Step 4: we did tests with Water Management use case data arriving at the PF.
The next step is collecting data from the other use cases (in progress).
Lastly, other use cases may want to implement their own NGSI-to-SPARQL adapter to make their data accessible through SPARQL.
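Such an NGSI-to-SPARQL adapter can be quite small. The following is a minimal sketch in the spirit of the adapter mentioned above, turning one NGSI context entity into a SPARQL INSERT DATA update that could be pushed to the IoT Knowledge Server; the namespace and the flat predicate naming are assumptions, not the AGT component’s actual mapping:

```python
# Minimal sketch of an NGSI-to-SPARQL adapter. The namespace URI and
# the one-predicate-per-attribute mapping are illustrative assumptions;
# the actual adapter's RDF vocabulary may differ.
NS = "http://cpaas.example.org/ns#"  # hypothetical namespace

def ngsi_to_sparql(entity: dict) -> str:
    """Convert an NGSI entity {id, type, attributes} into a SPARQL update."""
    subject = f"<{NS}{entity['id']}>"
    # Entity type becomes an rdf:type triple; each attribute becomes
    # one predicate with a plain literal value.
    triples = [f"{subject} a <{NS}{entity['type']}> ."]
    for attr in entity.get("attributes", []):
        triples.append(f"{subject} <{NS}{attr['name']}> \"{attr['value']}\" .")
    return "INSERT DATA {\n  " + "\n  ".join(triples) + "\n}"

update = ngsi_to_sparql({
    "id": "Sensor1",
    "type": "WaterSensor",
    "attributes": [{"name": "waterLevel", "value": "1.8"}],
})
print(update)
```

The resulting update string would then be POSTed to the Knowledge Server’s SPARQL update endpoint, after which the data is queryable through SPARQL alongside the rest of the knowledge base.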
Of course, this outlook for Y2 is already covered and partially handled in D3.3, and will be completed later in D3.5 (M24).