These slides present the advantages of a microservices design, using a simulated Covid-19 data collection and publishing pipeline for Canada as the running example.
[All data flows and assumptions are simulated, for presentation purposes only]
Microservices Presentation from Steve Garimabajpai
This document discusses microservice design for a public health data system. It proposes breaking up the monolithic system into independent microservices for different domains like data collection, processing, and export. This would make the system more flexible, fault-tolerant, and scalable. For example, if a province changes its data format, only the relevant microservice needs to be updated instead of the whole system. The document also cautions that microservice architectures require robust orchestration tools like Kubernetes and DevOps practices to manage deployment, monitoring, and maintenance at scale.
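The province-format example above can be sketched as one adapter per province behind a shared ingestion interface, so a format change touches only that adapter. The adapter names, record layouts, and schema below are hypothetical, not taken from the presentation.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CaseReport:
    province: str
    report_date: date
    new_cases: int

class OntarioAdapter:
    # Hypothetical: Ontario reports CSV rows as "YYYY-MM-DD,cases"
    def parse(self, line: str) -> CaseReport:
        day, cases = line.split(",")
        return CaseReport("ON", date.fromisoformat(day), int(cases))

class QuebecAdapter:
    # Hypothetical: Quebec reports semicolon-separated "cases;DD/MM/YYYY"
    def parse(self, line: str) -> CaseReport:
        cases, day = line.split(";")
        d, m, y = (int(p) for p in day.split("/"))
        return CaseReport("QC", date(y, m, d), int(cases))

ADAPTERS = {"ON": OntarioAdapter(), "QC": QuebecAdapter()}

def ingest(province: str, line: str) -> CaseReport:
    # If a province changes its feed, only its adapter is updated;
    # every other service keeps consuming the common CaseReport schema.
    return ADAPTERS[province].parse(line)
```

The point of the sketch is the boundary: downstream processing and export services depend only on `CaseReport`, never on a province's raw format.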
The Health and Social Care Information Centre (HSCIC) manages complex national infrastructure and online services including NHS Choices, which receives over 48 million visits per month. HSCIC implemented Splunk to gain insights from massive logs and monitor performance, troubleshoot issues, and manage unpredictable traffic. Splunk dashboards provide real-time statistics and allow HSCIC to monitor the impact of changes, identify root causes of problems, and report key metrics to management. HSCIC plans to expand Splunk use for analytics, partner tracking, product monitoring, and integrating additional tools.
BDE SC6-ws-05/12/2016 technology part - SWC - BigData_Europe
The document discusses a pilot project within the Big Data Europe program to create an online dashboard of economic data from municipal budgets. The project aims to aggregate budget and spending data from multiple sources and formats, normalize it using RDF, and analyze and visualize the data to provide insights for citizens, researchers, and decision makers. Technical components used include Apache Flume, Kafka, Spark, HDFS, Virtuoso triplestore, and D3 for visualization. An initial version has been implemented and will be evaluated with municipalities and other stakeholders.
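The normalization step described above can be illustrated by turning one budget row into RDF-style triples. The URIs, predicates, and plain-string N-Triples output below are invented for illustration; a real pilot would use a published ontology and a proper RDF library.

```python
def budget_row_to_triples(municipality: str, year: int, category: str, amount: float):
    # Hypothetical URI scheme and vocabulary, for illustration only
    subj = f"<http://example.org/budget/{municipality}/{year}/{category}>"
    return [
        f'{subj} <http://example.org/ns#municipality> "{municipality}" .',
        f'{subj} <http://example.org/ns#year> "{year}" .',
        f'{subj} <http://example.org/ns#category> "{category}" .',
        f'{subj} <http://example.org/ns#amount> "{amount}" .',
    ]
```

Once rows from every source are expressed against one vocabulary like this, aggregation and visualization can query a single triplestore instead of reconciling per-municipality formats.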
This document discusses LA Nacion's efforts to focus on data journalism despite not having programmers on staff. They started small by learning tools for non-programmers and befriending IT departments. They developed customized scrapers to convert monthly inflation and bus subsidy reports from PDFs into datasets. This involved monitoring PDF sources for changes and converting over 400 files and 10,000 records. They launched an open data portal and raised awareness through tutorials and blogs to activate public demand for these newly accessible government datasets.
Open Federal Content & Data at the CDC and FDA CTP (OSCON 2014) - Forum One
Learn how the Centers for Disease Control (CDC) and the Food and Drug Administration's Center for Tobacco Products (FDA CTP) are approaching the open data initiative by opening their federal content for syndication by developers. The massive data stores of the CDC and FDA are now being made available through open APIs that allow developers to access and present this wealth of content across third party sites.
This presentation was made by Eric Davis and Steven Meloan of Forum One, and Thom Williams of the CDC at the 2014 Open Source Convention (OSCON).
The document discusses open data for open government and the benefits of publishing government data in a semantic, linked, and open format on the web. It provides examples of open data initiatives in the US, UK, and other countries that have led to the development of many applications by third parties using publicly available government data. The speaker advocates that governments publish not just documents but the underlying data to allow others to build new sites and applications to make use of the information.
BigDataCloud Sept 8 2011 meetup - Big Data Analytics for Health by Charles Ka... - BigDataCloud
The document summarizes a presentation about the HPCC Platform, an open-source big data analytics platform. It discusses who uses the platform, what it can do, examples of fraud detection proofs-of-concept using the platform, and how to get started with the platform.
Some background on Lichfield's involvement in Open Data, some tips on councils getting involved themselves, and the future of the open data movement.
Delivered on the 25th March 2011 at the Scottish regional SOCITM meeting.
This document discusses big data and provides examples of large datasets. It begins by defining the 4 Vs of big data: Volume, Variety, Velocity, and Veracity. Examples are then given to illustrate the scale of data in each category. These include unstructured text, multimedia, social media streams, sensor data and more. Several large public datasets are also described from sources like Amazon, Data.gov, Wikipedia, and biological databases. The document concludes with a list of popular data mining software tools.
The document summarizes resources for social science datasets available through the British Library and other sources. It provides an overview of dataset collections through the British Library Datasets Programme, the Economic and Social Data Service (ESDS), and other international organizations. It also discusses tools for analyzing, visualizing, citing, depositing, and archiving datasets to ensure they are accessible resources for research.
The document discusses the BBC's efforts to publish structured data on the web using semantic technologies. It summarizes their work to publish program and music data as linked open data using URIs and ontologies. This includes publishing data about TV and radio programs, music artists, and linking the data to external sources like MusicBrainz and DBPedia. It also discusses applications that can query and visualize this linked data, as well as next steps to publish additional BBC topic data and develop applications to leverage the web of linked data.
Web of Data and its Status on Persian Web Data Space - Ali Khalili
Linked Data, a step toward realizing the Semantic Web vision, consists of a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the past years, leading to the creation of a global data space containing many billions of assertions: the Web of Linked Data. Recently, Linked Data has received attention in domains such as libraries, e-government, e-commerce, search, news providers, e-learning, and other data integration applications. Nonetheless, adoption on the Persian Web has been very limited, and data remains trapped in many Persian websites, preventing integration and reasoning over it. One reason for this is the lack of infrastructure and NLP tools that handle the specific requirements of Persian data. This talk introduces the Linked (Open) Data principles, their lifecycle, and the steps required to realize them on the Persian Web data space.
Basic introductory talk about the Web of Linked Data, given to undergraduate and postgraduate students of Universidad del Valle (Cali, Colombia) in September 2010. Prior knowledge of the Semantic Web is assumed.
The State of the Data Warehouse in 2017 and Beyond - SingleStore
The document provides an overview of the changing analytic environment and the evolution of the data warehouse. It discusses how new requirements like performance, usability, optimization, and ecosystem integration are driving the adoption of a real-time data warehouse approach. A real-time data warehouse is described as having low latency ingestion, in-memory and disk-optimized storage, and the ability to power both operational and machine learning applications. Examples are given of companies using a real-time data warehouse to enable real-time analytics and improve business processes.
This document provides an overview of a 1-hour-10-minute training on digital health and HL7 FHIR presented by Janaka Peiris. The training covers topics such as electronic health records, interoperability challenges, healthcare data standards including FHIR, application programming interfaces, cloud computing models (IaaS, PaaS, and SaaS), and the future of digital health according to a 2022 report. The document outlines the training agenda, introduces the presenter and his background, and provides brief examples to illustrate key concepts.
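As a concrete illustration of the FHIR standard mentioned above: FHIR resources are JSON documents identified by a `resourceType`. A minimal sketch of building and serializing a Patient resource follows; the field values are invented, and only a few common fields of the R4 Patient resource are shown.

```python
import json

def make_patient(patient_id: str, family: str, given: str, birth_date: str) -> str:
    # Minimal FHIR R4 Patient resource with a handful of common fields;
    # real resources carry identifiers, extensions, and metadata as well.
    patient = {
        "resourceType": "Patient",
        "id": patient_id,
        "name": [{"family": family, "given": [given]}],
        "birthDate": birth_date,
    }
    return json.dumps(patient)
```

Because every FHIR resource shares this JSON shape, a RESTful API can exchange patients, observations, and encounters with the same generic machinery, which is the interoperability point the training makes.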
Idescat on the Google Public Data Explorer - Xavier Badosa
Idescat on the Google Public Data Explorer: The Why, the What and the near Future.
Google Public Data Explorer Day. Eurostat. Luxembourg, 30 June 2011.
US EPA Resource Conservation and Recovery Act published as Linked Open Data - 3 Round Stones
A presentation by 3 Round Stones to the US EPA on the new Linked Open Data Management System, including Linked Open Data on 4M facilities (from FRS), 25 years of Toxic Release Inventory (TRI), chemical substances (SRS), and Resource Conservation and Recovery Act (RCRA) content. This represents one of the largest Open Data projects published by a federal government agency using Open Source Software (OSS), Open Web Standards and government Open Data.
BDE SC6-hang out - technology part - SWC - Martin - BigData_Europe
The document discusses a pilot project within the Big Data Europe initiative that aims to integrate citizen budget data from multiple municipalities. The pilot will develop a platform to aggregate budget and spending data from different sources and formats to allow for analysis and visualization. Technical components like Apache Flume, Kafka, Spark and HDFS will be used to ingest, store and analyze the data. A semantic layer will consolidate the data and link it. The pilot aims to evaluate the platform with municipalities and receive feedback on analyzing a growing amount of integrated budget data over time.
Presentation project healthdata.be for hospitals in Limburg (dd. 2015.11.30.)... - healthdata be
The healthdata.be project aims to minimize data registration burdens and maximize the return on information collected. It focuses on standardizing and automating business processes, data collection architecture, information architecture, and data management. This will simplify interactions between actors and adhere to the "only once" principle of data collection. The project establishes a new service within the Institute of Public Health to facilitate data exchange between healthcare professionals and researchers according to privacy and confidentiality standards.
NHS Choices: Managing complex infrastructure to deliver critical online services - Splunk
Learn how NHS Choices analyses machine data to gain real-time insights into a complex hybrid infrastructure. With this operational intelligence NHS Choices can resolve issues faster, manage unpredictable traffic, easily report to management and ultimately keep the 'front-door to the NHS' open for more than 40 million visitors a month.
Best Hospital and Healthcare Website Work Done by E Vision Technologies - India - tej_chopra
E Vision Technologies is a fast-growing IT services company. It commenced operations in 1998 providing web-based solutions, gradually evolving its core competencies into a pure IT consultancy and IT services company. Today, it provides a wide range of IT services to a large number of customers, including Global-1000 as well as India-100 companies.
Data Management Systems for Government Agencies - with CKAN - Steven De Costa
Over the last two days (5th and 6th of November 2015) I was very happy to present to a range of Victorian Government agencies and give them some context on what data management can do for their organisations.
From first principles we went through why data was important and what infrastructure was already in place via data.vic.gov.au for them to leverage. We covered examples of how other agencies, such as the Office of Environment and Heritage in NSW, are rebuilding their data management system to provide a more efficient pipeline for publishing internal and public data.
As always, I could not help highlighting the awesome leadership of WA Parks and Wildlife and the work done by Florian Mayer as the best case example for reducing the costs and friction often involved with publishing data as contextually marked up knowledge.
We covered a number of scenarios where the concept of resource containers for data were considered. This created valuable feedback which has further galvanized my thoughts about how to further extend CKAN to meet the needs of both private and open data portals, and other forms of realtime or unstructured data.
Emerging Trends in Data Visualization and Dissemination discusses providing statistical data through application programming interfaces (APIs), and as a service rather than as goods. It describes how mashups combine data from multiple sources into new applications and services. The document outlines the benefits of mashups, how they work by retrieving data through APIs from different websites, and factors to consider when planning a mashup, such as data sources and programming languages. It provides examples of the United Nations' UNData and Comtrade initiatives, which make international statistical databases freely available through APIs and web services.
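The mashup pattern described above amounts to fetching JSON from several APIs and joining the records on a shared key. A minimal sketch follows, with hard-coded stand-ins for the two API responses; a real mashup would fetch these over HTTP from the respective services.

```python
import json

# Stand-ins for JSON payloads that would be fetched from two statistical APIs
population_api = json.loads(
    '[{"code": "CA", "population": 38000000}, {"code": "FR", "population": 68000000}]'
)
trade_api = json.loads('[{"code": "CA", "exports_usd": 501000000000}]')

def mashup(pop_rows, trade_rows):
    # Join the two sources on their shared country code; countries missing
    # from the trade source simply keep only their population fields.
    trade_by_code = {row["code"]: row for row in trade_rows}
    combined = []
    for row in pop_rows:
        extra = trade_by_code.get(row["code"], {})
        combined.append({**row, **extra})
    return combined
```

The design choice worth noting is the join key: a mashup is only as good as the shared identifiers (country codes, dataset IDs) its source APIs agree on.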
This document provides an overview of specialized information systems used in healthcare, including knowledge management systems, expert systems, and virtual reality systems. It describes how information systems are extensively used across business operations and clinical care in healthcare settings like hospitals and clinics. These systems aim to support operations, documentation, communication, patient care and safety, while maintaining security and connectivity across internal and external systems.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providers - akankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Salesforce Integration for Bonterra Impact Management (fka Social Solutions A... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on integration of Salesforce with Bonterra Impact Management.
Interested in deploying an integration with Salesforce for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
More Related Content
Similar to Microservices Introduce for DevOps meetup
Monitoring and Managing Anomaly Detection on OpenShift.pdf - Tosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Microservices Introduction for DevOps Meetup
1. From Project to Product Transformation
Microservice Design
Canada DevOps Community of Practice Meetup (16 April 2020)
2. Steve Zheng
❖ Focus on Agile, DevOps, and AWS
❖ Two decades of experience in:
- The IT industry
- Java development
- Project management
❖ Entrepreneur - 5 years of experience building and operating a company that has grown from 5 employees to 1,600 today
DevOps Coach
4. This was last week’s Covid-19 report.
Let’s start from here.
5. The data is published
by PHAC - Public
Health Agency of
Canada.
Now suppose you are
the technical architect
of PHAC.
6. Data flow diagrams in the following slides are for illustration purposes only.
This simulation is used to clarify the microservices.
Anyone working in public health who is willing to comment on the data flow is welcome to contact me.
8.–22. Simulated Data Flow
Sources -> provincial/territorial systems -> PHAC -> publishing targets

Sources:
- Hospitals
- Laboratories
- IoT devices
- 34 Public Health Units (e.g. Ottawa / OPH)

Provincial and territorial systems (10 provinces, 3 territories):
- Ontario (iPHIS)
- B.C. (PPHIS)
- ... ...

Publishing targets (from PHAC):
- WHO
- Canadian government websites
- 3rd-party websites

Data exchange formats: API, DB, and CSV on the collection side; API, HTML, and PDF on the publishing side. Each party operates under different security policies.

OPH: Ottawa Public Health
iPHIS: Integrated Public Health Information System
PPHIS: Provincial Public Health Information System
PHAC: Public Health Agency of Canada
WHO: World Health Organization
24. You’re the PHAC Technical Architect.
What’s your design?
PHAC
25. You’re the PHAC Technical Architect.
What’s your design?
PHAC
Features:
1. Data collection
   - From 10 provinces & 3 territories
   - Support API, DB, CSV, etc.
2. Data cleansing and correction
3. Data aggregation
4. Data loading
5. Data export
   - Support API, PDF, HTML, CSV, etc.
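The collection feature above can be sketched as one small adapter per data source, each normalizing its own input format into a common record shape before aggregation. This is a minimal, hypothetical illustration: the function names, field names, and sample payloads are invented for the simulation, not PHAC's actual interfaces.

```python
# Hypothetical sketch: one collector adapter per source, so a format change
# in one province only touches that province's adapter.
import csv
import io
import json


def collect_ontario_api(payload: str) -> list:
    """Ontario (iPHIS) publishes JSON via API in this simulation."""
    return [{"region": "ON", "cases": r["cases"]} for r in json.loads(payload)]


def collect_bc_csv(payload: str) -> list:
    """B.C. (PPHIS) publishes CSV files in this simulation."""
    reader = csv.DictReader(io.StringIO(payload))
    return [{"region": "BC", "cases": int(r["cases"])} for r in reader]


def aggregate(records: list) -> dict:
    """The aggregation step only ever sees normalized records."""
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0) + r["cases"]
    return totals


ontario = collect_ontario_api('[{"cases": 120}, {"cases": 80}]')
bc = collect_bc_csv("cases\n40\n10\n")
print(aggregate(ontario + bc))  # {'ON': 200, 'BC': 50}
```

Because each adapter owns exactly one source format, it maps naturally onto the per-province "Data Collection MS" in the microservice design later in the deck.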
26. Monolithic Design vs. Microservice Design

Monolithic Design: 1 package
1. Data collection (support API, DB, CSV, etc. from 10 provinces & 3 territories)
2. Data cleansing and correction
3. Data aggregation
4. Data loading
5. Data export (API, PDF, HTML, CSV)
Deploy 1 package to multiple servers (Server 1, Server 2, Server 3)

Microservice Design: deploy 19+ packages with 19×2 instances to the cloud
Domain 1: Data Collection
- Ontario Data Collection MS
- B.C. Data Collection MS
- ... ... (10 + 3 in total)
Domain 2: Data Process
- Cleansing and Correction MS
- Aggregation MS
- Loading MS
Domain 3: Data Export
- API Export MS
- PDF Export MS
- CSV Export MS
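Each box in the microservice column is an independently deployable process. As a rough illustration, here is what a stripped-down "CSV Export MS" could look like using only the Python standard library; the data, port, and field names are placeholders for this simulation, and a real deployment would package the service in its own container image.

```python
# Minimal sketch of a standalone "CSV Export MS": it owns one job (serve the
# aggregated data as CSV) and nothing else. Illustrative only.
import csv
import io
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder data; in the design above this would come from the Loading MS.
DATA = [{"region": "ON", "cases": 200}, {"region": "BC", "cases": 50}]


def to_csv(rows):
    """Render a list of records as CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["region", "cases"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()


class ExportHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = to_csv(DATA).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/csv")
        self.end_headers()
        self.wfile.write(body)


# To run the service on its own port (each MS scales independently):
#   HTTPServer(("", 8080), ExportHandler).serve_forever()
```

Because the service is this small, scenario 3 later in the deck ("deploy more PDF Export MS") amounts to starting more copies of one tiny process rather than redeploying the whole package.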
32.–37. 3 Scenarios: Microservice Design vs. Monolithic Design

Scenario 1: A province changes its data format
- Microservice: change that province’s Data Collection MS, test, and deploy
- Monolithic: modify the package, test EVERYTHING, and deploy

Scenario 2: PHAC adds a new index (e.g. asymptomatic infection)
- Microservice: modify the Aggregation MS, test, and deploy (keep the legacy version in the production environment so that 3rd parties have a window to update their platforms)
- Monolithic: modify the package, test EVERYTHING, deploy the legacy version on new servers (VMs), notify all 3rd parties, and have them update the API to the new address before adding the new index to their platforms

Scenario 3: PDF requests from 3rd parties cause the service to slow down
- Microservice: deploy more instances of the PDF Export MS
- Monolithic: add more servers (VMs) and deploy more packages
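Scenario 2's "keep the legacy version in production" is ordinary API versioning: both response shapes stay live until consumers migrate. A minimal sketch, with invented paths, field names, and data:

```python
# Sketch of running a legacy response format alongside a new one that adds
# the asymptomatic-infection index, so 3rd parties migrate at their own pace.

RAW = {"region": "ON", "cases": 200, "asymptomatic": 35}


def export_v1(record):
    """Legacy shape: existing consumers keep working unchanged."""
    return {"region": record["region"], "cases": record["cases"]}


def export_v2(record):
    """New shape: includes the new asymptomatic-infection index."""
    return dict(record)


# Both versions are routable at the same time.
ROUTES = {"/v1/report": export_v1, "/v2/report": export_v2}


def handle(path):
    return ROUTES[path](RAW)


print(handle("/v1/report"))  # {'region': 'ON', 'cases': 200}
print(handle("/v2/report"))  # {'region': 'ON', 'cases': 200, 'asymptomatic': 35}
```

In the microservice column this versioning lives inside one service (the Aggregation MS); in the monolithic column the same effect requires duplicating the entire package onto new servers.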
40. Monolithic = One-Man Band
• Monolithic: one system with everything.
  When loading, load everything; when unloading, unload everything.
• Fragile: if one instrument breaks, the whole system fails.
• Scalable? Can this musician hold 2x the instruments?
44.–46. Microservice = Orchestra
- The Stage -> the Cloud or a Kubernetes cluster
- “Violin” Microservice -> deployed in 16 Docker instances
- “Cello” Microservice -> deployed in 8 Docker instances
- Conductor -> DevOps
- Melody -> DevOps principles and tools
47. DevOps Is the Way to Microservices
The problem with microservices is orchestration, e.g.:
• 3 instances of 1 package vs. 38 instances of 19+ packages; by 2017, Netflix’s architecture consisted of over 700 loosely coupled microservices, and it continues to grow
• Function calls within one platform vs. API calls across multiple platforms, which cause performance and security issues
Don’t adopt microservices without implementing DevOps! The following cannot be done manually:
• Deployment (load and unload microservices without shutting down the service)
• Monitoring to identify and restart unhealthy instances
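That last bullet, identifying and restarting unhealthy instances, is exactly the reconcile loop an orchestrator such as Kubernetes runs on your behalf (via liveness probes and restart policies). A toy illustration of the idea, with invented instance names and a simulated probe:

```python
# Toy sketch of an orchestrator's health-check loop: probe every instance,
# restart the unhealthy ones. Purely illustrative; Kubernetes does this for
# you via liveness probes and restart policies.


def reconcile(instances, probe, restart):
    """Probe each instance; restart and report the unhealthy ones."""
    restarted = []
    for name in instances:
        if not probe(name):
            restart(name)
            restarted.append(name)
    return restarted


# Simulated fleet: pdf-export-2 fails its health check.
health = {"pdf-export-1": True, "pdf-export-2": False, "csv-export-1": True}
restart_log = []
print(reconcile(health, probe=health.get, restart=restart_log.append))
# ['pdf-export-2']
```

With 38+ instances across 19+ packages, running this loop by hand is not feasible, which is why the slide ties microservices so tightly to DevOps tooling.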