Ever wonder how these concepts contrast with and yet complement each other in a next-generation system?
Enterprise semantics
Knowledge graphs
Model-driven development
Digital twins
Self-Sovereign Identity
Own your own data
Data deduplication
Autonomous agents
Large language models
Data-Centric Architecture combines the major technologies behind each of these concepts. In fact, it’s essential to the real-world implementation of general AI, enabling the context behind contextual computing, DARPA’s Third Phase of AI. To deliver on that promise, DCA needs to simplify and scale data ecosystems using these pieces of the puzzle.
This talk will provide an overview of how these pieces of the data-centric puzzle are fitting together. It helps to see how these pieces can fit together side-by-side in an enterprise context and to envision next-gen systems from the viewpoint of some of the most demanding enterprise use cases.
It also pays to study how one industry vertical is moving ahead and contrast that progress with your own industry’s. As the data-centric ecosystem emerges and the benefits of true digitization start to pay off, many more techniques can be borrowed from other verticals and applied in your own. This talk will summarize several powerful recent case studies and highlight the key takeaways.
The FAIR data movement and 22 Feb 2023.pdf - Alan Morrison
To realize the promise of FAIR data, companies must be data mature. They must adopt data-centric architecture and the #FAIR (findable, accessible, interoperable and reusable) principles. When they do, the data they need will be linked and self-describing: when queried, the data will tell you where it is.
A desiloed, #semantic graph data abstraction--at this point the only feasible means of creating FAIR data--is not only the means to data discovery, but also a path to model-driven development and data sharing at scale, both of which will break an organization's habit of duplicating data and logic.
This webinar highlights fresh enterprise case studies that are starting to realize the dream of #FAIRdata, as well as how these companies are succeeding:
- Zero copy integration: How to think about eliminating #dataduplication and stop the application buying binge that only exacerbates the problem.
- Dynamic, unified data model: Standard graphs provide a means of modeling once and using anywhere, for conceptual, logical and physical purposes all at once.
- Persuasion and teamwork: The #graph approach provides an ideal way to loop business units and domain experts in and empower them to recommend model changes that are easily implemented.
The whole process is bringing #enterprises like Walmart, Uber, Goldman Sachs and Nokia into the age of #contextualcomputing. Learn how to be a fast follower by thinking big, but starting small.
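The "linked and self-describing" idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (the dataset names, predicates, and storage URIs are invented, and a real deployment would use an RDF store and SPARQL rather than Python tuples): each dataset is a set of subject-predicate-object triples, one of which records where the data physically lives, so a query over the graph can answer "where is this data?" without a separate catalog.

```python
# RDF-style triples for two hypothetical datasets. The "ex:storedAt"
# triple is what makes the data self-describing about its location.
triples = {
    ("ex:sales-2023", "rdf:type", "ex:Dataset"),
    ("ex:sales-2023", "dct:title", "2023 Sales Figures"),
    ("ex:sales-2023", "ex:storedAt", "s3://corp-data-lake/sales/2023/"),
    ("ex:customers", "rdf:type", "ex:Dataset"),
    ("ex:customers", "dct:title", "Customer Master"),
    ("ex:customers", "ex:storedAt", "postgres://crm/public.customers"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

def locate(dataset):
    """'The data, when queried, tells you where it is.'"""
    hits = match(s=dataset, p="ex:storedAt")
    return hits[0][2] if hits else None

print(locate("ex:sales-2023"))  # s3://corp-data-lake/sales/2023/
```

The point of the sketch is the shape of the abstraction, not the storage: because location and schema travel with the data as ordinary triples, any consumer that can query the graph can discover both.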
FAIR data_ Superior data visibility and reuse without warehousing.pdf - Alan Morrison
The advantages of semantic knowledge graphs over data warehousing when it comes to scaling quality, contextualized data for machine learning and advanced analytics purposes.
A Semantic Web Primer: The History and Vision of Linked Open Data and the Web 3.0
There is a transformational change coming to the World Wide Web that will fundamentally alter how its vast array of data is structured and, as a result, greatly enhance the way humans and machines interact with this indispensable resource. Given the inertia of existing infrastructure, the transition will be evolutionary rather than revolutionary, and indeed has been envisioned since the inception of the web. Come join us for a layman's look at the nature of Web 3.0, its historical underpinnings, and the opportunities it presents.
The average workday has become disjointed. While workers enjoy the “freedom” that comes from being able to do their jobs without being chained to their desks, it is not without its obstacles. There is certainly no shortage of mobile apps for employees, yet each app only does one thing well, and it is becoming clear that work information is spread out between too many apps. As employees rely more on mobile access, the elusive single-screen, unified mobile experience could be the answer to enterprise information discovery woes. The presentation discusses ways to overcome the information overload challenge using contextual capabilities now provided by mobile devices, a consolidated user experience, and activity streams.
THE SOCIALIZED INFRASTRUCTURE OF THE INTERNET ON THE COMPUTING LEVEL - ijcsit
To share a huge amount of heterogeneous information with a large population of heterogeneous users, the Internet should be reconstructed at the computing level, since this crucial infrastructure was designed without a proper understanding of how it would be used. The upgraded infrastructure consists of five layers, from bottom to top: routing, multicasting, persisting, presenting, and humans. The routing layer establishes the fundamental substrate and finds resources in accordance with social disciplines. The multicasting layer disseminates data at high performance and low cost on top of the routing layer. The persisting layer stores and accesses persistent data efficiently with minimal dedicated resources. The presenting layer not only shows connected local views to users but also absorbs their interactions to guide adjustments of the underlying layers. Unlike the lower software layers, the topmost layer is made up entirely of humans, i.e., users, including individuals and organizations, the social capital that dominates the Internet. Moreover, beyond the usual arrangement in which a lower layer supports only its immediate upper layer, the humans layer influences the lower layers by transferring its social resources to them, which distinguishes this design from traditional layered systems. Those resources drive adaptations and adjustments across all the software layers, since each must follow social rules, and the updated underlying layers in turn return the latest consequences to users.
The current status of Linked Open Data (LOD) shows evidence of many datasets available on the Web in RDF. In the meantime, organizations still face many challenges in their journey of publishing five-star datasets on the Web. Those challenges are not only technical but also organizational. At a moment when connectionist AI is gaining a wave of popularity across many applications, LOD needs to go beyond guaranteeing the FAIR principles. One direction is to build a sustainable LOD ecosystem with FAIR-S principles. In parallel, LOD should serve as a catalyst for solving societal issues (LOD for Social Good) and for personal empowerment through data (Social Linked Data).
In recent years governments and research institutions have emphasized the need for open data as a fundamental component of open science. But we need much more than the data themselves for them to be reusable and useful. We need descriptive and machine-readable metadata, of course, but we also need the software and the algorithms necessary to fully understand the data. We need the standards and protocols that allow us to easily read and analyze the data with the tools of our choice. We need to be able to trust the source and derivation of the data. In short, we need an interoperable data infrastructure, but it must be a flexible infrastructure able to work across myriad cultures, scales, and technologies. This talk will present a concept of infrastructure as a body of human, organisational, and machine relationships built around data. It will illustrate how a new organization, the Research Data Alliance, is working to build those relationships to enable functional data sharing and reuse.
Data centric business and knowledge graph trends - Alan Morrison
The deck for my kickoff keynote at the Data-Centric Architecture Forum, February 3, 2020. Includes related data, content, and architecture definitions and fundamental explanations, knowledge graph trends, market outlook, transformation case studies and benefits of large-scale, cross-boundary integration/interoperation.
Building collaborative Machine Learning platform for Dataverse network. Lecture by Slava Tykhonov (DANS-KNAW, the Netherlands), DANS seminar series, 29.03.2022
The Grid means the infrastructure for the Advanced Web: for computing, collaboration and communication.
The goal is to create the illusion of a simple yet large and powerful self-managing virtual computer out of a large collection of connected heterogeneous systems sharing various combinations of resources.
“Grid” computing has emerged as an important new field, distinguished from conventional distributed computing by its focus on large-scale resource sharing, innovative applications, and, in some cases, high-performance orientation.
We present the Grid concept by analogy with the electrical power grid, along with the Grid vision.
Graph Foundations for Advanced Analytics and Collaboration - Alan Morrison
Presentation on Knowledge Graph Foundations and how they're used.
Presented at TechTarget ML and AI Summit
September 20, 2022
View the full video recording of this deck at https://www.brighttalk.com/webcast/9059/556690
Dcaf transformation & kg adoption 2022 - Alan Morrison
A keynote presentation on knowledge graph adoption trends and how to do digital transformation differently.
Delivered at the Enterprise Data Transformation & Knowledge Graph Adoption
A Semantic Arts DCAF Event
February 28, 2022
Scaling the mirrorworld with knowledge graphs - Alan Morrison
After registration at https://www.brighttalk.com/webcast/9273/364148, you can view the full recording, which begins with Scott Abel's intro for a few minutes, then my talk for 20 minutes, and then Sebastian Gabler's. First presented on October 23 at an SWC webinar.
Conclusions:
(1) The mirrorworld (a world of digital twins, which will be 25 years in the making, according to Kevin Kelly) will require semantic knowledge graphs for interaction and interoperability.
(2) This fact implies massive future demand for knowledge graph technology and other new data infrastructure innovations, comparable to the scale of oil & gas industry infrastructure development over 150 years.
(3) Conceivably, knowledge graphs could address a market demand projected at $205 billion by 2021 across graph databases, information management, digital twins, conversational AI and virtual assistants, and knowledge bases/accelerated training for deep learning. The problem is that awareness of the technology is low, and the semantics community that understands it is still quite small.
(4) Over the coming decades, knowledge graphs promise both scalability and substantial efficiencies in enterprises. But lack of awareness of their potential and of how to harness them will continue to be a stumbling block to adoption.
Data-centric design and the knowledge graph - Alan Morrison
The #knowledgegraph--smart data that can describe your business and its domains--is now eating software. We won't be able to scale AI or other emerging tech without knowledge graphs, because those techs all require a transformed data foundation, large-scale integration, and shared data infrastructure.
Key to knowledge graphs are #semantics, #graphdatabase technology and a Tinker Toy-style approach to adding the missing verbs (which provide connections and context) back into your data. A knowledge graph foundation provides a means of contextualizing business domains, your content and other data, for #AI at scale.
This is from a talk I gave at the Data Centric Design for SMART DATA & CONTENT Enthusiasts meetup on July 31, 2019 at PwC Chicago. Thanks to Mary Yurkovic and Matt Turner for a very fun event!
Data-Centric Business Transformation Using Knowledge Graphs - Alan Morrison
From a talk at the Data Architecture Summit in Chicago in 2018--reviews digital transformation and what deep transformation really implies at the data layer. Cross-enterprise knowledge graphs are becoming feasible and can be a key enabler of deep transformation.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... - Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 - Tobias Schneck
As AI pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
JMeter webinar - integration with InfluxDB and Grafana - RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
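Under the hood, the JMeter-to-InfluxDB leg of this stack comes down to writing time-series points in InfluxDB's line protocol (measurement, tags, fields, timestamp), which JMeter's backend listener emits and Grafana then charts. Here is a minimal sketch of that encoding; the measurement, tag, and field names are hypothetical, not the listener's actual schema:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Encode one metric point as an InfluxDB line-protocol string:
    measurement,tag=val,... field=val,... timestamp
    Integer fields get InfluxDB's trailing 'i' suffix."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# One JMeter-style sample: transaction label as a tag, response time
# in ms and a sample count as fields, nanosecond timestamp.
line = to_line_protocol(
    "jmeter",
    tags={"transaction": "login", "status": "ok"},
    fields={"avg": 132.5, "count": 1},
    timestamp_ns=1_700_000_000_000_000_000,
)
print(line)
```

Grafana never talks to JMeter directly: it only queries points like this back out of InfluxDB, which is why the dashboard keeps working no matter how many load generators are writing.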
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
- State of global ICS asset and network exposure
- Sectoral targets and attacks, as well as the cost of ransom
- Global APT activity, AI usage, actor and tactic profiles, and implications
- Rise in volumes of AI-powered cyberattacks
- Major cyber events in 2024
- Malware and malicious payload trends
- Cyberattack types and targets
- Vulnerability exploit attempts on CVEs
- Attacks on countries – USA
- Expansion of bot farms – how, where, and why
- In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
- Why are attacks on smart factories rising?
- Cyber risk predictions
- Axis of attacks – Europe
- Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... - Ramesh Iyer
In today's fast-changing business world, companies must adapt and embrace new ideas to keep up with the competition. However, fostering a culture of innovation takes real work. It takes vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4
DCA Symposium 6 Feb 2023.pdf
1. How semantic systems are coming together
Alan Morrison
Enterprise Data Transformation Symposium
Presented on February 6, 2023
2. Topics covered in this talk:
● Semantics
● Data centricity
● Knowledge graph and data mesh types in use
● Decentralization and semantics
● Digital twins and agents vs. APIs
● Reducing duplication and rework
● Large language models and semantics
4. Web semantics harnesses the power of machine-readable knowledge models to create quality data shared at scale
Semantics is the science of shared meaning in the form of contextualized data.
– John Sowa, AWS, 2020
5. Semantics is the means to FAIR, smart, siloless data sharing
James Kobielus, 2016; Association of European Libraries, 2017
6. Problem: Semantics is the creative, capable stepmother the kids all resent
● The kids miss Legacy Mom and insist on keeping the house the way Mom had it, despite its evident problems.
● Legacy Dad is slowly dying. He plans to will the bulk of the estate to the stepmother, for good reason: she knows how to manage, keep the family together, and fix what’s wrong with the house.
● The stepmother has a great vision for how to fix the house and bring the family together, but the kids won’t hear of it.
8. How to fix the house – one unified, multi-domain model
9. Solution: How shared graph semantics helps
● Boosts the relevance and meaningfulness of results (a product of data and logic transparency and cohesiveness)
● Contextualizes data for better management and reuse with relationship logic
● Scales meaningful connections between contexts (relevant relationships living with entities)
● Enables Metcalfe’s network-of-networks effect (network_effect^N)
● Enables model-driven development (code once, reuse anywhere)
● Spans the management gap between structured data and unstructured “content” (content being digitized and thus a subset of data)
● Scales overall data (and most application logic) management capability (organic growth and evolution of the full resource)
● Moves beyond APIs to empower digital twins and agents (self-describing subgraphs and the agents that do the message management)
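The core idea of graph semantics — facts stored as subject-predicate-object relationships that live with the entities they connect — can be sketched in a few lines. This is a minimal, illustrative in-memory triple store; the sample triples and the `query` helper are inventions for this sketch, not anything from the talk.

```python
# Facts as subject-predicate-object triples: the relationship logic
# travels with the data, so any pattern of connections can be queried
# without a per-application schema.
triples = {
    ("Monasteriven", "producedIn", "Ireland"),
    ("Monasteriven", "isA", "Whiskey"),
    ("Whiskey", "subClassOf", "Beverage"),
}

def query(s=None, p=None, o=None):
    """Pattern-match triples; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Everything asserted about one entity, in context:
print(query(s="Monasteriven"))
# All class relationships the graph knows about:
print(query(p="subClassOf"))
```

Production systems express the same pattern with RDF stores and SPARQL rather than Python tuples, but the contrast with an application-centric design is the same: one shared model, queried from many contexts.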
10. Data centricity = more human-machine interaction from a lifecycle perspective
11. Terpsichore: Human-in-the-loop semantic data lifecycle for urban heritage/smart cities
An iterative, bottom-up, user-driven process:
● User engagement
● Collection
● Digestion
● Semantic classification
● Automated suggestion loops
Results:
● Enrichment of useful data collections
● Improved dialogue between user communities
Artopoulos, Giorgos & Smaniotto Costa, Carlos (2019). “Data-Driven Processes in Participatory Urbanism: The ‘Smartness’ of Historical Cities.” Architecture and Culture 7: 1–19. https://doi.org/10.1080/20507828.2019.1631061
13. Problem: The “modern data stack” perpetuates the application-centric architecture
A different database and data model for every app.
From the Modern Data Stack to Knowledge Graphs by Bob Muglia, RelationalAI, Knowledge Graph Conference, June 2022
14. Solution: RelationalAI and Goldman Sachs collaborate on semantic, data-centric, model-driven apps
“The model becomes the program, and so business analysts can become involved, and make changes to the data structures.
“Think about thousands of people getting involved who know about the business — think about that!”
– Bob Muglia, RelationalAI, Knowledge Graph Conference, June 2022
Legend apps are available via the GS app store.
16. The Five Commingled Phases of Compute, Networking and Storage
1st – Mainframe and Green Screens: centralized storage and compute, with minimal networking
2nd – Client-Server and Desktops: application distribution via proprietary and IP networking
3rd – Early Web (on Client-Server): simple web hosting + legacy client-server storage
4th – Distributed Cloud and Mobile Devices: commodity servers + storage + some virtualization
5th – “Decoupled” and “Decentralized” Cloud: compute and storage more loosely coupled, virtualized, controlled and data-centric
Over time, the phases trend from more centralized and application-centric toward less centralized and data-centric. All phases are still active and evolving.
17. Data centralization versus decentralization
File:Decentralization.jpg, by Adam Aladdin, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=35018016
Ethereum’s contribution: each peer node can play a role in confirming blocks of transactions. This method also enables tamperproof smart contracts, or legal agreements expressed in self-executing code.
P2P data networks such as IPFS + blockchains = decentralized infrastructure that enables dApps.
A decentralized network still has a host, but one that’s less of a bottleneck.
18. Evolution of open source decentralized file sharing and decentralized file systems
Erik Daniel and Florian Tschorsch, “IPFS and Friends: A Qualitative Comparison of Next-Generation Peer-to-Peer Networks,” 2021, https://arxiv.org/abs/2102.12737
19. Shared transactions require tamperproof ledgers
Blockchains are shared tamperproof ledgers of concise, deterministic transaction messages.
The graph provides the iterative collaboration and refined data and logic sharing loop.
Without the data quality of a knowledge graph, blockchains are garbage in/garbage out.
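Why a shared ledger is tamper-evident can be shown in a few lines: each entry commits to the hash of the previous one, so rewriting any past entry invalidates everything after it. This is a deliberately simplified sketch (no consensus, no signatures); the entry format and sample messages are invented for illustration.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic hash of an entry's content."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(chain: list, message: str) -> None:
    """Add a message that commits to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"prev": prev, "message": message}
    entry["hash"] = entry_hash({"prev": prev, "message": message})
    chain.append(entry)

def verify(chain: list) -> bool:
    """Recompute every hash; any rewrite of history breaks the chain."""
    prev = "0" * 64
    for e in chain:
        if e["prev"] != prev or e["hash"] != entry_hash(
            {"prev": e["prev"], "message": e["message"]}
        ):
            return False
        prev = e["hash"]
    return True

ledger: list = []
append(ledger, "ship 100 casks")
append(ledger, "receive 100 casks")
print(verify(ledger))  # True: consistent history verifies

ledger[0]["message"] = "ship 90 casks"  # attempt to rewrite history
print(verify(ledger))  # False: the tampering is detected
```

Note the slide's caveat still applies: the chain only proves the messages weren't altered after the fact, not that they were true when written — that data-quality problem is what the knowledge graph addresses.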
21. Data ownership and control is becoming a major bone of contention
“Every time you drive (a post-2017 Tesla), it records the whole track of where you drive, the GPS coordinates and certain other metrics for every mile driven.
“They say that they are anonymizing the trigger results, but you could probably match everything to a single person if you wanted to.”
– Anonymous reverse engineer of Tesla data, as quoted by Mark Harris in IEEE Spectrum, Aug 2022
22. Self-sovereign identity = personal or B2B data ownership/control
Markus Sabadello, “Decentralized Identifiers (DIDs),” W3C Workshop on Privacy and Linked Data, Vienna, 2018
Status quo: Amazon controls the user agreements, the data, and how it’s stored.
With SSI: the user controls PII and grants permission and access; PII stays in place.
PII = Personally Identifiable Information
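The data structure behind self-sovereign identity is the W3C DID document. The sketch below uses the `did:example` placeholder method from the spec's own examples; the key material and service endpoint are illustrative values, not a real identity.

```python
# Minimal sketch of a W3C DID document. The subject, not a platform,
# controls the identifier and its key material; verifiers check
# signatures against it without calling a central identity provider.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:123456789abcdefghi",
    # Public key the subject controls (placeholder value):
    "verificationMethod": [{
        "id": "did:example:123456789abcdefghi#key-1",
        "type": "Ed25519VerificationKey2020",
        "controller": "did:example:123456789abcdefghi",
        "publicKeyMultibase": "zPlaceholderKeyValue",
    }],
    # Where to interact with the subject; the PII itself can stay in
    # user-controlled storage (e.g., a Solid pod) behind this endpoint:
    "service": [{
        "id": "did:example:123456789abcdefghi#storage",
        "type": "PersonalDataStore",
        "serviceEndpoint": "https://storage.example/alice",
    }],
}

# The controller of the keys is the DID subject itself:
print(did_document["verificationMethod"][0]["controller"])
```

This is the structural inversion the slide describes: access is granted by the subject via keys and endpoints the subject controls, rather than by a platform that holds the agreements and the data.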
23. Content addressing = rich, end-to-end encrypted identities for represented entities
https://commons.wikimedia.org/wiki/File:Identity-concept.svg
Representation, linking and encryption are all automated and built into P2P data networks.
You choose whether or not to share your content-addressed graph with others, and if so, how.
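Content addressing — and the data deduplication that falls out of it — can be sketched with a plain hash. Real P2P networks like IPFS use multihash-encoded CIDs rather than a bare SHA-256 hex digest; this is a simplified illustration with invented sample data.

```python
import hashlib

def content_address(data: bytes) -> str:
    """The identifier IS a digest of the content itself."""
    return hashlib.sha256(data).hexdigest()

store: dict = {}

def put(data: bytes) -> str:
    addr = content_address(data)
    store[addr] = data  # dedupe for free: identical bytes, identical key
    return addr

a1 = put(b"hello, data-centric world")
a2 = put(b"hello, data-centric world")  # duplicate content
a3 = put(b"hello, Data-centric world")  # one byte differs

print(a1 == a2)    # True: same content always resolves to the same address
print(a1 == a3)    # False: any change yields a new address
print(len(store))  # 2: the duplicate was stored only once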
25. Example dCloud services base infrastructure today: IPFS
IPFS = InterPlanetary File System, a P2P protocol
“In IPFS, content* is delivered from the closest peers that possess a copy of the content, removing the single-node pressure and improving the user experience.”
– zK Capital Research, “IPFS: The Interplanetary File System,” 2018
*Content infrastructure and management = data infrastructure and management.
26. The InterPlanetary File System versus HTTP
Rachael Zisk, “Lockheed and Filecoin Foundation Partner to Deploy IPFS,” Payload, May 2022
29. OriginTrail + BSI’s supply chain tracking and tracing
OriginTrail and the British Standards Institution (BSI), https://twitter.com/origin_trail/status/1339606640887152642?s=20, Dec. 2020
The Monasteriven whiskey produced in Ireland is tracked and traced from “grain to glass” with the OriginTrail.io approach.
OriginTrail uses a decentralized knowledge graph that connects to one of several different blockchains.
This method enables shared data reuse and other synergies across the supply chain.
30. SOLID: Federated storage and decentralized apps
Ruben Verborgh, “Decentralizing personal data management with Solid: a hands-on workshop,” SEMIC Workshop, October 2020
31. SOLID shared, federated XaaS: Construction industry
“TrinPod™: World’s first conceptually indexed space-time digital twin using Solid,” Graphmetrix, 2022, https://graphmetrix.com/trinpod
Company-specific SOLID storage pods and access control can be managed by each supply chain partner. Graphmetrix, as the digital twin provider, manages the system and system-level apps.
32. Digital twins and agents: Better data sharing than APIs?
Diagram elements: autonomous agents, digital twins, and sensor nets. Locale: Portsmouth, UK.
Iotics, 2019 and 2023
34. JP Morgan Chase creates a different lake for each product domain
Raj Grover of Transform Partner and AWS, 2023
The claim is that the data mesh is the means to secure, FAIR data.
36. Example of ChatGPT being led astray by a clever user
Mike Igartúa (u/mikeigartua) on Reddit
37. In December, tech Q&A site Stack Overflow banned ChatGPT
“Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers.”
– “Temporary policy: ChatGPT is banned,” Stack Overflow, December 2022
38. A data management guru’s assessment of ChatGPT
“Don’t get me wrong, the technology is great in theory and I can see many wonderful use cases for it. But if we are not VERY VERY careful we will end up with the ens***tening of knowledge.”
– Daragh O’Brien, Managing Director of Castlebridge and Irish Computer Society Fellow
39. Reaction to OpenAI’s success with ChatGPT
Google has invested about $300mn in artificial intelligence start-up Anthropic, making it the latest tech giant to throw its money and computing power behind a new generation of companies trying to claim a place in the booming field of “generative AI”.
– Financial Times, 3 Feb 2023, https://www.ft.com/content/583ead66-467c-4bd5-84d0-ed5df7b5bf9c
40. Today, humans in the ChatGPT quality loop are labelers
Renu Khandelwal, “A Basic Understanding of the ChatGPT Model,” December 2022, https://arshren.medium.com/a-basic-understanding-of-the-chatgpt-model-92aba741eea1
41. Semantic properties in biochemistry are at the atomic layer?
Large language models (LLMs) are helping biochemists discover new protein sequences. Syntax helps identify chemically valid molecules at a high level.
But semantics describes emergent properties, i.e., what atoms are present and how they are connected to each other.
At left, three molecules with the same chemical formula but different semantic properties:
● Resorcinol, an antiseptic and disinfectant
● Hydroquinone, a skin-lightening agent
● Catechol, a toxic molecule
Francesca Grisoni, “Chemical language models for de novo drug design: Challenges and opportunities,” Current Opinion in Structural Biology, Volume 79, 2023, 102527, ISSN 0959-440X, https://doi.org/10.1016/j.sbi.2023.102527 (https://www.sciencedirect.com/science/article/pii/S0959440X23000015)
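The syntax-versus-semantics distinction above can be made concrete with SMILES, the standard line notation for molecules. All three diols share the formula C6H6O2; only the connectivity differs, and the SMILES strings encode exactly that. The atom counter below is a naive sketch that handles only this aromatic-ring notation, not a general SMILES parser.

```python
# Three molecules, one formula (C6H6O2), three different connectivities.
# The SMILES strings differ only in where the second -OH sits on the ring.
molecules = {
    "catechol":     "Oc1ccccc1O",    # benzene-1,2-diol (toxic)
    "resorcinol":   "Oc1cccc(O)c1",  # benzene-1,3-diol (antiseptic)
    "hydroquinone": "Oc1ccc(O)cc1",  # benzene-1,4-diol (skin lightener)
}

def heavy_atom_counts(smiles: str) -> tuple:
    """Naive count of carbons and oxygens; hydrogens are implicit in SMILES."""
    lowered = smiles.lower()
    return (lowered.count("c"), lowered.count("o"))

counts = {name: heavy_atom_counts(s) for name, s in molecules.items()}
print(counts)                         # every molecule: 6 carbons, 2 oxygens
print(len(set(molecules.values())))   # 3: three distinct connectivities
```

Syntax alone (the shared formula) can't distinguish a disinfectant from a toxin; the semantics — which atom is bonded to which — can, which is the slide's argument for why language models over molecules need more than surface validity.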
43. Humans-in-the-loop = second-order cybernetics: Involving users and SMEs to create context with the help of machines
First order: the engineer stands outside the box.
Second order: users and domain experts are inside the box.
Stewart Brand, et al., Co-Evolution Quarterly, 1976
44. Seven obstacles to adoption of decentralized, interorganizational environments
45. Q&A
Feel free to ping me anytime with questions, etc.
Alan Morrison
Data Science Central
LinkedIn | Twitter | Quora | Slideshare
+1 408 205 5109
a.s.morrison@gmail.com