A one-minute version of my talk from Connected Data London on graph data models versus mental representations, presented at the "Minute Madness" session at ISWC 2019.
Presentation for the Knowledge Graph Conference 2021
Abstract: Show me your schemas, and I will show you a graph! Although graph databases have become very popular in the enterprise, deep expertise in graphs is still in short supply (see "Building an Enterprise Knowledge Graph @Uber: Lessons from Reality" from KGC 2019). Developers often think of graphs as a completely different kind of thing from the rest of their company's data, and will go to great lengths to force their data into a "graph" shape. The amount of manual effort involved in building and maintaining ETL pipelines can become a bottleneck and a maintenance burden. In fact, there is usually a rich domain data model of entities, relationships, and properties which is already implicit in the company's existing schemas, be they interface descriptions for microservices, relational schemas, or various other kinds of storage schemas. Taking advantage of these schemas, and mapping conforming data into the graph, ought to require relatively little extra work, but developers need appropriate tools. In this presentation, we will illustrate such mappings with real-world examples from Uber, as well as introducing formal techniques for schema and data migration. We will also look ahead to the emerging GQL standard as the foundation for a new generation of highly interoperable graph database tools.
Presentation for Data Day Texas, 6/13/2022
Abstract: If you have ever built an enterprise knowledge graph, you know that heterogeneity comes at a cost. The more complex the interfaces to the graph become – more domain data models, more data representation languages and data exchange formats, more programming languages in which applications and ETL code are written – the more time is spent on mappings, and the harder it becomes to keep these mappings in a consistent state. At the same time, support for heterogeneity is often what motivates us to build a graph in the first place. In a previous Data Day talk, A Graph is a Graph is a Graph, I talked about a generic approach for reconciling graph and non-graph data models. The approach was later formalized as Algebraic Property Graphs and implemented in a proprietary tool which I was ultimately not permitted to release as open source software. This time around, I would like to introduce you to a new, open-source project called Hydra which expands the scope of the problem from defining composable transformations for data and schemas, to also porting those transformations between concrete programming languages, encapsulating them in developer-friendly DSLs. Learn to love typed lambda calculi, and see how weird and wonderful things get when a transformation library starts transforming itself.
An Algebraic Data Model for Graphs and Hypergraphs (Category Theory meetup, N...) — Joshua Shinavier
A presentation for the Category Theory meetup at Uber in San Francisco, November 21, 2019. A combination of previous slide shows motivating and presenting the Algebraic Property Graphs data model.
In Search of the Universal Data Model (Connected Data London 2019) — Joshua Shinavier
The document discusses the history and development of artificial intelligence over the past 70 years. It outlines some of the key milestones in AI research from the early work in the 1950s to modern advances in machine learning. While progress has been made, fully general artificial intelligence that can match or exceed human levels of intelligence is still an ongoing challenge that researchers are working to achieve.
Algebraic Property Graphs (GQL Community Update, Oct. 9, 2019) — Joshua Shinavier
Algebraic Property Graphs (APG) is a formal data model for property graphs based on algebraic data types. It was developed for data integration at Uber and formalized at Conexus AI. APG uses type theory to provide a schema and mapping language for property graphs, and enables graph transformations, the integration of non-graph datasets into graphs, and useful operations such as queries, views, and conversions between graphs and algebraic databases.
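The APG formalism itself is defined in terms of algebraic data types; as a loose illustration only (hypothetical names and structure, not the actual APG definitions or API), the idea of schema-typed graph elements with typed out/in projections can be sketched like this:

```python
from dataclasses import dataclass

# Hypothetical, simplified encoding of algebraic-property-graph elements:
# every element has a type, and edge types carry "out" and "in" projections
# that constrain the types of an edge's endpoints.

@dataclass(frozen=True)
class ElementType:
    name: str                               # e.g. "Person", "knows"
    out_type: "ElementType | None" = None   # edge source type; None for vertex types
    in_type: "ElementType | None" = None    # edge target type; None for vertex types

@dataclass(frozen=True)
class Element:
    id: str
    type: ElementType
    out: "Element | None" = None            # source element, for edges
    in_: "Element | None" = None            # target element, for edges

def well_typed(e: Element) -> bool:
    """Check an element against its type's out/in projections (schema conformance)."""
    if e.type.out_type is None:             # vertex: no projections to check
        return e.out is None and e.in_ is None
    return (e.out is not None and e.out.type == e.type.out_type
            and e.in_ is not None and e.in_.type == e.type.in_type)

person = ElementType("Person")
knows = ElementType("knows", out_type=person, in_type=person)

alice = Element("v1", person)
bob = Element("v2", person)
e1 = Element("e1", knows, out=alice, in_=bob)
```

In the actual formalism these checks fall out of the type theory rather than being runtime assertions; the sketch only conveys the flavor of schema-constrained elements.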
Building an Enterprise Knowledge Graph @Uber: Lessons from Reality — Joshua Shinavier
This document summarizes Uber's experience building an enterprise knowledge graph. It notes that Uber has over 200,000 managed datasets and billions of trips served, making it an ideal testbed for a knowledge graph. However, it also outlines several lessons learned, including that real-world data is messy, an RDF-based approach is difficult, and property graphs alone are insufficient. The document advocates standardizing on shared vocabularies, fitting tools and data models to existing infrastructure, and collaborating across teams.
A Graph is a Graph is a Graph: Equivalence, Transformation, and Composition o... — Joshua Shinavier
This document provides an overview of graphs and graph data models. It discusses how graphs can be represented as categories and how different data models like property graphs, RDF, and relational models are equivalent categories. It also describes common graph transformations between these models and discusses Uber's goal of building a knowledge graph to integrate their diverse datasets.
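One of the transformations the talk discusses, from property graphs to RDF, can be sketched generically (a toy mapping with made-up example IRIs, not the talk's actual construction): because an RDF triple cannot itself carry key/value properties, an edge with properties is commonly mapped to an intermediate node.

```python
# Toy property-graph-edge -> RDF-triples mapping (hypothetical IRIs).
# An edge with properties becomes an intermediate node plus one triple per
# label, endpoint, and property -- a reification-style encoding.

EX = "http://example.org/"

def edge_to_triples(edge_id, label, src, dst, props):
    node = EX + "edge/" + edge_id
    triples = [
        (node, EX + "label", label),
        (node, EX + "out", EX + "vertex/" + src),
        (node, EX + "in", EX + "vertex/" + dst),
    ]
    for key, value in props.items():
        triples.append((node, EX + "property/" + key, value))
    return triples

triples = edge_to_triples("e1", "knows", "v1", "v2", {"since": "2019"})
```

Real mappings must also handle datatypes, identifiers, and round-tripping, which is where the category-theoretic treatment of equivalences earns its keep.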
1. The document outlines the evolution of graph schemas from early semantic web schemas like RDFS and OWL to simpler property graph schemas.
2. It discusses elements of graph schemas including entity types, relationship types, indexes, and schema imports.
3. Graph and schema management techniques are covered including schema validation, initialization, migration, and revision control.
4. Graph generation techniques are presented for capacity planning and benchmarking graphs of different sizes based on schema statistics.
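The schema-statistics-driven generation idea in item 4 can be sketched roughly as follows (hypothetical structure, not the document's actual tooling): take per-label counts observed in a real graph and scale them to synthesize a benchmark graph of a chosen size with a similar label distribution.

```python
import random

# Toy schema-statistics-driven graph generator: given observed vertex counts
# per label and edge counts per (edge label, source label, target label),
# scale by a factor and emit a random graph for capacity planning or
# benchmarking at different sizes.

def generate(vertex_stats, edge_stats, scale, seed=0):
    rng = random.Random(seed)
    vertices = {}                     # label -> list of vertex ids
    vid = 0
    for label, count in vertex_stats.items():
        vertices[label] = []
        for _ in range(int(count * scale)):
            vertices[label].append(f"v{vid}")
            vid += 1
    edges = []
    for (elabel, src_label, dst_label), count in edge_stats.items():
        for _ in range(int(count * scale)):
            edges.append((rng.choice(vertices[src_label]), elabel,
                          rng.choice(vertices[dst_label])))
    return vertices, edges

v, e = generate({"Person": 100}, {("knows", "Person", "Person"): 300}, scale=0.1)
```

A production generator would also respect cardinality constraints and degree distributions from the schema statistics, not just raw counts.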
This document provides an overview of sensors, semantic sensor networks, and the components of a semantic sensor network application. It discusses what sensors are, how to interface with them, common types of sensors and the quantities they can measure. It also describes sources of sensor data, modeling and publishing sensor data, applications that use sensor data, and the components involved, including sensor ontologies, modeling observations, and options for streaming sensor data. Finally, it demonstrates a semantic sensor network application using the Monitron device.
The document discusses the emergence of real-time interaction on the Semantic Web through the use of Semantic Web agents. It describes how agents can be used to enable real-time querying and subscription to data streams with provenance tracking. The RDFAgents specification extends standards for agent communication to support these capabilities. Examples of use cases involving question answering, query delegation, real-time data updates, and syndication are provided to illustrate how agents can facilitate real-time interaction on the Semantic Web.
The document discusses Linked Process, an Internet-scale distributed computing framework that uses the eXtensible Messaging and Presence Protocol (XMPP) for communication between nodes. It allows any XMPP-enabled device to participate in distributed computing tasks. The Linked Process specification defines how nodes can submit jobs, check job status, and interact through virtual machines. This approach aims to support a more general-purpose and open distributed computing platform than existing grid systems.
This document discusses adding semantic structure to real-time social data from Twitter through Twitter Annotations. It describes how Annotations can be mapped to existing Semantic Web vocabularies and linked to datasets to enable real-time semantic search over social and linked data. A system called TwitLogic is presented that captures Twitter data, converts it to RDF, and publishes it as linked streams to allow for continuous querying and integration with the live Semantic Web.
The document discusses capturing structured data from microblogs in real-time and publishing it as Linked Data. It describes how TwitLogic extracts semantic information like people, accounts, posts, and embedded knowledge from tweets using ontologies like FOAF and SIOC. TwitLogic then translates this data into RDF and provides a real-time RDF stream and semantic search capability. An example is shown of how tweets with hashtags or mentions could be annotated with semantic meaning. More details are given on implementation and future work like continuous queries and Twitter annotations.
This document provides an overview of the state of Linked Data and the Linking Open Data initiative. It describes key concepts like URIs, RDF, and the LOD cloud. It outlines datasets published as part of LOD and tools for mapping, indexing, searching, and navigating Linked Data. Statistics are presented on the size and structure of the LOD graph. The document concludes by discussing challenges in growing the data web and making Linked Data more usable and useful to end users.
Essentials of Automations: The Art of Triggers and Actions in FME — Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
How to Get CNIC Information System with Paksim Ga.pptx — danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Infrastructure Challenges in Scaling RAG with Custom AI models — Zilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
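The retrieval core of a RAG system can be sketched generically (toy bag-of-words "embeddings" and corpus, no specific framework or model assumed): embed the query, rank documents by similarity, and pass the top hits to the language model as context.

```python
import math

# Minimal retrieve-then-rank skeleton. A real system would use a trained
# embedding model and a vector database instead of these toy vectors.

def embed(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    vocab = sorted({w for d in corpus for w in d.lower().split()})
    qv = embed(query, vocab)
    ranked = sorted(corpus, key=lambda d: cosine(qv, embed(d, vocab)), reverse=True)
    return ranked[:k]                 # top-k documents to feed the LLM as context

corpus = ["graphs model relationships", "embeddings power retrieval",
          "retrieval augments generation"]
top = retrieve("how does retrieval work", corpus, k=1)
```

The production challenges the talk describes (retrieval quality, response synthesis, evaluation, serving) all sit on top of this basic loop.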
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... — SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
TrustArc Webinar - 2024 Global Privacy Survey — TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAU — panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you certainly want to stay within your budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also approaches that can lead to unnecessary expenses, for example using a person document instead of a mail-in for shared mailboxes. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and know-how to keep track. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Practical examples and best practices to apply immediately
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... — Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Climate Impact of Software Testing at Nordic Testing Days — Kari Kakkonen
My slides from Nordic Testing Days, June 6, 2024.
The talk discusses the climate impact and sustainability of software testing. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint: a positive impact on the climate. Sustainability can be added to the quality characteristics and then measured continuously. Test environments can be used less often, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests, and test automation can be used to speed up testing.
GraphRAG for Life Science to increase LLM accuracy — Tomaz Bratanic
GraphRAG for the life-science domain, where you retrieve information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 — Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.