The document announces a GraphTalks event in Hamburg in March 2017 hosted by Neo Technology. It includes an agenda with sessions on graph databases and Neo4j, semantic data management, and an open networking session.
Virtual Sandbox for Data Scientists at Enterprise Scale (Denodo)
View the full webinar here: https://goo.gl/rMQEQK
The Virtual Sandbox is an overarching framework to support the enterprise-scale roll out of data science programs using the industry standard, CRISP-DM methodology.
Attend this session to learn how the Virtual Sandbox optimizes analytical model generation, testing, deployment and subsequent refinement by:
• Easing data access for exploration and mash-ups via a governed, self-service data access platform.
• Supporting the creation of logical views using data virtualization for reuse across the organization.
• Facilitating quick and repeatable generation of data sets for analytical model testing and refinement.
• Hastening model deployment by operationalizing the model using shared development pipelines.
Agenda:
• Review the challenges faced by enterprise-scale data science programs.
• Overview of the Virtual Sandbox and its benefits.
• Product Demonstration.
• Q&A
• A strong relationship with the founder of Data Vault for over 3 years.
• Supporting your business with 40+ certified consultants.
• Incorporated as the preferred Enterprise Data Warehouse modelling paradigm in the Logica BI Framework.
• Satisfied customers in many countries and industry sectors.
Big Data Expo 2015 - Barnsten: Why Data Modelling is Essential (BigDataExpo)
Learn tips and tricks for handling Data Modeling in your Big Data environment. Mark will show how modeling adds value to the business and how to make your Big Data landscape transparent across the organization.
You will see the latest modeling techniques for Big Data and different types of modeling notations, and you will learn how to integrate Data Modeling into your BI environment.
Data Lake Acceleration vs. Data Virtualization - What’s the Difference? (Denodo)
Watch full webinar here: https://bit.ly/3hgOSwm
Data Lake technologies have been in constant evolution in recent years, with each iteration promising to fix what previous ones failed to accomplish. Several data lake engines are hitting the market with better ingestion, governance, and acceleration capabilities that aim to create the ultimate data repository. But isn't that the promise of a logical architecture with data virtualization too? So, what’s the difference between the two technologies? Are they friends or foes? This session will explore the details.
Bridging the Last Mile: Getting Data to the People Who Need It (APAC) (Denodo)
Watch full webinar here: https://bit.ly/34iCruM
Many organizations are embarking on strategically important journeys to embrace data and analytics. The goal can be to improve internal efficiencies, improve the customer experience, drive new business models and revenue streams, or – in the public sector – provide better services. All of these goals require empowering employees to act on data and analytics and to make data-driven decisions. However, getting data – the right data at the right time – to these employees is a huge challenge and traditional technologies and data architectures are simply not up to this task. This webinar will look at how organizations are using Data Virtualization to quickly and efficiently get data to the people that need it.
Attend this session to learn:
- The challenges organizations face when trying to get data to the business users in a timely manner
- How Data Virtualization can accelerate time-to-value for an organization’s data assets
- Examples of leading companies that used data virtualization to get the right data to the users at the right time
Parallel In-Memory Processing and Data Virtualization Redefine Analytics Architectures (Denodo)
To watch full webinar, follow this link: https://goo.gl/3s9hRG
The tide is changing for analytics architectures. Traditional approaches, from the data warehouse to the data lake, implicitly assume that all relevant data can be stored in a single, centralized repository. But this approach is slow and expensive, and sometimes not even feasible: some data sources are too big to be replicated, and data is often too distributed, as with cloud data sources, for a “full centralization” strategy to succeed.
Attend this webinar to learn:
• Why Logical architectures are the best option when integrating Big Data.
• How Denodo’s parallel in-memory capabilities with dynamic query optimization redefine analytics architectures.
• How IT can meet business demands for data much faster with Data Virtualization.
Agenda:
• Challenges with traditional approaches for analytics architectures.
• Overview of Denodo's parallel in-memory capabilities.
• Product Demo of parallel in-memory capabilities accelerating analytics performance.
• Q&A.
To watch all webinars in Denodo's Packed Lunch Webinar Series, follow this link: https://goo.gl/4xL9wM
Machine Learning meets Granular Computing: the emergence of granular models in the Big Data era
** Presentation Slides from Dr Rafael Falcon, from Larus Technologies, for the February 2018 Ottawa Machine Learning & Artificial Intelligence Meetup
Abstract
Traditional Machine Learning (ML) models are unable to effectively cope with the challenges posed by the many V’s (volume, velocity, variety, etc.) characterizing the Big Data phenomenon. This has triggered the need to revisit the underlying principles and assumptions ML stands upon. Dimensionality reduction, feature/instance selection, increased computational power and parallel/distributed algorithm implementations are well-known approaches to deal with these large volumes of data.
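To make one of the "well-known approaches" above concrete, here is a minimal illustrative sketch (not from the talk) of feature selection in its simplest form: a variance threshold that drops near-constant columns from a dataset. All names are illustrative.

```python
# Illustrative sketch: variance-threshold feature selection,
# one of the standard data-reduction techniques mentioned above.

def variance(column):
    """Population variance of a list of numbers."""
    mean = sum(column) / len(column)
    return sum((x - mean) ** 2 for x in column) / len(column)

def select_features(rows, threshold=0.0):
    """Return indices of columns whose variance exceeds `threshold`."""
    columns = list(zip(*rows))  # transpose row-major data into columns
    return [i for i, col in enumerate(columns) if variance(col) > threshold]

# A toy dataset: column 1 is constant and carries no information.
data = [
    [1.0, 5.0, 2.0],
    [2.0, 5.0, 8.0],
    [3.0, 5.0, 4.0],
]
print(select_features(data))  # the constant column is dropped
```

Real pipelines would use a library implementation and richer criteria (mutual information, model-based importance), but the principle is the same: shrink the feature space before modeling.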
In this talk we will introduce Granular Computing (GrC), a vibrant research discipline devoted to the design of high-level information granules and their inference frameworks. By adopting more symbolic constructs such as sets, intervals or similarity classes to describe numerical data, GrC has paved the way for a more human-centric manner of interacting with and reasoning about the real world. We will go over several granular models that address common ML tasks such as classification/clustering and will outline a methodology to appropriately design information granules for the problem at hand. Though not a mainstream concept yet, GrC is a promising direction for ML systems to harness Big Data.
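The granulation idea described above can be sketched in a few lines: replace raw numeric readings with interval "granules" by merging values that lie within a chosen similarity radius. This is a deliberately minimal illustration of the concept, not an algorithm from the GrC literature; the function and parameter names are assumptions.

```python
# Minimal sketch of information granulation: group similar numeric
# readings into [lo, hi] interval granules.

def build_interval_granules(values, eps):
    """Group sorted values into (lo, hi) intervals; a new interval starts
    whenever the gap to the previous value exceeds `eps`."""
    ordered = sorted(values)
    granules = []
    lo = hi = ordered[0]
    for v in ordered[1:]:
        if v - hi <= eps:
            hi = v  # still similar: grow the current granule
        else:
            granules.append((lo, hi))
            lo = hi = v
    granules.append((lo, hi))
    return granules

readings = [0.1, 0.2, 0.25, 5.0, 5.1, 9.8]
print(build_interval_granules(readings, eps=0.5))
# three granules: (0.1, 0.25), (5.0, 5.1), (9.8, 9.8)
```

Downstream models then reason over a handful of human-readable intervals instead of every raw data point, which is the "human-centric" abstraction the abstract refers to.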
Implementing Data Virtualization for Data Warehouses and Master Data Management (Denodo)
The ongoing evolution of business requirements and growth of data volumes continue to put added challenges on existing DW and MDM implementations, challenges that in many cases cannot be met. Data Virtualization complements existing DW, MDM and other architectures and business initiatives, providing the agility and flexibility, at a lower cost, for the enablement of Virtual MDM, self-service BI, operational BI, rapid prototyping and real-time analytics.
More information and FREE registrations for this webinar: http://goo.gl/asYztF
Landing page for the entire Packed Lunch webinar series: http://goo.gl/NATMHw
Attend & get unique insights into:
- How Data Virtualization can provide a simple and low-cost alternative to traditional DW and MDM solutions
- How Data Virtualization can enhance and extend existing DW or MDM solutions to provide a more agile data integration architecture
- Case studies that demonstrate how Data Virtualization has increased agility to meet complex information needs
Multi-Cloud Data Integration with Data Virtualization (Denodo)
Watch full webinar here: https://bit.ly/3bcwEaS
According to a recent Gartner study, 81% of organizations use cloud services from two or more providers to achieve high flexibility and the best possible performance. This results in a complex infrastructure that makes it harder both to find data and to access it.
Data virtualization provides a dedicated layer for data discovery and data access. A “multi-location architecture” gives users comprehensive, managed access to data, regardless of whether the data resides in a data center or in any cloud. At the same time, data owners retain local control over their data, and local data protection regulations (e.g. GDPR) are upheld.
Key takeaways of this webinar:
- Challenges in adopting multi-cloud data strategies
- How the Denodo Platform provides a managed data access layer for the entire organization
- Multi-location architectures that enable both local control and real-time access to data
Next Gen Analytics: Going Beyond the Data Warehouse (Denodo)
Watch this Fast Data Strategy session with speakers: Maria Thonn, Enterprise BI Development Manager, T-Mobile & Jonathan Wisgerhof, Smart Data Architect, Kadenza: https://goo.gl/J1qiLj
Your company, like most of your peers, is undoubtedly data-aware and data-driven. However, unless you embrace a modern architecture like data virtualization to deliver actionable insights from your enterprise data, the worth of your enterprise data will diminish to a fraction of its potential.
Attend this session to learn how data virtualization:
• Provides a common semantic layer for business intelligence (BI) and analytical applications
• Enables a more agile, flexible logical data warehouse
• Acts as a single virtual catalog for all enterprise data sources including data lakes
Watch full webinar here: https://bit.ly/2N1Ndz9
How is a logical data fabric different from a physical data fabric? What are the advantages of one type of fabric over the other? Attend this session to firm up your understanding of a logical data fabric.
Rethink Your Data Governance - POPI Act Compliance Made Easy with Data Virtualization (Denodo)
Watch full webinar here: https://bit.ly/2Yc8nkc
The Protection of Personal Information Act (POPI) came into full effect in South Africa on July 1st, 2021. POPI affects how businesses that operate in South Africa collect, use and transfer data, forcing them to provide specific reasons and needs for the personal data they gather and to prove their compliance with the principles established by the regulation.
The regulation is already creating many challenges for companies, including:
- Ensuring secure access to the most current data, whether on- or off-premises
- Consistent security across all data sources
- Data access audit
- Ability to provide data lineage
This webinar aims to demonstrate how data virtualization has surfaced as a straightforward solution to many of the challenges and questions brought on by the POPI Act. It will also include a live demonstration of how easy it can be to achieve the desired level of security with data virtualization. Data virtualization is an agile, flexible data integration technology that can help organizations address the growing challenges in data governance, security, and compliance.
Join the webinar to learn more about the benefits of using data virtualization to smoothly comply with the POPI Act.
Simplifying Cloud Architectures with Data Virtualization (Denodo)
Watch here: https://bit.ly/2yxLo6f
Moving applications and data to the Cloud is a priority for many organizations. The benefits - in terms of flexibility, agility, and cost savings - are driving Cloud adoption. However, the journey to the Cloud is not as easy as many people think. The process of moving application and data to the Cloud is challenging and can entail widespread disruption across the organization if not carefully managed. Even when systems are migrated to the Cloud, the resultant hybrid or multi-Cloud architecture is more complex for users to navigate, making it harder for them to get the data that they need to do their jobs.
Data Virtualization can help organizations at all stages of their journey to the Cloud - during migration and also in the resultant hybrid or multi-Cloud architectures. Attend this webinar to learn how Data Virtualization can:
- Help organizations manage risk and minimize the disruption caused as systems are moved to the Cloud
- Provide a single point of access for data that is both on-premise and in the Cloud, making it easier for users to find and access the data that they need
- Provide a security layer to protect and manage your data when it's distributed across hybrid or multi-Cloud architectures
Maximizing Data Lake ROI with Data Virtualization: A Technical Demonstration (Denodo)
Watch full webinar here: https://bit.ly/3ohtRqm
Companies with corporate data lakes also need a strategy for how best to integrate them with their overall data fabric. To take full advantage of a data lake, data architects must determine what data belongs in the lake vs. other sources, how end users will find and connect to the data they need, and how best to leverage the processing power of the data lake. This webinar provides a deep dive into how the Denodo Platform for data virtualization enables companies to maximize their investment in their corporate data lake.
Watch this webinar on demand to learn:
- How to create a logical data fabric with Denodo
- How to leverage a data lake for MPP Acceleration and Summary Views
- How to leverage Presto with Denodo for file-based data lakes (e.g. S3, ADLS, HDFS)
Data Science Operationalization: The Journey of Enterprise AI (Denodo)
Watch full webinar here: https://bit.ly/3kVmYJl
As we move into a world driven by AI initiatives, we find ourselves facing new and diverse challenges when it comes to operationalization. Creating a solution and putting it into practice are certainly not the same. The challenges span various organizational and data facets. In many instances, data scientists may be working in silos, and connecting to the live data may not always be possible. But how does one guarantee that a model developed in a silo is still relevant to live data? How can we manage the data flow and data access across the entire AI operationalization cycle?
Watch on-demand to explore:
- The journey and challenges of the Data Scientist
- How Denodo data virtualization with data movement streamlines operationalization
- The best practices and techniques when dealing with siloed data
- How customers have used data virtualization in their data science initiatives
Data Virtualization for Compliance – Creating a Controlled Data Environment (Denodo)
CIT modernized its data architecture in response to intense regulatory scrutiny. In this presentation, they present how data virtualization is being used to drive standardization, enable cross-company data integration, and serve as a common provisioning point from which to access all authoritative sources of data.
This presentation is part of the Fast Data Strategy Conference; you can watch the video here: goo.gl/CCqUeT
Best Practices for Migrating from Denodo 6.x to 7.0 (Denodo)
Watch this Fast Data Strategy Session here: https://goo.gl/ZwVCVQ
Ready to migrate to 7.0? Attend this session to learn:
• Benefits of moving from Denodo 6.x to 7.0
• Key considerations and best practices
• How Denodo Services can help with the migration effort
Working With a Real-World Dataset in Neo4j: Import and Modeling (Neo4j)
This webinar covers how to work with a real-world dataset in Neo4j, with a focus on how to build a graph from an existing dataset (in this case, a series of JSON files). We will explore how to performantly import the data into Neo4j, both for an initial import and for scaling writes in your graph application. We will demonstrate different approaches for data import (neo4j-import, LOAD CSV, and the official Neo4j drivers) and discuss when it makes sense to use each import technique. If you've ever asked these questions, then this webinar is for you!
- How do I design a property graph model for my domain?
- How do I use the official Neo4j drivers?
- How can I deal with concurrent writes to Neo4j?
- How can I import JSON into Neo4j?
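One common pattern for the JSON-import question above is to parse the records, split them into small batches, and write each batch with a parameterized `UNWIND` statement through the official Neo4j Python driver. The sketch below illustrates this pattern under stated assumptions: the `Person` label, the record shape, and the connection details are all hypothetical, not from the webinar.

```python
# Hedged sketch: batching JSON records for an UNWIND-based Neo4j import.
import json

# Parameterized Cypher: one round trip writes a whole batch of rows.
IMPORT_QUERY = """
UNWIND $rows AS row
MERGE (p:Person {id: row.id})
SET p.name = row.name
"""

def load_json_lines(text):
    """Parse newline-delimited JSON into a list of dicts."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

def batches(records, size):
    """Yield records in fixed-size chunks so each transaction stays small."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

records = load_json_lines('{"id": 1, "name": "Ada"}\n{"id": 2, "name": "Alan"}')
chunks = list(batches(records, size=1))
print(len(chunks))  # 2 single-record batches

# Against a running Neo4j instance, each batch would be written like
# (connection details are placeholders):
# from neo4j import GraphDatabase
# driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
# with driver.session() as session:
#     for chunk in chunks:
#         session.run(IMPORT_QUERY, rows=chunk)
```

Batching keeps individual transactions small, which helps when scaling writes; `MERGE` on a keyed property also makes the import idempotent if it has to be re-run.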
Enabling the Real-Time Analytical Enterprise (Hortonworks)
Combining IoT, customer experience and real-time enterprise data within Hadoop. What if you could derive real-time insights using ALL of your data? Join us for this webinar and learn how companies are combining “new” real-time data sources (e.g. IoT, social, web logs) with continuously updated enterprise data from SAP and other enterprise transactional systems, providing deep and up-to-the-second analytical insights. This presentation includes a demonstration of how this can be achieved quickly, easily and affordably by utilizing a joint solution from Attunity and Hortonworks.
Implementing Data Virtualization for Data Warehouses and Master Data Manageme...Denodo
The ongoing evolution of business requirements and growth of data volumes continue to put added challenges on existing DW and MDM implementations. Challenges that in many cases cannot be met. Data Virtualization compliments existing DW, MDM and other architectures and business initiatives, providing the agility and flexibility - at a lower cost – for the enablement of Virtual MDM, self-service BI, operational BI, rapid prototyping and real-time analytics.
More information and FREE registrations for this webinar: http://goo.gl/asYztF
Landing page for the entire Packed Lunch webinar series: http://goo.gl/NATMHw
Attend & get unique insights into:
How Data Virtualization can provide a simple and low cost alternative to traditional DW and MDM solutions
How Data Virtualization can enhance and extend existing DW or MDM solutions to provide a more agile data integration architecture
Case studies that demonstrate how Data Virtualization has increased agility to meet complex information needs
Multi-Cloud-Datenintegration mit DatenvirtualisierungDenodo
Watch full webinar here: https://bit.ly/3bcwEaS
Laut einer aktuellen Gartner Studie nutzen 81% der Organisationen Cloud-Services von zwei oder mehreren Anbietern, um eine hohe Flexibilität und bestmögliche Performance zu erzielen. Dies führt zu einer komplexen Infrastruktur, die sowohl das Auffinden von Daten als auch den Zugriff auf diese Daten erschwert.
Durch Datenvirtualisierung wird eine dezidierte Schicht für das Data-Discovery und den Datenzugriff angeboten. Durch eine „Multi-Location-Architektur“ wird den Nutzern ein umfänglicher und gemanagter Zugriff auf Daten ermöglicht. Dieser Zugriff ist unabhängig davon, ob sich die Daten in einem Rechenzentrum oder einer (beliebigen) Cloud befinden! Gleichzeitig behalten die "Data Owner" die lokale Kontrolle über ihre Daten und lokale Datenschutzbestimmungen (z.B. DSGVO) werden eingehalten.
Key Takeaways dieses Webinars:
- Herausforderungen bei der Einführung von Multi-Cloud-Datenstrategien
- Wie mit der Denodo-Plattform ein gemanagter „Data Access Layer“ für die gesamte Organisation bereitgestellt wird
- Multi-Location Architekturen, die gleichermaßen die lokale Kontrolle als auch den Echtzeitzugriff auf Daten ermöglichen
Next Gen Analytics Going Beyond Data WarehouseDenodo
Watch this Fast Data Strategy session with speakers: Maria Thonn, Enterprise BI Development Manager, T-Mobile & Jonathan Wisgerhof, Smart Data Architect, Kadenza: https://goo.gl/J1qiLj
Your company, like most of your peers, is undoubtedly data-aware and data-driven. However, unless you embrace a modern architecture like data virtualization to deliver actionable insights from your enterprise data, the worth of your enterprise data will diminish to a fraction of its potential.
Attend this session to learn how data virtualization:
• Provides a common semantic layer for business intelligence (BI) and analytical applications
• Enables a more agile, flexible logical data warehouse
• Acts as a single virtual catalog for all enterprise data sources including data lakes
Watch full webinar here: https://bit.ly/2N1Ndz9
How is a logical data fabric different from a physical data fabric? What are the advantages of one type of fabric over the other? Attend this session to firm up your understanding of a logical data fabric.
Rethink Your Data Governance - POPI Act Compliance Made Easy with Data Virtua...Denodo
Watch full webinar here: https://bit.ly/2Yc8nkc
The Protection of Personal Information Act (POPI) came into full effect in South Africa on July 1st, 2021. POPI will affect how businesses that serve in South Africa collect, use and transfer data, forcing them to provide specific reasons and needs for the personal data they gather and prove their compliance with the principles established by the regulation.
The regulation is already creating many challenges for companies, including:
- Ensuring secure access to most current data, whether on or off-premise
- Consistent security across all data sources
- Data access audit
- Ability to provide data lineage
This webinar aims to demonstrate how data virtualization has surfaced as a straight-forward solution to many of the challenges and questions brought on by the POPI Act. It will also include a live demonstration of how easy it can be to achieve the desired level of security with data virtualization. Data virtualization is an agile, flexible data integration technology that can help organizations address the growing challenges in data governance, security, and compliance.
Join the webinar to learn more about the benefits of using data virtualization to smoothly comply with the POPI Act.
Simplifying Cloud Architectures with Data VirtualizationDenodo
Watch here: https://bit.ly/2yxLo6f
Moving applications and data to the Cloud is a priority for many organizations. The benefits - in terms of flexibility, agility, and cost savings - are driving Cloud adoption. However, the journey to the Cloud is not as easy as many people think. The process of moving application and data to the Cloud is challenging and can entail widespread disruption across the organization if not carefully managed. Even when systems are migrated to the Cloud, the resultant hybrid or multi-Cloud architecture is more complex for users to navigate, making it harder for them to get the data that they need to do their jobs.
Data Virtualization can help organizations at all stages of their journey to the Cloud - during migration and also in the resultant hybrid or multi-Cloud architectures. Attend this webinar to learn how Data Virtualization can:
- Help organizations manage risk and minimize the disruption caused as systems are moved to the Cloud
- Provide a single point of access for data that is both on-premise and in the Cloud, making it easier for users to find and access the data that they need
- Provide a security layer to protect and manage your data when it's distributed across hybrid or multi-Cloud architectures
Maximizing Data Lake ROI with Data Virtualization: A Technical DemonstrationDenodo
Watch full webinar here: https://bit.ly/3ohtRqm
Companies with corporate data lakes also need a strategy for how to best integrate them with their overall data fabric. To take full advantage of a data lake, data architects must determine what data belongs in the Lake vs. other sources, how end users are going to find and connect to the data they need as well as the best way to leverage the processing power of the data lake. This webinar will provide you with a deep dive look at how the Denodo Platform for data virtualization enables companies to maximize their investment in their corporate data lake.
Watch on-demand this webinar to learn:
- How to create a logical data fabric with Denodo
- How to leverage the a data lake for MPP Acceleration and Summary Views
- How to leverage Presto with Denodo for file based data lakes (ie. S3, ADLS, HDFS, etc.)
Data Science Operationalization: The Journey of Enterprise AIDenodo
Watch full webinar here: https://bit.ly/3kVmYJl
As we move into a world driven by AI initiatives, we find ourselves facing new and diverse challenges when it comes to operationalization. Creating a solution and putting it into practice, is certainly not the same. The challenges span various organizational and data facades. In many instances, the data scientists may be working in silos and connecting to the live data may not always be possible. But how does one guarantee their developed model in a silo is still relevant to live data? How can we manage the data flow and data access across the entire AI operationalization cycle?
Watch on-demand to explore:
- The journey and challenges of the Data Scientist
- How Denodo data virtualization with data movement streamlines operationalization
- The best practices and techniques when dealing with siloed data
- How customers have used data virtualization in their data science initiatives
Data Virtualization for Compliance – Creating a Controlled Data EnvironmentDenodo
CIT modernized its data architecture in response to intense regulatory scrutiny. In this presentation, they present how data virtualization is being used to drive standardization, enable cross-company data integration, and serve as a common provisioning point from which to access all authoritative sources of data.
This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/CCqUeT.
Best Practices for Migrating from Denodo 6.x to 7.0Denodo
Watch this Fast Data Strategy Session here: https://goo.gl/ZwVCVQ
Ready to migrate to 7.0? Attend this session to learn:
• Benefits of moving from Denodo 6.x to 7.0
• Key considerations and best practices
• How Denodo Services can help with the migration effort
Working With a Real-World Dataset in Neo4j: Import and ModelingNeo4j
This webinar will cover how to work with a real world dataset in Neo4j, with a focus on how to build a graph from an existing dataset (in this case a series of JSON files). We will explore how to performantly import the data into Neo4j - both in the case of an initial import and scaling writes for your graph application. We will demonstrate different approaches for data import (neo4j-import, LOAD CSV, and using the official Neo4j drivers), and discuss when it makes to use each import technique. If you've ever asked these questions, then this webinar is for you!
- How do I design a property graph model for my domain?
- How do I use the official Neo4j drivers?
- How can I deal with concurrent writes to Neo4j?
- How can I import JSON into Neo4j?
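The scaling-writes question above is usually answered with batched, parameterized writes. Below is a minimal Python sketch assuming the official `neo4j` driver and an invented `Person` model; the actual `session.run` call needs a running database, so only the batching logic runs standalone:

```python
# Sketch: batched JSON import into Neo4j via parameterized UNWIND queries.
# Assumptions (illustrative, not from the webinar): a `Person` node model
# and a driver session obtained from the official `neo4j` Python driver.

IMPORT_QUERY = """
UNWIND $rows AS row
MERGE (p:Person {id: row.id})
SET p.name = row.name
"""

def chunked(rows, size=1000):
    """Yield fixed-size batches so each transaction stays small."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def load_people(session, rows, size=1000):
    """Write each batch in its own transaction; scales far better than
    one huge transaction or one query per record."""
    for batch in chunked(rows, size):
        session.run(IMPORT_QUERY, rows=batch)

if __name__ == "__main__":
    rows = [{"id": i, "name": f"user-{i}"} for i in range(2500)]
    print([len(b) for b in chunked(rows, 1000)])  # [1000, 1000, 500]
```

The same chunking applies whether the source is JSON files or CSV; only the query changes.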
Enabling the Real Time Analytical EnterpriseHortonworks
Combining IoT, customer experience, and real-time enterprise data within Hadoop. What if you could derive real-time insights using ALL of your data? Join us for this webinar and learn how companies are combining "new" real-time data sources (e.g., IoT, social, web logs) with continuously updated enterprise data from SAP and other enterprise transactional systems, providing deep and up-to-the-second analytical insights. This presentation includes a demonstration of how this can be achieved quickly, easily and affordably using a joint solution from Attunity and Hortonworks.
The Five Graphs of Government: How Federal Agencies can Utilize Graph TechnologyNeo4j
In this session from Neo4j Government Graphday, Philip Rathle discusses how federal agencies and contractors can utilize graphs to power their applications.
Microservices architectures are changing the way that organizations build their applications and infrastructure. Companies can now achieve new levels of scale and efficiency by disaggregating their large, monolithic applications into small, independent "microservices", each of which performs a distinct function. In this session, we'll introduce the concept of microservices, help you evaluate whether your organization is ready for them, and discuss methods for implementing these architectures.
Business model navigator - 55 business model patterns
This presentation is adapted from the working paper "The St. Gallen Business Model Navigator" by Oliver Gassmann, Karolin Frankenberger, and Michaela Csik.
DATA SCIENCE IS CATALYZING BUSINESS AND INNOVATION Elvis Muyanja
Today, data science enables companies, governments, research centres and other organisations to turn their volumes of big data into valuable, actionable insights: hidden patterns, unknown correlations, market trends, customer preferences and other useful business information. According to the McKinsey Global Institute, by 2018 the U.S. alone could face a shortage of about 190,000 data scientists and 1.5 million managers and analysts who can understand and make decisions using big data. In the coming years, data scientists will be vital to all sectors, from law and medicine to media and nonprofits. Has the African continent planned to train the next generation of data scientists it will require?
GCP - Continuous Integration and Delivery into Kubernetes with GitHub, Travis...Oleg Shalygin
Kubernetes provides an automated platform for the deployment, scaling, and operation of applications across a cluster of hosts. Complementing Kubernetes with a series of build scripts, together with Travis CI, GitHub, Artifactory, and Google Cloud Platform, we can take code from a merged pull request to a deployed environment with no manual intervention, on a highly scalable and robust infrastructure.
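Such a pipeline might be wired up with a Travis CI configuration along these lines. This is a hedged sketch only; the project ID, image name, deployment name, and credentials variable are placeholders, not details from the talk:

```yaml
# Hypothetical .travis.yml sketch: merged PR -> container image -> GKE rollout.
language: minimal
services:
  - docker
branches:
  only:
    - master          # build only merged code
script:
  - docker build -t gcr.io/example-project/app:$TRAVIS_COMMIT .
  - echo "$GCLOUD_KEY" | base64 -d > key.json
  - gcloud auth activate-service-account --key-file key.json
  - docker push gcr.io/example-project/app:$TRAVIS_COMMIT
deploy:
  provider: script
  script: kubectl set image deployment/app app=gcr.io/example-project/app:$TRAVIS_COMMIT
  on:
    branch: master
```

Tagging images with the commit hash keeps every rollout traceable back to the pull request that produced it.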
This is a quick overview of the challenges that big data and flexible-schema databases like MongoDB pose for data treatment, and of strategies to overcome them.
AWS Webcast - Sales Productivity Solutions with MicroStrategy and RedshiftAmazon Web Services
Sales Force Automation (SFA) and Customer Relationship Management (CRM) tools, such as Salesforce.com and Microsoft Dynamics CRM, are ubiquitous and provide all of the transactional capabilities required to manage a company's sales pipeline. SFA and CRM data alone, however, is limited, so combining it with information from other sources enables you to create unique and powerful insights. Combined with product and financial data, for example, you gain visibility into the relationships between geographies, sales reps, product performance, and revenue, and can ultimately optimize profits. Layer on advanced analytics to make predictions about future product sales based on seasonality and other market conditions. To unleash the full power of the CRM and dramatically increase operational performance and top-line revenue, companies are leveraging advanced analytics and data visualization to deliver new insights to the entire sales organization. Moreover, delivering these sales productivity solutions on mobile devices ensures strong adoption across every sales team. Join us in this webinar to learn how to use MicroStrategy together with Amazon Redshift to build mobile sales productivity solutions for your business.
Connecta Event: BigQuery and Data Analysis with Google Cloud PlatformConnectaDigital
Advanced data analysis and "big data" have climbed the trend lists in recent years and are now among the most highly prioritized areas in the development of new services and products for leading companies in the digital landscape.
The information that accumulates in systems as customer interactions are digitized has proven to be worth its weight in gold. It contains everything we need to know to make our business more efficient.
Since the summer of 2013, Connecta has had an established partnership with Google to help our customers transition to cloud services, including advanced data analysis. To prepare ourselves, we have spent several years building both knowledge of and experience with Google's cloud products, such as BigQuery.
BigQuery is a cloud-based analytics tool and part of Google Cloud Platform. It makes it possible to run fast queries against enormous datasets in just seconds. BigQuery and Google Cloud Platform offer ready-made solutions for setting up and maintaining the infrastructure that makes all of this possible.
At Connecta Digital Consulting's third event of the spring, we introduced our customers and partners to the concepts of data analysis and BigQuery.
The event covered the following topics:
- Big Data and Business Intelligence (BI)
- "The Google Big Data tools": success factors and how to get started
- Google Cloud Platform and how to carry out a successful cloud initiative
We presented cases and shared important lessons learned from our work with Google and our customers.
A Connections-first Approach to Supply Chain OptimizationNeo4j
Supply chain optimization is an unusual balancing act that requires finesse, skill and timely data. The key questions every supply chain must answer are:
What to buy? What factors determine your optimal product mix and set of suppliers?
How much to buy? What are the most and least popular items in any given time interval?
When to buy? Long lags in delivery timing may limit your flexibility and influence your inventory management practices.
We will illustrate an API-based solution that utilizes a Graph database platform to add demonstrable value to Supply Planning.
3 Reasons Data Virtualization Matters in Your PortfolioDenodo
Watch the full session on-demand here: https://goo.gl/upxC5W
Real-Time Analytics for Big Data, Cloud & Self-Service BI
The world of data is only becoming more distributed. Privacy, regulations, and the need for real-time decisions are challenging organizations' legacy information strategies. This webinar includes an expert panel discussion on the Logical Data Warehouse, the Universal Semantic Layer, and real-time analytics with Paul Moxon (VP of Data Architectures), Pablo Alvarez (Director of Product Management), and Alberto Pan (CTO).
Attend and learn:
• The major challenges of legacy information strategies.
• How data virtualization can help you overcome these challenges.
• Strategies for enabling agile data management and analytics.
(Big) Data Processing for Next Generation Business Value. Presented at the Leaders Building Leaders Conference, held at Union College on April 3, 2015.
https://www.ucollege.edu/academics/business-and-computer-science/leaders-building-leaders
This presentation is a semi-technical overview of big data and related use-cases, the Apache Hadoop software stack, and some example data-science / analysis models.
Workshop: Graph Application Architecture - GraphSummit ParisNeo4j
Workshop: Graph Application Architecture
Join this hands-on workshop led by Neo4j experts, who will guide you through contextual intelligence. Using a real-world dataset, we will build a graph solution step by step: from constructing the graph data model to running queries and visualizing the data. The approach is applicable to many use cases and industries.
Workshop: Innovating with Generative AI and Knowledge GraphsNeo4j
Workshop: Innovating with Generative AI and Knowledge Graphs
Look beyond the AI hype and discover practical techniques for using AI responsibly across your organization's data. Explore how knowledge graphs can increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships and LLMs to add domain-specific context and improve reasoning.
Bring your laptop and we will walk you through setting up your own generative AI stack, with practical, coded examples to get you started in minutes.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Discover the latest Neo4j innovations, including the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building applications with interconnected data and generative AI.
SOPRA STERIA - GraphRAG: pushing past the limitations of RAG through the use ...Neo4j
Romain CAMPOURCY – Solution Architect, Sopra Steria
Patrick MEYER – Group AI Architect, Sopra Steria
Retrieval-Augmented Generation (RAG) makes it possible to answer user questions about a business domain using large language models. The technique works well when the documentation is simple, but runs into limitations as soon as the sources become complex. Drawing on a project we delivered, we will present GraphRAG, a new approach that uses a generated Neo4j database to improve document understanding and information synthesis. This method outperforms the RAG approach by providing more holistic and precise answers.
ADEO - Knowledge Graphs for e-commerce: challenges and opportunities ...Neo4j
Charles Gouwy, Business Product Leader, Adeo Services (Groupe Leroy Merlin)
With their Knowledge Graph already integrated across all the purchase experiences of their e-commerce platform for more than three years, we will look at the new opportunities and challenges that their use of a graph database and the emergence of AI continue to open up for them.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphAware - Transforming policing with graph-based intelligence analysisNeo4j
Petr Matuska, Sales & Sales Engineering Lead, GraphAware
Western Australia Police Force’s adoption of Neo4j and the GraphAware Hume graph analytics platform marks a significant advancement in data-driven policing. Facing the challenges of growing volumes of valuable data scattered in disconnected silos, the organisation successfully implemented Neo4j database and Hume, consolidating data from various sources into a dynamic knowledge graph. The result was a connected view of intelligence, making it easier for analysts to solve crime faster. The partnership between Neo4j and GraphAware in this project demonstrates the transformative impact of graph technology on law enforcement’s ability to leverage growing volumes of valuable data to prevent crime and protect communities.
GraphSummit Stockholm - Neo4j - Knowledge Graphs and Product UpdatesNeo4j
David Pond, Lead Product Manager, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Shirley Bacso, Data Architect, Ingka Digital
“Linked Metadata by Design” represents the integration of the outcomes from human collaboration, starting from the design phase of data product development. This knowledge is captured in the Data Knowledge Graph. It not only enables data products to be robust and compliant but also well-understood and effectively utilized.
Your enemies use GenAI too - staying ahead of fraud with Neo4jNeo4j
Delivered by Michael Down at Gartner Data & Analytics Summit London 2024 - Your enemies use GenAI too: Staying ahead of fraud with Neo4j.
Fraudsters exploit the latest technologies like generative AI to stay undetected. Static applications can’t adapt quickly enough. Learn why you should build flexible fraud detection apps on Neo4j’s native graph database combined with advanced data science algorithms. Uncover complex fraud patterns in real-time and shut down schemes before they cause damage.
BT & Neo4j _ How Knowledge Graphs help BT deliver Digital Transformation.pptxNeo4j
Delivered by Sreenath Gopalakrishna, Director of Software Engineering at BT, and Dr Jim Webber, Chief Scientist at Neo4j, at the Gartner Data & Analytics Summit London 2024, this presentation examines how knowledge graphs and GenAI combine in real-world solutions.
BT Group has used the Neo4j Graph Database to enable impressive digital transformation programs over the last 6 years. By re-imagining their operational support systems around self-serve and data-led principles, they have substantially reduced the number of applications and the complexity of their operations. The result has been a substantial reduction in risk and costs while improving time to value, innovation, and process automation. Future innovation plans include exploring uses of EKG + Generative AI.
Workshop: Enabling GenAI Breakthroughs with Knowledge Graphs - GraphSummit MilanNeo4j
Look beyond the hype and unlock practical techniques to responsibly activate intelligence across your organization’s data with GenAI. Explore how to use knowledge graphs to increase accuracy, transparency, and explainability within generative AI systems. You’ll depart with hands-on experience combining relationships and LLMs for increased domain-specific context and enhanced reasoning.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
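To make the GraphRAG idea concrete, here is a toy sketch (not from either paper): retrieve an entity's graph neighborhood and serialize the connected facts as context for an LLM prompt. The mini knowledge graph and all entity names are invented for illustration; production systems would use a real graph store such as Neo4j or FalkorDB:

```python
# Toy GraphRAG-style retrieval: expand from a matched entity to its graph
# neighborhood, then hand the connected facts to an LLM as prompt context.
# The mini knowledge graph below is invented for illustration.
GRAPH = {
    "ACME Corp": [("acquired", "Widget Ltd"), ("headquartered_in", "Berlin")],
    "Widget Ltd": [("makes", "widgets"), ("founded_in", "1998")],
    "Berlin": [("located_in", "Germany")],
}

def neighborhood(entity, depth=2):
    """Collect (subject, relation, object) facts within `depth` hops."""
    facts, frontier = [], [entity]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for rel, obj in GRAPH.get(node, []):
                facts.append((node, rel, obj))
                next_frontier.append(obj)
        frontier = next_frontier
    return facts

def build_context(question_entities):
    """Serialize connected facts into a context string for the prompt."""
    lines = []
    for entity in question_entities:
        for s, r, o in neighborhood(entity):
            lines.append(f"{s} {r} {o}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_context(["ACME Corp"]))
```

The point of the expansion step is that a question about "ACME Corp" also surfaces facts about Widget Ltd and Berlin, which plain chunk-based retrieval would miss.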
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
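As a small, concrete illustration of XPath-driven access to XML content (independent of any particular AI tooling), Python's standard library can evaluate the structural subset of XPath; the catalog document below is invented:

```python
# Minimal XPath-style querying of XML with the Python standard library.
# ElementTree supports only a subset of XPath, but enough for structural
# queries like these. The catalog document is invented for illustration.
import xml.etree.ElementTree as ET

DOC = """
<catalog>
  <book id="b1"><title>XSLT Basics</title><price>29</price></book>
  <book id="b2"><title>Schematron in Practice</title><price>35</price></book>
</catalog>
"""

root = ET.fromstring(DOC)

# All book titles, via a structural path expression.
titles = [t.text for t in root.findall("./book/title")]

# A single element selected with an attribute predicate.
b2_title = root.find("./book[@id='b2']/title").text

print(titles)    # ['XSLT Basics', 'Schematron in Practice']
print(b2_title)  # Schematron in Practice
```

Full XPath (including the extension functions mentioned above) requires an XSLT/Schematron processor; the stdlib subset is still handy for quick refactoring scripts.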
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
UiPath Test Automation using UiPath Test Suite series, part 5DianaGray10
Welcome to part 5 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance, and an overview of the platform. You will acquire a good understanding of the phases in Communications Mining as we walk through the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
2. Neo4j GraphTalks
• 09:00-09:30 Breakfast and networking
• 09:30-10:00 Introduction to graph databases and Neo4j (Bruno Ungermann, Neo Technology)
• 10:00-11:00 Semantic Data Management: the path to a sustainable, enterprise-wide data platform (Dr. Andreas Weber, semantic PDM)
• Open end (semantic PDM, Neo Technology)
10. “We found Neo4j to be literally thousands of times faster than our prior MySQL solution, with queries that require 10-100 times less code. Today, Neo4j provides eBay with functionality that was previously impossible.”
- Volker Pacher, Senior Developer
Speed: “minutes to milliseconds” performance; queries up to 1000x faster than other tested database types.
11. Use the Right Database for the Right Job
Neo4j is designed for data relationships:
• Other NoSQL stores: discrete data
• Relational DBMS: minimally connected data
• Neo4j Graph DB: connected data, focused on data relationships
Development benefits: easy model maintenance, easy querying
Deployment benefits: ultra-high performance, minimal resource usage
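The contrast above between minimally connected and connected data can be sketched in plain Python (hypothetical data, not the Neo4j API): with relationships stored as direct references, a second-degree question is a short traversal rather than a chain of join scans.

```python
# A minimal sketch of connected data: relationships are direct references,
# so "friends of friends" is a traversal, not a series of self-joins.

FRIENDS = {  # adjacency list: person -> directly connected people
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice", "eve"},
    "dave": {"bob"},
    "eve": {"carol"},
}

def friends_of_friends(person):
    """Second-degree contacts, excluding the person and direct friends."""
    direct = FRIENDS.get(person, set())
    second = set()
    for friend in direct:
        second |= FRIENDS.get(friend, set())
    return second - direct - {person}

print(sorted(friends_of_friends("alice")))  # ['dave', 'eve']
```

In a relational schema the same question needs one extra self-join per hop, which is what makes deep traversals expensive there.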
12. Neo4j: The Graph Database Leader
Milestones, 2000-2015:
• Invented the property graph model
• First native graph DB in 24/7 production
• Contributed the first graph DB to open source
• First Global 2000 customer
• Introduced the first and only declarative query language for the property graph
• GraphConnect, the first conference for graph DBs
• Published the O’Reilly book on Graph Databases
• Extended the graph data model to the labeled property graph
• Today: 150+ customers, 50K+ monthly downloads, 500+ graph DB events worldwide
15. “Forrester estimates that over 25% of enterprises will be using graph
databases by 2017”
“Neo4j is the current market leader in graph databases.”
“Graph analysis is possibly the single most effective competitive
differentiator for organizations pursuing data-driven operations and
decisions after the design of data capture.”
IT Market Clock for Database Management Systems, 2014
https://www.gartner.com/doc/2852717/it-market-clock-database-management
TechRadar™: Enterprise DBMS, Q1 2014
http://www.forrester.com/TechRadar+Enterprise+DBMS+Q1+2014/fulltext/-/E-RES106801
Graph Databases – and Their Potential to Transform How We Capture Interdependencies (Enterprise Management Associates)
http://blogs.enterprisemanagement.com/dennisdrogseth/2013/11/06/graph-databasesand-potential-transform-capture-interdependencies/
Neo4j Leads the Graph Database Revolution
20. Adidas Shared Meta Data Service
20 Knowledge Management
Background
• Global leader in the sporting goods industry:
footwear, apparel, and hardware; 14.5 bln in sales,
53,000 people
• Multitude of products, markets, media, assets and
audiences
Business Problem
• Beset by a wide array of information silos including
data about products, markets, social media, master
data, digital assets, brand content and more
• Provide the most compelling and relevant content to
consumers
• Offering enhanced recommendations to drive
revenue
Solution and Benefits
• Save time and cost through standardized access to a
content-sharing system for internal teams, partners, and
IT units: fast, reliable, searchable, avoiding
redundancy
• Improve customer experience and increase revenue by
providing relevant content and recommendations
21. Background
• Mid-size German insurer founded in 1858
• Project executed by Delvin, a subsidiary
of die Bayerische Versicherung and an IT insurance
specialist
Business Problem
• Field sales needed easy, dynamic, 24/7 access to
policies and customer data
• Existing DB2 system unable to meet performance
and scaling demands
Solution and Benefits
• Enabled flexible searching of policies and associated
personal data
• Raised the bar on industry practices
• Delivered high performance and scalability
• Ported existing metadata easily
Die Bayerische Versicherung INSURANCE
Knowledge Management 21
22. Background
• Leading European Airline
• 100+ mln passengers
• 2+ mln tons freight per year
• 700+ aircraft
Business Problem
• Need for flexible, high-performance inflight asset
management, onboard entertainment, and BYOD
• Complex data set: CMDB, CMS, Aircraft data feed,
media library
• Maintain individual configurations for each aircraft
• Complex data model: aircraft, hardware, virtual
containers, licenses, business rules, versions,
content ...
Solution and Benefits
• Neo4j powers integrated platform that provides fast
access to all aspects needed to maintain complex
system
• Fast implementation
• Highly flexible data model enables constant evolution
Lufthansa Digital Asset Management
22 Graph Based Search, Knowledge Management
23. Background
• Toy Manufacturer, founded 80+ years ago, plastic
figurines sold in 50+ countries
• 100 Mio, 250 employees
• Production processes in different countries, such as China
• Polymer processing; children’s toys carry high
responsibility
Business Problem
• Product related data stored in many different data
stores including SAP, Navision, Laboratory
Systems, Document Systems, Powerpoint, Excel..
• Hard to find correct answers for authorities,
internal stakeholders, and parents
Solution and Benefits
• Neo4j powers integrated platform that provides
visibility across whole supply chain
• Domain Experts create and evolve data model
• Correct answers within seconds
Schleich Product Information Management
23 Knowledge Management
28. Business Problem
• Optimize walmart.com user experience
• Connect complex buyer and product data to gain
super-fast insight into customer needs and product
trends
• RDBMS couldn’t handle complex queries
Solution and Benefits
• Replaced complex batch process with real-time online
recommendations
• Built simple, real-time recommendation system with
low-latency queries
• Serve better and faster recommendations by
combining historical and session data
Background
• Founded in 1962 and based in Arkansas
• 11,000+ stores in 27 countries with walmart.com
online store
• 2M+ employees and $470 billion in annual
revenues
Walmart RETAIL
Real-Time Recommendations 28
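The real-time recommendation pattern described in this retail case can be sketched in plain Python with fabricated purchase data; a production system would run an equivalent traversal inside Neo4j rather than in application code.

```python
# A minimal sketch of the co-purchase pattern: from a product, walk to its
# buyers, then to the other products those buyers purchased, ranked by
# how often each co-occurs.

from collections import Counter

PURCHASES = {  # buyer -> products bought (fabricated data)
    "u1": {"kitchen_aid", "baking_tray", "whisk"},
    "u2": {"kitchen_aid", "whisk"},
    "u3": {"kitchen_aid", "baking_tray"},
    "u4": {"toaster"},
}

def also_bought(product, top_n=3):
    """Products most often bought together with `product`."""
    counts = Counter()
    for basket in PURCHASES.values():
        if product in basket:
            counts.update(basket - {product})
    return [p for p, _ in counts.most_common(top_n)]

print(also_bought("kitchen_aid"))  # e.g. ['baking_tray', 'whisk']
```

The "session data" mentioned on the slide would simply add more relationship types to traverse; the query shape stays the same.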
29. Background
• One of the world’s largest logistics carriers
• Projected to outgrow capacity of old system
• New parcel routing system
Single source of truth for entire network
B2C and B2B parcel tracking
Real-time routing: up to 7M parcels per day
Business Problem
• Needed 365x24x7 availability
• Peak loads of 3000+ parcels per second
• Complex and diverse software stack
• Need predictable performance, linear scalability
• Daily changes to logistics network: route from any
point to any point
Solution and Benefits
• Ideal domain fit: a logistics network is a graph
• Extreme availability, performance via clustering
• Greatly simplified routing queries vs. relational
• Flexible data model reflects real-world data variance
much better than relational
• Whiteboard-friendly model easy to understand
Accenture LOGISTICS
29 Real-Time Routing Recommendations
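"A logistics network is a graph" can be illustrated with a minimal breadth-first routing sketch in plain Python (hypothetical hubs; the real system routes millions of parcels against a far richer model):

```python
# A minimal sketch of parcel routing: the network is an adjacency structure,
# and a route is a shortest path found by breadth-first search.

from collections import deque

NETWORK = {  # hub -> directly connected hubs (fabricated topology)
    "depot_a": ["hub_1", "hub_2"],
    "hub_1": ["depot_a", "hub_3"],
    "hub_2": ["depot_a", "hub_3"],
    "hub_3": ["hub_1", "hub_2", "depot_b"],
    "depot_b": ["hub_3"],
}

def route(start, goal):
    """Shortest hop-count path from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in NETWORK.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("depot_a", "depot_b"))  # ['depot_a', 'hub_1', 'hub_3', 'depot_b']
```

Daily changes to the network become dictionary updates rather than schema migrations, which is the "flexible data model" point the slide makes.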
30. Background
• San Jose-based communications equipment giant
ranks #91 in the Global 2000 with $44B in annual
sales
• Needed real-time recommendations to encourage
knowledge base use on company’s support portal
Solution and Benefits
• Faster problem resolution for customers and
decreased reliance on support teams
• Scrape cases, solutions, articles, etc. continuously for
cross-reference links
• Provide real-time reading recommendations
• Uses Neo4j Enterprise HA cluster
Business Problem
• Reduce call-center volumes and costs via improved
online self-service quality
• Leverage large amounts of knowledge stored in
service cases, solutions, articles, forums, etc.
• Reduce resolution times and support costs
Cisco COMMUNICATIONS
Real-Time Recommendations
[Graph model: Support Cases, Messages, Solutions, and Knowledge
Base Articles cross-linked to drive recommendations]
30
31. Business Problem
• Extremely complex individual pricing calculations
• Moved from per month to per day calculation
• Former system too slow, too inflexible
Solution and Benefits
• Huge performance increase through replacement of
legacy system
• A 4-core laptop at 6% CPU usage outperformed a
3-server, 96-core configuration at 80% CPU usage
(“mind-blowing”)
• Overcame internal hurdles by using an embedded,
application-internal cache vs. a new database system
Background
• Largest hospitality company worldwide
• 4,500+ hotels, 700,000 rooms
• 15 bln eCommerce sales in 2015; #7 in IDC rating of
internet sales
Marriott Hospitality
Real-Time Recommendations 31
34. Background
• Second largest communications company
in France
• Based in Paris, part of Vivendi Group, partnering
with Vodafone
Solution and Benefits
• Flexible inventory management supports modeling,
aggregation, troubleshooting
• Single source of truth for entire network
• New apps model network via near-1:1 mapping
between graph and real world
• Schema adapts to changing needs
Network and IT Operations
SFR COMMUNICATIONS
Business Problem
• Infrastructure maintenance took weeks to plan due
to the need to model network impacts
• Needed what-if to model unplanned outages
• Identify network weaknesses to uncover need for
additional redundancy
• Info lived on 30+ systems, with daily changes
[Graph model: Routers, Switches, Services, Fiber Links, and an
Oceanfloor Cable connected via LINKED and DEPENDS_ON relationships]
34
35. Business Problem
• Original RDBMS solution could handle only 5,000
servers
• Improve network performance company-wide
• Leverage M&A legacy systems with no room
for error
Solution and Benefits
• Store UNIX server and network config in Neo4j
• Combine Splunk log data into an application
that visualizes events on the network
• Neo4j vastly improved app performance
• New apps built much faster with Neo4j than SQL
Large Investment Bank FINANCIAL SERVICES
Network and IT Operations 35
Background
• One of the world’s oldest and largest banks
• 100+ year-old bank with more than 1000
predecessor institutions
• 500,000 employees and contractors
• Needed to manage and visualize ~50,000 Unix
servers in its network
36. From Identity Access Management to Identity Relationship Management
[Diagram: People (workforce in the thousands, partners and suppliers,
customers in the millions) and things (tens of millions) connect through
endpoints (PCs, tablets, phones, wearables) to applications and data
running on-premises, in private clouds, and in public clouds]
37. Background
• Oslo-based telecom provider is #1 in the Nordic
countries and #10 in the world
• Online, mission-critical, self-serve system lets
users manage subscriptions and plans
• Availability and responsiveness are critical to
customer satisfaction
Business Problem
• Logins took minutes to retrieve relational
access rights
• Massive joins across millions of plans,
customers, admins, groups
• Nightly batch production required 9 hours and
produced stale data
Solution and Benefits
• Shifted authentication from Sybase to Neo4j
• Moved resource graph to Neo4j
• Replaced batch process with real-time login response
measured in milliseconds, delivering real-time data
vs. yesterday’s snapshot
• Mitigated customer retention risks
Identity and Access Management
Telenor COMMUNICATIONS
[Graph model: Account, Customer, User, and Subscription nodes
connected via SUBSCRIBED_BY, CONTROLLED_BY, PART_OF, and
USER_ACCESS relationships]
37
38. Background
• Top investment bank with $1+ trillion in assets
• Using a relational database and GemFire to manage
employee permissions for research documents and
application-service resources
• Permissions for new investment managers and
traders provisioned manually
Business Problem
• Lost an average of 5 days per new hire while they
waited to be granted access to hundreds of
resources, each with its own permissions
• Replace an unsuccessful onboarding process
implemented by a competitor
• Regulations left no room for error
Solution and Benefits
• Store models, groups and entitlements in Neo4j
• Exceeded performance requirements
• Major productivity advantage due to domain fit
• Graph visualization eases the permissioning process
• Fewer compromises than with relational
• Expanded Neo4j solution to online brokerage
UBS FINANCIAL SERVICES
Identity and Access Management 38
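The entitlement traversal behind these IAM cases can be sketched in plain Python (hypothetical users, groups, and resources): a user's effective access is everything reachable through nested group memberships.

```python
# A minimal sketch of graph-based entitlements: walk from the user through
# (possibly nested) group memberships, collecting every granted resource.

MEMBER_OF = {  # user/group -> groups it belongs to (fabricated data)
    "alice": ["traders"],
    "traders": ["research_readers"],
}
GRANTS = {  # group -> resources it grants
    "traders": ["order_system"],
    "research_readers": ["research_docs"],
}

def effective_access(user):
    """All resources reachable via transitive group membership."""
    resources, stack, seen = set(), [user], {user}
    while stack:
        node = stack.pop()
        resources.update(GRANTS.get(node, []))
        for group in MEMBER_OF.get(node, []):
            if group not in seen:
                seen.add(group)
                stack.append(group)
    return resources

print(sorted(effective_access("alice")))  # ['order_system', 'research_docs']
```

In a relational store the same question requires a recursive join per nesting level; as a traversal it answers in a single pass, which is why onboarding queries like UBS's map so naturally to a graph.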
40. Fraud Detection With Connected Analysis
[Chart: revolving debt vs. number of accounts, with a normal-behavior
band and an outlying fraudulent pattern]
41. Background
• Global financial services firm with trillions of dollars
in assets
• Varying compliance and governance
considerations
• Incredibly complex transaction systems, with ever-
growing opportunities for fraud
Business Problem
• Needed to spot and prevent fraud in real time,
especially in payments that fall within “normal”
behavior metrics
• Needed more accurate and faster credit risk analysis
for payment transactions
• Needed to dramatically reduce chargebacks
Solution and Benefits
• Lowered TCO by simplifying credit risk analysis and
fraud detection processes
• Identify entities and connections uniquely
• Saved billions by reducing chargebacks and fraud
• Enabled building real-time apps with non-uniform data
and no sparse tables or schema changes
London and New York Financial FINANCIAL SERVICES
Fraud Detection
41
42. Background
• Panama-based law firm Mossack Fonseca does
business in hosting “letterbox companies”
• Suspected of supporting tax avoidance and organized
crime
• Altogether: 2.6 TB, 11 mln files, 214,000 letterbox
companies
Business Problem
• Goal: unravel chains of Bank–Person–Client–Address–
Intermediary–M&F links
• Earlier cases: spreadsheet-based analysis (back-and-forth)
and pencil work to extract such connections
• This case: the sheer amount of data and arbitrary chain
lengths condemned such approaches to fail
Solution and Benefits
• 400 journalists, investigate/update/share, 2 people
with IT background
• Identify connections quickly and easily
• Fast Results wouldn‘t be possible without GraphDB
Panama Papers Fraud Detection
Fraud Detection 42
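The chain-unraveling the journalists needed can be sketched in plain Python over a fabricated miniature of the data: a connection between a bank and a letterbox company is simply a path, whatever its length.

```python
# A minimal sketch of chain-unraveling: entities (banks, people, addresses,
# intermediaries, companies) are nodes, and every simple path between two of
# them is a candidate chain worth investigating.

LINKS = {  # entity -> connected entities (fabricated example)
    "Bank X": ["Person A"],
    "Person A": ["Bank X", "Address 1", "Intermediary"],
    "Address 1": ["Person A", "Letterbox Co"],
    "Intermediary": ["Person A", "Letterbox Co"],
    "Letterbox Co": ["Address 1", "Intermediary"],
}

def all_chains(a, b, path=None):
    """Every simple chain of entities linking a to b."""
    path = (path or []) + [a]
    if a == b:
        return [path]
    chains = []
    for nxt in LINKS.get(a, []):
        if nxt not in path:  # keep the chain simple (no revisits)
            chains.extend(all_chains(nxt, b, path))
    return chains

for chain in all_chains("Bank X", "Letterbox Co"):
    print(" -> ".join(chain))
```

With a spreadsheet, each extra hop in the chain multiplies the manual cross-referencing; as a graph query, arbitrary-length chains cost one traversal.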
More concrete and closer to reality
Flexible, no fixed schema
And deriving value from data-relationships is exactly what some of the most successful companies in the world have done.
Google created perhaps the most valuable advertising system of all time on top of their search engine, which is based on relationships between web pages.
LinkedIn created perhaps the most valuable HR tool ever, based on relationships among professionals.
And this is also what PayPal did, creating a peer-to-peer transaction service based on relationships.
When it comes to shopping online, probably the most important feature is the product recommendations you make, because they will have a direct impact on your sales.
Of course, we all know Amazon has set the standard for how online recommendations work. In this example we see a user who’s looking to buy a “Kitchen Aid”. And normally you would see recommendations based on “Related Products” or something like “People who bought product X also bought product Y”.
This would be a classical retail recommendation. This is also very easy to model with a graph.
The question here, though, is whether this is a limited way of looking at recommendations, because you risk leaving out a lot of information about your user that actually affects what a good recommendation is.
…Smart TV’s.
The important thing to remember is, the more you know about your consumer, the more relevant your recommendations will be, the better the chance is that you’ll actually be able to make a sale. And this is a numbers game – and once you start doing this on scale…
When we say that networks are graphs, we mean that networks by default are entities that are connected. If you do a quick search on “network topology” you basically end up with a display of a bunch of graphs…
And if we zoom in on one of them, which seems to be a mesh network of some sort, with routers, gateways — this would be very easy to translate and model into a graph in Neo4j.
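That translation can be sketched in plain Python (hypothetical topology, not the Neo4j API): model the mesh as an adjacency structure and ask what-if questions, such as whether two endpoints stay connected when a node fails.

```python
# A minimal sketch of what-if outage analysis on a mesh network: drop a
# node and check whether two endpoints are still reachable, i.e. whether
# the network has redundancy at that point.

MESH = {  # node -> directly linked nodes (fabricated topology)
    "gw": {"r1", "r2"},
    "r1": {"gw", "r3"},
    "r2": {"gw", "r3"},
    "r3": {"r1", "r2", "server"},
    "server": {"r3"},
}

def connected(a, b, down=frozenset()):
    """Is b reachable from a if the nodes in `down` have failed?"""
    stack, seen = [a], {a}
    while stack:
        node = stack.pop()
        if node == b:
            return True
        for nxt in MESH.get(node, set()) - seen - set(down):
            seen.add(nxt)
            stack.append(nxt)
    return False

print(connected("gw", "server", down={"r1"}))  # True: redundant path via r2
print(connected("gw", "server", down={"r3"}))  # False: r3 is a single point of failure
```

This is exactly the kind of question the SFR case above needed answered in real time, over a model with daily changes.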
So let’s see what’s happening in the world of IAM.
Access Management used to be pretty straightforward. And the IAM processes used to represent a pretty simplistic world of what access meant. People accessed applications hosted on-premises, through specific devices. And in a scenario like this one, access management isn’t really that complicated.
Today, this is simply not a reality. As we discussed previously, 1) people take on several different roles, 2) and (even if you don’t think about it) they will be connected and require secure access to millions of things, they will use different types of devices with different types of dependencies, 3) and all of these individuals and roles will expect to access and use services and applications in a very granular and personalized way.
So all of this is, of course, highly interconnected.
And all these relationships have tremendous value, and your IAM processes have an enormously important role to play, from many different perspectives.
…And I think this picture shows you that what’s emerging are the incredibly rich data-relationships between people and things, and the different personas of people and things. The job of IAM is going to be to use these relationships to manage who gets access to what: whether it’s about accessing data coming from an IoT device, controlling devices remotely, a device accessing a cloud API, or a person sharing information with another person. In all these different scenarios you can provide a richer experience by leveraging the relationships between all these people and things, and be able to play out these different scenarios and ask those questions in real time.
This is what the world looks like, and it’s scaling rapidly. We’re going to reach an environment where we’ll see connected devices and people by the billions, so just imagine how many data-relationships have to be in place to make sense of all this, knowing that when devices are being connected, if they’re not properly secured, it’s a huge risk from a privacy and cyber-security point of view.
So data-relationships are going to be a key part of the future when we build IAM-systems and when managing digital identity.
And an enterprise that doesn’t appreciate and understand the full complexity of who its customers are in an environment like this will probably start faltering quite quickly.
So it’s very exciting times for IAM, and especially for graph databases within IAM. I think how we securely manage these billions of relationships between users and things, and collaborators, employees, customers and consumers is going to be one of the epic undertakings of the future.
[In this simple approach to detecting credit card fraud, it is relatively easy to spot outliers. But what if the fraudster commits fraud while still exhibiting normal behavior? Well, this is exactly how fraud rings operate.]
[A fraud ring rarely strays outside the normal behavior band. Instead they operate within normal limits and commit widespread fraud. This is very hard to detect by systems that are looking for outliers or activities outside the normal band.]
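The ring pattern can be sketched in plain Python with fabricated accounts: link accounts that share identifiers (SSN, phone, address) and flag unusually large clusters, even though each account's activity looks normal on its own.

```python
# A minimal sketch of connected fraud analysis: accounts sharing identifiers
# are merged into clusters with union-find; large clusters of identifier
# reuse stand out even when every individual account stays in-band.

from collections import defaultdict

ACCOUNTS = {  # account -> identifiers it was opened with (fabricated)
    "acct1": {"ssn_1", "phone_1"},
    "acct2": {"ssn_1", "addr_1"},
    "acct3": {"phone_1", "addr_1"},
    "acct4": {"ssn_2", "phone_2"},
}

def suspicious_clusters(min_size=3):
    """Groups of accounts linked by shared identifiers."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    by_identifier = defaultdict(list)
    for acct, idents in ACCOUNTS.items():
        for ident in idents:
            by_identifier[ident].append(acct)
    for accts in by_identifier.values():
        for other in accts[1:]:
            union(accts[0], other)

    clusters = defaultdict(set)
    for acct in ACCOUNTS:
        clusters[find(acct)].add(acct)
    return [c for c in clusters.values() if len(c) >= min_size]

print([sorted(c) for c in suspicious_clusters()])  # [['acct1', 'acct2', 'acct3']]
```

An outlier detector sees four unremarkable accounts; the connected view sees three of them woven together by shared identifiers, which is the signature of a ring.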