Data engineering at the interface of art and analytics: the why, what, and ho...Data Con LA
Abstract:- Netflix has a growing presence in Hollywood, with technical teams working on everything from high-speed video editing pipelines to machine learning methods for categorizing films. Data is foundational across these efforts and in this talk Josh will take a tour through why we invest so much in data about content, what data engineering challenges we tackle, and the style in which we do it.
Building a Consistent Hybrid Cloud Semantic Model In DenodoDenodo
Watch full webinar here: https://bit.ly/2UhNem8
Many businesses are moving to the Cloud. This process can take many years with data spanning On-Prem and Cloud. When Denodo needs to be deployed in a Hybrid Cloud Architecture, how should one implement that?
Join this session to get a deep dive look at how to create a shared Virtual Database that exposes a consistent Semantic Model using Denodo’s Interfaces. Both On Prem and Cloud will have their own Virtual Databases.
Watch this on-demand webinar to learn:
- How to create a Semantic Data Model
- How to use Denodo Interfaces to abstract data access for the Semantic Model
- How to create a shared Virtual Database
Cloud Modernization and Data as a Service OptionDenodo
Watch: https://bit.ly/2E99UNO
The current data landscape is fragmented, not just in location but also in terms of shape and processing paradigms. Cloud has become a key component of modern architecture design. Data lakes, IoT, NoSQL, SaaS, etc. coexist with relational databases to fuel the needs of modern analytics, ML and AI. Exploring and understanding the data available within your organization is a time-consuming task. And all of this without even knowing if that data will be useful at all.
Attend this session to learn:
- How dynamic data challenges and the speed of change require a new approach to data architecture
- How logical data architecture can enable organizations to transition data to the cloud faster, with zero downtime
- How data as a service and other API management capabilities are a must in a hybrid cloud environment
Oracle, at the 2014 edition of its OpenWorld, rolled out a new public cloud database service with its DBaaS offerings, but this is just one piece of each company's technology architecture. Businesses still need to create a private cloud and discover the driver for creating it; whether it is measured service, consolidation, or rapid provisioning, finding this driver will be the initial building block. This presentation will give you insight into how a private cloud is architected, why the service catalog is the most important brick, and how to get the benefit of this upcoming era of databases.
Marta de Mesa and Jesus Gironda, from Telvent, present the possibilities of applying Big Data beyond the private sector, for example in forecasting and planning internal resources at universities.
This presentation took place at TSIUC'14, held at the Universitat Autònoma de Barcelona on 2 December 2014, under the title "Challenges in Big Data at Universities and in Research".
Difference between data warehouse and data miningmaxonlinetr
What exactly is a Data Warehouse?
Termed a special type of database, a data warehouse is used for storing large amounts of data, such as analytics, historical, or customer data, which can be leveraged to build large reports and to run data mining against it. @ http://maxonlinetraining.com/why-is-data-warehousing-online-training-important/
What is Data mining?
The process of extracting valid, previously unknown, comprehensible and actionable information from large databases and using it to make crucial business decisions.
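As a toy illustration of that definition, the sketch below (pure Python, with made-up basket data) counts item pairs that co-occur in transactions, a much-simplified version of frequent-itemset mining:

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(transactions, min_support):
    """Find item pairs bought together at least `min_support` times."""
    counts = Counter()
    for basket in transactions:
        # sorted() gives each pair a canonical order so (a, b) == (b, a)
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

baskets = [
    ["bread", "milk"],
    ["bread", "milk", "eggs"],
    ["milk", "eggs"],
    ["bread", "eggs"],
]
print(frequent_pairs(baskets, min_support=2))
```

Each of the three pairs co-occurs twice here, so all survive the support threshold; raising `min_support` to 3 would leave nothing "actionable".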
For any queries, please contact us at +1 940 440 8084 / +91 953 383 7156 TODAY to join our Online IT Training course and find out how MaxOnlineTraining.com can help you embark on an exciting and lucrative IT career.
Visit www.maxonlinetraining.com
For a complete course overview and to book, visit https://goo.gl/QbTVal
In this webinar, Michael Nash of BoldRadius explores the Typesafe Reactive Platform.
The Typesafe Reactive Platform is a suite of technologies and tools that support the creation of reactive applications, that is, applications that handle the kind of responsiveness requirements, data volume, and user load that were out of practical reach only a few years ago.
From analysis of the human genome to wearable technology to communications at a massive scale, BoldRadius has the premier team of experts with decades of collective experience in designing and building these types of applications, and in helping teams adopt these tools.
Gluent Extending Enterprise Applications with Hadoopgluent.
This presentation shows how to transparently extend enterprise applications with the power of modern data platforms such as Hadoop. Application re-writing is not needed and there is no downtime when virtualizing data with Gluent.
This talk was held at the 13th meeting on Sept 23rd 2014 by Bruno Ungermann.
Conceptual overview of Hadoop based analytics, comparison between data warehouse architecture and Big Data architecture, characteristics of „schema on read“, typical Big Data use cases like customer analytics, operational analytics and EDW optimization, short software demo
This talk was held at the 13th meeting on Sept 23rd 2014 by André Vocat.
In the process of proposing a highly available, redundant and performant infrastructure for a large Swiss Telco operator, the project team has opted for Cassandra as one of the key components. The resulting platform has, after more than one year in operation, proven to be the right choice. The session will show the chosen architecture, give an insight in to the development and deployment and shows the current status of the platform which is just about to see its first upgrade.
This talk was held at the 12th meeting on July 22 2014 by Karen Zhang.
Customers in business-to-consumer (B2C) and business-to-business (B2B) markets go through a similar buying journey: need, search, evaluate, and finally order. Thus similar customer analytics approaches are applicable to both scenarios. However, companies' go-to-market strategies are usually different in B2C vs. B2B. This study discusses the unique characteristics of analytic methodologies applied in B2B vs. B2C. Two case studies will be presented to illustrate similarities and differences.
This talk was held at the 12th meeting on July 22 2014 by Romeo Kienzler.
After giving a short contextual overview of SQL-on-Hadoop projects in the ecosystem (Hive, Impala, Presto, Cascading Lingual, ...) we will hear about the latest SQL-on-Hadoop features in Big SQL. Big SQL delivers some exciting capabilities, including low-latency and high-performance queries, while maintaining backwards compatibility with Hive and HCatalog. This is achieved by an optimizer and a dedicated execution framework, which will be covered in detail. Finally, a demo of Big SQL v3.0 on a cluster in the Silicon Valley Lab (SVL) will be shown.
This talk was held at the 11th meeting on April 7 2014 by Marcel Kornacker.
Impala (impala.io) raises the bar for SQL query performance on Apache Hadoop. With Impala, you can query Hadoop data – including SELECT, JOIN, and aggregate functions – in real time to do BI-style analysis. As a result, Impala makes a Hadoop-based enterprise data hub function like an enterprise data warehouse for native Big Data.
This talk was held at the 11th meeting on April 7 2014 by Karolina Alexiou.
Analysis of big data is useless (and a lot harder to sell) when you can't measure whether the resulting insights are correct. In order to develop sophisticated data analysis methodologies tailored to your particular use case, you need to be able to figure out what works and what doesn't. It is crucial to gather data independently of your analysis (ground truth) and compare it to your results using the correct metrics, accounting for biases. The sheer volume of data means that you also need a strategy for slicing and dicing the data to isolate the really valuable parts, and a keen eye for visualization so that you can quickly compare methodologies and support the validity of your insights to third parties.
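The comparison against ground truth that this abstract calls for can start as simply as computing precision and recall; a minimal Python sketch (the item sets are hypothetical) might look like:

```python
def precision_recall(predicted, ground_truth):
    """Compare predicted insights against independently gathered ground truth."""
    predicted, ground_truth = set(predicted), set(ground_truth)
    true_positives = len(predicted & ground_truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Items flagged by the pipeline vs. manually verified labels
p, r = precision_recall(predicted={"a", "b", "c", "d"},
                        ground_truth={"b", "c", "e"})
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.50 recall=0.67
```

Which metric matters more depends on the bias you care about: precision penalizes false alarms, recall penalizes missed insights.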
This talk was held at the 10th meeting on February 3rd 2014 by Daniel Fasel.
Many traditional Swiss companies, such as banks, insurance companies and government agencies, are highly interested in Big Data and Data Science but don't know exactly what the business value of Big Data is for them. Often Big Data is misinterpreted as merely large amounts of data, and companies are unaware of the innovation behind the new technologies of Big Data and how these technologies can be profitable for them. In this presentation, I discuss sample cases that demonstrate a set of these new technologies and how they can be applied not only to large web-scale data but also to the data sets of traditional companies. First, I demonstrate how multi-structured data can be indexed and searched using Autonomy. I show how quickly new analytical applications can be built, based on a real-time streaming example using Storm, Redis and Node.js. And the last demonstration shows how machine learning algorithms and visualization can be applied to improve analytics using AsterData.
This talk was held at the 10th meeting on February 3rd 2014 by Sean Owen.
Having collected Big Data, organizations are now keen on data science and “Big Learning”. Much of the focus has been on data science as exploratory analytics: offline, in the lab. However, building from that a production-ready large-scale operational analytics system remains a difficult and ad-hoc endeavor, especially when real-time answers are required. Design patterns for effective implementations are emerging, which take advantage of relaxed assumptions, adopt a new tiered "lambda" architecture, and pick the right scale-friendly algorithms to succeed. Drawing on experience from customer problems and the open source Oryx project at Cloudera, this session will provide examples of operational analytics projects in the field, and present a reference architecture and algorithm design choices for a successful implementation.
This talk was held at the 10th meeting on February 3rd 2014 by Dr. Thilo Stadelmann.
Many companies are struggling with Big Data. Some argue that Big Data is the new answer to all problems while others are more critical about it. What is common to many discussions with IT professionals is that almost everyone has a different understanding of the topic. Moreover, many enterprises find it very hard to recruit the perfect data scientist to solve Big Data problems.
In this talk we give an overview of our understanding of data science and present the driving factors for the newly established Datalab at Zurich University of Applied Sciences. The goal of the lab is to establish a sound curriculum and research agenda to prepare data scientists for the ever-increasing demand from industry, and to allow industry partners to collaborate with academia to solve problems that go beyond everyday routines.
Big data is an opportunity for communications service providers (CSPs) to create the intelligence for operating their infrastructures more efficiently, to analyze the success of their services, and to create a better personal experience for their customers.
CSPs' top executives, network and IT managers, and marketing teams are eager to exploit the large amount of information available to make better business decisions. They expect their Chief Technical Officer to provide end-to-end analytic solutions based on the data available in their IT and network infrastructure.
This presentation analyzes the complete value chain that can transform CSPs’ data to knowledge. It covers the sources of information, the data collection tools, the analytic platforms providing quick data access, and finally the business intelligence use cases with the presentation and visualization of the results and predictions.
The "Babelfish" system is built with Scala and runs in the Java Virtual Machine. For graph persistence, a neo4j database with Lucene index is used. A generic importer module reads data from various data sources and persists them in a version-aware way, using the domain model as a schema. The schema is used by our domain specific language to statically verify queries. Query results can either be in the form of graphs or tables. For the latter, an additional step uses an in-memory SQL-Database for further processing of the results. Queries in the generated DSL can be submitted via a REST interface. The server uses json4s for serialization of the results. This interface as well as the deployable war-file is generated by the web framework Scalatra.
While user tracking with WebTrends, comScore, Google Analytics etc. is a de-facto standard in the online world, tracking visitors in the real world is still fragmented. From a wide perspective, potential tracking data is produced by various sensors. With a real ‘bricks and mortar’ store, one could figure out possible sensors they could use: customer frequency counters at the doors, the cashier system, free WiFi access points, video capture, temperature, background music, smells and many more. For many of those sensors additional hardware and software would be needed, but a few sensors already have solutions available, e.g. video capturing with face or even eye recognition. The most interesting sensor data that doesn’t require additional hardware and software could be the WiFi access points. Especially given that many visitors will have WiFi enabled mobile phones. This talk demonstrates how WiFi access point log files can be used to answer different questions for a particular store.
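As a rough sketch of the idea, the Python snippet below counts distinct client MACs per hour from access-point log lines (the log format and field order are invented for illustration):

```python
from collections import defaultdict

# Hypothetical access-point log lines: "<ISO timestamp> <AP name> <client MAC>"
LOG_LINES = [
    "2014-07-22T10:05:12 ap-entrance 00:11:22:33:44:55",
    "2014-07-22T10:17:40 ap-checkout 00:11:22:33:44:55",
    "2014-07-22T10:31:02 ap-entrance 66:77:88:99:aa:bb",
    "2014-07-22T11:02:55 ap-entrance 66:77:88:99:aa:bb",
]

def unique_devices_per_hour(lines):
    """Count distinct client MACs seen in each hour bucket."""
    buckets = defaultdict(set)
    for line in lines:
        timestamp, _ap, mac = line.split()
        hour = timestamp[:13]  # e.g. "2014-07-22T10"
        buckets[hour].add(mac)
    return {hour: len(macs) for hour, macs in buckets.items()}

print(unique_devices_per_hour(LOG_LINES))
# {'2014-07-22T10': 2, '2014-07-22T11': 1}
```

The same grouping generalizes to per-AP footfall or dwell time; in practice MAC randomization on modern phones makes the raw counts an approximation.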
ParaView is an open-source graphical user interface for VTK with additional functionality including the capability to perform rendering in parallel and a client-server architecture enabling visualization and analysis to be performed on a server while being viewed and driven from a client. ParaView, like VTK, is open-sourced under a BSD license and its development is overseen by the commercial entity, Kitware, Inc. ParaView is multi-platform, extensible via its plugin architecture, and natively supports many common data analysis tasks and data formats. As it builds upon VTK, any VTK functionality can in principle be invoked. In practice not all VTK functionality is exposed by default but can easily be exposed or extended via the plugin architecture previously mentioned and discussed in more detail below. Exposing VTK functionality is as easy as writing a short XML file. In this talk I present the process of plugging into ParaView to do visualization and analysis of terabytes of data in real time.
Apache Drill [1] is a distributed system for interactive analysis of large-scale datasets, inspired by Google’s Dremel technology. It is a design goal to scale to 10,000 servers or more and to be able to process Petabytes of data and trillions of records in seconds. Since its inception in mid 2012, Apache Drill has gained widespread interest in the community. In this talk we focus on how Apache Drill enables interactive analysis and query at scale. First we walk through typical use cases and then delve into Drill's architecture, the data flow and query languages as well as data sources supported.
[1] http://incubator.apache.org/drill/
Oracle's BigData solutions consist of a number of new products and solutions to support customers looking to gain maximum business value from data sets such as weblogs, social media feeds, smart meters, sensors and other devices that generate massive volumes of data (commonly defined as ‘Big Data’) that isn’t readily accessible in enterprise data warehouses and business intelligence applications today.
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
2. Agenda
• The graph theory
• About Neo Technology
• Use cases
• Market vision
• The Neo4j technology
• Cypher, Neo4j’s « SQL »
3. The graph theory
AD 840: The horseman’s problem
The Arab mathematician and chess master al-Adli ar-Rumi solved the problem.
4. The graph theory
AD 1735: The Königsberg seven bridges problem
How to cross each bridge exactly once?
Leonhard Euler, Swiss mathematician
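Euler's argument can be checked mechanically: a connected multigraph admits a walk crossing every edge exactly once only if it has zero or two odd-degree vertices. A small Python sketch applying this test to the seven bridges:

```python
from collections import Counter

def has_euler_path(edges):
    """A connected multigraph has an Eulerian path iff it has 0 or 2
    vertices of odd degree (Euler, 1736)."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2)
    return odd in (0, 2)

# The seven bridges of Königsberg between land masses A, B, C, D
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]
print(has_euler_path(bridges))  # False -- all four land masses have odd degree
```

This is the negative answer Euler proved in 1736, generally regarded as the founding result of graph theory.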
5. The graph theory
2013: Today’s questions
• Collaboration
• Configuration management
• Geo mapping
• Molecular interactions (biology)
• Impact analysis
• Master Data Management
• Product management
• Recommendation
• Social
6. Agenda
• The graph theory
• About Neo Technology
• Use cases
• Market vision
• The Neo4j technology
• Cypher, Neo4j’s « SQL »
7. Neo Technology (Neo4j) Corporate Overview
• Neo4j founded 2000
• Headquartered in Palo Alto, California
• Engineering headquarters in Malmö, Sweden
• Employees based in France, Germany, UK, Sweden, US, and Malaysia
• 24/7 support on global basis
• 100,000+ users
• F500 customers such as Adobe, Cisco, Deutsche Telekom, Telenor, Deutsche Post, SFR, Lockheed Martin, and others
• SI partners such as Accenture and dozens of local SI boutiques
• Technology partners such as VMware, Informatica and Microsoft
• Leader in the Graph Database arena
• Mission: Help the world to make sense of data
8. Agenda
• The graph theory
• About Neo Technology
• Use cases
• Market vision
• The Neo4j technology
• Cypher, Neo4j’s « SQL »
9. Case study: Social network (Viadeo)
Company
- Worldwide company
- 45 million users, +30,000 more each day
- Owner of the social networks ApnaCircle (India) and Tianji (China)
Problem definition
- Viadeo integrated Neo4j as their backend database to store all of their users and relationships. When the network expanded to a level their traditional MySQL database couldn’t handle, Viadeo experienced performance and storage issues that could not keep up with the rate at which the company was growing.
Solution
- By integrating Neo4j, Viadeo greatly accelerated their system in two ways: Neo4j increased Viadeo’s performance by requiring less storage space and less time to restructure the graph.
Benefits & time frame
- Real-time recommendations with Neo4j
- Project time frame: 8 weeks
10. Case study: Web/ISV - social collaboration (Adobe)
Company
- Worldwide leader in networking for the Internet
Problem definition
- Massive amounts of data tied to members, user groups, member content, etc., all interconnected
- Need to infer collaborative relationships based on user-generated content
Solution
- Clustered Neo4j Enterprise architecture
- Part of a larger infrastructure solution
- Multi-region AWS deployment
- Neo4j selected in competition with a custom solution and Oracle
Benefits & time frame
- Highly flexible data analysis
- Sub-second results for large, densely-connected datasets
- User experience as a competitive advantage
- 12-month project
11. Case study: Telco (Telenor)
Company
- Leading telco provider in the Nordics
Problem definition
- Need: a reliable access-control administration system for 5 million customers, subscriptions and agreements
- Complex dependencies between groups, companies, individuals, accounts, products, subscriptions, services and agreements
- Broad and deep graphs (master customers with 1000s of customers, subscriptions & agreements)
Solution
- Neo4j Enterprise solution
- Embedded + HA
- Replacing 10-year-old Oracle, Berkeley DB and a mainframe environment
Benefits & time frame
- Flexible and dynamic architecture
- Exceptional performance
- Low cost compared to alternatives
- Extensible data model supports new applications and features
12. Case study: Sales account management (Cisco)
Company
- Worldwide leader in network infrastructure
- Large sales organization
Problem definition
- Intricate rules governing ownership of sales accounts
- Complex rules for sales commissions
- Queries complicated to structure with an RDBMS
- Oracle performance not good enough for online account management
Solution
- 2x highly available Neo4j clusters
- One live cluster and one backup / hot-spare cluster at a different datacenter
- Total: 6 embedded Enterprise Neo4j DBs
Benefits & time frame
- Real-time overview of sales accounts and owners
- The ability to model complex rules for account ownership
- Direct commission computation through the entire sales organization
- >12-month development and rollout
13. Use case – What’s in common?
Alice
ACME
ACME
EMEA
Bob
Retail Co.
FooBar Inc.
Sales Rep
Sales Rep
Worked For
Worked For
Sold To
14. Use case – What’s the best path?
Retail Co.
Bob
ACME
Steve
Jane
Liza
Pauline
William
Sales Rep
VP
CMO
Sales Rep
VP
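The "best path" question posed on the slide above is classically answered with breadth-first search, which finds the fewest-hops route between two people. A minimal Python sketch (the "who knows whom" edges below are invented for illustration):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: returns the fewest-hops path, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbour in graph.get(path[-1], []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Hypothetical "who knows whom" network between the two companies
knows = {
    "Bob": ["Steve", "Jane"],
    "Steve": ["Liza"],
    "Jane": ["Pauline"],
    "Pauline": ["William"],
    "Liza": ["William"],
}
print(shortest_path(knows, "Bob", "William"))
# ['Bob', 'Steve', 'Liza', 'William']
```

A graph database runs the same kind of traversal natively; in Cypher this would be a `shortestPath` pattern rather than hand-written BFS.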
29. Neo4j characteristics
• Fully ACID
– Including XA-compliant distributed two-phase commits
• High availability / read scaling through master-slave replication with master failover
• In-memory speeds with warm caches while maintaining full ACID
• Cypher query language and Java APIs
30. Neo4j characteristics
• Full ACID transactions
– XA-compliant distributed two-phase commits
• High availability / scalability*
– Master-slave replication with master failover
– * reads
• High in-memory performance
– Advanced full-ACID caches
• Query languages
– Cypher
– Java APIs
– JDBC
– REST API
– Ruby
31. Agenda
• The gaph theory
• About Neo Technology
• Uses cases
• Vision du marché
• The Neo4j Technology
• Cypher the Neo4j’s « SQL »
35. A --> B --> C
A B C
Cypher, Neo4j’s « SQL »
You can traverse the graph
36. A -[*]-> B
A B
A B
A B
Cypher, Neo4j’s « SQL »
You can dynamically traverse the graph
37. Cypher, Neo4j’s « SQL »
The friend of friend query
START john=node:node_auto_index(name = 'John')
MATCH john-[:friend]->()-[:friend]->fof
RETURN john, fof
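For readers without a Neo4j instance at hand, the same friend-of-friend pattern can be sketched in plain Python over an adjacency map (the names and data are illustrative; unlike the Cypher pattern above, this version also filters out direct friends):

```python
def friends_of_friends(friends, person):
    """People exactly two 'friend' hops away, excluding the person
    and their direct friends."""
    direct = set(friends.get(person, []))
    fof = set()
    for friend in direct:
        fof.update(friends.get(friend, []))
    return fof - direct - {person}

friends = {
    "John": ["Sara", "Joe"],
    "Sara": ["Maria"],
    "Joe": ["Steve", "John"],
}
print(sorted(friends_of_friends(friends, "John")))  # ['Maria', 'Steve']
```

The point of the graph database is that it evaluates such multi-hop patterns over persisted, indexed data, without loading the whole network into application memory.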
38. Thank you
Let’s move forward together !
Cédric Fauvet, your contact in France and Switzerland
E-mail : Cedric.fauvet@neotechnology.com
French speaking Twitter : @Neo4jFr
French speaking community : meetup.com/graphdb-france
Editor's Notes
Social networks, recommendation engines, business intelligence, geospatial applications, MDM, network and systems management, product catalogues, web analytics, indexing your slow RDBMS