Vitaly Nikitin's presentation on the capabilities of the HPE IDOL platform for working with Big Data in the modern call center: audio and text analytics on the HPE IDOL platform.
Bank Struggles Along the Way for the Holy Grail of Personalization: Customer 360 – Databricks
Ceska sporitelna is one of the largest banks in Central Europe, and one of its main goals is to improve the customer experience by weaving together digital and traditional banking. The talk focuses on the real-world challenges, both technical and organizational, of moving the vision from PowerPoint slides into production:
• Implementing a Spark- and Databricks-centric analytics platform in the Azure cloud, combined with an on-prem data lake, in the EU-regulated financial environment
• Forming a new team focused on solving use cases on top of Customer 360 in a 10,000+ employee enterprise
• Demonstrating this effort on real use cases, such as client risk scoring using both offline and online data
• Spark and its MLlib as an enabler for personalized omni-channel CRM campaigns drawing on hundreds of millions of client interactions
Watch Alberto's presentation from Fast Data Strategy on-demand here: https://goo.gl/CRjYuD
In this session, we will review Denodo Platform 7.0 key capabilities.
Watch this session to learn more about:
• The vision behind the Denodo Platform
• The new data catalog and self-service features of Denodo Platform 7.0
• The new connectivity, data transformation, and enterprise-wide deployment features
This document discusses organizing data in a data lake or "data reservoir". It describes the changing data landscape with multiple platforms for different analytical workloads. It outlines issues with the current siloed approach to data integration and management. The document introduces the concept of a data reservoir - a collaborative, governed environment for rapidly producing information. Key capabilities of a data reservoir include data collection, classification, governance, refinery, consumption, and virtualization. It describes how a data reservoir uses zones to organize data at different stages and uses workflows and an information catalog to manage the information production process across the reservoir.
Using neo4j for enterprise metadata requirements – Neo4j
Metadata is everywhere, yet traditional approaches to managing it have been disparate, siloed, and often ineffective.
In this talk James will discuss the opportunities for using graph technology to address the fundamental challenges and questions of metadata management such as impact analysis, data lineage and definitions.
Data to Value is a data consultancy based in London that specialises in applying lean and agile techniques to complex data requirements. Connected Data is a particular focus for the firm, which they see as the new frontier for data leaders.
James Phare has over 15 years' experience of creating and leading data teams in various roles in financial services. Prior to co-founding Data to Value he was Head of Information Management and Data Architecture at Man Group – one of the world's largest hedge funds. James started his career at Thomson Reuters after graduating in Economics from the University of York.
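As a rough sketch of the impact-analysis idea from the talk above: in Neo4j this would be a variable-length Cypher traversal over lineage relationships, but here a plain Python dict stands in for the graph, and all asset names are invented for illustration.

```python
from collections import deque

# Hypothetical metadata graph: each key is a data asset, each value the
# list of downstream assets that consume it (edges point "downstream").
LINEAGE = {
    "source_db.orders": ["staging.orders_clean"],
    "staging.orders_clean": ["warehouse.fact_orders"],
    "warehouse.fact_orders": ["report.monthly_revenue", "ml.churn_features"],
    "report.monthly_revenue": [],
    "ml.churn_features": [],
}

def impact_of(asset, lineage):
    """Return every downstream asset affected by a change to `asset`
    (breadth-first traversal of the lineage graph)."""
    affected, queue = set(), deque([asset])
    while queue:
        for child in lineage.get(queue.popleft(), []):
            if child not in affected:
                affected.add(child)
                queue.append(child)
    return affected

downstream = impact_of("source_db.orders", LINEAGE)
```

In a graph database the same question is a one-line path query rather than application-side traversal, which is precisely the appeal for metadata management at scale.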
Jump start into 2013 by exploring how Big Data can transform your business. Listen to Infochimps Director of Product, Tim Gasper, cover the leading use cases for 2013, sharing where the data comes from, how the systems are architected and most importantly, how they drive business insights for data-driven decisions.
Maximize the Value of Your Data: Neo4j Graph Data Platform – Neo4j
In this 60-minute conversation with IDC, we will highlight the momentum and reasons why a graph data platform is a breakthrough solution for businesses in need of a flexible data model.
Please join Mohit Sagar, Group Managing Director of CIO Network, as he hosts the conversation with Dr. Christopher Lee Marshall, Associate VP at IDC, and Nik Vora, Vice President of APAC at Neo4j. During this discussion, you'll discover the insights and knowledge unlocked with the graph data platform.
8.17.11 Big Data and Hadoop with Informatica – Julianna DeLua
This presentation provides a briefing on Big Data and Hadoop and how Informatica's Big Data Integration plays a role to empower the data-centric enterprise.
Creating a Modern Data Architecture for Digital Transformation – MongoDB
By managing Data in Motion, Data at Rest, and Data in Use differently, modern Information Management Solutions are enabling a whole range of architecture and design patterns that allow enterprises to fully harness the value in data flowing through their systems. In this session we explored some of the patterns (e.g. operational data lakes, CQRS, microservices and containerisation) that enable CIOs, CDOs and senior architects to tame the data challenge, and start to use data as a cross-enterprise asset.
The document discusses how modern software architectures can help tame big data. It introduces the speakers and provides an overview of WidasConcepts. The agenda includes a discussion of how big data can help businesses, an example of big data applied in the CarbookPlus platform, and new software architectures for big data. Real-time systems and architectures like lambda architecture are presented as ways to process big data at high velocity and volume. The conclusion emphasizes that big data improves business efficiency but requires tailored implementations and new skills.
The document discusses Kasabi, a linked data marketplace that aims to make it easy to publish and use data and help people get paid for their data. It does this through cloud-based RDF storage, linked data publishing tools, search and browse capabilities for datasets, standard and custom APIs for accessing datasets instantly. The presentation demonstrates Kasabi and outlines future features like usage statistics, dataset analysis, and commercial features. Kasabi's revenue model involves fees for high-volume API usage and revenue sharing on commercial data. In summary, Kasabi is a platform for discovering, consuming, publishing and monetizing linked data.
Cloud Modernization and Data as a Service Option – Denodo
Watch: https://bit.ly/2E99UNO
The current data landscape is fragmented, not just in location but also in terms of shape and processing paradigms. Cloud has become a key component of modern architecture design. Data lakes, IoT, NoSQL, SaaS, etc. coexist with relational databases to fuel the needs of modern analytics, ML and AI. Exploring and understanding the data available within your organization is a time-consuming task. And all of this without even knowing if that data will be useful at all.
Attend this session to learn:
- How dynamic data challenges and the speed of change require a new approach to data architecture.
- How a logical data architecture can enable organizations to move data to the cloud faster, with zero downtime.
- Why data as a service and other API management capabilities are a must in a hybrid cloud environment.
This document provides an overview of big data analysis tools and methods presented by Ehsan Derakhshan of innfinision. It discusses what data and big data are, important questions about database selection, and several tools and solutions offered by innfinision including MongoDB, PyTables, Blosc, and Blaze. MongoDB is highlighted as a scalable and high performance document database. The advantages of these tools include optimized memory usage, rich queries, fast updates, and the ability to analyze and optimize queries.
This document discusses choosing the right data architecture for big data projects. It begins by acknowledging big data comes in many types, from structured transactional data to unstructured text data. It then presents several big data architectures and platforms that are suitable for different data types and use cases, such as relational databases, NoSQL databases, data grids, and distributed file systems. The document emphasizes that one size does not fit all and the right choice depends on the specific data and business needs.
The Role of the Logical Data Fabric in a Unified Platform for Modern Analytics – Denodo
Watch full webinar here: https://bit.ly/3FHKalT
Given the growing demand for analytics and the need for organizations to advance beyond dashboards to self-service analytics and more sophisticated algorithms like machine learning (ML), enterprises are moving towards a unified environment for data and analytics. What is the best approach to accomplish this unification?
In TDWI's recent Best Practice Report, Unified Platforms for Modern Analytics, written by Fern Halper, TDWI VP Research and Senior Research Director for Advanced Analytics, the adoption, use, challenges, architectures, and best practices of unified platforms for modern analytics are explored. One of the approaches for unification outlined in the report is a data fabric approach.
Join us for a webinar with our Director of Product Marketing, Robin Tandon, where he will discuss the role of the logical data fabric in a unified platform for modern analytics, focusing on several of the key findings outlined in this report. He will share insights and use case examples that demonstrate how a properly implemented logical data fabric is the most suitable approach for Unified Data Platforms across enterprises and organizations.
Watch on-demand & Learn:
- The benefits of a unified platform, its ability to capture diverse and emerging data types, and how to support high-performance, scalable solutions.
- The role of an enhanced AI-driven data catalog and its implications for the findings in the best practice report.
- Implications of a logical data fabric as it relates to several of the recommendations outlined in the report.
Partner Enablement: Key Differentiators of Denodo Platform 6.0 for the Field – Denodo
If you're a Denodo Partner, this presentation is for you. Learn how to gain a competitive edge in the marketplace with Denodo Platform 6.0, and leverage the new features and functionality.
This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/Qh8MeX.
Service generated big data and big data-as-a-service – JYOTIR MOY
This document provides an overview of service-generated big data and big data-as-a-service. It discusses three types of service-generated big data: service trace logs, service QoS information, and service relationship data. It also describes big data-as-a-service which includes big data infrastructure-as-a-service, platform-as-a-service, and analytics software-as-a-service to provide common big data services and analyze the large volumes of service data. The business opportunities of big data-as-a-service are also briefly discussed.
This presentation introduces big data characteristics and the big data processing flow and architecture, then walks through an EKG solution as an example to explain why organizations run into big data issues and how to build a big data server farm architecture. From there, you can form a more concrete view of what big data is.
Customer Event Hub – a modern Customer 360° view with DataStax Enterprise (DSE) – Guido Schmutz
Today, companies are using various channels to communicate with their customers. As a consequence, a lot of data is created, more and more of it outside the traditional IT infrastructure of an enterprise. This data often does not have a common format, and it is continuously created at ever-increasing volume. With the Internet of Things (IoT) and its sensors, the volume as well as the velocity of data becomes even more extreme.
To achieve a complete and consistent view of a customer, all this customer-related information has to be included in a 360-degree view in a real-time or near-real-time fashion. By that, the Customer Hub becomes the Customer Event Hub. It constantly shows the current view of a customer across all interaction channels and gives an enterprise the basis for a substantial and effective customer relationship.
This presentation shows the value of such a platform and how it can be implemented using DataStax Enterprise as the backend.
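As an illustrative sketch only of the folding step described above: plain Python stands in for DataStax Enterprise as the store, and the channels, actions, and customer IDs are invented.

```python
from collections import defaultdict

# Hypothetical event stream arriving from several interaction channels.
events = [
    {"customer": "c1", "channel": "web",    "action": "login",    "ts": 1},
    {"customer": "c1", "channel": "mobile", "action": "transfer", "ts": 3},
    {"customer": "c2", "channel": "branch", "action": "deposit",  "ts": 2},
    {"customer": "c1", "channel": "web",    "action": "logout",   "ts": 4},
]

def customer_360(events):
    """Fold multi-channel events into one per-customer view, ordered by time."""
    view = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        view[e["customer"]].append((e["channel"], e["action"]))
    return dict(view)

view = customer_360(events)
```

The essential point the talk makes survives even in miniature: the per-customer timeline is assembled continuously across channels, rather than reconciled in a nightly batch.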
Data Mesh in Practice: How Europe’s Leading Online Platform for Fashion Goes ... – Databricks
The Data Lake paradigm is often considered the scalable successor of the more curated Data Warehouse approach when it comes to democratization of data. However, many who went out to build a centralized Data Lake came out with a data swamp of unclear responsibilities, a lack of data ownership, and sub-par data availability.
Solution architecture for big data projects
Making ‘Big Data’ Your Ally – Using data analytics to improve compliance, due... – emermell
This document summarizes a presentation on using data analytics for compliance, due diligence, and investigations. The presentation features four speakers: Raul Saccani of Deloitte, Dave Stewart of SAS Institute, John Walsh of SightSpan, and John Walsh of SAS Institute. It discusses challenges related to big data including volume, variety, and velocity of data. It provides examples of how financial institutions have used analytics for anti-money laundering model tuning and illicit network analysis. It also outlines the analytics lifecycle and considerations for adopting a proactive analytics strategy.
Mastering MapReduce: MapReduce for Big Data Management and Analysis – Teradata Aster
Whether you’ve heard of Google’s MapReduce or not, its impact on Big Data applications, data warehousing, ETL, business intelligence, and data mining is re-shaping the market for business analytics and data processing.
Attend this session to hear from Curt Monash on the basics of the MapReduce framework, how it is used, and what implementations like SQL-MapReduce enable.
In this session you will learn:
* The basics of MapReduce, key use cases, and what SQL-MapReduce adds
* Which industries and applications are heavily using MapReduce
* Recommendations for integrating MapReduce in your own BI and data warehousing environment
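The basics in the list above can be sketched in miniature: a pure-Python, single-process version of the map, shuffle, and reduce phases. A real framework distributes each phase across many machines; this toy only shows the data flow.

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in a document."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values, here by summing the counts."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big insight", "data mining"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, docs))))
print(counts)  # {'big': 2, 'data': 2, 'insight': 1, 'mining': 1}
```

SQL-MapReduce, as discussed in the session, lets such map and reduce functions be invoked from within SQL queries instead of standalone jobs.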
Global Business Intelligence (BI) software vendor, Yellowfin, and Actian Corporation, pioneers of the record-breaking analytical database Vectorwise, will host a series of Big Data and BI Best Practices Webinars.
These are the slides from that presentation.
The Big Data & BI Best Practices Webinars and associated slides examine the phenomenal growth in business data and outline strategies for effectively, efficiently and quickly harnessing and exploring ‘Big Data’ for competitive advantage.
This document discusses combining Apache Spark and MongoDB for real-time analytics. It describes how MongoDB provides rich analytics capabilities through queries, aggregations, and indexing. Apache Spark can further extend MongoDB's analytics by offering additional processing capabilities. Together, Spark and MongoDB enable organizations to perform real-time analytics directly on operational data without needing separate analytics infrastructure.
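To make the aggregation idea above concrete, here is a toy, in-memory evaluator for a MongoDB-style pipeline supporting only `$match` and `$sum`-based `$group`. A real deployment runs such pipelines server-side (e.g. via pymongo or the Spark connector); the order data here is invented.

```python
def aggregate(docs, pipeline):
    """Evaluate a tiny subset of MongoDB's aggregation pipeline in memory."""
    for stage in pipeline:
        if "$match" in stage:
            # Keep only documents whose fields equal the stated values.
            cond = stage["$match"]
            docs = [d for d in docs if all(d.get(k) == v for k, v in cond.items())]
        elif "$group" in stage:
            # Group by the _id field and accumulate $sum expressions.
            spec = stage["$group"]
            key_field = spec["_id"].lstrip("$")
            groups = {}
            for d in docs:
                g = groups.setdefault(d[key_field], {"_id": d[key_field]})
                for out, expr in spec.items():
                    if out == "_id":
                        continue
                    field = expr["$sum"].lstrip("$")
                    g[out] = g.get(out, 0) + d[field]
            docs = list(groups.values())
    return docs

orders = [
    {"status": "ok",     "region": "EU", "amount": 10},
    {"status": "ok",     "region": "EU", "amount": 5},
    {"status": "ok",     "region": "US", "amount": 7},
    {"status": "failed", "region": "EU", "amount": 99},
]
result = aggregate(orders, [
    {"$match": {"status": "ok"}},
    {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}},
])
```

The division of labor the summary describes falls out naturally: such aggregations run close to the operational data in MongoDB, while Spark takes over where heavier transformations or MLlib-style processing is needed.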
Social media analytics using Azure Technologies – Koray Kocabas
Social media are computer-mediated tools that allow people to create, share, or exchange information, ideas, and pictures/videos in virtual communities and networks. In short, social media is everything to your customers, and your company needs to listen to them to understand them, make custom offers, improve loyalty, and more. The Azure Stream Analytics and HDInsight platforms can solve this problem for you. We'll focus on how to get Twitter data using Stream Analytics, how to perform data enrichment and storage using HDInsight, and how to approach sentiment analytics using Azure Machine Learning.
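As a deliberately tiny stand-in for the sentiment step (the session itself uses Azure Machine Learning; the word lists and scoring rule below are invented for illustration):

```python
# Toy lexicon-based sentiment scorer: count positive vs. negative words.
POSITIVE = {"great", "love", "fast", "helpful"}
NEGATIVE = {"slow", "broken", "hate", "outage"}

def sentiment(tweet):
    """Classify a tweet by comparing hits against two tiny word lists."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

A trained model replaces the hand-written lexicon in practice, but the pipeline shape is the same: stream in tweets, enrich and store them, then score each one.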
Best Practices in the Cloud for Data Management (US) – Denodo
Watch here: https://bit.ly/2Npt82U
If you have data, you are engaged in data management—be sure to do it effectively.
As organizations are assessing how COVID-19 has impacted their operations, new possibilities and uncharted routes are becoming the norm for many businesses. While exploring and implementing different deployment and operational models, the question of data management naturally surfaces while considering how these changes impact your data. Is this the right time to focus on data management? The reality is that if you have data, you are engaged in data management and so the real question is, are you doing it well?
Join Brice Giesbrecht from Caserta and Mitesh Shah from Denodo to explore the data management challenges and solutions facing data-driven organizations.
Key Considerations for Putting Hadoop in Production – MapR Technologies
This document discusses planning for production success with Hadoop. It covers key questions around business continuity, high availability, data protection and disaster recovery. It also discusses considerations for multi-tenancy, interoperability and high performance. Additionally, it provides an overview of MapR's enterprise-grade data platform and highlights how it addresses production requirements through features like its NFS interface, strong data protection, and high availability.
The document outlines the history of building a big data platform from 2014 to 2016, starting with building a Hadoop cluster in 2014, creating the first data report page in 2015, launching products based on big data also in 2015, developing data analysis products in 2016, and making changes to the platform in 2016. It then transitions to discussing the current state of the big data platform.
Enabling Fast Data Strategy: What’s new in Denodo Platform 6.0 – Denodo
In this presentation, you will see the new functionalities of Denodo 6.0, detailing the dynamic query optimization engine, enterprise deployment management, and information self-service for discovery and search.
This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/DzRtkg.
The document discusses how modern software architectures can help tame big data. It introduces the speakers and provides an overview of WidasConcepts. The agenda includes a discussion of how big data can help businesses, an example of big data applied in the CarbookPlus platform, and new software architectures for big data. Real-time systems and architectures like lambda architecture are presented as ways to process big data at high velocity and volume. The conclusion emphasizes that big data improves business efficiency but requires tailored implementations and new skills.
The document discusses Kasabi, a linked data marketplace that aims to make it easy to publish and use data and help people get paid for their data. It does this through cloud-based RDF storage, linked data publishing tools, search and browse capabilities for datasets, standard and custom APIs for accessing datasets instantly. The presentation demonstrates Kasabi and outlines future features like usage statistics, dataset analysis, and commercial features. Kasabi's revenue model involves fees for high-volume API usage and revenue sharing on commercial data. In summary, Kasabi is a platform for discovering, consuming, publishing and monetizing linked data.
Cloud Modernization and Data as a Service OptionDenodo
Watch: https://bit.ly/2E99UNO
The current data landscape is fragmented, not just in location but also in terms of shape and processing paradigms. Cloud has become a key component of modern architecture design. Data lakes, IoT, NoSQL, SaaS, etc. coexist with relational databases to fuel the needs of modern analytics, ML and AI. Exploring and understanding the data available within your organization is a time-consuming task. And all of this without even knowing if that data will be useful at all.
Attend this session to learn:
- How dynamic data challenges and the speed of change requires a new approach to data architecture.
- Learn how logical data architecture can enable organizations to transition data faster to the cloud with zero downtime.
- Explore how data as a service and other API management capabilities is a must in a hybrid cloud environment.
This document provides an overview of big data analysis tools and methods presented by Ehsan Derakhshan of innfinision. It discusses what data and big data are, important questions about database selection, and several tools and solutions offered by innfinision including MongoDB, PyTables, Blosc, and Blaze. MongoDB is highlighted as a scalable and high performance document database. The advantages of these tools include optimized memory usage, rich queries, fast updates, and the ability to analyze and optimize queries.
This document discusses choosing the right data architecture for big data projects. It begins by acknowledging big data comes in many types, from structured transactional data to unstructured text data. It then presents several big data architectures and platforms that are suitable for different data types and use cases, such as relational databases, NoSQL databases, data grids, and distributed file systems. The document emphasizes that one size does not fit all and the right choice depends on the specific data and business needs.
The Role of the Logical Data Fabric in a Unified Platform for Modern AnalyticsDenodo
Watch full webinar here: https://bit.ly/3FHKalT
Given the growing demand for analytics and the need for organizations to advance beyond dashboards to self-service analytics and more sophisticated algorithms like machine learning (ML), enterprises are moving towards a unified environment for data and analytics. What is the best approach to accomplish this unification?
In TDWI’s recent Best Practice Report, Unified Platforms for Modern Analytics, written by Fern Halper, TDWI VP Research, Senior Research Director for Advanced Analytics, adoption, use, challenges, architectures, and best practices for unified platforms for modern analytics is explored. One of the approaches for unification outlined in the report is a data fabric approach.
Join us for a webinar with our Director of Product Marketing, Robin Tandon, where he will discuss the role of the logical data fabric in a unified platform for modern analytics, focusing on several of the key findings outlined in this report. He will share insights and use case examples that demonstrate how a properly implemented logical data fabric is the most suitable approach for Unified Data Platforms across enterprises and organizations.
Watch on-demand & Learn:
- The benefits of a unified platform and its ability to capture diverse & emerging data types and how to support high performance and scalable solutions.
- The role of an enhanced AI driven data catalog and its implications towards the findings in the best practice report.
- Implications of a logical data fabric as it relates to several of the recommendations outlined in the report.
Partner Enablement: Key Differentiators of Denodo Platform 6.0 for the FieldDenodo
If you’re a Denodo Partner, this presentation is for you. Learn how to gain a competitive edge in the marketplace with Denodo Platform 6.0, and leverage off the new features and functionality.
This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/Qh8MeX.
Service generated big data and big data-as-a-serviceJYOTIR MOY
This document provides an overview of service-generated big data and big data-as-a-service. It discusses three types of service-generated big data: service trace logs, service QoS information, and service relationship data. It also describes big data-as-a-service which includes big data infrastructure-as-a-service, platform-as-a-service, and analytics software-as-a-service to provide common big data services and analyze the large volumes of service data. The business opportunities of big data-as-a-service are also briefly discussed.
Introduce the Big-Data data characteristic, big-data process flow/architecture, and take out an example about EKG solution to explain why we are run into big data issue, and try to build up a big-data server farm architecture. From there, you can have more concrete point of view, what the big-data is.
Customer Event Hub – a modern Customer 360° view with DataStax Enterprise (DSE) Guido Schmutz
Today, companies are using various channels to communicate with their customers. As a consequence, a lot of data is created, more and more also outside of the traditional IT infrastructure of an enterprise. This data often does not have a common format and they are continuously created with ever increasing volume. With Internet of Things (IoT) and their sensors, the volume as well as the velocity of data just gets more extreme.
To achieve a complete and consistent view of a customer, all these customer-related information has to be included in a 360 degree view in a real-time or near-real-time fashion. By that, the Customer Hub will become the Customer Event Hub. It constantly shows the actual view of a customer over all his interaction channels and provides an enterprise the basis for a substantial and effective customer relation.
In this presentation the value of such a platform is shown and how it can be implemented using DataStax Enterprise as the backend.
Data Mesh in Practice: How Europe’s Leading Online Platform for Fashion Goes ...Databricks
The Data Lake paradigm is often considered the scalable successor of the more curated Data Warehouse approach when it comes to democratization of data. However, many who went out to build a centralized Data Lake came out with a data swamp of unclear responsibilities, a lack of data ownership, and sub-par data availability.
Solution architecture for big data projects
solution architecture,big data,hadoop,hive,hbase,impala,spark,apache,cassandra,SAP HANA,Cognos big insights
Making ‘Big Data’ Your Ally – Using data analytics to improve compliance, due...emermell
This document summarizes a presentation on using data analytics for compliance, due diligence, and investigations. The presentation features four speakers: Raul Saccani of Deloitte, Dave Stewart of SAS Institute, John Walsh of SightSpan, and John Walsh of SAS Institute. It discusses challenges related to big data including volume, variety, and velocity of data. It provides examples of how financial institutions have used analytics for anti-money laundering model tuning and illicit network analysis. It also outlines the analytics lifecycle and considerations for adopting a proactive analytics strategy.
Mastering MapReduce: MapReduce for Big Data Management and AnalysisTeradata Aster
Whether you’ve heard of Google’s MapReduce or not, its impact on Big Data applications, data warehousing, ETL,
business intelligence, and data mining is re-shaping the market for business analytics and data processing.
Attend this session to hear from Curt Monash on the basics of the MapReduce framework, how it is used, and what implementations like SQL-MapReduce enable.
In this session you will learn:
* The basics of MapReduce, key use cases, and what SQL-MapReduce adds
* Which industries and applications are heavily using MapReduce
* Recommendations for integrating MapReduce in your own BI, Data Warehousing environment
Global Business Intelligence (BI) software vendor, Yellowfin, and Actian Corporation, pioneers of the record-breaking analytical database Vectorwise, will host a series of Big Data and BI Best Practices Webinars.
These are the slides from that presentation.
The Big Data & BI Best Practices Webinars and associated slides examine the phenomenal growth in business data and outline strategies for effectively, efficiently and quickly harnessing and exploring ‘Big Data’ for competitive advantage.
This document discusses combining Apache Spark and MongoDB for real-time analytics. It describes how MongoDB provides rich analytics capabilities through queries, aggregations, and indexing. Apache Spark can further extend MongoDB's analytics by offering additional processing capabilities. Together, Spark and MongoDB enable organizations to perform real-time analytics directly on operational data without needing separate analytics infrastructure.
Social media analytics using Azure Technologies – Koray Kocabas
Social media are computer-mediated tools that allow people to create, share or exchange information, ideas, and pictures/videos in virtual communities and networks. To sum up Social Media is everything for your customers and Your company need to listen them to understand, make a custom offer or improve loyalty etc. Azure Stream Analytics and HDInsight platforms can solve this problem for you. We'll focus on how to get Twitter data using Stream Analytics and how to make data enrichment and storing using HDInsight and What is the problem about sentiment analytics using Azure Machine Learning.
Best Practices in the Cloud for Data Management (US) – Denodo
Watch here: https://bit.ly/2Npt82U
If you have data, you are engaged in data management—be sure to do it effectively.
As organizations are assessing how COVID-19 has impacted their operations, new possibilities and uncharted routes are becoming the norm for many businesses. While exploring and implementing different deployment and operational models, the question of data management naturally surfaces while considering how these changes impact your data. Is this the right time to focus on data management? The reality is that if you have data, you are engaged in data management and so the real question is, are you doing it well?
Join Brice Giesbrecht from Caserta and Mitesh Shah from Denodo to explore the data management challenges and solutions facing data-driven organizations.
Key Considerations for Putting Hadoop in Production SlideShare – MapR Technologies
This document discusses planning for production success with Hadoop. It covers key questions around business continuity, high availability, data protection and disaster recovery. It also discusses considerations for multi-tenancy, interoperability and high performance. Additionally, it provides an overview of MapR's enterprise-grade data platform and highlights how it addresses production requirements through features like its NFS interface, strong data protection, and high availability.
The document outlines the history of building a big data platform from 2014 to 2016: building a Hadoop cluster in 2014, creating the first data report page and launching products based on big data in 2015, then developing data analysis products and reworking the platform in 2016. It then transitions to discussing the current state of the big data platform.
Enabling Fast Data Strategy: What’s new in Denodo Platform 6.0 – Denodo
In this presentation, you will see the new functionalities of the Denodo 6.0 detailing dynamic query optimization engine, managing enterprise deployments, and using information self-service for discovery and search.
This presentation is part of the Fast Data Strategy Conference, and you can watch the video here goo.gl/DzRtkg.
SAMOA: A Platform for Mining Big Data Streams (Apache BigData Europe 2015) – Nicolas Kourtellis
A general overview of the APACHE SAMOA platform for mining big data streams using machine learning algorithms running on distributed stream processing platforms such as Apache STORM, Apache Flink, Apache Samza and Apache Apex.
Results are shown from experimentation with VHT, the Vertical Hoeffding Tree proposed in "VHT: Vertical Hoeffding Tree." N. Kourtellis, G. De Francisci Morales, A. Bifet, A. Mordupo. IEEE BigData 2016.
Presentation in APACHE BIG DATA Europe 2015
SAMOA: A Platform for Mining Big Data Streams (Apache BigData North America 2... – Nicolas Kourtellis
A general overview of the APACHE SAMOA platform for mining big data streams using machine learning algorithms running on distributed stream processing platforms such as Apache STORM, Apache Flink, Apache Samza and Apache Apex.
Results are shown from experimentation with VHT, the Vertical Hoeffding Tree proposed in "VHT: Vertical Hoeffding Tree." N. Kourtellis, G. De Francisci Morales, A. Bifet, A. Mordupo. IEEE BigData 2016.
Presentation in APACHE BIG DATA North America 2016
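The Hoeffding tree at the heart of VHT decides when it has seen enough stream examples to split a node by comparing the gain difference of the two best attributes against the Hoeffding bound. A minimal sketch of that statistical test (illustrative only; this is the standard bound, not the SAMOA API):

```python
import math

def hoeffding_bound(value_range, delta, n):
    # epsilon = sqrt(R^2 * ln(1/delta) / (2n)): with probability 1 - delta,
    # the observed mean of n samples is within epsilon of the true mean.
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain, second_best_gain, value_range, delta, n):
    # Split when the best attribute beats the runner-up by more than epsilon;
    # as n grows, epsilon shrinks and the decision becomes safe to make.
    return (best_gain - second_best_gain) > hoeffding_bound(value_range, delta, n)

# With few examples the bound is wide, so the tree waits for more data.
print(round(hoeffding_bound(1.0, 1e-7, 1000), 3))  # ~0.09
```

The vertical parallelism in VHT comes from partitioning the attribute statistics of each node across processors, while the bound itself is computed exactly as above.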
Materials from the ITpro Active seminar “Controlling Big Data in the Cloud: BigData Platform Conference – How to use Database as a Service to win in the IoT era,” held Wednesday, November 11.
Based on real customer case studies, the talk covers cloud-based data analysis scenarios built around four personas: infrastructure engineer, business user, data scientist, and marketer.
This document provides biographical information about Dr. Dinh Le Dat, the co-founder and CEO of ANTS, a Big Data advertising and data-driven marketing solution company. It outlines his educational background, including a PhD in Physics and Mathematics from Moscow State University, and over 15 years of experience working for technology companies in Vietnam, including roles as CTO of FPT Online Service JSC and co-founder of Yola JSC. It also lists his contact information and links to his LinkedIn profile and website.
This document provides an overview of Spark and using Spark on HDInsight. It discusses Spark concepts like RDDs, transformations, and actions. It also covers Spark extensions like Spark SQL, Spark Streaming, and MLlib. Finally, it highlights benefits of using Spark on HDInsight like integration with Azure services, scalability, and support.
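The RDD model the deck walks through — lazy transformations that build a plan, and actions that trigger execution — can be mimicked in a few lines of plain Python, purely to illustrate the concept. No Spark is required here, and `TinyRDD` is an invented name for the sketch:

```python
class TinyRDD:
    """Toy stand-in for an RDD: transformations build a plan, actions run it."""

    def __init__(self, data, ops=()):
        self._data = data
        self._ops = ops  # deferred pipeline of (kind, function) steps

    # Transformations: return a new TinyRDD, nothing is computed yet.
    def map(self, f):
        return TinyRDD(self._data, self._ops + (("map", f),))

    def filter(self, pred):
        return TinyRDD(self._data, self._ops + (("filter", pred),))

    # Action: only here does the deferred pipeline actually execute.
    def collect(self):
        items = iter(self._data)
        for kind, f in self._ops:
            items = map(f, items) if kind == "map" else filter(f, items)
        return list(items)

rdd = TinyRDD(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
print(rdd.collect())  # [0, 4, 16, 36, 64]
```

Real RDDs add partitioning, fault tolerance via lineage, and a far richer operator set, but the lazy-plan-plus-action split is the same.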
Oxalide MorningTech #1 - BigData
1st MorningTech @Oxalide, hosted by Ludovic Piot (@lpiot), December 15, 2016.
For this first edition of the MorningTech, we offer an overview of one of the hot topics of the moment: Big Data.
Beyond the buzzword, we will cover:
The key concepts
The key stages of Big Data projects and the technologies to use (storage, ingestion, …)
The challenges of Big Data architectures (lambda architecture, …)
Artificial intelligence (machine learning, deep learning, …)
And we will finish with a Big Data use case on AWS built around the gyroscope data of your mobile visitors
Subject: Oxalide's 1st MorningTech talk about BigData.
Date: 15-dec-2016
Speakers: Ludovic Piot (@lpiot, @oxalide)
Language: French
Lien SpeakerDeck : https://speakerdeck.com/lpiot/oxalide-morningtech-number-1-bigdata
Lien SlideShare : https://www.slideshare.net/LudovicPiot/oxalide-morningtech-1-bigdata
YouTube Video capture: https://youtu.be/7O85lRzvMY0
Main topics:
* The main challenges of Big Data
** Gartner's 3 Vs: volume, variety, velocity
* Data storage
** data lake
** the technologies
* Data ingestion
** ETL
** data streams
** the technologies
* Compute challenges
** map-reduce
** Spark
** lambda architecture
* Demo of a Big Data platform on AWS
* Artificial intelligence
** exploratory data science and notebooks,
** machine learning,
** deep learning,
** data pipeline
** the technologies
* Going further
** Data governance
** Data visualization
This document outlines an introductory workshop on big data held by the BigData Community. The workshop agenda includes an introduction to big data and the Hadoop ecosystem, demonstrations of Hadoop installation in standalone and pseudo-distributed modes, and a hands-on Java application example. Attendees are guided through setting up a test environment, downloading and configuring Hadoop, and testing the installation. The goal is to provide 120 students and 5 universities with an awareness of big data science and engineering through hands-on training.
GCPUG meetup 201610 - Dataflow Introduction – Simon Su
This document provides information about Simon Su and Sunny Hu, who will be presenting on Google's BigData solution. It includes their contact information and backgrounds. Simon's areas of focus include Node.js and blogging. Sunny's skills include project management, system analysis, and Java. The document also advertises a Facebook and Google+ group for the Google Cloud Platform User Group Taiwan, where people can share experiences using GCP. It poses trivia questions about Google's infrastructure and provides timelines of Google's BigData innovations.
This material is one of the BOAZ 2016 second-half project topics: the result of applying, during the Advanced regular sessions, the theories, fundamentals, and tool skills learned in the Base regular sessions.
*** A life guidebook for people in their 20s and 30s living alone in Seoul ***
A life guidebook produced for 20- and 30-somethings living alone in Seoul. Its main purpose is to provide information about food and housing.
6th cohort: Kim Seung-hyo, Chung-Ang University, Applied Statistics
6th cohort: Kim Jae-eun, Ewha Womans University, Visual Design
7th cohort: Park Da-hye, Hankuk University of Foreign Studies, Statistics
** BOAZ, Korea's first university student big data association **
Blog : http://BOAZbigdata.com
Facebook : http://fb.com/BOAZbigdata
There are many routes to becoming an actor, including gaining experience through amateur theater groups, work experience at theaters, or jobs in entertainment. Formal training is not required but can be beneficial, though a degree in acting or performing arts may improve chances primarily due to practical coursework. Successful actors come from various backgrounds, and experience in drama societies or other performing arts can supplement training. The roles of an actor include attending rehearsals, working closely with the director to understand their vision, and potentially contributing ideas for productions. Actors must collaborate with directors, writers, and audiences to effectively perform and communicate meaning. Many actors eventually pursue writing, directing, or producing their own work after gaining experience acting.
Curry College was originally established in 1879 as the School of Elocution in Boston, moving in 1885 to Blue Hill Ave and changing its name to the School of Expression. In 1943 it became Curry College and was able to start awarding degrees.
Manmohan Singh's first three Independence Day speeches from 2004-2006 referred more to words like economy, India, growth and reform, while his last three speeches from 2010-2012 referred more to words like corruption, planning, and less to growth. An analysis of Singh's speeches found that he spoke at greater length about corruption and planning in recent years compared to earlier speeches, and spoke less about economic growth, India, and reforms. Additionally, Singh's speeches frequently invoked and quoted past leaders like Indira Gandhi, Rajiv Gandhi, Mahatma Gandhi, and Jawaharlal Nehru.
HPE IDOL Technical Overview - July 2016 – Andrey Karpov
Search and Analytics Platform for Text and Rich Media
Open Innovation is transforming everything
Connected people, apps and things generating massive data in many forms
How do you bridge the gap between data and outcomes?
Augmented Intelligence power apps for competitive advantage
Machine Learning at the Service of Business Augmented Intelligence
HPE Big Data Advanced Analytics Software Solutions
Strong information and weak information
HPE IDOL: Natural Language Processing (NLP) engine
Gianluigi Viganò - How to use HP Haven OnDemand functions for Big Data apps – Codemotion
HP Haven, the industry’s first comprehensive, scalable, open, and secure platform for Big Data analytics enables you to deliver actionable insight where and when it is needed to drive superior business outcomes and gain competitive advantage. It includes HP Haven OnHadoop the most comprehensive array of SQL functions with any Hadoop distribution.
This document provides an introduction to data science and machine learning concepts. It discusses data analytics, machine learning, artificial intelligence, and deep learning. It introduces popular tools for data analytics like Python, Jupyter Notebook, R, and SAS. It also discusses key platforms in data science like Kaggle and DataScientists.net that host data science competitions and allow users to work on real-world datasets. The document provides examples of data analytics applications in different industries like media, healthcare, finance, and manufacturing. It also discusses concepts related to big data like the four V's of big data - volume, velocity, variety and veracity.
AI, Search, and the Disruption of Knowledge Management – Trey Grainger
Trey Grainger discussed how search has evolved from basic keyword search to more advanced capabilities like understanding user intent, providing personalized search, and augmented search using machine learning and AI. He explained the concept of "reflected intelligence" where user interactions with search results are used to continuously improve search quality through techniques like signals boosting, learning to rank, and collaborative filtering. Grainger also outlined how knowledge graphs can help power semantic search by modeling relationships between entities to better understand queries and provide more relevant results.
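The "reflected intelligence" idea — feeding user interactions back into ranking — can be illustrated with a naive signals-boosting sketch. The scoring scheme, document names, and click counts below are invented for illustration; this is not Grainger's actual implementation:

```python
import math
from collections import Counter

# Historical click signals: (query, doc_id) -> number of past clicks.
click_signals = Counter({
    ("cheap laptop", "doc_a"): 40,
    ("cheap laptop", "doc_b"): 3,
})

def boosted_score(query, doc_id, base_relevance):
    # Add a dampened (log-scaled) boost from past clicks for this
    # query/document pair on top of the text-relevance score.
    clicks = click_signals[(query, doc_id)]
    return base_relevance + math.log1p(clicks)

# doc_b scores slightly higher on text relevance alone,
# but doc_a's click history pushes it to the top.
base = {"doc_a": 1.0, "doc_b": 1.5}
ranked = sorted(base, key=lambda d: boosted_score("cheap laptop", d, base[d]),
                reverse=True)
print(ranked)  # ['doc_a', 'doc_b']
```

Production systems go further — learning-to-rank models trained on these signals, spam filtering on the clicks, decay over time — but the feedback loop is the same.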
Anzo Smart Data Lake 4.0 - a Data Lake Platform for the Enterprise Informatio... – Cambridge Semantics
Only with a rich and interactive semantic layer can your data and analytics stack deliver true on-demand access to data, answers and insights - weaving data together from across the enterprise into an information fabric. In this webinar we introduce Anzo Smart Data Lake 4.0, which provides that rich and interactive semantic layer to your data.
Content Management, Metadata and Semantic Web – Amit Sheth
Keynote given at NetObjectDays conference, Erfurt, September 11, 2001.
One of the earliest keynotes discussing commercial semantic web technologies, semantic web applications (including semantic search, semantic targeting, semantic content management). Prof. Sheth started a Semantic Web company Taalee, Inc. in 1999 (Product was MediaAnywhere A/V search engine),that merged to become Voquette in 2001 (product was called SCORE), Semagix in 2004 (product was called Semagix Freedom), and then Fortent in 2006 (products included Know Your Customers). Additional details can be found in U.S. Patent #6311194, 30 Oct. 2001 (filed 2000).
Note: the commercial system used "WorldModel" as at the time, business customers were not yet warm to "Ontology" - the concept/intent is the same. More recent information at http://knoesis.org
Content Management, Metadata and Semantic Web – Amit Sheth
The document discusses new challenges in content management, including information overload and the need for semantic metadata and ontologies to improve relevance and personalization. It proposes that next-generation content management should leverage semantic technologies like knowledge bases, classification, metadata extraction and semantic engines to organize content semantically rather than just structurally. This will help enterprises better distribute the right content to the right users.
TechWise with Eric Kavanagh, Dr. Robin Bloor and Dr. Kirk Borne
Live Webcast on July 23, 2014
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?RCID=59d50a520542ee7ed00a0c38e8319b54
Analytical applications are everywhere these days, and for good reason. Organizations large and small are using analytics to better understand any aspect of their business: customers, processes, behaviors, even competitors. There are several critical success factors for using analytics effectively: 1) know which kind of apps make sense for your company; 2) figure out which data sets you can use, both internal and external; 3) determine optimal roles and responsibilities for your team; 4) identify where you need help, either by hiring new employees or using consultants; 5) manage your program effectively over time.
Register for this episode of TechWise to learn from two of the most experienced analysts in the business: Dr. Robin Bloor, Chief Analyst of The Bloor Group, and Dr. Kirk Borne, Data Scientist, George Mason University. Each will provide their perspective on how companies can address each of the key success factors in building, refining and using analytics to improve their business. There will then be an extensive Q&A session in which attendees can ask detailed questions of our experts and get answers in real time. Registrants will also receive a consolidated deck of slides, not just from the main presenters, but also from a variety of software vendors who provide targeted solutions.
Visit InsideAnalysis.com for more information.
The document discusses demystifying data science by providing motivations, a maturity model, and an ecosystem model with practical examples and advice. It explains data science concepts like data curation, machine learning, and business integration. Examples are given of using data science for time-to-event modeling, topic modeling, and anomaly detection. The importance of communication, iteration, and understanding models as approximations is emphasized.
The document discusses the evolution of search engines from basic keyword search to semantic search using knowledge graphs and structured data. It provides examples of how search engines like Google are now able to provide direct answers to queries by searching structured data rather than just documents. It emphasizes the importance of representing web content as structured data using schemas like schema.org to be discoverable in semantic search and knowledge graphs.
Big data and marketing is becoming an important tool for companies. The document discusses how big data can be used for personalization, listening to customers, and responding to better serve their needs. It outlines the key steps in the process from data collection and analysis to insights and actions. Various big data tools and techniques are mentioned to understand customer behavior and trends in order to tailor marketing and customer experiences. The challenges of translating data into insights and actions are also addressed.
#MarketingShake - Edward Chenard - Discover the power of Big Data to transf... – amdia
Big data and marketing is becoming an important tool for companies. The document discusses how big data can be used for personalization, listening to customers, and responding to better serve their needs. It outlines the key steps in the process from data collection and analysis to insights and actions. Various big data tools and techniques are mentioned to understand customer behavior and trends in order to tailor marketing and customer experiences. The importance of data visualization to tell the story of patterns and create useful insights for businesses is also highlighted.
Turn Data into Business Value – Starting with Data Analytics on Oracle Cloud ... – Lucas Jellema
This document discusses how to turn data into business value by starting with data analytics on Oracle Cloud. It provides an overview of the data analytics process, from gathering and preparing raw data to developing machine learning models and visualizing insights. It then details an example implementation of analyzing session data from Oracle conferences. The document emphasizes that Oracle's data analytics portfolio, including Autonomous Data Warehouse Cloud, Analytics Cloud, and Data Visualization Desktop, can support organizations in extracting value from their data.
Real-time big data analytics based on product recommendations case study – deep.bi
We started as an ad network. The challenge was to recommend the best product (out of millions) to the right person at a given moment (thousands of users within a second). We have delivered 5 billion ad views over the past 24 months. To put that in context: serving 1 ad per second, it would take about 160 years to deliver 5 billion ads.
So we needed a solution. SQL databases did not work. Popular NoSQL databases did not work. Standard data warehouse approaches (pre-aggregations, creating schemas) did not work either.
Re-thinking all the problems posed by the huge data streams flowing in every second, we built a complete solution based on open-source technologies and fresh, smart ideas from our engineering team. It is called deep.bi, and we now make it available to other companies.
deep.bi lets high-growth companies solve fast data problems by providing scalable, flexible and real-time data collection, enrichment and analytics.
It was built using:
- Node.js - API
- Kafka - collecting and distributing data
- Spark Streaming - ETL, data enrichments
- Druid - real-time analytics
- Cassandra - user events store
- Hadoop + Parquet + Spark - raw data store + ad-hoc queries
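A pipeline like the one above — collect, enrich, aggregate — can be sketched with plain Python generators standing in for the Kafka topic and the Spark Streaming enrichment stage. The event fields, the `enrich` function, and the geo lookup table are invented for illustration only:

```python
import json

# Stand-in for a Kafka topic: an iterable of raw JSON events.
raw_events = [
    '{"user": "u1", "product": "p9", "ip": "10.0.0.1"}',
    '{"user": "u2", "product": "p9", "ip": "10.0.0.2"}',
    '{"user": "u1", "product": "p3", "ip": "10.0.0.1"}',
]

GEO_LOOKUP = {"10.0.0.1": "PL", "10.0.0.2": "DE"}  # toy enrichment table

def enrich(events):
    # Stream-ETL stage: parse each raw event and attach a country code.
    for raw in events:
        event = json.loads(raw)
        event["country"] = GEO_LOOKUP.get(event["ip"], "??")
        yield event

def count_by_product(events):
    # Real-time analytics stage: running per-product view counts.
    counts = {}
    for event in events:
        counts[event["product"]] = counts.get(event["product"], 0) + 1
    return counts

print(count_by_product(enrich(raw_events)))  # {'p9': 2, 'p3': 1}
```

The real stack replaces each generator with a distributed component: Kafka buffers and partitions the events, Spark Streaming runs the enrichment in parallel micro-batches, and Druid keeps the aggregates queryable in real time.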
This document provides resources for using AI during times of crisis and includes the following:
- Open datasets related to COVID-19 from sources like the WHO that can be used to help address the pandemic.
- Cognitive services from Microsoft like speech recognition, language processing and computer vision that can be used to build AI applications.
- Recommendations for using these AI tools and services to help educators create engaging classrooms, enhance business processes, and develop conversational agents and voice skills.
Advanced Analytics and Data Science Expertise – SoftServe
An overview of SoftServe's Data Science service line.
- Data Science Group
- Data Science Offerings for Business
- Machine Learning Overview
- AI & Deep Learning Case Studies
- Big Data & Analytics Case Studies
Visit our website to learn more: http://www.softserveinc.com/en-us/
A quick Description about presentation:
• What ElasticSearch is and how it works.
• How ElasticSearch analyzes data by splitting a document into meaningful portions and indexing each of those portions separately, so that whenever a new search request comes in, it knows where to look.
• Features and advantages of ElasticSearch, such as sensible built-in sharding defaults, fail-safe node clusters, and adding a new node without having to reboot.
• Out-of-the-box features for today’s applications, like faceted search, reverse search using Percolators, and pre-built Analyzers.
The tutorial covers big data search, the contenders, an introduction to ElasticSearch, going beyond search, and uncharted territory. It begins with a brief look at big data search — search over rapidly accumulating data — and the challenges it raises. A section on the contenders follows, covering Lucene, Apache Solr, Sphinx, and ElasticSearch itself.
There is also an introduction to ElasticSearch as a search server and its features, such as push replication, node auto-discovery, and fail-safety, along with analyzing data and ways of indexing it right. Afterwards, a section on "more than search" covers facets, range facets, histogram facets, geo facets, and percolation in ElasticSearch.
The last section covers uncharted territory: ElasticSearch as a NoSQL database, various "what if" situations, and references.
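The core idea described above — splitting each document into tokens and indexing each token separately, so a search request knows exactly where to look — is an inverted index. A minimal sketch (a toy, without the stemming, scoring, and sharding a real ElasticSearch analyzer adds):

```python
from collections import defaultdict

def build_inverted_index(docs):
    # Map every token to the set of document ids that contain it.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    # AND-search: return documents containing every query token.
    token_sets = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*token_sets) if token_sets else set()

docs = {1: "big data search", 2: "data analytics", 3: "search engines for big data"}
index = build_inverted_index(docs)
print(search(index, "big data"))  # {1, 3}
```

Because lookups hit the token postings directly, query time depends on the number of matching documents rather than the size of the corpus — the property that makes this structure the backbone of Lucene-based engines.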
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
Global Situational Awareness of A.I. and where it's headed – vikram sood
You can see the future first in San Francisco.
Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.
The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed, and before long, The Project will be on. If we’re lucky, we’ll be in an all-out race with the CCP; if we’re unlucky, an all-out war.
Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the wilful blindness of “it’s just predicting the next word”. They see only hype and business-as-usual; at most they entertain another internet-scale technological change.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years. Whether these people are also right about the next few years remains to be seen. But these are very smart people—the smartest people I have ever met—and they are the ones building this technology. Perhaps they will be an odd footnote in history, or perhaps they will go down in history like Szilard and Oppenheimer and Teller. If they are seeing the future even close to correctly, we are in for a wild ride.
Let me tell you what we see.
Natural Language Processing (NLP), RAG and its applications .pptx – fkyes25
In the realm of Natural Language Processing (NLP), knowledge-intensive tasks such as question answering, fact verification, and open-domain dialogue generation require the integration of vast and up-to-date information. Traditional neural models, though powerful, struggle with encoding all necessary knowledge within their parameters, leading to limitations in generalization and scalability. The paper "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks" introduces RAG (Retrieval-Augmented Generation), a novel framework that synergizes retrieval mechanisms with generative models, enhancing performance by dynamically incorporating external knowledge during inference.
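At its core, the RAG loop described above is: score the query against a document store, retrieve the top-k passages, and condition the generator on them at inference time. A toy sketch, with simple word-overlap retrieval standing in for the paper's dense (DPR) retriever and a prompt string standing in for the seq2seq generator:

```python
def retrieve(query, corpus, k=2):
    # Toy retriever: score passages by word overlap with the query.
    # (The actual RAG paper uses dense embeddings and inner-product search.)
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(query, corpus):
    # Augment the generator's input with the retrieved context.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is in Paris.",
    "Spark is a cluster computing framework.",
    "Paris is the capital of France.",
]
print(rag_prompt("where is the eiffel tower", corpus))
```

The point of the architecture is that the knowledge lives in the (easily updatable) document store, while the model parameters only need to learn how to read and compose it.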
ViewShift: Hassle-free Dynamic Policy Enforcement for Every Data Lake – Walaa Eldin Moustafa
Dynamic policy enforcement is becoming an increasingly important topic in today’s world where data privacy and compliance is a top priority for companies, individuals, and regulators alike. In these slides, we discuss how LinkedIn implements a powerful dynamic policy enforcement engine, called ViewShift, and integrates it within its data lake. We show the query engine architecture and how catalog implementations can automatically route table resolutions to compliance-enforcing SQL views. Such views have a set of very interesting properties: (1) They are auto-generated from declarative data annotations. (2) They respect user-level consent and preferences (3) They are context-aware, encoding a different set of transformations for different use cases (4) They are portable; while the SQL logic is only implemented in one SQL dialect, it is accessible in all engines.
#SQL #Views #Privacy #Compliance #DataLake
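The pattern in these slides — route table resolutions to a compliance-enforcing SQL view instead of the raw table — can be demonstrated with the standard-library sqlite3 module. The schema, consent flag, and masking rule below are invented for illustration; ViewShift itself auto-generates such views from declarative data annotations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE profiles (id INTEGER, email TEXT, country TEXT, consented INTEGER);
    INSERT INTO profiles VALUES
        (1, 'a@example.com', 'DE', 1),
        (2, 'b@example.com', 'DE', 0);

    -- Compliance-enforcing view: mask email unless the user consented.
    CREATE VIEW profiles_enforced AS
    SELECT id,
           CASE WHEN consented = 1 THEN email ELSE '<redacted>' END AS email,
           country
    FROM profiles;
""")

# The catalog routes queries to the view, never to the raw table.
rows = conn.execute("SELECT id, email FROM profiles_enforced ORDER BY id").fetchall()
print(rows)  # [(1, 'a@example.com'), (2, '<redacted>')]
```

Because the policy lives in the view's SQL rather than in each consuming application, every engine that resolves the table name through the catalog gets the same enforcement for free — the portability property the slides highlight.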
Analysis insight about a Flyball dog competition team's performance – roli9797
Insights from my analysis of a Flyball dog competition team's performance last year. Find more: https://github.com/rolandnagy-ds/flyball_race_analysis/tree/main
Challenges of Nation Building-1.pptx with more important
Callcenter HPE IDOL overview
1. HPE IDOL
Search and Analytics Platform for Text and Rich Media
Vitali Nikitin, BigData Leader, CIS countries
2. Open Innovation is transforming everything
Journey to the New Style of Business
Traditional Data Analytics – Contain Cost:
– Closed technology architecture design
– “After-the-fact” static analytics, e.g. monthly reporting
– Analyze data at “rest”
– Premise-based systems
Open Innovation Data Analytics – Create Outcomes:
– Real-time insight & understanding via machine learning
– Put data science into your processes – next-gen apps and services
– Analyze and apply perishable data anywhere at any time
– Seamless blending of open source, advanced technology, deployment choices…
3. Human data, machine data, business data
Connected people, apps and things generating massive data in many forms – 10x faster growth than traditional business data.
4. How do you bridge the gap between data and outcomes?
Q1: How do you consume any data generated or understood by humans?
Q2: How do you identify key aspects and patterns to determine outcomes?
Q3: How do you automate to take action?
Data sources → Diverse modern apps
5. Augmented Intelligence powers apps for competitive advantage
Augmented Intelligence powered by HPE: artificial intelligence, machine learning and natural language processing using advanced analytics functions.
7. HPE Big Data Advanced Analytics Software Solutions
Vertica high-performance advanced analytics:
− Real-time performance at scale
− Premise, Cloud, and Hybrid
− Native optimized Hadoop options
IDOL augmented intelligence for human information:
− Advanced enterprise search and rich media analytics
− Analyze text, audio, image, and streaming video
Haven OnDemand APIs and Services:
− Machine Learning as a Service
− Delivered on Microsoft Azure Cloud
− Accessible to any developer
Capabilities: deep learning, text analytics, face detection, neural networks, speech recognition, categorization.
9. Over 500 IDOL functions to augment your intelligence
Automatic hyperlinking, conceptual search, keyword search, fieldtext search, phrase search, phonetic search, field modulation, fuzzy matching, implicit profiling, explicit profiling, community and expertise network, agents, intent-based ranking, alerting, social feedback, Eduction, automatic clustering, 2D/3D clustering, autoclassification, automatic language detection, sentiment analysis, automatic taxonomy generation, Automatic Query Guidance, highlighting, parametric refinement, summarization, real-time predictive query, metadata extraction, automatic tagging, faceted navigation.
Four pillars: Inquire – search your data; Investigate – analyze your data; Interact – personalize your data; Improve – enhance your data.
10. Language independence
– Free from linguistic restraints and rules
– Automatically adapts to changing definitions
– Over 150 languages
– Single-byte, multibyte and Unicode languages
– Optional language packs for localization
11. Clustering
Automatically partition the data so that similar information is clustered together – e.g. product performance issues, side letters, off-balance-sheet transactions.
12. Automatic Query Guidance
Add context to short queries by grouping results into concepts.
A keyword query for “Madonna” simply returns documents containing “Madonna”; with conceptual clustering, the same query returns result documents grouped by most likely meaning – documents about 1. the singer, 2. the Italian Renaissance Madonna, 3. Madonna – with further suggestions.
13. Enhance your data
Exploratory analytics that help you discover the “unknown unknowns”:
– Managed classification: create categories using business rules or training
– Automatic classification and clustering: automatically determine categories based on patterns and relationships in information
– Spot analysis of all themes and grouping; time-sensitive analysis: what’s hot? what’s new?
– Eduction: apply structure to unstructured data by extracting key fields and entities – hundreds of entities supported, including names, addresses, credit card information, sentiment, intent, etc.
– Audio analysis: speaker-independent speech-to-text, speaker identification, audio events, language identification, etc.
– Image and video analysis: next-generation image classification (is this a car? / find more like “this”); on-screen OCR, logo detection, intelligent scene analysis, color and texture analysis, story segmentation, etc.
14. Eduction: hundreds of conceptual entities
– Quickly narrow search results with auto-identified facets and conceptual entities such as employee names from documents
– Validate or customize entities
  – Is this a valid credit card number?
  – What are all docs that contain SSNs?
  – If area code is 415, output as “Home Office”
– Pinpoint accuracy for multibyte languages such as CJK, Thai and some European languages
Entities include: names, places, IP addresses, companies, events, relationships, medicines, airports, cars, Social Security numbers, phone numbers, credit cards, dates, holidays, job titles, currencies… many more
15. Analyze your data
– Quickly evaluate the relevance of information
  – Automatic Query Guidance (providing top themes from query results in real time)
  – Concept navigation via advanced visualizations (node graphs, theme tracking, topic maps, broadcast analysis)
  – Intelligent summarization (simple, concept and context)
  – Intelligent highlighting (search terms, phrases, concepts, context, fidelity to query grammar)
  – Concept streaming (real-time summaries from audio that are contextual to queries and intent)
  – Intelligent de-duplication, including “near” de-duplication
– Use structure to navigate the data
  – Structured, semi-structured and XML support
  – Parametric search (unlimited nesting and association support)
  – Directed navigation (create compelling navigation for users)
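“Near” de-duplication is commonly done by comparing word-shingle fingerprints of documents. The sketch below uses that generic technique with Jaccard similarity; the function names and the 0.5 threshold are illustrative assumptions, not IDOL's de-duplication API.

```python
def shingles(text, k=3):
    # k-word shingles: overlapping word windows that fingerprint a document.
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    # Jaccard similarity of two shingle sets.
    return len(a & b) / len(a | b) if a | b else 0.0

def near_duplicates(docs, threshold=0.5):
    # Index pairs of documents whose shingle overlap exceeds the threshold.
    sets = [shingles(d) for d in docs]
    return [(i, j)
            for i in range(len(docs))
            for j in range(i + 1, len(docs))
            if jaccard(sets[i], sets[j]) >= threshold]

docs = [
    "the quarterly report was filed on friday morning",
    "the quarterly report was filed on friday afternoon",
    "weather was sunny all week",
]
print(near_duplicates(docs))  # → [(0, 1)]
```

At scale, the pairwise comparison would be replaced by MinHash/LSH, but the similarity notion is the same.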
16. Personalize your data
We are what we…
17. Discover Relationships for Richer Insight
Knowledge Graph
– Customer A is in Customer B’s network
– Customer C is linked to Customer E via Customer D
– Customers F and G purchased the same model last year
– Customer H is the most influential in Customer B’s network
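Relationship chains like “Customer C is linked to Customer E via Customer D” amount to shortest-path search over the graph. A minimal breadth-first sketch, using a made-up adjacency map that mirrors the slide's example (not IDOL's graph engine):

```python
from collections import deque

def link_path(graph, start, goal):
    # Breadth-first search returning the shortest chain of relationships
    # between two customers, or None if they are unconnected.
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical customer network echoing the slide's relationships.
network = {"C": ["D"], "D": ["E"], "A": ["B"], "H": ["B"]}
print(link_path(network, "C", "E"))  # → ['C', 'D', 'E']
```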
18. Intent-based ranking
– Search results personalized and targeted based on user and context
– Profile developed through complete behavior analysis… implicit or explicit profiling
– Gather data from content consumption, content contribution, interaction with colleagues, etc.
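One simple way to realize profile-based ranking is to boost each result's base relevance by its overlap with an interest profile learned from past behavior. The result and profile structures below are hypothetical, a sketch of the idea rather than IDOL's ranking model.

```python
def rerank(results, profile):
    # Boost each result's base relevance score by the weight of profile
    # terms it matches; sort descending by the boosted score.
    def boost(r):
        overlap = sum(profile.get(t, 0.0) for t in r["terms"])
        return r["score"] * (1.0 + overlap)
    return sorted(results, key=boost, reverse=True)

# Implicit profile inferred from a user's reading history (hypothetical).
profile = {"renaissance": 0.8, "painting": 0.6}
results = [
    {"title": "Madonna tour dates", "terms": ["singer", "tour"], "score": 0.9},
    {"title": "Madonna in Renaissance art", "terms": ["renaissance", "painting"], "score": 0.7},
]
print([r["title"] for r in rerank(results, profile)])
# → ['Madonna in Renaissance art', 'Madonna tour dates']
```

The same ambiguous query thus ranks differently for an art historian than for a music fan, which is the point of intent-based ranking.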
19. Topical sentiment analysis
– Decomposition and classification within a sentence to pull out specific topics
– “I stayed at the Marriott last week, and though the mattresses were very nice, the service was awful.”
– Is this positive? Negative? Neutral? How much positive? How much negative?
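The hotel-review example can be worked through with a naive decomposition: split the sentence into clauses, then score each topic by the polarity words in its own clause. The tiny lexicon and clause splitter below are toy assumptions, nothing like IDOL's sentiment model, but they show why topic-level scores differ from a single sentence-level score.

```python
import re

# Toy polarity lexicon (hypothetical; real lexicons hold thousands of terms).
POSITIVE = {"nice", "great", "excellent"}
NEGATIVE = {"awful", "bad", "poor"}

def topical_sentiment(sentence, topics):
    # Split on commas, semicolons and contrastive conjunctions, then score
    # each topic by polarity words appearing in the same clause.
    scores = {}
    for clause in re.split(r"[,;]|\bbut\b|\bthough\b", sentence.lower()):
        words = set(re.findall(r"[a-z]+", clause))
        for topic in topics:
            if topic in words:
                scores[topic] = len(words & POSITIVE) - len(words & NEGATIVE)
    return scores

sentence = ("I stayed at the Marriott last week, and though the mattresses "
            "were very nice, the service was awful.")
print(topical_sentiment(sentence, ["mattresses", "service"]))
# → {'mattresses': 1, 'service': -1}
```

A whole-sentence classifier would average these out to roughly neutral; clause-level decomposition recovers the opposed opinions about the two topics.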
20. Search video as easily as text
Transform rich media into intelligent assets
– Live video or playback from archived footage
– On-screen text recognition
– Face identification
– Automatically generated transcript using speech recognition
– Speaker identification
– Timecode synchronization
– Automatic keyframe generation
Automate: automatically create metadata, keyframes, transcriptions
Understand: understand video footage and audio streams in real time
Act: apply advanced analytics such as clustering and categorization, and link with other file types
21. Image technology: 2D objects
Generic logo recognition: registered images and logos are matched against test images
22. Intuitive Knowledge Discovery for Self-Service Analytics
Visualization to simplify the analytics workflow: topics map, sunburst, result comparison, rich contextual view
Business Intelligence for Human Information (BIFHI)
24. Customer care, turbocharged
Customer Self Service via IDOL Search
Key Differentiators
– Automate more customer service with advanced features such as contextual search, automatic hyperlinking, implicit query, sentiment analysis, alerting, and chat agents
– Find and act on 100% of information, regardless of language, source or information format
– Scalably and securely access virtually all systems, including cloud repositories, with over 400 pre-built enterprise-class connectors
How Customer Would Deploy
IDOL-powered self-service web-based support using all available knowledge sources (knowledge base, contact center, forums, product reviews and more), with connectors to existing OSS, BSS, media solutions and network management as needed
How IDOL would drive competitive advantage for customer
– Reduced churn & improved CSAT due to enhanced automation of customer self-service and improved user experience
– SG&A cost reduction with a single system for internal and external support
Solutions like HPE Service Anywhere run IDOL Search to improve service quality and staff efficiency
25. Monitor social media to proactively address incidents and issues
Social Customer Service via IDOL
Key Differentiators
– Combine social and public data (Twitter, news, etc.) with customer data in the enterprise to gain insights
– Strong text analytics to synthesize and summarize large volumes of data: sentiment analytics, concept extraction, place-name extraction
How Customer Would Deploy
Deploy IDOL with connectors to various data sources.
How Social Customer Service would drive competitive advantage for customer
Tap into other sources of customer feedback for proactive and reactive resolution of service issues. Improve customer satisfaction, mitigate churn, identify upsell opportunities.
26. Build a knowledge graph of your organization and automate customer interaction
Workforce Productivity via IDOL Knowledge Management
Key Differentiators
– Automates manual customer care processes & actions
– Expertise location to team and deliver the best response
– Proactively deliver and manage relevant & timely data
How Customer Would Deploy
Deploy a comprehensive platform for customer interaction to automate a time-consuming, labor-intensive process.
How IDOL would drive competitive advantage for customer
– SG&A cost savings: increased customer satisfaction (decreased churn rate), decreased call center load, better understanding of your organization to align resources and eliminate inefficiency
– Increased revenue: redeploy resources to high-value customer value-add services
Building blocks: unstructured and structured data, collaboration, expertise location, categorization, Eduction, taxonomy
27. Analyze video, audio, images to support & drive the next wave of experience and monetization
Multimedia Analytics via IDOL Multimedia
Key Differentiators
– Automate: create metadata, key frames, transcriptions
– Understand: video and audio streams in real time
– Act: apply advanced analytics (cluster, categorize, link)
How Customer Would Deploy
In line with strategic next-wave value-added services, rich content, and services strategy
How IDOL Multimedia Analytics would drive competitive advantage for Customer
Drive next-wave content, publishing, and advertising/monetization (revenue enhancement)
– Value-added services to compete against OTT
– Content screening, moderation
– Ad verification
– Compliance
Capabilities: on-screen text recognition, multi-language video analytics, face detection, sentiment extraction, advanced IDOL analytics, speech-to-text, speaker identification; applied to new-age on-demand Internet video and audio
28. IDOL-powered Smart City Solution
Integration: integrate data feeds from across the city into a common command center for investigation and event monitoring
Analytics: add video, audio, and event analytics to the feeds to enable real-time monitoring for security trends and incidents
Data Fusion: complete the puzzle with additional information sources like social media, broadcast media monitoring, employee databases, etc.
Built-in Scalability: unlimited expansion and connectivity already included at all levels by design
Automation: streamlined workflow and automated process for triggers and alerts
30. China Mobile
Communications service provider industry
Challenge
– Allow users to access information on thousands of public services directly from their mobile phones; the success of the Wireless City platform depends on the users’ ability to quickly find information
Solution
– HPE IDOL
Result
– Over 740 million subscribers can search through more than 8,000 applications for public service information, including public transportation schedules, public health records, traffic offenses and more
– Users receive more accurate search results than ever before
– China Mobile customers get the most relevant and useful information regardless of the terms they use in the search
Private | Confidential | Internal Use Only 30
31. Leading American multinational telecom
Paying careful attention to every aspect of customer-facing processes and applications
Challenge
– Provide support desk staff with fast access to the precise information required to address a customer’s problem
– Improve knowledge management system search capabilities
Solution
– HPE IDOL
– HPE Big Data Professional Services
Result
– Reduced time-to-resolution, with fast queries that ensure support experts can resolve customer issues quickly
– Relevant results, as query functionality ensures that results deliver the information most likely to resolve customer issues
32. Leading financial software, data and media company
Subscribers require up-to-the-second information on market conditions and trends
Challenge
– Deliver search performance at the scale required by the size of its data repository: 200 million messages and 15-20 million chats daily
– Provide a robust, cost-efficient solution with scalability for a large and growing volume of data, supported by a small IT headcount
Solution
– HPE IDOL
– HPE Big Data Professional Services
Result
– Detects trends in real-time messaging and chats for subscribers
– Accommodates 10+ billion document entries without compromising performance today
– Ensures scalability delivers ROI in the future
33. HPE on HPE CX Analytics
Answering critical customer satisfaction issues better and faster
Challenge
– Pull all customer-related data into a centralized repository
– Create a set of analytics services its business units can use to improve the company’s Net Promoter Score® (NPS)
Solution
– HPE IDOL Information Analytics and HPE Vertica Analytics Platform
– Tableau
– Hadoop
Result
– Maximize the value of customer experience data to improve customer satisfaction
– Provide a current, comprehensive snapshot of customer experience metrics; answer complex questions in 5-10 minutes
– Generate a 360-degree view of the HPE customer experience
34. Dept. of State Development, Business and Innovation
Public Sector – Victoria, Australia
Challenge
– Provide a single, secure, enterprise-wide search platform across multiple information sources inside and outside the organization
– Locate information from different information sources such as HPE TRIM, the DSDBI Intranet, shared network drives, Salesforce and external sites such as Hansard, the Australian Bureau of Statistics, Victoria Online and other websites
Solution
– HPE IDOL, Microsearch Portlet, Microsearch consulting services
Result
– Easily and quickly find relevant information with near real-time search across millions of documents and 9 enterprise and Internet content sources, leading to significant time savings
– Single sign-on allows filtered results, preventing the inadvertent disclosure of sensitive information
35. Global health services
Robust search technology supports the health services needs of 80 million customers worldwide
Challenge
– Detect the meaning of data even when it didn’t conform to a specific standard, e.g. physician, MD, doctor, or Dr.
– Fast query results to support a positive customer experience
Solution
– HPE IDOL
Result
– Customers can quickly identify providers that meet their needs for specialty, location and other important criteria
– Solution supports business and fiscal objectives with lower-cost in-network providers
– Scalability maximizes ROI over time
36. Fortune 500 global diversified healthcare company
Claims data
– Provider information
– FWA recovery data
– Call center data
– Treatment/Service data
– Social media
Innovation focus
– Population and community health
– Care management/Care coordination
– Surveillance, analysis, product development innovation
– Consumer activation/Engagement/Education
– Reputation management/Outreach
Lines of business
– Innovation
– Brand
– Care delivery
– Product development
– Payment integrity
– Provider
– Consumer activation