Qlik's integration with Salesforce is unique, powerful, and differentiating. This presentation provides insight into the options available to users of the Qlik platform.
API monetization extends beyond simply selling an API to realize new revenue streams. Monetization enables multi-pronged business relationships, for example, among a SaaS provider, an industry-focused ISV, and a SaaS customer. Learn more about how Oracle is helping customers in the Hospitality industry realize the true value of APIs. In this talk, we will also cover some future capabilities that will help enterprises monetize their APIs for new revenue streams and gain insight into the value all their APIs provide.
[WSO2 Summit EMEA 2020] Building an Interactive API Marketplace - WSO2
In an API-driven world, consumers want to discover APIs while producers seek to list their APIs and API products in a thriving API ecosystem.
The primary goal of an API marketplace is to create a platform that supports this high-intensity interaction seamlessly while also transforming the technical relationship between a consumer and a producer into a natural business experience, one that happens between a buyer and a seller.
This session will present the capabilities of an API marketplace, how it can be used for all APIs at different levels in an organization, and how it easily falls into place in an Integrated API Supply Chain.
Watch the session on-demand here: https://wso2.com/library/summit-2020/emea/building-an-interactive-api-marketplace/
SysML v2 and the Next Generation of Modeling Languages - Ed Seidewitz
The Systems Modeling Language (SysML) is a particularly successful offshoot of the Unified Modeling Language (UML) tailored for Model-Based Systems Engineering. After a decade of growing use of SysML, in 2017 the Object Management Group (OMG) issued a Request for Proposals (RFP) for a new version of the language. A year into the ongoing work to respond to this RFP, it is clear that SysML v2 needs to be more than just an expansion of the functional capabilities of SysML. Rather, it must address fundamental architectural issues that have made it difficult to further evolve SysML v1 to address the needs of its user community. Therefore, the language is being re-designed using a new kernel metamodel with formally grounded semantics. This kernel can then be extended using semantic model libraries, rather than by expanding the language metamodel itself. This approach will allow SysML v2 to be not only the modeling language for traditional systems engineering, but also the foundation for a whole new generation of modeling languages.
Scaling Databricks to Run Data and ML Workloads on Millions of VMs - Matei Zaharia
Keynote at Scale By The Bay 2020.
Cloud service developers need to handle massive scale workloads from thousands of customers with no downtime or regressions. In this talk, I’ll present our experience building a very large-scale cloud service at Databricks, which provides a data and ML platform service used by many of the largest enterprises in the world. Databricks manages millions of cloud VMs that process exabytes of data per day for interactive, streaming and batch production applications. This means that our control plane has to handle a wide range of workload patterns and cloud issues such as outages. We will describe how we built our control plane for Databricks using Scala services and open source infrastructure such as Kubernetes, Envoy, and Prometheus, and various design patterns and engineering processes that we learned along the way. In addition, I’ll describe how we have adapted data analytics systems themselves to improve reliability and manageability in the cloud, such as creating an ACID storage system that is as reliable as the underlying cloud object store (Delta Lake) and adding autoscaling and auto-shutdown features for Apache Spark.
A preview of SQL Server 2019 from Bob's, Asad's, and my presentation at PASS Summit 2018 (Nov '18). We provided insights into what our public preview builds of SQL Server 2019 included in November.
Running Big Data projects has never been easier. With AWS, you can run Hadoop, Spark, Hive, Flink, and similar frameworks faster and more cost-effectively. In this webinar, you will learn how to improve data processing performance and reduce costs, especially compared with an on-premises environment.
ETL Made Easy with Azure Data Factory and Azure Databricks - Databricks
Data Engineers are responsible for data cleansing, prepping, aggregating, and loading analytical data stores, which is often difficult and time-consuming. Azure Data Factory makes this work easy and expedites solution development. We’ll demonstrate how Azure Data Factory can enable a new UI-driven ETL design paradigm on top of Azure Databricks for building scaled-out data transformation pipelines.
APIs have revolutionized how companies build new marketing channels, access new customers, and create ecosystems. Enabling all this requires the exposure of APIs to a broad range of partners and developers—and potential threats.
Learn more about the latest API security issues.
Making the Case for Integration Platform as a Service (iPaaS) - Axway
Running your business likely involves a triple-digit number of applications, a double-digit number of unique data sources, and a complex mix of deployment models.
An iPaaS (Integration Platform as a Service) is designed to address these challenges by providing the technical means to manage APIs, orchestrate services, and process events. Research shows that cloud-based solutions, such as an integration platform as a service, are currently favored over on-premises solutions by a factor of nearly 3 to 1.
Aberdeen research fellow Derek Brink is revealing new findings on how iPaaS is changing data integration management for some of the most successful companies.
Reltio: Powering Enterprise Data-driven Applications with Cassandra - DataStax Academy
Cassandra's flexibility and scalability make it an ideal foundation for a modern data management architecture. Come hear how Reltio is using Cassandra, in combination with graph technologies and Spark to deliver a new breed of data-driven applications.
In this presentation you'll find out:
- How we ended up selecting Cassandra
- The unique characteristics of data-driven applications
- The best practices we learned by combining Cassandra, graph technology, Spark and more
Confluent Operator as Cloud-Native Kafka Operator for Kubernetes - Kai Wähner
Agenda:
- Cloud Native vs. SaaS / Serverless Kafka
- The Emergence of Kubernetes
- Kafka on K8s Deployment Challenges
- Confluent Operator as Kafka Operator
- Q&A
Confluent Operator enables you to:
- Provision, manage, and operate Confluent Platform (including ZooKeeper, Apache Kafka, Kafka Connect, KSQL, Schema Registry, REST Proxy, and Control Center)
- Deploy on any Kubernetes platform (vanilla K8s, OpenShift, Rancher, Mesosphere, Cloud Foundry, Amazon EKS, Azure AKS, Google GKE, etc.)
- Automate provisioning of Kafka pods in minutes
- Monitor SLAs through Confluent Control Center or Prometheus
- Scale Kafka elastically, handle fail-over, and automate rolling updates
- Automate security configuration
It is built on Confluent's first-hand knowledge of running Confluent at scale and is fully supported for production usage.
A Collaborative Data Science Development Workflow - Databricks
Collaborative data science workflows have several moving parts, and many organizations struggle with developing an efficient and scalable process. Our solution consists of data scientists individually building and testing Kedro pipelines and measuring performance using MLflow tracking. Once a strong solution is created, the candidate pipeline is trained on cloud-agnostic, GPU-enabled containers. If this pipeline is production worthy, the resulting model is served to a production application through MLflow.
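The promotion step in this workflow (log each candidate pipeline's metrics, then pick the strongest run) can be sketched with a minimal, stdlib-only stand-in for MLflow-style tracking; the run names and metrics here are hypothetical:

```python
import json
import tempfile
import time
from pathlib import Path

class RunTracker:
    """Minimal stand-in for MLflow-style experiment tracking:
    each run logs params and metrics to a JSON file."""

    def __init__(self, root):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def log_run(self, run_id, params, metrics):
        record = {"run_id": run_id, "params": params,
                  "metrics": metrics, "ts": time.time()}
        (self.root / f"{run_id}.json").write_text(json.dumps(record))
        return record

    def best_run(self, metric):
        """Pick the candidate pipeline with the highest score."""
        runs = [json.loads(p.read_text()) for p in self.root.glob("*.json")]
        return max(runs, key=lambda r: r["metrics"][metric])

# Each data scientist logs their candidate pipeline's performance...
tracker = RunTracker(tempfile.mkdtemp())
tracker.log_run("kedro_v1", {"lr": 0.1}, {"auc": 0.81})
tracker.log_run("kedro_v2", {"lr": 0.01}, {"auc": 0.86})
# ...and the strongest candidate is promoted for full training.
print(tracker.best_run("auc")["run_id"])  # -> kedro_v2
```

A real setup would use MLflow's tracking server rather than local JSON files, but the shape of the workflow is the same: log every candidate, compare on a shared metric, promote the winner.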
A brief history of Blinkist backend time. I gave this talk at the Adidas Developer conference in Erlangen. The Adidas developer community is awesome! Go check out their GitHub repositories for all the projects they've contributed: https://github.com/adidas
This training camp teaches you how FIWARE technologies and iSHARE, brought together under the umbrella of the i4Trust initiative, can be combined to create data spaces in which multiple organizations exchange digital twin data in a trusted and efficient manner, collaborate on innovative services based on data sharing, and create value from the data they share. SMEs and Digital Innovation Hubs (DIHs) will be equipped with the know-how needed to use the i4Trust framework for creating data spaces.
QlikView is the Business Answers Company. We offer a whole new class of business intelligence software: one that puts business users in control and lets them explore their data with unprecedented freedom, so they get the answers they need and can take action, now.
QlikView delivers answers for businesses of every size and kind, from one-person shops to multinational organizations. Some of the world’s largest institutions – with billions of records – rely on QlikView. QlikView accesses all your data, so your analysis is complete and thorough. No limits. No half-way answers. No blocked views. And it easily integrates with your existing systems. No need to rip or replace anything.
Hub16: VMware - Enabling business modeling and sales planning with Anaplan - Anaplan
Gartner estimates that enterprises miss the equivalent of 5 to 10 percent of annual sales as lost opportunities, which could have been captured with improved overall sales performance management. VMware is driving an initiative to increase sales performance by building process automation and data analytics capabilities to improve accuracy and efficiency in hitting quota targets through predictable sales planning. The initiative has three main components: go-to-market planning, sales planning, and quota planning. By establishing clear operating practices, defining globally consistent business processes and policies, and creating common data insights, VMware drove significant efficiencies in the overall planning cycle and enabled better quota accuracy. By leveraging the Anaplan platform, VMware has developed a multi-dimensional modeling capability that brings together business rules and data from different sources. The capability is helping sales leadership by streamlining the planning process, performing business modeling/"what-if" analyses, and analyzing data using different dimensions such as customer hierarchy/accounts, sales segments, and products. The framework has also enabled faster and more effective collaboration between Sales and Finance teams, from the executive to the regional manager level, with all participants having a single view of the truth.
Watch the full webinar here: https://buff.ly/2R4JjBX
Organizations today are data rich and insights poor. There is data everywhere: ERP systems, CRM systems, external data, data lakes and ponds. The real question to ask is "Are the users getting the insights they need, when they need them, where they need them, to drive successful business outcomes?" Data integration is a core pillar of the "data to value" journey. In this session you will hear how enterprises across industries are grappling with data and insights challenges, and how organizations have adopted data virtualization to accelerate their "data to value" journeys.
Watch this Denodo DataFest 2018 session to learn:
- How to reduce the effort to get from data to value
- How to gain faster time to insights
- How to reduce the overall cost of ownership
Business objectives that analytics can achieve include resource allocation, customer segmentation, competitive benchmarking, and customer-facing initiatives.
Speaker: Shailender Mathur, SVP, Progressive
Driving Digital Transformation with Machine Learning in Oracle Analytics - Perficient, Inc.
The adoption of machine learning (ML) is increasing at breakneck speed. As organizations seek innovative ideas on how to improve the business, Oracle Analytics Cloud with ML capabilities is leading the charge. With drag-and-drop functions built into visualizations and autonomous prediction execution, Oracle Analytics puts the power of machine learning in your hands.
We covered how Oracle Analytics can connect various data sources, allow you to apply ML without being statistically savvy, and easily build your story in presentation format.
Discussion included:
-In-depth look at Oracle Analytics Cloud
-How to connect different data sources like SaaS applications, data lakes, external data sources and more
-Custom-trained ML models demonstration
-Real-world business use case from end to end
Read how Synoptek has proven to be an excellent partner for companies looking to streamline their business processes and improve their finance and operations.
Assessing New Databases: Translytical Use Cases - DATAVERSITY
Organizations run their day-in-and-day-out businesses with transactional applications and databases. On the other hand, organizations glean insights and make critical decisions using analytical databases and business intelligence tools.
The transactional workloads are relegated to database engines designed and tuned for high transactional throughput. Meanwhile, the big data generated by all these transactions requires analytics platforms to load, store, and analyze volumes of data at high speed, providing timely insights to businesses.
Thus, in conventional information architectures, this requires two different database architectures and platforms: online transactional processing (OLTP) platforms to handle transactional workloads and online analytical processing (OLAP) engines to perform analytics and reporting.
Today, a particular focus and interest of operational analytics includes streaming data ingest and analysis in real time. Some refer to operational analytics as hybrid transaction/analytical processing (HTAP), translytical, or hybrid operational analytic processing (HOAP). We’ll address if this model is a way to create efficiencies in our environments.
The idea is the modernization of all the Dynamics products using the power of cloud and local services. But Microsoft's vision is a bit more complex: Dynamics CRM was already part of Office 365, and in this session you will learn how Microsoft is planning to rebuild a complete strategy on top of xRM and people relationships, such as those with customers and partners.
Krypt Visibility: Reporting and Analytics for your Supply Chain - Krypt, Inc.
Krypt Visibility (VISCOR) is Krypt's proprietary solution for an integrated view on your critical KPIs within your supply chain. Get alerts and ready-to-consume reports. Krypt Visibility provides a pre-built platform for your organization where data can be aggregated and leveraged.
Data Literacy and its Implications for Society - Paul Van Siclen
Addressing the importance and imperative for everyone to become data literate for the future of work for both individuals and for organizations, Dr. Borne will cover five major themes: data awareness (what is it?), data relevance (why me?), data literacy (show me how), data science (where's the science?), and the data imperative (create and do something with data). Data permeates our daily lives through all conceivable digital technologies, handheld devices, business activities, and personal activities. Through data, the world is computable. The focus is not on the mathematics, the algorithms, or the engineering. Instead, the focus is on demonstrating that data science is universally appealing, data literacy is accessible, and data fluency is achievable for all. The democratization of data assets and data literacy is essential for all. Data Literacy is not a math skill -- it is a life skill.
The data flow through your Qlik solution is no longer a one-way street. Users enter feedback directly into published Qlik apps and the new information is immediately available for all users to see. Collaboration at its finest! Come and see how the writeback process works and other exciting use cases – from closing the feedback loop on machine learning through to tactical master data management.
This session will take a deep dive into the evolution of data and analytics solutions in the Insurance industry. Then we will bring it right up to date with examples of how Qlik’s modern analytics platform can transform the way you can think about using data to benefit you, your colleagues, your partners and most importantly your customers.
Often in the race to deliver governed and secure mobile analytics, the user experience becomes a victim. As a user, you should not have to choose between the two. Have a look at how Qlik Sense Mobile delivers a best-in-class user experience on any device, any time, while also keeping your data secure to help you meet your business’s compliance requirements.
So why hasn’t everyone already moved to the Cloud? Why hasn’t everyone already transformed into a data-driven organization? What obstacles are standing in the way? How should organizations get started on their journey? Financial institutions are quickly embracing the speed and agility that a cloud-based digital transformation can provide. This session will provide an overview of how retail banking, investment banking, and insurance can remove obstacles and launch a successful analytics journey to the cloud.
Ubiquitous data does not always translate to actionable data, though most financial institutions have a treasure trove of data they are moving to the cloud and could be using today. The potential is huge, but most struggle just to make actionable data available, let alone turn it into business value at scale. This session will highlight some of the key use cases and technologies that provide the greatest returns and organizational impact.
Dive into the world of Server-Side Extensions with Qlik, exploring examples and architectures with Python and R. This session includes examples of sentiment analysis, time-series forecasting, churn predictions, real-time routing, and much more. Whether you are a data scientist or a data analyst, this session will be useful for both sides of the house.
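Real Qlik server-side extensions exchange data with the engine over gRPC; as a hedged illustration of the kind of function such an extension might expose for sentiment analysis, here is a toy lexicon-based scorer. The lexicon and scoring rule are invented for the example, and a production extension would wrap a proper model:

```python
# Tiny illustrative word lists; a real SSE would use a trained model.
POSITIVE = {"great", "good", "love", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "slow"}

def sentiment(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

print(sentiment("Great support, great product!"))  # positive score
print(sentiment("Slow and terrible service."))     # negative score
```

In an actual deployment, Qlik would call a function like this row by row from a chart expression, with the extension host handling the gRPC plumbing.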
In this session you will learn how Qlik’s Data Integration platform (formerly Attunity) reduces time to market and time to insights for modern data architectures through real-time automated pipelines for data warehouse and data lake initiatives. Hear how pipeline automation has impacted large financial services organizations ability to rapidly deliver value and see how to build an automated near real-time pipeline to efficiently load and transform data into a Snowflake data warehouse on AWS in under 10 minutes.
Quantitative Data Analysis: Reliability Analysis (Cronbach Alpha), Common Method Bias
Quantitative data Analysis
Overview
Reliability Analysis (Cronbach Alpha)
Common Method Bias (Harman Single Factor Test)
Frequency Analysis (Demographic)
Descriptive Analysis
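Of the analyses listed above, Cronbach's alpha is easy to show concretely: alpha = k/(k-1) * (1 - sum of item variances / variance of respondent totals). A small sketch using only the standard library, with illustrative survey data:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.
    items: list of k lists, one list of scores per survey item,
    each with one score per respondent (sample variances used)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three survey items answered by five respondents (invented data).
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(items), 3))  # -> 0.864
```

A value above roughly 0.7 is conventionally read as acceptable reliability; perfectly correlated items would give exactly 1.0.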
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
https://www.meetup.com/unstructured-data-meetup-new-york/
This meetup is for people working in unstructured data. Speakers will present on related topics such as vector databases, LLMs, and managing data at scale. The intended audience of this group includes roles like machine learning engineers, data scientists, data engineers, software engineers, and PMs. This meetup was formerly the Milvus Meetup and is sponsored by Zilliz, maintainers of Milvus.
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. for more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Slide 3: The Associative Difference®

The Qlik Associative Engine:
✓ All your data
✓ Explore without boundaries
✓ Speed of thought
✓ Unexpected insights

vs. Query-Based Tools:
x Partial subsets of data
x Restricted linear exploration
x Slow performance
x “Ask, wait, answer” cycle
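The associative model can be sketched as a toy: a selection in one field yields both the associated and the excluded values in every other field, rather than just a filtered subset as a query-based tool would return. The tables, fields, and values below are invented:

```python
# Toy associative selection: tables stay linked by shared fields, and
# selecting a value surfaces what is associated AND what is excluded.
orders = [
    {"order": 1, "customer": "Acme",   "product": "Widget"},
    {"order": 2, "customer": "Acme",   "product": "Gadget"},
    {"order": 3, "customer": "Globex", "product": "Gizmo"},
]

def select(rows, field, value):
    """Return (associated, excluded) value sets for every field."""
    matching = [r for r in rows if r[field] == value]
    associated = {f: {r[f] for r in matching} for f in rows[0]}
    excluded = {f: {r[f] for r in rows} - associated[f] for f in rows[0]}
    return associated, excluded

assoc, excl = select(orders, "customer", "Acme")
print(sorted(assoc["product"]))   # -> ['Gadget', 'Widget']
print(sorted(excl["customer"]))   # -> ['Globex']
```

The "unexpected insights" pitch comes from the excluded set: a query-based tool silently drops Globex, while the associative view keeps it visible as explicitly not associated with the current selection.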
Slide 4: What is the new Qlik platform?

A modern analytics platform providing options to enable the entire Data, Analytics, and Decisions supply chain, from extracting raw data to insights and decisions. Visualization alone is insufficient.

Data:
- Data Manager (ETL from Publishing, Big Data Index, and other data sources)
- Data Lake Pipeline (orchestration, automation, analytics-ready datasets)
- Data Catalog (profiling, lineage, governance and provisioning, validation, audit)
- Marketplace/Publishing (storefront, shopping cart/checkout)
- Change Data Capture (real-time streaming, replication, cloud delivery)

Analytics:
- Data Science (Cognitive Engine, Insights Advisor, and integration with Python/R/DataRobot)
- Functions/Expressions (statistics, TVOM, text, variables, Set Analysis)
- Geo Spatial (full library of geo calculations, mapping)
- Associative Engine (in-memory, “full outer join”, search, open APIs)
- Reporting (PDF, alerting, Office integration)
- Mobile first (online and offline, BYOD, EMM integration)
- Custom applications (containers, chatbots, OEM, APIs)
- Governed self-service and guided dashboards
- Embedded analytics (third-party integration, mashups, APIs)
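Of the platform components above, change data capture lends itself to a small sketch: a source emits a change log of insert/update/delete events, and a target replica applies them in order. Real CDC tools read the database's own transaction log; this only illustrates the apply loop, with invented events:

```python
# A toy change log: each event describes one row-level change.
change_log = [
    {"op": "insert", "key": "a1", "row": {"status": "open"}},
    {"op": "update", "key": "a1", "row": {"status": "closed"}},
    {"op": "insert", "key": "a2", "row": {"status": "open"}},
    {"op": "delete", "key": "a1"},
]

def apply_changes(target, log):
    """Apply a stream of CDC events to a target keyed by primary key."""
    for event in log:
        if event["op"] == "delete":
            target.pop(event["key"], None)
        else:  # insert and update both upsert the row
            target[event["key"]] = event["row"]
    return target

replica = apply_changes({}, change_log)
print(replica)  # -> {'a2': {'status': 'open'}}
```

Ordering matters: replaying the same log always converges to the same replica state, which is what lets CDC keep a cloud target in step with an operational source without bulk reloads.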
Slide 6: Sales Performance Results with Qlik

- $100 million added revenue from monitoring and quickly addressing sales support & service contracts
- $4M in cost savings attributed to Qlik
- 96% year-over-year increase in sales productivity
- 12 weeks for entire deployment
- £1+ million saved through stock loss and sales analysis
- 70%+ of sales staff use Qlik every day to review performance
- 5,000+ branch sales users deployed in under 6 months
- Big Data: 4.3 million customers with 260 million agreements and 800 billion cells analyzed in Qlik
- 99% reduction in time spent on sales-related reporting
- Reduced audit time from 3 weeks to 2-3 days
- 17% sales growth over 2 years attributed to Qlik
- $400,000 savings achieved over a period of two years
- 50% improvement in customer response times
Slide 8: What is Qlik Sense Integration with SFDC?

- Salesforce Connector (data)
- SAML Authentication & Section Access (security)
- Embedding with the Single API or Capabilities APIs (integration)
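As a hedged sketch of what the data-connector side involves, the snippet below builds a SOQL query URL for the Salesforce REST API query endpoint. The endpoint path pattern is Salesforce's documented one, but the instance URL, API version, and field list are placeholders, and a real connector also handles OAuth and result pagination:

```python
from urllib.parse import urlencode

def soql_query_url(instance_url, api_version, fields, sobject, limit=None):
    """Build the REST URL for a SOQL query against one Salesforce object."""
    soql = f"SELECT {', '.join(fields)} FROM {sobject}"
    if limit:
        soql += f" LIMIT {limit}"
    return (f"{instance_url}/services/data/v{api_version}/query?"
            + urlencode({"q": soql}))

url = soql_query_url("https://example.my.salesforce.com", "57.0",
                     ["Id", "Name", "Amount", "StageName"],
                     "Opportunity", limit=100)
print(url)
```

The result is a GET request a connector could issue with a bearer token to pull Opportunity rows for loading into a Qlik app.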
Slide 13: Associate All of Your Data

- Opportunities
- Contacts
- Accounts
- Quota
- Calendar
- Demographic Data
- Weather Data
- Shipping Data
- HR Data
- Web Traffic Data
- Financial Data
- Expense Data
- Marketing Data
Slide 16: Qlik Application using the Salesforce Data Model

- Recreate the Salesforce data model as needed for the use case
- Combine Salesforce data with other tech (Marketo, DataHug, ERP, Geo)
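Combining the Salesforce model with other systems ultimately comes down to key-based association, which the load script performs on shared field names. A toy sketch of the same idea in plain Python, with invented records and an invented ERP lookup:

```python
# Enrich Salesforce-style opportunity records with a field from
# another system, joined on a shared key (here, the account name).
opportunities = [
    {"account": "Acme",   "amount": 50000, "stage": "Negotiation"},
    {"account": "Globex", "amount": 12000, "stage": "Prospecting"},
]
erp_credit = {"Acme": "A", "Globex": "C"}  # e.g. credit rating from ERP

enriched = [
    {**opp, "credit_rating": erp_credit.get(opp["account"])}
    for opp in opportunities
]
print(enriched[0]["credit_rating"])  # -> A
```

In Qlik the same association happens automatically when two loaded tables share a field name, so the ERP, Marketo, or geo data simply needs a key that matches the recreated Salesforce model.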