SQL Analytics provides data visualization and dashboarding capabilities on data lakes to help analysts, sales executives, marketing teams, and finance departments make more data-driven decisions. It allows users to build live visualizations and dashboards from their data, analyze it using SQL queries for reports or alerts, and connect to over 40 different data sources. Key features include a simple query editor, browsing table schemas, autocomplete functions, secure collaboration through shared dashboards and queries, and automation through alerts and parameterized queries and dashboards.
This document discusses 7 emerging trends in data engineering: 1) Data discovery and metadata management using open source tools like Amundsen and Marquez. 2) Data mesh and domain ownership. 3) Data observability using tools like dbt, Great Expectations, and Dagster. 4) Data lakehouse using Apache Iceberg and Delta Lake. 5) Modern data stacks using tools for extraction, transformation, data warehouses, governance, and BI. 6) Industrialized machine learning using frameworks like TensorFlow and PyTorch. 7) Prioritizing diversity, privacy, and AI ethics through techniques like explainable AI and privacy-preserving modeling.
The document discusses evolving data warehousing strategies and architecture options for implementing a modern data warehousing environment. It begins by describing traditional data warehouses and their limitations, such as lack of timeliness, flexibility, quality, and findability of data. It then discusses how data warehouses are evolving to be more modern by handling all types and sources of data, providing real-time access and self-service capabilities for users, and utilizing technologies like Hadoop and the cloud. Key aspects of a modern data warehouse architecture include the integration of data lakes, machine learning, streaming data, and offering a variety of deployment options. The document also covers data lake objectives, challenges, and implementation options for storing and analyzing large amounts of diverse data sources.
The document discusses the responsibilities of an Enterprise Data Architect, including defining vision/strategy for data management, standards, governance, modeling, and more. It lists key tasks like implementing data strategies/roadmaps, models, and governance frameworks. The architect must understand how data is used and mitigate risks. Relevant domains include data strategy/governance, modeling, store definition, analysis, and content management. The architect must also track emerging solutions/topics and possess skills like strategy analysis, communication, and leadership.
Datasaturday Pordenone: Azure Purview (Erwin de Kreuk)
Azure Purview is Microsoft's solution for unified data governance. It includes three main components:
1. The Purview Data Map automates metadata scanning and lineage identification across hybrid data stores and applies over 100 classifiers and Microsoft sensitivity labels.
2. The Purview Data Catalog enables effortless discovery through semantic search and a business glossary, and shows data lineage with sources, owners, and transformations.
3. Purview Insights provides reports on assets, scans, the glossary, classification, and sensitive data labeling to give visibility into data usage across the estate.
How to Build a Rock-Solid Analytics and Business Intelligence Strategy (SAP Analytics)
http://spr.ly/SBOUC_VP - The key to a successful analytics program is to have the right strategy in place. An effective approach benefits both IT and the core business alike. A solid, well-communicated business intelligence strategy is more than just a good idea. It’s crucial to maximizing ROI, reaching KPIs, and identifying metrics that actually mean something. Take the next step in your journey to a solid BI strategy.
Presenters: Deepa Sankar & Pat Saporito, SAP
Dustin Vannoy presented on using Delta Lake with Azure Databricks. He began with an introduction to Spark and Databricks, demonstrating how to set up a workspace. He then discussed limitations of Spark, including lack of ACID compliance and the small-file problem. Delta Lake addresses these issues with a transaction log that enables ACID transactions, schema enforcement, automatic file compaction, and other performance optimizations, plus time travel. The presentation included demos of Delta Lake capabilities such as schema validation, merging, and querying past versions of data.
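The merge and time-travel capabilities demoed in the talk can be sketched in PySpark roughly as follows; the table path, column names, and sample rows are hypothetical, and the snippet assumes a Databricks runtime or the delta-spark package plus an existing `spark` session:

```python
from delta.tables import DeltaTable

# Hypothetical upsert: merge a batch of updates into an existing Delta table.
updates_df = spark.createDataFrame(
    [("c001", "gold"), ("c002", "silver")], ["customer_id", "tier"]
)
target = DeltaTable.forPath(spark, "/mnt/lake/customers")
(target.alias("t")
 .merge(updates_df.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())

# Time travel: read the table as of an earlier version recorded in the transaction log.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/lake/customers")
v0.show()
```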
Apache Spark is a fast and general engine for large-scale data processing. It was created at UC Berkeley and is now a dominant framework in big data. Spark can run programs over 100x faster than Hadoop MapReduce in memory, or more than 10x faster on disk. It supports Scala, Java, Python, and R. Databricks provides a Spark platform on Azure that is optimized for performance and integrates tightly with other Azure services. Key benefits of Databricks on Azure include security, ease of use, data access, high performance, and the ability to solve complex analytics problems.
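As a point of reference, a minimal PySpark program looks like the sketch below; the sample data is invented, and on Azure Databricks the `spark` session is pre-created so the builder line can be dropped:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()

# Build a small DataFrame in memory and run a distributed aggregation.
df = spark.createDataFrame(
    [("alice", 34), ("bob", 36), ("alice", 30)],
    ["name", "value"],
)
df.groupBy("name").agg(F.avg("value").alias("avg_value")).show()
```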
This framework helps organizations align data strategy with business strategy so they can prioritize goals around the most pressing operational needs. It introduces a Data Management & Data Ability Maturity Matrix that visualizes the core path of business digital transformation in a way that is easy to understand and follow. And it provides a standard implementation template flexible enough to be applied across different industries.
This document provides three versions, A, B, and C, of a three-layer design diagram, each protected by copyright for Snowflake Computing.
Tekslate.com is an industry leader in providing Informatica Data Quality training across the globe. Our online training methodology focuses on hands-on experience with Informatica Data Quality.
Data Warehousing in the Cloud: Practical Migration Strategies (SnapLogic)
Dave Wells of Eckerson Group discusses why cloud data warehousing has become popular, the many benefits, and the corresponding challenges. Migrating an existing data warehouse to the cloud is a complex process of moving schema, data, and ETL. The complexity increases when architectural modernization, restructuring of database schema, or rebuilding of data pipelines is needed.
SAS Fraud Framework for Insurance, an end-to-end solution for preventing, detecting and managing claims fraud across the various lines of business within today's insurers
Master Data Management's Place in the Data Governance Landscape (CCG)
This document provides an overview of master data management and how it relates to data governance. It defines key concepts like master data, reference data, and different master data management architectural models. It discusses how master data management aligns with and supports data governance objectives. Specifically, it notes that MDM should not be implemented without formal data quality and governance programs already in place. It also explains how various data governance functions like ownership, policies and standards apply to master data.
Delta Lake brings reliability, performance, and security to data lakes. It provides ACID transactions, schema enforcement, and unified handling of batch and streaming data to make data lakes more reliable. Delta Lake also features lightning-fast query performance through its optimized Delta Engine. It enables security and compliance at scale through access controls and versioning of data. Delta Lake further offers an open approach and avoids vendor lock-in by using open formats like Parquet that can integrate with various ecosystems.
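Schema enforcement, one of the reliability features listed above, can be seen in a short sketch like the following; the path and schemas are hypothetical, and a `spark` session with delta-spark is assumed:

```python
# Write an initial Delta table with a fixed schema.
df1 = spark.createDataFrame([(1, "ok")], ["id", "status"])
df1.write.format("delta").save("/tmp/delta/events")

# An append with a mismatched schema is rejected instead of silently corrupting the table.
df2 = spark.createDataFrame([(2, "ok", 0.5)], ["id", "status", "score"])
try:
    df2.write.format("delta").mode("append").save("/tmp/delta/events")
except Exception as e:
    print("Schema enforcement blocked the append:", type(e).__name__)

# Evolving the schema requires an explicit opt-in.
df2.write.format("delta").mode("append") \
    .option("mergeSchema", "true").save("/tmp/delta/events")
```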
Collibra Data Citizen '19 - Bridging Data Privacy with Data Governance (BigID Inc)
This presentation was shown at the 2019 Collibra Data Citizen Event in New York City.
Presented by Nimrod Vax, Chief Product Officer & Co-Founder, and Joaquin Sufuentes, Lead Architect, Metadata Management and Personal Information Protection, Enterprise Data Management, Intel IT
Considerations for Data Access in the Lakehouse (Databricks)
Organizations are increasingly exploring lakehouse architectures with Databricks to combine the best of data lakes and data warehouses. Databricks SQL Analytics introduces new innovation on the “house” to deliver data warehousing performance with the flexibility of data lakes. The lakehouse supports a diverse set of use cases and workloads that require distinct considerations for data access. On the lake side, tables with sensitive data require fine-grained access controls that are enforced across the raw data and the derivative data products created via feature engineering or transformations. On the house side, tables can require fine-grained data access such as row-level segmentation for data sharing, plus additional transformations using analytics engineering tools. On the consumption side, there are additional considerations for managing access from popular BI tools such as Tableau, Power BI, or Looker.
The product team at Immuta, a Databricks partner, will share their experience building data access governance solutions for lakehouse architectures across different data lake and warehouse platforms to show how to set up data access for common scenarios for Databricks teams new to SQL Analytics.
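As one concrete illustration of row-level segmentation on the house side, Databricks dynamic views can gate rows by group membership. This is a minimal sketch, not Immuta's product approach; the table, view, and group names are invented, and it assumes table access control is enabled on the workspace:

```python
# A dynamic view that segments rows by workspace group membership.
# is_member() is evaluated for the querying user at read time.
spark.sql("""
    CREATE OR REPLACE VIEW sales_segmented AS
    SELECT order_id, region, amount
    FROM sales_raw
    WHERE CASE
        WHEN is_member('global_analysts') THEN TRUE
        WHEN is_member('emea_analysts') THEN region = 'EMEA'
        ELSE FALSE
    END
""")
```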
Introducing Snowflake, an elastic data warehouse delivered as a service in the cloud. It aims to simplify data warehousing by removing the need for customers to manage infrastructure, scaling, and tuning. Snowflake uses a multi-cluster architecture to provide elastic scaling of storage, compute, and concurrency. It can bring together structured and semi-structured data for analysis without requiring data transformation. Customers have seen significant improvements in performance, cost savings, and the ability to add new workloads compared to traditional on-premises data warehousing solutions.
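For instance, Snowflake can query semi-structured JSON in place through its VARIANT type. A minimal sketch with the Python connector follows; the connection parameters, table, and column are placeholders:

```python
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...", warehouse="my_wh"
)
cur = conn.cursor()
# events.payload is assumed to be a VARIANT column holding raw JSON;
# path syntax drills into it without any upfront transformation.
cur.execute("""
    SELECT payload:device:type::string AS device_type, COUNT(*) AS n
    FROM events
    GROUP BY 1
""")
for device_type, n in cur.fetchall():
    print(device_type, n)
```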
Doug Bateman, a principal data engineering instructor at Databricks, presented on how to build a Lakehouse architecture. He began by introducing himself and his background. He then laid out the session's goals: describing key Lakehouse features, explaining how Delta Lake enables them, and developing a sample Lakehouse using Databricks. The key aspects of a Lakehouse are that it supports diverse data types and workloads while allowing BI tools to be used directly on source data. Delta Lake provides reliability, consistency, and performance through its ACID transactions, automatic file consolidation, and integration with Spark. Bateman concluded with a demo of creating a Lakehouse.
A glimpse into the world of data visualization and how you can turn your data into visual statements by exploring the different types of charts, tools & techniques
The document outlines a reference architecture for using big data and analytics to address challenges in areas like fraud detection, risk reduction, compliance, and customer churn prevention for financial institutions. It describes components like streaming data ingestion, storage, processing, analytics and machine learning, and presentation. Specific applications discussed include money laundering prevention, using techniques like decision trees, cluster analysis, and pattern detection on data from multiple sources stored in Azure data services.
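A toy version of the decision-tree technique mentioned for money-laundering detection might look like the Spark ML sketch below; the features, labels, and data are invented for illustration and are not the reference architecture's actual pipeline (a `spark` session is assumed):

```python
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier

# Invented transaction features: amount, transfers in the last day, label (1 = suspicious).
txns = spark.createDataFrame(
    [(5000.0, 1, 0), (98000.0, 14, 1), (120.0, 0, 0), (75000.0, 9, 1)],
    ["amount", "daily_transfers", "label"],
)
features = VectorAssembler(
    inputCols=["amount", "daily_transfers"], outputCol="features"
).transform(txns)

model = DecisionTreeClassifier(labelCol="label").fit(features)
model.transform(features).select("amount", "prediction").show()
```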
Building Reliable Data Lakes at Scale with Delta Lake (Databricks)
Most data practitioners grapple with data reliability issues—it’s the bane of their existence. Data engineers, in particular, strive to design, deploy, and serve reliable data in a performant manner so that their organizations can make the most of their valuable corporate data assets.
Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Built on open standards, Delta Lake employs co-designed compute and storage and is compatible with the Spark APIs. It delivers high data reliability and query performance to support big data use cases, from batch and streaming ingest and fast interactive queries to machine learning. In this tutorial we will discuss the requirements of modern data engineering, the challenges data engineers face when it comes to data reliability and performance, and how Delta Lake can help. Through presentation, code examples, and notebooks, we will explain these challenges and the use of Delta Lake to address them. You will walk away with an understanding of how you can apply this innovation to your data architecture and the benefits you can gain.
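The batch-plus-streaming ingest pattern referenced in the abstract can be sketched as follows; the paths, schema, and checkpoint location are placeholders, and a `spark` session with delta-spark is assumed:

```python
from pyspark.sql.types import StructType, StructField, StringType, LongType

# Batch ingest into a Delta table.
spark.read.json("/data/raw/day1").write.format("delta").mode("append").save("/delta/events")

# Streaming ingest into the same table: the transaction log lets batch and
# streaming writers coexist safely.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("ts", LongType()),
])
(spark.readStream.schema(event_schema).json("/data/incoming")
 .writeStream.format("delta")
 .option("checkpointLocation", "/delta/events/_checkpoints/ingest")
 .start("/delta/events"))
```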
This tutorial will be both an instructor-led and a hands-on interactive session. Instructions on how to get tutorial materials will be covered in class.
What you’ll learn:
The key data reliability challenges
How Delta Lake brings reliability to data lakes at scale
How Delta Lake fits within an Apache Spark™ environment
How to use Delta Lake to realize data reliability improvements
Prerequisites
A fully-charged laptop (8-16GB memory) with Chrome or Firefox
Pre-register for Databricks Community Edition
Databricks is a Software-as-a-Service-like experience (or Spark-as-a-service): a tool for curating and processing massive amounts of data, developing, training, and deploying models on that data, and managing the whole workflow process throughout the project. It is for those who are comfortable with Apache Spark, as it is 100% based on Spark and is extensible with support for Scala, Java, R, and Python alongside Spark SQL, GraphX, Streaming, and the Machine Learning Library (MLlib). It has built-in integration with many data sources, has a workflow scheduler, allows for real-time workspace collaboration, and has performance improvements over traditional Apache Spark.
The Future of Data Science and Machine Learning at Scale: A Look at MLflow, D... (Databricks)
Many have dubbed the 2020s the decade of data. This is indeed an era of data zeitgeist.
From code-centric software development 1.0, we are entering software development 2.0, a data-centric and data-driven approach in which data plays a central role in our everyday lives.
As the volume and variety of data garnered from myriad sources continue to grow at an astronomical scale, and as cloud computing offers cheap compute and storage resources at scale, data platforms have to match in their ability to process, analyze, and visualize data at scale, at speed, and with ease. This involves paradigm shifts in how data is processed and stored, and in the programming frameworks developers are given to access and work with these data platforms.
In this talk, we will survey some emerging technologies that address the challenges of data at scale: how these tools help data scientists and machine learning developers with their data tasks, why they scale, and how they help future data scientists get started quickly.
In particular, we will examine in detail two open-source tools: MLflow (for machine learning life cycle development) and Delta Lake (for reliable storage of structured and unstructured data).
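MLflow's tracking API, for example, records the parameters and metrics of each training run. This is a minimal sketch with invented values:

```python
import mlflow

# Each run logs its parameters and metrics to the tracking server
# (or to a local ./mlruns directory when no server is configured).
with mlflow.start_run(run_name="demo"):
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 10)
    mlflow.log_metric("rmse", 0.42)
```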
Other emerging tools, such as Koalas, help data scientists do exploratory data analysis at scale in a language and framework they are already familiar with. We will also look at emerging data + AI trends in 2021.
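Koalas exposes the pandas API on top of Spark, so exploratory code carries over almost unchanged. A small sketch follows, with invented data; it assumes the databricks.koalas package (since folded into Spark as pyspark.pandas):

```python
import databricks.koalas as ks  # on newer Spark versions: import pyspark.pandas as ps

# pandas-style syntax, executed by Spark under the hood.
kdf = ks.DataFrame({"city": ["NY", "SF", "NY"], "sales": [10, 20, 30]})
print(kdf.groupby("city")["sales"].sum())
```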
You will understand the challenges of machine learning model development at scale, why you need reliable and scalable storage, and what other open source tools are at your disposal to do data science and machine learning at scale.
Describes what Enterprise Data Architecture in a Software Development Organization should cover, and does so by listing over 200 data architecture-related deliverables an Enterprise Data Architect should remember to evangelize.
2. "Safe harbor" statement under the Private Securities Litigation Reform Act of 1995: This presentation contains forward-looking statements about the company's
financial and operating results, which may include expected GAAP and non-GAAP financial and other operating and non-operating results, including revenue,
net income, diluted earnings per share, operating cash flow growth, operating margin improvement, expected revenue growth, expected current remaining
performance obligation growth, expected tax rates, stock-based compensation expenses, amortization of purchased intangibles, shares outstanding, market
growth, environmental, social and governance goals and expected capital allocation, including mergers and acquisitions, capital expenditures and other
investments. The achievement or success of the matters covered by such forward-looking statements involves risks, uncertainties and assumptions. If any such
risks or uncertainties materialize or if any of the assumptions prove incorrect, the company’s results could differ materially from the results expressed or implied
by the forward-looking statements it makes.
The risks and uncertainties referred to above include -- but are not limited to -- risks associated with the effect of general economic and market conditions; the
impact of geopolitical events; the impact of foreign currency exchange rate and interest rate fluctuations on our results; our business strategy and our plan to
build our business, including our strategy to be the leading provider of enterprise cloud computing applications and platforms; the pace of change and
innovation in enterprise cloud computing services; the seasonal nature of our sales cycles; the competitive nature of the market in which we participate; our
international expansion strategy; the demands on our personnel and infrastructure resulting from significant growth in our customer base and operations,
including as a result of acquisitions; our service performance and security, including the resources and costs required to avoid unanticipated downtime and
prevent, detect and remediate potential security breaches; the expenses associated with our data centers and third-party infrastructure providers; additional
data center capacity; real estate and office facilities space; our operating results and cash flows; new services and product features, including any efforts to
expand our services beyond the CRM market; our strategy of acquiring or making investments in complementary businesses, joint ventures, services,
technologies and intellectual property rights; the performance and fair value of our investments in complementary businesses through our strategic investment
portfolio; our ability to realize the benefits from strategic partnerships, joint ventures and investments; the impact of future gains or losses from our strategic
investment portfolio, including gains or losses from overall market conditions that may affect the publicly traded companies within our strategic investment
portfolio; our ability to execute our business plans; our ability to successfully integrate acquired businesses and technologies; our ability to continue to grow
unearned revenue and remaining performance obligation; our ability to protect our intellectual property rights; our ability to develop our brands; our reliance
on third-party hardware, software and platform providers; our dependency on the development and maintenance of the infrastructure of the Internet; the
effect of evolving domestic and foreign government regulations, including those related to the provision of services on the Internet, those related to accessing
the Internet, and those addressing data privacy, cross-border data transfers and import and export controls; the valuation of our deferred tax assets and the
release of related valuation allowances; the potential availability of additional tax assets in the future; the impact of new accounting pronouncements and tax
laws; uncertainties affecting our ability to estimate our tax rate; uncertainties regarding our tax obligations in connection with potential jurisdictional transfers
of intellectual property, including the tax rate, the timing of the transfer and the value of such transferred intellectual property; the impact of expensing stock
options and other equity awards; the sufficiency of our capital resources; factors related to our outstanding debt, revolving credit facility and loan associated
with 50 Fremont; compliance with our debt covenants and lease obligations; current and potential litigation involving us; and the impact of climate change,
natural disasters and actual or threatened public health emergencies, such as the ongoing Coronavirus pandemic.
Forward-Looking Statements
3. Agenda
• What the Tableau Viz Lightning Web Component is
• The problems the Tableau Viz LWC solves
• Tableau Viz LWC features
• Steps to set up the Tableau Viz LWC
• Authentication when using the Tableau Viz LWC

4. Tableau Viz Lightning Web Component
UI development that embeds Tableau views into Salesforce more quickly and easily
• Simple drag-and-drop embedding
• Filters applied automatically to match the Salesforce object page
• Can display non-Salesforce data as well
• Single sign-on via SAML authentication