Making Data Timelier and More Reliable with Lakehouse Technology | Matei Zaharia
Enterprise data architectures usually contain many systems—data lakes, message queues, and data warehouses—that data must pass through before it can be analyzed. Each transfer step between systems adds a delay and a potential source of errors. What if we could remove all these steps? In recent years, cloud storage and new open source systems have enabled a radically new architecture: the lakehouse, an ACID transactional layer over cloud storage that can provide streaming, management features, indexing, and high-performance access similar to a data warehouse. Thousands of organizations including the largest Internet companies are now using lakehouses to replace separate data lake, warehouse and streaming systems and deliver high-quality data faster internally. I’ll discuss the key trends and recent advances in this area based on Delta Lake, the most widely used open source lakehouse platform, which was developed at Databricks.
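As a small, hedged illustration of that transactional layer (a minimal sketch, not material from the talk; the local path and columns are invented, and it assumes `pip install pyspark delta-spark`):

```python
# Minimal Delta Lake sketch: ACID writes over ordinary storage, plus time travel.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/events_delta"  # would be an S3/ADLS/GCS path in a real lakehouse

# Each write is an atomic commit to the table's transaction log.
spark.createDataFrame([(1, "click"), (2, "view")], ["id", "event"]) \
     .write.format("delta").mode("overwrite").save(path)
spark.createDataFrame([(3, "purchase")], ["id", "event"]) \
     .write.format("delta").mode("append").save(path)

spark.read.format("delta").load(path).show()                           # current version
spark.read.format("delta").option("versionAsOf", 0).load(path).show()  # time travel
```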
This is the presentation from the talk I gave at JavaDay Kiev 2015. It covers the evolution of data processing systems, from simple setups with a single DWH to more complex approaches such as the Data Lake, Lambda Architecture, and Pipeline architectures.
Modularized ETL Writing with Apache Spark | Databricks
Apache Spark has been an integral part of Stitch Fix’s compute infrastructure. Over the past five years, it has become our de facto standard for most ETL and heavy data processing needs and expanded our capabilities in the Data Warehouse.
Since all our writes to the Data Warehouse go through Apache Spark, we took advantage of that to add modules that supplement ETL writing. Config-driven and purposeful, these modules apply their tasks to a Spark DataFrame destined for a Hive table.
These are organized as a sequence of transformations on the Apache Spark DataFrame before it is written to the table. One of these is journalizing, a process that maintains a non-duplicated historical record of mutable data associated with different parts of our business.
Data quality, another such module, is enabled on the fly: we use Apache Spark to calculate metrics, and an adjacent service runs quality tests on the incoming data for a table.
And finally, we cleanse data based on provided configurations, validate it, and write it into the warehouse. We have an internal versioning strategy in the Data Warehouse that lets us tell new data from old data for a table.
Having these modules at write time allows cleaning, validation and testing of data before it enters the Data Warehouse, programmatically relieving us of most data problems. This talk focuses on ETL writing at Stitch Fix and describes these modules that help our Data Scientists on a daily basis.
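A minimal sketch of what such a config-driven write path could look like (the module names, config keys, columns, and destination table are hypothetical, not Stitch Fix's actual code):

```python
# Hypothetical config-driven modules applied to a DataFrame before the table write.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-write-sketch").getOrCreate()

def cleanse(df: DataFrame, cfg: dict) -> DataFrame:
    # Drop rows missing the required columns.
    return df.dropna(subset=cfg["required_columns"])

def add_version(df: DataFrame, cfg: dict) -> DataFrame:
    # Tag rows with a batch version so new and old data can be told apart.
    return df.withColumn("batch_version", F.lit(cfg["batch_version"]))

def check_quality(df: DataFrame, cfg: dict) -> DataFrame:
    # Stand-in for a data-quality module: fail fast if the batch is too small.
    assert df.count() >= cfg["min_rows"], "data quality check failed"
    return df

config = {"required_columns": ["order_id"], "batch_version": "2021-06-01", "min_rows": 1}
modules = [cleanse, add_version, check_quality]   # order would be driven by config

df = spark.createDataFrame([("o1", 10.0), (None, 5.0)], ["order_id", "amount"])
for module in modules:
    df = module(df, config)
df.write.mode("overwrite").saveAsTable("orders")  # destination Hive table
```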
SQL Server Tuning to Improve Database Performance | Mark Ginnebaugh
SQL Server tuning is a process to eliminate performance bottlenecks and improve application service. This presentation from Confio Software discusses SQL diagramming, wait type data, column selectivity, and other solutions that will help make tuning projects a success, including:
• SQL Tuning Methodology
• Response Time Tuning Practices
• How to use SQL Diagramming techniques to tune SQL statements
• How to read execution plans
Master Data Management's Place in the Data Governance Landscape | CCG
For many organizations, Master Data Management is a necessity to ensure consistency and accuracy of essential business entities. It further plays alongside data architecture, metadata management, data quality, security & privacy, and program management in the Data Governance ecosystem.
Join CCG's data governance subject matter experts as they overview the fundamentals of Master Data Management at our Atlanta-based Data Analytics Meetup. This event will discuss how to enable components of data governance within your organization and review how to best leverage Microsoft's SQL Server Master Data Services.
Standing on the Shoulders of Open-Source Giants: The Serverless Realtime Lake... | HostedbyConfluent
"Unlike just a few years ago, today the lakehouse architecture is an established data platform embraced by all major cloud data companies such as AWS, Azure, Google, Oracle, Microsoft, Snowflake and Databricks.
This session kicks off with a technical, no-nonsense introduction to the lakehouse concept, dives deep into the lakehouse architecture and recaps how a data lakehouse is built from the ground up with streaming as a first-class citizen.
Then we focus on serverless for streaming use cases. Serverless concepts are well-known from developers triggering hundreds of thousands of AWS Lambda functions at a negligible cost. However, the same concept becomes more interesting when looking at data platforms.
We have all heard about the principle "It runs best on Powerpoint", so I decided to skip slides here and bring a serverless demo instead:
A hands-on, fun, and interactive serverless streaming use case example where we ingest live events from hundreds of mobile devices (don't miss out - bring your phone and be part of it!!). Based on this use case I will critically explore how much of a modern lakehouse is serverless and how we implemented that at Databricks (spoiler alert: serverless is everywhere from data pipelines, workflows, optimized Spark APIs, to ML).
TL;DR benefits for Data Practitioners:
- Recap the OSS foundation of the Lakehouse architecture and understand its appeal.
- Understand the benefits of leveraging a lakehouse for streaming and what's there beyond Spark Structured Streaming.
- Meat of the talk: The Serverless Lakehouse. I give you the tech bits beyond the hype. How does a serverless lakehouse differ from other serverless offers?
- Live, hands-on, interactive demo to explore serverless data engineering end-to-end. For each step we take a critical look and I explain what it means for you, e.g. saving costs and removing operational overhead.
Apache Kafka and the Data Mesh | Ben Stopford and Michael Noll, Confluent | HostedbyConfluent
Data mesh is a relatively recent term that describes a set of principles that good modern data systems uphold: a kind of “microservices” for the data-centric world. While data mesh is not a technology-specific pattern, systems that adopt and implement data mesh principles have a relatively long history under different guises.
In this talk, we share our recommendations and picks of what every developer should know about building a streaming data mesh with Kafka. We introduce the four principles of the data mesh: domain-driven decentralization, data as a product, self-service data platform, and federated governance. We then cover topics such as the differences between working with event streams versus centralized approaches and highlight the key characteristics that make streams a great fit for implementing a mesh, such as their ability to capture both real-time and historical data. We’ll examine how to onboard data from existing systems into a mesh, how to model communication within the mesh, and how to deal with changes to your domain’s “public” data; we’ll also give examples of global standards for governance and discuss the importance of taking a product-centric view on data sources and the data sets they share.
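As a tiny, hedged illustration of treating a domain's events as a product on Kafka (not code from the talk; the topic name, fields, and broker address are invented), a domain team might publish versioned events that other domains consume:

```python
# Hypothetical producer for a domain's "data product" event stream (confluent-kafka).
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

event = {"order_id": "o-123", "status": "shipped", "schema_version": 1}

# Key by order_id so all events for one order stay ordered within a partition.
producer.produce("orders.order-events.v1", key=event["order_id"], value=json.dumps(event))
producer.flush()
```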
Data Discovery at Databricks with Amundsen | Databricks
Databricks used to use a static, manually maintained wiki page for internal data exploration. We will discuss how we leverage Amundsen, an open source data discovery tool from Linux Foundation AI & Data, to improve productivity and trust by programmatically surfacing the most relevant datasets and SQL analytics dashboards, along with their important information, internally at Databricks.
We will also talk about how we integrate Amundsen with Databricks world class infrastructure to surface metadata including:
- Surface the most popular tables used within Databricks
- Support fuzzy search and facet search for datasets
- Surface rich metadata on datasets:
  - Lineage information (downstream tables, upstream tables, downstream jobs, downstream users)
  - Dataset owner
  - Dataset frequent users
  - Delta extended metadata (e.g. change history)
  - ETL job that generates the dataset
  - Column stats on numeric columns
  - Dashboards that use the given dataset
- Use the Databricks data tab to show sample data
- Surface metadata on dashboards, including: create time, last update time, tables used, etc.
Last but not least, we will discuss how we incorporate internal user feedback and provide the same discovery productivity improvements for Databricks customers in the future.
Massive Data Processing in Adobe Using Delta Lake | Databricks
At Adobe Experience Platform, we ingest TBs of data every day and manage PBs of data for our customers as part of the Unified Profile offering. At the heart of this is complex ingestion of a mix of normalized and denormalized data, with various linkage scenarios powered by a central Identity Linking Graph. This helps power various marketing scenarios that are activated across multiple platforms and channels, such as email and advertisements. We will go over how we built a cost-effective and scalable data pipeline using Apache Spark and Delta Lake and share our experiences around the topics outlined below; a small illustrative sketch follows the outline.
What are we storing?
Multi Source – Multi Channel Problem
Data Representation and Nested Schema Evolution
Performance Trade Offs with Various formats
Go over anti-patterns used
(String FTW)
Data Manipulation using UDFs
Writer Worries and How to Wipe them Away
Staging Tables FTW
Datalake Replication Lag Tracking
Performance Time!
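A small, hedged sketch of two items from the outline above, UDF-based data manipulation and schema evolution on append (the columns, path, and use of mergeSchema are illustrative assumptions, not Adobe's pipeline):

```python
# Hedged sketch: manipulate data with a UDF, then append with schema evolution enabled.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F, types as T
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("profile-ingest-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# A UDF for simple data manipulation (illustrative only).
normalize_id = F.udf(lambda s: s.strip().lower() if s else None, T.StringType())

batch = spark.createDataFrame([(" User1 ", "email")], ["identity", "channel"])
batch = batch.withColumn("identity", normalize_id("identity"))

# mergeSchema lets a later batch add new (possibly nested) columns to the table.
(batch.write.format("delta")
      .mode("append")
      .option("mergeSchema", "true")
      .save("/tmp/profiles_delta"))
```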
Every business today wants to leverage data to drive strategic initiatives with machine learning, data science and analytics — but runs into challenges from siloed teams, proprietary technologies and unreliable data.
That’s why enterprises are turning to the lakehouse: it offers a single platform to unify all your data, analytics and AI workloads.
Join our How to Build a Lakehouse technical training, where we’ll explore how to use Apache Spark™, Delta Lake, and other open source technologies to build a better lakehouse. This virtual session will include concepts, architectures and demos.
Here’s what you’ll learn in this 2-hour session:
How Delta Lake combines the best of data warehouses and data lakes for improved data reliability, performance and security
How to use Apache Spark and Delta Lake to perform ETL processing, manage late-arriving data, and repair corrupted data directly on your lakehouse
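As a hedged example of the late-arriving-data point above (a sketch, not the training's own material; the path, keys, and columns are invented), a Delta MERGE applies a late batch as one atomic upsert:

```python
# Hedged sketch: upserting a late-arriving batch into a Delta table with MERGE.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable

builder = (
    SparkSession.builder.appName("late-data-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

path = "/tmp/sales_delta"
spark.createDataFrame([(1, 100.0), (2, 200.0)], ["id", "amount"]) \
     .write.format("delta").mode("overwrite").save(path)

late_batch = spark.createDataFrame([(2, 250.0), (3, 300.0)], ["id", "amount"])

# MERGE applies the late rows in one atomic commit: update matches, insert the rest.
(DeltaTable.forPath(spark, path).alias("t")
    .merge(late_batch.alias("u"), "t.id = u.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```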
SQL Analytics Powering Telemetry Analysis at Comcast | Databricks
Comcast is one of the leading providers of communications, entertainment, and cable products and services. At the heart of it is Comcast RDK, providing the backbone of telemetry to the industry. RDK (Reference Design Kit) is pre-bundled open-source firmware for a complete home platform covering video, broadband and IoT devices. The RDK team at Comcast analyzes petabytes of data, collected every 15 minutes from 70 million devices (video, broadband and IoT devices) installed in customer homes. They run ETL and aggregation pipelines and publish analytical dashboards on a daily basis to reduce customer calls and to support firmware rollouts. The analysis is also used to calculate a WiFi happiness index, which is a critical KPI for Comcast customer experience.
In addition to this, the RDK team also does release tracking by analyzing RDK firmware quality. SQL Analytics allows customers to operate a lakehouse architecture that provides data warehousing performance at data lake economics, for up to 4x better price/performance for SQL workloads than traditional cloud data warehouses.
We present the results of the “Test and Learn” with SQL Analytics and the Delta engine that we ran in partnership with the Databricks team. We present a quick demo introducing the SQL native interface, the challenges we faced with migration, the results of the execution, and our journey of productionizing this at scale.
Architect’s Open-Source Guide for a Data Mesh Architecture | Databricks
Data Mesh is an innovative concept addressing many data challenges from an architectural, cultural, and organizational perspective. But is the world ready to implement Data Mesh?
In this session, we will review the importance of core Data Mesh principles, what they can offer, and when it is a good idea to try a Data Mesh architecture. We will discuss common challenges with implementation of Data Mesh systems and focus on the role of open-source projects for it. Projects like Apache Spark can play a key part in standardized infrastructure platform implementation of Data Mesh. We will examine the landscape of useful data engineering open-source projects to utilize in several areas of a Data Mesh system in practice, along with an architectural example. We will touch on what work (culture, tools, mindset) needs to be done to ensure Data Mesh is more accessible for engineers in the industry.
The audience will leave with a good understanding of the benefits of Data Mesh architecture, common challenges, and the role of Apache Spark and other open-source projects for its implementation in real systems.
This session is targeted at architects, decision-makers, data engineers, and system designers.
Neo4j Spatial - Backing a GIS with a true graph database | Craig Taverner
Geographic data is naturally structured like a graph, and topological analyses view GIS data as graphs, but until now no one has tried to use a real graph database as the backing store for a GIS. The developers of Neo4j have added features to the popular open source graph database to provide support for spatial indexing, storage and topology. In addition to these core components, there are a number of useful utilities for importing and exporting data from other popular data sources, and for enabling the use of this database in well-known libraries and applications in the open source GIS environment.
We will discuss the advantages of using a graph database for geographic data, the performance and scalability implications, and the opportunities enabled by this approach. In today's highly connected social web, there is an increasing need for graph-based data management. At the same time applications are becoming more and more location aware. The time is right for the first geographic graph database.
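As a small, hedged taste of geographic data in a graph database (this uses Neo4j's built-in point type and the official Python driver, not the Neo4j Spatial plugin described in the talk, which offers richer geometry and index support; the connection details, labels, and coordinates are invented):

```python
# Hedged sketch: store places as nodes with point locations and query by distance.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    session.run(
        "CREATE (:Place {name: $name, location: point({latitude: $lat, longitude: $lon})})",
        name="Malmo", lat=55.605, lon=13.0038,
    )
    result = session.run(
        """
        MATCH (p:Place)
        WHERE point.distance(p.location,
              point({latitude: $lat, longitude: $lon})) < $radius
        RETURN p.name AS name
        """,
        lat=55.6, lon=13.0, radius=5000,
    )
    print([record["name"] for record in result])

driver.close()
```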
The data services marketplace is enabled by a data abstraction layer that supports rapid development of operational applications and single-data-view portals. In this presentation you will learn about the services-based reference architecture, and the modality and latency of data access.
- Reference architecture for enterprise data services marketplace
- Modality and latency of data access
- Customer use cases and demo
This presentation is part of the Denodo Educational Seminar, and you can watch the video here: goo.gl/vycYmZ.
Five Things to Consider About Data Mesh and Data Governance | DATAVERSITY
Data mesh was among the most discussed and controversial enterprise data management topics of 2021. One of the reasons people struggle with data mesh concepts is that we still have a lot of open questions that we are not thinking about:
Are you thinking beyond analytics? Are you thinking about all possible stakeholders? Are you thinking about how to be agile? Are you thinking about standardization and policies? Are you thinking about organizational structures and roles?
Join data.world VP of Product Tim Gasper and Principal Scientist Juan Sequeda for an honest, no-bs discussion about data mesh and its role in data governance.
How to get the maximum performance from your AEP server. This session will discuss ways to improve the execution time of short-running jobs and how to properly configure the server depending on the expected number of users as well as the average size and duration of individual jobs. It will include examples of using job pooling, database connection sharing, and parallel subprotocol tuning. It will also cover determining when to use cluster, grid, or load-balanced configurations, along with memory and CPU sizing guidelines.
Epistemic Interaction - tuning interfaces to provide information for AI support | Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
DevOps and Testing slides at DASA Connect | Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also ran a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... | BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GraphRAG is All You need? LLM & Knowledge Graph | Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... | DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
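As a quick, hedged taste of the Python binding mentioned above (a minimal sketch assuming `pip install pypowsybl`; the IEEE 14-bus network is an example bundled with the library, not the webinar's own case study):

```python
# Minimal pypowsybl sketch: load a bundled example grid and run an AC power flow.
import pypowsybl as pp

network = pp.network.create_ieee14()   # example network shipped with the library
results = pp.loadflow.run_ac(network)  # run an AC load flow
print(results[0].status)               # convergence status of the first connected component
print(network.get_buses().head())      # bus data (a pandas DataFrame) after the flow
```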
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... | UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
PHP Frameworks: I want to break free (IPC Berlin 2024) | Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... | James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3 | DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The New Frontiers of AI in RPA with UiPath Autopilot™ | UiPathCommunity
In this free online event, organized by the Italian UiPath Community, you can explore the new features of Autopilot, the tool that integrates Artificial Intelligence into the development and use of automations.
📕 Together we will look at some examples of Autopilot in use across different tools of the UiPath Suite:
Autopilot for Studio Web
Autopilot for Studio
Autopilot for Apps
Clipboard AI
GenAI applied to Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf | 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
2. 1. What’s a Teradata DWH System ?
2. SMP vs MPP
3. Shared-Everything vs Shared-Nothing architecture
4. Hardware architecture
4.1 Cliques 4.2 Hot standby Nodes
5. Node architecture
5.1 PDE 5.2 Virtual Processors
5.3 Parsing Engine 5.4 Access Module Processor
5.5 Disk Arrays
6. Request Processing
3. 1 What is Teradata DWH system?
• RDBMS designed to run the world’s largest databases
• Latest Intel technology nodes
• Standard access language (SQL)
• Massive Parallel Processing ‘MPP’ system
• A “Shared-Nothing” architecture
• Parallel-aware optimizer allowing concurrent complex queries
• Linear Scalability
4. 2. SMP VS MPP
Symmetric Multi Processing (SMP):
- Multiple CPUs serving separate processes simultaneously
- Shared Everything
- All CPUs share the same memory
- Mostly hosted on a shared SAN
Massive Parallel Processing (MPP):
- Multiple CPUs run in parallel serving a single process
- Shared Nothing
- Each CPU has its own memory and space
- High-speed node interconnect [BYNET]
5. 1. What’s a Teradata DWH System ?
2. SMP vs MPP
3. Shared-Everything vs Shared-Nothing architecture
4. Hardware architecture
4.1 Cliques 4.2 Hot standby Nodes
5. Node architecture
5.1 PDE 5.2 Virtual Processors
5.3 Parsing Engine 5.4 Access Module Processor
5.5 Disk Arrays
6. Request Processing
6. SHARED-EVERYTHING vs SHARED-NOTHING
Shared-Everything:
- Disk controllers and bandwidth shared
- Synchronization required across nodes
- Large-scale scalability issues
- Best for many small statements
Shared-Nothing:
- Controllers dedicated to nodes
- No cache synchronization necessary
- Linear scalability
- Best for heavy statements
7. 1. What’s a Teradata DWH System ?
2. SMP vs MPP
3. Shared-Everything vs Shared-Nothing architecture
4. Hardware architecture
4.1 Cliques 4.2 Hot standby Nodes
5. Node architecture
5.1 PDE 5.2 Virtual Processors
5.3 Parsing Engine 5.4 Access Module Processor
5.5 Disk Arrays
6. Request Processing
8. 4 Teradata Hardware Architecture
• SMP Nodes
> Latest Intel SMP CPUs
> Configured in 2+1 node cliques
> Linux or Windows
• BYNET Interconnect
> Fully scalable bandwidth
> 1 to 1024 nodes
• Storage
> Independent I/O per Node
> Scales per node
• Server Management
> One console for the entire system
[Diagram: four SMP nodes, each running PEs and AMPs, connected by the BYNET interconnect and managed from a single Server Management console.]
9. 4.1 CLIQUES
• Cliques group nodes together by multiported access to common disk array units.
• Inter-node disk array connections are made using FibreChannel (FC) buses.
• FC paths enable redundancy to ensure that the loss of a processor node or disk controller won’t limit data availability.
• A clique is a mechanism that supports migration of VPROCs under PDE following a node failure.
• If a node in a clique fails, VPROCs migrate to other nodes in the clique and continue to operate while recovery occurs on their home node.
10. 4.2 HOT STANDBY NODES
A hot standby node improves availability and maintains performance levels in the event of a node failure.
What is a hot standby node:
• A member of each clique in the system.
• Does not normally participate in Teradata Database operations.
• Used to compensate for the loss of a node in the clique.
Using a hot standby node eliminates:
• Restarts that are required to bring a failed node back into service.
• Degraded service when VPROCs have migrated to other nodes in a clique.
How hot standby node failover works:
• At node failure, all AMPs and LAN-attached PEs on the failed node migrate to the hot standby node.
• The hot standby node becomes the production node.
• When the failed node returns to service, it becomes the new hot standby node.
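A toy sketch of this failover behaviour (illustration only, not Teradata internals; node and AMP names are invented):

```python
# Toy model: on node failure the AMP vprocs migrate to the clique's hot standby node;
# when the failed node returns, it becomes the new hot standby.
from typing import Dict, List

clique: Dict[str, List[str]] = {
    "node1": ["AMP0", "AMP1"],
    "node2": ["AMP2", "AMP3"],
    "standby": [],               # hot standby: normally idle
}

def fail_node(node: str, standby: str) -> None:
    """Migrate all vprocs from a failed node to the hot standby node."""
    clique[standby].extend(clique.pop(node))
    print(f"{node} failed; {standby} is now a production node running {clique[standby]}")

def node_returns(node: str) -> None:
    """The recovered node comes back empty, as the new hot standby."""
    clique[node] = []
    print(f"{node} returned to service as the new hot standby")

fail_node("node1", "standby")
node_returns("node1")
```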
11. 1. What’s a Teradata DWH System ?
2. SMP vs MPP
3. Shared-Everything vs Shared-Nothing architecture
4. Hardware architecture
4.1 Cliques 4.2 Hot standby Nodes
5. Node architecture
5.1 PDE 5.2 Virtual Processors
5.3 Parsing Engine 5.4 Access Module Processor
5.5 Disk Arrays
6. Request Processing
12. 5 Node Architecture (‘Shared Nothing’)
Each Teradata Node is made up of hardware and software
• Each node runs a copy of the OS, database SW, and virtual processes
• Each node has CPUs, system disk, memory & adapters
[Diagram: a single node running UNIX with PDE, hosting two PE vprocs and multiple AMP vprocs, each AMP vproc with its own Vdisk.]
13. 5.1 PARALLEL DATABASE EXTENSIONS - PDE
A software interface layer that lies between the OS and the Teradata Database, enabling the database to:
• Run in a parallel environment
• Execute Vprocs
• Apply a flexible priority scheduler to Teradata Database sessions
• Consistently manage memory, I/O, and messaging system interfaces across multiple OS platforms
PDE provides a series of parallel operating system services, which include:
• Facilities to manage parallel execution of database operations on multiple nodes.
• Dynamic distribution of database tasks.
• Coordination of task execution within and between nodes.
14. 5.2 VIRTUAL PROCESSORS – WHAT IS IT
What is it:
• Set of software processes that run on a node under Teradata (PDE).
• Eliminate dependency on specialized physical processors
VPROC characteristics:
• Multiple VPROCs can run on an SMP platform or a node.
• VPROCs and the tasks running under them communicate using unique-address messaging, as if they were physically isolated from one another.
• This message communication is done using the BYNET hardware and BYNET Driver.
• Maximum number of VPROCs: 16,384 per system, 128 per node.
15. 5.2 VIRTUAL PROCESSORS – VPROC TYPES
GTW
• Gateway VPROCs provide a socket interface to Teradata Database
PE
• Parsing Engines perform session control, query parsing, security validation, query optimization
AMP
• Access Module Processors perform database functions, such as executing database queries.
• Database storage is distributed across the AMPs.
TVS
• Manages Teradata Database storage.
• AMPs acquire their portions of database storage through the TVS vproc.
NODE
• The node vproc handles PDE and operating system functions not directly related to AMP and PE work.
• Cannot be externally manipulated, and do not appear in the output of the Vproc Manager utility.
RSG
• Relay Services Gateway provides a socket interface for the replication agent.
• Relays dictionary changes to the Teradata Meta Data Services utility.
16. 5.3 PARSING ENGINE ‘PE’
• Communicates with the client system on one side and with the AMPs (via the BYNET) on the other side.
• Each PE executes the database software that manages sessions, decomposes SQL statements into steps, possibly in parallel, and returns the answer rows to the requesting client.
Parsing Engine Elements:
• Parser: Decomposes SQL into relational data management processing steps.
• Optimizer: Determines the most efficient path to access data.
• Generator: Generates and packages steps.
• Dispatcher: Receives processing steps from the Parser, sends them to the appropriate AMPs via the BYNET, monitors the completion of steps, and handles errors encountered during processing.
• Session Control: Manages session activities, such as logon, password validation, and logoff; recovers sessions following client or server failures.
17. 5.4 ACCESS MODULE PROCESSOR ‘AMP’
• The AMP VPROC manages Teradata Database interactions with the disk subsystem.
• Each AMP manages a share of the disk storage.
Database management tasks
• Accounting
• Journaling
• Locking tables, rows, and databases
• Output data conversion
During query processing:
• Sorting
• Joining data rows
• Aggregation
File System Management:
• Disk space management.
18. 5.4 THE AMPS – REQUEST PROCESSING
• The BYNET transmits messages to and from the AMPs and PEs.
• An AMP step can be sent to one of the following:
• One AMP
• Multi-Cast (A selected set of AMPs)
• All AMPs in the system
PE communication with AMPs during request processing:
1. If access is through a primary index and the request is for a single row, the PE transmits steps to a single AMP.
2. If the request is for many rows (an all-AMP request), the PE makes the BYNET broadcast the steps to all AMPs.
** To minimize system overhead, the PE can send a step to a subset of AMPs, when appropriate.
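A toy illustration of this routing decision (not Teradata's actual hashing; the AMP count and key are invented): a single-row primary-index request goes to exactly one AMP, while an all-AMP request is broadcast to every AMP.

```python
# Toy routing sketch: pick one AMP for a primary-index request, or broadcast to all AMPs.
import hashlib
from typing import List, Optional

NUM_AMPS = 8  # assumed small system for illustration

def amp_for_primary_index(pi_value: str) -> int:
    """Map a primary-index value to one AMP via hashing (stand-in for the real hash map)."""
    return int(hashlib.md5(pi_value.encode()).hexdigest(), 16) % NUM_AMPS

def target_amps(pi_value: Optional[str]) -> List[int]:
    """Return the AMPs that would receive the step."""
    if pi_value is not None:               # single-row, primary-index access
        return [amp_for_primary_index(pi_value)]
    return list(range(NUM_AMPS))           # all-AMP request

print(target_amps("customer_42"))  # one AMP, e.g. [5]
print(target_amps(None))           # all AMPs: [0, 1, ..., 7]
```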
19. 5.5 DISK ARRAYS
Logical Units (LUN):
• The RAID Manager uses drive groups (DGs).
• A DG is a set of drives that have been configured into one or more LUNs.
• The OS recognizes a LUN as a disk and is not aware that it is writing on multiple disk drives.
Vdisk:
• A group of cylinders currently assigned to an AMP.
• The OS recognizes a LUN as a disk and is not aware that it is writing on multiple disk drives.
• The actual physical storage may derive from several different storage devices.
20. 1. What’s a Teradata DWH System ?
2. SMP vs MPP
3. Shared-Everything vs Shared-Nothing architecture
4. Hardware architecture
4.1 Cliques 4.2 Hot standby Nodes
5. Node architecture
5.1 PDE 5.2 Virtual Processors
5.3 Parsing Engine 5.4 Access Module Processor
5.5 Disk Arrays
6. Request Processing
21. 6 REQUEST PROCESSING – “LIFETIME OF A QUERY”
1. The Parser: checks the Request cache to determine if the request is already there.
2. The Syntaxer: checks the syntax of an incoming request.
3. The Resolver: adds information from the Data Dictionary to convert database, table, view, stored procedure, and macro names to internal identifiers.
4. The Security module: checks privileges in the Data Dictionary.
5. The Optimizer: determines the most effective way to implement the SQL request.
6. The Optimizer: scans the request to determine where to place locks, then passes the optimized parse tree to the Generator.
7. The Generator: transforms the optimized parse tree into plastic steps, caches the steps if appropriate, and passes them to gncApply.
8. gncApply: takes the plastic steps produced by the Generator, binds in parameterized data if it exists, and transforms them into concrete steps.
9. The Dispatcher
22. P.45 REQUEST PROCESSING – “1. THE PARSER”
1. The Parser
• Checks whether the request is already in the Request cache:
  • IF found in the cache => the Parser reuses the plastic steps found in the cache: it goes to checking privileges (step 4) and then passes the steps to gncApply (step 8).
  • IF a new request => go to step (2), the Syntaxer.
2. The Syntaxer: checks the syntax.
3. The Resolver: converts object names to internal identifiers.
4. Security module: checks privileges.
5. The Optimizer: determines the most effective way to implement the SQL request.
6. The Optimizer: scans the request to determine where to place locks.
7. The Generator: transforms the parse tree into plastic steps.
8. gncApply: binds parameterized data if it exists and transforms the plastic steps into concrete steps.
9. The Dispatcher
Note: Plastic steps are directives to the database management system that do not contain data values.
23. P.45 REQUEST PROCESSING – “2. SYNTAXER, 3. RESOLVER”
1. The Parser: checks the Request cache to determine if the request is already there.
2. The Syntaxer
• Checks the syntax of a new request:
  • IF wrong => passes an error message back to the requestor and stops.
  • IF correct => converts the request to a parse tree and passes it to the Resolver (3).
3. The Resolver
• Adds information from the Data Dictionary (or a cached copy of the information) to convert database, table, view, stored procedure, and macro names to internal identifiers.
4. Security module: checks privileges.
5. The Optimizer: determines the most effective way to implement the SQL request.
6. The Optimizer: scans the request to determine where to place locks.
7. The Generator: transforms the parse tree into plastic steps.
8. gncApply: binds parameterized data if it exists and transforms the plastic steps into concrete steps.
9. The Dispatcher
24. P.45 REQUEST PROCESSING – “4. SECURITY MODULE, 5. - 6. OPTIMIZER”
1. The Parser: checks the Request cache to determine if the request is already there.
2. The Syntaxer: checks the syntax.
3. The Resolver: converts object names to internal identifiers.
4. The Security module
• Checks privileges of the accessed objects against the requestor:
  • Mismatch => returns a privilege error message.
  • Privileged => passes the request to the Optimizer.
5. The Optimizer
• Determines the most effective way to implement the request (execution plan).
6. The Optimizer
• Determines what type of locks to place on objects, and where to place them.
7. The Generator: transforms the parse tree into plastic steps.
8. gncApply: binds parameterized data if it exists and transforms the plastic steps into concrete steps.
9. The Dispatcher
25. P.45 REQUEST PROCESSING – “SEPARATE ORIGINAL”
1. The Parser: checks the Request cache to determine if the request is already there.
2. The Syntaxer: checks the syntax.
3. The Resolver: converts object names to internal identifiers.
4. Security module: checks privileges.
5. The Optimizer: determines the most effective way to implement the SQL request.
6. The Optimizer: scans the request to determine where to place locks.
7. The Generator
• Transforms the optimized parse tree into plastic steps.
• Caches the steps if appropriate.
• Passes them to gncApply.
8. gncApply
• Binds in parameterized data if it exists, and transforms the plastic steps into concrete steps.
• Passes the concrete steps to the Dispatcher.
9. The Dispatcher
Note: Concrete steps are directives to the AMPs that contain needed user- or session-specific values and needed data parcels.
26. P.45 DISPATCHER – REQUEST PROCESSING
• The Dispatcher controls the sequence in which steps are executed.
• It also passes the steps to the BYNET to be distributed to the AMPs:
  1. The Dispatcher receives concrete steps from gncApply.
  2. The Dispatcher places the first step on the BYNET; it tells the BYNET whether the step is for one AMP, several AMPs, or all AMPs, and waits for a completion response. Whenever possible, Teradata Database performs steps in parallel to enhance performance.
  3. The Dispatcher receives a completion response from all expected AMPs and places the next step on the BYNET. It continues to do this until all the AMP steps associated with a request are done.
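A toy sketch of that dispatcher loop (illustration only, not Teradata code; the step names and AMP count are invented): concrete steps are executed one after another, each sent to one AMP, a subset of AMPs, or all AMPs, with the dispatcher waiting for every targeted AMP to respond before moving on.

```python
# Toy dispatcher loop: send each step to its target AMPs and wait for completion responses.
from typing import List, Optional

NUM_AMPS = 4

class Step:
    def __init__(self, name: str, target_amps: Optional[List[int]] = None):
        self.name = name
        # None means an all-AMP step (broadcast over the BYNET).
        self.target_amps = target_amps if target_amps is not None else list(range(NUM_AMPS))

def run_on_amp(amp: int, step: Step) -> str:
    return f"AMP {amp} completed '{step.name}'"

def dispatch(steps: List[Step]) -> None:
    for step in steps:
        # Place the step on the (simulated) BYNET and wait for all targeted AMPs to respond.
        responses = [run_on_amp(amp, step) for amp in step.target_amps]
        print(f"step '{step.name}': {len(responses)} completion response(s) received")

dispatch([
    Step("lock table", None),                    # all-AMP step
    Step("retrieve row by primary index", [2]),  # single-AMP step
    Step("return answer set", None),
])
```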