How can you track all changes to your data across time? This talk will introduce you to the techniques you need to do that. We’ll examine the theory behind temporal database tables as well as the changes in the SQL:2011 standard that support them. We’ll also look at how you can implement temporal tables, both for DBMS that support SQL:2011 and those that don't. By the end of this talk you should be able to take your data to the fourth dimension.
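For a DBMS without native SQL:2011 support, the usual workaround is a pair of period columns plus an "as of" predicate, which is the technique this kind of talk typically walks through. Below is a minimal sketch using SQLite; the table name, column names, and values are illustrative, not a specific vendor's implementation.

```python
import sqlite3

# Emulate a system-versioned (temporal) table on an engine without
# native SQL:2011 support: each row version carries a validity period.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE product_history (
        id         INTEGER,
        price      REAL,
        valid_from TEXT,   -- inclusive start of this version's period
        valid_to   TEXT    -- exclusive end; '9999-12-31' marks the current row
    )
""")

def update_price(conn, pid, price, now):
    # Close the current version, then open a new one. SQL:2011
    # system-versioned tables do this bookkeeping automatically.
    conn.execute(
        "UPDATE product_history SET valid_to = ? "
        "WHERE id = ? AND valid_to = '9999-12-31'", (now, pid))
    conn.execute(
        "INSERT INTO product_history VALUES (?, ?, ?, '9999-12-31')",
        (pid, price, now))

def price_as_of(conn, pid, ts):
    # The moral equivalent of SELECT ... FOR SYSTEM_TIME AS OF ts.
    row = conn.execute(
        "SELECT price FROM product_history "
        "WHERE id = ? AND valid_from <= ? AND ? < valid_to",
        (pid, ts, ts)).fetchone()
    return row[0] if row else None

conn.execute(
    "INSERT INTO product_history VALUES (1, 9.99, '2024-01-01', '9999-12-31')")
update_price(conn, 1, 12.50, '2024-06-01')

print(price_as_of(conn, 1, '2024-03-15'))  # 9.99
print(price_as_of(conn, 1, '2024-07-01'))  # 12.5
```

ISO-8601 date strings compare correctly as text, which is why the range predicate works without date functions.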
Have Your Cake and Eat It Too -- Further Dispelling the Myths of the Lambda A... (C4Media)
Video and slides synchronized, mp3 and slide download available at http://bit.ly/15ACXCw.
Tyler Akidau from Google demonstrates Google's MillWheel, a streaming system that promises low latency, strong consistency, and flexibility without relying on the Lambda Architecture. Filmed at qconsf.com.
Tyler Akidau is a Senior Software Engineer at Google. The current Tech Lead for the MillWheel team, he’s spent five years working on massive-scale streaming data processing systems.
Apache Flink's Table & SQL API - unified APIs for batch and stream processing (Timo Walther)
SQL is undoubtedly the most widely used language for data analytics. It is declarative and can be optimized and efficiently executed by most query processors. The community has therefore put effort into adding two relational APIs to Apache Flink: a standard SQL API and a language-integrated Table API.
Both APIs are semantically compatible and share the same optimization and execution path based on Apache Calcite. Since Flink supports both stream and batch processing and many use cases require both kinds of processing, we aim for a unified relational layer.
In this talk we will look at the current API capabilities, find out what's under the hood of Flink's relational APIs, and give an outlook on future features such as dynamic tables, Flink's way of converting streams into tables and vice versa by leveraging the stream-table duality.
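The stream-table duality mentioned above boils down to replaying a changelog stream to materialize a table. The sketch below illustrates the idea only; the row kinds loosely echo Flink's changelog model (+I insert, +U update-after, -D delete), but this is not the Flink API.

```python
# Materialize a changelog stream into a table: the essence of the
# stream-table duality. Row kinds are borrowed loosely from Flink's
# changelog model; the code itself is an illustrative sketch.
def materialize(changelog):
    table = {}
    for kind, key, value in changelog:
        if kind in ("+I", "+U"):
            table[key] = value       # insert or update-after: upsert
        elif kind == "-D":
            table.pop(key, None)     # delete: retract the row
    return table

events = [
    ("+I", "alice", 1),
    ("+I", "bob", 1),
    ("+U", "alice", 2),   # alice's value updated
    ("-D", "bob", 1),     # bob retracted
]
print(materialize(events))  # {'alice': 2}
```

Running the same events in order always yields the same table, which is why a table can equivalently be seen as a snapshot of its changelog.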
MVC allows you to divide responsibilities in your application but offers no help in building the most critical part: the domain logic. This talk will introduce ways that can help you to encapsulate the richness of your domain. We'll look at patterns such as Action Domain Responder and Hexagonal Architecture before introducing Domain Driven Design. Find out how to get beyond MVC and begin modelling your domains in rich, powerful and reusable ways.
Given at TrueNorthPHP 2014:
"MVC presents a great way to divide responsibilities in your application but it offers no help in building the most critical part: the model or domain. This talk will introduce ways that can help you to encapsulate the richness of your domain. We'll look at Action Domain Response as a new way of thinking about the concepts presented in MVC before examining Hexagonal Architecture, allowing you to easily reuse your domain across multiple delivery mechanisms. We'll then finish with an introduction to Domain Driven Design, a technique that allows you to closely align your domain with the business problems it is solving while helping keep things well designed and easily maintainable. By the end of this talk you should have the knowledge needed to begin modelling your domains more powerfully while keeping them aligned to the real world problems they solve."
Back to the future - Temporal Table in SQL Server 2016 (Stéphane Fréchette)
SQL Server 2016 CTP2 introduced support for temporal tables, a database feature that provides built-in support for querying the data stored in a table at any point in time, rather than only the data that is correct at the current moment.
Topics will cover:
What is a Temporal Table? Why temporal? How does it work? When to use it (use cases), and demos.
Timo Walther - Table & SQL API - unified APIs for batch and stream processing (Ververica)
SQL Extensions to Support Streaming Data With Fabian Hueske | Current 2022 (HostedbyConfluent)
For 40 years SQL has been the dominant language for data access and manipulation. Now that an increasing proportion of data is being processed in a streaming way, tool vendors (commercial and open source) have begun using SQL-like syntax in their event stream processing tools.
Over the last couple of years, several of these vendors - including AWS, Confluent, Google, IBM, Microsoft, Oracle, Snowflake and SQLstream - have come together with the Data Management group at INCITS (which maintains the SQL standard) to work on streaming extensions.
INCITS -- the InterNational Committee for Information Technology Standards -- is the central U.S. forum dedicated to creating technology standards for the next generation of innovation. INCITS is accredited by the American National Standards Institute (ANSI).
This talk will look at:
o Why is this happening?
o Who is involved?
o How does the process work?
o What progress has been made?
o When can we expect to see a standard?
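The kind of operation these streaming extensions standardize, for example grouping an unbounded stream into tumbling event-time windows, can be sketched in a few lines. Window size and event shape here are illustrative, not drawn from any draft of the standard.

```python
from collections import defaultdict

# A tumbling-window COUNT: the sort of aggregation streaming SQL
# expresses with a window function over event time. Events are
# (event_time_seconds, key) pairs; the window size is illustrative.
def tumbling_count(events, window_seconds):
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(0, "a"), (5, "a"), (12, "b"), (13, "a"), (21, "a")]
print(tumbling_count(events, 10))
# {(0, 'a'): 2, (10, 'b'): 1, (10, 'a'): 1, (20, 'a'): 1}
```

Each event lands in exactly one window, which is what distinguishes tumbling windows from hopping or sliding ones.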
Best Practices in Business Analysis / Financial Analysis. There are powerful ways to use Excel automation to reduce reporting time and error rates, and to make reports easier to update.
We can leverage Delta Lake and Structured Streaming for write-heavy use cases. This talk will go through a use case at Intuit where we built a merge-on-read (MOR) architecture to meet a very low SLA. With MOR there are different ways to view the fresh data, so we will also go over the performance tests we ran on those methods to arrive at the best one for the given use case.
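A merge-on-read (MOR) design like the one described above can be sketched generically: writes append cheaply to a delta log, and readers pay the cost of merging the base snapshot with the log. The structures below are illustrative, not Delta Lake internals or Intuit's implementation.

```python
# Merge-on-read sketch: the write path is an O(1) append to a delta
# log; the read path merges base data with the log on the fly, with
# later deltas winning over the base. Purely illustrative structures.
base = {"k1": "v1", "k2": "v2"}   # compacted base snapshot
delta_log = []                    # cheap, append-only writes

def write(key, value):
    delta_log.append((key, value))

def read():
    view = dict(base)
    for key, value in delta_log:
        view[key] = value         # newer delta overrides base
    return view

write("k2", "v2-updated")
write("k3", "v3")
print(read())  # {'k1': 'v1', 'k2': 'v2-updated', 'k3': 'v3'}
```

The trade-off is exactly the one the abstract hints at: writes stay fast to hit a low SLA, while reads slow down as the delta log grows, until a compaction folds it back into the base.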
Adminlicious - A Guide To TCO Features In Domino v10 (Gabriella Davis)
With v10 of EVERYTHING due out in Q4 and the public beta now available, it's time to talk about what we know is coming and how to plan for upgrades. In this session I show the features I'm most inspired by (NDAs allowing!), talk about how I'm getting ready, and explain why this is a really exciting time to be an admin!
Take advantage of FME Server’s capabilities for real-time integration and change data capture. Learn about workflows for monitoring and updating your data as it changes. We’ll look at what data sources/systems are monitored out-of-the-box and how you can enable change data capture for other data sources/systems.
Streaming is necessary to handle data rates and latency, but SQL is unquestionably the lingua franca of data. Where do the two meet?
Apache Calcite is extending SQL to include streaming, and the Samza, Storm, and Flink projects are each building it into their engines. In this talk, Julian Hyde describes streaming SQL in detail and shows how you can use streaming SQL in your application. He also describes how Calcite's planner optimizes queries for throughput and latency.
Julian Hyde gave this talk at the first Kafka Summit, San Francisco, 2016/04/26.
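What makes a streaming query different from a standard one is that it never terminates: results are emitted incrementally as rows arrive. A generator gives a minimal sketch of that continuous SELECT ... WHERE semantics; the predicate and rows below are illustrative, not Calcite syntax.

```python
# A continuous query as a generator: conceptually like a streaming
# SELECT * FROM Orders WHERE amount > 100, it emits matches as they
# arrive instead of returning a finite result set.
def continuous_filter(stream, predicate):
    for row in stream:
        if predicate(row):
            yield row

orders = iter([
    {"id": 1, "amount": 50},
    {"id": 2, "amount": 150},
    {"id": 3, "amount": 200},
])

big_orders = continuous_filter(orders, lambda r: r["amount"] > 100)
print(next(big_orders)["id"])  # 2
print(next(big_orders)["id"])  # 3
```

A real planner's job, as the abstract notes, is deciding how to evaluate such a query to balance per-row latency against overall throughput.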
ApacheCon 2020 - Flink SQL in 2020: Time to show off! (Timo Walther)
Four years ago, the Apache Flink community started adding SQL support to ease and unify the processing of static and streaming data. Today, Flink runs business-critical batch and streaming SQL queries at Alibaba, Huawei, Lyft, Uber, Yelp, and many others. Although the community made significant progress in the past years, there are still many things on the roadmap and development is still speeding up. In the past months, several significant improvements and extensions were added, including support for DDL statements, refactorings of the type system and the catalog interface, as well as Apache Hive integration. Since it is difficult to follow all development efforts that happen around Flink SQL and its ecosystem, it is time for an update. This session will focus on a comprehensive demo of what is possible with Flink SQL in 2020. Based on a realistic use case scenario, we'll show how to define tables which are backed by various storage systems and how to solve common tasks with streaming SQL queries. We will demonstrate Flink's Hive integration and show how to define and use user-defined functions. We'll close the session with an outlook of upcoming features.
A talk given by Julian Hyde at FlinkForward, Berlin, on 2016/09/12.
Streaming is necessary to handle data rates and latency, but SQL is unquestionably the lingua franca of data. Is it possible to combine SQL with streaming, and if so, what does the resulting language look like? Apache Calcite is extending SQL to include streaming, and Apache Flink is using Calcite to support both regular and streaming SQL. In this talk, Julian Hyde describes streaming SQL in detail and shows how you can use streaming SQL in your application. He also describes how Calcite’s planner optimizes queries for throughput and latency.
http://flink-forward.org/kb_sessions/streaming-sql/
Back to FME School - Day 2: Your Data and FME (Safe Software)
It’s that time of year. The season is changing and FME ‘school’ is now in session! Join us for a series of 9 mini-talks to learn the latest tips for data transformation, see live demos, and get your FME questions answered. Registration gives you access for all three days — sign up now to tune in to the talks you’re most interested in.
Course schedule - Day 2
Automating Everything – Wednesday, September 27, 8:00am – 10:00am PDT
8:00am – Bulk data processing
8:40am – Ultimate Real-Time: Monitor Anything, Update Anything
9:20am – FME in the Enterprise
In many database applications we first log data and then, a few hours or days later, we start analyzing it. But in a world that’s moving faster and faster, we sometimes need to analyze what is happening NOW.
Azure Stream Analytics allows you to analyze streams of data via a new Azure service. In this session you will see how to get started using this new service. From Event Hubs on the input side to temporal SQL queries, the demos in this session will show you, end to end, how to get started with Azure Stream Analytics.
Search and Society: Reimagining Information Access for Radical Futures (Bhaskar Mitra)
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
More Related Content
Similar to Tracking your data across the fourth dimension
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Key Trends Shaping the Future of Infrastructure (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote explores the key trends across hardware, cloud, and open source: how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk aims to encourage a more independent approach to using PHP frameworks and a more flexible, future-proof style of PHP development.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
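InfluxDB ingests measurements in its line protocol (`measurement,tags fields timestamp`), which is the wire format a JMeter backend listener ultimately produces. Below is a minimal sketch of building one such point; the measurement, tag, and field names are illustrative, not JMeter's actual schema.

```python
# Build an InfluxDB line-protocol point: measurement, comma-separated
# tags, comma-separated fields, and a nanosecond timestamp. The names
# used here are illustrative, not the exact JMeter listener schema.
def line_protocol(measurement, tags, fields, timestamp_ns):
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {timestamp_ns}"

point = line_protocol(
    "jmeter_response",
    {"test": "login", "status": "ok"},
    {"latency_ms": 42, "count": 1},
    1700000000000000000,
)
print(point)
# jmeter_response,status=ok,test=login count=1,latency_ms=42 1700000000000000000
```

Grafana then queries these points from InfluxDB to render the real-time dashboards the webinar demonstrates.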
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
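At its simplest, a deployment bill of materials is a record of which artifact digests were deployed to which environment. The sketch below shows one hypothetical record; the field names are invented for illustration and do not reflect any particular DBOM specification.

```python
import hashlib
import json

# A minimal deployment-bill-of-materials (DBOM) record: a content
# digest of the deployed artifact tied to a target environment.
# Field names are invented for illustration.
def dbom_entry(artifact_name, artifact_bytes, environment):
    return {
        "artifact": artifact_name,
        "sha256": hashlib.sha256(artifact_bytes).hexdigest(),
        "environment": environment,
    }

entry = dbom_entry("payments-service.jar", b"fake-jar-bytes", "prod-us-east")
print(json.dumps(entry, indent=2))
```

Capturing the digest at deployment time is what lets you later answer "exactly which build is running in production?" without trusting mutable tags.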
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
5. – Jeff Carouth, https://twitter.com/jcarouth/status/496842218674470912
“Tonight @JCook21 explained temporal databases and I’m sure my brain is now leaking out of my nose.”
7. Databases are good at ‘now’
❖ Create
❖ Read
❖ Update
❖ Delete
❖ At any point we only see the current state of the data
8. Databases are good at ‘now’
❖ How many people work in each department of the company?
❖ For each product category, how many products are in stock? Where is the stock located?
❖ How many orders are currently in each fulfilment state?
9. The fourth dimension
❖ Show me how salaries paid have changed by department for each quarter over the last 4 years and how they’re forecast to change next year
❖ Show me how stock levels have changed over time. How much stock are we forecast to have at any point in the future?
❖ For audit purposes show me a complete history of every change to this data, what period of time each change was valid for and when we knew about any changes
13. Decision Time
❖ Records the time at which a decision was made
❖ Modelled as a single value
❖ Allows for granularity through the data type used
14. Decision Time
EmpId  Name    Hire Date   Decision to Hire
1      Jeremy  2014-03-03  2014-01-20
2      Anna    2015-01-02  2013-12-15
3      Yann    2013-08-20  2013-08-20
15. Valid Time
“In temporal databases, valid time (VT) is the time period during which a database fact is valid in the modelled reality.”
–Wikipedia
16. Valid Time
❖ Modelled as a period of time between two dates
❖ Lower bound is always closed but upper bound can be open
17. Valid Time
EmpId  Name    Hire date   Termination date
1      Jeremy  2014-03-03  2015-01-20
2      Anna    2015-01-02  ∞
3      Yann    2013-08-20  2015-12-22
4      Colin   2015-05-01  ∞
18. Valid Time
EmpId  Name    Dept  Hire date   Term date   StartVT     EndVT
1      Jeremy  Dev   2014-03-03  ∞           2014-03-03  2014-07-30
1      Jeremy  QA    2014-03-03  2015-01-20  2014-07-31  2015-01-20
2      Anna    Dev   2015-01-02  ∞           2015-01-02  2015-01-30
2      Anna    Mgmt  2015-01-02  ∞           2015-01-31  ∞
3      Yann    Mgmt  2013-08-20  2015-12-22  2013-08-20  ∞
4      Colin   Dev   2015-05-01  ∞           2015-05-01  ∞
20. Valid-time on its own may not be enough!
Name    Type    StartVT                EndVT
Saturn  Planet  Billions of years ago  ∞
Pluto   Planet  Billions of years ago  ∞
21. Valid-time on its own may not be enough!
Name    Type          StartVT                EndVT
Saturn  Planet        Billions of years ago  ∞
Pluto   Dwarf planet  Billions of years ago  ∞
22. Valid-time on its own may not be enough!
Name    Type     StartVT                EndVT
Saturn  Planet   Billions of years ago  ∞
Pluto   Plutoid  Billions of years ago  ∞
23. Valid-time on its own may not be enough!
Name    Type          StartVT                EndVT
Saturn  Planet        Billions of years ago  ∞
Pluto   Planet        Billions of years ago  2006
Pluto   Dwarf planet  2006                   2008
Pluto   Plutoid       2008                   ∞
24. Transaction Time
“In temporal databases, transaction time (TT) is the time period during which a fact stored in the database is considered to be true.”
–Wikipedia
25. Transaction Time
❖ Modelled as a period of time between two dates
❖ Lower bound is always closed but upper bound can be open
26. Transaction Time
Name   Type          StartVT                EndVT  StartTT  EndTT
Pluto  Planet        Billions of years ago  ∞      1930     2006
Pluto  Dwarf planet  Billions of years ago  ∞      2006     2008
Pluto  Plutoid       Billions of years ago  ∞      2008     ∞
27. Valid Time != Transaction Time
Name              Clothing  StartVT          EndVT  StartTT  EndTT
Father Christmas  null      A long time ago  ∞      1973     1975
Santa Claus       red       A long time ago  ∞      1975     1980
Saint Nicholas    red       270 AD           ∞      1980     1982
28. How many temporal aspects should you use?
❖ As many or as few as your application needs!
❖ Tables that implement two aspects are bi-temporal
❖ You can implement more aspects, in which case you have multi-temporal tables
29. Is your head spinning?
❖ Decision time records when a decision was taken
❖ Valid time records the period of time for which the fact is valid
❖ Transaction time records the period of time for which the fact is considered to be true
31. A note on the example tables
CREATE TABLE dept (
DNo INTEGER,
DName VARCHAR(255)
);
CREATE TABLE emp (
ENo INTEGER,
EName VARCHAR(255),
EDept INTEGER
);
32. Periods
❖ Table component, capturing a pair of columns defining a start and end date
❖ Not a new data type, but metadata about columns in the table
❖ Closed-open constraint
❖ Enforces that end time > start time
33. Valid time
❖ Also called application time in SQL:2011
❖ Modelled as a pair of date time columns with a period
❖ Name of the columns and period is up to you
34. Valid time
ALTER TABLE emp ADD (
EStart DATE,
EEnd DATE,
PERIOD FOR EPeriod (EStart, EEnd)
);
35. Temporal primary keys
❖ SQL:2011 allows a valid time period to be named as part of a primary key
❖ Can also enforce that the valid time periods do not overlap
38. Temporal foreign keys
❖ What happens if a parent and child table both define valid time periods?
❖ It doesn’t make sense to allow a row in a child table to reference a row in a parent table where the valid time does not overlap
❖ SQL:2011 allows valid time periods to be part of foreign key constraints
39. Temporal foreign keys
ALTER TABLE dept ADD (
DStart DATE,
DEnd DATE,
PERIOD FOR DPeriod (DStart, DEnd)
);
ALTER TABLE emp
ADD FOREIGN KEY (EDept, EPeriod)
REFERENCES dept (DNo, PERIOD DPeriod);
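What such a temporal foreign key has to enforce can be sketched in plain Python: a child row's valid-time period must be fully covered by the (possibly stitched-together) valid-time periods of the matching parent rows. This is a hedged illustration assuming closed-open periods; the coverage algorithm below is my own, not text from the standard.

```python
from datetime import date

# A period is a closed-open (start, end) pair; None stands in for an open upper bound.

def covered_by(child, parent_periods):
    """True if the child period is fully covered by the union of parent periods."""
    end = lambda e: e if e is not None else date.max
    cursor, child_end = child[0], end(child[1])
    # Walk the parent periods in start order, extending coverage as long as
    # each next period begins at or before the point we have covered so far.
    for start, stop in sorted(parent_periods, key=lambda p: p[0]):
        if start > cursor:          # a gap: the child period is not covered here
            break
        cursor = max(cursor, end(stop))
        if cursor >= child_end:
            return True
    return cursor >= child_end

# Hypothetical dept rows: two adjacent valid-time versions of the same department.
dept_periods = [(date(2011, 1, 1), date(2011, 6, 1)), (date(2011, 6, 1), None)]
assert covered_by((date(2011, 2, 3), date(2011, 9, 10)), dept_periods)
assert not covered_by((date(2010, 1, 1), date(2011, 3, 1)), dept_periods)
```

Note that coverage by the *union* of parent periods matters: the child period may span several adjacent parent rows, as in the first assertion above.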
40. Querying valid time tables
❖ Can query against valid time columns as normal - they’re just normal table columns
❖ Updates and deletes can be performed for a portion of a valid time period
41. Querying valid time tables
❖ SQL:2011 allows you to create periods to use in your queries and use new predicates:
❖ CONTAINS
❖ OVERLAPS
❖ EQUALS
❖ PRECEDES
❖ SUCCEEDS
❖ IMMEDIATELY SUCCEEDS and IMMEDIATELY PRECEDES
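Assuming closed-open [start, end) periods, the intended semantics of these predicates can be sketched in plain Python. This is an illustration of the behaviour, not the standard's exact definitions:

```python
from datetime import date

# A period is a (start, end) pair with closed-open semantics: start <= t < end.
# None as the end stands in for an open upper bound.

def _end(p):
    """Treat an open upper bound (None) as 'infinitely late'."""
    return p[1] if p[1] is not None else date.max

def contains(p, t):
    """p CONTAINS point t."""
    return p[0] <= t < _end(p)

def overlaps(p, q):
    """p OVERLAPS q: the periods share at least one point."""
    return p[0] < _end(q) and q[0] < _end(p)

def equals(p, q):
    return p[0] == q[0] and _end(p) == _end(q)

def precedes(p, q):
    """p PRECEDES q: p ends at or before q starts."""
    return _end(p) <= q[0]

def succeeds(p, q):
    return _end(q) <= p[0]

def immediately_precedes(p, q):
    """With closed-open periods, 'immediately' means the bounds touch exactly."""
    return _end(p) == q[0]

jan = (date(2015, 1, 1), date(2015, 2, 1))
q1 = (date(2015, 1, 1), date(2015, 4, 1))
assert contains(q1, date(2015, 1, 23))
assert overlaps(jan, q1)
assert immediately_precedes(jan, (date(2015, 2, 1), None))
```

The closed-open convention is what makes adjacent periods compose cleanly: January [01-01, 02-01) immediately precedes February [02-01, 03-01) with no gap and no shared point.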
42. Querying valid time tables
UPDATE Emp
FOR PORTION OF EPeriod
FROM DATE '2011-02-03'
TO DATE '2011-09-10'
SET EDept = 4
WHERE ENo = 22217;
43. Querying valid time tables
DELETE FROM Emp
FOR PORTION OF EPeriod
FROM DATE '2011-02-03'
TO DATE '2011-09-10'
WHERE ENo = 22217;
44. Querying valid time tables
SELECT EName, EDept
FROM Emp
WHERE ENo = 22217
AND EPeriod CONTAINS DATE '2015-01-23';
45. Querying valid time tables
SELECT EName, EDept
FROM Emp
WHERE ENo = 31
AND EPeriod OVERLAPS
PERIOD (DATE '2015-01-01',
DATE '2015-01-31');
46. Transaction time
❖ Also known as system time in SQL:2011
❖ Modelled as two DATE or TIMESTAMP columns
❖ Management of the columns for the period is handled by the database for you
47. Transaction time
❖ When data is inserted:
❖ Start of transaction time is set to current time
48. Transaction time
❖ When data is updated:
❖ Transaction time end is set to current time on the existing row
❖ A new row is added with the updated data and a transaction time start of the current time
49. Transaction time
❖ When data is deleted:
❖ Transaction time end is set to current time in the existing row
50. Transaction time
❖ Because the system manages transaction time:
❖ Not possible to alter transaction time values in the past
❖ Not possible to add future dated transaction time values
❖ Referential constraints on historical data are never checked
51. Transaction time
CREATE TABLE emp (
  …,
  Sys_start TIMESTAMP(12) GENERATED ALWAYS AS ROW START,
  Sys_end TIMESTAMP(12) GENERATED ALWAYS AS ROW END,
  PERIOD FOR SYSTEM_TIME (Sys_start, Sys_end)
) WITH SYSTEM VERSIONING;
52. Querying transaction time tables
❖ New predicates to be used with transaction time:
❖ FOR SYSTEM_TIME AS OF
❖ FOR SYSTEM_TIME FROM
❖ FOR SYSTEM_TIME BETWEEN
❖ If none of the above is supplied, the database should only return rows for the current system time
54. Querying transaction time tables
SELECT ENo, EName
FROM emp FOR SYSTEM_TIME AS OF
  TIMESTAMP '2015-01-28 12:45:00'
WHERE ENo = 22;
55. Querying transaction time tables
SELECT ENo, EName
FROM emp FOR SYSTEM_TIME AS OF
  TIMESTAMP '2015-01-28 12:45:00'
WHERE ENo = 22
AND EPeriod CONTAINS DATE '2014-08-27';
56. Grey areas/not implemented yet
❖ Evolving schema over time
❖ Support for period joins
❖ Support for period aggregates or period grouped queries
❖ Support for period normalization
❖ Support for multiple valid time periods per table
58. Current support
❖ Oracle 12c
❖ SQL:2011 compliant, but support is far from complete
❖ PostgreSQL
❖ 9.1 and earlier: temporal contributed package
❖ 9.2 onwards: native range data types
❖ IBM DB2 through its ‘time travel query’ feature
❖ Teradata 13.10 and 14
❖ Handful of others implemented as extensions
60. Implementing valid time
❖ Add a pair of date time columns to your table for the valid time period
❖ Can make these part of your primary key
61. Implementing valid time
❖ Things to consider:
❖ Have to check for end time > start time
❖ Have to check for overlaps in valid time periods
❖ Temporal foreign keys have to be implemented yourself
❖ Queries become potentially more complex
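A minimal sketch of these checks on a DBMS without SQL:2011 support, using SQLite from Python. Column names follow the slides' emp table; the overlap check runs in application code here, though it could equally live in a trigger. A sentinel end date stands in for an open upper bound:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE emp (
        ENo    INTEGER,
        EName  TEXT,
        EDept  INTEGER,
        EStart TEXT NOT NULL,
        EEnd   TEXT NOT NULL,          -- '9999-12-31' stands in for an open bound
        PRIMARY KEY (ENo, EStart),
        CHECK (EEnd > EStart)          -- end of the period must follow its start
    )
""")

def insert_emp(eno, name, dept, start, end="9999-12-31"):
    """Insert a row, rejecting valid-time periods that overlap an existing row
    for the same employee (closed-open [start, end) semantics)."""
    clash = conn.execute(
        "SELECT 1 FROM emp WHERE ENo = ? AND EStart < ? AND ? < EEnd",
        (eno, end, start),
    ).fetchone()
    if clash:
        raise ValueError(f"overlapping valid-time period for employee {eno}")
    conn.execute("INSERT INTO emp VALUES (?, ?, ?, ?, ?)",
                 (eno, name, dept, start, end))

insert_emp(1, "Jeremy", 3, "2014-03-03", "2014-07-31")
insert_emp(1, "Jeremy", 4, "2014-07-31")          # adjacent periods are fine
try:
    insert_emp(1, "Jeremy", 5, "2014-05-01")      # overlaps the first period
except ValueError as e:
    print(e)
```

ISO-8601 date strings compare correctly as text, which is what makes the `EStart < ? AND ? < EEnd` overlap test and the `CHECK` constraint work in SQLite without a date type.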
62. Implementing transaction time
❖ Add a column recording transaction time start to your table
❖ For each table create a backup table mirroring the columns in the main table, adding a transaction time end column too
❖ Create a trigger that fires on each update or delete to copy old values from the main table to the backup table
❖ Should add transaction time end to the backup table
❖ Should also update the transaction time start to now in the main table if the operation is an update
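The backup-table-and-trigger scheme above can be sketched end-to-end in SQLite. Table and trigger names here are my own illustration, and a real system would want finer-grained timestamps than SQLite's second-resolution CURRENT_TIMESTAMP:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (
        ENo      INTEGER PRIMARY KEY,
        EName    TEXT,
        TT_start TEXT DEFAULT CURRENT_TIMESTAMP   -- when this version became current
    );
    -- Backup table mirrors emp and adds the end of the transaction-time period.
    CREATE TABLE emp_history (
        ENo      INTEGER,
        EName    TEXT,
        TT_start TEXT,
        TT_end   TEXT
    );
    -- On update: copy the old version to history, closing its period,
    -- and restart the transaction-time period on the live row.
    CREATE TRIGGER emp_update AFTER UPDATE OF EName ON emp
    BEGIN
        INSERT INTO emp_history
        VALUES (OLD.ENo, OLD.EName, OLD.TT_start, CURRENT_TIMESTAMP);
        UPDATE emp SET TT_start = CURRENT_TIMESTAMP WHERE ENo = NEW.ENo;
    END;
    -- On delete: the last version moves to history with its period closed.
    CREATE TRIGGER emp_delete AFTER DELETE ON emp
    BEGIN
        INSERT INTO emp_history
        VALUES (OLD.ENo, OLD.EName, OLD.TT_start, CURRENT_TIMESTAMP);
    END;
""")

conn.execute("INSERT INTO emp (ENo, EName) VALUES (22, 'Anna')")
conn.execute("UPDATE emp SET EName = 'Anna B' WHERE ENo = 22")
conn.execute("DELETE FROM emp WHERE ENo = 22")

# Every prior version survives in emp_history with a closed period.
print(conn.execute("SELECT EName, TT_start, TT_end FROM emp_history").fetchall())
```

Note the update trigger fires only on changes to `EName`, so its own `UPDATE` of `TT_start` does not re-fire it; with more columns you would list each tracked column, or drop the `OF` clause and guard against recursion.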
63. Implementing transaction time
❖ Things to consider:
❖ Extra complexity
❖ How long should backup data be kept for?
❖ Do you optimize for fast reads or writes?
❖ Should truncating the main table delete the data from the backup?
64. More information
❖ Wikipedia article on Temporal Databases
❖ Temporal features in SQL:2011 (PDF)
❖ Time and Relational Theory
65. Thanks for listening!
❖ Any questions?
❖ I’d love some feedback
❖ https://joind.in/talk/view/13294
❖ Contact me:
❖ @JCook21
❖ jeremycook0@icloud.com