Database Systems Design, Implementation, and Management — OllieShoresna
Database Systems: Design, Implementation, and Management
Eighth Edition
Chapter 11
Database Performance Tuning and Query Optimization
Database Systems, 8th Edition 2
Objectives
• In this chapter, you will learn:
– Basic database performance-tuning concepts
– How a DBMS processes SQL queries
– About the importance of indexes in query processing
– About the types of decisions the query optimizer has to make
– Some common practices used to write efficient SQL code
– How to formulate queries and tune the DBMS for optimal performance
– Performance tuning in SQL Server 2005
11.1 Database Performance-Tuning Concepts
• Goal of database performance is to execute queries as fast as possible
• Database performance tuning
– Set of activities and procedures designed to reduce the response time of the database system
• All factors must operate at optimum level with minimal bottlenecks
• Good database performance starts with good database design
Performance Tuning: Client and Server
• Client side
– Generate SQL queries that return the correct answer in the least amount of time
• Using the minimum amount of resources at the server
– SQL performance tuning
• Server side
– DBMS environment configured to respond to clients’ requests as fast as possible
• Optimum use of existing resources
– DBMS performance tuning
DBMS Architecture
• All data in a database are stored in data files
• Data files
– Automatically expand in predefined increments known as extents
– Grouped in file groups or table spaces
• Table space or file group:
– Logical grouping of several data files that store data with similar characteristics
Basic DBMS architecture
DBMS Architecture (continued)
• Data cache or buffer cache: shared, reserved memory area
– Stores the most recently accessed data blocks in RAM
• SQL cache or procedure cache: stores the most recently executed SQL statements
– Also PL/SQL procedures, including triggers and functions
• DBMS retrieves data from permanent storage and places it in RAM
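The data-cache idea above can be sketched concretely. The snippet below uses SQLite purely as a stand-in for a full DBMS: its page cache plays the role of the buffer cache, and its size can be tuned with `PRAGMA cache_size` (the 8 MiB figure is an illustrative choice, not from the slides).

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Inspect the current page-cache size; a negative value means
# "this many KiB of RAM", a positive value means "this many pages".
before = conn.execute("PRAGMA cache_size").fetchone()[0]

# Enlarge the data cache to roughly 8 MiB so recently accessed
# data blocks (pages) stay in RAM instead of being re-read from disk.
conn.execute("PRAGMA cache_size = -8192")
after = conn.execute("PRAGMA cache_size").fetchone()[0]

print(before, after)
```

Production DBMSs expose the same knob under different names (for example, buffer pool or data cache sizing parameters); the principle — keep hot blocks in RAM — is the same.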
DBMS Architecture (continued)
• Input/output request: low-level data access operation to/from computer devices, such as memory, hard disks, video devices, and printers
• Working with data in the data cache is faster than working with data in data files
– The DBMS does not have to wait for the hard disk to retrieve data
• The majority of performance-tuning activities focus on minimizing I/O operations
• Typical DBMS processes:
– Listener, User, Scheduler, Lock manager, Optimizer
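Because most tuning work aims at minimizing I/O, it helps to see how an optimizer chooses between reading every block and using an index. A minimal sketch with SQLite's `EXPLAIN QUERY PLAN` (the `customer` table and `idx_state` index are illustrative names, not from the slides):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cus_id INTEGER PRIMARY KEY, cus_state TEXT)")
conn.executemany(
    "INSERT INTO customer VALUES (?, ?)",
    [(i, "FL" if i % 2 else "GA") for i in range(1000)],
)

query = "SELECT * FROM customer WHERE cus_state = 'FL'"

# With no index on cus_state, the only available plan reads every block.
scan_plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

# After adding an index, the optimizer can touch far fewer blocks.
conn.execute("CREATE INDEX idx_state ON customer (cus_state)")
index_plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

print(scan_plan)   # a full-table SCAN
print(index_plan)  # a SEARCH using idx_state
```

The plan text changes from a table scan to an index search — the same decision the slides attribute to the Optimizer process, driven by the goal of fewer I/O requests.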
Database Statistics
• Measurements about database objects and available resources
– Tables, indexes, number of processors used, processor speed, temporary space available
• Used by the DBMS to make critical decisions about improving query-processing efficiency
• Can be gathered manually by ...
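Gathering statistics manually can be sketched with SQLite's `ANALYZE` command (again as a stand-in — other systems use their own commands, such as `ANALYZE` in PostgreSQL or `UPDATE STATISTICS` in SQL Server; the `invoice` table here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (inv_id INTEGER PRIMARY KEY, cus_id INTEGER)")
conn.execute("CREATE INDEX idx_cus ON invoice (cus_id)")
# 500 invoices spread over 50 customers -> about 10 rows per customer.
conn.executemany(
    "INSERT INTO invoice VALUES (?, ?)",
    [(i, i % 50) for i in range(500)],
)

conn.execute("ANALYZE")  # gather statistics manually

# sqlite_stat1 records, per index, the row count and the average number
# of rows per distinct key value -- the measurements the optimizer uses.
stats = conn.execute("SELECT tbl, idx, stat FROM sqlite_stat1").fetchall()
print(stats)
```

The recorded row count and selectivity numbers are exactly the kind of "measurements about database objects" the slide describes: the optimizer reads them when estimating the cost of candidate plans.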
SSIS is a component of SQL Server that allows for data integration and workflow. It has separate runtime and data flow engines. The runtime engine manages package execution and control flow, while the data flow engine extracts, transforms, and loads data in a parallel, buffered manner for improved performance. SSAS is the analysis component that builds multidimensional cubes from relational data sources for analysis. It uses an OLAP storage model and has components for querying, processing, and caching data and calculations. SSRS is the reporting component that allows users to build interactive, parameterized reports from various data sources and deliver them through a web portal.
This document provides an overview of the Database Management Systems -20ISE43A course. It lists the required textbooks and references. It then outlines the 5 modules that will be covered in the course: introduction to databases, entity relationship diagrams, the relational model, relational algebra, and advanced SQL and transaction management. The document also lists the course outcomes and provides brief descriptions of some of the key topics that will be covered, including embedded SQL, dynamic SQL, database stored procedures, transaction concepts, and concurrency issues.
This document provides an overview of Module 5: Optimize query performance in Azure SQL. The module contains 3 lessons that cover analyzing query plans, evaluating potential improvements, and reviewing table and index design. Lesson 1 explores generating and comparing execution plans, understanding how plans are generated, and the benefits of the Query Store. Lesson 2 examines database normalization, data types, index types, and denormalization. Lesson 3 describes wait statistics, tuning indexes, and using query hints. The lessons aim to help administrators optimize query performance in Azure SQL.
The document discusses Oracle system catalogs which contain metadata about database objects like tables and indexes. System catalogs allow accessing information through views with prefixes like USER, ALL, and DBA. Examples show how to query system catalog views to get information on tables, columns, indexes and views. Query optimization and evaluation are also covered, explaining how queries are parsed, an execution plan is generated, and the least cost plan is chosen.
Antes de migrar de 10g a 11g o 12c, tome en cuenta las siguientes consideraciones. No es tan sencillo como simplemente cambiar de motor de base de datos, se necesita hacer consideraciones a nivel del aplicativo.
Database Systems Design, Implementation, and ManagementOllieShoresna
Database Systems: Design,
Implementation, and
Management
Eighth Edition
Chapter 11
Database Performance Tuning and
Query Optimization
Database Systems, 8th Edition 2
Objectives
• In this chapter, you will learn:
– Basic database performance-tuning concepts
– How a DBMS processes SQL queries
– About the importance of indexes in query processing
– About the types of decisions the query optimizer has
to make
– Some common practices used to write efficient SQL
code
– How to formulate queries and tune the DBMS for
optimal performance
– Performance tuning in SQL Server 2005
Database Systems, 8th Edition 3
11.1 Database Performance-Tuning Concepts
• Goal of database performance is to execute
queries as fast as possible
• Database performance tuning
– Set of activities and procedures designed to
reduce response time of database system
• All factors must operate at optimum level with
minimal bottlenecks
• Good database performance starts with
good database design
Database Systems, 8th Edition 4
Database Systems, 8th Edition 5
Performance Tuning: Client and Server
• Client side
– Generate SQL query that returns correct answer
in least amount of time
• Using minimum amount of resources at server
– SQL performance tuning
• Server side
– DBMS environment configured to respond to
clients’ requests as fast as possible
• Optimum use of existing resources
– DBMS performance tuning
Database Systems, 8th Edition 6
DBMS Architecture
• All data in database are stored in data files
• Data files
– Automatically expand in predefined increments
known as extends
– Grouped in file groups or table spaces
• Table space or file group:
– Logical grouping of several data files that store
data with similar characteristics
Database Systems, 8th Edition 7
Basic DBMS architecture
Database Systems, 8th Edition 8
DBMS Architecture (continued)
• Data cache or buffer cache: shared, reserved
memory area
– Stores most recently accessed data blocks in RAM
• SQL cache or procedure cache: stores most
recently executed SQL statements
– Also PL/SQL procedures, including triggers and
functions
• DBMS retrieves data from permanent storage and
places it in RAM
Database Systems, 8th Edition 9
DBMS Architecture (continued)
• Input/output request: low-level data access
operation to/from computer devices, such as
memory, hard disks, videos, and printers
• Data cache is faster than data in data files
– DBMS does not wait for hard disk to retrieve data
• Majority of performance-tuning activities focus on
minimizing I/O operations
• Typical DBMS processes:
– Listener, User, Scheduler, Lock manager, Optimizer
Database Systems, 8th Edition 10
Database Statistics
• Measurements about database objects and available
resources
– Tables, Indexes, Number of processors used,
Processor speed, Temporary space available
• Make critical decisions about improving query
processing efficiency
• Can be gathered manually by ...
SSIS is a component of SQL Server that allows for data integration and workflow. It has separate runtime and data flow engines. The runtime engine manages package execution and control flow, while the data flow engine extracts, transforms, and loads data in a parallel, buffered manner for improved performance. SSAS is the analysis component that builds multidimensional cubes from relational data sources for analysis. It uses an OLAP storage model and has components for querying, processing, and caching data and calculations. SSRS is the reporting component that allows users to build interactive, parameterized reports from various data sources and deliver them through a web portal.
This document provides an overview of the Database Management Systems -20ISE43A course. It lists the required textbooks and references. It then outlines the 5 modules that will be covered in the course: introduction to databases, entity relationship diagrams, the relational model, relational algebra, and advanced SQL and transaction management. The document also lists the course outcomes and provides brief descriptions of some of the key topics that will be covered, including embedded SQL, dynamic SQL, database stored procedures, transaction concepts, and concurrency issues.
Database performance tuning and query optimizationDhani Ahmad
Database performance tuning involves activities to ensure queries are processed in the minimum amount of time. A DBMS processes queries in three phases - parsing, execution, and fetching. Indexes are crucial for speeding up data access by facilitating operations like searching and sorting. Query optimization involves the DBMS choosing the most efficient plan for accessing data, such as which indexes to use.
This document discusses query processing and provides an overview of algorithms for evaluating relational algebra operations. It begins with an overview of the basic steps in query processing - parsing and translation, optimization, and evaluation. It then discusses how to measure query costs by focusing on resource consumption, particularly disk access. The document outlines algorithms for common relational operations like selection, sorting, and join. It provides cost estimates for different algorithms like file scan, index scan, and block nested loops join. The overall summary is that the document describes query processing and evaluation strategies for relational algebra operations like selection and join, providing cost estimates to help optimize queries.
This document discusses query processing and algorithms for evaluating relational algebra operations. It begins with an overview of the basic steps in query processing: parsing and translation, optimization, and evaluation. It then discusses how to measure query costs using a cost model based on disk access times. The document outlines several algorithms (A1-A10) for performing selection operations on relations using file scans and indexes. It provides cost estimates for each algorithm based on factors like the number of blocks accessed and index height. The algorithms can handle selections with equality and inequality conditions, as well as complex selections using conjunctions, disjunctions, and negation.
Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...Databricks
This document summarizes a presentation on extending Spark SQL Data Sources APIs with join push down. The presentation discusses how join push down can significantly improve query performance by reducing data transfer and exploiting data source capabilities like indexes. It provides examples of join push down in enterprise data pipelines and SQL acceleration use cases. The presentation also outlines the challenges of network speeds and exploiting data source capabilities, and how join push down addresses these challenges. Future work discussed includes building a cost model for global optimization across data sources.
Active/Active Database Solutions with Log Based Replication in xDB 6.0EDB
EDB’s xDB Replication Server is a highly flexible database replication tool that provides single and multi-master solutions for read/write scalability, availability, performance, and data integration with Oracle, SQL Server and Postgres. Dozens of worldwide customers have been using xDB Replication Server for the past 4 years, and we are extremely excited to introduce a pivotal new release, version 6.0.
This presentation reviews the features in xDB 6.0 including:
* Faster and more efficient replication with log-based Multi Master replication for Postgres Plus and PostgreSQL
* Easier to configure publication tables in bulk with pattern matching selection rules
* Ensure High Availability with integration of the 'Control Schema'
* Improved performance in conflict detection rules
This document provides an overview of database security concepts including confidentiality, integrity, and availability. It defines database security as protecting the confidentiality, integrity, and availability of data. Key concepts discussed include authentication, authorization, access control, data encryption, data privacy, auditing, and logging. The document also outlines security problems such as non-fraudulent threats from errors or disasters and fraudulent threats from authorized users abusing privileges or hostile agents attacking the system.
The document discusses several SQL best practices and new features in SQL Server 2012. It covers basic concepts like sets and order in relational databases. It also discusses strategic imperatives like stability, adaptability and maintainability. New SQL Server 2012 features highlighted include xVelocity in-memory technologies, columnstore indexes, Power View interactive reporting, data compression techniques, and the Data Quality Services for data cleansing and profiling. The document also provides tips on topics like layered coding, efficient resource usage, avoiding cursors, proper use of transactions, and joins versus other operators.
This document provides an overview of performance tuning the MySQL server. It discusses where to find server configuration and status information, how to analyze what the database is doing using status variables, and which configuration variables can be tuned for optimization, including global, per-session, and storage engine variables. Key areas covered include memory usage, query analysis, indexing strategies, and tuning storage engines like InnoDB and MyISAM.
This document provides information about an inplant training program offered by KAASHIV INFOTECH in Chennai, India. It outlines 5-day training schedules for students of CSE/IT/MCA and ECE/EE/EIE focused on topics like Big Data, cloud computing, CCNA, ethical hacking, and MATLAB. It also lists a 5-day training schedule for mechanical/civil engineering students and provides contact information for the training program.
This document provides information about an inplant training program offered by KAASHIV INFOTECH in Chennai, India. It outlines 5-day training schedules for students of CSE/IT/MCA and ECE/EE/EIE focused on topics like Big Data, cloud computing, CCNA, ethical hacking, and MATLAB. It also lists a 5-day training schedule for mechanical/civil engineering students and provides contact information for the training program.
The document discusses techniques used by a database management system (DBMS) to process, optimize, and execute high-level queries. It describes the phases of query processing which include syntax checking, translating the SQL query into an algebraic expression, optimization to choose an efficient execution plan, and running the optimized plan. Query optimization aims to minimize resources like disk I/O and CPU time by selecting the best execution strategy. Techniques for optimization include heuristic rules, cost-based methods, and semantic query optimization using constraints.
Data Warehouse Physical Design,Physical Data Model, Tablespaces, Integrity Constraints, ETL (Extract-Transform-Load) ,OLAP Server Architectures, MOLAP vs. ROLAP, Distributed Data Warehouse ,
This document provides a summary of 20 interview questions related to Informatica. It discusses concepts like the components of Informatica, what a repository is and how to add one, different types of transformations used in mappings and their purposes, how to make transformations reusable, how to import source and target definitions, and what a session is and how to create it. The document is a training resource that provides answers to common Informatica interview questions.
Prepare for your interview with these top 20 SAP HANA interview questions. For more IT Profiles, Sample Resumes, Practice exams, Interview Questions, Live Training and more…visit ITLearnMore – Most Trusted Website for all Learning Needs by Students, Graduates and Working Professionals.
Looking to add weight to your resume? Check out for ITLearnmore for varied online IT courses at affordable prices intended for career boost. There is so much in store for both fresh graduates and professionals here. Hurry up..! Get updated with the current IT job market requirements and related courses.For more information visit http://www.ITLearnMore.com.
The document describes the basic steps involved in query processing, including parsing, optimization, and evaluation. It discusses various algorithms for performing relational algebra operations like selection, sorting, and join. Selection algorithms include linear scan, binary search, and using indexes. Sorting can be done by building an index or using external sort-merge. The goal of optimization is to choose the most efficient evaluation plan based on estimated costs.
SQL Server 2008 Development for ProgrammersAdam Hutson
The document outlines a presentation by Adam Hutson on SQL Server 2008 development for programmers, including an overview of CRUD and JOIN basics, dynamic versus compiled statements, indexes and execution plans, performance issues, scaling databases, and Adam's personal toolbox of SQL scripts and templates. Adam has 11 years of database development experience and maintains a blog with resources for SQL topics.
Business intelligence and data warehousesDhani Ahmad
This chapter discusses business intelligence and data warehouses. It covers how operational data differs from decision support data, the components of a data warehouse including facts, dimensions and star schemas, and how online analytical processing (OLAP) and SQL extensions support analysis of multidimensional decision support data. The chapter also discusses data mining, requirements for decision support databases, and considerations for implementing a successful data warehouse project.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Database performance tuning and query optimizationDhani Ahmad
Database performance tuning involves activities to ensure queries are processed in the minimum amount of time. A DBMS processes queries in three phases - parsing, execution, and fetching. Indexes are crucial for speeding up data access by facilitating operations like searching and sorting. Query optimization involves the DBMS choosing the most efficient plan for accessing data, such as which indexes to use.
This document discusses query processing and provides an overview of algorithms for evaluating relational algebra operations. It begins with an overview of the basic steps in query processing - parsing and translation, optimization, and evaluation. It then discusses how to measure query costs by focusing on resource consumption, particularly disk access. The document outlines algorithms for common relational operations like selection, sorting, and join. It provides cost estimates for different algorithms like file scan, index scan, and block nested loops join. The overall summary is that the document describes query processing and evaluation strategies for relational algebra operations like selection and join, providing cost estimates to help optimize queries.
This document discusses query processing and algorithms for evaluating relational algebra operations. It begins with an overview of the basic steps in query processing: parsing and translation, optimization, and evaluation. It then discusses how to measure query costs using a cost model based on disk access times. The document outlines several algorithms (A1-A10) for performing selection operations on relations using file scans and indexes. It provides cost estimates for each algorithm based on factors like the number of blocks accessed and index height. The algorithms can handle selections with equality and inequality conditions, as well as complex selections using conjunctions, disjunctions, and negation.
Extending Apache Spark SQL Data Source APIs with Join Push Down with Ioana De...Databricks
This document summarizes a presentation on extending Spark SQL Data Sources APIs with join push down. The presentation discusses how join push down can significantly improve query performance by reducing data transfer and exploiting data source capabilities like indexes. It provides examples of join push down in enterprise data pipelines and SQL acceleration use cases. The presentation also outlines the challenges of network speeds and exploiting data source capabilities, and how join push down addresses these challenges. Future work discussed includes building a cost model for global optimization across data sources.
Active/Active Database Solutions with Log Based Replication in xDB 6.0EDB
EDB’s xDB Replication Server is a highly flexible database replication tool that provides single and multi-master solutions for read/write scalability, availability, performance, and data integration with Oracle, SQL Server and Postgres. Dozens of worldwide customers have been using xDB Replication Server for the past 4 years, and we are extremely excited to introduce a pivotal new release, version 6.0.
This presentation reviews the features in xDB 6.0 including:
* Faster and more efficient replication with log-based Multi Master replication for Postgres Plus and PostgreSQL
* Easier to configure publication tables in bulk with pattern matching selection rules
* Ensure High Availability with integration of the 'Control Schema'
* Improved performance in conflict detection rules
This document provides an overview of database security concepts including confidentiality, integrity, and availability. It defines database security as protecting the confidentiality, integrity, and availability of data. Key concepts discussed include authentication, authorization, access control, data encryption, data privacy, auditing, and logging. The document also outlines security problems such as non-fraudulent threats from errors or disasters and fraudulent threats from authorized users abusing privileges or hostile agents attacking the system.
The document discusses several SQL best practices and new features in SQL Server 2012. It covers basic concepts like sets and order in relational databases. It also discusses strategic imperatives like stability, adaptability and maintainability. New SQL Server 2012 features highlighted include xVelocity in-memory technologies, columnstore indexes, Power View interactive reporting, data compression techniques, and the Data Quality Services for data cleansing and profiling. The document also provides tips on topics like layered coding, efficient resource usage, avoiding cursors, proper use of transactions, and joins versus other operators.
This document provides an overview of performance tuning the MySQL server. It discusses where to find server configuration and status information, how to analyze what the database is doing using status variables, and which configuration variables can be tuned for optimization, including global, per-session, and storage engine variables. Key areas covered include memory usage, query analysis, indexing strategies, and tuning storage engines like InnoDB and MyISAM.
This document provides information about an inplant training program offered by KAASHIV INFOTECH in Chennai, India. It outlines 5-day training schedules for students of CSE/IT/MCA and ECE/EE/EIE focused on topics like Big Data, cloud computing, CCNA, ethical hacking, and MATLAB. It also lists a 5-day training schedule for mechanical/civil engineering students and provides contact information for the training program.
This document provides information about an inplant training program offered by KAASHIV INFOTECH in Chennai, India. It outlines 5-day training schedules for students of CSE/IT/MCA and ECE/EE/EIE focused on topics like Big Data, cloud computing, CCNA, ethical hacking, and MATLAB. It also lists a 5-day training schedule for mechanical/civil engineering students and provides contact information for the training program.
The document discusses techniques used by a database management system (DBMS) to process, optimize, and execute high-level queries. It describes the phases of query processing which include syntax checking, translating the SQL query into an algebraic expression, optimization to choose an efficient execution plan, and running the optimized plan. Query optimization aims to minimize resources like disk I/O and CPU time by selecting the best execution strategy. Techniques for optimization include heuristic rules, cost-based methods, and semantic query optimization using constraints.
Data Warehouse Physical Design,Physical Data Model, Tablespaces, Integrity Constraints, ETL (Extract-Transform-Load) ,OLAP Server Architectures, MOLAP vs. ROLAP, Distributed Data Warehouse ,
This document provides a summary of 20 interview questions related to Informatica. It discusses concepts like the components of Informatica, what a repository is and how to add one, different types of transformations used in mappings and their purposes, how to make transformations reusable, how to import source and target definitions, and what a session is and how to create it. The document is a training resource that provides answers to common Informatica interview questions.
Prepare for your interview with these top 20 SAP HANA interview questions. For more IT Profiles, Sample Resumes, Practice exams, Interview Questions, Live Training and more…visit ITLearnMore – Most Trusted Website for all Learning Needs by Students, Graduates and Working Professionals.
Looking to add weight to your resume? Check out for ITLearnmore for varied online IT courses at affordable prices intended for career boost. There is so much in store for both fresh graduates and professionals here. Hurry up..! Get updated with the current IT job market requirements and related courses.For more information visit http://www.ITLearnMore.com.
The document describes the basic steps involved in query processing, including parsing, optimization, and evaluation. It discusses various algorithms for performing relational algebra operations like selection, sorting, and join. Selection algorithms include linear scan, binary search, and using indexes. Sorting can be done by building an index or using external sort-merge. The goal of optimization is to choose the most efficient evaluation plan based on estimated costs.
SQL Server 2008 Development for ProgrammersAdam Hutson
The document outlines a presentation by Adam Hutson on SQL Server 2008 development for programmers, including an overview of CRUD and JOIN basics, dynamic versus compiled statements, indexes and execution plans, performance issues, scaling databases, and Adam's personal toolbox of SQL scripts and templates. Adam has 11 years of database development experience and maintains a blog with resources for SQL topics.
Business intelligence and data warehousesDhani Ahmad
This chapter discusses business intelligence and data warehouses. It covers how operational data differs from decision support data, the components of a data warehouse including facts, dimensions and star schemas, and how online analytical processing (OLAP) and SQL extensions support analysis of multidimensional decision support data. The chapter also discusses data mining, requirements for decision support databases, and considerations for implementing a successful data warehouse project.