This document contains examples of Ruby code demonstrating population hashes, triangle objects, inheritance between classes, ActiveRecord migrations, querying database records, RESTful routing, and form helpers. It shows how to retrieve values from a population hash, define a subclass that inherits and extends a parent class's behavior, generate and modify database tables using migrations, find records by id, define RESTful member resources with different HTTP verbs, generate links between pages, and build a form.
ALV grids can be customized and run in the background by creating a layout. Events from the ALV control can be handled by creating an object to receive events and linking it to handler methods. Controls are not included in the tab order by default, so the set_focus method may need to be called. Enqueue/dequeue function modules are automatically created when a lock object is created in SE11 and can be used in programs.
Presentation topic is stack data structure (AizazAli21)
This document discusses the stack data structure. It defines a stack as a linear data structure where elements are added and removed from only one end, called the top. Common stack operations like push, pop, and peek are described. Examples are given to illustrate how stacks can be used to perform calculations. The document also explains how stacks can be implemented using both arrays and linked lists. Key points are that an array-based implementation maintains a size counter and pushes and pops at one fixed end of the array, while a linked-list implementation uses the head node as the top, inserting and removing there.
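To make the two implementations concrete, here is a minimal Python sketch (Python is used for illustration; the summarized slides are not reproduced): an array-backed stack and a linked-list-backed stack, each with push, pop, and peek.

```python
class ArrayStack:
    """Array-backed stack: a list plus its implicit size; the top is the last element."""
    def __init__(self):
        self.items = []

    def push(self, x):
        self.items.append(x)

    def pop(self):
        return self.items.pop()      # removes and returns the top

    def peek(self):
        return self.items[-1]        # returns the top without removing it


class Node:
    def __init__(self, value, next_node=None):
        self.value, self.next = value, next_node


class LinkedStack:
    """Linked-list-backed stack: the head node is the top; push/pop replace the head."""
    def __init__(self):
        self.head = None

    def push(self, x):
        self.head = Node(x, self.head)

    def pop(self):
        node, self.head = self.head, self.head.next
        return node.value

    def peek(self):
        return self.head.value


s = ArrayStack()
s.push(1); s.push(2)
print(s.peek(), s.pop(), s.pop())  # → 2 2 1
```

Both variants give O(1) push, pop, and peek; the linked version trades per-node allocation overhead for never needing to resize.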
In this tutorial, we learn to create univariate bar plots using the Graphics package in R. We also learn to modify graphical parameters associated with the bar plot.
Data Visualization With R: Learn To Modify Title, Axis Labels & Range (Rsquared Academy)
This document contains slides from a data visualization course in R. It discusses how to modify the title, axis labels, and range of plots created in R. Specifically, it shows how to add these elements either by including arguments in the plot() function or by using the title() function. The main, xlab, ylab, xlim, and ylim arguments can be used in plot() to customize the title, axis labels, and ranges. Alternatively, the title() function can be used after plotting, but it may overwrite default axis labels, so the ann argument should be set to FALSE in plot().
Learn to manipulate numbers in R using the built in numeric functions. This tutorial is part of the Working With Data module of the R Programming course offered by r-squared.
Programming the SQL Way with Common Table ExpressionsEDB
The document discusses using common table expressions (CTEs) in SQL to program imperatively and allow for looping and processing hierarchical structures. It provides examples of using CTEs for tasks like generating sequences, doing string and number manipulations, calculating factorials and prime factors, and even generating ASCII art patterns. The examples show how CTEs allow SQL queries to be written more like imperative code with looping and recursion.
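The looping idea behind CTE-based programming can be sketched with a recursive CTE run through Python's stdlib `sqlite3` module (SQLite syntax shown; PostgreSQL and SQL Server are very similar):

```python
import sqlite3

# A recursive CTE generating the sequence 1..5 -- the SQL equivalent of a loop.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE seq(n) AS (
        SELECT 1                      -- anchor member: the loop's starting value
        UNION ALL
        SELECT n + 1 FROM seq         -- recursive member: the loop's "n = n + 1"
        WHERE n < 5                   -- termination condition
    )
    SELECT n FROM seq
""").fetchall()
print([n for (n,) in rows])  # → [1, 2, 3, 4, 5]
```

The same anchor/recursive-member pattern walks hierarchical structures (e.g. org charts) by joining the recursive member back to a parent-id column.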
This document provides descriptions of various MATLAB functions organized into categories:
- Commands for managing MATLAB sessions and plotting
- Commands for enhancing plots
- Commands for creating special matrices and performing mathematical/complex/statistical functions
- Predefined input functions for generating signals
Window functions are often used to simplify complex queries and for data analytics. They allow analysis that is normally performed in client applications to be more efficiently processed by the database server.
This presentation explains the many window function facilities and how they can be used to produce useful SQL query results.
In this webinar you will learn:
- The basics of window functions
- Window function syntax
- Window syntax with generic aggregates
- Window-specific functions
- Window function examples
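The window-function basics listed above can be sketched with SQLite (3.25+) via Python's `sqlite3`; the table and values are illustrative, not from the presentation:

```python
import sqlite3

# A per-department running total and department total, computed by the database
# rather than in client code -- the efficiency point made above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (dept TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("a", 10), ("a", 20), ("b", 5), ("b", 15)])
rows = conn.execute("""
    SELECT dept, amount,
           SUM(amount) OVER (PARTITION BY dept ORDER BY amount) AS running,
           SUM(amount) OVER (PARTITION BY dept)                 AS dept_total
    FROM sales
    ORDER BY dept, amount
""").fetchall()
for r in rows:
    print(r)
```

Note how the same generic aggregate (SUM) yields a running total when the window has an ORDER BY and a group total when it does not.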
An AVL tree, ordered by key, supports:
- insert: a standard insert; O(log n)
- find: a standard find (without removing, of course); O(log n)
- remove: a standard remove; O(log n)
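A minimal Python sketch of the insert path (delete follows the same rebalancing idea; distinct keys are assumed, and this is an illustration, not code from the source):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.h = key, None, None, 1

def height(n):
    return n.h if n else 0

def update(n):
    n.h = 1 + max(height(n.left), height(n.right))

def rot_right(y):                      # fixes the left-left case
    x = y.left
    y.left, x.right = x.right, y
    update(y); update(x)
    return x

def rot_left(x):                       # fixes the right-right case
    y = x.right
    x.right, y.left = y.left, x
    update(x); update(y)
    return y

def insert(n, key):
    if n is None:
        return Node(key)
    if key < n.key:
        n.left = insert(n.left, key)
    else:
        n.right = insert(n.right, key)
    update(n)
    b = height(n.left) - height(n.right)   # balance factor
    if b > 1:
        if key > n.left.key:               # left-right: reduce to left-left
            n.left = rot_left(n.left)
        return rot_right(n)
    if b < -1:
        if key < n.right.key:              # right-left: reduce to right-right
            n.right = rot_right(n.right)
        return rot_left(n)
    return n

def inorder(n):
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

root = None
for k in range(1, 8):                      # sorted input: worst case for a plain BST
    root = insert(root, k)
print(inorder(root), height(root))  # → [1, 2, 3, 4, 5, 6, 7] 3
```

Because every insert rebalances on the way back up, the height stays logarithmic, which is what makes all three operations O(log n).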
This document discusses how to limit and sort data retrieved from a database table using SQL queries. It covers using the WHERE clause to restrict rows by conditions, comparison operators like = and BETWEEN, logical operators like AND and OR, and the ORDER BY clause to sort rows in ascending or descending order based on one or more columns. Examples are provided for each technique.
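The WHERE / BETWEEN / OR / ORDER BY techniques can be sketched with an in-memory SQLite table (table name and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("ann", 900), ("bob", 1500), ("cid", 2500), ("dee", 1800)])
# WHERE restricts rows (here combining BETWEEN and OR); ORDER BY ... DESC sorts them.
rows = conn.execute("""
    SELECT name, salary FROM emp
    WHERE salary BETWEEN 1000 AND 2000 OR name = 'ann'
    ORDER BY salary DESC
""").fetchall()
print(rows)  # → [('dee', 1800), ('bob', 1500), ('ann', 900)]
```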
This document provides an overview of different CSS layout models including flexbox and CSS grids. It discusses when each model should be used such as controlling elements along rows and columns or defining template areas for page layouts. Resources for learning more are provided, including following experts like Jen Simmons and reading specifications on the W3C website or guides on CSS Tricks. In the end, the document confirms that browsers do support these CSS layout models.
CLUTO is a software toolkit used for clustering high-dimensional datasets and analyzing cluster characteristics. It contains two main algorithms: Vcluster, which clusters based on the actual multi-dimensional data representation, and Scluster, which clusters based on a pre-computed similarity matrix. CLUTO can be run from the command line with various optional parameters to control the clustering method, analysis, and visualization of results.
This document describes a 3D version of the classic arcade game Arkanoid called Darkonoid. It includes 3D objects like a base, ball, and items to destroy. The ball bounces off surfaces according to physics calculations. Matrices are used to represent transformations and detect collisions between objects. Algorithms like range calculations and reflections are optimized for performance.
This document discusses writeable common table expressions (CTEs) in PostgreSQL. It provides examples of how writeable CTEs can be used for partition management, query clustering, and transaction management. The document also briefly describes the process of adding support for writeable CTEs to PostgreSQL, including reworking the planner and executor to handle modified tables.
This document discusses techniques for joining data from multiple database tables. It covers equijoins to retrieve matching records between tables, non-equijoins to retrieve records with non-equal conditions, outer joins to include non-matching records, and self joins to join a table to itself. Examples are provided to demonstrate how to write SQL statements to perform each type of join and include additional conditions.
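The equijoin and outer-join cases can be contrasted in one SQLite sketch (tables and rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (id INTEGER, dname TEXT);
    CREATE TABLE emp  (name TEXT, dept_id INTEGER);
    INSERT INTO dept VALUES (1, 'sales'), (2, 'hr');
    INSERT INTO emp  VALUES ('ann', 1), ('bob', 1), ('cid', NULL);
""")
# Equijoin: only rows whose emp.dept_id matches a dept.id appear.
inner = conn.execute("""
    SELECT e.name, d.dname FROM emp e JOIN dept d ON e.dept_id = d.id
    ORDER BY e.name
""").fetchall()
# Outer join: the non-matching row ('cid') is kept, padded with NULL.
outer = conn.execute("""
    SELECT e.name, d.dname FROM emp e LEFT JOIN dept d ON e.dept_id = d.id
    ORDER BY e.name
""").fetchall()
print(inner)  # → [('ann', 'sales'), ('bob', 'sales')]
print(outer)  # → [('ann', 'sales'), ('bob', 'sales'), ('cid', None)]
```

A self join is the same JOIN syntax with the table listed twice under different aliases.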
Pig is a platform for analyzing large datasets that uses a simple declarative language to express data flow tasks. It has a nested data model of fields, tuples, bags, and maps and supports common operators like FILTER, FOREACH, JOIN, GROUP, and ORDER. User-defined functions can extend its built-in functionality. Pig compiles queries into multiple MapReduce jobs as needed to perform the work in parallel across a cluster.
I survey three approaches for data visualization in R: (i) the built-in base graphics functions, (ii) the ggplot2 package, and (iii) the lattice package. I also discuss some methods for visualizing large data sets.
Single-row functions can manipulate data items, accept arguments and return one value, and act on each row returned. There are various types of single-row functions including character, number, date, and conversion functions. Character functions manipulate character strings, number functions perform calculations, and date functions modify date formats. Functions allow data to be formatted, calculated, and converted as needed for different queries and outputs.
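One function from each single-row category, shown in SQLite syntax (date arithmetic syntax differs across databases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Each single-row function returns exactly one value per input row.
row = conn.execute("""
    SELECT UPPER('hello'),               -- character function
           ROUND(3.14159, 2),            -- number function
           DATE('2024-01-31', '+1 day'), -- date function (SQLite syntax)
           CAST('42' AS INTEGER)         -- conversion function
""").fetchone()
print(row)  # → ('HELLO', 3.14, '2024-02-01', 42)
```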
Group functions operate on sets of rows to give one result per group. Some key points covered in the document include:
1. Group functions like AVG, COUNT, MAX, MIN, SUM allow aggregating data across rows grouped by columns like department.
2. The GROUP BY clause divides rows into groups; any column in the SELECT list that is not inside an aggregate function must appear in the GROUP BY.
3. The HAVING clause is used to filter groups based on conditions with aggregate functions and comes after GROUP BY.
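The three points above fit in one SQLite sketch (table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?)",
                 [("sales", 1000), ("sales", 3000), ("hr", 1500), ("hr", 1700)])
# One result row per group; HAVING filters whole groups after aggregation.
rows = conn.execute("""
    SELECT dept, COUNT(*), AVG(salary), MAX(salary)
    FROM emp
    GROUP BY dept
    HAVING AVG(salary) > 1700
    ORDER BY dept
""").fetchall()
print(rows)  # → [('sales', 2, 2000.0, 3000)]
```

The hr group (average 1600) is dropped by HAVING, which is exactly what WHERE cannot do, since WHERE runs before the groups exist.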
The document provides an overview of basic SQL statements and capabilities. It discusses using SELECT statements to project columns, select rows, and join tables. It also covers arithmetic expressions, column aliases, concatenation, and eliminating duplicate rows. SQL statements are executed through the SQL*Plus environment, which allows editing, saving, and running SQL code and commands.
This document provides an overview of SQL concepts including installing MySQL, data types, data definition and manipulation statements, filtering, sorting, aggregation, and join operations. It covers downloading and installing MySQL, describes common data types like numeric, date/time, and string types. It also explains how to create databases and tables, add/update/delete data, and perform queries with WHERE, ORDER BY, LIMIT, and JOIN clauses.
In computer science, functional programming is a programming paradigm—a style of building the structure and elements of computer programs—that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data.
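A tiny Python illustration of that definition: computation as function evaluation over immutable data, with no state changed along the way.

```python
from functools import reduce

# A pure function: its output depends only on its input; nothing is mutated.
def square(x):
    return x * x

nums = (1, 2, 3, 4)                      # a tuple: immutable data
squares = tuple(map(square, nums))       # evaluation produces new data
total = reduce(lambda acc, x: acc + x, squares, 0)
print(squares, total)  # → (1, 4, 9, 16) 30
```

Note that `nums` is unchanged afterward; `map` and `reduce` build results instead of overwriting anything.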
R is a free software environment for statistical computing and graphics that provides a wide variety of statistical techniques and graphical methods. It includes base functions and packages, and is used through interfaces like RStudio. R represents data using objects like vectors, matrices, and data frames. Common operations include calculations, generating random variables, and visualizing data. R can be used to analyze a glass fragment dataset to visualize compositions and potentially classify an unknown fragment.
This document discusses relational databases and the Oracle implementation. It introduces relational database concepts like tables, relations, keys and SQL. It describes how Oracle uses SQL and the PL/SQL programming language to interact with and manage data in a relational database management system. PL/SQL allows embedding SQL statements in procedural code blocks for data manipulation and queries.
The document provides information on sample sizes and key metrics for brand health and taste tests. For a brand health trend analysis, sample sizes ranged from 236-280 and metrics such as top of mind awareness, spontaneous awareness, consideration, and most often used brand were reported for Brand X across four quarters. A taste test comparison involved samples of 200 each for a client yogurt, nearest competitor, and another client yogurt. Metrics like appeal, liking, texture, taste, and purchase intention were reported.
This document provides instructions for installing and configuring authentication for Git. It outlines how to install Git on Linux, OS X, and Windows operating systems. It also describes how to generate SSH keys for authentication on Linux, OS X, and Windows. Finally, it explains how to set up the SSH configuration file and test the key connection.
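The SSH configuration file mentioned above typically looks like this sketch (host alias, key filename, and path are illustrative; the key itself would be generated beforehand with `ssh-keygen -t ed25519`):

```
# ~/.ssh/config -- host entry and key used for Git authentication
Host github.com
    User git
    IdentityFile ~/.ssh/id_ed25519
    IdentitiesOnly yes
```

The key connection can then be tested with `ssh -T git@github.com`.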
This document discusses SQL objects and PL/SQL datatypes in Oracle. It provides an overview of PL/SQL as an extension to SQL that allows procedural logic. It describes PL/SQL-specific and SQL object datatypes, including records, collections, and pipelined functions. It also covers SQL collection operators and comparing collections in PL/SQL.
Mobile Reporting Introduction and FAQ v21 (sabbir456)
M-Reporting is an effective solution for businesses with a sales force on the go. Highly customizable and reliable. For more information please contact sabbir456@gmail.com or sabbir@airnetmobilecom
M-Reporting introduction and FAQ pharma 20140316 (sabbir456)
M-Reporting is a mobile application service that provides data collection and reporting tools for field sales and monitoring teams. It allows field personnel to submit sales orders, delivery confirmations, surveys and other reports directly from their mobile devices. The solution has over 12,000 users and processes over 2.5 million transactions daily across industries such as FMCG, pharmaceuticals, and government agencies. It streamlines existing workflows, saves time, reduces errors and costs while improving productivity and time to market.
How to Become a Thought Leader in Your Niche (Leslie Samuel)
Are bloggers thought leaders? Here are some tips on how you can become one. Provide great value, put awesome content out there on a regular basis, and help others.
The document provides an overview of various Oracle tips and tricks, including CASE statements, joins, timestamps, renaming tables/columns, merge statements, subqueries, window functions, hierarchical queries, XML, grouping sets, rollups and cubes, indexes, temporary tables and more. Key features introduced in Oracle 9i such as the CASE statement, full outer joins, timestamps and the WITH clause are highlighted.
Functions in Oracle can be used to manipulate data values and are categorized as single-row/scalar functions and group/aggregate functions. Single-row functions operate on each row and return one value per row, while group functions operate on sets of values to return one result. The GROUP BY clause is used to group or categorize data and can be used with aggregate functions to return summary results for each group.
Introduction to Oracle Functions (SQL) (Abhishek Sharma)
Functions make query results easier to understand and manipulate data values. There are two categories of functions: single row/scalar functions that return one value per row, and group/aggregate functions that operate on sets of values to return a single result. The GROUP BY clause groups rows based on columns and is used with aggregate functions to return summary results for each group.
This document discusses new features in SQL Server including MERGE statements, table valued parameters, grouping sets, and FILESTREAM storage. MERGE statements allow inserting, updating, and deleting data in one statement based on matching or non-matching rows between two tables. Table valued parameters allow passing tables of data as parameters to stored procedures. Grouping sets enable grouping data by multiple columns in a single query. FILESTREAM storage integrates the database engine with the file system to allow storing large binary objects on disk for improved performance.
Are you an Oracle developer or a DBA?
Do you know the difference between aggregate and analytic functions?
Without complex sub-queries or self-joins, do you know how to:
Calculate running/cumulative totals and moving/centered averages?
List products with revenues above or below their peers or product groups?
Compute the ratio of one category’s sales to the total sales?
Select the Top-N or Top N % of the customers/products?
Classify advertisers into quartiles/n-tiles based on the revenue potential?
Compare period-over-period (year-over-year, month-over-month) growth and rank advancement?
Convert rows into columns (pivot), columns into rows (unpivot) or aggregate strings?
Perform what-if analysis and hypothetical ranking?
Analytic functions are more performant because tables need to be scanned only once. They make you more productive because there is no need to write procedural code. No wonder Tom Kyte, a well-respected Oracle guru, says analytic functions are the best thing since sliced bread.
In the first half, I will cover the basics of the various analytic functions:
Ranking: RANK, DENSE_RANK, ROW_NUMBER, NTILE, CUME_DIST, PERCENT_RANK
Windowing: SUM, AVG, MAX, MIN, FIRST_VALUE, LAST_VALUE
Reporting: RATIO_TO_REPORT
Others: FIRST/LAST, LEAD/LAG, hypothetical ranking
In the second half, I will show how powerful these functions are with a few examples.
If there is time, I will cover enhanced aggregation (ROLLUP, CUBE, and GROUPING SETS extensions to the GROUP BY clause)
This class would be useful for developers and DBAs alike, especially those working in analytics, business intelligence, and data warehouse environments.
Are you already an expert in analytic functions? Then come and help me refine the content.
For more info, read
http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/analysis.htm
http://download.oracle.com/docs/cd/E11882_01/server.112/e16579/aggreg.htm
Additional topics touched on:
- rollup and cross-tabulation across different dimensions using the ROLLUP, CUBE and GROUPING SETS extensions to the GROUP BY clause
- most active time periods (i.e. days when the most tickets are open in BZ, hours with the most take-offs and landings, months with the highest sales, 5-minute periods with the maximum number of calls made, etc.)
- data densification
- rank last year vs. this year, rank growth, running/cumulative totals (Year-To-Date/Month-To-Date summation), moving averages, Year-Over-Year comparison, sales projection, average/min/max time between one sale and the next, products with above- and below-average sales
- overall average, sum, departmental average, sum, ranking, and job-wise ranking in one SQL statement
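Several of the patterns above (running totals, year-over-year deltas via LAG, ranking) can be sketched in one query; SQLite is used here as a stand-in for Oracle since the analytic-function syntax is shared, and the table is illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (yr INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(2020, 100), (2021, 150), (2022, 120)])
rows = conn.execute("""
    SELECT yr,
           amount,
           SUM(amount) OVER (ORDER BY yr)            AS running_total,
           amount - LAG(amount) OVER (ORDER BY yr)   AS yoy_change,
           RANK() OVER (ORDER BY amount DESC)        AS amount_rank
    FROM sales
    ORDER BY yr
""").fetchall()
for r in rows:
    print(r)
```

No self-joins or subqueries are needed: each analytic column is computed in the same single pass over the table.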
The document discusses aggregate functions in SQL such as SUM, AVG, COUNT, MAX, MIN. It provides examples of using these functions to calculate totals, averages, counts and find maximum and minimum values from columns in tables. It also covers the use of the GROUP BY clause to perform aggregate calculations on grouped data and the HAVING clause to filter groups.
This document provides an overview of querying and reporting in SQL, covering topics like arithmetic operators, built-in functions, selecting data, grouping results, joins, and subqueries. The agenda includes learning objectives, descriptions of SELECT statements, and explanations of concepts like aggregate functions, limiting results, sorting data, and correlating subqueries.
The document provides an overview of Data Query Language (DQL) syntax for SELECT statements including:
- Selecting columns from tables
- Using column aliases
- Filtering rows with the WHERE clause
- Working with NULL values
- Sorting results with the ORDER BY clause
- Grouping rows with the GROUP BY clause and aggregate functions
- Filtering groups with the HAVING clause
- Sorting on multiple columns
- Nested subqueries
Various use cases for Oracle database version 12c MATCH_RECOGNIZE data pattern matching functionality, not only for classic pattern matching like finding W patterns in stock ticker data, but also used for more general purpose SQL as "declarative analytics." Presentation given at OUGN Spring Conference 2016.
SQL is a programming language used for managing data in relational databases. It allows users to read, manipulate, and change data using a variety of functions. Some key points about SQL include:
- It is semantically easy to understand and lets users directly access large amounts of data stored in databases.
- Data analysis done in SQL is easy to audit and replicate compared to spreadsheet tools.
- Common SQL statements include SELECT, INSERT, UPDATE, DELETE, and basic clauses like WHERE, GROUP BY, ORDER BY, LIMIT.
- SQL also supports functions like COUNT, SUM, AVG, MIN, MAX for aggregating data, joins to combine data from multiple tables, and various operators for filtering data.
MySQL is an open-source relational database management system that uses SQL and runs a server providing multi-user access to databases. It allows users to perform queries and make changes to data through commands like SELECT, INSERT, UPDATE, DELETE. Stored procedures and functions allow users to write and save blocks of SQL code for repeated execution with consistent results.
1. The document discusses various SQL commands for creating, manipulating, and querying database tables. It covers commands like CREATE TABLE, INSERT, SELECT, UPDATE, DELETE, ALTER TABLE, COMMENT, and more.
2. Aggregate and mathematical functions like COUNT, MAX, MIN, ROUND, and TRUNC are described along with logical and comparison operators.
3. The document provides examples of using operators, functions, joins and grouping with detailed explanations.
The document outlines SQL commands for creating and manipulating databases and tables, including creating and deleting databases and tables, inserting, updating, deleting and reading records from tables, and using clauses like WHERE, ORDER BY, GROUP BY and aggregate functions like COUNT, SUM, AVG, MIN, MAX. It also discusses set operations like UNION, INTERSECT, EXCEPT and using nested queries.
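The set operations mentioned can be sketched against two small illustrative tables (SQLite shown; MySQL gained INTERSECT and EXCEPT only in recent versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (x INTEGER);
    CREATE TABLE b (x INTEGER);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (2), (3), (4);
""")
union = conn.execute(
    "SELECT x FROM a UNION SELECT x FROM b ORDER BY x").fetchall()
intersect = conn.execute(
    "SELECT x FROM a INTERSECT SELECT x FROM b ORDER BY x").fetchall()
difference = conn.execute(
    "SELECT x FROM a EXCEPT SELECT x FROM b ORDER BY x").fetchall()
print(union)       # → [(1,), (2,), (3,), (4,)]  (duplicates removed)
print(intersect)   # → [(2,), (3,)]
print(difference)  # → [(1,)]
```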
The document provides information on various SQL functions. It discusses functions for sorting query results, performing calculations on aggregate data, grouping data, and filtering groups. Date and string functions are also covered, along with numeric and mathematical functions. Common functions include ORDER BY for sorting, SUM, AVG, COUNT for aggregates, GROUP BY for grouping, and HAVING for filtering groups. NOW() and SYSDATE() are described as functions for returning the current date and time.
The document summarizes 11 new features in Oracle Database 11g Release 2. It discusses improvements to parallelism, analytics, external tables, recursive queries, and flashback features. Key points include automated parallel DML, improved analytic functions like LISTAGG, using external tables with preprocessors on directories, recursive queries with common table expressions, and enhanced time travel capabilities.
Learning
Base SAS,
Advanced SAS,
Proc SQL,
ODS,
SAS in financial industry,
Clinical trials,
SAS Macros,
SAS BI,
SAS on Unix,
SAS on Mainframe,
SAS interview Questions and Answers,
SAS Tips and Techniques,
SAS Resources,
SAS Certification questions...
visit http://sastechies.blogspot.com
The document discusses various SQL concepts including aggregate functions like MIN(), MAX(), COUNT(), AVG(), and SUM(); the GROUP BY clause; the HAVING clause; different types of joins like inner joins, outer joins, full outer joins; and examples of queries using these concepts.
Simplifying SQL with CTE's and windowing functions (Clayton Groom)
Too busy to learn the new capabilities of SQL Server? This session will cover several of the new features of the T-SQL language, specifically Common Table Expressions (CTEs) and Windowing Functions. This will be a code-heavy session with examples that you can readily leverage in your solutions.
The focus will be on techniques to shape and manipulate your data for easier consumption by your application, and to leverage your SQL Server to avoid writing code in your application.
A basic to intermediate understanding of T-SQL is required.
The document discusses the GROUP BY clause in SQL, which groups or categorizes data in a table into smaller groups based on specified column(s). Group functions like SUM, COUNT, MAX, MIN can then return summary information for each group. The GROUP BY clause is used with SELECT statements to group data and apply aggregate functions to each group. It explains how to group data by single or multiple columns, and restrict groups using the HAVING clause.
The document provides information about MySQL including:
1. MySQL is an open source relational database management system based on SQL that is used to add, remove, and modify information in databases.
2. It describes basic MySQL commands like CREATE TABLE, DROP TABLE, SELECT, INSERT, UPDATE, and provides syntax examples.
3. It also covers advanced commands, functions in MySQL like aggregate functions, numeric functions and string functions as well as stored procedures.
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack (shyamraj55)
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Infrastructure Challenges in Scaling RAG with Custom AI models (Zilliz)
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
HCL Notes and Domino License Cost Reduction in the World of DLAU (panagenda)
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
UiPath Test Automation using UiPath Test Suite series, part 6 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
The UiPath Test Automation with generative AI and OpenAI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI with OpenAI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed (Malak Abu Hammad)
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Driving Business Innovation: Latest Generative AI Advancements & Success Story (Safe Software)
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Programming Foundation Models with DSPy - Meetup Slides (Zilliz)
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... (SOFTTECHHUB)
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Standard SQL functionality focuses on rows. When you want to explore the relationship between rows, those rows would have to come from different sources, even if it was the same table with a different alias. And those different data sources would have to be joined. Analytic functions allow the rows in a result set to 'peek' at each other, avoiding the need for joining duplicated data sources.
A partition separates discrete and independent slices of data. A window dictates how far back or forward a row can peek WITHIN the partition. The default window is from the 'start' of the partition to the current row. The concept of 'start' generally requires an explicit ORDER BY.
Without analytics, all the expressions and columns in a row of a result set had to come from within that row. Analytics allows you to 'peek' at your neighbours
You can have multiple analytics in a single select, all with different partitioning and/or ordering characteristics. You may want to partition by Customer or Order, and order by dates ascending and descending. Like any SQL, there will be limits on how complex you ALLOW things to get.
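As a sketch of the idea, here is a runnable analog using SQLite's window functions (SQLite 3.25 or later, via Python's sqlite3 module); the emp table, names, and wages are invented for illustration, and the OVER syntax is essentially the same as Oracle's:

```python
import sqlite3

# Two analytics in one SELECT, each with its own partitioning and
# ordering: a per-sector wage total alongside an overall wage ranking.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (name TEXT, sector TEXT, wage INTEGER);
INSERT INTO emp VALUES
  ('Homer', '7G', 200), ('Lenny', '7G', 100),
  ('Carl',  '7G', 100), ('Moe',   '9i', 150);
""")

rows = conn.execute("""
SELECT name,
       wage,
       SUM(wage) OVER (PARTITION BY sector) AS sector_total,
       RANK()    OVER (ORDER BY wage DESC)  AS overall_rank
FROM emp
ORDER BY overall_rank, name
""").fetchall()
for r in rows:
    print(r)
```

Because each analytic carries its own OVER clause, the per-sector total and the overall ranking coexist in one SELECT with no self-join.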
This 'New Aggregate' section is pretty much only here to emphasize this function.
If I want the biggest selling item, there's little point in telling me how many I've sold if you can't tell me WHAT I sold.
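To illustrate the point with a runnable sketch: SQLite lacks Oracle's MAX ... KEEP (DENSE_RANK FIRST ...), so this hypothetical example uses a RANK() window to the same effect, returning WHAT sold the most rather than just how many. The item_sales table and its figures are made up:

```python
import sqlite3

# Return the top-selling item(s) along with the quantity, instead of
# the bare MAX(qty). Ties both come back, as RANK() would in Oracle.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE item_sales (item TEXT, qty INTEGER);
INSERT INTO item_sales VALUES
  ('Widget', 40), ('Gadget', 75), ('Sprocket', 75);
""")

best = conn.execute("""
SELECT item, qty
FROM (SELECT item, qty,
             RANK() OVER (ORDER BY qty DESC) AS rnk
      FROM item_sales)
WHERE rnk = 1
ORDER BY item
""").fetchall()
print(best)
```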
'X' is made up, just to give a row in Cities that has the same population as Sydney
It looks better with your own user defined collections.
XML allows complex or 'wide' data sets (lots of columns) to be pulled together. It is a (mostly) reliable mechanism for grouping data so that it can be ungrouped at a later point.
You can group up an entire record this way, treat it as a single value as you move it around (eg in a MAX … KEEP) and then decompose it back at the end.
Wraps the <Name><Pop> into a higher level record.
Turn the results into a single row. This is the AGGREGATION function.
Finally, you may need to wrap the XML records into a higher level chunk. Here we have a PARENT, with multiple LINE entries each of which consists of a NAME and POP element.
Better than the old days when you had to rely on DECODE, which was a right royal PITA if you needed greater than/less than style comparisons, especially with strings where you couldn't use SIGN.
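A minimal sketch of what a searched CASE buys you over DECODE: greater-than/less-than comparisons, including on strings. This runs in SQLite via Python's sqlite3; the names and the 'M' split point are invented:

```python
import sqlite3

# Searched CASE with a string inequality -- the kind of comparison
# DECODE could not express without SIGN-style contortions.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
SELECT name,
       CASE WHEN name < 'M' THEN 'first half'
            ELSE 'second half' END AS alpha_half
FROM (SELECT 'Apu' AS name UNION ALL SELECT 'Moe' UNION ALL SELECT 'Ned')
ORDER BY name
""").fetchall()
for r in rows:
    print(r)
```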
These are also available in SQL Server: Google "DAT317: T-SQL Power! The OVER Clause: Your Key to No-Sweat Problem Solving", which was presented at Tech Ed North America 2010.
The most common scenario is a Top-N query.
Not wanting to lose all the employees from a particular sector, Smithers brings a report ranking the employees within their department (or sector).
NOTE: Since I used WAGE DESC as the order by, nulls were put to the top. I can avoid this with a NULLS LAST clause:

select name, wage, sector,
       row_number() over (partition by sector order by wage desc nulls last) rn,
       rank()       over (partition by sector order by wage desc nulls last) rnk,
       dense_rank() over (partition by sector order by wage desc nulls last) drnk
from   emp
where  sector in ('7G','9i')
order  by sector, wage desc nulls last

ROW_NUMBER gives non-duplicating, consecutive numbers. If the ORDER BY is not deterministic, the results may differ. It can answer "Give me the two highest paid employees" and guarantee no more than two rows, but with the risk that it isn't deterministic: you might get Lenny or Carl. RANK gives the same number when the ORDER BY values match, but will skip numbers. It can be the best for "Give me the two highest paid employees", with the caveat that you may get more than two records if there are 'ties' at the end; for 7G, Homer, Lenny and Carl would be returned. DENSE_RANK gives the same number when the ORDER BY values match, and the next number is always consecutive. In this case "Give me the three highest salaries and the people to whom they are paid" would return the four people in 7G with salaries of 200, 100 and 50. If there are no ties, the results are equivalent.
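The three ranking functions can be compared side by side in SQLite (3.25+) from Python; the employees and wages here are invented, and no NULLS LAST is needed since no wage is null:

```python
import sqlite3

# ROW_NUMBER vs RANK vs DENSE_RANK over the same ordering.
# The tie between Carl and Lenny shows each function's behaviour.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (name TEXT, wage INTEGER);
INSERT INTO emp VALUES
  ('Homer', 200), ('Lenny', 100), ('Carl', 100), ('Gil', 50);
""")

rows = conn.execute("""
SELECT name,
       ROW_NUMBER() OVER (ORDER BY wage DESC) AS rn,
       RANK()       OVER (ORDER BY wage DESC) AS rnk,
       DENSE_RANK() OVER (ORDER BY wage DESC) AS drnk
FROM emp
ORDER BY wage DESC, name
""").fetchall()
for r in rows:
    print(r)
```

Note that ROW_NUMBER's assignment of 2 vs 3 between the tied rows is not deterministic, exactly as the notes warn; RANK gives both a 2 and then skips to 4, while DENSE_RANK gives 2 followed by 3.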
Cumulative amount demonstrates the ORDER BY. It is generally less confusing if, where you have an ORDER BY in an analytic, you have the same ORDER BY at the bottom of the query.
Because Lenny and Carl both earn 100, the SUM analytic 'groups' both together. However the ROW_NUMBER analytic orders them uniquely. Using the ROW_NUMBER derived value as a filter means that a group is broken and the results look wrong.
The default for the windowing clause is RANGE, which is deterministic and has rows of equivalent value at the same level. The alternative is ROWS which puts in an artificial, and arbitrary, differentiator as a tie-breaker. The UNBOUNDED PRECEDING is also the default. It just means start at the beginning (eg the beginning of the partition, but we'll get on to partitions later).
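A sketch of RANGE vs ROWS on tied values, runnable in SQLite (3.25+); the data is invented. RANGE gives tied ORDER BY values the same cumulative total, while ROWS breaks the tie arbitrarily:

```python
import sqlite3

# Same running SUM, once with a RANGE window (deterministic, tied
# rows share a value) and once with a ROWS window (arbitrary tie-break).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE emp (name TEXT, wage INTEGER);
INSERT INTO emp VALUES ('Homer', 200), ('Lenny', 100), ('Carl', 100);
""")

rows = conn.execute("""
SELECT name, wage,
       SUM(wage) OVER (ORDER BY wage DESC
                       RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS rng,
       SUM(wage) OVER (ORDER BY wage DESC
                       ROWS  BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS rws
FROM emp
ORDER BY wage DESC, name
""").fetchall()
for r in rows:
    print(r)
```

Both tied rows get 400 under RANGE (the peer is included in the frame), but under ROWS one of them gets 300 and the other 400, and which is which is not guaranteed.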
NTILE (and the similar PERCENT_RANK) can be useful for excluding the outliers. For example, select only the rows where the PERCENT_RANK is between 0.1 and 0.9 to exclude the top and bottom 10% of 'weird' values, or select the middle third. PERCENT_RANK always returns a proportion; NTILE allows you to choose your own bucket count. NTILE is also available in SQL Server. Postgres also has analytics, which are enhanced in Postgres 9.
create table test_mill as
select round(dbms_random.normal, 3) n
from   dual
connect by level < 100000;

select round(n*2) label, count(*) val
from   (select n, ntile(9) over (order by n) nt from test_mill)
where  nt [not] in (1,9)
group  by round(n*2)
order  by 2
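The same trimming idea as a runnable sketch in SQLite (3.25+) from Python, since dbms_random and CONNECT BY are Oracle-specific; 1,000 normally distributed values are bucketed into 10 tiles and the outer tiles dropped:

```python
import random
import sqlite3

# Trim outliers with NTILE: bucket into 10 tiles, keep tiles 2..9.
# With 1000 rows, each tile holds exactly 100 rows.
random.seed(0)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_mill (n REAL)")
conn.executemany("INSERT INTO test_mill VALUES (?)",
                 [(random.gauss(0, 1),) for _ in range(1000)])

kept = conn.execute("""
SELECT COUNT(*) FROM
  (SELECT n, NTILE(10) OVER (ORDER BY n) AS nt FROM test_mill)
WHERE nt NOT IN (1, 10)
""").fetchone()[0]
print(kept)  # 800: the middle 80% of the distribution
```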
IGNORE NULLS
Emphasize that May went from 130 to 170, an increase of 40 (or about 31%). Not much use for LEAD, but it is pretty similar.

create table sales (period date, amount number);

insert into sales
select add_months(trunc(sysdate,'YYYY'), rownum - 1),
       round(dbms_random.value(100, 500), -1)
from   dual
connect by level < 10;

column perc format 9999.99

select to_char(period,'Month') mon,
       amount,
       lag(amount) over (order by period) prev_amt,
       100 * (amount - lag(amount) over (order by period))
           / lag(amount) over (order by period) perc
from   sales
order  by period
/
Ignore nulls syntax (11g):

select to_char(period,'Month') mon,
       amount,
       lag(amount) ignore nulls over (order by period) prev_amt
from   sales
order  by period
/
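A runnable analog of the LAG example above, in SQLite (3.25+); the months and amounts are invented to reproduce the 130-to-170 jump, and note that SQLite has no IGNORE NULLS clause:

```python
import sqlite3

# Month-over-month delta via LAG: prev_amt is NULL for the first
# row, and perc is the percentage change from the previous month.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (period TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('2024-04', 130), ('2024-05', 170), ('2024-06', 150);
""")

rows = conn.execute("""
SELECT period, amount,
       LAG(amount) OVER (ORDER BY period) AS prev_amt,
       ROUND(100.0 * (amount - LAG(amount) OVER (ORDER BY period))
                   / LAG(amount) OVER (ORDER BY period), 2) AS perc
FROM sales
ORDER BY period
""").fetchall()
for r in rows:
    print(r)
```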
Partitioning by quarter: get the sales value for the first month of each quarter.
Answers questions that I never need to ask, such as the rolling total of the last three months.
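Whether or not you need it, the rolling total of the last three months is a one-line window. A sketch in SQLite (3.25+) with invented figures:

```python
import sqlite3

# Rolling three-month total: the current row plus the two before it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (period TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('2024-01', 100), ('2024-02', 200), ('2024-03', 150), ('2024-04', 300);
""")

rows = conn.execute("""
SELECT period, amount,
       SUM(amount) OVER (ORDER BY period
                         ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS rolling_3
FROM sales
ORDER BY period
""").fetchall()
for r in rows:
    print(r)
```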
Filter is applied AFTER the analytic:

SQL> explain plan for
  2  select * from
  3    (select order_id, line_id,
  4            sum(value) over (order by line_id) cumul
  5     from order_lines)
  6  where order_id = 10;

Explained.

SQL> select * from table(dbms_xplan.display);

Plan hash value: 2716399136

-----------------------------------------------------------------------------------
| Id  | Operation           | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |             |     6 |   234 |     4  (25)| 00:00:01 |
|*  1 |  VIEW               |             |     6 |   234 |     4  (25)| 00:00:01 |
|   2 |   WINDOW SORT       |             |     6 |   234 |     4  (25)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| ORDER_LINES |     6 |   234 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("ORDER_ID"=10)
In this case, it allows predicate pushing:

SQL> explain plan for
  2  select * from
  3    (select order_id, line_id,
  4            sum(value) over
  5              (partition by order_id order by line_id) cumul
  6     from order_lines)
  7  where order_id = 10;

Explained.

SQL> select * from table(dbms_xplan.display);

Plan hash value: 2716399136

-----------------------------------------------------------------------------------
| Id  | Operation           | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |             |     3 |   117 |     4  (25)| 00:00:01 |
|   1 |  VIEW               |             |     3 |   117 |     4  (25)| 00:00:01 |
|   2 |   WINDOW SORT       |             |     3 |   117 |     4  (25)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| ORDER_LINES |     3 |   117 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("ORDER_ID"=10)

The predicate is likewise pushed when the analytic sits inside a view queried with a bind variable:

SQL_ID 7zc4srwzv21gq, child number 0
-------------------------------------
SELECT * FROM ORD_LN_VW WHERE ORDER_ID = :B1

Plan hash value: 2082499838

-----------------------------------------------------------------------------------
| Id  | Operation           | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |             |       |       |     4 (100)|          |
|   1 |  VIEW               | ORD_LN_VW   |     1 |    39 |     4  (25)| 00:00:01 |
|   2 |   WINDOW SORT       |             |     1 |    39 |     4  (25)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| ORDER_LINES |     1 |    39 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("ORDER_ID"=:B1)
select decode(grouping(to_char(period,'Q')), 1, 'Total',
              nvl(to_char(period,' MM Month'),' Subtotal')) mnth,
       sum(amount) amt
from   sales
group  by rollup(to_char(period,'Q'), period);

Sometimes use GROUPING in a filter predicate to avoid duplicate results. [Should be able to use HAVING, but ORA-0600 in XE]

select m1, m2, amt
from   (select m1, m2, sum(amount) amt,
               grouping(m1) gm1, grouping(m2) gm2
        from   (select to_char(period,'Month') m1,
                       to_char(period,'MM') m2,
                       amount
                from   sales)
        group  by rollup (m1, m2))
where  gm1 = gm2
order  by m2, m1, amt
/
The example above excludes the detail rows shown below.

SQL> select colour, shape, count(*)
  2  from stc
  3  group by cube(colour,shape)
  4  /

COLOUR     SHAPE        COUNT(*)
---------- ---------- ----------
                             249
           Oval               83
           Round              83
           Square             83
Red                           50
Red        Oval               16
Red        Round              17
Red        Square             17
Blue                          49
Blue       Oval               17
Blue       Round              16
Blue       Square             16
Green                         50
Green      Oval               17
Green      Round              16
Green      Square             17
White                         50
White      Oval               16
White      Round              17
White      Square             17
Yellow                        50
Yellow     Oval               17
Yellow     Round              17
Yellow     Square             16

24 rows selected.
Number, datatype and names of columns are fixed at parse time. There can't be any chance of a subsequent execution, potentially with different bind variables, returning a differently structured data set.