This document describes using analytic functions in SQL to optimize warehouse picking based on the FIFO principle. It shows how to:
1. Join order lines to inventory data and order by purchase date to pick oldest items first.
2. Use an analytic sum to calculate a running total of quantities picked to determine when enough has been picked to fulfill an order.
3. Generate a picklist that orders picks by location to minimize travel within the warehouse.
4. Add aisle and warehouse information to further optimize the pick route by picking contiguous aisles and positions.
3. FIFO – First-In-First-Out principle
● Pick oldest items first
Picking route
● Don't drive back and forth through the aisles of the warehouse
Single SQL
● Utilize the power of the database
Case: Picking by FIFO
5. create table inventory (
item varchar2(10) -- identification of the item
, loc varchar2(10) -- identification of the location
, qty number -- quantity present at that location
, purch date -- date that quantity was purchased
);
insert into inventory values('Ale' , '1-A-20', 18, DATE '2014-02-01');
insert into inventory values('Ale' , '1-A-31', 12, DATE '2014-02-05');
insert into inventory values('Ale' , '1-C-05', 18, DATE '2014-02-03');
insert into inventory values('Ale' , '2-A-02', 24, DATE '2014-02-02');
insert into inventory values('Ale' , '2-D-07', 9, DATE '2014-02-04');
insert into inventory values('Bock', '1-A-02', 18, DATE '2014-02-06');
insert into inventory values('Bock', '1-B-11', 4, DATE '2014-02-05');
insert into inventory values('Bock', '1-C-04', 12, DATE '2014-02-03');
insert into inventory values('Bock', '1-B-15', 2, DATE '2014-02-02');
insert into inventory values('Bock', '2-D-23', 1, DATE '2014-02-04');
Inventory
loc = 1-A-20
1 = warehouse
A = aisle
20 = position
6. create table orderline (
ordno number -- id-number of the order
, item varchar2(10) -- identification of the item
, qty number -- quantity ordered
);
insert into orderline values (42, 'Ale' , 24);
insert into orderline values (42, 'Bock', 18);
Order
One order
24 Ale
18 Bock
7. Join up
select o.item
, o.qty ord_qty
, i.loc
, i.purch
, i.qty loc_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
order by o.item, i.purch, i.loc;
ITEM ORD_QTY LOC PURCH LOC_QTY
----- ------- ------- ---------- -------
Ale 24 1-A-20 2014-02-01 18
Ale 24 2-A-02 2014-02-02 24
Ale 24 1-C-05 2014-02-03 18
Ale 24 2-D-07 2014-02-04 9
Ale 24 1-A-31 2014-02-05 12
Bock 18 1-B-15 2014-02-02 2
Bock 18 1-C-04 2014-02-03 12
Bock 18 2-D-23 2014-02-04 1
Bock 18 1-B-11 2014-02-05 4
Bock 18 1-A-02 2014-02-06 18
Order locations for each item by purchase date
Visually easy to see what we need to pick:
18 Ale of the oldest, then 6 of the next, and so on
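The join-and-sort step of slide 7 can be sketched outside the database; a minimal Python sketch, using the Ale rows of the sample data from slides 5-6 (the Bock rows work the same way):

```python
# Sample data from slides 5-6 (Ale rows only).
inventory = [
    # (item, loc, qty, purch)
    ("Ale", "1-A-20", 18, "2014-02-01"),
    ("Ale", "1-A-31", 12, "2014-02-05"),
    ("Ale", "1-C-05", 18, "2014-02-03"),
    ("Ale", "2-A-02", 24, "2014-02-02"),
    ("Ale", "2-D-07",  9, "2014-02-04"),
]
orderlines = [(42, "Ale", 24)]  # (ordno, item, qty)

# Join order lines to inventory on item, then order by item, purch, loc.
rows = [
    (o_item, o_qty, loc, purch, i_qty)
    for (ordno, o_item, o_qty) in orderlines
    for (i_item, loc, i_qty, purch) in inventory
    if ordno == 42 and i_item == o_item
]
rows.sort(key=lambda r: (r[0], r[3], r[2]))  # item, purch, loc
```

The resulting location order matches the slide 7 output for Ale: 1-A-20, 2-A-02, 1-C-05, 2-D-07, 1-A-31.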
8. select o.item
, o.qty ord_qty
, i.loc
, i.purch
, i.qty loc_qty
, sum(i.qty) over (
partition by i.item
order by i.purch, i.loc
rows between unbounded preceding and 1 preceding
) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
order by o.item, i.purch, i.loc;
Rolling sum of previous
If the sum of all previous rows is greater than or equal to the
ordered quantity, we have already picked enough and can stop
Analytic sum
Partition for each item
Order by date
Rolling sum of all
previous rows
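What the windowed sum with `rows between unbounded preceding and 1 preceding` computes can be sketched in plain Python (using 0 where the SQL yields NULL for the first row of each partition; the NVL on slide 10 does the same):

```python
def sum_prv(qtys):
    """Running total of all previous rows' quantities.

    The first row gets 0 here; in the slide 9 output it is NULL
    until wrapped in NVL(..., 0) on slide 10.
    """
    total, out = 0, []
    for qty in qtys:
        out.append(total)
        total += qty
    return out
```

For the Ale quantities in purchase-date order, `sum_prv([18, 24, 18, 9, 12])` gives `[0, 18, 42, 60, 69]`, matching the SUM_PRV_QTY column on slide 9.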
9. ITEM ORD_QTY LOC PURCH LOC_QTY SUM_PRV_QTY
----- ------- ------- ---------- ------- -----------
Ale 24 1-A-20 2014-02-01 18
Ale 24 2-A-02 2014-02-02 24 18
Ale 24 1-C-05 2014-02-03 18 42
Ale 24 2-D-07 2014-02-04 9 60
Ale 24 1-A-31 2014-02-05 12 69
Bock 18 1-B-15 2014-02-02 2
Bock 18 1-C-04 2014-02-03 12 2
Bock 18 2-D-23 2014-02-04 1 14
Bock 18 1-B-11 2014-02-05 4 15
Bock 18 1-A-02 2014-02-06 18 19
Rolling sum of previous
Each row can now evaluate whether enough has already been picked
If the sum of all previous rows is less than the ordered quantity
we still need to pick something and the row is needed
10. select s.*
, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item
, o.qty ord_qty
, i.loc
, i.purch
, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.purch, i.loc
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
) s
where s.sum_prv_qty < s.ord_qty
order by s.item, s.purch, s.loc;
Filter on previous
Keep only rows where we still need
something to pick
Pick the location quantity or
what is still needed,
whichever is smallest
The NVL turns the NULL in the
first row of each partition into 0,
otherwise the filter predicate would fail
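The filter and the `least` expression combine into one simple loop; a Python sketch of the whole pick calculation of slide 10 (function name is my own):

```python
def picks(ord_qty, loc_qtys):
    """Pick quantities per location, locations given in picking order.

    Keeps only rows where the running total of previous rows is still
    below the ordered quantity, and for each kept row picks the location
    quantity or the remaining need, whichever is smaller.
    """
    out, prv = [], 0
    for loc_qty in loc_qtys:
        if prv < ord_qty:
            out.append(min(loc_qty, ord_qty - prv))
        prv += loc_qty
    return out
```

`picks(24, [18, 24, 18, 9, 12])` gives `[18, 6]` for Ale and `picks(18, [2, 12, 1, 4, 18])` gives `[2, 12, 1, 3]` for Bock, matching the PICK_QTY column on slide 11.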
11. ITEM ORD_QTY LOC PURCH LOC_QTY SUM_PRV_QTY PICK_QTY
----- ------- ------- ---------- ------- ----------- --------
Ale 24 1-A-20 2014-02-01 18 0 18
Ale 24 2-A-02 2014-02-02 24 18 6
Bock 18 1-B-15 2014-02-02 2 0 2
Bock 18 1-C-04 2014-02-03 12 2 12
Bock 18 2-D-23 2014-02-04 1 14 1
Bock 18 1-B-11 2014-02-05 4 15 3
Filter on previous
We have now selected the necessary
inventory quantities to fulfil the order
and pick the oldest items first (FIFO)
12. Picklist – FIFO
select s.loc
, s.item
, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item
, o.qty ord_qty
, i.loc
, i.purch
, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.purch, i.loc
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
) s
where s.sum_prv_qty < s.ord_qty
order by s.loc;
LOC ITEM PICK_QTY
------- ----- --------
1-A-20 Ale 18
1-B-11 Bock 3
1-B-15 Bock 2
1-C-04 Bock 12
2-A-02 Ale 6
2-D-23 Bock 1
Simple FIFO picklist
Item and quantity to pick
By location order
13. Picklist – Shortest route
select s.loc
, s.item
, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item
, o.qty ord_qty
, i.loc
, i.purch
, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.loc -- << only line changed
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
) s
where s.sum_prv_qty < s.ord_qty
order by s.loc;
LOC ITEM PICK_QTY
------- ----- --------
1-A-02 Bock 18
1-A-20 Ale 18
1-A-31 Ale 6
Switch picking strategy =
Switch analytic order by
Keep "outer" order by
14. Picklist – Least number of picks
select s.loc
, s.item
, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item
, o.qty ord_qty
, i.loc
, i.purch
, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.qty desc, i.loc -- << only line changed
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
) s
where s.sum_prv_qty < s.ord_qty
order by s.loc;
LOC ITEM PICK_QTY
------- ----- --------
1-A-02 Bock 18
2-A-02 Ale 24
Switch picking strategy =
Switch analytic order by
Keep "outer" order by
15. Picklist – Clean out small quantities
select s.loc
, s.item
, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item
, o.qty ord_qty
, i.loc
, i.purch
, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.qty, i.loc -- << only line changed
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
) s
where s.sum_prv_qty < s.ord_qty
order by s.loc;
LOC ITEM PICK_QTY
------- ----- --------
1-A-20 Ale 3
1-A-31 Ale 12
1-B-11 Bock 4
1-B-15 Bock 2
1-C-04 Bock 11
2-D-07 Ale 9
2-D-23 Bock 1
Switch picking strategy =
Switch analytic order by
Keep "outer" order by
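The strategies of slides 12-15 differ only in the analytic order by; a Python sketch with the sort key as the strategy (the strategy names are my own labels, not from the slides):

```python
# Each strategy is just a different sort key applied before the running
# sum; rows here are dicts with "purch", "loc" and "qty" keys.
STRATEGIES = {
    "fifo":      lambda r: (r["purch"], r["loc"]),   # slide 12: oldest first
    "route":     lambda r: (r["loc"],),              # slide 13: shortest route
    "few_picks": lambda r: (-r["qty"], r["loc"]),    # slide 14: biggest first
    "clean_out": lambda r: (r["qty"], r["loc"]),     # slide 15: smallest first
}

def order_for_picking(rows, strategy):
    return sorted(rows, key=STRATEGIES[strategy])
```

The running-sum filter afterwards stays identical; only the ordering changes which locations survive it.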
16. Not the greatest picking route
The strategy "Clean out small quantities" gives the largest number of picks
We use that one to demonstrate the picking route
17. select to_number(substr(s.loc,1,1)) warehouse
, substr(s.loc,3,1) aisle
, to_number(substr(s.loc,5,2)) position
, s.loc
, s.item
, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item
, o.qty ord_qty
, i.loc
, i.purch
, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.qty, i.loc
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
) s
where s.sum_prv_qty < s.ord_qty
order by s.loc;
Warehouse, aisle and position
Split location
in parts
- Warehouse
- Aisle
- Position
18. WAREHOUSE AISLE POSITION LOC ITEM PICK_QTY
--------- ----- -------- ------- ----- --------
1 A 20 1-A-20 Ale 3
1 A 31 1-A-31 Ale 12
1 B 11 1-B-11 Bock 4
1 B 15 1-B-15 Bock 2
1 C 4 1-C-04 Bock 11
2 D 7 2-D-07 Ale 9
2 D 23 2-D-23 Bock 1
Warehouse, aisle and position
Warehouse, aisle and position might come from
lookup tables instead – here a simple substr is used
for demonstration purposes
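The substr arithmetic of slide 17 can be sketched in Python; this assumes the fixed "warehouse-aisle-position" format of the sample data (as noted, a real system might use lookup tables instead):

```python
def split_loc(loc):
    """Split a 'W-A-PP' location like '1-C-04' into its parts."""
    warehouse = int(loc[0:1])   # substr(loc, 1, 1)
    aisle = loc[2:3]            # substr(loc, 3, 1)
    position = int(loc[4:6])    # substr(loc, 5, 2)
    return warehouse, aisle, position
```

`split_loc("1-C-04")` returns `(1, "C", 4)`, just like the WAREHOUSE, AISLE and POSITION columns on slide 18.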
19. select to_number(substr(s.loc,1,1)) warehouse
, substr(s.loc,3,1) aisle
, dense_rank() over (
order by to_number(substr(s.loc,1,1)) -- warehouse
, substr(s.loc,3,1) -- aisle
) aisle_no
, to_number(substr(s.loc,5,2)) position
, s.loc
, s.item
, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item, o.qty ord_qty, i.loc, i.purch, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.qty, i.loc
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
) s
where s.sum_prv_qty < s.ord_qty
order by s.loc;
Consecutive numbering of aisles
Dense rank
gives equal rank
to rows with the
same values in
the order by, and
each new distinct
rank is one higher
than the previous
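A Python sketch of what dense_rank does over the (warehouse, aisle) ordering: equal keys share a rank and there are no gaps between ranks (function name is my own):

```python
def dense_rank(sorted_keys):
    """Dense rank of an already-sorted list of keys: equal keys get the
    same rank, and each new distinct key gets the previous rank plus 1."""
    ranks, rank, prev = [], 0, object()
    for key in sorted_keys:
        if key != prev:
            rank += 1
            prev = key
        ranks.append(rank)
    return ranks
```

Applied to the (warehouse, aisle) pairs of slide 18 it yields 1, 1, 2, 2, 3, 4, 4 – the AISLE_NO column of slide 20.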
20. WAREHOUSE AISLE AISLE_NO POSITION LOC ITEM PICK_QTY
--------- ----- -------- -------- ------- ----- --------
1 A 1 20 1-A-20 Ale 3
1 A 1 31 1-A-31 Ale 12
1 B 2 11 1-B-11 Bock 4
1 B 2 15 1-B-15 Bock 2
1 C 3 4 1-C-04 Bock 11
2 D 4 7 2-D-07 Ale 9
2 D 4 23 2-D-23 Bock 1
Consecutive numbering of aisles
Aisles get consecutive
numbering in the order
they are visited
21. select s2.warehouse, s2.aisle, s2.aisle_no, s2.position
, s2.loc, s2.item, s2.pick_qty
from (
select to_number(substr(s.loc,1,1)) warehouse
, substr(s.loc,3,1) aisle
, dense_rank() over (
order by to_number(substr(s.loc,1,1)) -- warehouse
, substr(s.loc,3,1) -- aisle
) aisle_no
, to_number(substr(s.loc,5,2)) position
, s.loc, s.item
, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item, o.qty ord_qty, i.loc, i.purch, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.qty, i.loc
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
) s
where s.sum_prv_qty < s.ord_qty
) s2
order by s2.warehouse
, s2.aisle_no
, case
when mod(s2.aisle_no,2) = 1 then s2.position
else -s2.position
end;
Odd / even ordering
We order the positions
in "odd" aisles "upward" and
in "even" aisles "downward"
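The odd/even CASE expression is a "serpentine" sort key; a small Python sketch of the same ordering:

```python
def route_key(warehouse, aisle_no, position):
    """Serpentine picking order: ascend positions in odd-numbered
    aisles, descend in even-numbered ones."""
    signed_pos = position if aisle_no % 2 == 1 else -position
    return (warehouse, aisle_no, signed_pos)
```

Sorting the picks of slide 20 by this key gives the order shown on slide 22: aisle 1 upward (20, 31), aisle 2 downward (15, 11), and so on.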
22. WAREHOUSE AISLE AISLE_NO POSITION LOC ITEM PICK_QTY
--------- ----- -------- -------- ------- ----- --------
1 A 1 20 1-A-20 Ale 3
1 A 1 31 1-A-31 Ale 12
1 B 2 15 1-B-15 Bock 2
1 B 2 11 1-B-11 Bock 4
1 C 3 4 1-C-04 Bock 11
2 D 4 23 2-D-23 Bock 1
2 D 4 7 2-D-07 Ale 9
Odd / even ordering
The desired ordering
– the first aisle by position ascending
– the second aisle by position descending
– and so on
24. select s2.warehouse, s2.aisle, s2.aisle_no, s2.position
, s2.loc, s2.item, s2.pick_qty
from (
select to_number(substr(s.loc,1,1)) warehouse
, substr(s.loc,3,1) aisle
, dense_rank() over (
partition by to_number(substr(s.loc,1,1)) -- warehouse
order by substr(s.loc,3,1) -- aisle
) aisle_no
, to_number(substr(s.loc,5,2)) position
, s.loc, s.item
, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item, o.qty ord_qty, i.loc, i.purch, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.qty, i.loc
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderline o
join inventory i
on i.item = o.item
where o.ordno = 42
) s
where s.sum_prv_qty < s.ord_qty
) s2
order by s2.warehouse
, s2.aisle_no
, case
when mod(s2.aisle_no,2) = 1 then s2.position
else -s2.position
end;
Restart count if only one door
Partition by warehouse
25. WAREHOUSE AISLE AISLE_NO POSITION LOC ITEM PICK_QTY
--------- ----- -------- -------- ------- ----- --------
1 A 1 20 1-A-20 Ale 3
1 A 1 31 1-A-31 Ale 12
1 B 2 15 1-B-15 Bock 2
1 B 2 11 1-B-11 Bock 4
1 C 3 4 1-C-04 Bock 11
2 D 1 7 2-D-07 Ale 9
2 D 1 23 2-D-23 Bock 1
Restart count if only one door
Warehouse change restarts the aisle_no counter
So the first aisle in each warehouse starts at 1 and is
therefore odd, so its positions are ordered ascending
27. delete orderline;
insert into orderline values (51, 'Ale' , 24);
insert into orderline values (51, 'Bock', 18);
insert into orderline values (62, 'Ale' , 8);
insert into orderline values (73, 'Ale' , 16);
insert into orderline values (73, 'Bock', 6);
Batch pick multiple orders
Get rid of the first test order
and insert three orders of
various beers
28. with orderbatch as (
select o.item
, sum(o.qty) qty
from orderline o
where o.ordno in (51, 62, 73)
group by o.item
)
select s.loc, s.item, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
from (
select o.item, o.qty ord_qty, i.loc, i.purch, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.purch, i.loc
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
from orderbatch o
join inventory i
on i.item = o.item
) s
where s.sum_prv_qty < s.ord_qty
order by s.loc;
Aggregate orders by item
Named subquery that is the sum
of ordered quantities by item
Use subquery in FIFO query
instead of orderline table
29. LOC ITEM PICK_QTY
------- ----- --------
1-A-02 Bock 5
1-A-20 Ale 18
1-B-11 Bock 4
1-B-15 Bock 2
1-C-04 Bock 12
1-C-05 Ale 6
2-A-02 Ale 24
2-D-23 Bock 1
Aggregate orders by item
Gets us a nice FIFO picklist picking the
total quantities needed by the three orders
But…
We can't see how much is for each order
30. with orderbatch as (
select o.item, sum(o.qty) qty
from orderline o
where o.ordno in (51, 62, 73)
group by o.item
)
select s.loc, s.item, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
, sum_prv_qty + 1 from_qty, least(sum_qty, ord_qty) to_qty
from (
select o.item, o.qty ord_qty, i.loc, i.purch, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.purch, i.loc
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.purch, i.loc
rows between unbounded preceding and current row
),0) sum_qty
from orderbatch o
join inventory i
on i.item = o.item
) s
where s.sum_prv_qty < s.ord_qty
order by s.item, s.purch, s.loc;
Pick quantity intervals
Both the rolling sum of
previous rows only and
the rolling sum including
the current row
Calculate from and to quantity of each pick
31. LOC ITEM PICK_QTY FROM_QTY TO_QTY
------- ----- -------- -------- ------
1-A-20 Ale 18 1 18
2-A-02 Ale 24 19 42
1-C-05 Ale 6 43 48
1-B-15 Bock 2 1 2
1-C-04 Bock 12 3 14
2-D-23 Bock 1 15 15
1-B-11 Bock 4 16 19
1-A-02 Bock 5 20 24
Pick quantity intervals
The 24 Ale picked at 2-A-02 is number
19-42 of the total 48 Ale we are picking
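Adding the running sum including the current row turns each pick into an interval of unit numbers; a Python sketch of the slide 30 calculation (function name is my own):

```python
def pick_intervals(ord_qty, loc_qtys):
    """(from_qty, to_qty) per pick: each kept pick covers the units from
    the previous running total + 1 up to the running total including
    this row, capped at the total ordered quantity."""
    out, total = [], 0
    for qty in loc_qtys:
        if total < ord_qty:
            out.append((total + 1, min(total + qty, ord_qty)))
        total += qty
    return out
```

For the batched 48 Ale, `pick_intervals(48, [18, 24, 18, 9, 12])` gives `[(1, 18), (19, 42), (43, 48)]`, matching the FROM_QTY/TO_QTY columns on slide 31.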
32. select o.ordno, o.item, o.qty
, nvl(sum(o.qty) over (
partition by o.item
order by o.ordno
rows between unbounded preceding and 1 preceding
),0) + 1 from_qty
, nvl(sum(o.qty) over (
partition by o.item
order by o.ordno
rows between unbounded preceding and current row
),0) to_qty
from orderline o
where ordno in (51, 62, 73)
order by o.item, o.ordno;
Order quantity intervals
Similarly calculate from and to
quantity of the orderlines
33. ORDNO ITEM QTY FROM_QTY TO_QTY
----- ----- ---- -------- ------
51 Ale 24 1 24
62 Ale 8 25 32
73 Ale 16 33 48
51 Bock 18 1 18
73 Bock 6 19 24
Order quantity intervals
The 8 Ale from order no 62 is number
25-32 of the total 48 Ale ordered
34. with orderlines as (
select o.ordno, o.item, o.qty
, nvl(sum(o.qty) over (
partition by o.item
order by o.ordno
rows between unbounded preceding and 1 preceding
),0) + 1 from_qty
, nvl(sum(o.qty) over (
partition by o.item
order by o.ordno
rows between unbounded preceding and current row
),0) to_qty
from orderline o
where ordno in (51, 62, 73)
), orderbatch as (
select o.item, sum(o.qty) qty
from orderlines o
group by o.item
...
Join on overlapping intervals
Named subquery with the
orderlines and their intervals
Named subquery with the
aggregate sums by item
>>>
35. ...
), fifo as (
select s.loc, s.item, s.purch, least(s.loc_qty, s.ord_qty - s.sum_prv_qty) pick_qty
, sum_prv_qty + 1 from_qty, least(sum_qty, ord_qty) to_qty
from (
select o.item, o.qty ord_qty, i.loc, i.purch, i.qty loc_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.purch, i.loc
rows between unbounded preceding and 1 preceding
),0) sum_prv_qty
, nvl(sum(i.qty) over (
partition by i.item
order by i.purch, i.loc
rows between unbounded preceding and current row
),0) sum_qty
from orderbatch o
join inventory i
on i.item = o.item
) s
where s.sum_prv_qty < s.ord_qty
...
Join on overlapping intervals
Named subquery
with FIFO pick of
the sums with
quantity intervals
>>>
36. ...
)
select f.loc, f.item, f.purch, f.pick_qty, f.from_qty, f.to_qty
, o.ordno, o.qty, o.from_qty, o.to_qty
from fifo f
join orderlines o
on o.item = f.item
and o.to_qty >= f.from_qty
and o.from_qty <= f.to_qty
order by f.item, f.purch, o.ordno;
Join on overlapping intervals
Join the fifo subquery with the
orderlines subquery on item and
overlapping quantity intervals
37. LOC ITEM PURCH PICK_QTY FROM_QTY TO_QTY ORDNO QTY FROM_QTY TO_QTY
------- ----- ---------- -------- -------- ------ ----- ---- -------- ------
1-A-20 Ale 2014-02-01 18 1 18 51 24 1 24
2-A-02 Ale 2014-02-02 24 19 42 51 24 1 24
2-A-02 Ale 2014-02-02 24 19 42 62 8 25 32
2-A-02 Ale 2014-02-02 24 19 42 73 16 33 48
1-C-05 Ale 2014-02-03 6 43 48 73 16 33 48
1-B-15 Bock 2014-02-02 2 1 2 51 18 1 18
1-C-04 Bock 2014-02-03 12 3 14 51 18 1 18
2-D-23 Bock 2014-02-04 1 15 15 51 18 1 18
1-B-11 Bock 2014-02-05 4 16 19 51 18 1 18
1-B-11 Bock 2014-02-05 4 16 19 73 6 19 24
1-A-02 Bock 2014-02-06 5 20 24 73 6 19 24
Join on overlapping intervals
At location 2-A-02 we pick number 19-42 out of 48 Ale
That overlaps with all three orders, as they get respectively
number 1-24, 25-32 and 33-48 of the 48 Ale
38. with orderlines as (
...
), orderbatch as (
...
), fifo as (
...
)
select f.loc, f.item, f.purch, f.pick_qty, f.from_qty, f.to_qty
, o.ordno, o.qty, o.from_qty, o.to_qty
, least(
f.loc_qty
, least(o.to_qty, f.to_qty) - greatest(o.from_qty, f.from_qty) + 1
) pick_ord_qty
from fifo f
join orderlines o
on o.item = f.item
and o.to_qty >= f.from_qty
and o.from_qty <= f.to_qty
order by f.item, f.purch, o.ordno;
How much to pick
Each row gets either the "size of the
overlap" or the quantity on the location,
whichever is smallest
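The overlap condition and the "size of the overlap" can be sketched directly (the SQL on slide 38 additionally caps the result at the location quantity):

```python
def overlap_qty(pick, order):
    """Size of the overlap between a pick interval and an order
    interval, or 0 when they do not overlap. Intervals are inclusive
    (from_qty, to_qty) pairs of unit numbers."""
    f1, f2 = pick
    o1, o2 = order
    if o2 >= f1 and o1 <= f2:
        return min(o2, f2) - max(o1, f1) + 1
    return 0
```

The 2-A-02 pick interval (19, 42) overlaps order 51's interval (1, 24) by 6, order 62's (25, 32) by 8 and order 73's (33, 48) by 10 – the PICK_ORD_QTY values on slide 39.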
39. LOC ITEM PURCH PICK_QTY FROM_QTY TO_QTY ORDNO QTY FROM_QTY TO_QTY PICK_ORD_QTY
------- ----- ---------- -------- -------- ------ ----- ---- -------- ------ ------------
1-A-20 Ale 2014-02-01 18 1 18 51 24 1 24 18
2-A-02 Ale 2014-02-02 24 19 42 51 24 1 24 6
2-A-02 Ale 2014-02-02 24 19 42 62 8 25 32 8
2-A-02 Ale 2014-02-02 24 19 42 73 16 33 48 10
1-C-05 Ale 2014-02-03 6 43 48 73 16 33 48 6
1-B-15 Bock 2014-02-02 2 1 2 51 18 1 18 2
1-C-04 Bock 2014-02-03 12 3 14 51 18 1 18 12
2-D-23 Bock 2014-02-04 1 15 15 51 18 1 18 1
1-B-11 Bock 2014-02-05 4 16 19 51 18 1 18 3
1-B-11 Bock 2014-02-05 4 16 19 73 6 19 24 1
1-A-02 Bock 2014-02-06 5 20 24 73 6 19 24 5
How much to pick
At 2-A-02 we pick 6 Ale to order 51, 8 Ale to order 62
and 10 Ale to order 73 – total 24 Ale from that location
40. Batch picklist – FIFO
with orderlines as (
...
), orderbatch as (
...
), fifo as (
...
)
select f.loc, f.item
, f.pick_qty pick_at_loc, o.ordno
, least(
f.loc_qty
, least(o.to_qty, f.to_qty)
- greatest(o.from_qty, f.from_qty) + 1
) qty_for_ord
from fifo f
join orderlines o
on o.item = f.item
and o.to_qty >= f.from_qty
and o.from_qty <= f.to_qty
order by f.loc, o.ordno;
LOC ITEM PICK_AT_LOC ORDNO QTY_FOR_ORD
------- ----- ----------- ----- -----------
1-A-02 Bock 5 73 5
1-A-20 Ale 18 51 18
1-B-11 Bock 4 51 3
1-B-11 Bock 4 73 1
1-B-15 Bock 2 51 2
1-C-04 Bock 12 51 12
1-C-05 Ale 6 73 6
2-A-02 Ale 24 51 6
2-A-02 Ale 24 62 8
2-A-02 Ale 24 73 10
2-D-23 Bock 1 51 1
Clean up the query and keep only what's
needed by the picking operator
41. with orderlines as (
...
), orderbatch as (
...
), fifo as (
...
), pick as (
select to_number(substr(f.loc,1,1)) warehouse
, substr(f.loc,3,1) aisle
, dense_rank() over (
order by to_number(substr(f.loc,1,1)) -- warehouse
, substr(f.loc,3,1) -- aisle
) aisle_no
, to_number(substr(f.loc,5,2)) position
, f.loc, f.item, f.pick_qty pick_at_loc, o.ordno
, least(
f.loc_qty
, least(o.to_qty, f.to_qty) - greatest(o.from_qty, f.from_qty) + 1
) qty_for_ord
from fifo f
join orderlines o
on o.item = f.item
and o.to_qty >= f.from_qty
and o.from_qty <= f.to_qty
...
Batch picklist FIFO with picking route
Named subquery with batch picklist adding
warehouse, aisle, position and dense rank
>>>
42. Batch picklist FIFO with picking route
...
)
select p.loc, p.item, p.pick_at_loc
, p.ordno, p.qty_for_ord
from pick p
order by p.warehouse
, p.aisle_no
, case
when mod(p.aisle_no,2) = 1 then
p.position
else
-p.position
end;
LOC ITEM PICK_AT_LOC ORDNO QTY_FOR_ORD
------- ----- ----------- ----- -----------
1-A-02 Bock 5 73 5
1-A-20 Ale 18 51 18
1-B-15 Bock 2 51 2
1-B-11 Bock 4 51 3
1-B-11 Bock 4 73 1
1-C-04 Bock 12 51 12
1-C-05 Ale 6 73 6
2-A-02 Ale 24 51 6
2-A-02 Ale 24 73 10
2-A-02 Ale 24 62 8
2-D-23 Bock 1 51 1
Select from the pick subquery
Ordering with odd / even logic
Batch picklist by FIFO with
picking route, finished in a
single SQL statement
43. FIFO principle
● Or other principles by changing one line of code
Picking route
● Up and down alternate aisles
Batch picking
● Multiple orders simultaneously
Single SQL picking
Done
Case closed
45. Sales forecasting
● Seasonal items (summer / winter)
● Trending upwards or downwards over time
Regression
● Simple model: "transposing the graph"
● Data scientists' model: "Time Series Analysis"
Single SQL
● Utilize the power of the database
Case: Sales forecasting
46. create table sales (
item varchar2(10)
, mth date
, qty number
);
insert into sales values ('Snowchain', date '2011-01-01', 79);
insert into sales values ('Snowchain', date '2011-02-01', 133);
insert into sales values ('Snowchain', date '2011-03-01', 24);
...
insert into sales values ('Snowchain', date '2013-10-01', 1);
insert into sales values ('Snowchain', date '2013-11-01', 73);
insert into sales values ('Snowchain', date '2013-12-01', 160);
insert into sales values ('Sunshade' , date '2011-01-01', 4);
insert into sales values ('Sunshade' , date '2011-02-01', 6);
insert into sales values ('Sunshade' , date '2011-03-01', 32);
...
insert into sales values ('Sunshade' , date '2013-10-01', 11);
insert into sales values ('Sunshade' , date '2013-11-01', 3);
insert into sales values ('Sunshade' , date '2013-12-01', 5);
Sales 2011 – 2013
Monthly sales
2011 - 2013
Snowchain and
Sunshade
48. select sales.item, sales.mth, sales.qty
, regr_slope(
sales.qty
, extract(year from sales.mth) * 12 + extract(month from sales.mth)
) over (
partition by sales.item
order by sales.mth
range between interval '23' month preceding and current row
) slope
from sales
order by sales.item, sales.mth;
Moving slope
Calculate the slope of a linear regression with qty on the Y
axis and month (as a number, unit 1 = month) on the X axis
"Rolling" slope over 24 points on the graph (= 2 years)
51. select item, mth, qty
, qty + 12 * slope qty_next_year
from (
select sales.item, sales.mth, sales.qty
, regr_slope(
sales.qty
, extract(year from sales.mth) * 12 + extract(month from sales.mth)
) over (
partition by sales.item
order by sales.mth
range between interval '23' month preceding and current row
) slope
from sales
)
where mth >= date '2013-01-01'
order by item, mth;
Transpose 12 months
Filter to 2013, which has
the useful slopes
Slope is Y-increment per month
12 * Slope is Y-increment per year
Add 12 * Slope to Qty is forecast
53. select item
, add_months(mth, 12) mth
, greatest(round(qty + 12 * slope), 0) forecast
from (
select sales.item, sales.mth, sales.qty
, regr_slope(
sales.qty
, extract(year from sales.mth) * 12 + extract(month from sales.mth)
) over (
partition by sales.item
order by sales.mth
range between interval '23' month preceding and current row
) slope
from sales
)
where mth >= date '2013-01-01'
order by item, mth;
Rounded forecast
Rather than "qty_next_year", add 12 months
to show the month of the forecast
Round off to whole quantities and assume
negative forecast is a zero sale
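The rounding and clamping can be seen on hypothetical values: a raw forecast that comes out negative becomes a zero sale.

```sql
-- Hypothetical values: qty 2, slope -1.4 per month
select round(2 + 12 * -1.4)              raw_forecast  -- -15
     , greatest(round(2 + 12 * -1.4), 0) forecast      -- clamped to 0
from dual;
```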
55. select item, mth, qty, type
, sum(qty) over (partition by item, extract(year from mth)) qty_yr
from (
select sales.item, sales.mth, sales.qty, 'Actual' type
from sales
union all
select item, add_months(mth, 12) mth
, greatest(round(qty + 12 * slope), 0) qty, 'Forecast' type
from (
select sales.item, sales.mth, sales.qty
, regr_slope(
sales.qty
, extract(year from sales.mth) * 12 + extract(month from sales.mth)
) over (
partition by sales.item
order by sales.mth
range between interval '23' month preceding and current row
) slope
from sales
)
where mth >= date '2013-01-01'
)
order by item, mth;
Sales and forecast
Union actual sales
with forecast
Show year totals
for comparison
58. Our data analyst/scientist built a model in Excel
● Centered Moving Average
● Seasonality
● Deseasonalize
● Regression trend
● Reseasonalize
http://people.duke.edu/~rnau/411outbd.htm
OK, I can do that in a SQL statement…
Time Series Analysis
59. select sales.item
, mths.ts
, mths.mth
, extract(year from mths.mth) yr
, extract(month from mths.mth) mthno
, sales.qty
from (
select add_months(date '2011-01-01', level-1) mth
, level ts -- time series
from dual
connect by level <= 48
) mths
left outer join sales
partition by (sales.item)
on sales.mth = mths.mth
order by sales.item, mths.mth;
Time Series
Create 48 month time
series for each item –
sales 2011-13 and
forecast 2014
Partitioned outer join
gives rows for 2014 for
each item with null qty
61. with s1 as (
...
)
select s1.*
, case when ts between 7 and 30
then
(nvl(avg(qty) over (
partition by item
order by ts
rows between 5 preceding and 6 following
),0) + nvl(avg(qty) over (
partition by item
order by ts
rows between 6 preceding and 5 following
),0)) / 2
else
null
end cma -- centered moving average
from s1
order by item, ts;
Centered Moving Average
• Rolling average -5 to +6 months
• Rolling average -6 to +5 months
Average of those two is the CMA
Do this only for those months (ts 7-30)
where there are 12 months of data
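A quick way to convince yourself that the two offset windows average out to a centered window: on a hypothetical constant series the CMA must equal the constant (toy data, not from the slides).

```sql
-- Toy data (hypothetical): 24 months of constant qty 10
with s (ts, qty) as (
  select level, 10 from dual connect by level <= 24
)
select ts
     , (avg(qty) over (order by ts rows between 5 preceding and 6 following)
      + avg(qty) over (order by ts rows between 6 preceding and 5 following)
       ) / 2 cma
from s
where ts between 7 and 18;
-- cma = 10 for every row where both 12-month windows are complete
```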
63. with s1 as (
...
), s2 as (
...
)
select s2.*
, nvl(avg(
case qty when 0 then 0.0001 else qty end / nullif(cma,0)
) over (
partition by item, mthno
),0) s -- seasonality
from s2
order by item, ts;
Seasonality factor
Qty divided by CMA gives a factor
The average factor of the month is the seasonality
The partition makes seasonality the same for
all Januaries, all Februaries, etc. (by item)
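On hypothetical numbers: if January sells double its CMA in both years, the January seasonality factor is 2 (toy data, not from the slides).

```sql
-- Toy data (hypothetical): two Januaries, qty twice the CMA each year
with s (item, mthno, qty, cma) as (
  select 'Snowchain', 1, 20, 10 from dual union all
  select 'Snowchain', 1, 24, 12 from dual
)
select distinct item, mthno
     , avg(qty / cma) over (partition by item, mthno) s
from s;
-- s = 2 : the seasonality factor shared by every January of the item
```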
65. with s1 as (
...
), s2 as (
...
), s3 as (
...
)
select s3.*
, case when ts <= 36 then
nvl(case qty when 0 then 0.0001 else qty end / nullif(s,0), 0)
end des -- deseasonalized
from s3
order by item, ts;
Deseasonalized quantity
Divide each individual qty by the seasonality factor
68. with s1 as (
...
), s2 as (
...
), s3 as (
...
), s4 as (
...
)
select s4.*
, regr_intercept(des,ts) over (partition by item)
+ ts*regr_slope(des,ts) over (partition by item) t -- trend
from s4
order by item, ts;
Trend (regression)
Linear regression of deseasonalized qty
gives the trend line
The intercept is the point where the line intersects the Y axis
Add slope * time series (month) to get the Y value of the trend line
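When the deseasonalized points already lie on a straight line, intercept + ts * slope reproduces that line exactly (hypothetical toy data, not from the slides):

```sql
-- Toy data (hypothetical): points on the line des = 3 + 2 * ts
with d (ts, des) as (
  select level, 3 + 2 * level from dual connect by level <= 5
)
select ts
     , regr_intercept(des, ts) over ()
       + ts * regr_slope(des, ts) over () t
from d;
-- t = 5, 7, 9, 11, 13 : the original line reproduced for ts = 1..5
```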
71. with s1 as (
...
), s2 as (
...
), s3 as (
...
), s4 as (
...
), s5 as (
...
)
select s5.*
, t * s forecast --reseasonalized
from s5
order by item, ts;
Reseasonalize (forecast)
Multiply trend line by seasonality factor
and get the qty forecasted by the model
75. with s1 as (
...
), s2 as (
...
), s3 as (
...
), s4 as (
...
), s5 as (
...
)
select item
, mth
, qty
, t * s forecast --reseasonalized
, sum(qty) over (partition by item, yr) qty_yr
, sum(t * s) over (partition by item, yr) fc_yr
from s5
order by item, ts;
Model describes reality?
2011-13 qty and forecast can be
compared to see how well the model
describes reality
77. with s1 as (...), s2 as (...), s3 as (...), s4 as (...), s5 as (...)
select item, mth
, case
when ts <= 36 then qty
else round(t * s)
end qty
, case
when ts <= 36 then 'Actual'
else 'Forecast'
end type
, sum(
case
when ts <= 36 then qty
else round(t * s)
end
) over (
partition by item, extract(year from mth)
) qty_yr
from s5
order by item, ts;
Sales and forecast
Output Actual and Forecast
like we did with the simple model
81. The data analyst develops a predictive Time Series
Analysis model of reality in Excel
Recreate that model in SQL for repeated
application to larger datasets
Query where the model fits reality and where it does not
Single SQL forecasting
Done
Case closed
82. Danish geek
Oracle SQL Evangelist
Oracle PL/SQL Developer
Likes to cook
Reads sci-fi
Member of
Danish Beer Enthusiasts
Questions ?
http://dspsd.blogspot.com
http://dk.linkedin.com/in/kibeha/
@kibeha
Kim Berg Hansen
http://goo.gl/q1YJRL
for this presentation
and scripts