A short presentation developed for the ACL London User Group covering some top user queries:
1) Calculate number of months between dates
2) Identify duplicates over multiple fields
3) Automating SAP Direct Link background query retrieval
Do you want to get up to speed with the most current release of Oracle Database? If so, this session is for you. You will learn about the most exciting and useful new features of Oracle Database 12c that can make your work as a database developer a lot easier. There is more to Oracle Database 12c than just pluggable databases.
A Visual Approach to Understanding Streaming SQL (Confluent)
(Shant Hovsepian, Arcadia Data) Kafka Summit SF 2018
Streaming SQL is the latest hot technology that takes a fundamentally familiar topic and adds modern techniques. SQL is ubiquitous and probably will continue to be for a long time, so if you start with that foundation and add modern streaming functionality, you get a technology that is ideal for querying real-time data streams. The widespread familiarity of SQL will help with adopting it in new environments and new use cases.
However, there are many aspects of streaming SQL that are challenging to grasp, including continuous queries, late arrival of events and different types of windowing. The trick is to not only leverage these new functionalities but also to present them to end users in a logical and meaningful way. We will explain various aspects of streaming SQL by showing visual representations of the different semantics provided.
In this talk, we will use KSQL to illustrate various examples. Attendees will get a better understanding on:
-Defining various aggregation window types and their behavior
-How to understand time scales and granularity
-The ability to replay results for testing and for creating new outputs
Data-driven model-based restructuring of enterprise transaction operations (Sudhendu Rai)
We present a case study in the use of process data analytics and discrete-event simulation to improve productivity in a large complex transaction print production environment.
Large GIS Data Reprojection With FME Workbench - UTM Zone Fanout Solution (Safe Software)
Our customer had an unfortunate alignment issue due to an error in initial project setup. Although the country of Mexico spans 6 UTM zones, the project was initially created entirely in UTM14. This caused a rather cylindrical and misaligned coverage when viewed in its entirety. At the time of conception (Jan 2015), the customer had over 3.2 million addresses documented in the system along with associated infrastructure and outside plant data.
The goal of this project was to fan out and reproject their data to a GCS to remove the current geographical ambiguity of the stored Oracle Spatial data, as well as eliminate the imminent issue of overlapping data from the adjacent UTM Zones.
CMS Project: Phase II Instructions
In this phase, you will create tables based upon the ERD and SQL code below. You will then populate each table with the data presented below. Finally, you will create queries that will be used to support reports for Accounting and Management. You will not actually create the reports in a GUI environment, only the queries that will serve as the basis for the reports. Screenshots are required for a grade to be given; a single screenshot is not enough. The goal is multiple screenshots taken along the way.
Background: The following ERD will be used as the basis for this Phase.
Part A: Table Creation and Data Loading
Instructions: Create a new database in SQL Server and run the following CREATE TABLE commands. Note that you must run the CREATE TABLE statements in the order presented (and load the data in the order presented) to avoid conflicts resulting from foreign key constraints.
Additional instructions for materials to turn in for this phase of your project are included at the end of this specification document.
CREATE TABLE Regions
(RegionID int not null,
RegionAbbreviation varchar(4),
RegionName varchar(100),
CONSTRAINT PK_Regions PRIMARY KEY (RegionID))
CREATE TABLE Countries
(CountryID int not null,
CountryName varchar(50),
WeeklyHours int,
Holidays int,
VacationDays int,
RegionID int,
CONSTRAINT PK_Countries PRIMARY KEY (CountryID),
CONSTRAINT FK_CountriesRegions FOREIGN KEY (RegionID) References Regions)
CREATE TABLE EmployeeTitles
(TitleID int not null,
Title varchar(15),
CONSTRAINT PK_EmpTitles PRIMARY KEY (TitleID))
CREATE TABLE BillingRates
(TitleID int not null,
Level int not null,
Rate float,
CurrencyName varchar(5),
CONSTRAINT PK_BillingRates PRIMARY KEY (TitleID, Level),
CONSTRAINT FK_BillingRatesTitles FOREIGN KEY (TitleID) References EmployeeTitles)
CREATE TABLE Employees
(EmpID int not null,
FirstName varchar(30),
LastName varchar(30),
Email varchar(50),
Salary decimal(10,2),
TitleID int,
Level int,
SupervisorID int,
CountryID int,
CONSTRAINT PK_Employees PRIMARY KEY (EmpID),
CONSTRAINT FK_EmployeesCountries FOREIGN KEY (CountryID) References Countries,
CONSTRAINT FK_EmployeesEmpTitles FOREIGN KEY (TitleID) References EmployeeTitles,
CONSTRAINT FK_EmployeeSupervisors FOREIGN KEY (SupervisorID) References Employees)
CREATE TABLE ContactTypes
(ContactTypeID int not null,
ContactType varchar(30),
CONSTRAINT PK_ContactTypes PRIMARY KEY (ContactTypeID))
CREATE TABLE ContractTypes
(ContractTypeID int not null,
ContractType varchar(30),
CONSTRAINT PK_ContractTypes PRIMARY KEY (ContractTypeID))
CREATE TABLE BenefitTypes
(BenefitTypeID int not null,
BenefitType varchar(30),
CONSTRAINT PK_BenefitTypes PRIMARY KEY (BenefitTypeID))
CREATE TABLE Clients
(ClientID int not null,
LegalName varchar(50),
CommonName varchar(50),
AddrLine1 varchar(50),
AddrLine2 varchar(50),
City varchar(25),
State_Province varchar(25),
Zip varchar(9),
CountryID int,
CONSTRAINT PK_Cli.
Cassandra Day SV 2014: Fundamentals of Apache Cassandra Data Modeling (DataStax Academy)
You know you need Cassandra for its uptime and scaling, but what about that data model? Let's bridge that gap and get you building your game-changing app. We'll break down topics like storing objects and indexing for fast retrieval. You will see that by understanding a few things about Cassandra internals, you can put your data model in the spotlight. The goal of this talk is to get you comfortable working with data in Cassandra throughout the application lifecycle. What are you waiting for? The cameras are waiting!
Amazon GT master data science challenge 2020 presentation (Fan Wu)
This is the winning team presentation from the Master Data Science Challenge 2020 at Georgia Tech, a grad-level case competition examining how weather impacts Amazon fulfillment center operating performance.
An overview of cost management solutions within Project Controls, focusing especially on Deltek Cobra, Ares Prism and Ecosys.
**PLEASE NOTE** This presentation is in PPSX format, so to see the animations correctly you need to DOWNLOAD the presentation first and then view it.
An attempt to teach Open Data members in the Government of Ontario Open Data initiative the use of Cassandra and time-series databases, KairosDB specifically. This POC was completed in Python and is open-sourced on my GitHub.
Opendatabay - Open Data Marketplace (Opendatabay)
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
First ever open hub for data enthusiasts to collaborate and innovate. A platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. Leverage cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits, Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay. The marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Adjusting primitives for graph: SHORT REPORT / NOTES (Subhajit Sahu)
Graph algorithms, like PageRank, typically operate on Compressed Sparse Row (CSR), an adjacency-list based graph representation.
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy-based vs in-place CUDA vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
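The CSR note above can be illustrated with a minimal sketch (plain Python, hypothetical example graph; the actual report uses CUDA/OpenMP, not Python):

```python
# Minimal Compressed Sparse Row (CSR) sketch: an adjacency-list style graph
# stored in two flat arrays, offsets and targets (hypothetical edge list).

def build_csr(num_vertices, edges):
    """Build CSR offsets/targets arrays from a list of (u, v) edges."""
    degree = [0] * num_vertices
    for u, _ in edges:
        degree[u] += 1
    # offsets[u] .. offsets[u+1] delimit the slice of targets for vertex u
    offsets = [0] * (num_vertices + 1)
    for u in range(num_vertices):
        offsets[u + 1] = offsets[u] + degree[u]
    targets = [0] * len(edges)
    cursor = list(offsets[:num_vertices])
    for u, v in edges:
        targets[cursor[u]] = v
        cursor[u] += 1
    return offsets, targets

def neighbors(offsets, targets, u):
    """Out-neighbours of vertex u, read directly from the CSR slice."""
    return targets[offsets[u]:offsets[u + 1]]

edges = [(0, 1), (0, 2), (1, 2), (2, 0)]
offsets, targets = build_csr(3, edges)
```

Two flat arrays keep the neighbour lists contiguous in memory, which is why CSR is the usual input format for GPU and OpenMP graph kernels like the ones benchmarked above.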
2. Alex Psarras, ACDA
Senior Data Insight Analyst
3 Appleton Court, Calder Park, Wakefield WF2 7AR
T +44 (0)1924 254101 F +44 (0)1924 253358 M +44 (0)75 8596 7438
E: alex.psarras@dataconsulting.co.uk W: www.dataconsulting.co.uk
3. Top Queries
1. Calculate number of months between dates
2. Identify duplicates over multiple fields
3. Automating SAP Direct Link background query retrieval
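The slides for query 1 are not in this transcript, but the calculation can be sketched in Python (the deck itself uses ACL; this is an illustration of one common definition: whole calendar months elapsed, counting a month only once the day-of-month is reached):

```python
from datetime import date

def months_between(d1, d2):
    """Whole calendar months from d1 to d2.

    Uses the year/month difference, minus one if d2's day-of-month
    hasn't yet reached d1's day-of-month. Negative if d2 < d1.
    """
    if d2 < d1:
        return -months_between(d2, d1)
    months = (d2.year - d1.year) * 12 + (d2.month - d1.month)
    if d2.day < d1.day:
        months -= 1
    return months

print(months_between(date(2014, 1, 13), date(2014, 4, 20)))  # prints 3
```

Note the edge case this definition chooses: Jan 31 to Feb 28 counts as 0 months, since day 28 has not reached day 31; other tools define this differently.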
10. Duplicates in different fields

Vendor ID  Name 1                     Name 2              Address #  1st Line Address       Post Code
45463      Clownfish Telecom          Finance Department  3          North Forest St        TT8 9UX
48923      Clownfish Ltd (DON’T USE)  Clownfish Telecom              3 North Forest Street  TT8 9UX
48782      Death Star Enterprise      Mos Eisley Cantina             8                      SW1 3PO
49969      A.Skywalker                aka Darth Vader     8          Mos Eisley Cantina     SW1 3PO
11. Duplicates in different fields
Combine all key fields into one field:
• Name 2 + Post Code
• Address # + Post Code
• 1st Line Address + Post Code
• “Clean” versions of the above
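The combination step above can be sketched in Python (the deck does this in ACL; field names and the variant naming are taken from the slides, the helper itself is illustrative):

```python
# Sketch of the slide's idea: for each vendor, emit one combined-key row per
# field variant (raw, letters only, digits only), each suffixed with the
# post code, so that matches across *different* fields line up in one column.

def variants(vendor_id, fields, post_code):
    """fields: dict of field name -> value, e.g. {'Name 2': ..., 'Address #': ...}"""
    rows = []
    for name, value in fields.items():
        raw = " ".join(value.split())                       # tidy whitespace
        chars = "".join(c for c in value if c.isalpha())    # "clean" letters only
        digits = "".join(c for c in value if c.isdigit())   # digits only
        rows.append((vendor_id, f"{raw} - {post_code}", name))
        rows.append((vendor_id, f"{chars} - {post_code}", f"Chars {name}"))
        rows.append((vendor_id, f"{digits} - {post_code}", f"Digits {name}"))
    return rows

rows = variants("45463",
                {"Name 2": "Finance Department",
                 "Address #": "3",
                 "1st Line": "North Forest St"},
                "TT8 9UX")
```

Appending every variant into one column is what lets a plain duplicates test catch, say, an Address # in one record matching the digits of a 1st Line Address in another.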
12. Duplicates in different fields

Combined addresses generated for vendor 45463:

45463  - TT8 9UX                     Digits 1st Line
45463  NorthForestSt - TT8 9UX       Chars 1st Line
45463  North Forest St - TT8 9UX     1st Line
45463  3 - TT8 9UX                   Digits Address #
45463  - TT8 9UX                     Chars Address #
45463  - TT8 9UX                     Digits Name 2
45463  FinanceDepartment - TT8 9UX   Chars Name 2
45463  Finance Department - TT8 9UX  Name 2

Vendor ID  Address      Address Type
45463      3 - TT8 9UX  Address #
14. Duplicates in different fields

Combined addresses after adding vendor 48923:

48923  3 - TT8 9UX                      Digits 1st Line
48923  3 North Forest Street - TT8 9UX  1st Line
48923  Clownfish Telecom - TT8 9UX      Name 2
45463  - TT8 9UX                        Digits 1st Line
45463  NorthForestSt - TT8 9UX          Chars 1st Line
45463  North Forest St - TT8 9UX        1st Line
45463  3 - TT8 9UX                      Digits Address #
45463  - TT8 9UX                        Chars Address #
45463  - TT8 9UX                        Digits Name 2
45463  FinanceDepartment - TT8 9UX      Chars Name 2
45463  Finance Department - TT8 9UX     Name 2

Vendor ID  Address      Address Type
45463      3 - TT8 9UX  Address #
15. Duplicates in different fields

• 45463 & 48923 (1 match)
  – Address # vs Digits 1st Line
• 48782 & 49969 (2 matches)
  – Name 2 vs 1st Line
  – 1st Line vs Address #

Vendor ID  Address                       Address Type
45463      3 - TT8 9UX                   Address #
48923      3 - TT8 9UX                   Digits 1st Line
48782      Mos Eisley Cantina - SW1 3PO  Name 2
48782      8 - SW1 3PO                   1st Line
49969      Mos Eisley Cantina - SW1 3PO  1st Line
49969      8 - SW1 3PO                   Address #
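With every variant in one column, "duplicates over multiple fields" reduces to grouping by the combined Address and keeping groups that span more than one Vendor ID. A Python sketch of that final step (the deck does this with ACL's duplicates test; the rows are the ones from the slide):

```python
from collections import defaultdict

# Combined rows as (vendor_id, address, address_type); group by address and
# report addresses shared by more than one vendor -- the cross-field matches.
rows = [
    ("45463", "3 - TT8 9UX", "Address #"),
    ("48923", "3 - TT8 9UX", "Digits 1st Line"),
    ("48782", "Mos Eisley Cantina - SW1 3PO", "Name 2"),
    ("48782", "8 - SW1 3PO", "1st Line"),
    ("49969", "Mos Eisley Cantina - SW1 3PO", "1st Line"),
    ("49969", "8 - SW1 3PO", "Address #"),
]

groups = defaultdict(list)
for vendor, address, addr_type in rows:
    groups[address].append((vendor, addr_type))

# Keep only addresses that appear under at least two distinct vendor IDs.
matches = {addr: hits for addr, hits in groups.items()
           if len({vendor for vendor, _ in hits}) > 1}
```

Each surviving group names both the vendors involved and which source field produced each side of the match, exactly the information shown on the slide.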
16. Duplicates in different fields
COMMENT ** Extract all fields to Combined_Addresses
EXTRACT FIELDS TO "Combined_Addresses"
Vendor_Number
SUBSTRING(Name_2 + "-" + Post_Code, 1, 50) AS "Address"
SUBSTRING("Name 2", 1, 20) AS "Type"
EXTRACT FIELDS TO "Combined_Addresses" APPEND
Vendor_Number
SUBSTRING(INCLUDE(UPPER(Name_2), "A…Z") + "-" + Post_Code, 1, 50)
SUBSTRING("Chars Name 2", 1, 20)
EXTRACT FIELDS TO "Combined_Addresses" APPEND
Vendor_Number
SUBSTRING(INCLUDE(Name_2, "0123456789") + "-" + Post_Code, 1, 50)
SUBSTRING("Digits Name 2", 1, 20)
COMMENT ** Repeat for next field
17. Auto-retrieve SAP Direct Link jobs
• ACL Direct Link has two modes:
  1. Extract Now
     • For small queries, 488 KB or less
     • ACL retrieves data as soon as possible
  2. Background
     • For larger queries
     • User has to manually retrieve data
• Background retrieval can be automated by using SAP table TBTCO (Job Status Overview):

Job Name  Start Date  Start Time  User      Status  End Date    End Time
BSEG.DAT  13/01/2014  09:52:18    APsarras  F       13/01/2014  13:49:02
BKPF.DAT  13/01/2014  09:58:34    APsarras  R
18. Auto-retrieve SAP Direct Link jobs
How to automate SAP retrievals
1. Submit all queries in background mode
2. Capture and log the SAP job names
3. For each un-retrieved table
a) Check TBTCO and see if the data is ready
b) If ready then retrieve data and log findings
c) Move on to next un-retrieved table
4. Wait X minutes and repeat step 3
5. Continue once we have all files
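The five steps above can be sketched as a generic polling loop. This is an illustration in Python, not the deck's ACL script; is_ready and retrieve are hypothetical stand-ins for the TBTCO status check and the Direct Link retrieval:

```python
import time

def retrieve_all(jobs, is_ready, retrieve, wait_seconds=600, sleep=time.sleep):
    """Poll until every submitted background job has been retrieved.

    jobs: job names already submitted in background mode and logged (steps 1-2).
    is_ready(job): stands in for checking TBTCO for a finished status (step 3a).
    retrieve(job): stands in for the Direct Link retrieval + logging (step 3b).
    """
    pending = set(jobs)
    log = []
    while pending:
        for job in sorted(pending):
            if is_ready(job):
                log.append(retrieve(job))
                pending.discard(job)      # step 3c: move on to the next table
        if pending:
            sleep(wait_seconds)           # step 4: wait X minutes and repeat
    return log                            # step 5: continue once we have all files
```

Injecting the sleep function keeps the loop testable; in production the default time.sleep plays the role of the EXECUTE "TIMEOUT /t 600" call shown later in the deck.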
20. Auto-retrieve SAP Direct Link jobs
2. Capture and log the SAP job names

Job Name  Submit Date  Submit Time  Complete  Complete Date  Complete Time  Records
EKKO.DAT  13/01/2014   09:52:19     N
LFA1.DAT  13/01/2014   09:52:18     N
BSEG.DAT  13/01/2014   09:52:18     N
21. Auto-retrieve SAP Direct Link jobs
3. For each un-retrieved table

Log:
Job Name  Submit Date  Submit Time  Complete  Complete Date  Complete Time  Records
EKKO.DAT  13/01/2014   09:52:19     N
LFA1.DAT  13/01/2014   09:52:18     N
BSEG.DAT  13/01/2014   09:52:18     N

TBTCO:
Job Name  Start Date  Start Time  User      Status  End Date  End Time
BSEG.DAT  13/01/2014  09:52:18    APsarras  R
22. Auto-retrieve SAP Direct Link jobs
3. For each un-retrieved table

Log:
Job Name  Submit Date  Submit Time  Complete  Complete Date  Complete Time  Records
EKKO.DAT  13/01/2014   09:52:19     N
LFA1.DAT  13/01/2014   09:52:18     Y         13/01/2014     09:52:59       52,485
BSEG.DAT  13/01/2014   09:52:18     N

TBTCO:
Job Name  Start Date  Start Time  User      Status  End Date    End Time
LFA1.DAT  13/01/2014  09:52:18    APsarras  F       13/01/2014  09:52:59
23. Auto-retrieve SAP Direct Link jobs
3. For each un-retrieved table

Log:
Job Name  Submit Date  Submit Time  Complete  Complete Date  Complete Time  Records
EKKO.DAT  13/01/2014   09:52:19     N
LFA1.DAT  13/01/2014   09:52:18     Y         13/01/2014     09:52:59       52,485
BSEG.DAT  13/01/2014   09:52:18     N

TBTCO:
Job Name  Start Date  Start Time  User      Status  End Date  End Time
EKKO.DAT  13/01/2014  09:52:19    APsarras  R
24. Auto-retrieve SAP Direct Link jobs
4. All tables retrieved?
No - so wait 10 minutes:
EXECUTE "TIMEOUT /t 600"
Then go back to step 3.
25. Auto-retrieve SAP Direct Link jobs
3. For each un-retrieved table

Job Name  Submit Date  Submit Time  Complete  Complete Date  Complete Time  Records
EKKO.DAT  13/01/2014   09:52:19     N
LFA1.DAT  13/01/2014   09:52:18     Y         13/01/2014     09:52:59       52,485
BSEG.DAT  13/01/2014   09:52:18     N
26. Auto-retrieve SAP Direct Link jobs
3. For each un-retrieved table
Wait 10 minutes!

Job Name  Submit Date  Submit Time  Complete  Complete Date  Complete Time  Records
EKKO.DAT  13/01/2014   09:52:19     N
LFA1.DAT  13/01/2014   09:52:18     Y         13/01/2014     09:52:59       52,485
BSEG.DAT  13/01/2014   09:52:18     N
28. Auto-retrieve SAP Direct Link jobs
3. For each un-retrieved table

Job Name  Submit Date  Submit Time  Complete  Complete Date  Complete Time  Records
EKKO.DAT  13/01/2014   09:52:19     Y         13/01/2014     11:38:12       11,848,179
LFA1.DAT  13/01/2014   09:52:18     Y         13/01/2014     09:52:59       52,485
BSEG.DAT  13/01/2014   09:52:18     Y         13/01/2014     18:08:32       89,711,009