Deep-Dive into Big Data ETL with 
ODI12c and Oracle Big Data Connectors 
Mark Rittman, CTO, Rittman Mead 
Oracle Openworld 2014, San Francisco 
T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or 
+61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) 
E : info@rittmanmead.com 
W : www.rittmanmead.com
About the Speaker 
•Mark Rittman, Co-Founder of Rittman Mead 
•Oracle ACE Director, specialising in Oracle BI&DW 
•14 Years Experience with Oracle Technology 
•Regular columnist for Oracle Magazine 
•Author of two Oracle Press Oracle BI books 
•Oracle Business Intelligence Developers Guide 
•Oracle Exalytics Revealed 
•Writer for Rittman Mead Blog : 
http://www.rittmanmead.com/blog 
•Email : mark.rittman@rittmanmead.com 
•Twitter : @markrittman
About Rittman Mead 
•Oracle BI and DW Gold partner 
•Winner of five UKOUG Partner of the Year awards in 2013 - including BI 
•World leading specialist partner for technical excellence, 
solutions delivery and innovation in Oracle BI 
•Approximately 80 consultants worldwide 
•All expert in Oracle BI and DW 
•Offices in US (Atlanta), Europe, Australia and India 
•Skills in broad range of supporting Oracle tools: 
‣OBIEE, OBIA 
‣ODI 
‣Essbase, Oracle OLAP 
‣GoldenGate 
‣Endeca
Traditional Data Warehouse / BI Architectures 
•Three-layer architecture - staging, foundation and access/performance 
•All three layers stored in a relational database (Oracle) 
•ETL used to move data from layer-to-layer 
[Diagram: Traditional Relational Data Warehouse - traditional structured data sources load into the Staging layer, ETL moves data into the Foundation / ODS layer, and further ETL into the Performance / Dimensional layer; a BI tool (OBIEE) with a metadata layer reads directly, while an OLAP / in-memory tool loads data into its own database]
Introducing Hadoop 
•A new approach to data processing and data storage 
•Rather than a small number of large, powerful servers, it spreads processing over large numbers of small, cheap, redundant servers 
•Spreads the data you’re processing over lots of distributed nodes 
•Has a scheduling/workload process (the Job Tracker) that sends parts of a job to each of the nodes - a bit like Oracle Parallel Execution 
•And does the processing where the data sits - a bit like Exadata storage servers 
•Shared-nothing architecture 
•Low-cost and highly horizontally scalable 
[Diagram: a Job Tracker distributing work to Task Trackers, each co-located with a Data Node]
Hadoop Tenets : Simplified Distributed Processing 
•Hadoop, through MapReduce, breaks processing down into simple stages 
‣Map : select the columns and values you’re interested in, pass through as key/value pairs 
‣Reduce : aggregate the results 
•Most ETL jobs can be broken down into filtering, projecting and aggregating (see the sketch below) 
•Hadoop then automatically runs the job on the cluster 
‣Shared-nothing small chunks of work 
‣Run the job on the node where the data is 
‣Handle faults etc 
‣Gather the results back in 
[Diagram: three Mappers (filter, project) feed two Reducers (aggregate); the output is one HDFS file per reducer, in a directory]
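
To make the Map (filter, project) and Reduce (aggregate) stages concrete, here is a minimal Hadoop Streaming-style sketch - not from the original deck; the tab-separated input and the HTTP status filter are illustrative assumptions. The two scripts would be run with the hadoop-streaming JAR, passed as the -mapper and -reducer options. 

# mapper.py - filter and project: emit a (status, 1) key/value pair per log line
import sys
for line in sys.stdin:
    fields = line.rstrip('\n').split('\t')        # assumes tab-separated input
    if len(fields) >= 3 and fields[2] == '404':   # filter: keep only 404 requests
        print fields[2] + '\t' + '1'              # project down to key/value

# reducer.py - aggregate: sum the counts per key (input arrives sorted by key)
import sys
current_key, total = None, 0
for line in sys.stdin:
    key, value = line.rstrip('\n').split('\t')
    if key != current_key and current_key is not None:
        print current_key + '\t' + str(total)     # key change: emit previous total
        total = 0
    current_key = key
    total += int(value)
if current_key is not None:
    print current_key + '\t' + str(total)         # emit the final key's total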
HDFS: Low-Cost, Clustered, Fault-Tolerant Storage 
•The filesystem behind Hadoop, used to store data for Hadoop analysis 
‣Unix-like, uses commands such as ls, mkdir, chown, chmod 
•Fault-tolerant, with rapid fault detection and recovery 
•High-throughput, with streaming data access and large block sizes 
•Designed for data-locality, placing data close to where it is processed 
•Accessed from the command-line, via the hdfs:// URI scheme, GUI tools etc 
[oracle@bigdatalite mapreduce]$ hadoop fs -mkdir /user/oracle/my_stuff 
[oracle@bigdatalite mapreduce]$ hadoop fs -ls /user/oracle 
Found 5 items 
drwx------ - oracle hadoop 0 2013-04-27 16:48 /user/oracle/.staging 
drwxrwxrwx - oracle hadoop 0 2012-09-18 17:02 /user/oracle/moviedemo 
drwxrwxrwx - oracle hadoop 0 2012-10-17 15:58 /user/oracle/moviework 
drwxrwxrwx - oracle hadoop 0 2013-05-03 17:49 /user/oracle/my_stuff 
drwxrwxrwx - oracle hadoop 0 2012-08-10 16:08 /user/oracle/stage 
Oracle’s Big Data Products 
•Oracle Big Data Appliance - Engineered System for Big Data Acquisition and Processing 
‣Cloudera Distribution of Hadoop 
‣Cloudera Manager 
‣Open-source R 
‣Oracle NoSQL Database Community Edition 
‣Oracle Enterprise Linux + Oracle JVM 
‣New - Oracle Big Data SQL 
•Oracle Big Data Connectors 
‣Oracle Loader for Hadoop (Hadoop > Oracle RDBMS) 
‣Oracle Direct Connector for HDFS (HDFS > Oracle RDBMS) 
‣Oracle Data Integration Adapter for Hadoop 
‣Oracle R Connector for Hadoop 
‣Oracle NoSQL Database (key/value store DB based on BerkeleyDB) 
Moving Data In, Around and Out of Hadoop 
•Three stages to Hadoop ETL work, with dedicated Apache / other tools 
‣Load : receive files in batch, or in real-time (logs, events) 
‣Transform : process & transform data to answer questions 
‣Store / Export : store in structured form, or export to RDBMS using Sqoop 
[Diagram: RDBMS imports, real-time logs / events and file / unstructured imports feed the Loading stage; data passes through the Processing stage to the Store / Export stage, which produces file exports and RDBMS exports]
“ETL Offloading” 
•Special use-case : offloading low-value, simple ETL work to a Hadoop cluster 
‣Receiving, aggregating, filtering and pre-processing data for an RDBMS data warehouse 
‣Potentially frees up high-value Exadata / RDBMS servers for analytic work
Core Apache Hadoop Tools 
•Apache Hadoop, including MapReduce and HDFS 
‣Scalable, fault-tolerant file storage for Hadoop 
‣Parallel programming framework for Hadoop 
•Apache Hive 
‣SQL abstraction layer over HDFS 
‣Perform set-based ETL within Hadoop 
•Apache Pig, Spark 
‣Dataflow-type languages over HDFS, Hive etc 
‣Extensible through UDFs, streaming etc 
•Apache Flume, Apache Sqoop, Apache Kafka 
‣Real-time and batch loading into HDFS 
‣Modular, fault-tolerant, wide source/target coverage 
Hive as the Hadoop “Data Warehouse” 
•MapReduce jobs are typically written in Java, but Hive can make this simpler 
•Hive is a query environment over Hadoop/MapReduce to support SQL-like queries 
•Hive server accepts HiveQL queries via HiveODBC or HiveJDBC, automatically 
creates MapReduce jobs against data previously loaded into the Hive HDFS tables 
•Approach used by ODI and OBIEE 
to gain access to Hadoop data 
•Allows Hadoop data to be accessed just like 
any other data source (sort of...) 
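
For example, a SQL-style aggregation such as the one below (column names are illustrative) is compiled by Hive into one or more MapReduce jobs and run across the cluster: 

hive> select category, count(*) 
    > from access_per_post_categories 
    > group by category;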
How Hive Provides SQL Access over Hadoop 
•Hive uses an RDBMS metastore to hold table and column definitions in schemas 
•Hive tables then map onto HDFS-stored files 
‣Managed tables 
‣External tables 
•Oracle-like query optimizer, compiler, executor 
•JDBC and ODBC drivers, plus CLI etc 
[Diagram: Hive Driver (compile, optimize, execute) backed by the Metastore; managed tables live under /user/hive/warehouse/ in HDFS, loaded from HDFS or local files using the HiveQL CREATE TABLE / LOAD DATA commands; external tables map onto HDFS files elsewhere (e.g. /user/oracle/, /user/movies/data/), loaded by an external process and then mapped into Hive using the CREATE EXTERNAL TABLE command]
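
A minimal sketch of the two table types (paths and columns are illustrative): 

hive> CREATE TABLE managed_logs (log_line STRING); 

hive> CREATE EXTERNAL TABLE external_logs (log_line STRING) 
    > LOCATION '/user/oracle/weblogs/'; 

Dropping a managed table deletes its files under /user/hive/warehouse/; dropping an external table leaves the underlying HDFS files in place.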
Oracle Loader for Hadoop 
•Oracle technology for accessing Hadoop data, and loading it into an Oracle database 
•Pushes data transformation, “heavy lifting” to the Hadoop cluster, using MapReduce 
•Direct-path loads into Oracle Database, partitioned and non-partitioned 
•Online and offline loads 
•Key technology for fast load of 
Hadoop results into Oracle DB
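
A typical OLH load is submitted as a MapReduce job from the command line - a sketch, assuming OLH is unpacked under $OLH_HOME and the job (input format, target table, connection) is described in an XML configuration file: 

hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader \
  -conf my_load_config.xml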
Oracle Direct Connector for HDFS 
•Enables HDFS as a data-source for Oracle Database external tables 
•Effectively provides Oracle SQL access over HDFS 
•Supports data query, or import into Oracle DB 
•Treat HDFS-stored files in the same way as regular files 
‣But with HDFS’s low-cost 
‣… and fault-tolerance 
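
On the Oracle side this surfaces as an external table whose preprocessor streams file contents out of HDFS. A rough sketch only - in practice the connector's setup tool generates the DDL for you, and the directory, preprocessor and location file names below are assumptions: 

CREATE TABLE weblog_summary_ext (
  category   VARCHAR2(100),
  page_views NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY weblog_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' -- assumption: connector-supplied streaming preprocessor
    FIELDS TERMINATED BY ','
  )
  LOCATION ('weblog_summary.loc')              -- assumption: location file generated by the connector
);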
Oracle R Advanced Analytics for Hadoop 
•Add-in to R that extends capability to Hadoop 
•Gives R the ability to create Map and Reduce functions 
•Extends R data frames to include Hive tables 
‣Automatically run R functions on Hadoop 
by using Hive tables as source 
Just Released - Oracle Big Data SQL 
•Part of Oracle Big Data Appliance 4.0 (BDA-only) 
‣Also requires Oracle Database 12c, Oracle Exadata Database Machine 
•Extends Oracle Data Dictionary to cover Hive 
•Extends Oracle SQL and SmartScan to Hadoop 
•Extends Oracle Security Model over Hadoop 
‣Fine-grained access control 
‣Data redaction, data masking 
[Diagram: SQL queries issued from the Exadata database server span the Exadata storage servers and the Hadoop cluster, with Oracle Big Data SQL providing SmartScan on both]
Bringing it All Together : Oracle Data Integrator 12c 
•ODI provides an excellent framework for running Hadoop ETL jobs 
‣ELT approach pushes transformations down to Hadoop - leveraging power of cluster 
•Hive, HBase, Sqoop and OLH/ODCH KMs provide native Hadoop loading / transformation 
‣Whilst still preserving RDBMS push-down 
‣Extensible to cover Pig, Spark etc 
•Process orchestration 
•Data quality / error handling 
•Metadata and model-driven 
The Key to ODI Extensibility - Knowledge Modules 
•Divides the ETL process into separate steps - extract (load), integrate, check constraints etc 
•ODI generates native code for each platform, taking a template for each step + adding 
table names, column names, join conditions etc 
‣Easy to extend 
‣Easy to read the code 
‣Makes it possible for ODI to 
support Spark, Pig etc in future 
‣Uses the power of the target 
platform for integration tasks 
[Screenshot: Hadoop-native ETL code generated from a Knowledge Module template]
Part of the Wider Oracle Data Integration Platform 
•Oracle Data Integrator for large-scale data integration across heterogeneous sources and targets 
•Oracle GoldenGate for heterogeneous data replication and changed data capture 
•Oracle Enterprise Data Quality for data profiling and cleansing 
•Oracle Data Services Integrator 
for SOA message-based 
data federation 
ODI and Big Data Integration Example 
•In this example, we’ll show an end-to-end ETL process on Hadoop using ODI12c & BDA 
•Scenario: load webserver log data into Hadoop; process, enhance and aggregate; then load the final summary table into Oracle Database 12c 
‣Process using Hadoop framework 
‣Leverage Big Data Connectors 
‣Metadata-based ETL development 
using ODI12c 
‣Real-world example 
ETL & Data Flow through BDA System 
•Five-step process to load, transform, aggregate and filter incoming log data 
•Leverage ODI’s capabilities where possible 
•Make use of Hadoop power 
+ scalability 
[Diagram: end-to-end data flow - Apache HTTP Server log files are shipped by Flume agents (Flume messaging over TCP port 4545, for example) to log files in HDFS; (1) IKM File to Hive, using a RegEx SerDe, loads them into the hive_raw_apache_access_log Hive table; (2) IKM Hive Control Append joins that table to the posts and log_entries_and_post_detail Hive tables (the latter staged by a Sqoop extract) and loads a target Hive table; (3) reference data staged from Oracle into the categories_sql_extract Hive table is joined in with another IKM Hive Control Append step; (4) IKM Hive Transform geocodes the rows via Hive streaming through a Python script, using a Geocoding IP>Country list Hive table; (5) IKM File / Hive to Oracle bulk-unloads the summary to the Oracle DB]
ETL Considerations : Using Hive vs. Regular Oracle SQL 
•Not all join types are available in Hive - joins must be equality joins 
•No sequences, no primary keys on tables 
•Generally need to stage Oracle or other external data into Hive before joining to it 
•Hive latency - not good for small microbatch-type work 
‣But other alternatives exist - Spark, Impala etc 
•Hive is INSERT / APPEND only - no updates, deletes etc 
‣But HBase may be suitable for CRUD-type loading 
•Don’t assume that HiveQL == Oracle SQL 
‣Test assumptions before committing to platform 
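
For instance, because Hive tables are append-only, the usual pattern is to rewrite a whole table (or partition) with INSERT OVERWRITE rather than update rows in place (table names are illustrative): 

hive> insert overwrite table page_views_summary 
    > select category, count(*) from access_per_post_categories group by category;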
Five-Step ETL Process 
1. Take the incoming log files (via Flume) and load into a structured Hive table 
2. Enhance data from that table to include details on authors, posts from other Hive tables 
3. Join to some additional ref. data held in an Oracle database, to add author details 
4. Geocode the log data, so that we have the country for each calling IP address 
5. Output the data in summary form to an Oracle database
Using Flume to Transport Log Files to BDA 
•Apache Flume is the standard way to transport log files from source through to target 
•Initial use-case was webserver log files, but it can transport any file from A to B 
•Does not do data transformation, but can send to multiple targets / target types 
•Mechanisms and checks to ensure successful transport of entries 
•Has a concept of “agents”, “sinks” and “channels” 
•Agents collect and forward log data 
•Sinks store it in final destination 
•Channels store log data en-route 
•Simple configuration through INI-style properties files (see the sketch below) 
•Handled outside of ODI12c 
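
As a sketch of what such an agent definition might look like - the source type, paths and names here are assumptions, not the deck's actual config: 

cat > weblog-agent.conf <<'EOF'
agent1.sources  = weblogs
agent1.channels = mem1
agent1.sinks    = hdfs1

# source: pick up rotated Apache log files from a spool directory
agent1.sources.weblogs.type = spooldir
agent1.sources.weblogs.spoolDir = /var/log/apache/incoming
agent1.sources.weblogs.channels = mem1

# channel: buffer events in memory en-route
agent1.channels.mem1.type = memory

# sink: land events as files in HDFS
agent1.sinks.hdfs1.type = hdfs
agent1.sinks.hdfs1.hdfs.path = /user/oracle/rm_logs/
agent1.sinks.hdfs1.channel = mem1
EOF

flume-ng agent --name agent1 --conf-file weblog-agent.conf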
GoldenGate for Continuous Streaming to Hadoop 
•Oracle GoldenGate is also an option, for streaming RDBMS transactions to Hadoop 
•Leverages GoldenGate & HDFS / Hive Java APIs 
•Sample Implementations on MOS Doc.ID 1586210.1 (HDFS) and 1586188.1 (Hive) 
•Likely to be formal part of GoldenGate in future release - but usable now 
Load Incoming Log Files into Hive Table (Step 1) 
•First step in process is to load the incoming log files into a Hive table 
‣Also need to parse the log entries to extract request, date, IP address etc columns 
‣Hive table can then easily be used in 
downstream transformations 
•Use IKM File to Hive (LOAD DATA) KM 
‣Source can be local files or HDFS 
‣Either load file into Hive HDFS area, 
or leave as external Hive table 
‣Ability to use SerDe to parse file data 
First Though … Need to Set Up Topology and Models 
•HDFS data servers (source) defined using generic File technology 
•Workaround to support IKM Hive Control Append 
•Leave the JDBC driver blank, and put the HDFS URL in the JDBC URL field (e.g. hdfs://bigdatalite:8020) 
Defining Physical Schema and Model for HDFS Directory 
•Hadoop processes typically access a whole directory of files in HDFS, rather than single one 
•Hive, Pig etc aggregate all files in that directory and treat as single file 
•ODI Models usually point to a single file though - how do you set up access correctly? 
‣Point the datastore’s resource name at the HDFS directory itself, so the files within it are treated as one source 
Defining Topology and Model for Hive Sources 
•Hive supported “out-of-the-box” with ODI12c (but requires ODIAAH license for KMs) 
•Most recent Hadoop distributions use HiveServer2 rather than HiveServer 
•Need to ensure the JDBC drivers support your Hive version 
•Use the correct JDBC URL format (e.g. jdbc:hive2://bigdatalite:10000/default) 
Final Model and Datastore Definitions 
•HDFS files for incoming log data, and any other input data 
•Hive tables for ETL targets and downstream processing 
•Use RKM Hive to reverse-engineer column definition from Hive 
Using IKM File to Hive to Load Web Log File Data into Hive 
•Create mapping to load file source (single column for weblog entries) into Hive table 
•Target Hive table should have column for incoming log row, and parsed columns 
Specifying a SerDe to Parse Incoming Hive Data 
•SerDe (Serializer-Deserializer) interfaces give Hive the ability to process new file formats 
•Distributed as JAR file, gives Hive ability to parse semi-structured formats 
•We can use the RegEx SerDe to parse the Apache CombinedLogFormat file into columns 
•Enabled through OVERRIDE_ROW_FORMAT IKM File to Hive (LOAD DATA) KM option 
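
As a sketch, the override row format for the Apache combined log format follows the standard Hive RegexSerDe example; note the SerDe class lives in org.apache.hadoop.hive.contrib.serde2 on older Hive releases: 

ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?"
)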
Executing First ODI12c Mapping 
•EXTERNAL_TABLE option chosen in IKM File to Hive (LOAD DATA) as Flume will continue writing to the file until the source log rotates 
•View results of data load in ODI Studio 
Join to Additional Hive Tables, Transform using HiveQL (Step 2) 
•IKM Hive to Hive Control Append can be used to perform Hive table joins, filtering, agg. etc. 
•INSERT only, no DELETE, UPDATE etc 
•Not all ODI12c mapping operators supported, but basic functionality works OK 
•Use this KM to join to other Hive tables, 
adding more details on post, title etc 
•Perform DISTINCT on join output, load 
into summary Hive table 
Joining Hive Tables 
•Only equi-joins supported 
•Must use ANSI syntax 
•More complex joins may not produce 
valid HiveQL (subqueries etc)
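
For example, a supported equi-join in ANSI syntax, joining the parsed log rows to the posts table used earlier in the process (column names are illustrative): 

hive> select l.host, p.title 
    > from hive_raw_apache_access_log l 
    > join posts p on (l.post_id = p.post_id);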
Filtering, Aggregating and Transforming Within Hive 
•Aggregate (GROUP BY), DISTINCT, FILTER, EXPRESSION, JOIN, SORT etc mapping 
operators can be added to mapping to manipulate data 
•Generates HiveQL functions, clauses etc 
Executing Second Mapping 
•ODI IKM Hive to Hive Control Append generates HiveQL to perform data loading 
•In the background, Hive on BDA creates MapReduce job(s) to load and transform HDFS data 
•Automatically runs across the cluster, in parallel and with fault tolerance, HA 
Bring in Reference Data from Oracle Database (Step 3) 
•In this third step, additional reference data from Oracle Database needs to be added 
•In theory, should be able to add Oracle-sourced datastores to mapping and join as usual 
•But … Oracle / JDBC-generic LKMs don’t work with Hive 
Options for Importing Oracle / RDBMS Data into Hadoop 
•Could export RDBMS data to file, and load using IKM File to Hive 
•Oracle Big Data Connectors only export to Oracle, not import to Hadoop 
•Best option is to use Apache Sqoop, and new 
IKM SQL to Hive-HBase-File knowledge module 
•Hadoop-native, automatically runs in parallel 
•Uses native JDBC drivers, or OraOop (for example) 
•Bi-directional in-and-out of Hadoop to RDBMS 
•Run from OS command-line 
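
Under the covers this equates to a Sqoop import along these lines - connection details, credentials and table names are illustrative: 

sqoop import \
  --connect jdbc:oracle:thin:@//oradb.example.com:1521/orcl \
  --username BLOG_REFDATA --password welcome1 \
  --table POST_CATEGORIES \
  --hive-import --hive-table categories_sql_extract \
  --num-mappers 4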
Loading RDBMS Data into Hive using Sqoop 
•First step is to stage Oracle data into equivalent Hive table 
•Use special LKM SQL Multi-Connect Global load knowledge module for Oracle source 
‣Passes responsibility for load (extract) to following IKM 
•Then use IKM SQL to Hive-HBase-File (Sqoop) to load the Hive table 
Join Oracle-Sourced Hive Table to Existing Hive Table 
•Oracle-sourced reference data in Hive can then be joined to existing Hive table as normal 
•Filters, aggregation operators etc can be added to mapping if required 
•Use IKM Hive Control Append as integration KM 
ODI Static and Flow Control : Data Quality and Error Handling 
•CKM Hive can be used with IKM Hive to Hive Control Append to filter out erroneous data 
•Static controls can be used to create “data firewalls” 
•Flow control used in Physical mapping view to handle errors, exceptions 
•Example: Filter out rows where IP address is from a test harness 
Enabling Flow Control in IKM Hive to Hive Control Append 
•Check the ENABLE_FLOW_CONTROL option in KM settings 
•Select CKM Hive as the check knowledge module 
•Erroneous rows will get moved to E_ table in Hive, not loaded into target Hive table 
Using Hive Streaming and Python for Geocoding Data (Step 4) 
•Another requirement we have is to “geocode” the webserver log entries 
•Allows us to aggregate page views by country 
•Based on the fact that IP ranges can usually be attributed to specific countries 
•Not functionality normally found in Hive etc, but can be done with add-on APIs 
How GeoIP Geocoding Works 
•Uses free Geocoding API and database from Maxmind 
•Convert IP address to an integer 
•Find which integer range our IP address sits within 
•But Hive can’t use BETWEEN in a join… 
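
To make the range-lookup idea concrete, the conversion treats the four octets as a base-256 number - a quick illustrative sketch, not part of the deck: 

def ip_to_int(ip):
    # e.g. '192.168.1.1' -> 192*256**3 + 168*256**2 + 1*256 + 1 = 3232235777
    o1, o2, o3, o4 = [int(x) for x in ip.split('.')]
    return (o1 << 24) | (o2 << 16) | (o3 << 8) | o4

Each country’s IP range then becomes a pair of integers, and geocoding is a matter of finding which range the converted address falls between - hence the need for a BETWEEN-style join, which Hive can’t express.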
Solution : IKM Hive Transform 
•IKM Hive Transform can pass the output of a Hive SELECT statement through 
a perl, python, shell etc script to transform content 
•Uses Hive TRANSFORM … USING … AS functionality 
hive> add file file:///tmp/add_countries.py; 
Added resource: file:///tmp/add_countries.py 
hive> select transform (hostname,request_date,post_id,title,author,category) 
> using 'add_countries.py' 
> as (hostname,request_date,post_id,title,author,category,country) 
> from access_per_post_categories; 
Creating the Python Script for Hive Streaming 
•Solution requires a Python API to be installed on all Hadoop nodes, along with geocode DB 
wget https://raw.github.com/pypa/pip/master/contrib/get-pip.py 
python get-pip.py 
pip install pygeoip 
•The Python script then parses incoming stdin lines using tab-separated fields, and outputs the same (but with an extra field for the country) 
#!/usr/bin/python 
import sys 
sys.path.append('/usr/lib/python2.6/site-packages/') 
import pygeoip 
# load the Maxmind GeoIP database copied to /tmp on each node 
gi = pygeoip.GeoIP('/tmp/GeoIP.dat') 
# read tab-separated rows from Hive on stdin; output column order must match 
# the TRANSFORM ... AS (...) clause, with country appended as the last field 
for line in sys.stdin: 
    line = line.rstrip() 
    hostname,request_date,post_id,title,author,category = line.split('\t') 
    country = gi.country_name_by_addr(hostname) 
    print hostname+'\t'+request_date+'\t'+post_id+'\t'+title+'\t'+author+'\t'+category+'\t'+country
Setting up the Mapping 
•Map source Hive table to target, which includes an extra “country” column 
•Copy script + GeoIP.dat file to every node’s /tmp directory 
•Ensure all Python APIs and libraries are installed on each Hadoop node
Configuring IKM Hive Transform 
•TRANSFORM_SCRIPT_NAME specifies the name of the script, and the path to it 
•TRANSFORM_SCRIPT has issues with parsing the script body; leave it blank and the KM will use the existing script file instead 
•Optional ability to specify sort and distribution 
columns (can be compound) 
•Leave other options at default 
Executing the Mapping 
•KM automatically registers the script with Hive (which caches it on all nodes) 
•HiveQL output then runs the contents of the first Hive table through the script, outputting 
results to target table
Bulk Unload Summary Data to Oracle Database (Step 5) 
•Final requirement is to unload final Hive table contents to Oracle Database 
•Several use-cases for this: 
•Use Hadoop / BDA for ETL offloading 
•Use analysis capabilities of BDA, but then output results to RDBMS data mart or DW 
•Permit use of more advanced SQL query tools 
•Share results with other applications 
•Can use Sqoop for this, or use Oracle Big Data Connectors 
•Fast bulk unload, or transparent Oracle access to Hive 
IKM File/Hive to Oracle (OLH/ODCH) 
•KM for accessing HDFS/Hive data from Oracle 
•Either sets up ODCH connectivity, or bulk-unloads via OLH 
•Map from HDFS or Hive source to Oracle tables (via Oracle technology in Topology) 
Configuring the KM Physical Settings 
•For the access table in Physical view, change LKM to LKM SQL Multi-Connect 
•Delegates the multi-connect capabilities to the downstream node, so you can use a multi-connect 
IKM such as IKM File/Hive to Oracle 
Configuring the KM Physical Settings 
•For the target table, select IKM File/Hive to Oracle 
•Only becomes available to select once 
LKM SQL Multi-Connect selected for access table 
•Key option values to set are: 
•OLH_OUTPUT_MODE (use JDBC initially, OCI if the Oracle Client is installed on the Hadoop client node) 
•MAPRED_OUTPUT_BASE_DIR (set to a directory on HDFS that the OS user running ODI can access) 
Executing the Mapping 
•Executing the mapping will invoke 
OLH from the OS command line 
•Hive table (or HDFS file) contents 
copied to Oracle table
Create Package to Sequence ETL Steps 
•Define package (or load plan) within ODI12c to orchestrate the process 
•Call package / load plan execution from command-line, web service call, or schedule 
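
For example, a scenario generated from the package can be invoked from the standalone agent’s bin directory - the scenario name, version and context below are illustrative: 

./startscen.sh LOAD_WEBLOGS 001 GLOBAL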
Execute Overall Package 
•Each step executed in sequence 
•End-to-end ETL process, using ODI12c’s metadata-driven development process, data quality handling, heterogeneous connectivity, but Hadoop-native processing
Conclusions 
•Hadoop, and the Oracle Big Data Appliance, make an excellent platform for data capture, analysis and processing 
•Hadoop tools such as Hive, Sqoop, MapReduce and Pig provide the means to process and analyse data in parallel, using languages and approaches familiar to Oracle developers 
•ODI12c provides several benefits when working with ETL and data loading on Hadoop 
‣Metadata-driven design; data quality handling; KMs to handle technical complexity 
•Oracle Data Integrator Adapter for Hadoop provides several KMs for Hadoop sources 
•In this presentation, we’ve seen an end-to-end example of big data ETL using ODI 
‣The power of Hadoop and BDA, with the ETL orchestration of ODI12c
Thank You for Attending! 
•Thank you for attending this presentation, and more information can be found at http://www.rittmanmead.com 
•Contact us at info@rittmanmead.com or mark.rittman@rittmanmead.com 
•Look out for our book, “Oracle Business Intelligence Developers Guide” out now! 
•Follow us on Twitter (@rittmanmead) or Facebook (facebook.com/rittmanmead)
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...Mark Rittman
 
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case StudyOracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case StudyMark Rittman
 
Deploying Full BI Platforms to Oracle Cloud
Deploying Full BI Platforms to Oracle CloudDeploying Full BI Platforms to Oracle Cloud
Deploying Full BI Platforms to Oracle CloudMark Rittman
 
Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...
Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...
Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...Mark Rittman
 
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015Mark Rittman
 

More from Mark Rittman (20)

The Future of Analytics, Data Integration and BI on Big Data Platforms
The Future of Analytics, Data Integration and BI on Big Data PlatformsThe Future of Analytics, Data Integration and BI on Big Data Platforms
The Future of Analytics, Data Integration and BI on Big Data Platforms
 
Using Oracle Big Data Discovey as a Data Scientist's Toolkit
Using Oracle Big Data Discovey as a Data Scientist's ToolkitUsing Oracle Big Data Discovey as a Data Scientist's Toolkit
Using Oracle Big Data Discovey as a Data Scientist's Toolkit
 
From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...
From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...
From lots of reports (with some data Analysis) 
to Massive Data Analysis (Wit...
 
SQL-on-Hadoop for Analytics + BI: What Are My Options, What's the Future?
SQL-on-Hadoop for Analytics + BI: What Are My Options, What's the Future?SQL-on-Hadoop for Analytics + BI: What Are My Options, What's the Future?
SQL-on-Hadoop for Analytics + BI: What Are My Options, What's the Future?
 
Social Network Analysis using Oracle Big Data Spatial & Graph (incl. why I di...
Social Network Analysis using Oracle Big Data Spatial & Graph (incl. why I di...Social Network Analysis using Oracle Big Data Spatial & Graph (incl. why I di...
Social Network Analysis using Oracle Big Data Spatial & Graph (incl. why I di...
 
Using Oracle Big Data SQL 3.0 to add Hadoop & NoSQL to your Oracle Data Wareh...
Using Oracle Big Data SQL 3.0 to add Hadoop & NoSQL to your Oracle Data Wareh...Using Oracle Big Data SQL 3.0 to add Hadoop & NoSQL to your Oracle Data Wareh...
Using Oracle Big Data SQL 3.0 to add Hadoop & NoSQL to your Oracle Data Wareh...
 
IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...
IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...
IlOUG Tech Days 2016 - Big Data for Oracle Developers - Towards Spark, Real-T...
 
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...
IlOUG Tech Days 2016 - Unlock the Value in your Data Reservoir using Oracle B...
 
OTN EMEA Tour 2016 : Deploying Full BI Platforms to Oracle Cloud
OTN EMEA Tour 2016 : Deploying Full BI Platforms to Oracle CloudOTN EMEA Tour 2016 : Deploying Full BI Platforms to Oracle Cloud
OTN EMEA Tour 2016 : Deploying Full BI Platforms to Oracle Cloud
 
OTN EMEA TOUR 2016 - OBIEE12c New Features for End-Users, Developers and Sys...
OTN EMEA TOUR 2016  - OBIEE12c New Features for End-Users, Developers and Sys...OTN EMEA TOUR 2016  - OBIEE12c New Features for End-Users, Developers and Sys...
OTN EMEA TOUR 2016 - OBIEE12c New Features for End-Users, Developers and Sys...
 
Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop :
Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop : Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop :
Enkitec E4 Barcelona : SQL and Data Integration Futures on Hadoop :
 
Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...
Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...
Gluent New World #02 - SQL-on-Hadoop : A bit of History, Current State-of-the...
 
Oracle BI Hybrid BI : Mode 1 + Mode 2, Cloud + On-Premise Business Analytics
Oracle BI Hybrid BI : Mode 1 + Mode 2, Cloud + On-Premise Business AnalyticsOracle BI Hybrid BI : Mode 1 + Mode 2, Cloud + On-Premise Business Analytics
Oracle BI Hybrid BI : Mode 1 + Mode 2, Cloud + On-Premise Business Analytics
 
Riga dev day 2016 adding a data reservoir and oracle bdd to extend your ora...
Riga dev day 2016   adding a data reservoir and oracle bdd to extend your ora...Riga dev day 2016   adding a data reservoir and oracle bdd to extend your ora...
Riga dev day 2016 adding a data reservoir and oracle bdd to extend your ora...
 
Big Data for Oracle Devs - Towards Spark, Real-Time and Predictive Analytics
Big Data for Oracle Devs - Towards Spark, Real-Time and Predictive AnalyticsBig Data for Oracle Devs - Towards Spark, Real-Time and Predictive Analytics
Big Data for Oracle Devs - Towards Spark, Real-Time and Predictive Analytics
 
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...
OBIEE12c and Embedded Essbase 12c - An Initial Look at Query Acceleration Use...
 
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case StudyOracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
Oracle Big Data Spatial & Graph 
Social Media Analysis - Case Study
 
Deploying Full BI Platforms to Oracle Cloud
Deploying Full BI Platforms to Oracle CloudDeploying Full BI Platforms to Oracle Cloud
Deploying Full BI Platforms to Oracle Cloud
 
Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...
Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...
Adding a Data Reservoir to your Oracle Data Warehouse for Customer 360-Degree...
 
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
Deploying Full Oracle BI Platforms to Oracle Cloud - OOW2015
 

Recently uploaded

Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Hr365.us smith
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作qr0udbr0
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024StefanoLambiase
 
Unveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML DiagramsUnveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML DiagramsAhmed Mohamed
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Andreas Granig
 
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanySuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanyChristoph Pohl
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...stazi3110
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideChristina Lin
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityNeo4j
 
The Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdfThe Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdfPower Karaoke
 
What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....kzayra69
 
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdfGOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdfAlina Yurenko
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesŁukasz Chruściel
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantAxelRicardoTrocheRiq
 
Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureDinusha Kumarasiri
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...gurkirankumar98700
 
chapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptchapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptkotipi9215
 
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)jennyeacort
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmSujith Sukumaran
 

Recently uploaded (20)

Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)Recruitment Management Software Benefits (Infographic)
Recruitment Management Software Benefits (Infographic)
 
英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作英国UN学位证,北安普顿大学毕业证书1:1制作
英国UN学位证,北安普顿大学毕业证书1:1制作
 
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
Dealing with Cultural Dispersion — Stefano Lambiase — ICSE-SEIS 2024
 
Unveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML DiagramsUnveiling Design Patterns: A Visual Guide with UML Diagrams
Unveiling Design Patterns: A Visual Guide with UML Diagrams
 
Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024Automate your Kamailio Test Calls - Kamailio World 2024
Automate your Kamailio Test Calls - Kamailio World 2024
 
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte GermanySuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
SuccessFactors 1H 2024 Release - Sneak-Peek by Deloitte Germany
 
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
Building a General PDE Solving Framework with Symbolic-Numeric Scientific Mac...
 
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop SlideBuilding Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
Building Real-Time Data Pipelines: Stream & Batch Processing workshop Slide
 
EY_Graph Database Powered Sustainability
EY_Graph Database Powered SustainabilityEY_Graph Database Powered Sustainability
EY_Graph Database Powered Sustainability
 
The Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdfThe Evolution of Karaoke From Analog to App.pdf
The Evolution of Karaoke From Analog to App.pdf
 
What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....What are the key points to focus on before starting to learn ETL Development....
What are the key points to focus on before starting to learn ETL Development....
 
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdfGOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
GOING AOT WITH GRAALVM – DEVOXX GREECE.pdf
 
Unveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New FeaturesUnveiling the Future: Sylius 2.0 New Features
Unveiling the Future: Sylius 2.0 New Features
 
Salesforce Certified Field Service Consultant
Salesforce Certified Field Service ConsultantSalesforce Certified Field Service Consultant
Salesforce Certified Field Service Consultant
 
Implementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with AzureImplementing Zero Trust strategy with Azure
Implementing Zero Trust strategy with Azure
 
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
(Genuine) Escort Service Lucknow | Starting ₹,5K To @25k with A/C 🧑🏽‍❤️‍🧑🏻 89...
 
chapter--4-software-project-planning.ppt
chapter--4-software-project-planning.pptchapter--4-software-project-planning.ppt
chapter--4-software-project-planning.ppt
 
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
Call Us🔝>༒+91-9711147426⇛Call In girls karol bagh (Delhi)
 
Intelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalmIntelligent Home Wi-Fi Solutions | ThinkPalm
Intelligent Home Wi-Fi Solutions | ThinkPalm
 
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort ServiceHot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
Hot Sexy call girls in Patel Nagar🔝 9953056974 🔝 escort Service
 

Deep-Dive into Big Data ETL with ODI12c and Oracle Big Data Connectors

  • 1. Deep-Dive into Big Data ETL with ODI12c and Oracle Big Data Connectors Mark Rittman, CTO, Rittman Mead Oracle Openworld 2014, San Francisco T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 2. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com About the Speaker •Mark Rittman, Co-Founder of Rittman Mead •Oracle ACE Director, specialising in Oracle BI&DW •14 Years Experience with Oracle Technology •Regular columnist for Oracle Magazine •Author of two Oracle Press Oracle BI books •Oracle Business Intelligence Developers Guide •Oracle Exalytics Revealed •Writer for Rittman Mead Blog : http://www.rittmanmead.com/blog •Email : mark.rittman@rittmanmead.com •Twitter : @markrittman
  • 3. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com About Rittman Mead •Oracle BI and DW Gold partner •Winner of five UKOUG Partner of the Year awards in 2013 - including BI •World leading specialist partner for technical excellence, solutions delivery and innovation in Oracle BI •Approximately 80 consultants worldwide •All expert in Oracle BI and DW •Offices in US (Atlanta), Europe, Australia and India •Skills in broad range of supporting Oracle tools: ‣OBIEE, OBIA ‣ODIEE ‣Essbase, Oracle OLAP ‣GoldenGate ‣Endeca
  • 4. Traditional Data Warehouse / BI Architectures •Three-layer architecture - staging, foundation and access/performance •All three layers stored in a relational database (Oracle) •ETL used to move data from layer-to-layer T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) Staging Foundation / ODS E : info@rittmanmead.com W : www.rittmanmead.com Performance / Dimensional ETL ETL BI Tool (OBIEE) with metadata layer OLAP / In-Memory Tool with data load into own database Direct Read Data Load Traditional structured data sources Data Load Data Load Data Load Traditional Relational Data Warehouse
  • 5. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Introducing Hadoop •A new approach to data processing and data storage •Rather than a small number of large, powerful servers, it spreads processing over large numbers of small, cheap, redundant servers •Spreads the data you’re processing over lots of distributed nodes •Has scheduling/workload process that sends Job Tracker parts of a job to each of the nodes - a bit like Oracle Parallel Execution •And does the processing where the data sits - a bit like Exadata storage servers •Shared-nothing architecture •Low-cost and highly horizontal scalable Task Tracker Task Tracker Task Tracker Task Tracker Data Node Data Node Task Tracker Task Tracker
  • 6. Hadoop Tenets : Simplified Distributed Processing •Hadoop, through MapReduce, breaks processing down into simple stages ‣Map : select the columns and values you’re interested in, pass through as key/value pairs ‣Reduce : aggregate the results •Most ETL jobs can be broken down into filtering, projecting and aggregating •Hadoop then automatically runs job on cluster ‣Share-nothing small chunks of work ‣Run the job on the node where the data is ‣Handle faults etc ‣Gather the results back in T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Mapper Filter, Project Mapper Filter, Project Mapper Filter, Project Reducer Aggregate Reducer Aggregate Output One HDFS file per reducer, in a directory
  • 7. HDFS: Low-Cost, Clustered, Fault-Tolerant Storage •The filesystem behind Hadoop, used to store data for Hadoop analysis ‣Unix-like, uses commands such as ls, mkdir, chown, chmod •Fault-tolerant, with rapid fault detection and recovery •High-throughput, with streaming data access and large block sizes •Designed for data-locality, placing data close to where it is processed •Accessed from the command-line, via internet (hdfs://), GUI tools etc [oracle@bigdatalite mapreduce]$ hadoop fs -mkdir /user/oracle/my_stuff [oracle@bigdatalite mapreduce]$ hadoop fs -ls /user/oracle Found 5 items drwx------ - oracle hadoop 0 2013-04-27 16:48 /user/oracle/.staging drwxrwxrwx - oracle hadoop 0 2012-09-18 17:02 /user/oracle/moviedemo drwxrwxrwx - oracle hadoop 0 2012-10-17 15:58 /user/oracle/moviework drwxrwxrwx - oracle hadoop 0 2013-05-03 17:49 /user/oracle/my_stuff drwxrwxrwx - oracle hadoop 0 2012-08-10 16:08 /user/oracle/stage T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 8. Oracle’s Big Data Products •Oracle Big Data Appliance - Engineered System for Big Data Acquisition and Processing ‣Cloudera Distribution of Hadoop ‣Cloudera Manager ‣Open-source R ‣Oracle NoSQL Database Community Edition ‣Oracle Enterprise Linux + Oracle JVM ‣New - Oracle Big Data SQL •Oracle Big Data Connectors ‣Oracle Loader for Hadoop (Hadoop > Oracle RDBMS) ‣Oracle Direct Connector for HDFS (HDFS > Oracle RDBMS) ‣Oracle Data Integration Adapter for Hadoop ‣Oracle R Connector for Hadoop ‣Oracle NoSQL Database (key-value store DB based on BerkeleyDB) T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 9. Moving Data In, Around and Out of Hadoop •Three stages to Hadoop ETL work, with dedicated Apache / other tools ‣Load : receive files in batch, or in real-time (logs, events) ‣Transform : process & transform data to answer questions ‣Store / Export : store in structured form, or export to RDBMS using Sqoop (Diagram: Loading Stage > Processing Stage > Store / Export Stage, fed by RDBMS imports, real-time logs / events and file / unstructured imports, ending in file and RDBMS exports) T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 10. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com “ETL Offloading” •Special use-case : offloading low-value, simple ETL work to a Hadoop cluster ‣Receiving, aggregating, filtering and pre-processing data for an RDBMS data warehouse ‣Potentially free-up high-value Exadata / RDBMS servers for analytic work
  • 11. Core Apache Hadoop Tools •Apache Hadoop, including MapReduce and HDFS ‣Scalable, fault-tolerant file storage for Hadoop ‣Parallel programming framework for Hadoop •Apache Hive ‣SQL abstraction layer over HDFS ‣Perform set-based ETL within Hadoop •Apache Pig, Spark ‣Dataflow-type languages over HDFS, Hive etc ‣Extensible through UDFs, streaming etc •Apache Flume, Apache Sqoop, Apache Kafka ‣Real-time and batch loading into HDFS ‣Modular, fault-tolerant, wide source/target coverage T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 12. Hive as the Hadoop “Data Warehouse” •MapReduce jobs are typically written in Java, but Hive can make this simpler •Hive is a query environment over Hadoop/MapReduce to support SQL-like queries •Hive server accepts HiveQL queries via HiveODBC or HiveJDBC, automatically creates MapReduce jobs against data previously loaded into the Hive HDFS tables •Approach used by ODI and OBIEE to gain access to Hadoop data •Allows Hadoop data to be accessed just like any other data source (sort of...) T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 13. How Hive Provides SQL Access over Hadoop •Hive uses an RDBMS metastore to hold table and column definitions in schemas •Hive tables then map onto HDFS-stored files ‣Managed tables ‣External tables •Oracle-like query optimizer, compiler, executor •JDBC and ODBC drivers, plus CLI etc T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com (Diagram: Hive Driver (Compile, Optimize, Execute) backed by the Metastore; Managed Tables under /user/hive/warehouse/ - HDFS or local files loaded into the Hive HDFS area using the HiveQL CREATE TABLE command; External Tables e.g. /user/oracle/, /user/movies/data/ - HDFS files loaded into HDFS by an external process, then mapped into Hive using the CREATE EXTERNAL TABLE command)
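For illustration, a minimal HiveQL sketch of the two table types (table names, columns and HDFS paths here are assumptions, not taken from the presentation):

hive> CREATE TABLE page_views (request_date STRING, url STRING, status INT);
hive> LOAD DATA INPATH '/user/oracle/staged/page_views.tsv' INTO TABLE page_views;
hive> CREATE EXTERNAL TABLE raw_logs (log_line STRING)
    >   LOCATION '/user/oracle/incoming_logs/';

Dropping a managed table removes its data from /user/hive/warehouse/; dropping an external table leaves the underlying HDFS files in place.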
  • 14. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Oracle Loader for Hadoop •Oracle technology for accessing Hadoop data, and loading it into an Oracle database •Pushes data transformation, “heavy lifting” to the Hadoop cluster, using MapReduce •Direct-path loads into Oracle Database, partitioned and non-partitioned •Online and offline loads •Key technology for fast load of Hadoop results into Oracle DB
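As a hedged sketch of how an OLH job is typically submitted from the command line (the configuration file name and its contents are assumptions):

$ hadoop jar $OLH_HOME/jlib/oraloader.jar oracle.hadoop.loader.OraLoader \
    -conf olh_log_summary_conf.xml
# the XML configuration names the HDFS / Hive input, the target Oracle table,
# and the output mode (online JDBC or OCI direct-path, or offline output files)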
  • 15. Oracle Direct Connector for HDFS •Enables HDFS as a data-source for Oracle Database external tables •Effectively provides Oracle SQL access over HDFS •Supports data query, or import into Oracle DB •Treat HDFS-stored files in the same way as regular files ‣But with HDFS’s low-cost ‣… and fault-tolerance T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 16. Oracle R Advanced Analytics for Hadoop •Add-in to R that extends capability to Hadoop •Gives R the ability to create Map and Reduce functions •Extends R data frames to include Hive tables ‣Automatically run R functions on Hadoop by using Hive tables as source T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 17. Just Released - Oracle Big Data SQL •Part of Oracle Big Data 4.0 (BDA-only) ‣Also requires Oracle Database 12c, Oracle Exadata Database Machine •Extends Oracle Data Dictionary to cover Hive •Extends Oracle SQL and SmartScan to Hadoop •Extends Oracle Security Model over Hadoop ‣Fine-grained access control ‣Data redaction, data masking (Diagram: SQL queries run from the Exadata Database Server, with SmartScan on the Exadata Storage Servers and, via Oracle Big Data SQL, on the Hadoop cluster) T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 18. Bringing it All Together : Oracle Data Integrator 12c •ODI provides an excellent framework for running Hadoop ETL jobs ‣ELT approach pushes transformations down to Hadoop - leveraging power of cluster •Hive, HBase, Sqoop and OLH/ODCH KMs provide native Hadoop loading / transformation ‣Whilst still preserving RDBMS push-down ‣Extensible to cover Pig, Spark etc •Process orchestration •Data quality / error handling •Metadata and model-driven T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 19. The Key to ODI Extensibility - Knowledge Modules •Divides the ETL process into separate steps - extract (load), integrate, check constraints etc •ODI generates native code for each platform, taking a template for each step + adding table names, column names, join conditions etc ‣Easy to extend ‣Easy to read the code ‣Makes it possible for ODI to support Spark, Pig etc in future ‣Uses the power of the target platform for integration tasks T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com -Hadoop-native ETL
  • 20. Part of the Wider Oracle Data Integration Platform •Oracle Data Integrator for large-scale data integration across heterogeneous sources and targets •Oracle GoldenGate for heterogeneous data replication and changed data capture •Oracle Enterprise Data Quality for data profiling and cleansing •Oracle Data Services Integrator for SOA message-based data federation T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 21. ODI and Big Data Integration Example •In this example, we’ll show an end-to-end ETL process on Hadoop using ODI12c & BDA •Scenario: load webserver log data into Hadoop, process, enhance and aggregate, then load the final summary table into Oracle Database 12c ‣Process using Hadoop framework ‣Leverage Big Data Connectors ‣Metadata-based ETL development using ODI12c ‣Real-world example T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 22. ETL & Data Flow through BDA System •Five-step process to load, transform, aggregate and filter incoming log data •Leverage ODI’s capabilities where possible •Make use of Hadoop power + scalability (Diagram: (1) Apache HTTP Server log files sent by Flume agents over Flume messaging, TCP port 4545 (example), loaded by IKM File to Hive using a RegEx SerDe into the hive_raw_apache_access_log Hive table; (2) joined to the log_entries_ and post_detail Hive tables with IKM Hive Control Append; (3) Sqoop-extracted posts and categories_sql_extract Hive tables joined in with IKM Hive Control Append; (4) geocoded against an IP>Country list Hive table with IKM Hive Transform, Hive streaming through a Python script; (5) bulk-unloaded to Oracle DB with IKM File / Hive to Oracle) T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 23. ETL Considerations : Using Hive vs. Regular Oracle SQL •Not all join types are available in Hive - joins must be equality joins •No sequences, no primary keys on tables •Generally need to stage Oracle or other external data into Hive before joining to it •Hive latency - not good for small microbatch-type work ‣But other alternatives exist - Spark, Impala etc •Hive is INSERT / APPEND only - no updates, deletes etc ‣But HBase may be suitable for CRUD-type loading •Don’t assume that HiveQL == Oracle SQL ‣Test assumptions before committing to platform T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 24. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Five-Step ETL Process 1. Take the incoming log files (via Flume) and load into a structured Hive table 2. Enhance data from that table to include details on authors, posts from other Hive tables 3. Join to some additional ref. data held in an Oracle database, to add author details 4. Geocode the log data, so that we have the country for each calling IP address 5. Output the data in summary form to an Oracle database
  • 25. Using Flume to Transport Log Files to BDA •Apache Flume is the standard way to transport log files from source through to target •Initial use-case was webserver log files, but can transport any file from A>B •Does not do data transformation, but can send to multiple targets / target types •Mechanisms and checks to ensure successful transport of entries •Has a concept of “agents”, “sinks” and “channels” •Agents collect and forward log data •Sinks store it in final destination •Channels store log data en-route •Simple configuration through INI files •Handled outside of ODI12c T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
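A minimal sketch of one such agent configuration (agent name, directories and host are assumptions), spooling log files from local disk into HDFS:

# weblog_agent.conf : one source, one channel, one sink
weblog.sources = s1
weblog.channels = c1
weblog.sinks = k1
weblog.sources.s1.type = spooldir
weblog.sources.s1.spoolDir = /var/log/apache/spool
weblog.sources.s1.channels = c1
weblog.channels.c1.type = memory
weblog.sinks.k1.type = hdfs
weblog.sinks.k1.hdfs.path = hdfs://bigdatalite:8020/user/oracle/incoming_logs
weblog.sinks.k1.channel = c1

$ flume-ng agent -n weblog -f weblog_agent.conf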
  • 26. GoldenGate for Continuous Streaming to Hadoop •Oracle GoldenGate is also an option, for streaming RDBMS transactions to Hadoop •Leverages GoldenGate & HDFS / Hive Java APIs •Sample Implementations on MOS Doc.ID 1586210.1 (HDFS) and 1586188.1 (Hive) •Likely to be formal part of GoldenGate in future release - but usable now T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 27. Load Incoming Log Files into Hive Table •First step in process is to load the incoming log files into a Hive table ‣Also need to parse the log entries to extract request, date, IP address etc columns ‣Hive table can then easily be used in downstream transformations •Use IKM File to Hive (LOAD DATA) KM ‣Source can be local files or HDFS ‣Either load file into Hive HDFS area, or leave as external Hive table ‣Ability to use SerDe to parse file data T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com 1
  • 28. First Though … Need to Setup Topology and Models •HDFS data servers (source) defined using generic File technology •Workaround to support IKM Hive Control Append •Leave JDBC driver blank, put HDFS URL in JDBC URL field T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 29. Defining Physical Schema and Model for HDFS Directory •Hadoop processes typically access a whole directory of files in HDFS, rather than single one •Hive, Pig etc aggregate all files in that directory and treat as single file •ODI Models usually point to a single file though - how do you set up access correctly? T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 30. Defining Topology and Model for Hive Sources •Hive supported “out-of-the-box” with ODI12c (but requires ODIAAH license for KMs) •Most recent Hadoop distributions use HiveServer2 rather than HiveServer •Need to ensure JDBC drivers support Hive version •Use correct JDBC URL format (jdbc:hive2://…) T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
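For example, a HiveServer2 URL usually takes this form (the hostname is an assumption; 10000 is the common default port):

jdbc:hive2://bigdatalite:10000/default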
  • 31. Final Model and Datastore Definitions •HDFS files for incoming log data, and any other input data •Hive tables for ETL targets and downstream processing •Use RKM Hive to reverse-engineer column definition from Hive T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 32. Using IKM File to Hive to Load Web Log File Data into Hive •Create mapping to load file source (single column for weblog entries) into Hive table •Target Hive table should have column for incoming log row, and parsed columns T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 33. Specifying a SerDe to Parse Incoming Hive Data •SerDe (Serializer-Deserializer) interfaces give Hive the ability to process new file formats •Distributed as JAR file, gives Hive ability to parse semi-structured formats •We can use the RegEx SerDe to parse the Apache CombinedLogFormat file into columns •Enabled through OVERRIDE_ROW_FORMAT IKM File to Hive (LOAD DATA) KM option T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
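As a sketch of the idea, an external table over the log directory using the RegEx SerDe; the table name and HDFS location are assumptions, and the regex is the commonly-published Apache combined-log pattern:

hive> CREATE EXTERNAL TABLE raw_apache_access_log (
    >   host STRING, identity STRING, remote_user STRING, time_local STRING,
    >   request STRING, status STRING, size STRING, referer STRING, agent STRING)
    > ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
    > WITH SERDEPROPERTIES ("input.regex" = "([^ ]*) ([^ ]*) ([^ ]*) (-|\\[[^\\]]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?")
    > LOCATION '/user/oracle/incoming_logs/';

In ODI, this ROW FORMAT … clause is what goes into the KM’s OVERRIDE_ROW_FORMAT option rather than being typed at the Hive prompt.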
  • 34. Executing First ODI12c Mapping •EXTERNAL_TABLE option chosen in IKM File to Hive (LOAD DATA) as Flume will continue writing to it until the source log rotates •View results of data load in ODI Studio T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 35. Join to Additional Hive Tables, Transform using HiveQL •IKM Hive to Hive Control Append can be used to perform Hive table joins, filtering, agg. etc. •INSERT only, no DELETE, UPDATE etc •Not all ODI12c mapping operators supported, but basic functionality works OK •Use this KM to join to other Hive tables, adding more details on post, title etc •Perform DISTINCT on join output, load into summary Hive table T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com 2
  • 36. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Joining Hive Tables •Only equi-joins supported •Must use ANSI syntax •More complex joins may not produce valid HiveQL (subqueries etc)
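For example (hive_raw_apache_access_log and posts are table names from the example scenario; the join column is an assumption), the first pattern is valid HiveQL, the second is not:

hive> SELECT l.host, p.title, p.author
    > FROM hive_raw_apache_access_log l
    > JOIN posts p ON (l.post_id = p.post_id);    -- ANSI-syntax equi-join : supported
-- a theta-join such as ON (l.ip_int BETWEEN r.ip_from AND r.ip_to) is rejected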
  • 37. Filtering, Aggregating and Transforming Within Hive •Aggregate (GROUP BY), DISTINCT, FILTER, EXPRESSION, JOIN, SORT etc mapping operators can be added to mapping to manipulate data •Generates HiveQL functions, clauses etc T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 38. Executing Second Mapping •ODI IKM Hive to Hive Control Append generates HiveQL to perform data loading •In the background, Hive on BDA creates MapReduce job(s) to load and transform HDFS data •Automatically runs across the cluster, in parallel and with fault tolerance, HA T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 39. Bring in Reference Data from Oracle Database •In this third step, additional reference data from Oracle Database needs to be added •In theory, should be able to add Oracle-sourced datastores to mapping and join as usual •But … Oracle / JDBC-generic LKMs don’t work with Hive T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com 3
  • 40. Options for Importing Oracle / RDBMS Data into Hadoop •Could export RDBMS data to file, and load using IKM File to Hive •Oracle Big Data Connectors only export to Oracle, not import to Hadoop •Best option is to use Apache Sqoop, and new IKM SQL to Hive-HBase-File knowledge module •Hadoop-native, automatically runs in parallel •Uses native JDBC drivers, or OraOop (for example) •Bi-directional in-and-out of Hadoop to RDBMS •Run from OS command-line T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
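As a hedged sketch of the Sqoop command the KM drives under the covers (connection string, credentials and table names are assumptions):

$ sqoop import --connect jdbc:oracle:thin:@oradb:1521/pdb1 \
    --username BLOG_REFDATA --password welcome1 \
    --table POST_AUTHORS \
    --hive-import --hive-table post_authors \
    -m 4    # run 4 parallel map tasks for the extract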
  • 41. Loading RDBMS Data into Hive using Sqoop •First step is to stage Oracle data into equivalent Hive table •Use special LKM SQL Multi-Connect Global load knowledge module for Oracle source ‣Passes responsibility for load (extract) to following IKM •Then use IKM SQL to Hive-HBase-File (Sqoop) to load the Hive table T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 42. Join Oracle-Sourced Hive Table to Existing Hive Table •Oracle-sourced reference data in Hive can then be joined to existing Hive table as normal •Filters, aggregation operators etc can be added to mapping if required •Use IKM Hive Control Append as integration KM T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 43. ODI Static and Flow Control : Data Quality and Error Handling •CKM Hive can be used with IKM Hive to Hive Control Append to filter out erroneous data •Static controls can be used to create “data firewalls” •Flow control used in Physical mapping view to handle errors, exceptions •Example: Filter out rows where IP address is from a test harness T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 44. Enabling Flow Control in IKM Hive to Hive Control Append •Check the ENABLE_FLOW_CONTROL option in KM settings •Select CKM Hive as the check knowledge module •Erroneous rows will get moved to E_ table in Hive, not loaded into target Hive table T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 45. Using Hive Streaming and Python for Geocoding Data •Another requirement we have is to “geocode” the webserver log entries •Allows us to aggregate page views by country •Based on the fact that IP ranges can usually be attributed to specific countries •Not functionality normally found in Hive etc, but can be done with add-on APIs T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com 4
  • 46. How GeoIP Geocoding Works •Uses free Geocoding API and database from Maxmind •Convert IP address to an integer •Find which integer range our IP address sits within •But Hive can’t use BETWEEN in a join… T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
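The conversion step itself is just base-256 arithmetic; for example (arbitrary sample address):

$ ip="174.36.207.186"
$ IFS=. read -r a b c d <<< "$ip"
$ echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
2921648058

The GeoIP data holds (range start, range end, country) rows, so the lookup needs a BETWEEN-style range predicate - exactly the join Hive can’t express, hence the streaming-script approach on the following slides.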
  • 47. Solution : IKM Hive Transform •IKM Hive Transform can pass the output of a Hive SELECT statement through a perl, python, shell etc script to transform content •Uses Hive TRANSFORM … USING … AS functionality hive> add file file:///tmp/add_countries.py; Added resource: file:///tmp/add_countries.py hive> select transform (hostname,request_date,post_id,title,author,category) > using 'add_countries.py' > as (hostname,request_date,post_id,title,author,category,country) > from access_per_post_categories; T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 48. Creating the Python Script for Hive Streaming •Solution requires a Python API to be installed on all Hadoop nodes, along with the geocode DB:
wget https://raw.github.com/pypa/pip/master/contrib/get-pip.py
python get-pip.py
pip install pygeoip
•Python script then parses incoming stdin lines using tab-separation of fields, outputs same (but with an extra field for the country):
#!/usr/bin/python
import sys
sys.path.append('/usr/lib/python2.6/site-packages/')
import pygeoip
gi = pygeoip.GeoIP('/tmp/GeoIP.dat')
for line in sys.stdin:
    line = line.rstrip()
    hostname,request_date,post_id,title,author,category = line.split('\t')
    country = gi.country_name_by_addr(hostname)
    print hostname+'\t'+request_date+'\t'+post_id+'\t'+title+'\t'+author+'\t'+country+'\t'+category
T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
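Because the script simply reads tab-separated rows on stdin and writes them back on stdout, it can be sanity-checked outside Hive first; for example (sample_rows.tsv is a hypothetical file of tab-separated log rows):

$ head -3 sample_rows.tsv | python add_countries.py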
  • 49. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Setting up the Mapping •Map source Hive table to target, which includes an extra “country” column •Copy script + GeoIP.dat file to every node’s /tmp directory •Ensure all Python APIs and libraries are installed on each Hadoop node
  • 50. Configuring IKM Hive Transform •TRANSFORM_SCRIPT_NAME specifies name of script, and path to script •TRANSFORM_SCRIPT has issues with parsing; do not use, leave blank and KM will use existing one •Optional ability to specify sort and distribution columns (can be compound) •Leave other options at default T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 51. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Executing the Mapping •KM automatically registers the script with Hive (which caches it on all nodes) •HiveQL output then runs the contents of the first Hive table through the script, outputting results to target table
  • 52. Bulk Unload Summary Data to Oracle Database •Final requirement is to unload final Hive table contents to Oracle Database •Several use-cases for this: •Use Hadoop / BDA for ETL offloading •Use analysis capabilities of BDA, but then output results to RDBMS data mart or DW •Permit use of more advanced SQL query tools •Share results with other applications •Can use Sqoop for this, or use Oracle Big Data Connectors •Fast bulk unload, or transparent Oracle access to Hive T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com 5
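For comparison with the OLH/ODCH route shown on the next slides, the Sqoop alternative is a single command; a hedged sketch (connection details and names are assumptions):

$ sqoop export --connect jdbc:oracle:thin:@oradb:1521/pdb1 \
    --username BLOG_DW --password welcome1 \
    --table LOG_SUMMARY \
    --export-dir /user/hive/warehouse/access_per_post_summary \
    --input-fields-terminated-by '\001'    # Hive's default field delimiter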
  • 53. IKM File/Hive to Oracle (OLH/ODCH) •KM for accessing HDFS/Hive data from Oracle •Either sets up ODCH connectivity, or bulk-unloads via OLH •Map from HDFS or Hive source to Oracle tables (via Oracle technology in Topology) T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 54. Configuring the KM Physical Settings •For the access table in Physical view, change LKM to LKM SQL Multi-Connect •Delegates the multi-connect capabilities to the downstream node, so you can use a multi-connect IKM such as IKM File/Hive to Oracle T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 55. Configuring the KM Physical Settings •For the target table, select IKM File/Hive to Oracle •Only becomes available to select once LKM SQL Multi-Connect selected for access table •Key option values to set are: •OLH_OUTPUT_MODE (use JDBC initially, OCI if Oracle Client installed on Hadoop client node) •MAPRED_OUTPUT_BASE_DIR (set to directory on HDFS that OS user running ODI can access) T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
  • 56. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Executing the Mapping •Executing the mapping will invoke OLH from the OS command line •Hive table (or HDFS file) contents copied to Oracle table
  • 57. Create Package to Sequence ETL Steps •Define package (or load plan) within ODI12c to orchestrate the process •Call package / load plan execution from command-line, web service call, or schedule T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com
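For example, a scenario generated from the package could be run from a standalone agent’s bin directory like this (scenario name, version and context are assumptions):

$ ./startscen.sh LOAD_WEBLOGS_PKG 001 GLOBAL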
  • 58. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Execute Overall Package •Each step executed in sequence •End-to-end ETL process, using ODI12c’s metadata-driven development process, data quality handling, heterogeneous connectivity, but Hadoop-native processing
  • 59. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Conclusions •Hadoop, and the Oracle Big Data Appliance, is an excellent platform for data capture, analysis and processing •Hadoop tools such as Hive, Sqoop, MapReduce and Pig provide means to process and analyse data in parallel, using languages + approach familiar to Oracle developers •ODI12c provides several benefits when working with ETL and data loading on Hadoop ‣Metadata-driven design; data quality handling; KMs to handle technical complexity •Oracle Data Integrator Adapter for Hadoop provides several KMs for Hadoop sources •In this presentation, we’ve seen an end-to-end example of big data ETL using ODI ‣The power of Hadoop and BDA, with the ETL orchestration of ODI12c
  • 60. T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com Thank You for Attending! •Thank you for attending this presentation, and more information can be found at http://www.rittmanmead.com •Contact us at info@rittmanmead.com or mark.rittman@rittmanmead.com •Look out for our book, “Oracle Business Intelligence Developers Guide” out now! •Follow us on Twitter (@rittmanmead) or Facebook (facebook.com/rittmanmead)
  • 61. Deep-Dive into Big Data ETL with ODI12c and Oracle Big Data Connectors Mark Rittman, CTO, Rittman Mead Oracle Openworld 2014, San Francisco T : +44 (0) 1273 911 268 (UK) or (888) 631-1410 (USA) or +61 3 9596 7186 (Australia & New Zealand) or +91 997 256 7970 (India) E : info@rittmanmead.com W : www.rittmanmead.com