25. Copyright © 2018 Insight Technology, Inc. All Rights Reserved
The Attunity Replicate Architecture
[Architecture diagram: Attunity Replicate transfers data from on-premises sources (RDBMS, Hadoop, Data Warehouse, Mainframe, Files) to cloud targets (RDBMS, Hadoop, Kafka, Files, Data Warehouse). Transfers run in batch or as CDC incremental capture, with transform/filter applied in flight, over an in-memory channel or a file channel backed by a persistent store. Labeled "Zero Footprint Architecture".]
26. Supported Databases
Sources:
- RDBMS: Oracle, SQL Server, DB2 LUW, DB2 iSeries, MySQL, PostgreSQL, Sybase ASE, Informix
- Data Warehouse: Exadata, Teradata, Netezza, Vertica
- Hadoop: Hortonworks, Cloudera, MapR
- Mainframe: DB2 z/OS, IMS/DB, SQL M/P, Enscribe, RMS, VSAM
- Cloud: Amazon RDS, Salesforce

Targets:
- RDBMS: Oracle, SQL Server, DB2 LUW, MySQL, PostgreSQL, Sybase ASE, Informix, MemSQL
- Data Warehouse: Exadata, Teradata, Netezza, Vertica, Pivotal DB (Greenplum), Pivotal HAWQ, Actian Vector, Sybase IQ, SAP HANA
- Hadoop: Hortonworks, Cloudera, MapR, Pivotal, Amazon EMR, Azure HDInsight, Hive
- NoSQL: MongoDB
- Cloud: Amazon RDS/Redshift/EC2, Google Cloud SQL, Azure SQL DW, Azure SQL Database, Snowflake
- Message Broker: Kafka, Azure Event Hubs, MapR-ES, AWS Kinesis

Effective: 12/1/2018
28. Automating the Data Lake Pipeline with Replicate
Data integration via messaging (Kafka)
[Pipeline diagram: Attunity Replicate ingests and lands data from on-premises sources (Files, RDBMS, Data Warehouse, Mainframe) in batch or via CDC; in the cloud, updates are applied (via SQL) to an Operational Data Store and a Historical Data Store.]
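With Kafka as a target, Replicate delivers each captured change as a message. A minimal sketch of applying such change events to a keyed view of a table, assuming a simplified hypothetical JSON envelope (`operation`/`key`/`data`) rather than Replicate's actual message schema:

```python
import json

def apply_change_event(store, raw_message):
    """Apply one CDC change message (hypothetical JSON envelope) to an
    in-memory key/value view of a table, keyed by primary-key value."""
    event = json.loads(raw_message)
    op = event["operation"]          # "INSERT", "UPDATE" or "DELETE"
    key = event["key"]
    if op in ("INSERT", "UPDATE"):
        store[key] = event["data"]   # upsert the latest row image
    elif op == "DELETE":
        store.pop(key, None)         # remove the row if present
    return store

# Replaying a short change stream rebuilds the current state:
store = {}
for msg in [
    '{"operation": "INSERT", "key": 1, "data": {"name": "a"}}',
    '{"operation": "UPDATE", "key": 1, "data": {"name": "b"}}',
    '{"operation": "DELETE", "key": 1, "data": null}',
]:
    apply_change_event(store, msg)
# After insert, update, and delete of the same key, the view is empty again.
```

In a real pipeline the messages would be consumed from a Kafka topic; the point here is only that a change stream, applied in order, reproduces the source table's state.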
29. Automating the Data Lake Pipeline with Compose
Keeping databases and Hadoop in sync
[Pipeline diagram: Attunity Replicate performs the initial copy (batch) and ongoing CDC to ingest and land data from on-premises sources (Files, RDBMS, Data Warehouse, Mainframe); Attunity Compose then creates, transforms, and updates an Operational Data Store and a Historical Data Store in the cloud, applying changes with an ACID merge.]
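The ODS/HDS split above can be pictured as routing each change twice: appended to the historical store, merged into the operational store. A toy sketch under assumed dict-shaped change records, not Compose's actual merge logic:

```python
def land_change(ods, hds, change):
    """Route one CDC change (hypothetical dict shape) into an operational
    data store (latest row per key) and a historical data store (append-only)."""
    hds.append(change)                  # history keeps every version
    if change["op"] == "DELETE":
        ods.pop(change["key"], None)    # current view drops the row
    else:                               # INSERT or UPDATE: upsert
        ods[change["key"]] = change["row"]

ods, hds = {}, []
for c in [
    {"op": "INSERT", "key": 10, "row": {"qty": 1}},
    {"op": "UPDATE", "key": 10, "row": {"qty": 5}},
]:
    land_change(ods, hds, c)
# ods holds only the latest row image; hds holds both versions.
```

The design point: analytics against the ODS sees current state, while the HDS preserves the full change history for point-in-time queries.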
31. Application Areas of Replication Technology
As a data integration tool for building data analytics platforms:
- Synchronize core business data to heterogeneous analytics platforms for analysis
- Move data into IoT analytics platforms without latency
- Integrate quickly and flexibly across multiple data sources, including unstructured data
- Feed cloud-based analytics platforms in real time

As a database migration tool (including heterogeneous migrations):
- Minimize downtime during database migrations
- Migrate commercial databases to OSS databases
- Rebuild the analytics platform (in the cloud)
32. Number of Attunity Replicate Projects in Japan
[Bar chart: project counts per year from 2012 to 2018, on a 0–50 scale (vertical axis: number of projects). MIGRATION source → target combinations shown across the years include:
- Oracle → Oracle
- Oracle, DB2 → Oracle, PostgreSQL
- Oracle, DB2, MySQL → Oracle, PostgreSQL, Amazon Aurora, Teradata
- Oracle, DB2, MySQL, SQL Server, (Sybase), (z/OS DB2) → Oracle, MySQL, PostgreSQL, SQL Server, Teradata, Amazon Aurora, Azure Database, (Kafka), (MongoDB)
- Oracle, DB2, MySQL, SQL Server, Netezza, z/OS DB2, (Sybase), (z/OS IMS/DB) → Oracle, MySQL, PostgreSQL, SQL Server, Teradata, Amazon Aurora, Azure Database, Azure DW, (Kafka), (MongoDB)]
33. Migration Source & Target in Japan
[Diagram of migration sources and targets observed in Japan. Sources: Oracle, IBM/DB2, SQL Server, MySQL, PostgreSQL. Targets: Oracle, SQL Server, PostgreSQL, Amazon Aurora, Amazon RDS (PostgreSQL).]
34. Replication Source & Target in Japan
[Diagram of replication sources and targets observed in Japan. Sources: Oracle, SQL Server, MySQL, PostgreSQL, IBM/DB2, Teradata, Mainframe. Targets: Oracle, SQL Server, MySQL, PostgreSQL, IBM/DB2, Amazon Aurora, Azure SQL DW, Hadoop, Cassandra.]
Gartner Peer Insights is a platform where end-user professionals rate and review enterprise technology solutions for other end-user professionals. As of July 22, 2016, Gartner combines its expert opinion with the Peer Insights user-contributed reviews in a single experience via the Interactive Magic Quadrant.
Let's start with the Landing Zone. First, Attunity Replicate copies data, often from traditional sources such as Oracle, SAP, and mainframe systems, and lands it in raw form in the Hadoop file system. This step enjoys all the advantages of Attunity Replicate, including full-load/CDC capabilities, time-based partitioning for transactional consistency, and automatic propagation of source DDL changes. The data is now ingested and available as full snapshots or change tables, but not yet ready for analytics.
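Time-based partitioning groups changes into closed time windows so consumers see a transactionally consistent set at once. A rough sketch of the bucketing idea, assuming 10-minute windows; the window size and key format here are illustrative, not Replicate's actual scheme:

```python
from datetime import datetime

def partition_key(commit_ts, minutes=10):
    """Bucket a commit timestamp into a fixed time window (here 10 minutes).
    A window is published to consumers only after it has closed."""
    bucket = commit_ts.minute - commit_ts.minute % minutes
    return commit_ts.strftime("%Y%m%d%H") + f"{bucket:02d}"

# Three changes: the first two fall in the 09:00 window, the third in 09:10.
changes = [
    (datetime(2018, 12, 1, 9, 3), "INSERT"),
    (datetime(2018, 12, 1, 9, 7), "UPDATE"),
    (datetime(2018, 12, 1, 9, 14), "DELETE"),
]
partitions = {}
for ts, op in changes:
    partitions.setdefault(partition_key(ts), []).append(op)
```

Readers of a change table can then safely query any partition whose window has closed, knowing no further changes will arrive for it.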
It's clear that these deficiencies create new requirements:
- Close the skills gap through automation, specifically by automating the data pipeline.
- Standardize on a toolset to load, enrich, and keep the data current.
- Improve confidence by leveraging metadata wherever we can, keeping data and metadata in sync throughout the pipeline journey.
- Borrow ideas from the mature world of data warehousing, notably the multi-data-zone approach: we are not just loading the data into one big blob. The next slide shows how this helps.
Next, Attunity supports the broadest range of source and target systems for data replication, including relational databases, data warehouse systems, Hadoop, cloud targets, and mainframe systems. We also support MongoDB as a NoSQL target, and most recently added support for writing change data capture events as messages to Kafka message brokers.