A Benchmark Test on Presto, Spark SQL and Hive on Tez (Gw Liu)
We benchmarked the execution speed of common query patterns for Presto, Spark SQL, and Hive on Tez on datasets ranging from tens of thousands to billions of rows.
We conducted a benchmark test on mainstream big data SQL engines, including Presto, Spark SQL, and Hive on Tez.
We focused on performance over medium-sized data (tens of GB to 1 TB), which is the dominant case in most services.
HDInsight & CosmosDB - Global IoT and Big Data Processing Infrastructure (DataWorks Summit)
We introduce HDInsight, a PaaS for Hadoop and Spark, and an IoT and big data processing infrastructure built on CosmosDB, a globally deployable, distributed, multi-model database.
Many organizations currently process data of various types and in different formats, most often free-form. As the number of consumers of this data grows, it becomes imperative that this free-flowing data adhere to a schema. A schema lets data consumers know what kind of data to expect, and shields them from immediate impact when an upstream source changes its format. A uniform schema representation also gives the data pipeline an easy way to integrate and support systems that use different data formats.
A Schema Registry is a central repository for storing and evolving schemas. It provides an API and tooling that help developers and users register a schema and consume it without being impacted when the schema changes. Users can tag different schemas and versions, register for notifications of schema changes by version, and so on.
In this talk, we will go through the need for a schema registry and schema evolution, and showcase the integration with Apache NiFi, Apache Kafka, and Apache Storm.
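To make the registry idea concrete, here is a minimal, self-contained Scala sketch of versioned schema storage. It illustrates the concept only; it is not the Hortonworks Schema Registry API, and all names are invented for illustration.

```scala
// Toy model of a schema registry: versioned schemas keyed by subject, so
// consumers can pin a version and survive upstream changes. Conceptual only.
import scala.collection.mutable

final case class SchemaVersion(version: Int, schemaText: String)

class ToySchemaRegistry {
  private val store = mutable.Map.empty[String, mutable.ArrayBuffer[SchemaVersion]]

  // Register a new schema under a subject; returns the assigned version.
  def register(subject: String, schemaText: String): Int = {
    val versions = store.getOrElseUpdate(subject, mutable.ArrayBuffer.empty)
    val next = versions.size + 1
    versions += SchemaVersion(next, schemaText)
    next
  }

  // Consumers fetch by subject and, optionally, a pinned version.
  def fetch(subject: String, version: Option[Int] = None): Option[SchemaVersion] =
    store.get(subject).flatMap { vs =>
      version.fold(vs.lastOption)(v => vs.find(_.version == v))
    }
}

object ToySchemaRegistryDemo extends App {
  val registry = new ToySchemaRegistry
  registry.register("clickstream", """{"type":"record","name":"Click","fields":[{"name":"url","type":"string"}]}""")
  registry.register("clickstream", """{"type":"record","name":"Click","fields":[{"name":"url","type":"string"},{"name":"ts","type":"long"}]}""")
  println(registry.fetch("clickstream"))          // latest (version 2)
  println(registry.fetch("clickstream", Some(1))) // a consumer pinned to version 1 is unaffected
}
```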
There is an increasing need for large-scale recommendation systems. Typical solutions rely on periodically retrained batch algorithms, but with massive amounts of data, training a new model can take hours. This is a problem when the model needs to be more up to date: for example, when recommending TV programs while they are being broadcast, the model should take into account the users watching a program at that moment.
The promise of online recommendation systems is fast adaptation to change, but online machine learning from streams is commonly believed to be more restricted, and hence less accurate, than batch-trained models. Combining batch and online learning can yield a quickly adapting recommendation system with increased accuracy. However, designing a scalable data system that unites batch and online recommendation algorithms is a challenging task. In this talk we present our experiences building such a recommendation engine with Apache Flink and Apache Spark.
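As a rough illustration of the batch-plus-online idea (not the talk's actual Flink/Spark architecture), the Scala sketch below blends a periodically retrained batch score with a decayed, per-event online signal; the blending weight and decay factor are invented parameters.

```scala
// Hybrid scoring: a slow-but-accurate batch model plus a fast online signal
// that is updated per event and faded over time.
import scala.collection.mutable

class HybridScorer(batchScores: Map[(String, String), Double],
                   decay: Double = 0.99,
                   onlineWeight: Double = 0.5) {
  private val online = mutable.Map.empty[(String, String), Double].withDefaultValue(0.0)

  // Online update: a user watching an item right now boosts its score immediately.
  def observe(user: String, item: String, reward: Double = 1.0): Unit = {
    for (k <- online.keys.toList) online(k) *= decay // fade stale signals
    online((user, item)) += reward
  }

  // Final score blends the batch baseline with the current online signal.
  def score(user: String, item: String): Double = {
    val batch = batchScores.getOrElse((user, item), 0.0)
    (1 - onlineWeight) * batch + onlineWeight * online((user, item))
  }
}
```

The O(n) decay pass keeps the sketch short; a production design would decay lazily with timestamps.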
Deep learning is not just hype: it outperforms state-of-the-art ML algorithms, one by one. In this talk we show how deep learning can be used to detect anomalies on IoT sensor data streams at high speed, using DeepLearning4J on top of big data engines such as Apache Spark and Apache Flink. Key to this talk is the absence of any large training corpus: we use unsupervised machine learning, a domain that current deep learning research treats step-motherly. As the demo shows, LSTM networks can learn very complex system behavior, in this case data coming from a physical model simulating bearing vibration. One drawback of deep learning is that it normally requires a very large labeled training data set. This makes the approach particularly interesting: we show how unsupervised machine learning can be used in conjunction with deep learning, with no labeled data set necessary. We are able to detect anomalies and predict breaking bearings with ten-fold confidence. All examples and code will be made publicly available and open source; only open-source components are used.
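The detection step can be summarized as thresholding a model's prediction error. The Scala sketch below uses a moving-average predictor as a stand-in for the talk's DeepLearning4J LSTM so the code stays self-contained; the window size and the 3-sigma threshold are illustrative choices.

```scala
// Unsupervised anomaly detection by prediction error: a reading that deviates
// strongly from what the recent window predicts is flagged. In the talk the
// predictor is an LSTM; here a moving average stands in for it.
object AnomalyDetector {
  def detect(readings: Seq[Double], window: Int = 10, k: Double = 3.0): Seq[Int] =
    readings.indices.drop(window).filter { i =>
      val recent = readings.slice(i - window, i)
      val mean = recent.sum / window
      val std = math.sqrt(recent.map(x => math.pow(x - mean, 2)).sum / window)
      math.abs(readings(i) - mean) > k * std // large prediction error => anomaly
    }

  def main(args: Array[String]): Unit = {
    val normal = (0 until 200).map(t => math.sin(t * 0.1)) // simulated vibration signal
    val data = normal.updated(150, 5.0)                    // inject one spike
    println(detect(data))                                  // should report index 150
  }
}
```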
QE automation for large systems is a great step toward increasing system reliability. In the big data world, multiple components have to come together to deliver business outcomes to end users. This means QE automation scenarios need to be built around actual, cross-component use cases. The system tests can generate large amounts of data on a recurring basis, and verifying it all is a tedious job. Given the multiple levels of indirection, the rate of false positives is high relative to actual defects, which is generally wasteful.
At Hortonworks, we have designed and implemented Mool, an automated log analysis system built on statistical data science and ML. The current work in progress has a batch data pipeline followed by an ensemble ML pipeline that feeds a recommendation engine. The system identifies the root cause of test failures by correlating failing test cases with current and historical error records across multiple components. It works in unsupervised mode, with no perfect model, stable build, or source-code version to refer to. In addition, the system provides limited recommendations to file new tickets or reopen past ones, and compares run profiles with past runs.
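As a toy illustration of the correlation idea only (not Mool's actual ensemble ML pipeline), the Scala sketch below ranks historical error records against a failing test's log by token overlap; the record IDs and texts are hypothetical.

```scala
// Rank known error records against a failure log by Jaccard token overlap,
// a crude stand-in for the statistical correlation Mool performs.
object LogCorrelator {
  private def tokens(s: String): Set[String] =
    s.toLowerCase.split("\\W+").filter(_.length > 3).toSet

  def rank(failureLog: String, history: Map[String, String]): Seq[(String, Double)] =
    history.toSeq.map { case (id, text) =>
      val a = tokens(failureLog)
      val b = tokens(text)
      val sim = if (a.isEmpty || b.isEmpty) 0.0
                else a.intersect(b).size.toDouble / a.union(b).size
      (id, sim)
    }.sortBy(-_._2)

  def main(args: Array[String]): Unit = {
    val history = Map( // hypothetical historical error records
      "HDFS-123" -> "namenode connection refused during block report",
      "YARN-456" -> "container killed after exceeding memory limits")
    println(rank("test failed: connection refused talking to namenode", history))
  }
}
```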
Improving business performance is never easy! The Natixis Pack is like rugby: working together is key to scrum success. Our data journey would undoubtedly have been much more difficult if we had not made the move together.
This session is the story of how 'The Natixis Pack' has driven change in its IT architecture so that legacy systems can leverage components of the Hortonworks Data Platform to improve the performance of business applications. During this session, you will hear:
• How and why the business and IT requirements originated
• How we leverage the platform to fulfill security and production requirements
• How we organize a community to:
o Guard all the players, no one gets left on the ground!
o Use the platform appropriately (not every problem is a big data problem, and standard databases are not dead)
• What are the most usable, the most interesting and the most promising technologies in the Apache Hadoop community
We will finish the story of a successful rugby team with insight into the special skills needed from each player to win the match!
DETAILS
This session is part business, part technical. We will talk about infrastructure, security, and project management, as well as the industrial use of Hive, HBase, Kafka, and Spark within a corporate and investment bank environment framed by regulatory constraints.
HBase has established itself as the backend for many operational and interactive use cases, powering well-known services that support millions of users and thousands of concurrent requests. In terms of features, HBase has come a long way, offering advanced options such as multi-level caching on- and off-heap, pluggable request handling, fast recovery options such as region replicas, table snapshots for data governance, tunable write-ahead logging, and so on. This talk is based on the research for the upcoming second edition of the speaker's HBase book, correlated with practical experience in medium to large HBase projects around the world. You will learn how to plan for HBase, starting with the selection of matching use cases, through determining the number of servers needed, leading into performance tuning options. There is no reason to be afraid of using HBase, but knowing its basic premises and technical choices will make using it much more successful. You will also learn about many of the new features of HBase up to version 1.3, and where they are applicable.
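For readers planning a first project, a minimal sketch of basic HBase client usage follows; the table, column family, and row key are illustrative, and an hbase-site.xml plus the hbase-client dependency are assumed on the classpath.

```scala
// Basic HBase write/read with the standard client API.
import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
import org.apache.hadoop.hbase.util.Bytes

object HBaseSketch extends App {
  val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
  val table = conn.getTable(TableName.valueOf("users")) // illustrative table
  try {
    // Write one cell: row "u1", column family "d", qualifier "name".
    val put = new Put(Bytes.toBytes("u1"))
    put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("name"), Bytes.toBytes("alice"))
    table.put(put)

    // Read it back.
    val result = table.get(new Get(Bytes.toBytes("u1")))
    println(Bytes.toString(result.getValue(Bytes.toBytes("d"), Bytes.toBytes("name"))))
  } finally {
    table.close(); conn.close()
  }
}
```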
There has been an explosion of data digitising our physical world, from cameras, environmental sensors and embedded devices, right down to the phones in our pockets. This means companies now have new ways to transform their businesses, both operationally and through their products and services, by leveraging this data and applying fresh analytical techniques to make sense of it. But are they ready? In most cases, the answer is "no".
In this session, we'll discuss the challenges facing companies trying to embrace the Analytics of Things, and how Teradata has helped customers work through those challenges and turn them to their advantage.
In this talk, we will present a new distribution of Hadoop, Hops, that can scale the Hadoop Filesystem (HDFS) by 16X, from 70K ops/s to 1.2 million ops/s on Spotify's industrial Hadoop workload. Hops is an open-source distribution of Apache Hadoop that supports distributed metadata for HDFS (HopsFS) and for the ResourceManager in Apache YARN. HopsFS is the first production-grade distributed hierarchical filesystem to store its metadata normalized in an in-memory, shared-nothing database. For YARN, we will discuss optimizations that enable 2X throughput increases for the Capacity scheduler, enabling scalability to clusters with more than 20K nodes. We will discuss the journey of how we reached this milestone, including some of the challenges involved in efficiently and safely mapping hierarchical filesystem metadata state and operations onto a shared-nothing, in-memory database. We will also discuss the key database features needed for extreme scaling, such as multi-partition transactions, partition-pruned index scans, distribution-aware transactions, and the streaming changelog API. Hops (www.hops.io) is Apache-licensed open source and supports a pluggable database backend for distributed metadata, although it currently only supports MySQL Cluster as a backend. Hops opens up new directions for Hadoop once metadata is available for tinkering in a mature relational database.
In high-risk manufacturing industries, regulatory bodies stipulate continuous monitoring and documentation of critical product attributes and process parameters. At the same time, sensor data coming from production processes can be used to gain deeper insights into optimization potential. By establishing a central production data lake based on Hadoop and using Talend Data Fabric as the basis for a unified architecture, the German pharmaceutical company HERMES Arzneimittel was able to meet compliance requirements as well as unlock new business opportunities, enabling use cases such as predictive maintenance, predictive quality assurance, and open-world analytics. Learn how Talend Data Fabric enabled HERMES Arzneimittel to become data-driven and transform big data projects from challenging, hard-to-maintain hand-coded jobs into repeatable, future-proof integration designs.
Talend Data Fabric combines Talend products into a common set of powerful, easy-to-use tools for any integration style: real-time or batch, big data or master data management, on-premises or in the cloud.
While you might be tempted to assume data is already safe in a single Hadoop cluster, in practice you have to plan for more. Questions like "What happens if the entire datacenter fails?" or "How do I recover into a consistent state of data, so that applications can continue to run?" are not at all trivial to answer for Hadoop. Did you know that HDFS snapshots do not treat open files as immutable? Or that HBase snapshots are executed asynchronously across servers and therefore cannot guarantee atomicity for updates spanning regions (and hence tables)? There is no unified and coherent data backup strategy, nor is there tooling available for many of the included components to build such a strategy. The Hadoop distributions largely avoid this topic, as most customers are still in the "single use-case" or PoC phase, where data governance as far as backup and disaster recovery (BDR) is concerned is not (yet) important. This talk first introduces the overarching issues and difficulties of backup and data safety, looking at each of the many components in Hadoop, including HDFS, HBase, YARN, Oozie, the management components, and so on, and finally shows a viable approach using built-in tools. You will also learn not to take this topic lightly, and what is needed to implement and guarantee continuous operation of Hadoop-cluster-based solutions.
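As one concrete building block, the sketch below takes a named HDFS snapshot through the standard FileSystem API; the directory path and snapshot name are illustrative, and, per the caveat above, open files are not frozen by the snapshot.

```scala
// Programmatic HDFS snapshot: enable snapshots on a directory (admin op),
// then create a named, point-in-time snapshot of its metadata.
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.hdfs.DistributedFileSystem

object SnapshotSketch extends App {
  val fs = FileSystem.get(new Configuration())
  val dir = new Path("/data/important") // illustrative path

  fs match {
    case dfs: DistributedFileSystem => dfs.allowSnapshot(dir) // must be enabled first
    case _ => sys.error("snapshots require HDFS")
  }

  val snapshotPath = fs.createSnapshot(dir, "backup-2017-04-01")
  println(s"snapshot created at $snapshotPath")
}
```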
[DL Seminar] XFeat: Accelerated Features for Lightweight Image Matching (harmonylab)
Paper URL: https://arxiv.org/pdf/2404.19174
Source: Guilherme Potje, Felipe Cadar, Andre Araujo, Renato Martins, Erickson R. Nascimento: XFeat: Accelerated Features for Lightweight Image Matching, Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
Abstract: We propose XFeat (Accelerated Features), a lightweight architecture for resource-efficient feature matching. The method revisits the basic design of convolutional neural networks for detecting, extracting, and matching local features. Because resource-limited devices in particular need fast and robust algorithms, it keeps the resolution as high as possible while limiting the number of channels in the network. The design also allows sparse matching to be selected on demand, making it well suited to applications such as navigation and AR. XFeat is fast, achieves comparable or better accuracy, and runs in real time on an ordinary laptop CPU.
Using robots in cell production raises a variety of problems, one of which is the assembly of three or more objects. In general, when multiple objects are assembled at the same time, each target part is held independently by a robot arm or a jig. However, this approach requires as many arms or jigs as there are parts, so the more parts there are, the more waste arises in cost and installation space. To address this, 音𣷓 et al. analyzed the contact forces acting on the objects being assembled and derived conditions under which objects not fixed by jigs are unlikely to move during assembly; that is, they planned assembly conditions by considering the robustness of ungrasped objects in the environment. Building on this strategy, this study aims to carry out multi-object assembly with a single-arm manipulator. By taking the robustness of the objects into account, we propose a method for handling multiple temporarily assembled objects at once. Taking pipe-joint assembly as the target task, we show that a single-arm manipulator can grasp multiple objects simultaneously with the aid of a simple tool. Furthermore, to improve the task success rate, we implement robot control and motion planning based on object position detection with an RGB-D camera.
This paper discusses assembly operations using a single manipulator and a parallel gripper to simultaneously grasp multiple objects and hold the group of temporarily assembled objects. Multiple robots and jigs generally operate assembly tasks by constraining the target objects mechanically or geometrically to prevent them from moving. It is necessary to analyze the physical interaction between the objects for such constraints to achieve the tasks with a single gripper. In this paper, we focus on assembling pipe joints as an example and discuss constraining the motion of the objects. Our demonstration shows that a simple tool can facilitate holding multiple objects with a single gripper.
10. Agenda
1. Introducing our company and the services we run
2. Cross-service data and technical debt
3. The "framework project"
4. HDP 2.5, Kafka, and Spark
5. Conclusion: "On Happiness"
11. Cross-service data utilization: phases
With the ID platform in place and data volumes growing, we are in the middle of the growth phase. We aim for explosive growth, but technical debt has become apparent.
(Chart: contribution and value across the dawn and growth phases, measured by monetary impact, number of initiatives, and number of users.)
12. Dawn phase: basic strategy
Collect data of all kinds from each service, accumulate it in a DWH / data lake, and put it to use.
(Diagram: services feeding the DWH, which feeds cross-service data initiatives.)
13. Dawn phase: data integration
Absorbing per-site spec differences, masking personal information, cleaning duplicates and missing values...
(Diagram: per-site sources 0001-0004 integrated into the DWH.)
14. Dawn phase: management KPIs
The first demand on cross-service data came from executives: aggregating "management metrics".
(Diagram: queries run against the DWH.)
15. Dawn phase: routine operation
Useful queries become "assets" run daily or monthly, and their number grows at an accelerating pace: roughly 1,000 queries run against the DWH every day.
16. Dawn phase: machine learning begins
DMTs are repurposed as training data for machine learning.
(Diagram: the DWH joined with another data source to produce training data.)
17. Dawn phase: machine learning accelerates
Because ML algorithms are swapped one after another, dependencies between datasets multiply at an accelerating pace.
(Diagram: the DWH feeding Prepared Data 1 and Prepared Data 2 into MLlib.)
18. Dawn phase: the resulting system
(Diagram: sources 0001-0004 flow into the DWH, then into "DMT" tables consumed by applications and MLlib, serving users.)
19. Dawn to growth phase: operating the system
(Diagram: the same pipeline under operational stress. Upstream changes such as "we changed the log spec!", bugs, and schema mismatches corrupt predictions and halt jobs, forcing reruns. Meanwhile there are more users, more data sources, an ever bigger DMT, and more work.)
21. Where technical debt pools
Has the classic model drawn up in the dawn phase broken down?
(Diagram: DWH → DMT → APP.)
22. Dealing with the structural problem: a shift
The problems are the product of rational decisions. (1) In the dawn phase, rapid growth (more users, more investment) fuels expectations and further investment in DMTs, and the system grows. (2) In the growth phase, growth slows (more wasteful work, higher operational load), external pressure mounts, and technical debt accumulates. A brake is needed.
23. Agenda
1. Introducing our company and the services we run
2. Cross-service data and technical debt
3. The "framework project"
4. HDP 2.5, Kafka, and Spark
5. Conclusion: "On Happiness"
24. The framework project (var/log)
A codebase (a jar) built to thoroughly eliminate technical debt:
• Integrate software resources and unlock their full potential
• "Absolute DRY": common processing is auto-generated
• A DSL for processing the typically structured data of Recruit
34. Move onto Agility
(Diagram: the old DWH → DMT → APP pipeline becomes DWH → DMT → Production connected by pub-sub, with a Sandbox open to business users, engineers, scientists: everyone.)
35. What do early-adopter features look like?
The need: "let's experiment."
1. Second-level response times
2. Apply ideas to real data on the spot:
• a new library...
• a new feature...
• a new formula...
• a new custom function...
3. Release the result as-is
→ which rules out "jar + xml configuration"
36. import varlog.jar on Zeppelin
Import varlog.jar into a Zeppelin notebook, write custom functions on the spot (committing them to varlog.jar once their behavior is verified), and extract and transform data interactively.
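A sketch of what such a notebook session might look like follows; the jar path, function, and log layout are invented, and `sc` is the SparkContext that Zeppelin's %spark interpreter provides.

```scala
// Zeppelin paragraphs (boundaries shown as comments).

// %dep  -- load the shared codebase before the Spark interpreter starts
// z.load("/opt/jars/varlog.jar")

// %spark -- ad-hoc custom function, committed to varlog.jar once verified
def clickFlag(action: String): Int = if (action == "click") 1 else 0

// extract and transform data on the spot
val logs = sc.textFile("hdfs:///logs/site_a/2017-04-01")
val features = logs.map(_.split("\t")).map(cols => (cols(0), clickFlag(cols(2))))
features.take(1).foreach(println) // second-level peek at one record
```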
37. back to xml
If the code works in Zeppelin, it can be copied into xml and released automatically: the .scala file is embedded in an <scala> xml tag and auto-deployed, with release notes maintained as a job.
38. Pub-sub system architecture
(Diagram: a pub-sub layer connecting the DWH, other data sources, Hadoop, and Elastic, with jobs defined in xml. Powered by HDP 2.5.)
Why Kafka?
39. Background Data Store: Kafka
What is Kafka? A distributed data store based on the publish & subscribe model.
Advantages:
1. It simplifies the topology between big data systems
2. High throughput
3. Connectivity with Spark
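A minimal sketch of the publish & subscribe model with the standard kafka-clients API follows; the broker address, topic, and group id are illustrative, and the old `poll(long)` overload matches the HDP 2.5-era clients.

```scala
// One producer appends to a topic; any number of consumer groups read the
// same log independently (DWH loaders, Spark jobs, Elastic indexers, ...).
import java.util.{Collections, Properties}
import scala.collection.JavaConverters._
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object PubSubSketch extends App {
  // Publisher side.
  val pProps = new Properties()
  pProps.put("bootstrap.servers", "broker1:9092")
  pProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  pProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
  val producer = new KafkaProducer[String, String](pProps)
  producer.send(new ProducerRecord("events", "user1", """{"action":"click"}"""))
  producer.close()

  // Subscriber side: each group keeps its own cursor over the same stream.
  val cProps = new Properties()
  cProps.put("bootstrap.servers", "broker1:9092")
  cProps.put("group.id", "dwh-loader")
  cProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  cProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
  val consumer = new KafkaConsumer[String, String](cProps)
  consumer.subscribe(Collections.singletonList("events"))
  consumer.poll(1000L).asScala.foreach(r => println(s"${r.key} -> ${r.value}"))
  consumer.close()
}
```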
40. Kafka 1) Simplifying the topology
According to Jay Kreps (the original author of Kafka), a complex topology means many point-to-point data transfers between systems, e.g. HBase→Hive, Hive→Oracle, Oracle→Hive, Oracle→Elastic, Prod→Sand... This is the "Before" picture that a single pub-sub hub is meant to simplify.
41. Kafka 2) High throughput
put: 4,000 msgs/s (11.0 MB/s) up to 14,000 msgs/s; get: 10,000 msgs/s (31.7 MB/s, no OS page cache). Measured with MessageSize = 3 kB on a single broker in a development environment built on a local VM; tuning and scale-out can raise this further.
42. The aim: speed through combining the best-suited systems
In a typical ETL job, every SQL statement performs all of Load / Join / Function / Persist (L/J/F/P) itself; there is no division of roles. Instead, we consolidate the L/J work and let Spark run the F/P work, with a clear division of roles:
• DWH: Join and GroupBy only
• Kafka: only loading into Spark's memory
• Spark: only reusing Scala functions
(Diagram: L/J/F/P stages scattered across the DWH (EXA), Elastic, and Hadoop before; consolidated into distinct roles after.)
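The sketch below illustrates this division of roles with the Spark Structured Streaming Kafka source (Spark 2.x): Kafka only ships pre-joined rows into Spark, and Spark only applies a function and persists. The topic, paths, and toy feature function are invented.

```scala
import org.apache.spark.sql.SparkSession

object FunctionPersistOnly extends App {
  val spark = SparkSession.builder().appName("f-p-only").getOrCreate()
  import spark.implicits._

  // L: Kafka's only job is shipping rows (already joined in the DWH) into Spark memory.
  val joined = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "dwh-joined-rows")
    .load()
    .selectExpr("CAST(value AS STRING)")
    .as[String]

  // F: Spark's only job is applying (reusable) Scala functions to each row.
  val transformed = joined.map(row => row.split("\t").length) // toy function; in practice from the shared jar

  // P: persist the results.
  transformed.writeStream
    .format("parquet")
    .option("path", "/data/scored")
    .option("checkpointLocation", "/chk/scored")
    .start()
    .awaitTermination()
}
```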
43. Kafka 3) Connectivity with Spark
Data-transformation functions buried inside Oracle or Hive have low modularity. Instead, export Scala functions from a jar.
Before: the logic lives in SQL against the DMT, e.g. "select case when t.name in ('a') then 1 ..." over rows like (u1, 1), (u2, 2), (u3, 3).
After: public functions in Scala (def func, implicit classes on RDDs, mapRow, Hive UDFs) give reusability.
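To illustrate the "After" side, the following sketch defines a transformation once as a plain Scala function and reuses it both as a SQL UDF (replacing the CASE WHEN) and on the Dataset API; the data and function are invented stand-ins for the varlog.jar functions.

```scala
import org.apache.spark.sql.SparkSession

object ReusableFunctionSketch extends App {
  val spark = SparkSession.builder().appName("reuse").getOrCreate()
  import spark.implicits._

  // Defined once (in the real system, shipped inside varlog.jar).
  val nameFlag: String => Int = name => if (name == "a") 1 else 0

  val dmt = Seq(("u1", "a"), ("u2", "b"), ("u3", "a")).toDF("id", "name")

  // Reuse 1: registered as a SQL UDF, replacing the CASE WHEN.
  spark.udf.register("name_flag", nameFlag)
  dmt.createOrReplaceTempView("dmt")
  spark.sql("SELECT id, name_flag(name) AS num FROM dmt").show()

  // Reuse 2: the very same function on the Dataset API.
  dmt.map(r => (r.getString(0), nameFlag(r.getString(1)))).show()
}
```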
44. Verifying "second-level" response
Task: click prediction with RandomForest on one screen's weekly impressions (about 500,000 records).
• Feature transformation + peeking at one record: 1 s
• Feature transformation + reduce: 53 s
• Feature transformation + train + predict: 169 s (500,000 records)
• Under heavier load (depth = 30): 306 s
Overheads: Spark = 5 s, MLlib = 120 s. Spark memory: 6 GB of 192 GB. Kafka bytes out: 5 GB (throughput: 100 MB/s). Total about 300 s (breakdown: Kafka 50 s, ML min 120 s, ML ext 140 s and up), monitored with Grafana.
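A sketch of the benchmark task in Spark MLlib's spark.ml API follows; the input path and column names are invented, and setMaxDepth(30) corresponds to the slide's "depth = 30" run (30 is the maximum tree depth MLlib supports).

```scala
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.sql.SparkSession

object ClickPredictionSketch extends App {
  val spark = SparkSession.builder().appName("click-rf").getOrCreate()

  // ~500k impression rows with a precomputed "features" vector column
  // and a 0/1 "clicked" label (path and columns are illustrative).
  val impressions = spark.read.parquet("/data/impressions_features")
  val Array(train, test) = impressions.randomSplit(Array(0.8, 0.2))

  val rf = new RandomForestClassifier()
    .setLabelCol("clicked")
    .setFeaturesCol("features")
    .setNumTrees(100)
    .setMaxDepth(30) // the "depth = 30" heavy-load case

  val model = rf.fit(train)            // the ~120 s MLlib step in the slide
  model.transform(test).select("clicked", "prediction").show(5)
}
```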