Monsanto uses geospatial data and analytics to improve sustainable agriculture. They process vast amounts of spatial data on Hadoop to generate prescription maps that optimize seeding rates. Their previous SQL-based system could only handle a small fraction of the data and took over 30 days to process. Monsanto's new Hadoop/HBase architecture loads the entire US dataset in 18 hours, representing significant cost savings over the SQL approach. This foundational system provides agronomic insights to farmers and supports Monsanto's vision of doubling yields by 2030 through information-driven farming.
Are you using the fastest query tool for Hadoop? This session provides and discusses the latest performance results of the industry-standard TPC-H benchmark executed across an assortment of open source query tools such as Hive (using MR, Tez, LLAP, and Spark), SparkSQL, Presto, and Drill. The performance tests utilize a variety of data sizes, popular storage formats such as ORC, Parquet, and text, and several compression codecs.
Apache Hive Hook
I couldn't find enough info about Hive hooks, so I made this.
I hope this presentation will be useful when you want to use hooks.
It also includes some information about metastore event listeners.
It was written based on the release-0.11 tag.
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud by Noritaka Sekiyama (Hadoop / Spark Conference Japan 2019)
English version: http://hadoop.apache.jp/hcj2019-program/
This is the first time I introduced the concept of Schema-on-Read vs. Schema-on-Write to the public. It was at the Berkeley EECS RAD Lab retreat Open Mic Session on May 28, 2009, in Santa Cruz, California.
Wars of MySQL Cluster (InnoDB Cluster vs. Galera) by Mydbops
MySQL clustering over the InnoDB engine has grown a lot over the last decade. Galera began working with InnoDB early; Group Replication came to the environment later, and both feature sets are now rich and robust. This presentation offers a technical comparison of the two.
At wetter.com we build analytical B2B data products and heavily use Spark and AWS technologies for data processing and analytics. I explain why we moved from AWS EMR to Databricks and Delta and share our experiences from different angles such as architecture, application logic, and user experience. We will look at how security, cluster configuration, resource consumption, and workflows changed by using Databricks clusters, as well as how using Delta tables simplified our application logic and data operations.
Apache Doris (incubating) is an MPP-based interactive SQL data warehouse for reporting and analysis, open-sourced by Baidu. Doris mainly integrates the technology of Google Mesa and Apache Impala. Unlike other popular SQL-on-Hadoop systems, Doris is designed as a simple, single, tightly coupled system that does not depend on other systems. Doris provides not only high-concurrency, low-latency point queries but also high-throughput ad-hoc analysis queries, and not only batch data loading but also near real-time mini-batch data loading. Doris also provides high availability, reliability, fault tolerance, and scalability. Its main features are simplicity (of developing, deploying and using) and the ability to meet many data serving requirements in a single system.
In KDD2011, Vijay Narayanan (Yahoo!) and Milind Bhandarkar (Greenplum Labs, EMC) conducted a tutorial on "Modeling with Hadoop". This is the first half of the tutorial.
Find out which is faster, SQL or NoSQL, for traditional reporting tasks. Discover how you can optimise MongoDB aggregation pipelines and how to push complex computation down to the database.
Apache Sqoop Tutorial | Sqoop: Import & Export Data From MySQL To HDFS | Hado... by Edureka!
** Hadoop Training: https://www.edureka.co/hadoop **
This Edureka PPT on the Sqoop Tutorial will explain the fundamentals of Apache Sqoop. It will also give you a brief idea of Sqoop's architecture. In the end, it will showcase a demo of data transfer between MySQL and Hadoop.
Below topics are covered in this video:
1. Problems with RDBMS
2. Need for Apache Sqoop
3. Introduction to Sqoop
4. Apache Sqoop Architecture
5. Sqoop Commands
6. Demo to transfer data between MySQL and Hadoop
Check our complete Hadoop playlist here: https://goo.gl/hzUO0m
Follow us to never miss an update in the future.
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Apache HBase™ is the Hadoop database: a distributed, scalable, big data store. It's a column-oriented database management system that runs on top of HDFS.
Apache HBase is an open source NoSQL database that provides real-time read/write access to large data sets. HBase is natively integrated with Hadoop and works seamlessly alongside other data access engines through YARN.
ProxySQL Cluster: Challenges and Solutions To Synchronize Configuration Across Multiple Decentralized Cluster Nodes
As a reverse proxy for MySQL databases, ProxySQL is used in infrastructures of various sizes, and it is not surprising to see deployments with thousands of nodes running. Such large deployments introduce some interesting challenges because nodes can be initialized or destroyed at any time.
This session will describe the challenges in configuring such large deployments of nodes, cover the most common external tools used to configure ProxySQL, and then focus on improved ProxySQL native clustering solutions that allow auto-discovery, bootstrap, and distributed, decentralized reconfiguration.
Sept 17 2013 - THUG - HBase a Technical Introduction by Adam Muise
HBase Technical Introduction. This deck includes a description of memory design, write path, read path, some operational tidbits, SQL on HBase (Phoenix and Hive), as well as HOYA (HBase on YARN).
Hadoop for High-Performance Climate Analytics - Use Cases and Lessons Learned by DataWorks Summit
Scientific data services are a critical aspect of the NASA Center for Climate Simulation's (NCCS) mission. Hadoop, via MapReduce, provides an approach to high-performance analytics that is proving to be useful to data intensive problems in climate research. It offers an analysis paradigm that uses clusters of computers and combines distributed storage of large data sets with parallel computation. The NCCS is particularly interested in the potential of Hadoop to speed up basic operations common to a wide range of analyses. In order to evaluate this potential, we prototyped a series of canonical MapReduce operations over a test suite of observational and climate simulation datasets. The initial focus was on averaging operations over arbitrary spatial and temporal extents within Modern-Era Retrospective Analysis for Research and Applications (MERRA) data. After preliminary results suggested that this approach improves efficiencies within data intensive analytic workflows, we invested in building a cyberinfrastructure resource for developing a new generation of climate data analysis capabilities using Hadoop. This resource is focused on reducing the time spent in the preparation of reanalysis data used in data-model intercomparison, a long sought goal of the climate community. This paper summarizes the related use cases and lessons learned.
Here's the second version of our big data landscape. Thoughts, questions, comments? We'd love to hear your feedback in the comments section here: http://wp.me/p2dLS7-6A
Presentation for Sydney Open Source Developers Conference 2008 covering the range of open source geospatial projects available to the modern programmer!
Promoting Geospatial Education in Europe by Karl Donert
Slides from a Directions Magazine webinar where Karl Donert explores the challenges facing Europe in the development of geospatial education. He explores how open data, open science and open education are key to future economic and social policies. The value of open geospatial information is then described and confirmed through the GI-Learner and YouthMetre projects that Karl is involved in, which identify the importance of open data and the need for geospatial thinking.
The presentation also looks at Knowledge 2050, a European policy report on Europe's future needs.
NUS-ISS PCP for FullStack Software Developers by NUS-ISS
Presented by Ms Gloria Ng, Chief, Startups & SMEs Practice, NUS-ISS at NUS-ISS Briefing Session for Employers on Professional Conversion Programme on 9 Dec 2016.
A Producer’s Perspective: Agriculture and Nitrogen Deposition in Rocky Mounta... by LPE Learning Center
Proceedings Available at: http://www.extension.org/67641
The efforts related to Colorado's Rocky Mountain National Park are voluntary, yet there are nitrogen reduction targets, or milestones, established in five-year increments out to the year 2032. If a milestone is not met, mandatory controls could follow. How can the proactive emissions reduction efforts being taken by livestock and crop producers today be recognized or credited should mandatory controls be required at some future date? For example, could an agriculture certainty framework (used more for water quality protection/nutrient runoff) be used to validate actions being taken today for air quality purposes? How might an ag certainty program work and what partners should be at the table? Are there other approaches that states are using or researching that Colorado should consider?
Hadoop World 2011: Advanced HBase Schema Design by Cloudera, Inc.
While running a simple key/value based solution on HBase usually requires an equally simple schema, it is less trivial to operate a different application that has to insert thousands of records per second.
This talk will address the architectural challenges when designing for either read or write performance imposed by HBase. It will include examples of real world use-cases and how they can be implemented on top of HBase, using schemas that optimize for the given access patterns.
MapReduce Best Practices and Lessons Learned Applied to Enterprise Datasets -... by StampedeCon
At StampedeCon 2012 in St. Louis, Erich Hochmuth of Monsanto presents: Hadoop is quickly becoming the preferable platform for performing analysis over large datasets. We will explore opportunities for utilizing MapReduce to process genomic data in an enterprise system. We will discuss lessons learned introducing Hadoop into an existing enterprise and cover topics such as security, network architecture, and backups.
Introduction to DDS: Context, Information Model, Security, and Applications, by Gerardo Pardo-Castellote
Introduction to the Data-Distribution Service (DDS): Context and Applications.
This 50 minute presentation summarizes the main features of DDS including the information model, the type system, and security as well as how typical applications use DDS.
It was presented at the Canadian Government Information Day in Ottawa in September 2018.
There is also a video of this presentation at https://www.youtube.com/watch?v=6iICap5G7rw.
Big data analytics and machine intelligence v5.0 by Amr Kamel Deklel
Why big data
What is big data
When big data is big data
Big data information system layers
Hadoop ecosystem
What is machine learning
Why machine learning with big data
How One Company Offloaded Data Warehouse ETL To Hadoop and Saved $30 Million by DataWorks Summit
A Fortune 100 company recently introduced Hadoop into their data warehouse environment and ETL workflow to save $30 Million. This session examines the specific use case to illustrate the design considerations, as well as the economics behind ETL offload with Hadoop. Additional information about how the Hadoop platform was leveraged to support extended analytics will also be referenced.
HBaseCon 2013: Evolving a First-Generation Apache HBase Deployment to Second... by Cloudera, Inc.
Explorys has been using HBase and Hadoop since HBase 0.20, and will walk through lessons learned over years of usage from their first HBase implementation through a series of upgrades and changes, including impacts to schema design, data loading, data indexing, data access and analytics, and operational processes.
ANDRITZ, a global manufacturing company formed by acquisitions, with over 50 offices and a virtual IT department, decided that a cloud-first strategy for server backups was the only solution for a disparate and dispersed environment. Brian Bagwell, IT Director of North America and Trey Brown, IT Manager, discussed the company’s challenges to gain more visibility of their data and a cloud-based disaster recovery solution.
With Druva, they discuss:
* Managing complexities of multi-site server recovery requirements being maintained by a virtual IT staff
* Best practices for server backup and data retention with centralized control
* Immediate benefits realized by ANDRITZ such as server restores in seconds, data privacy, and cost savings
To hear the recording, please visit: http://pages2.druva.com/Rethink-Server-Backup-and-Regain-Control-On-Demand.html?utm_source=Social&utm_medium=slideshare
Introduction: This workshop will provide a hands-on introduction to Machine Learning (ML) with an overview of Deep Learning (DL).
Format: An introductory lecture on several supervised and unsupervised ML techniques, followed by a light introduction to DL and a short discussion of the current state of the art. Several Python code samples using the scikit-learn library will be introduced that users will be able to run in the Cloudera Data Science Workbench (CDSW).
Objective: To provide a quick, hands-on introduction to ML with Python's scikit-learn library. The environment in CDSW is interactive and the step-by-step guide will walk you through setting up your environment, exploring datasets, and training and evaluating models on popular datasets. By the end of the crash course, attendees will have a high-level understanding of popular ML algorithms and the current state of DL, what problems they can solve, and walk away with basic hands-on experience training and evaluating ML models.
Prerequisites: For the hands-on portion, registrants must bring a laptop with a Chrome or Firefox web browser. These labs will be done in the cloud; no installation is needed. Everyone will be able to register and start using CDSW after the introductory lecture concludes (about 1 hour in). Basic knowledge of Python is highly recommended.
Floating on a RAFT: HBase Durability with Apache Ratis by DataWorks Summit
In a world with a myriad of distributed storage systems to choose from, the majority of Apache HBase clusters still rely on Apache HDFS. Theoretically, any distributed file system could be used by HBase. One major reason HDFS is predominantly used is the specific durability requirements of HBase's write-ahead log (WAL), which HDFS provides correctly. However, HBase's use of HDFS for WALs can be replaced with sufficient effort.
This talk will cover the design of a "Log Service" which can be embedded inside of HBase that provides a sufficient level of durability that HBase requires for WALs. Apache Ratis (incubating) is a library-implementation of the RAFT consensus protocol in Java and is used to build this Log Service. We will cover the design choices of the Ratis Log Service, comparing and contrasting it to other log-based systems that exist today. Next, we'll cover how the Log Service "fits" into HBase and the necessary changes to HBase which enable this. Finally, we'll discuss how the Log Service can simplify the operational burden of HBase.
Tracking Crime as It Occurs with Apache Phoenix, Apache HBase and Apache NiFi by DataWorks Summit
Utilizing Apache NiFi, we read various open data REST APIs and camera feeds to ingest crime and related data in real time, streaming it into HBase and Phoenix tables. HBase makes an excellent storage option for our real-time time series data sources. We can immediately query our data utilizing Apache Zeppelin against Phoenix tables as well as Hive external tables over HBase.
Apache Phoenix tables also make a great option since we can easily put microservices on top of them for application usage. I have an example Spring Boot application that reads from our Philadelphia crime table for front-end web applications as well as RESTful APIs.
Apache NiFi makes it easy to push records with schemas to HBase and insert into Phoenix SQL tables.
Resources:
https://community.hortonworks.com/articles/54947/reading-opendata-json-and-storing-into-phoenix-tab.html
https://community.hortonworks.com/articles/56642/creating-a-spring-boot-java-8-microservice-to-read.html
https://community.hortonworks.com/articles/64122/incrementally-streaming-rdbms-data-to-your-hadoop.html
HBase Tales From the Trenches - Short stories about most common HBase operati... by DataWorks Summit
Whilst HBase is the most logical answer for use cases requiring random, realtime read/write access to Big Data, it may not be trivial to design applications that make the most of it, nor the simplest to operate. As it depends on and integrates with other components from the Hadoop ecosystem (ZooKeeper, HDFS, Spark, Hive, etc.) or external systems (Kerberos, LDAP), and its distributed nature requires a "Swiss clockwork" infrastructure, many variables are to be considered when observing anomalies or even outages. Adding to the equation, there's also the fact that HBase is still an evolving product, with different release versions in current use, some of which carry genuine software bugs. In this presentation, we'll go through the most common HBase issues faced by different organisations, describing the identified causes and resolution actions from my last 5 years supporting HBase for our heterogeneous customer base.
Optimizing Geospatial Operations with Server-side Programming in HBase and Ac... by DataWorks Summit
LocationTech GeoMesa enables spatial and spatiotemporal indexing and queries for HBase and Accumulo. In this talk, after an overview of GeoMesa’s capabilities in the Cloudera ecosystem, we will dive into how GeoMesa leverages Accumulo’s Iterator interface and HBase’s Filter and Coprocessor interfaces. The goal will be to discuss both what spatial operations can be pushed down into the distributed database and also how the GeoMesa codebase is organized to allow for consistent use across the two database systems.
OCLC has been using HBase since 2012 to enable single-search-box access to over a billion items from your library and the world’s library collection. This talk will provide an overview of how HBase is structured to provide this information and some of the challenges they have encountered to scale to support the world catalog and how they have overcome them.
Many individuals/organizations have a desire to utilize NoSQL technology, but often lack an understanding of how the underlying functional bits can be utilized to enable their use case. This situation can result in drastic increases in the desire to put the SQL back in NoSQL.
Since the initial commit, Apache Accumulo has provided a number of examples to help jumpstart comprehension of how some of these bits function as well as potentially help tease out an understanding of how they might be applied to a NoSQL friendly use case. One very relatable example demonstrates how Accumulo could be used to emulate a filesystem (dirlist).
In this session we will walk through the dirlist implementation. Attendees should come away with an understanding of the supporting table designs, a simple text search supporting a single wildcard (on file/directory names), and how the dirlist elements work together to accomplish its feature set. Attendees should (hopefully) also come away with a justification for sometimes keeping the SQL out of NoSQL.
HBase Global Indexing to support large-scale data ingestion at Uber by DataWorks Summit
Data serves as the platform for decision-making at Uber. To facilitate data driven decisions, many datasets at Uber are ingested in a Hadoop Data Lake and exposed to querying via Hive. Analytical queries joining various datasets are run to better understand business data at Uber.
Data ingestion, in its most basic form, is about organizing data to balance efficient reading and writing of newer data. Data organization for efficient reading involves factoring in query patterns to partition data to ensure read amplification is low. Data organization for efficient writing involves factoring in the nature of the input data: whether it is append-only or updatable.
At Uber we ingest terabytes of many critical tables, such as trips, that are updatable. These tables are a fundamental part of Uber's data-driven solutions, and act as the source of truth for all the analytical use cases across the entire company. Datasets such as trips constantly receive updates to the data apart from inserts. To ingest such datasets we need a critical component that is responsible for bookkeeping information about the data layout, and annotates each incoming change with the location in HDFS where this data should be written. This component is called Global Indexing. Without this component, all records get treated as inserts and get re-written to HDFS instead of being updated. This leads to duplication of data, breaking data correctness and user queries. This component is key to scaling our jobs, where we now handle more than 500 billion writes a day in our current ingestion systems. It needs to have strong consistency and provide large throughputs for index writes and reads.
At Uber, we have chosen HBase to be the backing store for the Global Indexing component; it is a critical component in allowing us to scale our jobs, where we now handle more than 500 billion writes a day in our current ingestion systems. In this talk, we will discuss data@Uber and expound more on why we built the global index using Apache HBase and how this helps to scale out our cluster usage. We'll give details on why we chose HBase over other storage systems, how and why we came up with a creative solution to automatically load HFiles directly to the backend, circumventing the normal write path when bootstrapping our ingestion tables to avoid QPS constraints, as well as other learnings we had bringing this system up in production at the scale of data that Uber encounters daily.
Scaling Cloud-Scale Translytics Workloads with Omid and Phoenix by DataWorks Summit
Recently, Apache Phoenix has been integrated with Apache (incubator) Omid transaction processing service, to provide ultra-high system throughput with ultra-low latency overhead. Phoenix has been shown to scale beyond 0.5M transactions per second with sub-5ms latency for short transactions on industry-standard hardware. On the other hand, Omid has been extended to support secondary indexes, multi-snapshot SQL queries, and massive-write transactions.
These innovative features make Phoenix an excellent choice for translytics applications, which allow converged transaction processing and analytics. We share the story of building the next-gen data tier for advertising platforms at Verizon Media that exploits Phoenix and Omid to support multi-feed real-time ingestion and AI pipelines in one place, and discuss the lessons learned.
Building the High Speed Cybersecurity Data Pipeline Using Apache NiFi by DataWorks Summit
Cybersecurity requires an organization to collect data, analyze it, and alert on cyber anomalies in near real-time. This is a challenging endeavor when considering the variety of data sources which need to be collected and analyzed. Everything from application logs, network events, authentications systems, IOT devices, business events, cloud service logs, and more need to be taken into consideration. In addition, multiple data formats need to be transformed and conformed to be understood by both humans and ML/AI algorithms.
To solve this problem, the Aetna Global Security team developed the Unified Data Platform based on Apache NiFi, which allows them to remain agile and adapt to new security threats and the onboarding of new technologies in the Aetna environment. The platform currently has over 60 different data flows with 95% doing real-time ETL and handles over 20 billion events per day. In this session learn from Aetna’s experience building an edge to AI high-speed data pipeline with Apache NiFi.
In the healthcare sector, data security, governance, and quality are crucial for maintaining patient privacy and ensuring the highest standards of care. At Florida Blue, the leading health insurer of Florida serving over five million members, there is a multifaceted network of care providers, business users, sales agents, and other divisions relying on the same datasets to derive critical information for multiple applications across the enterprise. However, maintaining consistent data governance and security for protected health information and other extended data attributes has always been a complex challenge that did not easily accommodate the wide range of needs for Florida Blue’s many business units. Using Apache Ranger, we developed a federated Identity & Access Management (IAM) approach that allows each tenant to have their own IAM mechanism. All user groups and roles are propagated across the federation in order to determine users’ data entitlement and access authorization; this applies to all stages of the system, from the broadest tenant levels down to specific data rows and columns. We also enabled audit attributes to ensure data quality by documenting data sources, reasons for data collection, date and time of data collection, and more. In this discussion, we will outline our implementation approach, review the results, and highlight our “lessons learned.”
Presto: Optimizing Performance of SQL-on-Anything Engine by DataWorks Summit
Presto, an open source distributed SQL engine, is widely recognized for its low-latency queries, high concurrency, and native ability to query multiple data sources. Proven at scale in a variety of use cases at Airbnb, Bloomberg, Comcast, Facebook, FINRA, LinkedIn, Lyft, Netflix, Twitter, and Uber, in the last few years Presto experienced an unprecedented growth in popularity in both on-premises and cloud deployments over Object Stores, HDFS, NoSQL and RDBMS data stores.
With the ever-growing list of connectors to new data sources such as Azure Blob Storage, Elasticsearch, Netflix Iceberg, Apache Kudu, and Apache Pulsar, the recently introduced Cost-Based Optimizer in Presto must account for heterogeneous inputs with differing and often incomplete data statistics. This talk will explore this topic in detail as well as discuss the best use cases for Presto across several industries. In addition, we will present recent Presto advancements such as geospatial analytics at scale and the project roadmap going forward.
Introducing MlFlow: An Open Source Platform for the Machine Learning Lifecycl... by DataWorks Summit
Specialized tools for machine learning development and model governance are becoming essential. MLflow is an open source platform for managing the machine learning lifecycle. Just by adding a few lines of code in the function or script that trains their model, data scientists can log parameters, metrics, artifacts (plots, miscellaneous files, etc.) and a deployable packaging of the ML model. Every time that function or script is run, the results will be logged automatically as a byproduct of those lines of code being added, even if the party doing the training run makes no special effort to record the results. MLflow application programming interfaces (APIs) are available for the Python, R and Java programming languages, and MLflow sports a language-agnostic REST API as well. Over a relatively short time period, MLflow has garnered more than 3,300 stars on GitHub, almost 500,000 monthly downloads and 80 contributors from more than 40 companies. Most significantly, more than 200 companies are now using MLflow. We will demo MLflow Tracking, Project and Model components with Azure Machine Learning (AML) Services and show you how easy it is to get started with MLflow on-prem or in the cloud.
Extending Twitter's Data Platform to Google Cloud by DataWorks Summit
Twitter's Data Platform is built using multiple complex open source and in-house projects to support data analytics on hundreds of petabytes of data. Our platform supports storage, compute, data ingestion, discovery and management, and various tools and libraries to help users with both batch and realtime analytics. Our Data Platform operates on multiple clusters across different data centers to help thousands of users discover valuable insights. As we were scaling our Data Platform to multiple clusters, we also evaluated various cloud vendors to support use cases outside of our data centers. In this talk we share our architecture and how we extend our data platform to use the cloud as another data center. We walk through our evaluation process and the challenges we faced supporting data analytics at Twitter scale on the cloud, and present our current solution. Extending Twitter's Data Platform to the cloud was a complex task, which we deep-dive into in this presentation.
Event-Driven Messaging and Actions using Apache Flink and Apache NiFi by DataWorks Summit
At Comcast, our team has been architecting a customer experience platform which is able to react to near-real-time events and interactions and deliver appropriate and timely communications to customers. By combining the low latency capabilities of Apache Flink and the dataflow capabilities of Apache NiFi we are able to process events at high volume to trigger, enrich, filter, and act/communicate to enhance customer experiences. Apache Flink and Apache NiFi complement each other with their strengths in event streaming and correlation, state management, command-and-control, parallelism, development methodology, and interoperability with surrounding technologies. We will trace our journey from starting with Apache NiFi over three years ago and our more recent introduction of Apache Flink into our platform stack to handle more complex scenarios. In this presentation we will compare and contrast which business and technical use cases are best suited to which platform and explore different ways to integrate the two platforms into a single solution.
Securing Data in Hybrid on-premise and Cloud Environments using Apache Ranger by DataWorks Summit
Companies are increasingly moving to the cloud to store and process data. One of the challenges companies have is securing data across hybrid environments with an easy way to centrally manage policies. In this session, we will talk through how companies can use Apache Ranger to protect access to data both in on-premise and in cloud environments. We will go into detail on the challenges of hybrid environments and how Ranger can solve them. We will also talk through how companies can further enhance security by leveraging Ranger to anonymize or tokenize data while moving it into the cloud, and de-anonymize it dynamically using Apache Hive, Apache Spark, or when accessing data from cloud storage systems. We will also deep dive into Ranger's integration with AWS S3, AWS Redshift and other cloud-native systems. We will wrap up with an end-to-end demo showing how policies can be created in Ranger and used to manage access to data in different systems, anonymize or de-anonymize data, and track where data is flowing.
Big Data Meets NVM: Accelerating Big Data Processing with Non-Volatile Memory... by DataWorks Summit
Advanced Big Data processing frameworks have been proposed to harness the fast data transmission capability of Remote Direct Memory Access (RDMA) over high-speed networks such as InfiniBand, RoCEv1, RoCEv2, iWARP, and OmniPath. However, with the introduction of Non-Volatile Memory (NVM) and NVM Express (NVMe) based SSDs, these designs along with the default Big Data processing models need to be re-assessed to discover the possibilities of further enhanced performance. In this talk, we will present NRCIO, a high-performance communication runtime for non-volatile memory over modern network interconnects that can be leveraged by existing Big Data processing middleware. We will show the performance of non-volatile memory-aware RDMA communication protocols using our proposed runtime and demonstrate its benefits by incorporating it into a high-performance in-memory key-value store, Apache Hadoop, Tez, Spark, and TensorFlow. Evaluation results illustrate that NRCIO can achieve up to 3.65x performance improvement for representative Big Data processing workloads on modern data centers.
Background: Some early applications of Computer Vision in Retail arose from e-commerce use cases - but increasingly, it is being used in physical stores in a variety of new and exciting ways, such as:
● Optimizing merchandising execution, in-stocks and sell-thru
● Enhancing operational efficiencies, enabling real-time customer engagement
● Enhancing loss prevention capabilities, response time
● Creating frictionless experiences for shoppers
Abstract: This talk will cover the use of Computer Vision in Retail, the implications to the broader Consumer Goods industry and share business drivers, use cases and benefits that are unfolding as an integral component in the remaking of an age-old industry.
We will also take a ‘peek under the hood’ of Computer Vision and Deep Learning, sharing technology design principles and skill set profiles to consider before starting your CV journey.
Deep learning has matured considerably in the past few years to produce human or superhuman abilities in a variety of computer vision paradigms. We will discuss ways to recognize these paradigms in retail settings, collect and organize data to create actionable outcomes with the new insights and applications that deep learning enables.
We will cover the basics of object detection, then move into the advanced processing of images, describing the possible ways that a retail store of the near future could operate: a deep learning system attached to a camera stream identifying various storefront situations, such as item stocks on shelves, a shelf in need of organization, or perhaps a wandering customer in need of assistance.
We will also cover how to use a computer vision system to automatically track customer purchases to enable a streamlined checkout process, and how deep learning can power plausible wardrobe suggestions based on what a customer is currently wearing or purchasing.
Finally, we will cover the various technologies that are powering these applications today: deep learning tools for research and development, production tools to distribute that intelligence to an entire inventory of cameras situated around a retail location, and tools for exploring and understanding the new data streams produced by the computer vision systems.
By the end of this talk, attendees should understand the impact Computer Vision and Deep Learning are having in the Consumer Goods industry, key use cases, techniques and key considerations leaders are exploring and implementing today.
Big Data Genomics: Clustering Billions of DNA Sequences with Apache Spark by DataWorks Summit
Whole genome shotgun based next generation transcriptomics and metagenomics studies often generate 100 to 1000 gigabytes (GB) of sequence data derived from tens of thousands of different genes or microbial species. De novo assembling these data requires an ideal solution that both scales with data size and optimizes for individual genes or genomes. Here we developed an Apache Spark-based scalable sequence clustering application, SparkReadClust (SpaRC), that partitions reads based on their molecule of origin to enable downstream assembly optimization. SpaRC produces high clustering performance on transcriptomics and metagenomics test datasets from both short-read and long-read sequencing technologies. It achieved near-linear scalability with respect to input data size and number of compute nodes. SpaRC can run on different cloud computing environments without modification while delivering similar performance. In summary, our results suggest SpaRC provides a scalable solution for clustering billions of reads from next-generation sequencing experiments, and Apache Spark represents a cost-effective solution with rapid development/deployment cycles for similar big data genomics problems.
Neuro-symbolic is not enough, we need neuro-*semantic* by Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... by DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 3, by DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA Connect by Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30 May 2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We closed with a lovely workshop in which participants explored different ways to think about quality and testing in different parts of the DevOps infinity loop.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... by UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Accelerate your Kubernetes clusters with Varnish Caching by Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Elevating Tactical DDD Patterns Through Object Calisthenics by Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The Art of the Pitch: WordPress Relationships and Sales by Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview by Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Building a geospatial processing pipeline using Hadoop and HBase and how Monsanto is using it to help farmers increase their yield
1. Geospatial Processing @ Monsanto
Hadoop Summit 2013
Robert Grailer, Big Data Engineer
Erich Hochmuth, Data & Analytics Architecture Lead
2. Our Vision: Sustainable Agriculture
A Strong Vision That Guides All We Do
• Producing More
– We are committed to increasing yields to meet the growing demand for food, fiber & fuel
• Conserving More
– We are committed to reducing the amount of land, water and energy needed to grow our crops
• Improving Lives
– We are committed to improving lives around the world
3. Doubling Yields by 2030 - Farming in the Future Will Be Increasingly Information-Driven
[Infographic: advanced equipment; average corn yield ~300 bu/ac; automated weather stations; field sensors providing information; advanced imagery technology]
4. Integrated Farming Systems – FieldScripts℠ for 2014
[Map: Planting Prescription 2012 (DKC63-84 Brand); Target Rate (Count) in ksds/ac: 38.00 (24.75 ac), 37.00 (22.63 ac), 35.00 (16.60 ac), 34.00 (8.23 ac), 33.00 (6.00 ac), 32.00 (2.82 ac)]
• FieldScripts℠ will deliver, by field, a corn hybrid recommendation utilizing variable rate seeding by FieldScripts management zones to increase yield potential and reduce risk
• The science of FieldScripts is based on proprietary algorithms that combine data from the FieldScripts Testing Network and Monsanto-generated hybrid response to plant population research
Precision Planting
5. 2012 Field Trials Indicate 5-10 bu/ac Average Yield Gain
Treatment yields (bu/ac):
• IL Irrigated, Back 80: Static|34000: 196; FieldScripts (35000): 233
• Central IL Dry Land, 47-50: Static|34000: 139; FieldScripts (33000): 145
• MS Irrigated, 21: Static|34000: 166; FieldScripts (34700): 181
In the United States alone:
• Corn acres planted in 2013 – 96M
• Price of corn per bushel – $6.93*
• Advantage of 5-10 bu/ac
*Price reflects CBOT price of corn 1/9/2013
6. Integrated Farming Systems℠ Combine Advanced Seed Genetics, On-farm Agronomic Practices, Software and Hardware Innovations to Drive Yield
• DATABASE BACKBONE: Expansive product-by-environment testing makes on-farm prescriptions possible
• VARIABLE-RATE FERTILITY: Variable rate N, P & K “Apps” aligned with yield management zones
• PRECISION SEEDING: Planter hardware systems enabling variable rate seeding & row spacing of multiple hybrids in a field by yield management zone
• FERTILITY & DISEASE MANAGEMENT: “Apps” for in-season custom application of supplemental late nitrogen and fungicides
• YIELD MONITOR: Advances in yield monitoring to deliver higher resolution data
• BREEDING: Significant increases in data points collected per year to increase annual rate of genetic gain
7. Use Case
High Level Data Flow: Public Data + Monsanto Data + Grower Data → Standardize & Link → Algorithms
• Load thousands of files containing spatial data
• Support diverse range of data types
— tabular, vector, raster
• Join & link data spatially
• Generate dense grid covering entire US
— 120 billion polygons
• Generate a set of derived attributes
— Think moving average (see the sketch below)
• Make data available for other data products such as FieldScripts
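To make the "derived attributes" step concrete, below is a minimal sketch of a windowed moving average over a dense grid, the kind of derived layer the bullet alludes to. Everything here is illustrative (class name, window handling are assumptions); the deck does not show Monsanto's actual implementation, which ran as MapReduce over raster data.

```java
// Illustrative only: a moving-average "derived attribute" over a dense grid.
public final class MovingAverage {

    // Returns a grid where each cell is the mean of the w x w window
    // centered on it; windows are clipped at the grid edges.
    static float[][] windowMean(float[][] grid, int w) {
        int rows = grid.length, cols = grid[0].length, half = w / 2;
        float[][] out = new float[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                double sum = 0;
                int n = 0;
                for (int dr = -half; dr <= half; dr++) {
                    for (int dc = -half; dc <= half; dc++) {
                        int rr = r + dr, cc = c + dc;
                        if (rr >= 0 && rr < rows && cc >= 0 && cc < cols) {
                            sum += grid[rr][cc];
                            n++;
                        }
                    }
                }
                out[r][c] = (float) (sum / n);
            }
        }
        return out;
    }
}
```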
8. Version 1 Architecture
• In-RDBMS spatial
• PL/SQL
• Multiple patches to DB engine
• Just 8% of the data!!
– 35+ days to process
• TBs in indexes
• Tradeoffs
– Compressed vs. uncompressed
– Performance vs. storage
– Read vs. write performance
• Options/recommendations
– Limit use of in-DB spatial functionality
– Buy more RDBMS
[Charts: data processing time in days (soil, elevation, spatial index processing) and data volumes in TB (raw data, uncompressed, compressed, spatial index)]
9. Version 2 Architecture
• Combination of MapReduce & HBase
• Leverage existing Hadoop cluster
• MapReduce
– Parallelize everything!
– Bulk HBase loads
• HBase
– Spatial data model
– Custom spatial engine
10. Data Ingestion
• Bulk load 1,000s of files into HDFS
• Standardize data
– Common usable format
• Storage vs. compute
• Raster format is easily splittable
• Hadoop Streaming integrated with GDAL
• Streaming API lessons learned
– Lack of documentation
– Counters to track task progress (see the sketch after this slide)
– Jobs run as the mapred user
– HDFS access outside of MR
[Chart: data ingestion time in hours, RDBMS vs. Hadoop]
Ingestion flow: NFS (raster images, vector shape files, zip files, text data) → unzip, convert to raster, re-project → HDFS → Hadoop Streaming (raster files) → results
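The counters lesson is worth a concrete note: Hadoop Streaming runs any executable as a mapper and picks up counter and status updates from lines the process writes to stderr in the documented reporter format. A minimal identity mapper illustrating this (the counter group and names are made up for the example):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

// Minimal Hadoop Streaming mapper (streaming runs any executable).
// Streaming parses stderr lines of the form
//   reporter:counter:<group>,<counter>,<amount>
//   reporter:status:<message>
// and turns them into job counters / task status, which is how long-running
// streaming tasks can report progress. Group/counter names are illustrative.
public final class CountingIdentityMapper {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        long n = 0;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // pass the record through unchanged
            if (++n % 10000 == 0) {
                System.err.println("reporter:counter:Ingest,RecordsProcessed,10000");
                System.err.println("reporter:status:processed " + n + " records");
            }
        }
    }
}
```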
11. Data Processing
• Process raster data
– Dense matrix
• Generic InputFormat & RecordReader for raster data
• HFiles easily transportable between clusters
• Challenges tuning jobs
– IO sort factor
– Split/task size
Processing flow: HDFS (raster files) → generate derived attributes → pre-split table → generate HFiles → HBase (a bulk-load sketch follows below)
[Chart: data processing time in days, RDBMS vs. Hadoop]
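As a rough sketch of the pre-split and HFile steps, here is how the HBase 0.94-era client API (current when this deck was written) wires them together. The table name, family, region count, key range, and paths are assumptions for illustration, and the mapper/input setup is elided; this is not Monsanto's actual code.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.mapreduce.Job;

public class SpatialBulkLoad {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Pre-split the table on evenly spaced row-key boundaries so each
        // reducer produces HFiles for one region and the load stays balanced.
        HTableDescriptor desc = new HTableDescriptor("spatial");
        desc.addFamily(new HColumnDescriptor("A"));
        int regions = 64;        // illustrative region count
        long maxKey = 12000000L; // illustrative max super_cell_id
        byte[][] splits = new byte[regions - 1][];
        for (int i = 1; i < regions; i++) {
            splits[i - 1] = Bytes.toBytes(maxKey / regions * i);
        }
        new HBaseAdmin(conf).createTable(desc, splits);

        // Configure a job whose reducers write HFiles aligned to the
        // table's region boundaries...
        Job job = new Job(conf, "spatial-bulk-load");
        HTable table = new HTable(conf, "spatial");
        HFileOutputFormat.configureIncrementalLoad(job, table);
        // ... set mapper, input/output paths, and run the job ...

        // ...then move the finished HFiles into HBase, bypassing the
        // normal write path entirely.
        new LoadIncrementalHFiles(conf).doBulkLoad(new Path("/tmp/hfiles"), table);
    }
}
```

Because the HFiles are written outside HBase, this is also what makes them "easily transportable between clusters": the same files can be bulk-loaded elsewhere.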
13. Geospatial in HBase
Need
– Dense data set
– Complex computations
– Scalable & cost efficient
– Bulk analytics & random reads
HBase
– GeoHash most notable example
• Best suited for sparse data
– Precision of reads
– Alphanumeric key
HBase Considerations
– Key overhead
– Scan vs. Get performance
– Reduce reading unnecessary data
[Figure: example field showing complex data interactions]
14. Global Coordinate System
[Figure: world map with a longitude axis from -180 to 180 and a latitude axis from -90 to 90]
16. Reference System Continued
[Figure: the global grid numbered row by row: cells 1, 2, 3, 4, ... 20 across the first row, 21, 22, 23, ... across the second, continuing down to 381, 382, ... 399, 400; longitude spans -180 to 180, latitude -90 to 90]
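The numbering in the figure implies a simple row-major mapping from coordinates to cell ids. A hedged sketch of that mapping (the slide draws a 20 x 20 example, i.e. ids 1 through 400; the production grid is far finer, and whether rows are numbered from the north or the south is not stated, so the direction below is an assumption):

```java
// Row-major cell numbering over a global lon/lat grid, matching the
// figure's 1..400 example when nRows = nCols = 20.
public final class GridRef {
    final int nRows, nCols;

    GridRef(int nRows, int nCols) {
        this.nRows = nRows;
        this.nCols = nCols;
    }

    long cellId(double lon, double lat) {
        int col = (int) ((lon + 180.0) / 360.0 * nCols);
        int row = (int) ((90.0 - lat) / 180.0 * nRows); // assumes row 0 at the north
        col = Math.min(col, nCols - 1); // clamp the lon = 180 edge
        row = Math.min(row, nRows - 1); // clamp the lat = -90 edge
        return (long) row * nCols + col + 1; // 1-based, row-major
    }
}
```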
17. HBase Schema Take 1
Spatial Table
• Key: cell_id long
• Column Family: A
– Column: Data Holder
• elevation: float
• slope: float
• aspect: float
• Each spatial dataset is a separate table
• All attributes for a layer that are read together are stored together
‒ Attributes packed into a single column as an Avro object
• 1 row per record
• 120 billion rows total!
• 1,000s of Get requests per field
• TBs of key overhead – roughly 56% of the data (see the estimate below)
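A back-of-the-envelope reading of the "roughly 56%" figure. Only the 8-byte cell_id key and the 120-billion-row count come from the slide; the single-letter family/qualifier and the ~24-byte packed Avro value are assumptions chosen to show how the math can land there, since HBase stores the full key alongside every value on disk:

```java
// Rough estimate only; sizes besides the 8-byte row key and the
// 120-billion-row count are assumptions. An HBase 0.94 KeyValue holds:
//   4B keyLen + 4B valueLen          =  8
//   2B rowLen + 8B row (cell_id)     = 10
//   1B familyLen + 1B family ("A")   =  2
//   1B qualifier (assumed)           =  1
//   8B timestamp + 1B key type       =  9   -> 30 bytes before the value
public final class KeyOverheadEstimate {
    public static void main(String[] args) {
        long rows = 120_000_000_000L;       // one row per grid cell (from the slide)
        long overhead = 8 + 10 + 2 + 1 + 9; // 30 bytes, per the layout above
        long value = 24;                    // assumed packed Avro payload size
        double frac = (double) overhead / (overhead + value);
        long tb = rows * overhead / (1L << 40);
        System.out.printf("key overhead: %.0f%% of the data, about %d TB%n",
                frac * 100, tb);
    }
}
```

Under these assumed sizes the key bytes come to about 56% of the data and a few terabytes in total, consistent with the slide.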
18. Reference System Storage Format
• Data grouped into 100 x 100 super cells (the mapping is sketched below)
• A super cell of 100 x 100 cells is a single row in HBase
• At most 4 disk reads are required to read all data for one layer for a 150 acre field
• Given a bounding box, the super cells and attributed grid cells containing the desired data can easily be computed
• A generic geospatial data service, when given a set of layers, will read each layer in parallel
• Overhead of key data reduced from 56% to below 0.1%
[Figure: super grid cells subdivided into attributed grid cells]
Spatial Table
• Key: super_cell_id long
• Column Family: A
– Column: Data Holder
• elevation: array float [values]
• slope: array float [values]
• aspect: array float [values]
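A minimal sketch of how a fine grid cell might map to its super cell row key and to an offset inside the packed arrays. The 100 x 100 grouping is from the slide; the id scheme and array layout are assumptions about one plausible way to lay it out:

```java
// Maps a fine grid cell (row, col) to the 100 x 100 super cell that stores
// it and to that cell's index inside the packed per-layer float arrays.
public final class SuperCell {
    static final int SIZE = 100; // 100 x 100 cells per super cell (per the slide)

    // superCols = number of super cells per row of the global grid
    static long superCellId(long row, long col, long superCols) {
        return (row / SIZE) * superCols + (col / SIZE);
    }

    static int offsetWithin(long row, long col) {
        // Index into the elevation/slope/aspect arrays in the Data Holder column.
        return (int) ((row % SIZE) * SIZE + (col % SIZE));
    }
}
```

Packing 10,000 cells behind one row key is what collapses the key overhead: the ~30 bytes of per-KeyValue key data are amortized over 10,000 array entries instead of being paid once per cell, and a bounding box touches only a handful of super cell rows, hence the at-most-4 reads for a 150-acre field.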
19. Results
• Significant cost savings in required hardware
• 120 billion unique polygons in total
• 1.5 trillion data points
• Dense grid of the entire U.S.
• Foundational architecture for other spatial data sets
• Fully unit tested implementation
RDBMS: 4 states only; 30+ days to load; 8 months of dev.
Hadoop: entire U.S.; 18 hour load time; 3 months of dev.; 100% scalable; cloud ready
[Chart: total run time in days, RDBMS (8% of the data) vs. Hadoop (full data set)]
http://psipunk.com/page/18/
With big agricultural farms getting smaller due to a fast-growing population, we need compact and efficient farming tools to balance structured agriculture with nature and ensure a healthy ecosystem around us. Offering a solution, the “Agria” by Julia Kaisinger, Katharina Unger and Stefan Riegbauer is an autonomous farm robot for sowing and plant protection on small farms. Featuring infrared and UV light to control bugs, fungi and pests, the modular machine examines the soil and plants regularly to allow specific treatment. Placing seeds and fertilizer in the right place and proportion, the Agria works with an intelligent network of fields and machines, supplied by a local station, which can be controlled through a computer or smartphone, so you may store and share data with experts for better analysis.
Agriculture is going through a transition via the adoption of breakthrough technologies in seed genetics, farm equipment hardware and software, and farm practices, akin to the advances in computer technology that ushered in the modern information technology era. Growers are getting increasingly swamped by information, much of it needing further thoughtful analysis leading to the extraction and integration of actionable information; Monsanto is gearing up to do that. Anyone interested in developing improved agronomic practices or information apps that contribute to increasing yield or improving life on the farm should get in touch with us (leave contact information at the Monsanto booth).
General data flow
Split and task sizes were a challenge because of the number of files to be processed and the metadata needed to process each task. Data was generated only for the United States, so only 15% of all SuperCells covering the world were used. The table was pre-split to produce evenly sized HFiles.