Slides from my talk on building tiered data stores using Aesop to bridge SQL and NoSQL data stores. Aesop is a pub-sub-like change data capture and propagation system.
Building tiered data stores using Aesop to bridge SQL and NoSQL systems
1. Building tiered data stores
using Aesop to bridge
SQL and NoSQL systems
Regunath B
twitter.com/RegunathB
github.com/regunathb
Joint work with: Jagadeesh Huliyar, Shoury Bharadwaj, Arya Ketan, Nikhil Bafna, Pratyay Banerjee, Sneha Shukla,
Yogesh Dahiya, Kartik Ukhalkar, Milap Wadhwa, Ravindra Yadav, Santosh Patil, Rishab Dua & others at Flipkart
2. What data store?
• Often heard: "I need to scale and therefore will use a NoSQL database" - really?
• By type and access pattern: Relational, KV, Document, Columnar
• Scaling limits of single-node databases vs. product claims & benchmarks of distributed data stores
• Rise of memory-based, flash-optimised data stores (log-structured)
• CAP tradeoffs: Consistency vs. Availability
• Other considerations: durability of data, disk-to-memory ratios, size of a single cluster, etc.
4. Scaling the Data store
[Diagram (source: Wayback Machine, ca. 2007): a MySQL deployment evolving in stages - a single master taking website writes, replicating to one slave that serves both website reads and analytics reads; then a second slave added so website reads and analytics reads are served from separate replicas, all kept in sync via replication.]
Source: http://www.slideshare.net/slashn/slash-n-tech-talk-track-2-website-architecturemistakes-learnings-siddhartha-reddy
and meanwhile… traded Consistency for Availability
5. Scaling the data store (polyglot persistence)
Use case          | Data store
User session      | Memcached, HBase
Product           | HBase, ElasticSearch, Redis
Cart              | MySQL, MongoDB
Notifications     | HBase
Search            | Solr, Neo4j
Recommendations   | Hadoop MR, Redis/Aerospike
Promotions (WIP)  | HBase, Redis
Pattern: Serving Layer + source-of-truth data store
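The "Serving Layer + source-of-truth data store" pattern above can be sketched minimally: every write lands in one authoritative store, and its ordered change log is replayed into the read-optimized serving stores - the role a propagation system like Aesop plays. All class and method names below are illustrative, not Aesop's actual API.

```python
class SourceOfTruth:
    """Authoritative store; every write is recorded in an ordered change log."""
    def __init__(self):
        self.rows = {}
        self.change_log = []          # ordered list of (key, value) changes

    def write(self, key, value):
        self.rows[key] = value
        self.change_log.append((key, value))

class ServingStore:
    """Read-optimized replica (cache, search index, ...)."""
    def __init__(self):
        self.rows = {}

    def apply(self, key, value):
        self.rows[key] = value

def propagate(source, replicas, from_offset=0):
    """Replay the change log, in order, into every serving replica."""
    for key, value in source.change_log[from_offset:]:
        for r in replicas:
            r.apply(key, value)
    return len(source.change_log)     # new checkpoint offset

truth = SourceOfTruth()
cache, index = ServingStore(), ServingStore()
truth.write("cart:42", {"items": 3})
checkpoint = propagate(truth, [cache, index])
```

Because propagation is asynchronous, the serving stores lag the source of truth briefly - which is exactly the eventual-consistency tradeoff discussed on the next slides.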
9. Caching/Serving Layer challenges
"There are only two hard things in Computer Science: cache invalidation and naming things." -- Phil Karlton
• (Low cache TTLs + lazy caching + high request concurrency) results in:
  • Thundering herds
  • Exposes the availability of the primary data store
• (Cache size + distribution + no. of replicas) results in:
  • Write-through mechanisms to maintain consistency are hard to implement
The Serving Layer is Eventually Consistent, at best
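One common mitigation for the thundering herd described above is request coalescing ("single-flight"): when a hot key expires, only one caller recomputes it while concurrent callers wait for that result instead of all hitting the primary store. A generic sketch under that assumption, not part of Aesop:

```python
import threading

class SingleFlightCache:
    def __init__(self, loader):
        self.loader = loader              # expensive fetch from the primary store
        self.cache = {}
        self.inflight = {}                # key -> Event concurrent callers wait on
        self.lock = threading.Lock()
        self.loads = 0                    # how often the primary store was hit

    def get(self, key):
        with self.lock:
            if key in self.cache:
                return self.cache[key]
            event = self.inflight.get(key)
            if event is None:             # we are the single loader for this key
                event = self.inflight[key] = threading.Event()
                leader = True
            else:
                leader = False
        if leader:
            value = self.loader(key)      # only one thread reaches the primary
            with self.lock:
                self.cache[key] = value
                self.loads += 1
                del self.inflight[key]
            event.set()                   # wake all waiters
            return value
        event.wait()                      # followers wait for the leader's result
        with self.lock:
            return self.cache[key]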
10. Eventual Consistency
• Replicas converge over time
• Pros
  • Scale reads through multiple replicas
  • Higher overall data availability
• Cons
  • Reads may return stale data before convergence
  • Achieving eventual consistency is not easy
    • Trivially requires an at-least-once delivery guarantee of updates to all replicas
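An at-least-once guarantee means a replica may see the same update more than once, so convergence also depends on applying updates idempotently. A common technique (illustrative names only) is to tag each change with a monotonically increasing sequence number (SCN) and skip anything at or below the last applied SCN:

```python
class IdempotentReplica:
    def __init__(self):
        self.rows = {}
        self.last_scn = 0                 # highest SCN applied so far

    def apply(self, scn, key, value):
        if scn <= self.last_scn:          # duplicate or replayed update: ignore
            return False
        self.rows[key] = value
        self.last_scn = scn
        return True

replica = IdempotentReplica()
# At-least-once delivery may redeliver updates; duplicates are skipped.
stream = [(1, "a", 1), (2, "a", 2), (2, "a", 2), (1, "a", 1)]
applied = [replica.apply(*update) for update in stream]
```

With this, redelivery is harmless: replaying any prefix or duplicate of the update stream leaves the replica in the same converged state.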
13. Log Mining (old wine in a new bottle?)
• "Durability is typically implemented via logging and recovery." [1] Architecture of a Database System - Michael Stonebraker et al. (2007)
• "The contents of the DB are a cache of the latest records in the log. The truth is the log. The database is a cache of a subset of the log." - Jay Kreps (creator of Kafka, 2015)
• WAL (write-ahead log) ensures:
  • Each modification is flushed to disk
  • Log records are in order
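The two quotes above can be made concrete with a toy write-ahead log: if every modification is first appended to an ordered log, the table itself is just a replayable cache of that log. Illustrative only:

```python
class ToyWAL:
    def __init__(self):
        self.log = []                    # ordered; durable (fsync'd) in a real system

    def append(self, op, key, value=None):
        self.log.append((op, key, value))

def replay(log):
    """Rebuild table state purely from the log - 'the truth is the log'."""
    table = {}
    for op, key, value in log:
        if op == "put":
            table[key] = value
        elif op == "delete":
            table.pop(key, None)
    return table

wal = ToyWAL()
wal.append("put", "k1", "v1")
wal.append("put", "k2", "v2")
wal.append("delete", "k1")
```

Replaying the full log reproduces the current table, and replaying any prefix reproduces an earlier consistent state - which is exactly why tailing this log (as Aesop does) is sufficient to rebuild downstream replicas.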
14. Data Consistency & Aesop
Consistency model-map source: https://aphyr.com/
• Strict consistency (e.g. Linearizability): too hard & requires coordination across replicas; relies on absolute global time
• Less strict: replicas can disagree forever
• Sequential Consistency:
  • Says nothing about time - there is no reference to the "most recent" write operation
  • Mechanism for Sequential Consistency: primary-based replication protocols (used by Aesop)
15. Reliable delivery
[Diagram: the data store's transaction logs (writes w4 … w11) are tailed by a Producer acting as a slave/replica. The Producer feeds a Relay, which holds recent events in an in-memory event ring buffer. Subscribers pull events from the Relay at their own pace and apply them to destination data stores, checkpointing their SCN in ZooKeeper or the filesystem.]
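The relay in the diagram above can be sketched as a bounded ring buffer plus per-subscriber SCN checkpoints: each subscriber pulls from its own checkpoint at its own speed, and a subscriber that falls behind the buffer's oldest event must re-bootstrap from the store. Names are illustrative, not Aesop's API.

```python
from collections import deque

class Relay:
    def __init__(self, capacity):
        # Bounded ring buffer of (scn, event); oldest entries are evicted.
        self.buffer = deque(maxlen=capacity)

    def publish(self, scn, event):
        self.buffer.append((scn, event))

    def pull(self, checkpoint_scn):
        """Return events after the checkpoint, or None if the consumer
        has fallen off the buffer and must bootstrap from the store."""
        if self.buffer and checkpoint_scn < self.buffer[0][0] - 1:
            return None                   # gap: needed events already evicted
        return [(s, e) for s, e in self.buffer if s > checkpoint_scn]

relay = Relay(capacity=4)
for scn in range(1, 9):                   # SCNs 1..8; buffer retains 5..8
    relay.publish(scn, f"w{scn}")
fast = relay.pull(checkpoint_scn=7)       # nearly caught up
slow = relay.pull(checkpoint_scn=2)       # too far behind: must bootstrap
```

Persisting each subscriber's checkpoint (in ZooKeeper or the filesystem, as in the diagram) is what lets a restarted subscriber resume exactly where it left off.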
18. Summary
Aesop - a pub-sub-like change data capture and propagation system
What it is:
• Supports multiple data stores
• Delivers updates reliably - at least once, in order
• Supports varying consumer speeds
What it isn't:
• Not exactly-once delivery
• Not a storage system
• No global ordering across different data stores (consumers)
Performance:
• Relay: 1 XL VM (8 cores, 32 GB)
• Consumers: 4 XL VMs, 200 partitions
• Throughput: 30K inserts per sec (MySQL to HBase)
• Data size: 800 GB
19. More details
Project:
• Open source: https://github.com/Flipkart/aesop
• Support: aesop-users@googlegroups.com
• Multiple production deployments at Flipkart
Related work:
• LinkedIn Databus
• [2] Facebook Wormhole
References:
• [1] Architecture of a Database System: http://db.cs.berkeley.edu/papers/fntdb07-architecture.pdf
• [2] Wormhole paper: https://www.usenix.org/system/files/conference/nsdi15/nsdi15-paper-sharma.pdf